ERIC Educational Resources Information Center
Anderson Koenig, Judith; Roberts, James S.
2007-01-01
Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…
Mining gene link information for survival pathway hunting.
Jing, Gao-Jian; Zhang, Zirui; Wang, Hong-Qiang; Zheng, Hong-Mei
2015-08-01
This study proposes a gene link-based method for survival time-related pathway hunting. In this method, the authors incorporate gene link information to estimate how a pathway is associated with cancer patients' survival time. Specifically, a gene link-based Cox proportional hazards model (Link-Cox) is established, in which two linked genes are considered together to represent a link variable and the association of the link with survival time is assessed using a Cox proportional hazards model. On the basis of the Link-Cox model, the authors formulate a new statistic for measuring the association of a pathway with the survival time of cancer patients, referred to as the pathway survival score (PSS), by summarising survival significance over all the gene links in the pathway, and devise a permutation test to assess the significance of an observed PSS. To evaluate the proposed method, the authors applied it to simulation data and two publicly available real-world gene expression data sets. Extensive comparisons with previous methods show the effectiveness and efficiency of the proposed method for survival pathway hunting.
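A minimal sketch of the Link-Cox idea is given below, assuming gene expression in a pandas DataFrame and survival times/event flags as NumPy arrays, and using the lifelines package for Cox fitting; the link-variable construction (mean expression of the two linked genes) and the PSS aggregation (mean of -log10 p over links) are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of a link-based Cox survival analysis (assumptions noted above).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def link_cox_pvalue(expr, surv_time, event, gene_a, gene_b):
    """Fit a Cox PH model for one gene link; the link variable is assumed here
    to be the mean expression of the two linked genes."""
    df = pd.DataFrame({
        "link": (expr[gene_a].to_numpy() + expr[gene_b].to_numpy()) / 2.0,
        "T": surv_time,
        "E": event,
    })
    cph = CoxPHFitter()
    cph.fit(df, duration_col="T", event_col="E")
    return cph.summary.loc["link", "p"]

def pathway_survival_score(expr, surv_time, event, links):
    """Aggregate link-level significance over a pathway (assumed: mean -log10 p)."""
    pvals = [link_cox_pvalue(expr, surv_time, event, a, b) for a, b in links]
    return float(np.mean(-np.log10(pvals)))

def pss_permutation_test(expr, surv_time, event, links, n_perm=100, seed=0):
    """Permute the survival outcomes to obtain a null distribution for the observed PSS."""
    rng = np.random.default_rng(seed)
    observed = pathway_survival_score(expr, surv_time, event, links)
    null = []
    for _ in range(n_perm):
        p = rng.permutation(len(surv_time))
        null.append(pathway_survival_score(expr, surv_time[p], event[p], links))
    p_value = (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
    return observed, p_value
```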
Decoupling Identification for Serial Two-Link Two-Inertia System
NASA Astrophysics Data System (ADS)
Oaki, Junji; Adachi, Shuichi
The purpose of our study is to develop a precise model by applying the technique of system identification for the model-based control of a nonlinear robot arm, taking joint elasticity into consideration. We previously proposed a systematic identification method, called “decoupling identification,” for a “SCARA-type” planar two-link robot arm with elastic joints caused by the Harmonic-drive® reduction gears. The proposed method serves as an extension of the conventional rigid-joint-model-based identification. The robot arm is treated as a serial two-link two-inertia system with nonlinearity. The decoupling identification method using link-accelerometer signals enables the serial two-link two-inertia system to be divided into two linear one-link two-inertia systems. MATLAB®'s commands for state-space model estimation are utilized in the proposed method. Physical parameters such as motor inertias, link inertias, joint-friction coefficients, and joint-spring coefficients are estimated through the identified one-link two-inertia systems using a gray-box approach. This paper describes accuracy evaluations of the decoupling identification method on the two-link arm when closed-loop-controlled elements are introduced and the amplitude setup of the identification input is varied. Experimental results show that the identification method also works with closed-loop-controlled elements. Therefore, the identification method is applicable to a “PUMA-type” vertical robot arm under gravity.
The effect of inertial coupling in the dynamics and control of flexible robotic manipulators
NASA Technical Reports Server (NTRS)
Tesar, Delbert; Curran, Carol Cockrell; Graves, Philip Lee
1988-01-01
A general model of the dynamics of flexible robotic manipulators is presented, including the gross motion of the links, the vibrations of the links and joints, and the dynamic coupling between the gross motions and vibrations. The vibrations in the links may be modeled using lumped parameters, truncated modal summation, a component mode synthesis method, or a mixture of these methods. The local link inertia matrix is derived to obtain the coupling terms between the gross motion of the link and the vibrations of the link. Coupling between the motions of the links results from the kinematic model, which utilizes the method of kinematic influence. The model is used to simulate the dynamics of a flexible space-based robotic manipulator which is attached to a spacecraft, and is free to move with respect to the inertial reference frame. This model may be used to study the dynamic response of the manipulator to the motions of its joints, or to externally applied disturbances.
A VGI data integration framework based on linked data model
NASA Astrophysics Data System (ADS)
Wan, Lin; Ren, Rongrong
2015-12-01
This paper addresses geographic data integration and sharing across multiple online VGI data sets. We propose a semantic-enabled framework for a cooperative application environment over online VGI sources to solve a target class of geospatial problems. Based on linked data technologies, a core component of the semantic web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, deploy these semantic-enabled entities on the web, and eventually form an interconnected geographic data network that supports cooperative geospatial applications across multiple VGI data sources. A mapping and transformation from VGI sources to an RDF linked data model is presented to guarantee a uniform data representation across different online social geographic data sources. We propose a mixed strategy that combines spatial distance similarity and feature name attribute similarity as the measure for comparing and matching geographic features in various VGI data sets. Our work also focuses on applying Markov logic networks to interlink the same entities across different VGI-based linked data sets; the automatic generation of a co-reference object identification model from geographic linked data is discussed in detail. The result is a large geographic linked data network spanning loosely coupled VGI web sites. Experiments built on our framework and the evaluation of our method show that the framework is reasonable and practicable.
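The sketch below illustrates two of the ingredients described above in a hedged way: mapping a VGI feature to RDF triples with rdflib, and a mixed match score combining spatial distance and name similarity. The namespace URI, property choices, weights, and distance scale are assumptions for illustration, not the paper's actual schema or its Markov-logic matching stage.

```python
# Illustrative sketch: map a VGI feature to RDF triples and score candidate matches.
from difflib import SequenceMatcher
from math import hypot

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/vgi/")      # assumed namespace
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

def feature_to_rdf(graph, feature):
    """Add one VGI feature (dict with id, name, lon, lat) as linked-data triples."""
    node = EX[feature["id"]]
    graph.add((node, RDF.type, EX.Feature))
    graph.add((node, RDFS.label, Literal(feature["name"])))
    graph.add((node, GEO.long, Literal(feature["lon"])))
    graph.add((node, GEO.lat, Literal(feature["lat"])))
    return node

def match_score(f1, f2, w_space=0.5, w_name=0.5, d_max=0.01):
    """Mixed similarity: spatial proximity plus name similarity (weights assumed)."""
    d = hypot(f1["lon"] - f2["lon"], f1["lat"] - f2["lat"])
    spatial = max(0.0, 1.0 - d / d_max)
    name = SequenceMatcher(None, f1["name"].lower(), f2["name"].lower()).ratio()
    return w_space * spatial + w_name * name

g = Graph()
a = {"id": "osm_123", "name": "Central Park", "lon": -73.9654, "lat": 40.7829}
b = {"id": "wm_987", "name": "Central Park NYC", "lon": -73.9650, "lat": 40.7831}
for f in (a, b):
    feature_to_rdf(g, f)
print(round(match_score(a, b), 3))   # high score suggests a candidate co-reference pair
```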
Control method and system for hydraulic machines employing a dynamic joint motion model
Danko, George [Reno, NV
2011-11-22
A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
Directly linking air quality and watershed models could provide an effective method for estimating spatially-explicit inputs of atmospheric contaminants to watershed biogeochemical models. However, to adequately link air and watershed models for wet deposition estimates, each mod...
Combined node and link partitions method for finding overlapping communities in complex networks
Jin, Di; Gabrys, Bogdan; Dang, Jianwu
2015-01-01
Community detection in complex networks is a fundamental data analysis task in various domains, and how to effectively find overlapping communities in real applications is still a challenge. In this work, we propose a new unified model and method for finding the best overlapping communities on the basis of the associated node and link partitions derived from the same framework. Specifically, we first describe a unified model that accommodates node and link communities (partitions) together, and then present a nonnegative matrix factorization method to learn the parameters of the model. Thereafter, we infer the overlapping communities based on the derived node and link communities, i.e., we determine each overlapping community from the corresponding node and link communities by greedily optimizing a local community quality function (conductance). Finally, we introduce a model selection method based on consensus clustering to determine the number of communities. We have evaluated our method on both synthetic and real-world networks with ground-truths, and compared it with seven state-of-the-art methods. The experimental results demonstrate the superior performance of our method over the competing ones in detecting overlapping communities for all analysed data sets. Improved performance is particularly pronounced in cases of more complicated networked community structures. PMID:25715829
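A rough sketch of the general idea follows, assuming scikit-learn's NMF and networkx: node communities are factorized from the adjacency matrix and link communities from the line-graph adjacency. This is a simplification of the unified model in the abstract; the conductance-based merging and the consensus-clustering model selection are omitted.

```python
# Illustrative sketch: nonnegative factorization of node and link structure.
import networkx as nx
import numpy as np
from sklearn.decomposition import NMF

def node_and_link_memberships(G, k):
    """Return soft community memberships for nodes (from the adjacency matrix)
    and for links (from the line-graph adjacency); a simplification of the
    unified node/link model described in the abstract."""
    A = nx.to_numpy_array(G)
    W_nodes = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(A)

    L = nx.line_graph(G)
    edge_order = list(L.nodes())
    A_links = nx.to_numpy_array(L, nodelist=edge_order)
    W_links = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(A_links)
    return W_nodes, dict(zip(edge_order, W_links))

G = nx.karate_club_graph()
nodes_W, links_W = node_and_link_memberships(G, k=2)
print(np.argmax(nodes_W, axis=1))   # hard node assignment per community
```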
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization is to discover some dominant object classes and localize all of object instances from a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue-some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links are exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined as that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, thus the must-links in our approach are semantic-specific, which allows to more efficiently exploit discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
Link prediction measures considering different neighbors’ effects and application in social networks
NASA Astrophysics Data System (ADS)
Luo, Peng; Wu, Chong; Li, Yongli
Link prediction measures have attracted particular attention in the field of mathematical physics. In this paper, we consider the different effects of neighbors in link prediction and focus on four different situations: considering only the individual's own effects; considering the effects of the individual, its neighbors, and its neighbors' neighbors; considering the effects of the individual and of neighbors up to four steps away (neighbors, neighbors' neighbors, and so on); and considering the effects of all participants in the whole network. Then, according to the four situations, we present our link prediction models, which also take the effects of social characteristics into consideration. An artificial network is adopted to illustrate the parameter estimation based on logistic regression. Furthermore, we compare our methods with some other link prediction methods (LPMs) to examine the validity of our proposed model in online social networks. The results show the superiority of our proposed link prediction methods compared with others. In the application part, our models are applied to study social network evolution and used to recommend friends and cooperators in social networks.
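A minimal sketch of the logistic-regression estimation step is shown below, assuming networkx and scikit-learn; the four neighbor-order situations are approximated by walk counts of increasing length between node pairs, which is an illustrative stand-in rather than the authors' exact feature definitions, and social characteristics are omitted.

```python
# Illustrative sketch: logistic-regression link prediction with neighbourhood features.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def neighbourhood_features(G, max_order=3):
    """Return a feature matrix and labels for all node pairs: degrees plus counts
    of walks of length 2..max_order between the pair (a stand-in for the paper's
    neighbour, neighbours'-neighbour, ... effects)."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    powers = [np.linalg.matrix_power(A, k) for k in range(2, max_order + 1)]
    deg = dict(G.degree())
    X, y = [], []
    for i, u in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            v = nodes[j]
            X.append([deg[u], deg[v]] + [P[i, j] for P in powers])
            y.append(int(G.has_edge(u, v)))
    return np.array(X, dtype=float), np.array(y)

G = nx.karate_club_graph()
X, y = neighbourhood_features(G)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba(X[:5])[:, 1])   # predicted link probabilities for 5 pairs
```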
A method for modelling GP practice level deprivation scores using GIS
Strong, Mark; Maheswaran, Ravi; Pearson, Tim; Fryers, Paul
2007-01-01
Background A measure of general practice level socioeconomic deprivation can be used to explore the association between deprivation and other practice characteristics. An area-based categorisation is commonly chosen as the basis for such a deprivation measure. Ideally a practice population-weighted area-based deprivation score would be calculated using individual level spatially referenced data. However, these data are often unavailable. One approach is to link the practice postcode to an area-based deprivation score, but this method has limitations. This study aimed to develop a Geographical Information Systems (GIS) based model that could better predict a practice population-weighted deprivation score in the absence of patient level data than simple practice postcode linkage. Results We calculated predicted practice level Index of Multiple Deprivation (IMD) 2004 deprivation scores using two methods that did not require patient level data. Firstly we linked the practice postcode to an IMD 2004 score, and secondly we used a GIS model derived using data from Rotherham, UK. We compared our two sets of predicted scores to "gold standard" practice population-weighted scores for practices in Doncaster, Havering and Warrington. Overall, the practice postcode linkage method overestimated "gold standard" IMD scores by 2.54 points (95% CI 0.94, 4.14), whereas our modelling method showed no such bias (mean difference 0.36, 95% CI -0.30, 1.02). The postcode-linked method systematically underestimated the gold standard score in less deprived areas, and overestimated it in more deprived areas. Our modelling method showed a small underestimation in scores at higher levels of deprivation in Havering, but showed no bias in Doncaster or Warrington. The postcode-linked method showed more variability when predicting scores than did the GIS modelling method. Conclusion A GIS based model can be used to predict a practice population-weighted area-based deprivation measure in the absence of patient level data. Our modelled measure generally had better agreement with the population-weighted measure than did a postcode-linked measure. Our model may also avoid an underestimation of IMD scores in less deprived areas, and overestimation of scores in more deprived areas, seen when using postcode linked scores. The proposed method may be of use to researchers who do not have access to patient level spatially referenced data. PMID:17822545
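For reference, the "gold standard" practice population-weighted score can be sketched as below, assuming a table with one row per practice/area pair giving registered patient counts and each area's IMD 2004 score; the column names and numbers are invented for illustration.

```python
# Illustrative sketch: practice population-weighted IMD score from patient counts.
import pandas as pd

# Assumed layout: one row per (practice, area) with the number of registered
# patients living in that area and the area's IMD 2004 score.
registrations = pd.DataFrame({
    "practice": ["P1", "P1", "P2", "P2"],
    "area":     ["A1", "A2", "A1", "A3"],
    "patients": [1200, 300, 400, 900],
    "area_imd": [35.2, 12.8, 35.2, 22.5],
})

def weighted_imd(df):
    # Weight each area's IMD score by the practice's patient count in that area.
    return (df["patients"] * df["area_imd"]).sum() / df["patients"].sum()

practice_scores = registrations.groupby("practice").apply(weighted_imd)
print(practice_scores)   # e.g. P1 ~ 30.7, P2 ~ 26.4
```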
Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks
Zhu, Junxing; Zhang, Jiawei; Wu, Quanyuan; Jia, Yan; Zhou, Bin; Wei, Xiaokai; Yu, Philip S.
2017-01-01
Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a=(u,v) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed in such a perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constrained active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-the-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages. PMID:28771201
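A hedged sketch of the query-selection idea follows: pick the unlabeled anchor link whose predicted label is most uncertain (maximum entropy), and once a link is labeled positive, infer negatives from the one-to-one constraint. The scoring here is a generic entropy heuristic with made-up probabilities, not necessarily the paper's exact MC formulation.

```python
# Illustrative sketch: choose the most informative unlabeled anchor link to query,
# then apply the one-to-one constraint once a positive label is obtained.
import math

def entropy(p):
    """Binary entropy of a predicted anchor-link probability."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def select_query(candidates):
    """candidates: dict mapping (u, v) -> predicted probability of being an anchor link."""
    return max(candidates, key=lambda link: entropy(candidates[link]))

def apply_one_to_one(candidates, positive_link):
    """If (u, v) is labeled positive, every other candidate touching u or v
    becomes negative automatically."""
    u, v = positive_link
    return [l for l in candidates if l != positive_link and (l[0] == u or l[1] == v)]

candidates = {("u1", "v1"): 0.55, ("u1", "v2"): 0.10,
              ("u2", "v1"): 0.80, ("u2", "v2"): 0.45}
query = select_query(candidates)
print("ask oracle about:", query)
print("auto-negatives if positive:", apply_one_to_one(candidates, query))
```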
Mathematical Modeling For Control Of A Flexible Manipulator
NASA Technical Reports Server (NTRS)
Hu, Anren
1996-01-01
Improved method of mathematical modeling of dynamics of flexible robotic manipulators developed for use in controlling motions of manipulators. Involves accounting for effect, upon modes of vibration of manipulator, of changes in configuration of manipulator and manipulated payload(s). Although the flexible manipulator has one or more long, slender articulated link(s), like those used in outer space, the method is also applicable to terrestrial industrial robotic manipulators with relatively short, stiff links, or to such terrestrial machines as construction cranes.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
Cross-Link Guided Molecular Modeling with ROSETTA
Leitner, Alexander; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
2013-01-01
Chemical cross-links identified by mass spectrometry generate distance restraints that reveal low-resolution structural information on proteins and protein complexes. The technology to reliably generate such data has become mature and robust enough to shift the focus to the question of how these distance restraints can be best integrated into molecular modeling calculations. Here, we introduce three workflows for incorporating distance restraints generated by chemical cross-linking and mass spectrometry into ROSETTA protocols for comparative and de novo modeling and protein-protein docking. We demonstrate that the cross-link validation and visualization software Xwalk facilitates successful cross-link data integration. Besides the protocols we introduce XLdb, a database of chemical cross-links from 14 different publications with 506 intra-protein and 62 inter-protein cross-links, where each cross-link can be mapped on an experimental structure from the Protein Data Bank. Finally, we demonstrate on a protein-protein docking reference data set the impact of virtual cross-links on protein docking calculations and show that an inter-protein cross-link can reduce on average the RMSD of a docking prediction by 5.0 Å. The methods and results presented here provide guidelines for the effective integration of chemical cross-link data in molecular modeling calculations and should advance the structural analysis of particularly large and transient protein complexes via hybrid structural biology methods. PMID:24069194
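A small sketch of the distance-validation step that tools such as Xwalk perform is given below, assuming a dictionary of Cα coordinates and a maximum Cα-Cα span of about 30 Å for a DSS/BS3-type cross-linker; both the coordinates and the cutoff are illustrative assumptions, not values from the study.

```python
# Illustrative sketch: flag cross-links whose C-alpha distance in a model exceeds
# the assumed maximum span of the cross-linker (~30 angstroms for a DSS-type reagent).
import numpy as np

def over_length_crosslinks(ca_coords, crosslinks, max_span=30.0):
    """ca_coords: dict residue_id -> np.array([x, y, z]) in angstroms;
    crosslinks: iterable of (res_i, res_j) pairs identified by cross-linking MS."""
    flagged = []
    for res_i, res_j in crosslinks:
        d = float(np.linalg.norm(ca_coords[res_i] - ca_coords[res_j]))
        if d > max_span:
            flagged.append((res_i, res_j, round(d, 1)))
    return flagged

# Toy coordinates for three residues and two identified cross-links.
ca_coords = {10: np.array([0.0, 0.0, 0.0]),
             85: np.array([12.0, 9.0, 4.0]),
             152: np.array([40.0, 5.0, 11.0])}
print(over_length_crosslinks(ca_coords, [(10, 85), (10, 152)]))  # second pair is over-length
```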
Real time markerless motion tracking using linked kinematic chains
Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM
2007-08-14
A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments and accommodates joint limits, velocity constraints, and collision constraints and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.
ERIC Educational Resources Information Center
Lorenz, Frederick O.; And Others
1991-01-01
Examined effects of method variance on models linking family economic pressure, marital quality, and expressions of hostility and warmth among 76 couples. Observer reports yielded results linking economic pressure to marital quality indirectly through interactional processes such as hostility. Self-reports or spouses' reports made it difficult to…
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1990-01-01
The objective is the theoretical analysis and the experimental verification of dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure that is useful for reducing the number of terms calculated, checking correctness, or extending the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using load-interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01
For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
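A minimal sketch of evaluating a (fuzzy) cognitive map with (+1, 0, -1) link values under a scenario is shown below; the concept names, weight matrix, squashing function, and clamping scheme are illustrative assumptions rather than the authors' calibrated model.

```python
# Illustrative sketch: iterate a small cognitive map with (+1, 0, -1) link values.
import numpy as np

concepts = ["heat_source", "fluid_flow", "alteration", "deposit"]
# W[i, j] = influence of concept i on concept j (values restricted to -1, 0, +1).
W = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

def evaluate(W, scenario, steps=20):
    """Propagate activations, squash with tanh, and keep the scenario's
    driver concepts clamped at their scenario values."""
    x = scenario.copy()
    for _ in range(steps):
        x = np.tanh(x + x @ W)
        x[scenario != 0] = scenario[scenario != 0]   # clamp driven concepts
    return x

scenario = np.array([1.0, 0.0, 0.0, 0.0])            # "strong heat source" scenario
print(dict(zip(concepts, np.round(evaluate(W, scenario), 2))))
```

Ranking concept activations across scenarios gives the kind of link-importance ordering described above before membership functions and process equations are attached to the links.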
An efficient link prediction index for complex military organization
NASA Astrophysics Data System (ADS)
Fan, Changjun; Liu, Zhong; Lu, Xin; Xiu, Baoxin; Chen, Qing
2017-03-01
Quality of information is crucial for decision-makers to judge battlefield situations and design the best operation plans; however, real intelligence data are often incomplete and noisy. Missing-link prediction methods and spurious-link identification algorithms can be applied if the complex military organization is modeled as a complex network in which nodes represent functional units and edges denote communication links. Traditional link prediction methods usually work well on homogeneous networks, but few do so for heterogeneous ones, and a military network is a typical heterogeneous network with different types of nodes and edges. In this paper, we propose a combined link prediction index considering both node-type effects and node structural similarities, and demonstrate that it is remarkably superior to all 25 existing similarity-based methods, both in predicting missing links and in identifying spurious links, on real military network data. We also investigated the algorithms' robustness in a noisy environment and found that mistaken information is more misleading than incomplete information in military settings, which differs from recommendation systems; our method maintained the best performance under small noise. Since real military network intelligence must first be carefully checked owing to its significance, and link prediction methods are adopted only to purify the network of the remaining latent noise, the method proposed here is applicable in real situations. Finally, as the FINC-E model used here to describe complex military organizations is also suitable for many other social organizations, such as criminal networks and business organizations, our method has prospects in these areas for many tasks, like detecting underground relationships between terrorists or predicting potential business markets for decision-makers.
Predicting the evolution of complex networks via similarity dynamics
NASA Astrophysics Data System (ADS)
Wu, Tao; Chen, Leiting; Zhong, Linfeng; Xian, Xingping
2017-01-01
Almost all real-world networks are subject to constant evolution, and plenty of them have been investigated empirically to uncover the underlying evolution mechanism. However, the evolution prediction of dynamic networks still remains a challenging problem. The crux of this matter is to estimate the future network links of dynamic networks. This paper studies the evolution prediction of dynamic networks with the link prediction paradigm. To estimate the likelihood of the existence of links more accurately, an effective and robust similarity index is presented by exploiting network structure adaptively. Moreover, most of the existing link prediction methods do not make a clear distinction between future links and missing links. In order to predict the future links, the networks are regarded as dynamic systems in this paper, and a similarity updating method, the spatial-temporal position drift model, is developed to simulate the evolutionary dynamics of node similarity. Then the updated similarities are used as input information for the future links' likelihood estimation. Extensive experiments on real-world networks suggest that the proposed similarity index performs better than baseline methods and the position drift model performs well for evolution prediction in real-world evolving networks.
Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron
NASA Astrophysics Data System (ADS)
Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.
2010-07-01
Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link explicitly the relevant activation energies to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not granted, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models where the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many areas such as bioinformatics and information retrieval. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information on the topological structure of the network such as shortest paths and the connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
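The sketch below illustrates only the DDCRP prior on a network, assuming an exponential decay f(d) = exp(-d/a) over shortest-path distances: each node links to another node (or to itself) with probability proportional to the decayed distance, and clusters are the connected components of those links. The decay function and parameters are assumptions, and the paper's Gibbs sampler is not reproduced.

```python
# Illustrative sketch: draw customer links under a distance dependent CRP prior,
# where the distance is the shortest-path length in the observed network.
import networkx as nx
import numpy as np

def ddcrp_links(G, alpha=1.0, a=1.0, seed=0):
    """Assign each node a 'link' to another node with probability proportional to
    exp(-d/a), where d is the shortest-path distance; alpha is the self-link mass."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    sp = dict(nx.all_pairs_shortest_path_length(G))
    assignment = {}
    for i in nodes:
        weights = []
        for j in nodes:
            if j == i:
                weights.append(alpha)                    # self link
            else:
                d = sp[i].get(j, np.inf)
                weights.append(np.exp(-d / a))           # decay f(d) = exp(-d/a)
        p = np.array(weights) / np.sum(weights)
        assignment[i] = nodes[rng.choice(len(nodes), p=p)]
    return assignment

G = nx.karate_club_graph()
links = ddcrp_links(G)
# Clusters implied by the prior draw: connected components of the link graph.
clusters = nx.connected_components(nx.Graph(list(links.items())))
print([sorted(c) for c in clusters])
```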
Modeling Protein Excited-state Structures from "Over-length" Chemical Cross-links.
Ding, Yue-He; Gong, Zhou; Dong, Xu; Liu, Kan; Liu, Zhu; Liu, Chao; He, Si-Min; Dong, Meng-Qiu; Tang, Chun
2017-01-27
Chemical cross-linking coupled with mass spectrometry (CXMS) provides proximity information for the cross-linked residues and is used increasingly for modeling protein structures. However, experimentally identified cross-links are sometimes incompatible with the known structure of a protein, as the distance calculated between the cross-linked residues far exceeds the maximum length of the cross-linker. The discrepancies may persist even after eliminating potentially false cross-links and excluding intermolecular ones. Thus the "over-length" cross-links may arise from alternative excited-state conformation of the protein. Here we present a method and associated software DynaXL for visualizing the ensemble structures of multidomain proteins based on intramolecular cross-links identified by mass spectrometry with high confidence. Representing the cross-linkers and cross-linking reactions explicitly, we show that the protein excited-state structure can be modeled with as few as two over-length cross-links. We demonstrate the generality of our method with three systems: calmodulin, enzyme I, and glutamine-binding protein, and we show that these proteins alternate between different conformations for interacting with other proteins and ligands. Taken together, the over-length chemical cross-links contain valuable information about protein dynamics, and our findings here illustrate the relationship between dynamic domain movement and protein function. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
NASA Astrophysics Data System (ADS)
Ma, Chuang; Bao, Zhong-Kui; Zhang, Hai-Feng
2017-10-01
So far, many network-structure-based link prediction methods have been proposed. However, these methods highlight only one or two structural features of networks and then use them to predict missing links in different networks. The performance of these existing methods is not always satisfactory, since each network has its own underlying structural features. In this paper, by analyzing different real networks, we find that the structural features of different networks are remarkably different. In particular, even within the same network, the inner structural features can differ markedly. Therefore, more structural features should be considered. However, owing to these remarkably different structural features, the contributions of different features are hard to specify in advance. Inspired by these facts, an adaptive fusion model for link prediction is proposed to incorporate multiple structural features. In the model, a logistic function combining multiple structural features is defined, and the weight of each feature in the logistic function is adaptively determined by exploiting the known structure information. Last, we use the "learnt" logistic function to predict the connection probabilities of missing links. According to our experimental results, we find that the performance of our adaptive fusion model is better than that of many similarity indices.
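A rough sketch of the fusion idea follows, assuming networkx's built-in structural indices (common neighbours, Jaccard coefficient, preferential attachment) and a scikit-learn logistic regression whose weights are learned by hiding a fraction of the known edges; this training scheme is a simplification of how the paper exploits the known structure, and the chosen indices are examples rather than the paper's feature set.

```python
# Illustrative sketch: learn weights for several structural indices from the known
# structure (edges hidden at random), then score potential missing links.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def structural_features(G, pairs):
    """Three example structural indices for each candidate node pair."""
    cn = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in pairs}
    jc = {(u, v): s for u, v, s in nx.jaccard_coefficient(G, pairs)}
    pa = {(u, v): s for u, v, s in nx.preferential_attachment(G, pairs)}
    return np.array([[cn[p], jc[p], pa[p]] for p in pairs], dtype=float)

rng = np.random.default_rng(0)
G = nx.karate_club_graph()
edges = list(G.edges())
hidden = [edges[i] for i in rng.choice(len(edges), size=len(edges) // 5, replace=False)]
G_train = G.copy()
G_train.remove_edges_from(hidden)            # hidden edges play the role of missing links

candidates = list(nx.non_edges(G_train))
X = structural_features(G_train, candidates)
y = np.array([int(G.has_edge(u, v)) for u, v in candidates])

fused = LogisticRegression(max_iter=1000).fit(X, y)   # "learnt" logistic function
scores = fused.predict_proba(X)[:, 1]
print(sorted(zip(scores, candidates), reverse=True)[:5])   # most likely missing links
```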
To trade or not to trade: Link prediction in the virtual water network
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca
2017-12-01
In the international trade network, links express the (temporary) presence of a commercial exchange of goods between any two countries. Given the dynamical behaviour of the trade network, where links are created and dismissed every year, predicting the link activation/deactivation is an open research question. Through the international trade network of agricultural goods, water resources are 'virtually' transferred from the country of production to the country of consumption. We propose a novel methodology for link prediction applied to the network of virtual water trade. Starting from the assumption of having links between any two countries, we estimate the associated virtual water flows by means of a gravity-law model using country and link characteristics as drivers. We consider the links with estimated flows higher than 1000 m3/year as active links, and the others as non-active links. Flows traded along estimated active links are then re-estimated using a similar but differently-calibrated gravity-law model. We were able to correctly model 84% of the existing links and 93% of the non-existing links in year 2011. It is worth noting that the predicted active links carry 99% of the global virtual water flow; hence, missed links are mainly those where a minimum volume of virtual water is exchanged. Results indicate that, over the period from 1986 to 2011, population, geographical distances between countries, and agricultural efficiency (through fertilizer use) are the major factors driving the link activation and deactivation. As opposed to other (network-based) models for link prediction, the proposed method is able to reconstruct the network architecture without any prior knowledge of the network topology, using only the node and link attributes; it thus represents a general method that can be applied to other networks such as food or value trade networks.
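A minimal sketch of the two-stage idea is given below, assuming a multiplicative gravity form with population and distance as the only drivers and the 1000 m3/year activation threshold quoted in the abstract; the functional form, coefficient values, and country data are illustrative assumptions, not the calibrated model.

```python
# Illustrative sketch: gravity-law estimate of virtual water flows and link activation.
import numpy as np

def gravity_flow(pop_exp, pop_imp, distance, k=1e-10, a=1.0, b=1.0, c=2.0):
    """Assumed form: F = k * P_exporter**a * P_importer**b / d**c  (m3/year)."""
    return k * pop_exp**a * pop_imp**b / distance**c

def predict_links(countries, threshold=1_000.0):
    """countries: dict name -> (population, coordinates); returns active links with flows."""
    active = {}
    names = list(countries)
    for i, ci in enumerate(names):
        for cj in names[i + 1:]:
            (p1, xy1), (p2, xy2) = countries[ci], countries[cj]
            d = np.hypot(xy1[0] - xy2[0], xy1[1] - xy2[1])
            flow = gravity_flow(p1, p2, d)
            if flow > threshold:          # the 1000 m3/year activation rule
                active[(ci, cj)] = flow
    return active

countries = {"A": (5e7, (0.0, 0.0)), "B": (8e7, (3.0, 4.0)), "C": (2e6, (40.0, 10.0))}
print(predict_links(countries))   # only the nearby, populous pair activates in this toy case
```

In the two-stage scheme described above, flows on the predicted active links would then be re-estimated with a second, separately calibrated gravity model.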
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link function was introduced into a parametric safety continuum model, which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate the model parameters. The safety implication of return level was calculated based on the model parameters. The proposed approach was applied to the conflict and crash data collected from 21 segments from three freeways located in Guangdong province, China. The Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zheng, Chunmiao; Hill, Mary Catherine; Hsieh, Paul A.
2001-01-01
MODFLOW-2000, the newest version of MODFLOW, is a computer program that numerically solves the three-dimensional ground-water flow equation for a porous medium using a finite-difference method. MT3DMS, the successor to MT3D, is a computer program for modeling multi-species solute transport in three-dimensional ground-water systems using multiple solution techniques, including the finite-difference method, the method of characteristics (MOC), and the total-variation-diminishing (TVD) method. This report documents a new version of the Link-MT3DMS Package, which enables MODFLOW-2000 to produce the information needed by MT3DMS, and also discusses new visualization software for MT3DMS. Unlike the Link-MT3D Packages that coordinated previous versions of MODFLOW and MT3D, the new Link-MT3DMS Package requires an input file that, among other things, provides enhanced support for additional MODFLOW sink/source packages and allows list-directed (free) format for the flow-transport link file produced by the flow model. The report contains four parts: (a) documentation of the Link-MT3DMS Package Version 6 for MODFLOW-2000; (b) discussion of several issues related to simulation setup and input data preparation for running MT3DMS with MODFLOW-2000; (c) description of two test example problems, with comparison to results obtained using another MODFLOW-based transport program; and (d) overview of post-simulation visualization and animation using the U.S. Geological Survey's Model Viewer.
New ghost-node method for linking different models with varied grid refinement
James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.
2006-01-01
A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined "child" model that is contained within a larger and coarser "parent" model that is based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.
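A schematic 1-D illustration of the coupling idea follows: a coarse "parent" grid is solved over the whole domain, heads are interpolated onto the refined "child" grid's boundary (the role played by ghost nodes), and the child is re-solved on its subdomain. The iterative feedback of child fluxes to the parent and the MODFLOW/FEHM interfaces are omitted, so this is only a loose sketch, not the authors' implementation.

```python
# Schematic 1-D sketch of parent/child grid linking in the spirit of the ghost-node method.
import numpy as np

def solve_laplace_1d(h_left, h_right, n):
    """Steady 1-D homogeneous flow with fixed heads at both ends: heads vary linearly."""
    return np.linspace(h_left, h_right, n)

# Parent: coarse grid over the whole domain with fixed heads 10 and 2.
x_parent = np.linspace(0.0, 1.0, 6)
h_parent = solve_laplace_1d(10.0, 2.0, 6)

# Ghost-node-style step: interpolate parent heads onto the child's boundary locations.
child_left, child_right = 0.4, 0.6
hb_left = np.interp(child_left, x_parent, h_parent)
hb_right = np.interp(child_right, x_parent, h_parent)

# Child: locally refined grid between the interpolated boundary heads.
x_child = np.linspace(child_left, child_right, 11)
h_child = solve_laplace_1d(hb_left, hb_right, 11)

print(np.round(h_child, 3))   # refined heads honour the parent solution at the interface
```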
An Extension of Least Squares Estimation of IRT Linking Coefficients for the Graded Response Model
ERIC Educational Resources Information Center
Kim, Seonghoon
2010-01-01
The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…
NASA Astrophysics Data System (ADS)
Carlsohn, Elisabet; Ångström, Jonas; Emmett, Mark R.; Marshall, Alan G.; Nilsson, Carol L.
2004-05-01
Chemical cross-linking of proteins is a well-established method for structural mapping of small protein complexes. When combined with mass spectrometry, cross-linking can reveal protein topology and identify contact sites between the peptide surfaces. When applied to surface-exposed proteins from pathogenic organisms, the method can reveal structural details that are useful in vaccine design. In order to investigate the possibilities of applying cross-linking on larger protein complexes, we selected the urease enzyme from Helicobacter pylori as a model. This membrane-associated protein complex consists of two subunits: α (26.5 kDa) and β (61.7 kDa). Three (αβ) heterodimers form a trimeric (αβ)₃ assembly which further associates into a unique dodecameric 1.1 MDa complex composed of four (αβ)₃ units. Cross-linked peptides from trypsin-digested urease complex were analyzed by Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) and molecular modeling. Two potential cross-linked peptides (present in the cross-linked sample but undetectable in α, β, and native complex) were assigned. Molecular modeling of the urease αβ complex and trimeric urease units (αβ)₃ revealed a linkage site between the α-subunit and the β-subunit, and an internal cross-linkage in the β-subunit.
Navascués, Miguel; Hardy, Olivier J; Burgarella, Concetta
2009-03-01
This work extends the methods of demographic inference based on the distribution of pairwise genetic differences between individuals (mismatch distribution) to the case of linked microsatellite data. Population genetics theory describes the distribution of mutations among a sample of genes under different demographic scenarios. However, the actual number of mutations can rarely be deduced from DNA polymorphisms. The inclusion of mutation models in theoretical predictions can improve the performance of statistical methods. We have developed a maximum-pseudolikelihood estimator for the parameters that characterize a demographic expansion for a series of linked loci evolving under a stepwise mutation model. Those loci would correspond to DNA polymorphisms of linked microsatellites (such as those found on the Y chromosome or the chloroplast genome). The proposed method was evaluated with simulated data sets and with a data set of chloroplast microsatellites that showed signal for demographic expansion in a previous study. The results show that inclusion of a mutational model in the analysis improves the estimates of the age of expansion in the case of older expansions.
A link prediction method for heterogeneous networks based on BP neural network
NASA Astrophysics Data System (ADS)
Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu
2018-04-01
Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on example datasets of a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP performs very well and is superior to the baseline methods.
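A minimal sketch in the spirit of MPBP is shown below, assuming meta-path count features have already been extracted for node pairs and using scikit-learn's MLPClassifier as a stand-in for the three-layer BP network; the feature definitions, the synthetic data, and the network size are assumptions for illustration only.

```python
# Illustrative sketch: a small backpropagation-trained network over meta-path count
# features for heterogeneous link prediction; data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400
# Assumed features: counts of three meta-path types between each candidate node pair,
# e.g. gene-pathway-gene, gene-paper-disease, gene-gene-disease.
X = rng.poisson(lam=[2.0, 1.0, 0.5], size=(n, 3)).astype(float)
# Synthetic ground truth: links are more likely when meta-path counts are high.
logits = 0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.5 * X[:, 2] - 3.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Three-layer network: input layer -> one hidden layer -> output, trained by backprop.
bp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", round(bp.score(X, y), 3))
print("P(link) for meta-path counts [3, 2, 1]:",
      bp.predict_proba([[3.0, 2.0, 1.0]])[:, 1])
```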
NASA Astrophysics Data System (ADS)
Pan, Kok-Kwei
We have generalized the linked cluster expansion method to a broader class of many-body quantum systems, such as quantum spin systems with crystal-field potentials and the Hubbard model. The technique sums up all connected diagrams to a certain order of the perturbative Hamiltonian. The modified multiple-site Wick reduction theorem and the simple tau dependence of the standard basis operators have been used to facilitate the evaluation of the integration procedures in the perturbation expansion. Computational methods are developed to calculate all terms in the series expansion. As a first example, the perturbation series expansion of thermodynamic quantities of the single-band Hubbard model has been obtained using a linked cluster series expansion technique. We have made corrections to all previous results of several papers (up to fourth order). The behaviors of the three dimensional simple cubic and body-centered cubic systems have been discussed from the qualitative analysis of the perturbation series up to fourth order. We have also calculated the sixth-order perturbation series of this model. As a second example, we present the magnetic properties of the spin-one Heisenberg model with arbitrary crystal-field potential using a linked cluster series expansion. The calculation of the thermodynamic properties using this method covers the whole range of temperature, in both magnetically ordered and disordered phases. The series for the susceptibility and magnetization have been obtained up to fourth order for this model. The method sums up all perturbation terms to a certain order and estimates the result using a well-developed and highly successful extrapolation method (the standard ratio method). The dependence of critical temperature on the crystal-field potential and the magnetization as a function of temperature and crystal-field potential are shown. The critical behaviors at zero temperature are also shown. The range of the crystal-field potential for Ni(2+) compounds is roughly estimated based on this model using known experimental results.
Manipulators with flexible links: A simple model and experiments
NASA Technical Reports Server (NTRS)
Shimoyama, Isao; Oppenheim, Irving J.
1989-01-01
A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real time computation as might be applied in model based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.
ESEA Title I Linking Project. Final Report.
ERIC Educational Resources Information Center
Holmes, Susan E.
The Rasch model for test score equating was compared with three other equating procedures as methods for implementing the norm referenced method (RMC Model A) of evaluating ESEA Title I projects. The Rasch model and its theoretical limitations were described. The three other equating methods used were: linear observed score equating, linear true…
Hanson, R.T.; Flint, L.E.; Flint, A.L.; Dettinger, M.D.; Faunt, C.C.; Cayan, D.; Schmid, W.
2012-01-01
Potential climate change effects on aspects of conjunctive management of water resources can be evaluated by linking climate models with fully integrated groundwater-surface water models. The objective of this study is to develop a modeling system that links global climate models with regional hydrologic models, using the California Central Valley as a case study. The new method is a supply and demand modeling framework that can be used to simulate and analyze potential climate change and conjunctive use. Supply-constrained and demand-driven linkages in the water system in the Central Valley are represented with the linked climate models, precipitation-runoff models, agricultural and native vegetation water use, and hydrologic flow models to demonstrate the feasibility of this method. Simulated precipitation and temperature were used from the GFDL-A2 climate change scenario through the 21st century to drive a regional water balance mountain hydrologic watershed model (MHWM) for the surrounding watersheds in combination with a regional integrated hydrologic model of the Central Valley (CVHM). Application of this method demonstrates the potential transition from predominantly surface water to groundwater supply for agriculture with secondary effects that may limit this transition of conjunctive use. The particular scenario considered includes intermittent climatic droughts in the first half of the 21st century followed by severe persistent droughts in the second half of the 21st century. These climatic droughts do not yield a valley-wide operational drought but do cause reduced surface water deliveries and increased groundwater abstractions that may cause additional land subsidence, reduced water for riparian habitat, or changes in flows at the Sacramento-San Joaquin River Delta. The method developed here can be used to explore conjunctive use adaptation options and hydrologic risk assessments in regional hydrologic systems throughout the world.
Thomas C. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Joshua L. Lawler
2002-01-01
We describe our collective efforts to develop and apply methods for using FIA data to model forest resources and wildlife habitat. Our work demonstrates how flexible regression techniques, such as generalized additive models, can be linked with spatially explicit environmental information for the mapping of forest type and structure. We illustrate how these maps of...
ERIC Educational Resources Information Center
Karkee, Thakur B.; Wright, Karen R.
2004-01-01
Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
Image segmentation algorithm based on improved PCNN
NASA Astrophysics Data System (ADS)
Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui
2017-11-01
A modified simplified Pulse Coupled Neural Network (PCNN) model based on the simplified PCNN is proposed in this article. Several refinements enrich the model, such as imposing restrictions on the input terms and improving the linking inputs and internal activity of the PCNN. A self-adaptive setting method for the linking coefficient and the threshold decay time constant is also proposed. Finally, we implemented an image segmentation algorithm for five test images based on the proposed simplified PCNN model and PSO. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
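A hedged sketch of one common simplified-PCNN iteration scheme is given below (feeding from the stimulus, linking from a 3x3 neighbourhood, internal activity U = S(1 + βL), and an exponentially decaying threshold that jumps after firing); the parameter values and the toy image are illustrative, and the PSO-based parameter search and the paper's specific modifications are omitted.

```python
# Illustrative sketch of a simplified PCNN iteration for segmentation (parameters assumed).
import numpy as np
from scipy.ndimage import uniform_filter

def spcnn_segment(image, beta=0.3, V_E=20.0, alpha_E=0.7, steps=10):
    """image: 2-D array scaled to [0, 1]. Returns the iteration at which each pixel
    first fires; pixels of similar intensity tend to fire in the same iteration."""
    S = image.astype(float)
    Y = np.zeros_like(S)                    # pulse output
    E = np.ones_like(S) * S.max() + 1.0     # dynamic threshold (no pixel fires at first)
    first_fire = np.zeros(S.shape, dtype=int)
    for n in range(1, steps + 1):
        L = uniform_filter(Y, size=3)       # linking input from the 3x3 neighbourhood
        U = S * (1.0 + beta * L)            # internal activity
        Y = (U > E).astype(float)           # neurons fire when activity exceeds threshold
        E = np.exp(-alpha_E) * E + V_E * Y  # threshold decays, then jumps after firing
        first_fire[(first_fire == 0) & (Y > 0)] = n
    return first_fire

rng = np.random.default_rng(0)
img = np.clip(np.block([[np.full((8, 8), 0.2)], [np.full((8, 8), 0.8)]])
              + 0.05 * rng.standard_normal((16, 8)), 0, 1)
print(spcnn_segment(img))   # bright region fires earlier than the dark region
```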
Discrete time modeling and stability analysis of TCP Vegas
NASA Astrophysics Data System (ADS)
You, Byungyong; Koo, Kyungmo; Lee, Jin S.
2007-12-01
This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Previous papers have shown global stability for several network models, but those models are not dual problems in which dynamics exist in both the sources and the links, as in TCP Vegas. Other papers have studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which provides a necessary and sufficient condition. Using a discrete-time state-space model and Jury's criterion, we find an asymptotic stability region of the TCP Vegas network model. The result is verified by ns-2 simulation, and comparison with other results shows that our method performs well.
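For reference, a minimal sketch of Jury's test for a second-order characteristic polynomial z^2 + a1*z + a0; the paper's actual closed-loop polynomial for TCP Vegas is not reproduced here:

```python
def jury_stable_quadratic(a1, a0):
    """Jury's necessary and sufficient conditions for all roots of
    z^2 + a1*z + a0 to lie strictly inside the unit circle."""
    return (abs(a0) < 1.0           # |a0| < 1
            and 1.0 + a1 + a0 > 0   # P(1) > 0
            and 1.0 - a1 + a0 > 0)  # (-1)^2 * P(-1) > 0

# Example: z^2 - 0.5 z + 0.06 has roots 0.2 and 0.3, hence stable
assert jury_stable_quadratic(-0.5, 0.06)
```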
Battaile, Brian C; Trites, Andrew W
2013-01-01
We propose a method to model the physiological link between somatic survival and reproductive output that reduces the number of parameters that need to be estimated by models designed to determine combinations of birth and death rates that produce historic counts of animal populations. We applied our Reproduction and Somatic Survival Linked (RSSL) method to the population counts of three species of North Pacific pinnipeds (harbor seals, Phoca vitulina richardii (Gray, 1864); northern fur seals, Callorhinus ursinus (L., 1758); and Steller sea lions, Eumetopias jubatus (Schreber, 1776))--and found our model outperformed traditional models when fitting vital rates to common types of limited datasets, such as those from counts of pups and adults. However, our model did not perform as well when these basic counts of animals were augmented with additional observations of ratios of juveniles to total non-pups. In this case, the failure of the ratios to improve model performance may indicate that the relationship between survival and reproduction is redefined or disassociated as populations change over time or that the ratio of juveniles to total non-pups is not a meaningful index of vital rates. Overall, our RSSL models show advantages to linking survival and reproduction within models to estimate the vital rates of pinnipeds and other species that have limited time-series of counts.
Earth-Space Link Attenuation Estimation via Ground Radar Kdp
NASA Technical Reports Server (NTRS)
Bolen, Steven M.; Benjamin, Andrew L.; Chandrasekar, V.
2003-01-01
A method of predicting attenuation on microwave Earth/spacecraft communication links, over wide areas and under various atmospheric conditions, has been developed. In the area around the ground station locations, a nearly horizontally aimed polarimetric S-band ground radar measures the specific differential phase (Kdp) along the Earth-space path. The specific attenuation along a path of interest is then computed by use of a theoretical model of the relationship between the measured S-band specific differential phase and the specific attenuation at the frequency to be used on the communication link. The model includes effects of rain, wet ice, and other forms of precipitation. The attenuation on the path of interest is then computed by integrating the specific attenuation over the length of the path. This method can be used to determine statistics of signal degradation on Earth/spacecraft communication links. It can also be used to obtain real-time estimates of attenuation along multiple Earth/spacecraft links that are parts of a communication network operating within the radar coverage area, thereby enabling better management of the network through appropriate dynamic routing along the best combination of links.
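As an illustration of the final step, a minimal sketch that converts Kdp samples to specific attenuation through an assumed power law and integrates along the path; the coefficients a and b are placeholders, not the values of the article's theoretical model:

```python
import numpy as np

def path_attenuation_db(kdp_deg_per_km, segment_km, a=0.23, b=1.0):
    """Estimate total path attenuation (dB) on a link from radar-measured
    specific differential phase Kdp along the path.

    kdp_deg_per_km : Kdp samples along the path (deg/km)
    segment_km     : corresponding path-segment lengths (km)
    a, b           : power-law coefficients mapping Kdp to specific attenuation
                     at the link frequency (placeholder values; the real
                     coefficients depend on frequency and precipitation type)
    """
    specific_att = a * np.asarray(kdp_deg_per_km) ** b   # dB/km
    return float(np.sum(specific_att * np.asarray(segment_km)))

# Example: three 2-km segments with Kdp of 0.5, 1.2 and 0.8 deg/km
total_db = path_attenuation_db([0.5, 1.2, 0.8], [2.0, 2.0, 2.0])
```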
Flexible robot control: Modeling and experiments
NASA Technical Reports Server (NTRS)
Oppenheim, Irving J.; Shimoyama, Isao
1989-01-01
Described here is a model and its use in experimental studies of flexible manipulators. The analytical model uses the equivalent of Rayleigh's method to approximate the displaced shape of a flexible link as the static elastic displacement which would occur under end rotations as applied at the joints. The generalized coordinates are thereby expressly compatible with joint motions and rotations in serial link manipulators, because the amplitude variables are simply the end rotations between the flexible link and the chord connecting the end points. The equations for the system dynamics are quite simple and can readily be formulated for the multi-link, three-dimensional case. When the flexible links possess mass and (polar moment of) inertia which are small compared to the concentrated mass and inertia at the joints, the analytical model is exact and displays the additional advantage of reduction in system dimension for the governing equations. Four series of pilot tests have been completed. Studies on a planar single-link system were conducted at Carnegie-Mellon University, and tests conducted at Toshiba Corporation on a planar two-link system were then incorporated into the study. A single link system under three-dimensional motion, displaying biaxial flexure, was then tested at Carnegie-Mellon.
NMR measurements of gaseous sulfur hexafluoride (SF6) to probe the cross-linking of EPDM rubber.
Terekhov, M; Neutzler, S; Aluas, M; Hoepfel, D; Oellrich, L R
2005-11-01
The effects of embedding gaseous SF6 into EPDM rubber were investigated using NMR methods. It was found that observed sorption and desorption processes follow the behavior of the dual mode sorption model. A strong correlation was found between EPDM cross-linking and transversal relaxation time of embedded SF6. EPDM samples with different cross-link densities, preliminarily determined by 1H transversal relaxation using the Gotlib model and Litvinov's method, were investigated using embedded SF6. The sensitivity of the 19F transversal relaxation rate of SF6 to the EPDM cross-link density variation was found to be at least 10 times higher than for 1H in the polymer chain. First experiments on probing the swelling effects in EPDM due to its contact with polar liquids have been performed. Copyright (c) 2005 John Wiley & Sons, Ltd.
Assessing crown fire potential by linking models of surface and crown fire behavior
Joe H. Scott; Elizabeth D. Reinhardt
2001-01-01
Fire managers are increasingly concerned about the threat of crown fires, yet only now are quantitative methods for assessing crown fire hazard being developed. Links among existing mathematical models of fire behavior are used to develop two indices of crown fire hazard-the Torching Index and Crowning Index. These indices can be used to ordinate different forest...
Walzthoeni, Thomas; Joachimiak, Lukasz A; Rosenberger, George; Röst, Hannes L; Malmström, Lars; Leitner, Alexander; Frydman, Judith; Aebersold, Ruedi
2015-12-01
Chemical cross-linking in combination with mass spectrometry generates distance restraints of amino acid pairs in close proximity on the surface of native proteins and protein complexes. In this study we used quantitative mass spectrometry and chemical cross-linking to quantify differences in cross-linked peptides obtained from complexes in spatially discrete states. We describe a generic computational pipeline for quantitative cross-linking mass spectrometry consisting of modules for quantitative data extraction and statistical assessment of the obtained results. We used the method to detect conformational changes in two model systems: firefly luciferase and the bovine TRiC complex. Our method discovers and explains the structural heterogeneity of protein complexes using only sparse structural information.
Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.
2016-01-01
A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications. PMID:27013229
Spatial-Operator Algebra For Flexible-Link Manipulators
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Rodriguez, Guillermo
1994-01-01
Method of computing dynamics of multiple-flexible-link robotic manipulators based on spatial-operator algebra, which originally applied to rigid-link manipulators. Aspects of spatial-operator-algebra approach described in several previous articles in NASA Tech Briefs-most recently "Robot Control Based on Spatial-Operator Algebra" (NPO-17918). In extension of spatial-operator algebra to manipulators with flexible links, each link represented by finite-element model: mass of flexible link apportioned among smaller, lumped-mass rigid bodies, coupling of motions expressed in terms of vibrational modes. This leads to operator expression for modal-mass matrix of link.
Kling, Teresia; Johansson, Patrik; Sanchez, José; Marinescu, Voichita D.; Jörnsten, Rebecka; Nelander, Sven
2015-01-01
Statistical network modeling techniques are increasingly important tools to analyze cancer genomics data. However, current tools and resources are not designed to work across multiple diagnoses and technical platforms, thus limiting their applicability to comprehensive pan-cancer datasets such as The Cancer Genome Atlas (TCGA). To address this, we describe a new data driven modeling method, based on generalized Sparse Inverse Covariance Selection (SICS). The method integrates genetic, epigenetic and transcriptional data from multiple cancers, to define links that are present in multiple cancers, a subset of cancers, or a single cancer. It is shown to be statistically robust and effective at detecting direct pathway links in data from TCGA. To facilitate interpretation of the results, we introduce a publicly accessible tool (cancerlandscapes.org), in which the derived networks are explored as interactive web content, linked to several pathway and pharmacological databases. To evaluate the performance of the method, we constructed a model for eight TCGA cancers, using data from 3900 patients. The model rediscovered known mechanisms and contained interesting predictions. Possible applications include prediction of regulatory relationships, comparison of network modules across multiple forms of cancer and identification of drug targets. PMID:25953855
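As a simplified illustration of the underlying idea, a sketch using standard (non-generalized) sparse inverse covariance selection via scikit-learn's graphical lasso; it does not integrate multiple platforms or cancers as the described method does, and the data are stand-ins:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# X: samples x genes matrix of standardized molecular measurements
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))    # stand-in for real TCGA-style data

model = GraphicalLassoCV().fit(X)     # cross-validated sparsity level
precision = model.precision_          # sparse inverse covariance matrix

# A nonzero off-diagonal entry (i, j) is interpreted as a direct link
links = np.argwhere(np.triu(np.abs(precision) > 1e-8, k=1))
```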
Multivariate Methods for Meta-Analysis of Genetic Association Studies.
Dimou, Niki L; Pantavou, Katerina G; Braliou, Georgia G; Bagos, Pantelis G
2018-01-01
Multivariate meta-analysis of genetic association studies and genome-wide association studies has received remarkable attention as it improves the precision of the analysis. Here, we review, summarize and present in a unified framework methods for multivariate meta-analysis of genetic association studies and genome-wide association studies. Starting with the statistical methods used for robust analysis and genetic model selection, we briefly present univariate methods for meta-analysis and then scrutinize multivariate methodologies. Multivariate models of meta-analysis for single gene-disease association studies, including models for haplotype association studies, multiple linked polymorphisms and multiple outcomes, are discussed. The popular Mendelian randomization approach and special cases of meta-analysis addressing issues such as the assumption of the mode of inheritance, deviation from Hardy-Weinberg Equilibrium and gene-environment interactions are also presented. All available methods are enriched with practical applications, and methodologies that could be developed in the future are discussed. Links to all available software implementing multivariate meta-analysis methods are also provided.
Online two-stage association method for robust multiple people tracking
NASA Astrophysics Data System (ADS)
Lv, Jingqin; Fang, Jiangxiong; Yang, Jie
2011-07-01
Robust multiple people tracking is important for many applications but is challenging due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets generated by linking people detection responses are extended by particle filter based tracking, with detection confidence embedded in the observation model, and an examination scheme runs at each frame to check the reliability of the tracking. In the second stage, multiple people tracking is achieved by linking tracklets into trajectories; an online tracklet association method is proposed to solve this linking problem, which allows applications in time-critical scenarios. The method is evaluated on the popular CAVIAR dataset, and the experimental results show that the two-stage method is robust.
Improving Website Hyperlink Structure Using Server Logs
Paranjape, Ashwin; West, Robert; Zia, Leila; Leskovec, Jure
2016-01-01
Good websites should be easy to navigate via hyperlinks, yet maintaining a high-quality link structure is difficult. Identifying pairs of pages that should be linked may be hard for human editors, especially if the site is large and changes frequently. Further, given a set of useful link candidates, the task of incorporating them into the site can be expensive, since it typically involves humans editing pages. In the light of these challenges, it is desirable to develop data-driven methods for automating the link placement task. Here we develop an approach for automatically finding useful hyperlinks to add to a website. We show that passively collected server logs, beyond telling us which existing links are useful, also contain implicit signals indicating which nonexistent links would be useful if they were to be introduced. We leverage these signals to model the future usefulness of yet nonexistent links. Based on our model, we define the problem of link placement under budget constraints and propose an efficient algorithm for solving it. We demonstrate the effectiveness of our approach by evaluating it on Wikipedia, a large website for which we have access to both server logs (used for finding useful new links) and the complete revision history (containing a ground truth of new links). As our method is based exclusively on standard server logs, it may also be applied to any other website, as we show with the example of the biomedical research site Simtk. PMID:28345077
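As a rough illustration of the budgeted link-placement idea, a sketch that greedily ranks candidate links by an estimated usefulness score; the paper's algorithm additionally models diminishing returns among links added to the same page, which this simple top-k selection ignores:

```python
def place_links(candidates, budget):
    """Pick up to `budget` new links with the largest estimated usefulness.

    candidates : dict mapping (source_page, target_page) -> estimated clickthrough
    budget     : maximum number of links to insert
    """
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    return [pair for pair, score in ranked[:budget]]

# Example with made-up usefulness estimates derived from navigation logs
cands = {("A", "B"): 0.12, ("A", "C"): 0.40, ("D", "B"): 0.05}
print(place_links(cands, budget=2))   # [('A', 'C'), ('A', 'B')]
```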
Assessing factorial invariance of two-way rating designs using three-way methods
Kroonenberg, Pieter M.
2015-01-01
Assessing the factorial invariance of two-way rating designs such as ratings of concepts on several scales by different groups can be carried out with three-way models such as the Parafac and Tucker models. By their definitions these models are double-metric factorially invariant. The differences between these models lie in their handling of the links between the concept and scale spaces. These links may consist of unrestricted linking (Tucker2 model), invariant component covariances but variable variances per group and per component (Parafac model), zero covariances and variances different per group but not per component (Replicated Tucker3 model) and strict invariance (Component analysis on the average matrix). This hierarchy of invariant models, and the procedures by which to evaluate the models against each other, is illustrated in some detail with an international data set from attachment theory. PMID:25620936
Coding for Parallel Links to Maximize the Expected Value of Decodable Messages
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Chang, Christopher S.
2011-01-01
When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from spacecraft under certain conditions.
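A minimal sketch of a linear programming formulation in the spirit of part (2), using SciPy; it splits messages across links to maximize expected decodable value but omits coding across links, so it is a simplification rather than the article's exact formulation (the numbers are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

p = np.array([0.9, 0.6])        # probability each link is working
cap = np.array([4.0, 6.0])      # link capacities
size = np.array([3.0, 5.0])     # message sizes
val = np.array([10.0, 7.0])     # message values

n_msg, n_link = len(size), len(p)
# Decision variable x[m, l]: fraction of message m sent on link l (flattened).
c = -(val[:, None] * p[None, :]).ravel()        # maximize expected decodable value

A_ub, b_ub = [], []
for l in range(n_link):                         # respect each link's capacity
    row = np.zeros((n_msg, n_link)); row[:, l] = size
    A_ub.append(row.ravel()); b_ub.append(cap[l])
for m in range(n_msg):                          # send each message at most once
    row = np.zeros((n_msg, n_link)); row[m, :] = 1.0
    A_ub.append(row.ravel()); b_ub.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=(0, 1))
x = res.x.reshape(n_msg, n_link)                # message-to-link assignment
```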
Infrared dim small target segmentation method based on ALI-PCNN model
NASA Astrophysics Data System (ADS)
Zhao, Shangnan; Song, Yong; Zhao, Yufei; Li, Yun; Li, Xu; Jiang, Yurong; Li, Lin
2017-10-01
In this paper, the Pulse Coupled Neural Network (PCNN) is improved by Adaptive Lateral Inhibition (ALI), and a method for infrared (IR) dim small target segmentation based on the ALI-PCNN model is proposed. Firstly, the feeding input signal is modulated by a lateral inhibition network to suppress the background. Then, the linking input is modulated by ALI, and the linking weight matrix is generated adaptively by calculating the ALI coefficient of each pixel. Finally, the binary image is generated through the nonlinear modulation and the pulse generator in the PCNN. The experimental results show that the segmentation quality, as well as the contrast across region and uniformity across region values, of the proposed method are better than those of the Otsu method, the maximum entropy method, and methods based on conventional PCNN and visual attention, and that the proposed method performs well in extracting IR dim small targets from complex backgrounds.
An optimization model for metabolic pathways.
Planes, F J; Beasley, J E
2009-10-15
Different mathematical methods have emerged in the post-genomic era to determine metabolic pathways. These methods can be divided into stoichiometric methods and path finding methods. In this paper we detail a novel optimization model, based upon integer linear programming, to determine metabolic pathways. Our model links reaction stoichiometry with path finding in a single approach. We test the ability of our model to determine 40 annotated Escherichia coli metabolic pathways. We show that our model is able to determine 36 of these 40 pathways in a computationally effective manner.
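As a toy illustration of linking stoichiometry with path finding in an integer linear program (not the authors' formulation), a sketch using SciPy's MILP solver on a three-metabolite network:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy network: metabolites A, B, C and reactions R1: A->B, R2: B->C,
# R3: A->C, plus an exchange reaction EA supplying A.
S = np.array([[-1,  0, -1,  1],   # A
              [ 1, -1,  0,  0],   # B
              [ 0,  1,  1,  0]])  # C
demand = np.array([0.0, 0.0, 1.0])   # require net production of 1 unit of C
M, n_int = 10.0, 3                   # big-M bound and number of internal reactions

# Variables: fluxes v (4) followed by binary indicators y (3) for R1-R3.
c = np.concatenate([np.zeros(4), np.ones(n_int)])    # minimize reactions used
balance = LinearConstraint(np.hstack([S, np.zeros((3, n_int))]), demand, demand)
link_rows = np.hstack([np.eye(n_int, 4), -M * np.eye(n_int)])
linking = LinearConstraint(link_rows, -np.inf, 0.0)  # v_r <= M * y_r

res = milp(c, constraints=[balance, linking],
           integrality=np.concatenate([np.zeros(4), np.ones(n_int)]),
           bounds=Bounds(0, np.concatenate([np.full(4, M), np.ones(n_int)])))
fluxes, used = res.x[:4], res.x[4:] > 0.5   # expect the direct reaction R3 selected
```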
Meta-path based heterogeneous combat network link prediction
NASA Astrophysics Data System (ADS)
Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin
2017-09-01
The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate with observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all the six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results in the HCN converge or reach a certain maximum iteration number. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.
Riera-Fernández, Pablo; Munteanu, Cristian R; Escobar, Manuel; Prado-Prado, Francisco; Martín-Romalde, Raquel; Pereira, David; Villalba, Karen; Duardo-Sánchez, Aliuska; González-Díaz, Humberto
2012-01-21
Graph and Complex Network theory is expanding its application to different levels of matter organization such as molecular, biological, technological, and social networks. A network is a set of items, usually called nodes, with connections between them, which are called links or edges. There are many different experimental and/or theoretical methods to assign node-node links depending on the type of network we want to construct. Unfortunately, the use of a method for experimental reevaluation of the entire network is very expensive in terms of time and resources; thus the development of cheaper theoretical methods is of major importance. In addition, different methods to link nodes in the same type of network are not totally accurate, so they do not always coincide. In this sense, the development of computational methods useful to evaluate connectivity quality in complex networks (a posteriori of network assembly) is a goal of major interest. In this work, we report for the first time a new method to calculate numerical quality scores S(L(ij)) for network links L(ij) (connectivity) based on the Markov-Shannon Entropy indices of k-th order (θ(k)) for network nodes. The algorithm may be summarized as follows: (i) first, the θ(k)(j) values are calculated for all j-th nodes in a complex network already constructed; (ii) A Linear Discriminant Analysis (LDA) is used to seek a linear equation that discriminates connected or linked (L(ij)=1) pairs of nodes experimentally confirmed from non-linked ones (L(ij)=0); (iii) the new model is validated with external series of pairs of nodes; (iv) the equation obtained is used to re-evaluate the connectivity quality of the network, connecting/disconnecting nodes based on the quality scores calculated with the new connectivity function. This method was used to study different types of large networks. The linear models obtained produced the following results in terms of overall accuracy for network reconstruction: Metabolic networks (72.3%), Parasite-Host networks (93.3%), CoCoMac brain cortex co-activation network (89.6%), NW Spain fasciolosis spreading network (97.2%), Spanish financial law network (89.9%) and World trade network for Intelligent & Active Food Packaging (92.8%). In order to seek these models, we studied an average of 55,388 pairs of nodes in each model and a total of 332,326 pairs of nodes in all models. Finally, this method was used to solve a more complicated problem. A model was developed to score the connectivity quality in the Drug-Target network of US FDA approved drugs. In this last model the θ(k) values were calculated for three types of molecular networks representing different levels of organization: drug molecular graphs (atom-atom bonds), protein residue networks (amino acid interactions), and drug-target network (compound-protein binding). The overall accuracy of this model was 76.3%. This work opens a new door to the computational reevaluation of network connectivity quality (collation) for complex systems in molecular, biomedical, technological, and legal-social sciences as well as in world trade and industry. Copyright © 2011 Elsevier Ltd. All rights reserved.
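A minimal sketch of steps (ii) and (iv), training an LDA classifier on node-pair descriptors and using its decision score as a connectivity-quality function; the entropy-type features and the data below are illustrative stand-ins, not the actual θ(k) indices:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each row describes a node pair (i, j) by illustrative node descriptors,
# e.g. [theta_k(i), theta_k(j), |theta_k(i) - theta_k(j)|] for some order k.
rng = np.random.default_rng(1)
X_linked = rng.normal(1.0, 0.3, size=(100, 3))     # confirmed links (L_ij = 1)
X_unlinked = rng.normal(0.0, 0.3, size=(100, 3))   # non-links (L_ij = 0)
X = np.vstack([X_linked, X_unlinked])
y = np.array([1] * 100 + [0] * 100)

lda = LinearDiscriminantAnalysis().fit(X, y)
quality_scores = lda.decision_function(X)   # S(L_ij): signed connectivity score
relink = lda.predict(X)                     # re-evaluated connectivity (0/1)
```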
An intelligent anti-jamming network system of data link
NASA Astrophysics Data System (ADS)
Fan, Xiangrui; Lin, Jingyong; Liu, Jiarun; Zhou, Chunmei
2017-10-01
The data link is a key information system for the coordination of weapons, and single physical-layer anti-jamming technology can no longer meet its requirements. For highly dynamic precision-guided weapon nodes such as missiles, the anti-jamming design of the data link system must be more targeted and effective: the best anti-jamming communication mode should be selected intelligently and in real time in the combat environment to guarantee continuity of communication. We discuss an intelligent anti-jamming networking technology for data links based on interference awareness, put forward a model of an intelligent anti-jamming system, and introduce the cognitive node protocol stack model and an intelligent anti-jamming method, in order to improve the intelligent anti-jamming capability of the data link.
Electronic Equalization of Multikilometer 10-Gb/s Multimode Fiber Links: Mode-Coupling Effects
NASA Astrophysics Data System (ADS)
Balemarthy, Kasyapa; Polley, Arup; Ralph, Stephen E.
2006-12-01
This paper investigates the ability of electronic equalization to compensate for modal dispersion in the presence of mode coupling in multimode fibers (MMFs) at 10 Gb/s. Using a new time-domain experimental method, mode coupling is quantified in MMF. These results, together with a comprehensive link model, make it possible to determine the impact of mode coupling on the performance of MMF. The equalizer performance on links from 300 m to 8 km is quantified with and without modal coupling. It is shown that the mode-coupling effects are influenced by the specific index profile and increase the equalizer penalty by as much as 1 dBo for 1-km links and 2.3 dBo for 2-km links when using a standard model of fiber profiles at 1310 nm.
Variable speed limit strategies analysis with link transmission model on urban expressway
NASA Astrophysics Data System (ADS)
Li, Shubin; Cao, Danni
2018-02-01
The variable speed limit (VSL) is a kind of active traffic management method. Most VSL strategies are used for expressway traffic flow control in order to ensure traffic safety. The urban expressway system, however, is the main artery of a city and carries most of the traffic load, and it has traffic characteristics similar to those of intercity expressways. In this paper, an improved link transmission model (LTM) combined with VSL strategies is proposed for urban expressway networks. The model can simulate the movement of vehicles and shock waves, and it strikes a good balance between computational cost and accuracy. Furthermore, the optimal VSL strategy can be determined based on the simulation method, providing management strategies for traffic managers. Finally, a simple example is given to illustrate the model and method. The indexes used in the simulation are the average density, the average speed and the average flow on the traffic network. The simulation results show that the proposed model and method are feasible, and that the VSL strategy can effectively alleviate traffic congestion in some cases and greatly improve the efficiency of the transportation system.
Infrared microspectroscopic determination of collagen cross-links in articular cartilage
NASA Astrophysics Data System (ADS)
Rieppo, Lassi; Kokkonen, Harri T.; Kulmala, Katariina A. M.; Kovanen, Vuokko; Lammi, Mikko J.; Töyräs, Juha; Saarakkala, Simo
2017-03-01
Collagen forms an organized network in articular cartilage to give tensile stiffness to the tissue. Due to its long half-life, collagen is susceptible to cross-links caused by advanced glycation end-products. The current standard method for determination of cross-link concentrations in tissues is the destructive high-performance liquid chromatography (HPLC). The aim of this study was to analyze the cross-link concentrations nondestructively from standard unstained histological articular cartilage sections by using Fourier transform infrared (FTIR) microspectroscopy. Half of the bovine articular cartilage samples (n=27) were treated with threose to increase the collagen cross-linking while the other half (n=27) served as a control group. Partial least squares (PLS) regression with variable selection algorithms was used to predict the cross-link concentrations from the measured average FTIR spectra of the samples, and HPLC was used as the reference method for cross-link concentrations. The correlation coefficients between the PLS regression models and the biochemical reference values were r=0.84 (p<0.001), r=0.87 (p<0.001) and r=0.92 (p<0.001) for hydroxylysyl pyridinoline (HP), lysyl pyridinoline (LP), and pentosidine (Pent) cross-links, respectively. The study demonstrated that FTIR microspectroscopy is a feasible method for investigating cross-link concentrations in articular cartilage.
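A minimal sketch of the regression step, assuming a matrix of average FTIR spectra and HPLC reference concentrations; the data, component count, and cross-validation scheme below are illustrative, and the variable-selection algorithms of the study are omitted:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

# X: average FTIR spectra (samples x wavenumbers); y: HPLC reference
# cross-link concentrations (e.g. pentosidine). Values below are stand-ins.
rng = np.random.default_rng(0)
X = rng.standard_normal((54, 400))
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(54)

pls = PLSRegression(n_components=5)                  # component count is a guess
y_pred = cross_val_predict(pls, X, y, cv=9).ravel()  # cross-validated predictions
r, p = pearsonr(y, y_pred)                           # compare with HPLC reference
```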
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarocki, John Charles; Zage, David John; Fisher, Andrew N.
LinkShop is a software tool for applying the method of Linkography to the analysis of time-sequence data. LinkShop provides command line, web, and application programming interfaces (API) for input and processing of time-sequence data, abstraction models, and ontologies. The software creates graph representations of the abstraction model, ontology, and derived linkograph. Finally, the tool allows the user to perform statistical measurements of the linkograph and refine the ontology through direct manipulation of the linkograph.
Effect of Link Flexibility on tip position of a single link robotic arm
NASA Astrophysics Data System (ADS)
Madhusudan Raju, E.; Siva Rama Krishna, L.; Mouli, Y. Sharath Chandra; Nageswara Rao, V.
2015-12-01
Flexible robots are widely used in space applications due to their quick response, lower energy consumption, lower overall mass and operation at high speed compared to conventional industrial rigid-link robots. These robots are inherently flexible, so their kinematics cannot be solved with rigid-body assumptions. The flexibility in links and joints affects the end-point positioning accuracy of the robot. It is important to model the link kinematics precisely, which in turn simplifies modelling the dynamics of flexible robots. The main objective of this paper is to evaluate the effect of link flexibility on the tip position of a single-link robotic arm for a given motion. The joint is assumed to be rigid and only link flexibility is considered. The kinematics of the flexible-link problem is evaluated by the Assumed Modes Method (AMM) using MATLAB programming. To evaluate the effect of link flexibility of the robotic arm (with and without payload), the normalized tip deviation of the flexible link is found with respect to a rigid link. Finally, the limiting inertia for the payload mass is found if the allowable tip deviation is 5%.
Dynamic analysis and control of lightweight manipulators with flexible parallel link mechanisms
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1991-01-01
The flexible parallel link mechanism is designed for increased rigidity to resist buckling when it carries a heavy payload. Compared to a one-link flexible manipulator, a two-link flexible manipulator, especially one with a flexible parallel mechanism, has more complicated dynamic and control characteristics. The objective of this research is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model. The step responses of the analytical model and the TREETOPS model match each other well. The nonlinear dynamics is studied using a sinusoidal excitation. The actuator dynamic effect on a flexible robot was investigated, and the effects are explained theoretically and experimentally using root loci and Bode plots. As a performance baseline for the advanced control scheme, a simple decoupled feedback scheme is applied.
Flexible method for conjugation of phenolic lignin model compounds to carrier proteins
Gao, Ruili; Lu, Fachuang; Zhu, Yimin; ...
2016-10-03
Linking lignin model compounds to carrier proteins is required either to raise antibodies to them or to structurally screen antibodies raised against lignins or models. This paper describes a flexible method to link phenolic compounds of interest to cationic bovine serum albumin (cBSA) without interfering with their important structural features. With the guaiacylglycerol-β-guaiacyl ether dimer, for example, the linking was accomplished in 89% yield with the number of dimers per carrier protein being as high as 50; NMR experiments on a 15N- and 13C-labeled conjugation product indicated that 13 dimers were added to the native lysine residues and the remainder (~37) to the amine moieties on the ethylenediamine linkers added to BSA; ~32% of the available primary amine groups on cBSA were therefore conjugated to the hapten. As a result, this loading is suitable for attempting to raise new antibodies to plant lignins and for screening.
Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian
2013-01-01
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefits analysis to decide whether to chose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation with neurobiological evidence on the hippocampus – ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation. PMID:23459512
NASA Astrophysics Data System (ADS)
Chu, Jiangtao; Yang, Yue
2018-06-01
Bayesian networks (BN) have many advantages over other methods in ecological modelling and have become an increasingly popular modelling tool. However, BN are flawed in regard to building models based on inadequate existing knowledge. To overcome this limitation, we propose a new method that links BN with structural equation modelling (SEM). In this method, SEM is used to improve the model structure for BN. This method was used to simulate coastal phytoplankton dynamics in Bohai Bay. We demonstrate that this hybrid approach minimizes the need for expert elicitation, generates more reasonable structures for BN models and increases the BN model's accuracy and reliability. These results suggest that the inclusion of SEM for testing and verifying the theoretical structure during the initial construction stage improves the effectiveness of BN models, especially for complex eco-environment systems. The results also demonstrate that in Bohai Bay, while phytoplankton biomass has the greatest influence on phytoplankton dynamics, the impact of nutrients on phytoplankton dynamics is larger than the influence of the physical environment in summer. Furthermore, despite the Redfield ratio indicating that phosphorus should be the primary nutrient limiting factor, our results indicate that silicate plays the most important role in regulating phytoplankton dynamics in Bohai Bay.
Mining Missing Hyperlinks from Human Navigation Traces: A Case Study of Wikipedia
West, Robert; Paranjape, Ashwin; Leskovec, Jure
2015-01-01
Hyperlinks are an essential feature of the World Wide Web. They are especially important for online encyclopedias such as Wikipedia: an article can often only be understood in the context of related articles, and hyperlinks make it easy to explore this context. But important links are often missing, and several methods have been proposed to alleviate this problem by learning a linking model based on the structure of the existing links. Here we propose a novel approach to identifying missing links in Wikipedia. We build on the fact that the ultimate purpose of Wikipedia links is to aid navigation. Rather than merely suggesting new links that are in tune with the structure of existing links, our method finds missing links that would immediately enhance Wikipedia's navigability. We leverage data sets of navigation paths collected through a Wikipedia-based human-computation game in which users must find a short path from a start to a target article by only clicking links encountered along the way. We harness human navigational traces to identify a set of candidates for missing links and then rank these candidates. Experiments show that our procedure identifies missing links of high quality. PMID:26634229
Deployment of a multi-link flexible structure
NASA Astrophysics Data System (ADS)
Na, Kyung-Su; Kim, Ji-Hwan
2006-06-01
Deployment of a multi-link beam structure undergoing locking is analyzed in the Timoshenko beam theory. In the modeling of the system, dynamic forces are assumed to be torques and restoring forces due to the torsion spring at each joint. Hamilton's principle is used to determine the equations of motion and the finite element method is adopted to analyze the system. Newmark time integration and Newton-Raphson iteration methods are used to solve for the non-linear equations of motion at each time step. The locking at the joints of the multi-link flexible structure is analyzed by the momentum balance method. Numerical results are compared with the previous experimental data. The angles and angular velocities of each joint, tip displacement, and velocity of each link are investigated to study the motions of the links at each time step. To analyze the effect of thickness on the motion of the link, the angle and the tip displacement of each link are compared according to the various slenderness ratios. Additionally, in order to investigate the effect of shear, the tip displacements of a Timoshenko beam are compared with those of an Euler-Bernoulli beam.
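For reference, a sketch of a single Newmark-beta step for a linear system M a + C v + K u = f; the study's nonlinear equations of motion additionally require a Newton-Raphson loop within each step, which is omitted here:

```python
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta time step for the linear system M a + C v + K u = f,
    using the average-acceleration parameters by default."""
    # Effective stiffness and effective load at the new time level
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    f_eff = (f_next
             + M @ (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_next = np.linalg.solve(K_eff, f_eff)
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next
```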
D. M. Jimenez; B. W. Butler; J. Reardon
2003-01-01
Current methods for predicting fire-induced plant mortality in shrubs and trees are largely empirical. These methods are not readily linked to duff burning, soil heating, and surface fire behavior models. In response to the need for a physics-based model of this process, a detailed model for predicting the temperature distribution through a tree stem as a function of...
NASA Astrophysics Data System (ADS)
van der Sluijs, Jeroen P.; Arjan Wardekker, J.
2015-04-01
In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of respectively subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex-post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so called ‘mineralization reduction factor’ (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight in model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to well informed recommendations for model improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Bapanapalli, Satish K.; Smith, Mark T.
2008-09-01
The objective of our work is to enable the optimum design of lightweight automotive structural components using injection-molded long fiber thermoplastics (LFTs). To this end, an integrated approach that links process modeling to structural analysis with experimental microstructural characterization and validation is developed. First, process models for LFTs are developed and implemented into processing codes (e.g. ORIENT, Moldflow) to predict the microstructure of the as-formed composite (i.e. fiber length and orientation distributions). In parallel, characterization and testing methods are developed to obtain necessary microstructural data to validate process modeling predictions. Second, the predicted LFT composite microstructure is imported into a structural finite element analysis by ABAQUS to determine the response of the as-formed composite to given boundary conditions. At this stage, constitutive models accounting for the composite microstructure are developed to predict various types of behaviors (i.e. thermoelastic, viscoelastic, elastic-plastic, damage, fatigue, and impact) of LFTs. Experimental methods are also developed to determine material parameters and to validate constitutive models. Such a process-linked-structural modeling approach allows an LFT composite structure to be designed with confidence through numerical simulations. Some recent results of our collaborative research will be illustrated to show the usefulness and applications of this integrated approach.
ERIC Educational Resources Information Center
Romeo, Lynn
2008-01-01
This article presents a comprehensive model of daily, classroom informal writing assessment that is constantly linked to instruction and the characteristics of proficient writers. Methods for promoting teacher, student, and parent collaboration and their roles in dialoguing, conferencing, and reflection are discussed. Strategies for including…
Borrebaeck, C; Börjeson, J; Mattiasson, B
1978-06-15
Thermometric enzyme-linked immunosorbent assay (TELISA) is described. After the procedure of optimization, human serum albumin was assayed using anti-human serum albumin bound to Sepharose CL 4-B in the enzyme thermistor unit and catalase as label on the free antigen. The model system was used for assays down to 10^(-13) M and the preparation of immobilized antibodies was used repeatedly up to 100 times. Comparative studies of the TELISA technique with bromocresol green, immunoturbidimetric and rocket immunoelectrophoretic methods were carried out and showed that TELISA could be used as an alternative method.
Hybrid modeling method for a DEP based particle manipulation.
Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad
2013-01-30
In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward developing a more complex platform covering several types of manipulation such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for another iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are coherent with experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsuchiya, Hikaru; Tanaka, Keiji, E-mail: tanaka-kj@igakuken.or.jp; Saeki, Yasushi, E-mail: saeki-ys@igakuken.or.jp
2013-06-28
Highlights: •The parallel reaction monitoring method was applied to ubiquitin quantification. •The ubiquitin PRM method is highly sensitive even in biological samples. •Using the method, we revealed that Ufd4 assembles the K29-linked ubiquitin chain. -- Abstract: Ubiquitylation is an essential posttranslational protein modification that is implicated in a diverse array of cellular functions. Although cells contain eight structurally distinct types of polyubiquitin chains, detailed function of several chain types including K29-linked chains has remained largely unclear. Current mass spectrometry (MS)-based quantification methods are highly inefficient for low abundant atypical chains, such as K29- and M1-linked chains, in complex mixtures that typically contain highly abundant proteins. In this study, we applied parallel reaction monitoring (PRM), a quantitative, high-resolution MS method, to quantify ubiquitin chains. The ubiquitin PRM method allows us to quantify 100 attomole amounts of all possible ubiquitin chains in cell extracts. Furthermore, we quantified ubiquitylation levels of ubiquitin-proline-β-galactosidase (Ub-P-βgal), a historically known model substrate of the ubiquitin fusion degradation (UFD) pathway. In wild-type cells, Ub-P-βgal is modified with ubiquitin chains consisting of 21% K29- and 78% K48-linked chains. In contrast, K29-linked chains are not detected in UFD4 knockout cells, suggesting that Ufd4 assembles the K29-linked ubiquitin chain(s) on Ub-P-βgal in vivo. Thus, the ubiquitin PRM is a novel, useful, quantitative method for analyzing the highly complicated ubiquitin system.
Engineering calculations for the Delta S method of solving the orbital allotment problem
NASA Technical Reports Server (NTRS)
Kohnhorst, P. A.; Levis, C. A.; Walton, E. K.
1987-01-01
The method of calculating single-entry separation requirements for pairs of satellites is extended to include the interference on the up link as well as on the down link. Several heuristic models for analyzing the effects of shaped-beam antenna designs on required satellite separations are introduced and demonstrated with gain contour plots. The calculation of aggregate interference is extended to include the effects of up-link interference. The relationship between the single-entry C/I requirements, used in determining satellite separation constraints for various optimization procedures, and the aggregate C/I values of the resulting solutions is discussed.
Federating Cyber and Physical Models for Event-Driven Situational Awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stephan, Eric G.; Pawlowski, Ronald A.; Sridhar, Siddharth
The purpose of this paper is to describe a novel method to improve the interoperability of electric power system monitoring and control software applications. This method employs the concept of federation, which is defined as the use of existing models that represent aspects of a system in specific domains (such as the physical and cyber security domains) and the building of interfaces to link all of the domain models.
Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.
2007-01-01
A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.
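A minimal sketch of the bilinear-interpolation option for the child model's head-dependent boundary, assuming a unit-spaced parent grid; the Darcy-weighted variant would additionally weight the interpolation by hydraulic conductivity:

```python
import numpy as np

def bilinear_head(h, x, y):
    """Interpolate a parent-grid head field h (2D array on a unit-spaced grid)
    at a child-model ghost-node location (x, y) given in parent grid coordinates."""
    i, j = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * h[i, j] + fx * (1 - fy) * h[i + 1, j]
            + (1 - fx) * fy * h[i, j + 1] + fx * fy * h[i + 1, j + 1])

# Example: heads on a 3x3 parent grid, ghost node midway between four cells
h = np.array([[10.0, 10.2, 10.4], [10.1, 10.3, 10.5], [10.2, 10.4, 10.6]])
print(bilinear_head(h, 0.5, 0.5))   # 10.15
```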
ERIC Educational Resources Information Center
Redmond, Sean M.
2016-01-01
Purpose: The empirical record regarding the expected co-occurrence of attention-deficit/hyperactivity disorder (ADHD) and specific language impairment is confusing and contradictory. A research plan is presented that has the potential to untangle links between these 2 common neurodevelopmental disorders. Method: Data from completed and ongoing…
Prodinger, Birgit; Tennant, Alan; Stucki, Gerold; Cieza, Alarcos; Üstün, Tevfik Bedirhan
2016-10-01
Our aim was to specify the requirements of an architecture to serve as the foundation for standardized reporting of health information and to provide an exemplary application of this architecture. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) served as the conceptual framework. Methods to establish content comparability were the ICF Linking Rules. The Rasch measurement model, as a special case of additive conjoint measurement, which satisfies the required criteria for fundamental measurement, allowed for the development of a common metric foundation for measurement unit conversion. Secondary analysis of data from the North Yorkshire Survey was used to illustrate these methods. Patients completed three instruments and the items were linked to the ICF. The Rasch measurement model was applied, first to each scale, and then to items across scales which were linked to a common domain. Based on the linking of items to the ICF, the majority of items were grouped into two domains, Mobility and Self-care. Analysis of the individual scales and of items linked to a common domain across scales satisfied the requirements of the Rasch measurement model. The measurement unit conversion between items from the three instruments linked to the Mobility and Self-care domains, respectively, was demonstrated. The realization of an ICF-based architecture for information on patients' functioning enables harmonization of health information while allowing clinicians and researchers to continue using their existing instruments. This architecture will facilitate access to comprehensive and consistently reported health information to serve as the foundation for informed decision-making. © The Author(s) 2016.
Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System
NASA Astrophysics Data System (ADS)
Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao
2017-12-01
A fast channel modeling method is studied in this paper to address the computation of reflection channel gain for multiple-input multiple-output visible light communication (MIMO-VLC) systems. To limit the computational complexity, which grows with the number of reflections, no more than three reflections are taken into consideration in VLC. We regard a higher-order reflection link as a composition of multiple line-of-sight links and introduce a reflection residual component to characterize higher-order reflections (more than two reflections). Computer simulations are performed for the point-to-point channel impulse response, the received optical power, and the received signal-to-noise ratio. Theoretical analysis and simulation results show that the proposed method effectively reduces the computational complexity of higher-order reflections in channel modeling.
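For context, the line-of-sight term that such reflection models build on is the standard Lambertian channel gain. The sketch below computes that LOS gain only; it is not the authors' reflection-residual method, and the detector area, half-power angle, field of view, and optical gains are assumed values.

```python
import numpy as np

def los_gain(d, phi, psi, area=1e-4, half_power_angle=np.radians(60),
             fov=np.radians(70), ts=1.0, g=1.0):
    """Line-of-sight channel gain of a Lambertian LED/photodiode pair.

    d: transmitter-receiver distance (m); phi: irradiance angle at the LED;
    psi: incidence angle at the photodiode; area: detector area (m^2);
    ts, g: optical filter and concentrator gains.
    """
    if psi > fov:
        return 0.0                                        # outside the field of view
    m = -np.log(2) / np.log(np.cos(half_power_angle))     # Lambertian order
    return ((m + 1) * area / (2 * np.pi * d ** 2)
            * np.cos(phi) ** m * ts * g * np.cos(psi))

# Example: receiver 2 m from the LED, both angles 30 degrees
print(los_gain(2.0, np.radians(30), np.radians(30)))
```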
Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.
2015-01-01
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
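The published Hydrograph Matching Algorithm is not reproduced here, but the sketch below illustrates the same idea with a generic dynamic-programming (DTW-style) alignment: each observed ordinate is linked to a simulated ordinate, and timing and amplitude errors are read off the resulting "rays". The toy hydrographs are made up.

```python
import numpy as np

def match_hydrographs(obs, sim):
    """Dynamic-programming alignment of two hydrographs (DTW-style).

    Returns a list of (i, j) index pairs linking each observed ordinate to a
    simulated ordinate, analogous to the visual links drawn by eye.
    """
    n, m = len(obs), len(sim)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(obs[i - 1] - sim[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Trace back the cheapest alignment path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

obs = np.array([1, 2, 6, 9, 5, 3, 2, 1], dtype=float)
sim = np.array([1, 1, 3, 7, 9, 6, 3, 1], dtype=float)
pairs = match_hydrographs(obs, sim)
timing_error = np.mean([j - i for i, j in pairs])        # mean lag, in time steps
amplitude_error = np.mean([abs(obs[i] - sim[j]) for i, j in pairs])
print(pairs, timing_error, amplitude_error)
```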
Lightweight Biometric Sensing for Walker Classification Using Narrowband RF Links.
Liu, Tong; Liang, Zhuo-Qian
2017-12-05
This article proposes a lightweight biometric sensing system using ubiquitous narrowband radio frequency (RF) links for path-dependent walker classification. The fluctuating received signal strength (RSS) sequence generated by human motion is used for feature representation. To capture the most discriminative characteristics of individuals, a three-layer RF sensing network is organized for building multiple sampling links at the most common heights of upper limbs, thighs, and lower legs. The optimal parameters of the sensing configuration, such as the height of link location and the number of fused links, are investigated to improve sensory data distinctions among subjects, and the experimental results suggest that synergistic sensing using multiple links can contribute to better performance. This is a new consideration in the use of RF links for building biometric sensing systems. In addition, two types of classification methods involving vector quantization (VQ) and hidden Markov models (HMMs) are developed and compared for closed-set walker recognition and verification. Experimental studies in indoor line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios are conducted to validate the proposed method.
Recent developments in computer modeling add ecological realism to landscape genetics
Background/Question/Methods: A factor limiting the rate of progress in landscape genetics has been the shortage of spatial models capable of linking life history attributes such as dispersal behavior to complex dynamic landscape features. The recent development of new models...
Mixed Transportation Network Design under a Sustainable Development Perspective
Qin, Jin; Ni, Ling-lin; Shi, Feng
2013-01-01
A mixed transportation network design problem considering sustainable development was studied in this paper. Based on the discretization of continuous link-grade decision variables, a bilevel programming model was proposed to describe the problem, in which sustainability factors, including vehicle exhaust emissions, land-use scale, link load, and financial budget, are considered. The objective of the model is to minimize the total amount of resources exploited under the premise of meeting all the construction goals. A heuristic algorithm, which combined the simulated annealing and path-based gradient projection algorithm, was developed to solve the model. The numerical example shows that the transportation network optimized with the method above not only significantly alleviates the congestion on the link, but also reduces vehicle exhaust emissions within the network by up to 41.56%. PMID:23476142
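A minimal sketch of the upper-level search is given below: simulated annealing over discrete link-grade decisions with a placeholder cost function. The real method evaluates each candidate design by solving the lower-level traffic assignment with a path-based gradient projection algorithm, which is not reproduced here; all numbers are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

n_links, max_grade = 8, 3

def total_cost(grades):
    """Hypothetical stand-in for the bilevel objective: resources used plus
    penalties for congestion, emissions and budget overrun. In the paper the
    lower level is a traffic assignment solved by gradient projection."""
    capacity = 1.0 + grades                  # capacity grows with link grade
    flow = 2.0                               # fixed demand per link (toy)
    congestion = np.sum((flow / capacity) ** 2)
    emissions = np.sum(flow / capacity)
    budget_penalty = 10.0 * max(0, grades.sum() - 12)
    return grades.sum() + congestion + emissions + budget_penalty

grades = rng.integers(0, max_grade + 1, n_links)
best, best_cost, temp = grades.copy(), total_cost(grades), 5.0
for _ in range(2000):
    cand = grades.copy()
    cand[rng.integers(n_links)] = rng.integers(0, max_grade + 1)   # perturb one link
    delta = total_cost(cand) - total_cost(grades)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        grades = cand
    if total_cost(grades) < best_cost:
        best, best_cost = grades.copy(), total_cost(grades)
    temp *= 0.995                            # geometric cooling schedule
print(best, best_cost)
```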
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo
2018-03-01
The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies consider failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of every one of its links should be considered. We design a link-importance-incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared-risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
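The LIFPMS itself is not reproduced here, but the sketch below illustrates the underlying idea under the independent failure model: each link of a light tree carries a failure probability and an importance weight (for example, the fraction of multicast destinations lost if that link fails), and candidate trees can be ranked by the resulting expected loss. All numbers are assumed.

```python
import numpy as np

# Hypothetical light tree: each link has an independent failure probability
# and an importance weight, here the fraction of multicast destinations that
# would be disconnected if that link failed.
fail_prob  = np.array([0.010, 0.020, 0.005, 0.010])
importance = np.array([1.00, 0.50, 0.25, 0.25])

# Unweighted tree failure probability: probability that at least one link fails.
p_any_failure = 1.0 - np.prod(1.0 - fail_prob)

# Importance-weighted measure: expected fraction of destinations lost, a simple
# proxy for ranking candidate light trees against each other.
expected_destination_loss = np.sum(importance * fail_prob)
print(p_any_failure, expected_destination_loss)
```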
Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks.
Murakami, Masaya; Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki
2018-04-08
Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes.
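A minimal sketch of the EDR-based link assignment is shown below: candidate inter-modular links are drawn with probability proportional to exp(-lambda * d), where d is the distance between module locations. The module coordinates, decay constant, and link budget are assumptions, and the assortativity-controlled choice of endpoint nodes within each module is not included.

```python
import numpy as np
rng = np.random.default_rng(1)

# Hypothetical module (cluster-head) coordinates of a virtual WSN
modules = rng.uniform(0, 100, size=(12, 2))
lam = 0.05                                   # assumed EDR decay constant

# Connection probability of an inter-modular link decays exponentially with distance
pairs, weights = [], []
for i in range(len(modules)):
    for j in range(i + 1, len(modules)):
        d = np.linalg.norm(modules[i] - modules[j])
        pairs.append((i, j))
        weights.append(np.exp(-lam * d))
weights = np.array(weights) / np.sum(weights)

# Draw a fixed budget of inter-modular links according to the EDR weights
n_links = 15
chosen = rng.choice(len(pairs), size=n_links, replace=False, p=weights)
edr_links = [pairs[k] for k in chosen]
print(edr_links)
```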
Yamaguchi, Hideto; Hirakura, Yutaka; Shirai, Hiroki; Mimura, Hisashi; Toyo'oka, Toshimasa
2011-06-01
The need for a simple and high-throughput method for identifying the tertiary structure of protein pharmaceuticals has increased. In this study, a simple method for mapping the protein fold is proposed for use as a complementary quality test. This method is based on cross-linking a protein using bis(sulfosuccinimidyl)suberate (BS3), followed by peptide mapping by LC-MS. Consensus interferon (CIFN) was used as the model protein. The tryptic map obtained via liquid chromatography tandem mass spectrometry (LC-MS/MS) and the mass mapping obtained via matrix-assisted laser desorption/ionization time-of-flight mass spectrometry were used to identify cross-linked peptides. While LC-MS/MS analyses found that BS3 formed cross-links in the loop region of the protein, which was regarded as the biologically active site, sodium dodecyl-sulfate polyacrylamide gel electrophoresis demonstrated that cross-linking occurred within a protein molecule, but not between protein molecules. The occurrence of cross-links at the active site depends greatly on the conformation of the protein, which is determined by the denaturing conditions. Quantitative evaluation of the tertiary structure of CIFN was thus possible by monitoring the amounts of cross-linked peptides generated. Assuming that background information is available at the development stage, this method may be applicable to process development as a complementary test for quality control. Copyright © 2011 Elsevier B.V. All rights reserved.
Historical droughts in Mediterranean regions during the last 500 years: a data/model approach
NASA Astrophysics Data System (ADS)
Brewer, S.; Alleaume, S.; Guiot, J.; Nicault, A.
2007-06-01
We present here a new method for comparing the output of General Circulation Models (GCMs) with proxy-based reconstructions, using time series of reconstructed and simulated climate parameters. The method uses k-means clustering to allow comparison between different periods that have similar spatial patterns, and a fuzzy logic-based distance measure in order to take reconstruction errors into account. The method has been used to test two coupled ocean-atmosphere GCMs over the Mediterranean region for the last 500 years, using an index of drought stress, the Palmer Drought Severity Index. The results showed that, whilst no model exactly simulated the reconstructed changes, all simulations were an improvement over using the mean climate, and a good match was found after 1650 with a model run that took into account changes in volcanic forcing, solar irradiance, and greenhouse gases. A more detailed investigation of the output of this model showed the existence of a set of atmospheric circulation patterns linked to the patterns of drought stress: 1) a blocking pattern over northern Europe linked to dry conditions in the south prior to the Little Ice Age (LIA) and during the 20th century; 2) a NAO-positive like pattern with increased westerlies during the LIA; 3) a NAO-negative like period shown in the model prior to the LIA, but that occurs most frequently in the data during the LIA. The results of the comparison show the improvement in simulated climate as various forcings are included and help to understand the atmospheric changes that are linked to the observed reconstructed climate changes.
Michael A. Larson; Frank R., III Thompson; Joshua J. Millspaugh; William D. Dijak; Stephen R. Shifley
2004-01-01
Methods for habitat modeling based on landscape simulations and population viability modeling based on habitat quality are well developed, but no published study of which we are aware has effectively joined them in a single, comprehensive analysis. We demonstrate the application of a population viability model for ovenbirds (Seiurus aurocapillus)...
A Multi-Faceted Analysis of a New Therapeutic Model of Linking Appraisals to Affective Experiences.
ERIC Educational Resources Information Center
McCarthy, Christopher; And Others
I. Roseman, M. Spindel, and P. Jose (1990) had previously demonstrated that specific appraisals of events led to discrete emotional responses, but this model has not been widely tested by other research teams using alternative research methods. The present study utilized four qualitative research methods, taught by Patti Lather at the 1994…
SEX-DETector: A Probabilistic Approach to Study Sex Chromosomes in Non-Model Organisms
Muyle, Aline; Käfer, Jos; Zemp, Niklaus; Mousset, Sylvain; Picard, Franck; Marais, Gabriel AB
2016-01-01
We propose a probabilistic framework to infer autosomal and sex-linked genes from RNA-seq data of a cross for any sex chromosome type (XY, ZW, and UV). Sex chromosomes (especially the non-recombining and repeat-dense Y, W, U, and V) are notoriously difficult to sequence. Strategies have been developed to obtain partially assembled sex chromosome sequences. Most of them remain difficult to apply to numerous non-model organisms, either because they require a reference genome, or because they are designed for evolutionarily old systems. Sequencing a cross (parents and progeny) by RNA-seq to study the segregation of alleles and infer sex-linked genes is a cost-efficient strategy, which also provides expression level estimates. However, the lack of a proper statistical framework has limited a broader application of this approach. Tests on empirical Silene data show that our method identifies 20–35% more sex-linked genes than existing pipelines, while making reliable inferences for downstream analyses. Approximately 12 individuals are needed for optimal results based on simulations. For species with an unknown sex-determination system, the method can assess the presence and type (XY vs. ZW) of sex chromosomes through a model comparison strategy. The method is particularly well optimized for sex chromosomes of young or intermediate age, which are expected in thousands of yet unstudied lineages. Any organisms, including non-model ones for which nothing is known a priori, that can be bred in the lab, are suitable for our method. SEX-DETector and its implementation in a Galaxy workflow are made freely available. PMID:27492231
Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav
2016-08-01
The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers to sharing CDS functionality remain as a consequence of the limited expressiveness of service interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web service technologies. Our objective is to facilitate the publication, discovery, and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine-interpretable ontologies that powers their interoperability and reuse. These ontologies provided unambiguous descriptions of CDS service properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine-interpretable models were performed by reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows shared Linked Knowledge Bases to be built that provide machine-interpretable semantics for CDS service descriptions, alleviating the challenges of interoperability and reuse. Linked Services allow for building 'digital libraries' of distributed CDS services that can be hosted and maintained in different organizations. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance or dimensional parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters are changed. The method can also reduce the time needed for design and engineering preproduction, in particular for developing control programs for CNC equipment and control and measuring machines, and it automates the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)
We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...
Use and abuse of mixing models (MixSIAR)
Background/Question/Methods: Characterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...
NASA Technical Reports Server (NTRS)
Elrod, B. D.; Jacobsen, A.; Cook, R. A.; Singh, R. N. P.
1983-01-01
One-way range and Doppler methods for providing user orbit and time determination are examined. Three alternatives are discussed: forward link beacon tracking, with on-board processing of independent navigation signals broadcast continuously by TDAS spacecraft; forward link scheduled tracking, with on-board processing of navigation data received during scheduled TDAS forward link service intervals; and return link scheduled tracking, with ground-based processing of user-generated navigation data during scheduled TDAS return link service intervals. A system-level definition and requirements assessment for each alternative, an evaluation of potential navigation performance, and a comparison with TDAS mission model requirements are included. TDAS satellite tracking is also addressed for two alternatives: BRTS and VLBI tracking.
Yang, Liang; Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun
2017-01-01
Due to the demand for performance improvement and the existence of prior information, semi-supervised community detection with pairwise constraints has become a hot topic. Most existing methods successfully encode must-link constraints but neglect the opposite ones, i.e., the cannot-link constraints, which can force the exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and in effectively encoding pairwise constraints. Towards these goals, we define an integral generative process that jointly considers the network topology, must-link and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) Model to address the diverse degrees of confidence present in the network topology and the pairwise constraints, and we formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of the proposed MMGG but, most importantly, also reveal the roles of pairwise constraints: although must-link is more important than cannot-link when only one of them is available, both are equally important when both are available. To the best of our knowledge, this is the first work to discover and explore the importance of cannot-link constraints in semi-supervised community detection.
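The MMGG model and its weighted nonnegative matrix factorization are not reproduced here; the sketch below only illustrates one simple way pairwise priors can be folded into an NMF-based community assignment, by boosting must-link weights and zeroing cannot-link weights in the adjacency matrix before factorizing with scikit-learn. The toy network and constraint pairs are made up.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy adjacency matrix of an undirected network (symmetric, nonnegative)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

must_link   = [(0, 1)]          # prior: nodes 0 and 1 share a community
cannot_link = [(2, 3)]          # prior: nodes 2 and 3 must be separated

W = A.copy()
for i, j in must_link:          # strengthen must-link pairs
    W[i, j] = W[j, i] = W[i, j] + 2.0
for i, j in cannot_link:        # suppress cannot-link pairs
    W[i, j] = W[j, i] = 0.0

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
H = model.fit_transform(W)      # node-by-community membership strengths
labels = H.argmax(axis=1)
print(labels)
```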
Camboulives, A-R; Velluet, M-T; Poulenard, S; Saint-Antonin, L; Michau, V
2018-02-01
The performance of an optical communication link between the ground and a geostationary satellite can be impaired by scintillation, beam wandering, and beam spreading caused by propagation through atmospheric turbulence. These effects on the link performance can be mitigated by tracking and by error correction codes coupled with interleaving. Precise numerical tools capable of describing the irradiance fluctuations statistically and of creating irradiance time series are needed to characterize the benefits of these techniques and to optimize them. Wave-optics propagation methods have proven capable of modeling the effects of atmospheric turbulence on a beam, but they are known to be computationally intensive. We present an analytical-numerical model that reproduces the probability density functions of irradiance fluctuations, as well as irradiance time series, with substantial savings in time and computational resources.
Braddick, Darren; Sandhu, Sandeep; Roper, David I; Chappell, Michael J; Bugg, Timothy D H
2014-08-01
The polymerization of lipid intermediate II by the transglycosylase activity of penicillin-binding proteins (PBPs) represents an important target for antibacterial action, but limited methods are available for quantitative assay of this reaction, or screening potential inhibitors. A new labelling method for lipid II polymerization products using Sanger's reagent (fluoro-2,4-dinitrobenzene), followed by gel permeation HPLC analysis, has permitted the observation of intermediate polymerization products for Staphylococcus aureus monofunctional transglycosylase MGT. Peak formation is inhibited by 6 µM ramoplanin or enduracidin. Characterization by mass spectrometry indicates the formation of tetrasaccharide and octasaccharide intermediates, but not a hexasaccharide intermediate, suggesting a dimerization of a lipid-linked tetrasaccharide. Numerical modelling of the time-course data supports a kinetic model involving addition to lipid-linked tetrasaccharide of either lipid II or lipid-linked tetrasaccharide. Observation of free octasaccharide suggests that hydrolysis of the undecaprenyl diphosphate lipid carrier occurs at this stage in peptidoglycan transglycosylation. © 2014 The Authors.
GeoFramework: A Modeling Framework for Solid Earth Geophysics
NASA Astrophysics Data System (ADS)
Gurnis, M.; Aivazis, M.; Tromp, J.; Tan, E.; Thoutireddy, P.; Liu, Q.; Choi, E.; Dicaprio, C.; Chen, M.; Simons, M.; Quenette, S.; Appelbe, B.; Aagaard, B.; Williams, C.; Lavier, L.; Moresi, L.; Law, H.
2003-12-01
As data sets in geophysics become larger and of greater relevance to other earth science disciplines, and as earth science becomes more interdisciplinary in general, modeling tools are being driven in new directions. There is now a greater need to link modeling codes to one another, link modeling codes to multiple datasets, and make modeling software available to non-modeling specialists. Coupled with rapid progress in computer hardware (including the computational speed afforded by massively parallel computers), progress in numerical algorithms, and the introduction of software frameworks, these lofty goals of merging software in geophysics are now possible. The GeoFramework project, a collaboration between computer scientists and geoscientists, is a response to these needs and opportunities. GeoFramework is based on and extends Pyre, a Python-based modeling framework, recently developed to link solid (Lagrangian) and fluid (Eulerian) models, as well as mesh generators, visualization packages, and databases, with one another for engineering applications. The utility and generality of Pyre as a general purpose framework in science is now being recognized. Besides its use in engineering and geophysics, it is also being used in particle physics and astronomy. Geology and geophysics impose their own unique requirements on software frameworks, requirements which are not generally met by existing frameworks, and so there is a need for research in this area. One of the special requirements is the way Lagrangian and Eulerian codes will need to be linked in time and space within a plate tectonics context. GeoFramework has grown beyond its initial goal of linking a limited number of existing codes together. The following codes are now being reengineered within the context of Pyre: Tecton, a 3-D FE visco-elastic code for lithospheric relaxation; CitComS, a code for spherical mantle convection; SpecFEM3D, a SEM code for global and regional seismic waves; eqsim, an FE code for dynamic earthquake rupture; SNAC, a developing 3-D code based on the FLAC method for visco-elastoplastic deformation; SNARK, a 3-D FE-PIC method for viscoplastic deformation; and gPLATES, an open-source paleogeographic/plate tectonics modeling package. We will demonstrate how codes can be linked with themselves, such as a regional and global model of mantle convection and a visco-elastoplastic representation of the crust within viscous mantle flow. Finally, we will describe how http://GeoFramework.org has become a distribution site for a suite of modeling software in geophysics.
ERIC Educational Resources Information Center
Masten, Ann S.; Roisman, Glenn I.; Long, Jeffrey D.; Burt, Keith B.; Obradovic, Jelena; Riley, Jennifer R.; Boelcke-Stennes, Kristen; Tellegen, Auke
2005-01-01
A developmental cascade model linking competence and symptoms was tested in a study of a normative, urban school sample of 205 children (initially 8 to 12 years old). Internalizing and externalizing symptoms and academic competence were assessed by multiple methods at the study outset and after 7, 10, and 20 years. A series of nested cascade…
A group evolving-based framework with perturbations for link prediction
NASA Astrophysics Data System (ADS)
Si, Cuiqi; Jiao, Licheng; Wu, Jianshe; Zhao, Jin
2017-06-01
Link prediction is a ubiquitous task in many fields: it uses partially observed information to predict the absence or presence of links between node pairs. The study of group evolution provides reasonable explanations for the behaviors of nodes, the relations between nodes, and community formation in a network. Possible events in group evolution include continuing, growing, splitting, forming, and so on. The changes discovered in networks are, to some extent, the result of these events. In this work, we present a characterization of node behavioral patterns based on group evolution, from which we can estimate the probability that two nodes tend to interact. The primary aim of this paper is to offer a minimal toy model for detecting missing links based on the evolution of groups and to give a simple explanation of the rationale behind the model. We first introduce perturbations into networks to obtain stable cluster structures, and the stable clusters determine the stability of each node. Fluctuations, another node behavior, are then estimated from the participation of each node in its own group. Finally, we demonstrate that these characteristics allow us to predict link existence and propose a link prediction model that outperforms many classical methods with lower computational time at large scales. Encouraging experimental results obtained on real networks show that our approach can effectively predict missing links in a network and retains stable performance even when nearly 40% of the edges are missing.
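A minimal sketch of the perturbation idea is given below, assuming a generic community detector from networkx rather than the authors' clustering: the network is repeatedly perturbed by removing a small fraction of edges, communities are recomputed, and each unlinked node pair is scored by how often it ends up in the same group.

```python
import itertools
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
random.seed(0)
n_runs, co_cluster = 50, {}

for _ in range(n_runs):
    H = G.copy()
    # Perturb the network by removing 10% of its edges at random
    H.remove_edges_from(random.sample(list(H.edges()),
                                      int(0.1 * H.number_of_edges())))
    communities = greedy_modularity_communities(H)
    label = {v: c for c, nodes in enumerate(communities) for v in nodes}
    # Count how often each currently unlinked pair falls in the same community
    for u, v in itertools.combinations(G.nodes(), 2):
        if not G.has_edge(u, v) and label[u] == label[v]:
            co_cluster[(u, v)] = co_cluster.get((u, v), 0) + 1

# Rank candidate missing links by co-clustering stability under perturbation
ranked = sorted(co_cluster.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[:5])
```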
Probabilistic structural analysis by extremum methods
NASA Technical Reports Server (NTRS)
Nafday, Avinash M.
1990-01-01
The objective is to demonstrate discrete extremum methods of structural analysis as a tool for structural system reliability evaluation. Specifically, linear and multiobjective linear programming models for analysis of rigid plastic frames under proportional and multiparametric loadings, respectively, are considered. Kinematic and static approaches for analysis form a primal-dual pair in each of these models and have a polyhedral format. Duality relations link extreme points and hyperplanes of these polyhedra and lead naturally to dual methods for system reliability evaluation.
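As a toy instance of the static (lower-bound) formulation, the sketch below maximizes a load factor subject to linear equilibrium constraints and plastic-moment bounds using scipy's linprog. The equilibrium matrix, load vector, and moment capacity are invented for illustration and do not come from the report; the multiobjective, multiparametric case is not shown.

```python
import numpy as np
from scipy.optimize import linprog

# Toy static formulation: maximise the load factor lam subject to equilibrium
# B @ m = lam * f and |m_i| <= m_p at every critical section.
B   = np.array([[1.0, 1.0]])     # equilibrium matrix (1 equation, 2 moments)
f   = np.array([10.0])           # reference load vector
m_p = 5.0                        # plastic moment capacity

# Decision variables x = [m1, m2, lam]; maximising lam means minimising -lam
c    = [0.0, 0.0, -1.0]
A_eq = np.hstack([B, -f.reshape(-1, 1)])
res  = linprog(c, A_eq=A_eq, b_eq=np.zeros(1),
               bounds=[(-m_p, m_p), (-m_p, m_p), (0, None)])
print(res.x)   # the collapse load factor is res.x[-1] (here 1.0)
```

The dual of this program corresponds to the kinematic (upper-bound) approach, which is the primal-dual pairing the abstract refers to.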
Linda B. Brubaker; Philip E. Higuera; T. Scott Rupp; Mark A. Olson; Patricia M. Anderson; Feng Sheng. Hu
2009-01-01
Interactions between vegetation and fire have the potential to overshadow direct effects of climate change on fire regimes in boreal forests of North America. We develop methods to compare sediment-charcoal records with fire regimes simulated by an ecological model, ALFRESCO (Alaskan Frame-based Ecosystem Code) and apply these methods to evaluate potential causes of a...
Study of rain attenuation in Ka band for satellite communication in South Korea
NASA Astrophysics Data System (ADS)
Shrestha, Sujan; Choi, Dong-You
2016-10-01
An important factor to be considered in the link budget estimation for satellite communication systems operating at frequencies above 10 GHz is rain attenuation. Scattering and absorption are the main concerns for system designers in these frequency bands. This has resulted in the need for suitable prediction models that can best estimate attenuation due to rain from the available rain attenuation data. Researchers have developed models that can be used to estimate the 1-min rainfall attenuation distribution for an earth-space link, but there is still some confusion with regard to choosing the right model to predict attenuation for the location of interest. In this context, the existing prediction models need to be tested against measured results. This paper presents studies on rain attenuation at 19.8 GHz, which specifies the performance parameters for Ka band in an earth-space communication system. It presents experimental results for rain rates and rain-induced attenuation at 19.8 and 20.73 GHz for vertical and circular polarization, respectively. The received signal data for rain attenuation and rain rate were collected at 10 s intervals over a three-year period from 2013 to 2015. The data highlight the impact of clear-air variation and rain fade loss. Rain rate data were measured with an OTT Parsivel. During the observation period, rain rates of about 50 mm/h and attenuation values of 11.6 dB for 0.01% of the time were noted. The experimental link was set up at the Korea Radio Promotion Association, Mokdong, Seoul. Out of several models, this paper presents a discussion and comparison of the ITU-R P.618-12, Unified Method, Dissanayake-Allnutt-Haidara (DAH), Simple Attenuation (SAM), Crane Global, and Ramachandran and Kumar models. Relative error margins of 27.51%, 89.84%, and 72.46% and of 67.24%, 130.84%, and 166.48% are obtained for 0.1%, 0.01%, and 0.001% of the time at 19.8 and 20.73 GHz under vertical and circular polarization, respectively, with the ITU-R P.618-12 method, which is analyzed further in a later section of this article. In order to obtain a better approximation of rain-induced attenuation, a suitable method is proposed for the earth-space link, whose efficiency has been compared with prominent rain attenuation models. The method provides useful information for system engineers and researchers in deciding on a suitable rain attenuation prediction method for earth-space communication operating in the South Korea region.
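Most of the models compared above start from the specific-attenuation power law gamma = k * R**alpha (dB/km). The sketch below applies that power law with a crude effective-path factor; the k, alpha, and path-reduction values are placeholders, not the frequency- and polarization-specific coefficients of ITU-R P.838/P.618.

```python
def rain_attenuation(rain_rate_mm_h, slant_path_km, k=0.09, alpha=1.06,
                     path_reduction=0.6):
    """Very simplified path attenuation: power-law specific attenuation
    times an effective (reduced) slant-path length."""
    gamma = k * rain_rate_mm_h ** alpha        # specific attenuation, dB/km
    return gamma * slant_path_km * path_reduction

# Attenuation exceeded 0.01% of the time for the measured R(0.01%) = 50 mm/h,
# assuming a 5 km slant path through rain
print(rain_attenuation(50.0, 5.0))
```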
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs, (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2) ) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2) . © 2015 John Wiley & Sons Ltd/London School of Economics.
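The HL statistic referred to above can be computed directly from fitted probabilities, which is what makes it applicable under any link function. Below is a minimal sketch of that computation; the grouping into ten equal-size risk groups and the chi-square reference with groups - 2 degrees of freedom follow common practice, and the simulated data are purely illustrative.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, groups=10):
    """Hosmer-Lemeshow summary GOF statistic computed from fitted
    probabilities, so it can be applied to a binary GLM with any link."""
    order = np.argsort(p_hat)
    y, p_hat = np.asarray(y)[order], np.asarray(p_hat)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        obs, exp, n = y[idx].sum(), p_hat[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
p_true = 1 / (1 + np.exp(-(0.5 + x)))     # logistic data-generating model
y = rng.binomial(1, p_true)
print(hosmer_lemeshow(y, p_true))         # fitted probabilities taken as known here
```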
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classification vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct-classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
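To make the heterogeneity problem concrete, the sketch below simulates categorical counts in which the probability of correct classification varies across sampling units on the logit scale, which is the data-generating situation the elaborated model is designed to handle. It does not implement the Bayesian estimator, and all parameter values are assumed.

```python
import numpy as np
rng = np.random.default_rng(42)

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

pi = np.array([0.5, 0.3, 0.2])        # true composition of three categories
n_sites, n_per_site = 100, 30

counts = np.zeros((n_sites, 3), dtype=int)
for s in range(n_sites):
    # Correct-classification probability varies by site: logit-normal draw
    p_correct = expit(rng.normal(loc=2.0, scale=0.7))
    true_cat = rng.choice(3, size=n_per_site, p=pi)
    observed = true_cat.copy()
    miss = rng.random(n_per_site) > p_correct
    # Misclassified draws land uniformly on one of the other two categories
    observed[miss] = (true_cat[miss] + rng.integers(1, 3, miss.sum())) % 3
    counts[s] = np.bincount(observed, minlength=3)

# The naive pooled estimate is pulled toward uniformity relative to pi
print(counts.sum(axis=0) / counts.sum())
```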
Model updating in flexible-link multibody systems
NASA Astrophysics Data System (ADS)
Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.
2016-09-01
The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements, to describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. By taking advantage of the experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Indeed, several peculiarities of the model of FLMS should be carefully tackled. On the one hand, nonlinear models of a FLMS should be linearized about static equilibrium configurations. On the other, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating has been finally cast as an optimization problem in the presence of bounds on the feasible values, by also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.
Recursive Newton-Euler formulation of manipulator dynamics
NASA Technical Reports Server (NTRS)
Nasser, M. G.
1989-01-01
A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.
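For an analytical benchmark of the kind mentioned above, the closed-form Lagrangian dynamics of a planar two-link arm are convenient: a recursive Newton-Euler implementation should reproduce these joint torques. The sketch below uses the textbook two-link equations with illustrative masses, lengths, and inertias; it is not the six-by-six formulation of the report.

```python
import numpy as np

def two_link_inverse_dynamics(q, qd, qdd,
                              m=(1.0, 1.0), l=(1.0, 1.0),
                              lc=(0.5, 0.5), I=(0.05, 0.05), g=9.81):
    """Closed-form joint torques of a planar 2R arm (Lagrangian form), the
    kind of analytical result used to verify a recursive Newton-Euler code."""
    q1, q2 = q
    qd1, qd2 = qd
    m1, m2 = m
    l1, l2 = l
    lc1, lc2 = lc
    I1, I2 = I
    c2, s2 = np.cos(q2), np.sin(q2)

    # Mass matrix
    M = np.array([
        [I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2),
         I2 + m2 * (lc2**2 + l1 * lc2 * c2)],
        [I2 + m2 * (lc2**2 + l1 * lc2 * c2),
         I2 + m2 * lc2**2]])
    # Coriolis/centrifugal matrix and gravity vector
    h = -m2 * l1 * lc2 * s2
    C = np.array([[h * qd2, h * (qd1 + qd2)],
                  [-h * qd1, 0.0]])
    G = np.array([(m1 * lc1 + m2 * l1) * g * np.cos(q1) + m2 * lc2 * g * np.cos(q1 + q2),
                  m2 * lc2 * g * np.cos(q1 + q2)])
    return M @ np.asarray(qdd) + C @ np.asarray(qd) + G

print(two_link_inverse_dynamics(q=(0.3, 0.5), qd=(0.1, -0.2), qdd=(0.0, 0.0)))
```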
Using Empirical Models for Communication Prediction of Spacecraft
NASA Technical Reports Server (NTRS)
Quasny, Todd
2015-01-01
A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and of the vehicle itself. For example, the effects on a radio frequency signal traveling through a spacecraft's solar array during high-energy solar events can be difficult to model, and thus to predict. This presentation covers the use of empirical methods for communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, the position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.
Spectral method for a kinetic swarming model
Gamba, Irene M.; Haack, Jeffrey R.; Motsch, Sebastien
2015-04-28
Here we present the first numerical method for a kinetic description of the Vicsek swarming model. The kinetic model poses a unique challenge, as there is a distribution dependent collision invariant to satisfy when computing the interaction term. We use a spectral representation linked with a discrete constrained optimization to compute these interactions. To test the numerical scheme we investigate the kinetic model at different scales and compare the solution with the microscopic and macroscopic descriptions of the Vicsek model. Lastly, we observe that the kinetic model captures key features such as vortex formation and traveling waves.
NASA Astrophysics Data System (ADS)
Dalarmelina, Carlos A.; Adegbite, Saheed A.; Pereira, Esequiel da V.; Nunes, Reginaldo B.; Rocha, Helder R. O.; Segatto, Marcelo E. V.; Silva, Jair A. L.
2017-05-01
Block-level detection is required to decode what may be classified as selective control information (SCI) such as control format indicator in 4G-long-term evolution systems. Using optical orthogonal frequency division multiplexing over radio-over-fiber (RoF) links, we report the experimental evaluation of an SCI detection scheme based on a time-domain correlation (TDC) technique in comparison with the conventional maximum likelihood (ML) approach. When compared with the ML method, it is shown that the TDC method improves detection performance over both 20 and 40 km of standard single mode fiber (SSMF) links. We also report a performance analysis of the TDC scheme in noisy visible light communication channel models after propagation through 40 km of SSMF. Experimental and simulation results confirm that the TDC method is attractive for practical orthogonal frequency division multiplexing-based RoF and fiber-wireless systems. Unlike the ML method, another key benefit of the TDC is that it requires no channel estimation.
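A minimal sketch of correlation-based block detection is shown below: the received time-domain block is correlated against the time-domain references of each candidate codeword and the best match is selected. The codebook, block length, and noise level are assumptions, and no channel, RoF, or VLC impairments are modeled.

```python
import numpy as np
rng = np.random.default_rng(7)

# Candidate SCI codewords (frequency-domain BPSK blocks) and their
# time-domain references -- all values are illustrative.
n_fft = 64
candidates = [rng.choice([-1.0, 1.0], size=n_fft) for _ in range(4)]
refs = [np.fft.ifft(c) for c in candidates]

# Received time-domain block: reference index 2 plus complex noise
tx_index = 2
received = refs[tx_index] + 0.1 * (rng.normal(size=n_fft)
                                   + 1j * rng.normal(size=n_fft))

# Time-domain correlation detection: pick the reference with the largest
# magnitude of correlation with the received block.
scores = [np.abs(np.vdot(r, received)) for r in refs]
print(int(np.argmax(scores)) == tx_index)
```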
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
Ulitsky, Igor; Shamir, Ron
2007-01-01
The biological interpretation of genetic interactions is a major challenge. Recently, Kelley and Ideker proposed a method to analyze together genetic and physical networks, which explains many of the known genetic interactions as linking different pathways in the physical network. Here, we extend this method and devise novel analytic tools for interpreting genetic interactions in a physical context. Applying these tools on a large-scale Saccharomyces cerevisiae data set, our analysis reveals 140 between-pathway models that explain 3765 genetic interactions, roughly doubling those that were previously explained. Model genes tend to have short mRNA half-lives and many phosphorylation sites, suggesting that their stringent regulation is linked to pathway redundancy. We also identify ‘pivot' proteins that have many physical interactions with both pathways in our models, and show that pivots tend to be essential and highly conserved. Our analysis of models and pivots sheds light on the organization of the cellular machinery as well as on the roles of individual proteins. PMID:17437029
Huang, Ruihua; Liu, Qian; Zhang, Lujie; Yang, Bingchao
2015-01-01
A kind of biocomposite was prepared by the intercalation of chitosan in bentonite and the cross-linking reaction of chitosan with glutaraldehyde, which was referred to as cross-linked chitosan/bentonite (CCS/BT) composite. Adsorptive removal of methyl orange (MO) from aqueous solutions was investigated by batch method. The adsorption of MO onto CCS/BT composite was affected by the ratio of chitosan to BT and contact time. pH value had only a minor impact on MO adsorption in a wide pH range. Adsorption kinetics was mainly controlled by the pseudo-second-order kinetic model. The adsorption of MO onto CCS/BT composite followed the Langmuir isotherm model, and the maximum adsorption capacity of CCS/BT composite calculated by the Langmuir model was 224.8 mg/g. Experimental results indicated that this adsorbent had a potential for the removal of MO from aqueous solutions.
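The isotherm and kinetic fits described above are standard nonlinear regressions. The sketch below fits the Langmuir isotherm and the pseudo-second-order kinetic model with scipy's curve_fit; the data points are invented for illustration and are not the measured CCS/BT values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (made-up) equilibrium and kinetic data for MO adsorption
Ce = np.array([5, 10, 25, 50, 100, 200], dtype=float)      # mg/L
qe = np.array([60, 105, 160, 195, 215, 222], dtype=float)  # mg/g
t  = np.array([5, 10, 20, 40, 80, 160], dtype=float)       # min
qt = np.array([80, 130, 175, 200, 212, 218], dtype=float)  # mg/g

def langmuir(C, qmax, KL):
    # q = qmax * KL * C / (1 + KL * C)
    return qmax * KL * C / (1 + KL * C)

def pseudo_second_order(t, k2, qe_fit):
    # q(t) = k2 * qe^2 * t / (1 + k2 * qe * t)
    return k2 * qe_fit**2 * t / (1 + k2 * qe_fit * t)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(220, 0.05))
(k2, qe_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=(0.001, 220))
print(qmax, KL, k2, qe_fit)
```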
Role of weakest links and system-size scaling in multiscale modeling of stochastic plasticity
NASA Astrophysics Data System (ADS)
Ispánovity, Péter Dusán; Tüzes, Dániel; Szabó, Péter; Zaiser, Michael; Groma, István
2017-02-01
Plastic deformation of crystalline and amorphous matter often involves intermittent local strain burst events. To understand the physical background of the phenomenon a minimal stochastic mesoscopic model was introduced, where details of the microstructure evolution are statistically represented in terms of a fluctuating local yield threshold. In the present paper we propose a method for determining the corresponding yield stress distribution for the case of crystal plasticity from lower scale discrete dislocation dynamics simulations which we combine with weakest link arguments. The success of scale linking is demonstrated by comparing stress-strain curves obtained from the resulting mesoscopic and the underlying discrete dislocation models in the microplastic regime. As shown by various scaling relations they are statistically equivalent and behave identically in the thermodynamic limit. The proposed technique is expected to be applicable to different microstructures and also to amorphous materials.
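The weakest-link construction can be illustrated in a few lines of code: each mesoscale cell inherits the minimum of many microscopic thresholds, and the distribution of those minima shifts with cell size. The microscopic Weibull distribution and the cell size below are assumptions, not thresholds extracted from discrete dislocation simulations.

```python
import numpy as np
rng = np.random.default_rng(3)

# Weakest-link construction of a mesoscopic yield-threshold distribution:
# each mesoscale cell contains n microscopic obstacles and yields at the
# weakest of them.  The microscopic distribution is an assumed Weibull law.
n_micro_per_cell = 50
n_cells = 20000
micro_thresholds = 40.0 * rng.weibull(a=2.0, size=(n_cells, n_micro_per_cell))
cell_yield = micro_thresholds.min(axis=1)

# The resulting minima shift to lower stresses as the cell (system) size grows,
# which is the system-size scaling exploited when linking scales.
print(cell_yield.mean(), np.quantile(cell_yield, [0.05, 0.5, 0.95]))
```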
Ye, Xiaoduan; O'Neil, Patrick K; Foster, Adrienne N; Gajda, Michal J; Kosinski, Jan; Kurowski, Michal A; Bujnicki, Janusz M; Friedman, Alan M; Bailey-Kellogg, Chris
2004-12-01
Emerging high-throughput techniques for the characterization of protein and protein-complex structures yield noisy data with sparse information content, placing a significant burden on computation to properly interpret the experimental data. One such technique uses cross-linking (chemical or by cysteine oxidation) to confirm or select among proposed structural models (e.g., from fold recognition, ab initio prediction, or docking) by testing the consistency between cross-linking data and model geometry. This paper develops a probabilistic framework for analyzing the information content in cross-linking experiments, accounting for anticipated experimental error. This framework supports a mechanism for planning experiments to optimize the information gained. We evaluate potential experiment plans using explicit trade-offs among key properties of practical importance: discriminability, coverage, balance, ambiguity, and cost. We devise a greedy algorithm that considers those properties and, from a large number of combinatorial possibilities, rapidly selects sets of experiments expected to discriminate pairs of models efficiently. In an application to residue-specific chemical cross-linking, we demonstrate the ability of our approach to plan experiments effectively involving combinations of cross-linkers and introduced mutations. We also describe an experiment plan for the bacteriophage lambda Tfa chaperone protein in which we plan dicysteine mutants for discriminating threading models by disulfide formation. Preliminary results from a subset of the planned experiments are consistent and demonstrate the practicality of planning. Our methods provide the experimenter with a valuable tool (available from the authors) for understanding and optimizing cross-linking experiments.
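A minimal sketch of the kind of greedy selection described in the abstract above: candidate experiments are scored by how many still-undiscriminated model pairs they separate per unit cost, and are picked until a budget is exhausted. The candidate names, covered pairs, and costs are hypothetical, and the scoring ignores the coverage, balance, and ambiguity criteria the authors also consider.

```python
# Sketch: greedy selection of cross-linking experiments that discriminate model pairs.
# Each candidate experiment "covers" the set of model pairs it can discriminate.

def greedy_plan(candidates, budget):
    """candidates: dict name -> (set of discriminated model pairs, cost)."""
    covered, plan, spent = set(), [], 0.0
    while True:
        best, best_gain = None, 0.0
        for name, (pairs, cost) in candidates.items():
            if name in plan or spent + cost > budget:
                continue
            gain = len(pairs - covered) / cost  # newly discriminated pairs per unit cost
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            return plan, covered
        plan.append(best)
        pairs, cost = candidates[best]
        covered |= pairs
        spent += cost

# Hypothetical candidates: cross-linker/mutant combinations and the model pairs they separate.
candidates = {
    "XL1":      ({("M1", "M2"), ("M1", "M3")}, 1.0),
    "XL2":      ({("M2", "M3")}, 1.0),
    "C54-C120": ({("M1", "M2"), ("M2", "M3"), ("M1", "M4")}, 2.0),
    "C12-C88":  ({("M3", "M4")}, 1.5),
}
plan, covered = greedy_plan(candidates, budget=4.0)
print("selected:", plan, "| model pairs discriminated:", len(covered))
```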
Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan
2015-01-01
Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ in respect to how close these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) as well as the constant processes involved in the task (β = .45) were related to Gf. Taken together, these two latent variables explained the same portion of variance of Gf as a single latent variable obtained by traditional CFA (β = .65) indicating that traditional CFA causes an overestimation of the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
Bazargan-Lari, Y; Eghtesad, M; Khoogar, A; Mohammad-Zadeh, A
2014-09-01
Despite some successful studies on the dynamic simulation of the self-impact double pendulum (SIDP), used to represent humanoid robot legs or arms, there is limited information available about the control of one-leg locomotion. The main goal of this research is to improve the reliability of mammalian leg locomotion and to build more elaborate models close to natural movements by modeling the swing leg as a SIDP. This paper also presents the control design for a SIDP by a nonlinear model-based control method. To achieve this goal, the available data of normal human gait will be taken as the desired trajectories of the hip and knee joints. The model is characterized by the constraint that occurs at the knee joint (the lower joint of the model) in both dynamic modeling and control design. Since the system dynamics is nonlinear, the MIMO input-output feedback linearization method will be employed for control purposes. The first constraint in the forward impact simulation happens at 0.5 rad, where the speed of the upper link increases to 2.5 rad/s and the speed of the lower link is reduced to -5 rad/s. The subsequent constraints occur rather moderately. In the simulation with both backward and forward constraints, the backward impact occurs at -0.5 rad and the speeds of the upper and lower links increase to 2.2 and 1.5 rad/s, respectively. The designed controller performed suitably well and regulated the system accurately.
A Glider-Assisted Link Disruption Restoration Mechanism in Underwater Acoustic Sensor Networks.
Jin, Zhigang; Wang, Ning; Su, Yishan; Yang, Qiuling
2018-02-07
Underwater acoustic sensor networks (UASNs) have become a hot research topic. In UASNs, nodes can be affected by ocean currents and external forces, which could result in sudden link disruption. Therefore, designing a flexible and efficient link disruption restoration mechanism to ensure the network connectivity is a challenge. In the paper, we propose a glider-assisted restoration mechanism which includes link disruption recognition and related link restoring mechanism. In the link disruption recognition mechanism, the cluster heads collect the link disruption information and then schedule gliders acting as relay nodes to restore the disrupted link. Considering the glider's sawtooth motion, we design a relay location optimization algorithm with a consideration of both the glider's trajectory and acoustic channel attenuation model. The utility function is established by minimizing the channel attenuation and the optimal location of glider is solved by a multiplier method. The glider-assisted restoration mechanism can greatly improve the packet delivery rate and reduce the communication energy consumption and it is more general for the restoration of different link disruption scenarios. The simulation results show that glider-assisted restoration mechanism can improve the delivery rate of data packets by 15-33% compared with cooperative opportunistic routing (OVAR), the hop-by-hop vector-based forwarding (HH-VBF) and the vector based forward (VBF) methods, and reduce communication energy consumption by 20-58% for a typical network's setting.
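As a rough illustration of the relay-placement idea above (not the authors' multiplier-method formulation), the sketch below parameterizes one leg of the glider's path as a straight segment between two waypoints and picks the point on it that minimizes the summed transmission loss of the two restored hops, using the standard Thorp absorption formula. The node positions, frequency, and spreading factor are all assumed values.

```python
# Sketch: choose a relay point on a glider segment that minimizes summed acoustic
# transmission loss over the two hops of a disrupted link (Thorp absorption model).
import numpy as np
from scipy.optimize import minimize_scalar

def thorp_alpha(f_khz):
    """Absorption coefficient in dB/km (Thorp formula)."""
    f2 = f_khz**2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def transmission_loss(d_m, f_khz=20.0, k=1.5):
    """Spreading plus absorption loss in dB for range d_m (meters)."""
    return k * 10 * np.log10(max(d_m, 1.0)) + thorp_alpha(f_khz) * d_m / 1000.0

# Hypothetical geometry (meters): two nodes with a broken link, one glider leg from A to B.
src, dst = np.array([0.0, 0.0, -100.0]), np.array([1500.0, 200.0, -150.0])
glider_a, glider_b = np.array([200.0, -300.0, -50.0]), np.array([1200.0, 500.0, -250.0])

def total_loss(t):
    relay = glider_a + t * (glider_b - glider_a)   # point along the glider leg
    return (transmission_loss(np.linalg.norm(relay - src))
            + transmission_loss(np.linalg.norm(relay - dst)))

res = minimize_scalar(total_loss, bounds=(0.0, 1.0), method="bounded")
relay_opt = glider_a + res.x * (glider_b - glider_a)
print("relay point:", np.round(relay_opt, 1), "| total loss (dB):", round(res.fun, 1))
```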
Identification of hybrid node and link communities in complex networks
He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong
2015-01-01
Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately. PMID:25728010
Overlapping community detection based on link graph using distance dynamics
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Jing; Cai, Li-Jun
2018-01-01
The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
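The transform-and-recover pipeline described above can be sketched with networkx: communities found on the link (line) graph, whose nodes are the original edges, are mapped back to overlapping node communities by taking edge endpoints. The sketch substitutes a modularity-based detector for the distance-dynamics model and uses the karate club graph as a stand-in network, so it only illustrates the first and third phases, not L-Attractor itself.

```python
# Sketch: transform to the link (line) graph, detect disjoint communities there,
# then recover overlapping node communities of the original graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()            # stand-in for a real-world network
L = nx.line_graph(G)                  # nodes of L are the edges of G

# Disjoint communities of the link graph (modularity here; distance dynamics in the paper).
link_comms = greedy_modularity_communities(L)

# Recovery: every node inherits the communities of its incident edges, so overlaps appear.
node_comms = []
for comm in link_comms:
    members = set()
    for u, v in comm:                 # each line-graph node is an edge (u, v) of G
        members.update((u, v))
    node_comms.append(members)

overlapping = [n for n in G if sum(n in c for c in node_comms) > 1]
print(f"{len(node_comms)} communities; {len(overlapping)} nodes belong to more than one")
```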
Mine safety assessment using gray relational analysis and bow tie model
2018-01-01
Mine safety assessment is a precondition for ensuring orderly and safe production. The main purpose of this study was to prevent mine accidents more effectively by proposing a composite risk analysis model. First, the weights of the assessment indicators were determined by the revised integrated weight method, in which the objective weights were determined by a variation coefficient method and the subjective weights by the Delphi method. A new formula was then adopted to calculate the integrated weights based on the subjective and objective weights. Second, after the assessment indicator weights were determined, gray relational analysis was used to evaluate the safety of mine enterprises. Mine enterprise safety was ranked according to the gray relational degree, and weak links in mine safety practices were identified based on gray relational analysis. Third, to validate the revised integrated weight method adopted in the process of gray relational analysis, the fuzzy evaluation method was applied to the safety assessment of mine enterprises. Fourth, for the first time, the bow tie model was adopted to identify the causes and consequences of weak links and allow corresponding safety measures to be taken to guarantee the mine’s safe production. A case study of mine safety assessment was presented to demonstrate the effectiveness and rationality of the proposed composite risk analysis model, which can be applied to other related industries for safety evaluation. PMID:29561875
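A bare-bones gray relational analysis of the kind described above: indicators are normalized, gray relational coefficients are computed against an ideal reference series, and enterprises are ranked by their weighted gray relational degree. The indicator matrix and the integrated weights below are illustrative placeholders, not values from the study.

```python
# Sketch: gray relational analysis for ranking mine enterprises by safety indicators.
import numpy as np

# Rows = enterprises, columns = assessment indicators (illustrative benefit-type scores).
X = np.array([
    [0.82, 0.70, 0.91, 0.60],
    [0.75, 0.88, 0.64, 0.72],
    [0.90, 0.65, 0.80, 0.85],
])
weights = np.array([0.3, 0.2, 0.3, 0.2])     # assumed integrated (subjective + objective) weights

# Normalize each indicator to [0, 1]; the reference series is the ideal (all ones).
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
delta = np.abs(1.0 - Xn)                     # distance to the reference series

rho = 0.5                                    # distinguishing coefficient
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
degree = coeff @ weights                     # gray relational degree per enterprise

ranking = np.argsort(-degree)
print("gray relational degrees:", np.round(degree, 3), "| ranking:", ranking + 1)
```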
Capturing the Flatness of a peer-to-peer lending network through random and selected perturbations
NASA Astrophysics Data System (ADS)
Karampourniotis, Panagiotis D.; Singh, Pramesh; Uparna, Jayaram; Horvat, Emoke-Agnes; Szymanski, Boleslaw K.; Korniss, Gyorgy; Bakdash, Jonathan Z.; Uzzi, Brian
Null models are established tools that have been used in network analysis to uncover various structural patterns. They quantify the deviance of an observed network measure from that given by the null model. We construct a null model for weighted, directed networks to identify biased links (carrying significantly different weights than expected according to the null model) and thus quantify the flatness of the system. Using this model, we study the flatness of Kiva, a large international crowdfinancing network of borrowers and lenders, aggregated to the country level. The dataset spans the years from 2006 to 2013. Our longitudinal analysis shows that the flatness of the system is decreasing over time, meaning the proportion of biased inter-country links is growing. We extend our analysis by testing the robustness of the flatness of the network under perturbations of the links' weights or of the nodes themselves. Examples of such perturbations are event shocks (e.g. erecting walls) or regulatory shocks (e.g. Brexit). We find that flatness is unaffected by random shocks, but changes after shocks target links with a large weight or bias. The methods we use to capture the flatness are based on analytics, simulations, and numerical computations using Shannon's maximum entropy. Supported by ARL NS-CTA.
Weak reversible cross links may decrease the strength of aligned fiber bundles.
Nabavi, S Soran; Hartmann, Markus A
2016-02-21
Reversible cross-linking is an effective strategy to specifically tailor the mechanical properties of polymeric materials that can be found in a variety of biological as well as man-made materials. Using a simple model, this paper investigates the influence of weak, reversible cross-links on the mechanical properties of aligned fiber bundles. Special emphasis in this analysis is put on the strength of the investigated structures. Using Monte Carlo methods, two topologies of cross-links exceeding the strength of the covalent backbone are studied. Most surprisingly, only two cross-links are sufficient to break the backbone of a multi-chain system, resulting in a reduced strength of the material. The observed effect crucially depends on the ratio of inter- to intra-chain cross-links and, thus, on the grafting density that determines this ratio.
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Lewis, Mark
2010-01-01
A new method of treating electronic energy level transitions as well as linking ionization to electronic energy levels is proposed following the particle-based chemistry model of Bird. Although the use of electronic energy levels and ionization reactions in DSMC are not new ideas, the current method of selecting what level to transition to, how to reproduce transition rates, and the linking of the electronic energy levels to ionization are, to the author's knowledge, novel concepts. The resulting equilibrium temperatures are shown to remain constant, and the electronic energy level distributions are shown to reproduce the Boltzmann distribution. The electronic energy level transition rates and ionization rates due to electron impacts are shown to reproduce theoretical and measured rates. The rates due to heavy particle impacts, while not as favorable as the electron impact rates, compare favorably to values from the literature. Thus, these new extensions to the particle-based chemistry model of Bird provide an accurate method for predicting electronic energy level transition and ionization rates in gases.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.
2009-01-01
Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include (1) diagnostic/detection methodology, (2) prognosis/lifing methodology, (3) diagnostic/prognosis linkage, (4) experimental validation, and (5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multimechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.
Role of special cross-links in structure formation of bacterial DNA polymer
NASA Astrophysics Data System (ADS)
Agarwal, Tejal; Manjunath, G. P.; Habib, Farhat; Lakshmi Vaddavalli, Pavana; Chatterji, Apratim
2018-01-01
Using data from contact maps of the DNA-polymer of Escherichia coli (E. coli) (at kilobase pair resolution) as an input to our model, we introduce cross-links between monomers in a bead-spring model of a ring polymer at very specific points along the chain. Via suitable Monte Carlo simulations, we show that the presence of these cross-links leads to a particular organization of the chain at large (micron) length scales of the DNA. We also investigate the structure of a ring polymer with an equal number of cross-links at random positions along the chain. We find that though the polymer does get organized at the large length scales, the nature of the organization is quite different from the organization observed with cross-links at specific biologically determined positions. We used the contact map of the E. coli bacterium, which has around 4.6 million base pairs in a single circular chromosome. In our coarse-grained flexible ring polymer model, we used 4642 monomer beads and observed that around 80 cross-links are enough to induce the large-scale organization of the molecule, accounting for statistical fluctuations caused by thermal energy. The length of a DNA chain, even of a simple bacterial cell such as E. coli, is much longer than that of typical proteins, hence we avoided methods used to tackle protein folding problems. We define new suitable quantities to identify the large-scale structure of a polymer chain with a few cross-links.
A new method of radio frequency links by coplanar coils for implantable medical devices.
Xue, L; Hao, H W; Li, L; Ma, B Z
2005-01-01
A new method based on coplanar coils for the design of radio frequency links has been developed to realize simple and reliable communication between the programming wand and implantable medical devices enclosed in a shielding container. Based on an analysis of electric and magnetic field theory, the communication model has been established and simulated, and the circuit has been designed and tested. The experimental results agree fairly well with the simulation. The voltage transfer ratio of the typical circuit with the present parameters can reach as high as 0.02, which fulfills the requirements of communication.
Banerjee, D; Dalmonte, M; Müller, M; Rico, E; Stebler, P; Wiese, U-J; Zoller, P
2012-10-26
Using a Fermi-Bose mixture of ultracold atoms in an optical lattice, we construct a quantum simulator for a U(1) gauge theory coupled to fermionic matter. The construction is based on quantum links which realize continuous gauge symmetry with discrete quantum variables. At low energies, quantum link models with staggered fermions emerge from a Hubbard-type model which can be quantum simulated. This allows us to investigate string breaking as well as the real-time evolution after a quench in gauge theories, which are inaccessible to classical simulation methods.
Low-lying Photoexcited States of a One-Dimensional Ionic Extended Hubbard Model
NASA Astrophysics Data System (ADS)
Yokoi, Kota; Maeshima, Nobuya; Hino, Ken-ichi
2017-10-01
We investigate the properties of low-lying photoexcited states of a one-dimensional (1D) ionic extended Hubbard model at half-filling. Numerical analysis by using the full and Lanczos diagonalization methods shows that, in the ionic phase, there exist low-lying photoexcited states below the charge transfer gap. As a result of comparison with numerical data for the 1D antiferromagnetic (AF) Heisenberg model, it was found that, for a small alternating potential Δ, these low-lying photoexcited states are spin excitations, which is consistent with a previous analytical study [Katsura et al.,
Exploring context and content links in social media: a latent space method.
Qi, Guo-Jun; Aggarwal, Charu; Tian, Qi; Ji, Heng; Huang, Thomas S
2012-05-01
Social media networks contain both content and context-specific information. Most existing methods work with either of the two for the purpose of multimedia mining and retrieval. In reality, both content and context information are rich sources of information for mining, and the full power of mining and processing algorithms can be realized only with the use of a combination of the two. This paper proposes a new algorithm which mines both context and content links in social media networks to discover the underlying latent semantic space. This mapping of the multimedia objects into latent feature vectors enables the use of any off-the-shelf multimedia retrieval algorithms. Compared to the state-of-the-art latent methods in multimedia analysis, this algorithm effectively solves the problem of sparse context links by mining the geometric structure underlying the content links between multimedia objects. Specifically for multimedia annotation, we show that an effective algorithm can be developed to directly construct annotation models by simultaneously leveraging both context and content information based on latent structure between correlated semantic concepts. We conduct experiments on the Flickr data set, which contains user tags linked with images. We illustrate the advantages of our approach over the state-of-the-art multimedia retrieval techniques.
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
NASA Astrophysics Data System (ADS)
Wang, Chao; Durney, Krista M.; Fomovsky, Gregory; Ateshian, Gerard A.; Vukelic, Sinisa
2016-03-01
The onset of osteoarthritis (OA) in articular cartilage is characterized by degradation of extracellular matrix (ECM). Specifically, breakage of cross-links between collagen fibrils in the articular cartilage leads to loss of structural integrity of the bulk tissue. Since there are no broadly accepted, non-invasive, label-free tools for diagnosing OA at its early stage, Raman spectroscopy is therefore proposed in this work as a novel, non-destructive diagnostic tool. In this study, collagen thin films were employed to act as a simplified model system of the cartilage collagen extracellular matrix. Cross-link formation was controlled via exposure to glutaraldehyde (GA), by varying exposure time and concentration levels, and Raman spectral information was collected to quantitatively characterize the cross-link assignments imparted to the collagen thin films during treatment. A novel, quantitative method was developed to analyze the Raman signal obtained from collagen thin films. Segments of Raman signal were decomposed and modeled as the sum of individual bands, providing an optimization function for subsequent curve fitting against experimental findings. Relative changes in the concentration of the GA-induced pyridinium cross-links were extracted from the model, as a function of the exposure to GA. Spatially resolved characterization enabled construction of spectral maps of the collagen thin films, which provided detailed information about the variation of cross-link formation at various locations on the specimen. Results showed that Raman spectral data correlate with glutaraldehyde treatment and therefore may be used as a proxy by which to measure loss of collagen cross-links in vivo. This study proposes a promising system of identifying onset of OA and may enable early intervention treatments that may serve to slow or prevent osteoarthritis progression.
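The band-decomposition step described above, modeling a spectral segment as a sum of individual bands and fitting it to the measurement, can be sketched as follows. The band centers, widths, and the synthetic "measured" spectrum are made up, and Gaussian line shapes stand in for whatever band profiles the authors actually used.

```python
# Sketch: decompose a Raman spectral segment into a sum of Gaussian bands via least squares.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-((x - cen) ** 2) / (2 * wid**2))

def two_bands(x, a1, c1, w1, a2, c2, w2):
    # e.g. a matrix band plus a cross-link-related band (centers are illustrative only)
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

shift = np.linspace(1550, 1700, 300)                  # Raman shift axis (cm^-1)
rng = np.random.default_rng(1)
truth = two_bands(shift, 1.0, 1610, 8.0, 0.4, 1660, 12.0)
spectrum = truth + rng.normal(0, 0.02, shift.size)    # synthetic "measurement"

p0 = [0.8, 1605, 10.0, 0.5, 1655, 10.0]               # initial guesses for the optimizer
popt, _ = curve_fit(two_bands, shift, spectrum, p0=p0)
areas = [popt[0] * popt[2] * np.sqrt(2 * np.pi), popt[3] * popt[5] * np.sqrt(2 * np.pi)]
print("fitted band areas (relative concentration proxy):", np.round(areas, 2))
```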
Representing Functions in n Dimensions to Arbitrary Accuracy
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
2007-01-01
A method of approximating a scalar function of n independent variables (where n is a positive integer) to arbitrary accuracy has been developed. This method is expected to be attractive for use in engineering computations in which it is necessary to link global models with local ones or in which it is necessary to interpolate noiseless tabular data that have been computed from analytic functions or numerical models in n-dimensional spaces of design parameters.
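One common way to interpolate noiseless tabular data on an n-dimensional grid of design parameters, shown here only as a generic illustration and not as the author's method, is a regular-grid interpolant; the sketch below tabulates an analytic function of three variables and queries it off-grid.

```python
# Sketch: interpolating tabulated values of a scalar function of n variables (n = 3 here).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Tabulate a noiseless analytic function on a grid of "design parameters".
x = y = z = np.linspace(0.0, 1.0, 21)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
table = np.sin(np.pi * X) * np.cos(np.pi * Y) * Z**2

interp = RegularGridInterpolator((x, y, z), table, method="linear")

pts = np.array([[0.13, 0.47, 0.91], [0.62, 0.08, 0.33]])
exact = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1]) * pts[:, 2] ** 2
print("interpolated:", interp(pts), "exact:", exact)  # refine the grid to tighten the error
```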
Similarity-based Regularized Latent Feature Model for Link Prediction in Bipartite Networks.
Wang, Wenjun; Chen, Xue; Jiao, Pengfei; Jin, Di
2017-12-05
Link prediction is an attractive research topic in the field of data mining and has significant applications in improving the performance of recommendation systems and exploring the evolving mechanisms of complex networks. A variety of complex systems in the real world can be abstractly represented as bipartite networks, in which there are two types of nodes and no links connect nodes of the same type. In this paper, we propose a framework for link prediction in bipartite networks by combining the similarity-based structure and the latent feature model from a new perspective. The framework is called Similarity Regularized Nonnegative Matrix Factorization (SRNMF), which explicitly takes the local characteristics into consideration and encodes the geometrical information of the networks by constructing a similarity-based matrix. We also develop an iterative scheme to solve the objective function based on gradient descent. Extensive experiments on a variety of real-world bipartite networks show that the proposed framework of link prediction has a more competitive, preferable and stable performance in comparison with the state-of-the-art methods.
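A compact sketch in the spirit of the framework above, but not the authors' exact SRNMF objective or updates: the bipartite adjacency matrix is factorized with nonnegative factors, a similarity matrix on one node type regularizes its latent features via standard graph-regularized multiplicative updates, and unobserved entries with high reconstructed scores become candidate links. The network, rank, and regularization weight are arbitrary.

```python
# Sketch: similarity-regularized NMF for link prediction in a bipartite network.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((30, 20)) < 0.15).astype(float)    # bipartite adjacency (type-1 x type-2 nodes)

# Similarity between type-1 nodes (here: cosine similarity of their link patterns).
norms = np.linalg.norm(A, axis=1, keepdims=True) + 1e-12
W = (A / norms) @ (A / norms).T
np.fill_diagonal(W, 0.0)
D = np.diag(W.sum(axis=1))

k, lam, eps = 5, 0.1, 1e-12
U = rng.random((A.shape[0], k))                    # latent features of type-1 nodes
V = rng.random((A.shape[1], k))                    # latent features of type-2 nodes

# Graph-regularized multiplicative updates, minimizing
# ||A - U V^T||_F^2 + lam * tr(U^T (D - W) U), with nonnegativity preserved automatically.
for _ in range(200):
    U *= (A @ V + lam * W @ U) / (U @ (V.T @ V) + lam * D @ U + eps)
    V *= (A.T @ U) / (V @ (U.T @ U) + eps)

scores = U @ V.T
scores[A > 0] = -np.inf                            # rank only unobserved pairs
top = np.unravel_index(np.argsort(scores, axis=None)[-5:], scores.shape)
print("top-5 predicted links (type-1 idx, type-2 idx):", list(zip(*top)))
```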
Emerging semantics to link phenotype and environment
Bunker, Daniel E.; Buttigieg, Pier Luigi; Cooper, Laurel D.; Dahdul, Wasila M.; Domisch, Sami; Franz, Nico M.; Jaiswal, Pankaj; Lawrence-Dill, Carolyn J.; Midford, Peter E.; Mungall, Christopher J.; Ramírez, Martín J.; Specht, Chelsea D.; Vogt, Lars; Vos, Rutger Aldo; Walls, Ramona L.; White, Jeffrey W.; Zhang, Guanyang; Deans, Andrew R.; Huala, Eva; Lewis, Suzanna E.; Mabee, Paula M.
2015-01-01
Understanding the interplay between environmental conditions and phenotypes is a fundamental goal of biology. Unfortunately, data that include observations on phenotype and environment are highly heterogeneous and thus difficult to find and integrate. One approach that is likely to improve the status quo involves the use of ontologies to standardize and link data about phenotypes and environments. Specifying and linking data through ontologies will allow researchers to increase the scope and flexibility of large-scale analyses aided by modern computing methods. Investments in this area would advance diverse fields such as ecology, phylogenetics, and conservation biology. While several biological ontologies are well-developed, using them to link phenotypes and environments is rare because of gaps in ontological coverage and limits to interoperability among ontologies and disciplines. In this manuscript, we present (1) use cases from diverse disciplines to illustrate questions that could be answered more efficiently using a robust linkage between phenotypes and environments, (2) two proof-of-concept analyses that show the value of linking phenotypes to environments in fishes and amphibians, and (3) two proposed example data models for linking phenotypes and environments using the extensible observation ontology (OBOE) and the Biological Collections Ontology (BCO); these provide a starting point for the development of a data model linking phenotypes and environments. PMID:26713234
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
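The lower-bound idea above, the probability that the largest of N initial cracks exceeds a critical size, is easy to check numerically; the sketch below compares the closed-form bound 1 - F(a_c)^N against a Monte Carlo estimate for an assumed lognormal initial-crack-size distribution. The crack count, critical size, and distribution parameters are illustrative, not the study's calibrated values.

```python
# Sketch: lower bound on MSD failure probability from the largest of N initial cracks.
import numpy as np
from scipy import stats

n_cracks = 20           # collinear crack sites in the panel (illustrative)
a_crit = 1.2            # critical initial crack size (mm) that grows across the ligament
mu, sigma = np.log(0.25), 0.5               # assumed lognormal parameters for crack size
dist = stats.lognorm(s=sigma, scale=np.exp(mu))

# Closed-form bound via the CDF of the maximum of N i.i.d. crack sizes.
p_lower_exact = 1.0 - dist.cdf(a_crit) ** n_cracks

# Monte Carlo check of the same bound.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=mu, sigma=sigma, size=(200_000, n_cracks))
p_lower_mc = np.mean(samples.max(axis=1) > a_crit)

print(f"extreme-value bound: {p_lower_exact:.2e}, Monte Carlo: {p_lower_mc:.2e}")
```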
Standard model of knowledge representation
NASA Astrophysics Data System (ADS)
Yin, Wensheng
2016-09-01
Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. According to ontology, system theory, and control theory, a standard model of knowledge representation that reflects the change of the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict the traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms. It can also express process knowledge and, at the same time, has a strong ability to solve problems. In addition, the standard model of knowledge representation provides a way to handle imprecise and inconsistent knowledge.
NASA Astrophysics Data System (ADS)
Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo
2013-08-01
Particle-based models and continuum models have been developed to quantify mixing-limited bimolecular reactions for decades. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (such as the interaction radius R) and the continuum model parameter (i.e., the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. Qualitative link between R and Kf can then be built for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t-1/2 (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting of laboratory experiments further shows that the best fit factor in R and Kf is on the same order, and both models can efficiently describe chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics therefore may be linked directly, where the exact linkage may depend on the chemical and physical properties of the system.
Thermometric enzyme linked immunosorbent assay: TELISA.
Mattiasson, B; Borrebaeck, C; Sanfridson, B; Mosbach, K
1977-08-11
A new method, thermometric enzyme linked immunosorbent assay (TELISA), for the assay of endogenous and exogenous compounds in biological fluids is described. It is based on the previously described enzyme linked immunosorbent assay technique, ELISA, but utilizes enzymic heat formation, which is measured in an enzyme thermistor unit. In the model system studied, determination of human serum albumin down to a concentration of 10^-10 M (5 ng/ml) was achieved, with both normal and catalase-labelled human serum albumin competing for the binding sites on the immunosorbent, which was rabbit antihuman serum albumin immobilized onto Sepharose CL-4B.
Model of Atmospheric Links on Optical Communications from High Altitude
NASA Technical Reports Server (NTRS)
Subich, Christopher
2004-01-01
Optical communication links have the potential to solve many of the problems of current radio and microwave links to satellites and high-altitude aircraft. The higher frequency involved in optical systems allows for significantly greater signal bandwidth, and thus information transfer rate, in excess of 10 Gbps, and the highly directional nature of laser-based signals eliminates the need for frequency-division multiplexing seen in radio and microwave links today. The atmosphere, however, distorts an optical signal differently than a microwave signal. While the ionosphere is one of the most significant sources of noise and distortion in a microwave or radio signal, the lower atmosphere affects an optical signal more significantly. Refractive index fluctuations, primarily caused by changes in atmospheric temperature and density, distort the incoming signal in both deterministic and nondeterministic ways. Additionally, suspended particles, such as those in haze or rain, further corrupt the transmitted signal. To model many of the atmospheric effects on the propagating beam, we use simulations based on the beam-propagation method. This method, developed both for simulation of signals in waveguides and propagation in atmospheric turbulence, separates the propagation into a diffraction and refraction problem. The diffraction step is an exact solution, within the limits of numerical precision, to the problem of propagation in free space, and the refraction step models the refractive index variances over a segment of the propagation path. By applying refraction for a segment of the propagation path, then diffracting over that same segment, this method forms a good approximation to true propagation through the atmospheric medium. Iterating over small segments of the total propagation path gives a good approximation to the problem of propagation over the entire path. Parameters in this model, such as initial beam profile and atmospheric constants, are easily modified in a simulation such as this, which allows for the rapid analysis of different propagation scenarios. Therefore, this method allows the development of a near-optimal system design for a wide range of situations, typical of what would be seen in different atmospheric conditions over a receiving ground station. A simulation framework based upon this model was developed in FORTRAN, and for moderate grid sizes and propagation distances these simulations are computable in reasonable time on a standard workstation. This presentation will discuss results thus far.
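A bare-bones version of the split-step (beam-propagation) scheme described above: each step applies a thin random phase screen (the refraction step) followed by an exact angular-spectrum free-space propagation (the diffraction step). The turbulence screens here are simple smoothed Gaussian noise rather than a Kolmogorov spectrum, and the beam, grid, and path parameters are illustrative only.

```python
# Sketch: split-step propagation of a Gaussian beam through weak phase-screen "turbulence".
import numpy as np
from scipy.ndimage import gaussian_filter

N, L = 256, 0.5                       # grid points and transverse window size (m)
wavelength, w0 = 1.55e-6, 0.02        # optical wavelength (m) and beam waist (m)
k0 = 2 * np.pi / wavelength
dz, n_steps = 50.0, 20                # step length (m) and number of steps (1 km total)

x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / w0**2).astype(complex)   # initial Gaussian beam

kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kx, kx)
H = np.exp(-1j * (KX**2 + KY**2) * dz / (2 * k0))        # paraxial free-space propagator

rng = np.random.default_rng(0)
for _ in range(n_steps):
    # Refraction step: thin random phase screen (smoothed Gaussian noise, illustrative only).
    screen = gaussian_filter(rng.standard_normal((N, N)), sigma=8) * 0.5
    field *= np.exp(1j * screen)
    # Diffraction step: exact free-space propagation via the angular spectrum.
    field = np.fft.ifft2(np.fft.fft2(field) * H)

intensity = np.abs(field) ** 2
w_eff = np.sqrt(2 * np.sum(intensity * (X**2 + Y**2)) / np.sum(intensity))
print("effective beam radius after propagation (m):", round(float(w_eff), 4), "vs initial", w0)
```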
Modeling of Radiowave Propagation in a Forested Environment
2014-09-01
Propagation models used in wireless communication system design play an important role in overall link performance. Applications require communication devices and sensors to be operated in forested environments, and various methods have been developed to model radiowave propagation in such environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin
Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
Fluorescence imaging of tryptophan and collagen cross-links to evaluate wound closure ex vivo
NASA Astrophysics Data System (ADS)
Wang, Ying; Ortega-Martinez, Antonio; Farinelli, Bill; Anderson, R. R.; Franco, Walfre
2016-02-01
Wound size is a key parameter in monitoring healing. Current methods to measure wound size are often subjective, time-consuming and marginally invasive. Recently, we developed a non-invasive, non-contact, fast and simple but robust fluorescence imaging (u-FEI) method to monitor the healing of skin wounds. This method exploits the fluorescence of native molecules to tissue as functional and structural markers. The objective of the present study is to demonstrate the feasibility of using variations in the fluorescence intensity of tryptophan and cross-links of collagen to evaluate proliferation of keratinocyte cells and quantitate size of wound during healing, respectively. Circular dermal wounds were created in ex vivo human skin and cultured in different media. Two serial fluorescence images of tryptophan and collagen cross-links were acquired every two days. Histology and immunohistology were used to validate correlation between fluorescence and epithelialization. Images of collagen cross-links show fluorescence of the exposed dermis and, hence, are a measure of wound area. Images of tryptophan show higher fluorescence intensity of proliferating keratinocytes forming new epithelium, as compared to surrounding keratinocytes not involved in epithelialization. These images are complementary since collagen cross-links report on structure while tryptophan reports on function. HE and immunohistology show that tryptophan fluorescence correlates with newly formed epidermis. We have established a fluorescence imaging method for studying epithelialization processes during wound healing in a skin organ culture model, our approach has the potential to provide a non-invasive, non-contact, quick, objective and direct method for quantitative measurements in wound healing in vivo.
The quick acquisition technique for laser communication between LEO and GEO
NASA Astrophysics Data System (ADS)
Zhang, Li-zhong; Zhang, Rui-qin; Li, Yong-hao; Meng, Li-xin; Li, Xiao-ming
2013-08-01
The sight-axis alignment can be accomplished by a quick acquisition operation between two laser communication terminals, which is the premise of establishing a free-space optical communication link. Especially for the laser communication links of LEO (Low Earth Orbit)-Ground and LEO-GEO (Geostationary Earth Orbit), the Earth blocks the laser transmission and interrupts the communication, so the effective time for each communication pass is very short (several minutes to dozens of minutes); as a result, the communication terminals have to capture each other to rebuild the laser communication link. On the basis of an analysis of the traditional methods, this paper presents a new idea of using a long beacon light instead of the circular beacon light, so that the original two-dimensional raster spiral scanning is replaced by one-dimensional scanning. This method will reduce the setup time and decrease the failure probability of acquisition for the LEO-GEO laser communication link. Firstly, an analysis of the external constraint conditions in the acquisition phase is presented. Furthermore, the acquisition algorithm models have been established. The optimization analysis for the parameters of the acquisition unit has been carried out, and the ground validation experiments of the acquisition strategy have also been performed. The experiments and analysis show that, compared with traditional capturing methods, the method presented in this article can shorten the capturing time by about 40% and reduce the failure probability of capturing by about 30%. Therefore, the method is significant for the LEO-GEO laser communication link.
Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam
2017-10-27
Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach to a finite mixture of NLME models for longitudinal data and proportional hazard Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which finite mixture model and Cox model are fitted separately.
NASA Astrophysics Data System (ADS)
Yang, Xinxin; Ge, Shuzhi Sam; He, Wei
2018-04-01
In this paper, both the closed-form dynamics and an adaptive robust tracking control of a space robot with two-link flexible manipulators under unknown disturbances are developed. The dynamic model of the system is derived using the assumed modes approach and the Lagrangian method. The flexible manipulators are represented as Euler-Bernoulli beams. Based on the singular perturbation technique, the displacements/joint angles and the flexible modes are modelled as slow and fast variables, respectively. A sliding mode control is designed for trajectory tracking of the slow subsystem under unknown but bounded disturbances, and an adaptive sliding mode control is derived for the slow subsystem under unknown, slowly time-varying disturbances. An optimal linear quadratic regulator method is proposed for the fast subsystem to damp out the vibrations of the flexible manipulators. Theoretical analysis establishes the stability of the proposed composite controller. Numerical simulation results demonstrate the performance of the closed-loop flexible space robot system.
Implementation of a vibrationally linked chemical reaction model for DSMC
NASA Technical Reports Server (NTRS)
Carlson, A. B.; Bird, Graeme A.
1994-01-01
A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2017-07-01
Though Ogden et al. list several shortcomings of the original SCS-CN method, fitness for purpose is a key consideration in hydrological modelling, as shown by the adoption of the SCS-CN method in many design standards. The theoretical framework of Bartlett et al. [2016a] reveals a family of semidistributed models, of which the SCS-CN method is just one member. Other members include event-based versions of the Variable Infiltration Capacity (VIC) model and TOPMODEL. This general model allows us to move beyond the limitations of the original SCS-CN method under different rainfall-runoff mechanisms and distributions of soil and rainfall variability. Future research should link this general modelling approach to different hydrogeographic settings, in line with the call for action proposed by Ogden et al.
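For reference, the sketch below implements the standard SCS-CN event runoff relation that this comment discusses, using the conventional initial-abstraction ratio Ia = 0.2S; the curve number and rainfall values are placeholders, and none of the numbers come from Bartlett et al. or Ogden et al.

```python
# Minimal sketch of the standard SCS-CN event runoff relation referenced in
# the comment above (the 0.2*S initial-abstraction ratio and the sample
# values are conventional defaults, not taken from the cited papers).
def scs_cn_runoff(P_mm: float, CN: float) -> float:
    """Direct runoff depth Q (mm) for event rainfall P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0      # potential retention (mm)
    Ia = 0.2 * S                  # initial abstraction (mm)
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_cn_runoff(P_mm=80.0, CN=75))  # about 27 mm of direct runoff
```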
Bailey, Ajay; Hutter, Inge
2008-10-01
With 3.1 million people estimated to be living with HIV/AIDS in India and 39.5 million globally, the epidemic has posed academics the challenge of identifying behaviours and their underlying beliefs in the effort to reduce the risk of HIV transmission. The Health Belief Model (HBM) is frequently used to identify risk behaviours and adherence behaviour in the field of HIV/AIDS. Risk behaviour studies that apply the HBM have been largely quantitative, and the use of qualitative methodology is rare. The marriage of qualitative and quantitative methods has never been easy; the challenge lies in triangulating the methods. Method triangulation has largely been used to combine insights from qualitative and quantitative methods, but not to link the two. In this paper we suggest a linked trajectory of method triangulation (LTMT). The linked trajectory aims first to gather individual-level information through in-depth interviews and then to present that information as vignettes in focus group discussions. We thereby validate information obtained from in-depth interviews and gather emic concepts that arise from the interaction, capturing both the interpretation and the interaction angles of the qualitative method. Further, using the qualitative information gained, a survey is designed, so that the survey questions are grounded and contextualized. We employed this linked trajectory of method triangulation in a study on the risk assessment of HIV/AIDS among migrant and mobile men. Fieldwork was carried out in Goa, India. Data come from two waves of studies: first an explorative qualitative study (2003), and second a larger study (2004-2005) including in-depth interviews (25), focus group discussions (21) and a survey (n=1259). By employing the qualitative-to-quantitative LTMT we can not only contextualize the existing concepts of the HBM, but also validate new concepts and identify new risk groups.
Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C
2013-01-01
Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted the radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
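A rough, generic illustration of the group-based modeling step is sketched below: a classifier trained on merged image features, gaze metrics and the radiologist's opinion to flag likely diagnostic errors. The random data, feature layout and choice of classifier are assumptions for demonstration only and do not reflect the study's actual algorithms or results.

```python
# Hedged illustration of the "image content + gaze + cognition -> error"
# modeling step: a generic classifier on merged feature vectors.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cases = 120
X = np.hstack([
    rng.normal(size=(n_cases, 10)),          # texture features of attended regions
    rng.normal(size=(n_cases, 4)),           # gaze metrics (e.g. dwell time)
    rng.integers(0, 2, size=(n_cases, 1)),   # radiologist's opinion (0/1)
])
y = rng.integers(0, 2, size=n_cases)         # 1 = diagnostic error, 0 = correct

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # near chance level on random data
```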
Observation of Topological Links Associated with Hopf Insulators in a Solid-State Quantum Simulator
NASA Astrophysics Data System (ADS)
Yuan, X.-X.; He, L.; Wang, S.-T.; Deng, D.-L.; Wang, F.; Lian, W.-Q.; Wang, X.; Zhang, C.-H.; Zhang, H.-L.; Chang, X.-Y.; Duan, L.-M.
2017-06-01
Hopf insulators are intriguing three-dimensional topological insulators characterized by an integer topological invariant. They originate from the mathematical theory of Hopf fibration and epitomize the deep connection between knot theory and topological phases of matter, which distinguishes them from other classes of topological insulators. Here, we implement a model Hamiltonian for Hopf insulators in a solid-state quantum simulator and report the first experimental observation of their topological properties, including fascinating topological links associated with the Hopf fibration and the integer-valued topological invariant obtained from a direct tomographic measurement. Our observation of topological links and Hopf fibration in a quantum simulator opens the door to probe rich topological properties of Hopf insulators in experiments. The quantum simulation and probing methods are also applicable to the study of other intricate three-dimensional topological model Hamiltonians.
Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.
Mall, Susmita; Chakraverty, S
2016-08-01
A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, in which the hidden layer is replaced by an expansion block that maps the input pattern onto Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used to modify the network parameters and minimize the computed error function. The Van der Pol-Duffing and Duffing oscillator equations cannot, in general, be solved exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life applications of the Van der Pol-Duffing oscillator equation (extracting the features of early mechanical failure signals, and weak signal detection) are solved using the proposed HeNN method. The HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method, and the computed results are presented graphically. After training, the HeNN model may be used as a black box to obtain numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that the method is reliable and can be applied to other nonlinear problems as well.
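A minimal sketch of the functional-link idea is given below: the hidden layer is replaced by a Hermite-polynomial expansion of the input, followed by a single trainable weight vector. The training loop (backpropagation on the ODE residual) is omitted, and the polynomial degree and weights are placeholders rather than values from the paper.

```python
# Minimal sketch of the functional-link expansion behind the HeNN: the input
# is expanded into Hermite polynomial features and mapped to the output by a
# single weight vector. Training details are omitted; the degree and the
# weights below are placeholders.
import numpy as np
from scipy.special import eval_hermite

def hermite_features(t: np.ndarray, degree: int = 5) -> np.ndarray:
    """Expand input t into [H_0(t), H_1(t), ..., H_degree(t)]."""
    return np.stack([eval_hermite(n, t) for n in range(degree + 1)], axis=-1)

def henn_forward(t: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Single-layer functional-link output y(t) = sum_n w_n * H_n(t)."""
    return hermite_features(t, degree=len(weights) - 1) @ weights

t = np.linspace(0.0, 1.0, 11)
w = np.zeros(6)                    # placeholder weights before training
print(henn_forward(t, w).shape)    # (11,)
```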
A Comparison of Linking and Concurrent Calibration under the Graded Response Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, two methods for developing a common metric for the graded response model under item response theory were…
The Model for External Reliance of Localities In (MERLIN) Coastal Management Zones is a proposed solution to allow scaling of variables to smaller, nested geographies. Utilizing a Principal Components Analysis and data normalization techniques, smaller scale trends are linked to ...
Methods for integrated modeling of landscape change: Interior Northwest Landscape Analysis System.
Jane L. Hayes; Alan. A. Ager; R. James Barbour
2004-01-01
The Interior Northwest Landscape Analysis System (INLAS) links a number of resource, disturbance, and landscape simulations models to examine the interactions of vegetative succession, management, and disturbance with policy goals. The effects of natural disturbance like wildfire, herbivory, forest insects and diseases, as well as specific management actions are...
Emotion modelling towards affective pathogenesis.
Bas, James Le
2009-12-01
Objective: There is a need in psychiatry for models that integrate pathological states with normal systems. The interaction of arousal and emotion is the focus of an exploration of affective pathogenesis. Method: Given that the explicit causes of affective disorder remain nascent, methods of linking emotion and disorder are evaluated. Results: A network model of emotional families is presented, in which emotions exist as quantal gradients. Morbid emotional states are seen as the activation of distal emotion sites. The phenomenology of affective disorders is described with reference to this model. Recourse is made to non-linear dynamic theory. Conclusions: Metaphoric emotion models have face validity and may prove a useful heuristic.
Confounding factors in determining causal soil moisture-precipitation feedback
NASA Astrophysics Data System (ADS)
Tuttle, Samuel E.; Salvucci, Guido D.
2017-07-01
Identification of causal links in the land-atmosphere system is important for construction and testing of land surface and general circulation models. However, the land and atmosphere are highly coupled and linked by a vast number of complex, interdependent processes. Statistical methods, such as Granger causality, can help to identify feedbacks from observational data, independent of the different parameterizations of physical processes and spatiotemporal resolution effects that influence feedbacks in models. However, statistical causal identification methods can easily be misapplied, leading to erroneous conclusions about feedback strength and sign. Here, we discuss three factors that must be accounted for in determination of causal soil moisture-precipitation feedback in observations and model output: seasonal and interannual variability, precipitation persistence, and endogeneity. The effect of neglecting these factors is demonstrated in simulated and observational data. The results show that long-timescale variability and precipitation persistence can have a substantial effect on detected soil moisture-precipitation feedback strength, while endogeneity has a smaller effect that is often masked by measurement error and thus is more likely to be an issue when analyzing model data or highly accurate observational data.
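As a hedged illustration of the class of statistical feedback tests discussed above, the sketch below runs a Granger-causality check of whether antecedent soil moisture helps predict precipitation on synthetic data. The paper's actual analysis additionally controls for seasonality, precipitation persistence and endogeneity, which this minimal example does not.

```python
# Hedged sketch of a Granger-causality test of soil moisture -> precipitation.
# The series are synthetic; lag choice and variable names are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
soil_moisture = rng.normal(size=n).cumsum() * 0.01 + 0.3       # synthetic series
precip = 0.2 * np.roll(soil_moisture, 1) + rng.normal(scale=0.1, size=n)

data = pd.DataFrame({"precip": precip, "soil_moisture": soil_moisture})
# Tests H0: soil_moisture does NOT Granger-cause precip, for lags 1..3
results = grangercausalitytests(data[["precip", "soil_moisture"]], maxlag=3)
```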
NASA Astrophysics Data System (ADS)
He, Yaqian; Bo, Yanchen; Chai, Leilei; Liu, Xiaolong; Li, Aihua
2016-08-01
Leaf Area Index (LAI) is an important parameter of vegetation structure. A number of moderate-resolution LAI products have been produced to meet the urgent need for large-scale vegetation monitoring, and high-resolution LAI reference maps are necessary to validate these products. This study used a geostatistical regression (GR) method to estimate LAI reference maps by linking in situ LAI with Landsat TM/ETM+ and SPOT-HRV data over two cropland and two grassland sites. To explore the effect of employing different vegetation indices (VIs) on the estimated LAI reference maps, this study established GR models for several VIs, including the difference vegetation index (DVI), normalized difference vegetation index (NDVI), and ratio vegetation index (RVI). To further assess the performance of the GR model, the results from the GR and Reduced Major Axis (RMA) models were compared. The results show that the performance of the GR model varies between the cropland and grassland sites. At the cropland sites, the GR model based on DVI provides the best estimation, while at the grassland sites, the GR model based on DVI performs poorly. Compared to the RMA model, the GR model improves the accuracy of the reference LAI maps in terms of root mean square error (RMSE) and bias.
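For context, the sketch below shows the Reduced Major Axis (RMA) line used as the comparison model above, with slope = sign(r)·sd(y)/sd(x); the geostatistical regression itself, which also accounts for spatial autocorrelation, is not reproduced, and the NDVI/LAI samples are hypothetical.

```python
# Minimal sketch of the Reduced Major Axis (RMA) baseline; inputs are
# hypothetical NDVI / in situ LAI samples, not the study's data.
import numpy as np

def rma_fit(x: np.ndarray, y: np.ndarray):
    """Reduced Major Axis regression: slope = sign(r) * sd(y) / sd(x)."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

ndvi = np.array([0.35, 0.48, 0.52, 0.61, 0.70, 0.74])
lai_in_situ = np.array([0.8, 1.5, 1.7, 2.4, 3.1, 3.4])
print(rma_fit(ndvi, lai_in_situ))
```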
Interactive 3D segmentation using connected orthogonal contours.
de Bruin, P W; Dercksen, V J; Post, F H; Vossepoel, A M; Streekstra, G J; Vos, F M
2005-05-01
This paper describes a new method for interactive segmentation that is based on cross-sectional design and 3D modelling. The method represents a 3D model by a set of connected contours that are planar and orthogonal. Planar contours overlaid on image data are easily manipulated, and linked contours reduce the amount of user interaction. This method solves the contour-to-contour correspondence problem and can capture extrema of objects in a more flexible way than manual segmentation of a stack of 2D images. The resulting 3D model is guaranteed to be free of geometric and topological errors. We show that manual segmentation using connected orthogonal contours has great advantages over conventional manual segmentation. Furthermore, the method provides effective feedback and control for creating an initial model for, and for controlling and steering, (semi-)automatic segmentation methods.
Revisiting competition in a classic model system using formal links between theory and data.
Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J
2012-09-01
Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be equally affected by interspecific competition with a poor competitor (traditionally defined) as it is by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.
Kramer, Michael R; Dunlop, Anne L; Hogue, Carol J R
2014-02-01
A life course conceptual framework for MCH research demands new tools for understanding population health and measuring exposures. We propose a method for measuring population-based socio-environmental trajectories for women of reproductive age. We merged maternal longitudinally-linked births to Georgia-resident women from 1994 to 2007 with census economic and social measures using residential geocodes to create woman-centered socio-environmental trajectories. We calculated a woman's neighborhood deprivation index (NDI) at the time of each of her births and, from these, we calculated a cumulative NDI. We fit Loess curves to describe average life course NDI trajectories and binomial regression models to test specific life course theory hypotheses relating cumulative NDI to risk for preterm birth. Of the 1,815,944 total live births, we linked 1,000,437 live births to 413,048 unique women with two or more births. Record linkage had high specificity but relatively low sensitivity, which appears non-differential with respect to maternal characteristics. Georgia women on average experienced upward mobility across the life course, although differences by race, early life neighborhood quality, and age at first birth produced differences in cumulative NDI. Adjusted binomial models found evidence for modification of the effect of history of prior preterm birth and of advancing age on risk for preterm birth by cumulative NDI. The creation of trajectories from geocoded maternal longitudinally-linked vital records is one method to carry out life course MCH research. We discuss approaches for investigating the impact of truncation of the life course, selection bias from migration, and misclassification of cumulative exposure.
Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network
NASA Astrophysics Data System (ADS)
Fallahpour, R.; Chakouvari, S.; Askari, H.
2015-03-01
In this paper, the Laplace Adomian decomposition method (LADM) is utilized to solve a rumor-spreading model. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model that includes a forgetting mechanism is then considered, and LADM is applied to solve it. By means of this method, a general solution is obtained that can readily be employed to assess the rumor model without the need for any computer program. The results are discussed for different cases and parameters. Furthermore, it is shown that the method is straightforward and effective for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for obtaining solutions of this model. Finally, it is concluded that the method is well suited to this problem and can provide researchers with a powerful tool for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.
Coupled basin-scale water resource models for arid and semiarid regions
NASA Astrophysics Data System (ADS)
Winter, C.; Springer, E.; Costigan, K.; Fasel, P.; Mniewski, S.; Zyvoloski, G.
2003-04-01
Managers of semi-arid and arid water resources must allocate increasingly variable surface sources and limited groundwater resources to growing demands. This challenge is leading to a new generation of detailed computational models that link multiple interacting sources and demands. We will discuss a new computational model of arid region hydrology that we are parameterizing for the upper Rio Grande Basin of the United States. The model consists of linked components for the atmosphere (the Regional Atmospheric Modeling System, RAMS), surface hydrology (the Los Alamos Distributed Hydrologic System, LADHS), and groundwater (the Finite Element Heat and Mass code, FEHM), and the couplings between them. The model runs under the Parallel Application WorkSpace software developed at Los Alamos for applications running on large distributed memory computers. RAMS simulates regional meteorology coupled to global climate data on the one hand and land surface hydrology on the other. LADHS generates runoff by infiltration or saturation excess mechanisms, as well as interception, evapotranspiration, and snow accumulation and melt. FEHM simulates variably saturated flow and heat transport in three dimensions. A key issue is to increase the components’ spatial and temporal resolution to account for changes in topography and other rapidly changing variables that affect results such as soil moisture distribution or groundwater recharge. Thus, RAMS’ smallest grid is 5 km on a side, LADHS uses 100 m spacing, while FEHM concentrates processing on key volumes by means of an unstructured grid. Couplings within our model are based on new scaling methods that link groundwater-groundwater systems and streams to aquifers and we are developing evapotranspiration methods based on detailed calculations of latent heat and vegetative cover. Simulations of precipitation and soil moisture for the 1992-93 El Nino year will be used to demonstrate the approach and suggest further needs.
NASA Astrophysics Data System (ADS)
Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine
2016-04-01
Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the hydro-systems considered. Surface weather scenarios obtained directly from global climate models and/or numerical weather prediction models are therefore not really appropriate. Outputs of these models have to be post-processed, which is often carried out with Statistical Downscaling Methods (SDMs). Among SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large-scale atmospheric predictors and the surface weather variable, and these links are then used to predict the latter. However, the physical processes generating surface weather vary in time; this is well known for precipitation, for instance. The most relevant predictors and the regression link are therefore also likely to vary in time. A better prediction skill is thus classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar - or analog - to the target date. In practice, these dates can be selected with an analog model. In this study, we explore the possibility of improving the local performance of an analog model - where the analogy is applied to the geopotential heights at 1000 and 500 hPa - using additional local-scale predictors for the probabilistic prediction of the Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models - for the occurrence and for the quantity of precipitation - whose predictors and parameters are estimated from the analog dates. Firstly, the resulting combined model noticeably increases the prediction performance by adapting the downscaling link for each prediction day. Secondly, the predictors selected for a given prediction depend on the large-scale situation and on the region considered. Finally, even with such adaptive predictor identification, the downscaling link appears to be robust: for a given prediction day, the predictors selected for different locations of a given region are similar and the regression parameters are consistent within the region of interest.
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution, with confidence bounds estimated using approximations given by Zaliapin et al. In this way, available seismic moment is expressed in terms of reservoir volume change, and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field, calibrated using various geodetic data, allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
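A hedged sketch of the Monte Carlo sampling step is given below: synthetic catalog magnitudes are drawn from a truncated Gutenberg-Richter (exponential) distribution, converted to seismic moment with the standard moment-magnitude relation, and summed. The b-value, magnitude range and event counts are placeholders; the Pareto Sum Distribution fitting and the strain-partitioning function α from the paper are not reproduced.

```python
# Sketch of sampling synthetic catalogs with Gutenberg-Richter magnitudes and
# summing their seismic moments. All parameter values are placeholders.
import numpy as np

def sample_catalog_moment(n_events: int, b_value: float = 1.0,
                          m_min: float = 1.5, m_max: float = 5.0,
                          rng=np.random.default_rng(0)) -> float:
    """Total seismic moment (N*m) of one synthetic catalog."""
    # Truncated Gutenberg-Richter magnitudes via inverse-transform sampling
    u = rng.uniform(size=n_events)
    beta = b_value * np.log(10.0)
    mags = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta
    # Moment-magnitude relation: log10(M0) = 1.5 * Mw + 9.1  (M0 in N*m)
    moments = 10.0 ** (1.5 * mags + 9.1)
    return moments.sum()

totals = [sample_catalog_moment(200) for _ in range(1000)]
print(np.percentile(totals, [50, 95]))   # median and 95th-percentile total moment
```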
Valkonen, Mira; Ruusuvuori, Pekka; Kartasalo, Kimmo; Nykter, Matti; Visakorpi, Tapio; Latonen, Leena
2017-01-01
Cancer involves histological changes in tissue, which is of primary importance in pathological diagnosis and research. Automated histological analysis requires ability to computationally separate pathological alterations from normal tissue with all its variables. On the other hand, understanding connections between genetic alterations and histological attributes requires development of enhanced analysis methods suitable also for small sample sizes. Here, we set out to develop computational methods for early detection and distinction of prostate cancer-related pathological alterations. We use analysis of features from HE stained histological images of normal mouse prostate epithelium, distinguishing the descriptors for variability between ventral, lateral, and dorsal lobes. In addition, we use two common prostate cancer models, Hi-Myc and Pten+/− mice, to build a feature-based machine learning model separating the early pathological lesions provoked by these genetic alterations. This work offers a set of computational methods for separation of early neoplastic lesions in the prostates of model mice, and provides proof-of-principle for linking specific tumor genotypes to quantitative histological characteristics. The results obtained show that separation between different spatial locations within the organ, as well as classification between histologies linked to different genetic backgrounds, can be performed with very high specificity and sensitivity. PMID:28317907
Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa
2016-08-01
For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
Xiang, Junxi; Liu, Peng; Zheng, Xinglong; Dong, Dinghui; Fan, Shujuan; Dong, Jian; Zhang, Xufeng; Liu, Xuemin; Wang, Bo; Lv, Yi
2017-10-01
Weak mechanical properties and an unstable degradation rate have limited the application of decellularized liver matrix in tissue engineering. The aim of this study was to explore a new method for improving the mechanical properties, anti-degradation and angiogenic capability of decellularized liver matrix. This was achieved by a novel approach using riboflavin/ultraviolet A (UVA) treatment to induce collagen cross-linking of the decellularized matrix. Histological staining and scanning electron microscopy showed that the diameter of cross-linked fibers significantly increased compared with the control group. The average peak load and Young's modulus of the decellularized matrix were markedly improved after cross-linking. We then implanted the modified matrix into a rat hepatic injury model to test the anti-degradation and angiogenic capability of riboflavin/UVA cross-linked decellularized liver scaffolds in vivo. The results indicated that cross-linked scaffolds degrade more slowly than those in the control group. In the experimental group, the average microvessel density in the implanted matrix was higher than that in the control group from the first week after implantation. In conclusion, we introduced a method to improve the biomechanical properties of decellularized liver scaffolds by riboflavin/UVA cross-linking and, more importantly, demonstrated its benefit for anti-degradation and angiogenesis. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 105A: 2662-2669, 2017.
A Multi-Cycle Q-Modulation for Dynamic Optimization of Inductive Links.
Lee, Byunghun; Yeon, Pyungwoo; Ghovanloo, Maysam
2016-08-01
This paper presents a new method, called multi-cycle Q-modulation, which can be used in wireless power transmission (WPT) to modulate the quality factor (Q) of the receiver (Rx) coil and dynamically optimize the load impedance to maximize the power transfer efficiency (PTE) in two-coil links. A key advantage of the proposed method is that it can be easily implemented using off-the-shelf components without requiring fast switching at or above the carrier frequency, which is more suitable for integrated circuit design. Moreover, the proposed technique does not need any sophisticated synchronization between the power carrier and the Q-modulation switching pulses. The multi-cycle Q-modulation is analyzed theoretically using a lumped circuit model, and verified in simulation and measurement using an off-the-shelf prototype. Automatic resonance tuning (ART) in the Rx, combined with multi-cycle Q-modulation, helped maximize the PTE of the inductive link dynamically in the presence of environmental and loading variations, which can otherwise significantly degrade the PTE in multi-coil settings. In the prototype conventional two-coil link, the proposed method increased the power amplifier (PA) plus inductive link efficiency from 4.8% to 16.5% at (RL = 1 kΩ, d23 = 3 cm), and from 23% to 28.2% at (RL = 100 Ω, d23 = 3 cm) after an 11% change in the resonance capacitance, while delivering 168.1 mW to the load (PDL).
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies a generalized semiparametric regression model for longitudinal data in which the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation, and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates, without specifically modelling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit of the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performance of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
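As a generic illustration of two ingredients mentioned above, the sketch below implements a local linear kernel smoother and K-fold cross-validation over a bandwidth grid. It does not reproduce the authors' estimating-equation framework or their handling of sampling-time heterogeneity; the data and bandwidth grid are synthetic.

```python
# Illustrative sketch: local linear kernel smoothing with K-fold
# cross-validated bandwidth selection on synthetic data.
import numpy as np
from sklearn.model_selection import KFold

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y|x=x0] with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)
    return beta[0]

def cv_bandwidth(x, y, grid, n_splits=5):
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for h in grid:
        err = 0.0
        for train, test in kf.split(x):
            preds = [local_linear(x0, x[train], y[train], h) for x0 in x[test]]
            err += np.mean((y[test] - np.array(preds)) ** 2)
        scores.append(err / n_splits)
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
print(cv_bandwidth(x, y, grid=np.array([0.02, 0.05, 0.1, 0.2])))
```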
Combining a dispersal model with network theory to assess habitat connectivity.
Lookingbill, Todd R; Gardner, Robert H; Ferrari, Joseph R; Keller, Cherry E
2010-03-01
Assessing the potential for threatened species to persist and spread within fragmented landscapes requires the identification of core areas that can sustain resident populations and dispersal corridors that can link these core areas with isolated patches of remnant habitat. We developed a set of GIS tools, simulation methods, and network analysis procedures to assess potential landscape connectivity for the Delmarva fox squirrel (DFS; Sciurus niger cinereus), an endangered species inhabiting forested areas on the Delmarva Peninsula, USA. Information on the DFS's life history and dispersal characteristics, together with data on the composition and configuration of land cover on the peninsula, were used as input data for an individual-based model to simulate dispersal patterns of millions of squirrels. Simulation results were then assessed using methods from graph theory, which quantifies habitat attributes associated with local and global connectivity. Several bottlenecks to dispersal were identified that were not apparent from simple distance-based metrics, highlighting specific locations for landscape conservation, restoration, and/or squirrel translocations. Our approach links simulation models, network analysis, and available field data in an efficient and general manner, making these methods useful and appropriate for assessing the movement dynamics of threatened species within landscapes being altered by human and natural disturbances.
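A hedged sketch of the graph-theoretic connectivity step is given below: habitat patches become nodes, an edge is added when a simple dispersal kernel judges movement between two patches feasible, and standard network metrics then flag clusters and potential bottlenecks. The patch coordinates, kernel and threshold are placeholders rather than Delmarva fox squirrel data, and the individual-based dispersal simulation itself is not reproduced.

```python
# Sketch of the network-analysis step: patches as nodes, edges where a
# simple exponential dispersal kernel exceeds a threshold. All values are
# placeholders for illustration.
import itertools
import numpy as np
import networkx as nx

patches = {0: (0, 0), 1: (2, 1), 2: (5, 5), 3: (6, 5), 4: (11, 9)}  # km
alpha, threshold = 0.5, 0.05         # dispersal kernel decay and cutoff

G = nx.Graph()
G.add_nodes_from(patches)
for i, j in itertools.combinations(patches, 2):
    d = np.hypot(*(np.array(patches[i]) - np.array(patches[j])))
    if np.exp(-alpha * d) > threshold:        # dispersal deemed feasible
        G.add_edge(i, j, distance=d)

print(list(nx.connected_components(G)))       # clusters of linked habitat
print(nx.betweenness_centrality(G))           # potential dispersal bottlenecks
```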
Toxicokinetic Triage for Environmental Chemicals
Toxicokinetic (TK) models are essential for linking administered doses to blood and tissue concentrations. In vitro-to-in vivo extrapolation (IVIVE) methods have been developed to determine TK from limited in vitro measurements and chemical structure-based property predictions, p...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thessen, Anne E.; Bunker, Daniel E.; Buttigieg, Pier Luigi
Understanding the interplay between environmental conditions and phenotypes is a fundamental goal of biology. Unfortunately, data that include observations on phenotype and environment are highly heterogeneous and thus difficult to find and integrate. One approach that is likely to improve the status quo involves the use of ontologies to standardize and link data about phenotypes and environments. Specifying and linking data through ontologies will allow researchers to increase the scope and flexibility of large-scale analyses aided by modern computing methods. Investments in this area would advance diverse fields such as ecology, phylogenetics, and conservation biology. While several biological ontologies are well developed, using them to link phenotypes and environments is rare because of gaps in ontological coverage and limits to interoperability among ontologies and disciplines. In this manuscript, we present (1) use cases from diverse disciplines to illustrate questions that could be answered more efficiently using a robust linkage between phenotypes and environments, (2) two proof-of-concept analyses that show the value of linking phenotypes to environments in fishes and amphibians, and (3) two proposed example data models for linking phenotypes and environments using the extensible observation ontology (OBOE) and the Biological Collections Ontology (BCO); these provide a starting point for the development of a data model linking phenotypes and environments.
Unterberger, Michael J; Holzapfel, Gerhard A
2014-11-01
The protein actin is a part of the cytoskeleton and is therefore responsible for the mechanical properties of cells. From the single molecule up to the final structure, actin creates a hierarchical structure of several levels exhibiting remarkable behavior. The hierarchy spans several length scales and, given limitations in computational power, calls for different mechanical modeling approaches at the different scales. On the molecular level, we may consider each atom in molecular dynamics simulations. Actin forms filaments by combining the molecules into a double helix. In a model, we replace molecular subdomains using coarse-graining methods, allowing the investigation of larger systems of several atoms. These models on the nanoscale inform continuum mechanical models of large filaments, which are based on worm-like chain models for polymers. Assemblies of actin filaments are connected by cross-linker proteins. Models with discrete filaments, so-called Mikado models, allow us to investigate the dependence of network properties on the parameters of the constituents. Microstructurally motivated continuum models of the networks provide insights into larger systems containing cross-linked actin networks. Modeling of such systems helps to gain insight into the processes at these small scales. On the other hand, it calls for verification and hence triggers the improvement of established experiments and the development of new methods.
An improved strategy for regression of biophysical variables and Landsat ETM+ data.
Warren B. Cohen; Thomas K. Maiersperger; Stith T. Gower; David P. Turner
2003-01-01
Empirical models are important tools for relating field-measured biophysical variables to remote sensing data. Regression analysis has been a popular empirical method of linking these two types of data to provide continuous estimates for variables such as biomass, percent woody canopy cover, and leaf area index (LAI). Traditional methods of regression are not...
ERIC Educational Resources Information Center
Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R.
2018-01-01
This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value…
Dubay, Rickey; Hassan, Marwan; Li, Chunying; Charest, Meaghan
2014-09-01
This paper presents a unique approach for active vibration control of a one-link flexible manipulator. The method combines a finite element model of the manipulator and an advanced model predictive controller to suppress vibration at its tip. This hybrid methodology improves significantly over the standard application of a predictive controller for vibration control. The finite element model used in place of standard modelling in the control algorithm provides a more accurate prediction of dynamic behavior, resulting in enhanced control. Closed loop control experiments were performed using the flexible manipulator, instrumented with strain gauges and piezoelectric actuators. In all instances, experimental and simulation results demonstrate that the finite element based predictive controller provides improved active vibration suppression in comparison with using a standard predictive control strategy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Study on SOC wavelet analysis for LiFePO4 battery
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Improving the prediction accuracy of the state of charge (SOC) can reduce the conservatism and complexity of control strategies such as the scheduling, optimization and planning of a LiFePO4 battery system. Based on an analysis of the relationship between SOC historical data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model implements the forecast step, while measured external stress data are used to update the parameter estimates in the model, implementing the correction step, so that the forecast model can adapt to the LiFePO4 battery as its operating point varies over the rated charge and discharge operating region. The test results show that the method yields a higher-precision prediction model even when the input and output of the LiFePO4 battery change frequently.
A novel time series link prediction method: Learning automata approach
NASA Astrophysics Data System (ADS)
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2017-09-01
Link prediction is a key challenge in social network analysis that uses the network structure to predict future links. Common link prediction approaches use a static graph representation in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected pair, sorts the pairs by their similarity scores, and labels the pairs with the highest scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of a social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, there is one learning automaton for each link to be predicted, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time-series link occurrences are considered.
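For intuition, the sketch below implements a two-action linear reward-inaction (L_RI) learning automaton that updates its probability of a single link's existence from that link's past occurrences. It illustrates the general learning-automaton mechanism only; the paper's stage-chain construction and its reward/penalty scheme are not reproduced, and the occurrence history is hypothetical.

```python
# Two-action linear reward-inaction (L_RI) automaton predicting one link
# from its time series of past occurrences. Illustrative only.
import numpy as np

def lri_predict_link(occurrences, learning_rate=0.1):
    """occurrences: list of 0/1 link observations at times 1..T-1."""
    p = np.array([0.5, 0.5])               # P(action=absent), P(action=present)
    rng = np.random.default_rng(0)
    for observed in occurrences:
        action = rng.choice([0, 1], p=p)   # automaton guesses absent/present
        if action == observed:             # reward: reinforce the chosen action
            p[action] += learning_rate * (1.0 - p[action])
            p[1 - action] = 1.0 - p[action]
        # on a mismatch (penalty), L_RI leaves the probabilities unchanged
    return p[1]                            # predicted probability link exists at time T

history = [0, 1, 1, 0, 1, 1, 1, 1]         # hypothetical link occurrences
print(lri_predict_link(history))
```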
NASA Astrophysics Data System (ADS)
Del Vescovo, D.; D'Ambrogio, W.
1995-01-01
A frequency domain method is presented to design a closed-loop control for vibration reduction in flexible mechanisms. The procedure is developed on a single-link flexible arm driven by a one rotary degree of freedom servomotor, although the same technique may be applied to similar systems such as supports for aerospace antennae or solar panels. The method uses the structural frequency response functions (FRFs), thus avoiding system identification, which introduces modeling uncertainties. Two closed loops are implemented: the inner loop uses acceleration feedback with the aim of making the FRF similar to that of an equivalent rigid link; the outer loop feeds back displacements to achieve a fast positioning response and zero steady-state error. In both cases, the controller type is established a priori, while the actual characteristics are defined by an optimisation procedure in which the relevant FRF is constrained within prescribed bounds and stability is taken into account.
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2018-03-26
In this paper we present a framework for the reduction and linking of physiologically based pharmacokinetic (PBPK) models with models of systems biology to describe the effects of drug administration across multiple scales. To address the issue of model complexity, we propose the reduction of each type of model separately prior to being linked. We highlight the use of balanced truncation in reducing the linear components of PBPK models, whilst proper lumping is shown to be efficient in reducing typically nonlinear systems biology type models. The overall methodology is demonstrated via two example systems; a model of bacterial chemotactic signalling in Escherichia coli and a model of extracellular regulatory kinase activation mediated via the extracellular growth factor and nerve growth factor receptor pathways. Each system is tested under the simulated administration of three hypothetical compounds; a strong base, a weak base, and an acid, mirroring the parameterisation of pindolol, midazolam, and thiopental, respectively. Our method can produce up to an 80% decrease in simulation time, allowing substantial speed-up for computationally intensive applications including parameter fitting or agent based modelling. The approach provides a straightforward means to construct simplified Quantitative Systems Pharmacology models that still provide significant insight into the mechanisms of drug action. Such a framework can potentially bridge pre-clinical and clinical modelling - providing an intermediate level of model granularity between classical, empirical approaches and mechanistic systems describing the molecular scale.
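As a minimal sketch of the balanced-truncation step described above, assuming the python-control package is available, the code below reduces a toy 3-state linear system to 2 states; the matrices are illustrative stand-ins rather than a PBPK model, and the proper-lumping reduction of the nonlinear systems-biology component is not shown.

```python
# Balanced truncation of a toy linear system with python-control (assumed
# available). The 3-compartment matrices below are placeholders.
import numpy as np
import control

# Toy stable 3-state system: x' = A x + B u, y = C x
A = np.array([[-1.0, 0.2, 0.0],
              [0.1, -0.5, 0.05],
              [0.0, 0.02, -0.1]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
D = np.array([[0.0]])

full = control.ss(A, B, C, D)
reduced = control.balred(full, orders=2, method='truncate')  # keep 2 states
print(reduced.A.shape)   # (2, 2)
```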
A Comparison of Two Models for Cognitive Diagnosis. Research Report. ETS RR-04-02
ERIC Educational Resources Information Center
Yan, Duanli; Almond, Russell; Mislevy, Robert
2004-01-01
Diagnostic score reports linking assessment outcomes to instructional interventions are one of the most requested features of assessment products. There is a body of interesting work done in the last 20 years including Tatsuoka's rule space method (Tatsuoka, 1983), Haertal and Wiley's binary skills model (Haertal, 1984; Haertal & Wiley, 1993),…
ERIC Educational Resources Information Center
lo, C. Owen
2014-01-01
Using a realist grounded theory method, this study resulted in a theoretical model and 4 propositions. As displayed in the LINK model, the labeling practice is situated in and endorsed by a social context that carries explicit theory about and educational policies regarding the labels. Taking a developmental perspective, the labeling practice…
Linking Outcomes from Peabody Picture Vocabulary Test Forms Using Item Response Models
ERIC Educational Resources Information Center
Hoffman, Lesa; Templin, Jonathan; Rice, Mabel L.
2012-01-01
Purpose: The present work describes how vocabulary ability as assessed by 3 different forms of the Peabody Picture Vocabulary Test (PPVT; Dunn & Dunn, 1997) can be placed on a common latent metric through item response theory (IRT) modeling, by which valid comparisons of ability between samples or over time can then be made. Method: Responses…
NASA Astrophysics Data System (ADS)
Dehkordi, N. Mahdian; Sadati, N.; Hamzeh, M.
2017-09-01
This paper presents a robust dc-link voltage and current control strategy for a bidirectional interlink converter (BIC) in a hybrid ac/dc microgrid. To enhance dc-bus voltage control, conventional methods strive to measure and feed forward the load or source power in the dc-bus control scheme. However, these feedforward-based approaches require remote measurement with communication links. Moreover, conventional methods suffer from stability and performance issues, mainly due to the use of small-signal-based control design. To overcome these issues, in this paper the power imposed on the BIC by the DG units of the dc subgrid is treated as an unmeasurable disturbance signal. In contrast to existing methods, the proposed approach uses the nonlinear model of the BIC to design a robust controller that, without remote measurement or communication, effectively rejects the impact of the disturbance on the BIC's dc-link voltage. Avoiding communication links gives the robust controller a plug-and-play feature that makes it possible to add a DG/load to, or remove it from, the dc subgrid without compromising the stability of the hybrid microgrid. Finally, Monte Carlo simulations are conducted to confirm the effectiveness of the proposed control strategy in the MATLAB/SimPowerSystems software environment.
Downing, Julia; Batuli, Mwazi; Kivumbi, Grace; Kabahweza, Josephine; Grant, Liz; Murray, Scott A; Namukwaya, Elizabeth; Leng, Mhoira
2016-04-08
Integrating palliative care (PC) and empowering the health care workforce is essential to achieve universal access to PC services. In 2010, 46% of patients in Mulago Hospital, Uganda had a life-limiting illness, of whom 96% had PC needs. The university/hospital specialist PC unit (Makerere/Mulago Palliative Care Unit, MPCU) implemented a link-nurse model to empower hospital nurses to provide generalist PC. Over two years, 27 link nurses were trained and mentored and 11 clinical protocols were developed. The aim of the study was to evaluate the impact of the palliative care link-nurse programme at Mulago Hospital. An evaluation approach utilising mixed methods was used, integrating qualitative and quantitative data including: pre- and post-course confidence ratings; course evaluation forms; an audit of clinical guideline availability; review of link-nurse activity sheets/action plans; review of MPCU patient documentation; Most Significant Change (MSC); and individual and focus group interviews. A significant difference was seen in nurses' confidence after the training (p < 0.001). From July 2012 to December 2013, link nurses identified 2447 patients needing PC, of whom they cared for 2113 (86%) and referred 334 (14%) to MPCU. Clinical guidelines/protocols were utilised in 50% of wards. Main themes identified include: change in attitude; development of new skills and knowledge; change in relationships; and improved outcomes of care, along with the challenges experienced in integrating PC. Since the start of the programme there has been an increase in PC patients seen at the hospital (611 in 2011 to 1788 in 2013). The link-nurse programme is a practical model for integrating PC into generalist services. Recommendations have been made for ongoing development and expansion of the programme as an effective health-systems-strengthening approach in similar healthcare contexts, as well as for improvement in medical and nursing education.
NASA Astrophysics Data System (ADS)
D, Meena; Francis, Fredy; T, Sarath K.; E, Dipin; Srinivas, T.; K, Jayasree V.
2014-10-01
Wavelength Division Multiplexing (WDM) techniques over fibre links help to exploit the high bandwidth capacity of single-mode fibres. A typical WDM link consisting of a laser source, multiplexer/demultiplexer, amplifier and detector is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical and various curve-fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system, thus providing a black-box solution for a link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This will help the designer select a suitable WDM link model during a complex link design.
Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data
NASA Astrophysics Data System (ADS)
Wang, Jie; Shen, Yuzhong
2011-03-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially for those existing in the real world. A real road network contains various elements such as road segments, road intersections and traffic interchanges. Among them, traffic interchanges present the greatest modeling challenges due to their complexity and the lack of height (vertical position) information for traffic interchanges in existing road GIS data. This paper proposes a novel approach that can automatically produce high-fidelity 3D road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract the road network topology, merge redundant links, and classify road types. Then overlapping points in the interchanges are detected and their elevations are determined based on a set of level-estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting; they have the advantage of supporting arbitrary levels of detail with reduced memory usage. Finally, a set of civil engineering rules for road design (e.g., cross slope, superelevation) is selected and used to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other, more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Ruili; Lu, Fachuang; Zhu, Yimin
Linking lignin model compounds to carrier proteins is required either to raise antibodies to them or to structurally screen antibodies raised against lignins or models. This paper describes a flexible method to link phenolic compounds of interest to cationic bovine serum albumin (cBSA) without interfering with their important structural features. With the guaiacylglycerol-β-guaiacyl ether dimer, for example, the linking was accomplished in 89% yield with the number of dimers per carrier protein being as high as 50; NMR experiments on a 15N- and 13C-labeled conjugation product indicated that 13 dimers were added to the native lysine residues and the remainder (~37) to the amine moieties on the ethylenediamine linkers added to BSA; ~32% of the available primary amine groups on cBSA were therefore conjugated to the hapten. As a result, this loading is suitable for attempting to raise new antibodies to plant lignins and for screening.
Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R
2018-01-01
This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided.
NASA Astrophysics Data System (ADS)
Tkacz, J.; Bukowiec, A.; Doligalski, M.
2017-08-01
The paper presents a method for the modeling and implementation of concurrent controllers. Concurrent controllers are specified by Petri nets. The Petri nets are then decomposed using a symbolic deduction method of analysis. Formal methods such as a sequent calculus system, combined with elements of Thelen's algorithm, are used here. As a result, linked state machines (LSMs) are obtained. Each FSM is implemented using methods of structural decomposition during the process of logic synthesis. A method of multiple encoding of microinstructions is applied, which decreases the number of Boolean functions realized by the combinational part of the FSM. The additional decoder can be implemented with the use of memory blocks.
NASA Technical Reports Server (NTRS)
Dubowsky, Steven
1989-01-01
An approach is described to modeling the flexibility effects in spatial mechanisms and manipulator systems. The method is based on finite element representations of the individual links in the system. However, it should be noted that conventional finite element methods and software packages will not handle the highly nonlinear dynamic behavior of these systems, which results from their changing geometry. In order to design high-performance lightweight systems and their control systems, good models of their dynamic behavior which include the effects of flexibility are required.
Estimating survival of radio-tagged birds
Bunck, C.M.; Pollock, K.H.; Lebreton, J.-D.; North, P.M.
1993-01-01
Parametric and nonparametric methods for estimating survival of radio-tagged birds are described. The general assumptions of these methods are reviewed. An estimate based on the assumption of constant survival throughout the period is emphasized in the overview of parametric methods. Two nonparametric methods, the Kaplan-Meier estimate of the survival function and the log-rank test, are explained in detail. The link between these nonparametric methods and traditional capture-recapture models is discussed along with considerations in designing studies that use telemetry techniques to estimate survival.
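As an illustration of the Kaplan-Meier idea mentioned above, here is a small product-limit estimator for right-censored telemetry data; the data and variable names are hypothetical and the sketch is not taken from the paper.

```python
# Illustrative sketch: a basic Kaplan-Meier (product-limit) estimator for
# right-censored telemetry data.  'times' are days until death or censoring
# (signal loss / end of study); 'event' is 1 for an observed death, 0 otherwise.
import numpy as np

def kaplan_meier(times, event):
    times = np.asarray(times, float)
    event = np.asarray(event, int)
    uniq = np.unique(times[event == 1])          # distinct death times
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)             # birds still monitored at t
        deaths = np.sum((times == t) & (event == 1))
        s *= 1.0 - deaths / at_risk              # product-limit update
        surv.append((t, s))
    return surv

# Hypothetical data: 10 radio-tagged birds followed for up to 90 days.
times = [12, 25, 25, 40, 55, 60, 60, 75, 90, 90]
event = [1,  1,  0,  1,  0,  1,  0,  1,  0,  0]
for t, s in kaplan_meier(times, event):
    print(f"day {t:3.0f}: S(t) = {s:.3f}")
```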
Real-Time Distributed Embedded Oscillator Operating Frequency Monitoring
NASA Technical Reports Server (NTRS)
Pollock, Julie; Oliver, Brett; Brickner, Christopher
2012-01-01
A document discusses the utilization of embedded clocks inside operating network data links as an auxiliary clock source to satisfy local oscillator monitoring requirements. Modern network interfaces, typically serial network links, often contain embedded clocking information of very tight precision to recover data from the link. This embedded clocking data can be utilized by the receiving device to monitor the local oscillator for tolerance to required specifications, often important in high-integrity fault-tolerant applications. A device can utilize a received embedded clock to determine if the local or the remote device is out of tolerance by using a single link. The local device can determine if it is failing, assuming a single fault model, with two or more active links. Network fabric components, containing many operational links, can potentially determine faulty remote or local devices in the presence of multiple faults. Two methods of implementation are described. In one method, a recovered clock can be directly used to monitor the local clock as a direct replacement of an external local oscillator. This scheme is consistent with a general clock monitoring function whereby two clock sources each drive a counter and the counts are compared over a fixed interval of time. In another method, overflow/underflow conditions can be used to detect clock relationships for monitoring. These network interfaces often provide clock compensation circuitry to allow data to be transferred from the received (network) clock domain to the internal clock domain. This circuit could be modified to detect overflow/underflow conditions of the buffering required and report a fast or slow receive clock, respectively.
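A conceptual sketch of the counter-comparison scheme described in the first method: both the local oscillator and the recovered link clock drive counters over a fixed gate interval, and the ratio of the counts is checked against a tolerance. The numbers and names below are illustrative, not taken from the document.

```python
# Conceptual sketch (illustrative names and numbers): compare a local oscillator
# against a clock recovered from a serial link by counting cycles of both over
# the same gate interval and checking the ratio against a ppm tolerance.
def clock_out_of_tolerance(local_count, recovered_count,
                           nominal_ratio=1.0, tolerance_ppm=100.0):
    """Return True if the measured count ratio deviates from the nominal ratio
    by more than tolerance_ppm over the comparison interval."""
    measured_ratio = local_count / recovered_count
    deviation_ppm = abs(measured_ratio / nominal_ratio - 1.0) * 1e6
    return deviation_ppm > tolerance_ppm

# Example: both counters sampled over the same gate interval.
print(clock_out_of_tolerance(10_000_000, 10_000_450))   # ~45 ppm off  -> False
print(clock_out_of_tolerance(10_000_000, 10_002_500))   # ~250 ppm off -> True
```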
On one solution of Volterra integral equations of second kind
NASA Astrophysics Data System (ADS)
Myrhorod, V.; Hvozdeva, I.
2016-10-01
A solution of Volterra integral equations of the second kind with separable and difference kernels, based on solutions of the corresponding equations linking the kernel and resolvent, is suggested. On the basis of a class of discrete functions, the equations linking the kernel and resolvent are obtained and methods for their analytical solution are proposed. A mathematical model of the gas-turbine engine state modification processes, in the form of a Volterra integral equation of the second kind with a separable kernel, is offered.
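For comparison with the analytical kernel-resolvent approach described above, a standard numerical treatment of a second-kind Volterra equation with a difference kernel looks like the following trapezoidal-rule sketch; this is a generic method, not the authors' solution.

```python
# Numerical sketch for comparison only: solve a second-kind Volterra equation
#   y(t) = f(t) + integral_0^t K(t - s) y(s) ds
# with a difference kernel, using the trapezoidal rule on a uniform grid.
import numpy as np

def solve_volterra2(f, K, t_end=1.0, n=200):
    t = np.linspace(0.0, t_end, n + 1)
    h = t[1] - t[0]
    y = np.empty(n + 1)
    y[0] = f(t[0])
    for i in range(1, n + 1):
        # trapezoidal quadrature of the convolution, solved for y[i]
        s = 0.5 * K(t[i] - t[0]) * y[0] + np.sum(K(t[i] - t[1:i]) * y[1:i])
        y[i] = (f(t[i]) + h * s) / (1.0 - 0.5 * h * K(0.0))
    return t, y

# Test case with a known solution: K = 1, f = 1  =>  y(t) = exp(t).
f = lambda t: 1.0 + 0.0 * t
K = lambda u: 1.0 + 0.0 * u
t, y = solve_volterra2(f, K)
print("max error vs exp(t):", np.max(np.abs(y - np.exp(t))))
```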
Biomedical hypothesis generation by text mining and gene prioritization.
Petric, Ingrid; Ligeti, Balazs; Gyorffy, Balazs; Pongor, Sandor
2014-01-01
Text mining methods can facilitate the generation of biomedical hypotheses by suggesting novel associations between diseases and genes. Previously, we developed a rare-term model called RaJoLink (Petric et al, J. Biomed. Inform. 42(2): 219-227, 2009) in which hypotheses are formulated on the basis of terms rarely associated with a target domain. Since many current medical hypotheses are formulated in terms of molecular entities and molecular mechanisms, here we extend the methodology to proteins and genes, using a standardized vocabulary as well as a gene/protein network model. The proposed enhanced RaJoLink rare-term model combines text mining and gene prioritization approaches. Its utility is illustrated by finding known as well as potential gene-disease associations in ovarian cancer using MEDLINE abstracts and the STRING database.
NASA Astrophysics Data System (ADS)
Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan
2018-02-01
Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We note that clustering information plays an important role in solving the link prediction problem. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node with respect to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
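To make the baseline concrete, the sketch below scores candidate links by the clustering coefficients of their common neighbours, i.e., the node-clustering idea that the ALC coefficient refines; the paper's own ALC formula is not reproduced here.

```python
# Sketch of the baseline node-clustering idea only (not the paper's ALC score):
# rank non-existing node pairs by the summed clustering coefficients of their
# common neighbours.
import networkx as nx

def cc_weighted_score(G, u, v):
    """Score a candidate link (u, v) by summing the clustering coefficients
    of the common neighbours of u and v."""
    return sum(nx.clustering(G, w) for w in nx.common_neighbors(G, u, v))

G = nx.karate_club_graph()
candidates = list(nx.non_edges(G))
ranked = sorted(candidates, key=lambda e: cc_weighted_score(G, *e), reverse=True)
print(ranked[:5])   # top-5 predicted missing links
```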
Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho
2008-10-01
In this paper, we propose a new robust output feedback control approach for flexible-joint electrically driven (FJED) robots via the observer dynamic surface design technique. The proposed method only requires position measurements of the FJED robots. To estimate the link and actuator velocity information of the FJED robots with model uncertainties, we develop an adaptive observer using self-recurrent wavelet neural networks (SRWNNs). The SRWNNs are used to approximate model uncertainties in both robot (link) dynamics and actuator dynamics, and all their weights are trained online. Based on the designed observer, the link position tracking controller using the estimated states is induced from the dynamic surface design procedure. Therefore, the proposed controller can be designed more simply than the observer backstepping controller. From the Lyapunov stability analysis, it is shown that all signals in a closed-loop adaptive system are uniformly ultimately bounded. Finally, the simulation results on a three-link FJED robot are presented to validate the good position tracking performance and robustness of the proposed control system against payload uncertainties and external disturbances.
LinkImputeR: user-guided genotype calling and imputation for non-model organisms.
Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean
2017-07-10
Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from NGS technologies. It enables the user to quickly and easily examine the effects of varying thresholds and filters on the number and quality of the resulting genotype calls. In this manner, users can decide on thresholds that are most suitable for their purposes. We show that LinkImputeR can significantly augment the value and utility of NGS data sets, especially in non-model organisms with poor genomic resources.
Delgado, Luis M.; Bayon, Yves; Pandit, Abhay
2015-01-01
Collagen-based devices, in various physical conformations, are extensively used for tissue engineering and regenerative medicine applications. Given that the natural cross-linking pathway of collagen does not occur in vitro, chemical, physical, and biological cross-linking methods have been assessed over the years to control mechanical stability, degradation rate, and immunogenicity of the device upon implantation. Although in vitro data demonstrate that mechanical properties and degradation rate can be accurately controlled as a function of the cross-linking method utilized, preclinical and clinical data indicate that cross-linking methods employed may have adverse effects on host response, especially when potent cross-linking methods are employed. Experimental data suggest that more suitable cross-linking methods should be developed to achieve a balance between stability and functional remodeling. PMID:25517923
Stellar Parameters in an Instant with Machine Learning. Application to Kepler LEGACY Targets
NASA Astrophysics Data System (ADS)
Bellinger, Earl P.; Angelou, George C.; Hekker, Saskia; Basu, Sarbani; Ball, Warrick H.; Guggenberger, Elisabet
2017-10-01
With the advent of dedicated photometric space missions, the ability to rapidly process huge catalogues of stars has become paramount. Bellinger and Angelou et al. [1] recently introduced a new method based on machine learning for inferring the stellar parameters of main-sequence stars exhibiting solar-like oscillations. The method makes precise predictions that are consistent with other methods, but with the advantage of being able to explore many more parameters while costing practically no time. Here we apply the method to 52 so-called "LEGACY" main-sequence stars observed by the Kepler space mission. For each star, we present estimates and uncertainties of mass, age, radius, luminosity, core hydrogen abundance, surface helium abundance, surface gravity, initial helium abundance, and initial metallicity, as well as estimates of their evolutionary model parameters of mixing length, overshooting coefficient, and diffusion multiplication factor. We obtain median uncertainties in stellar age, mass, and radius of 14.8%, 3.6%, and 1.7%, respectively. The source code for all analyses and for all figures appearing in this manuscript can be found electronically at
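A toy illustration of the general "train a regressor on a grid of stellar models, then invert observed quantities" approach, with fabricated observables and parameters; it is not the authors' pipeline, grid, or feature set.

```python
# Toy illustration only: train a random-forest regressor on a synthetic grid
# mapping fake "observables" to model parameters, then predict parameters for
# a new star.  A real application would use asteroseismic and spectroscopic
# quantities drawn from a stellar-evolution grid.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
mass = rng.uniform(0.8, 1.4, n)                 # hypothetical grid parameters
age = rng.uniform(0.5, 12.0, n)                 # [Gyr]

# Fabricated observables loosely tied to the parameters, plus noise.
teff = 5800 + 900 * (mass - 1.0) - 40 * age + rng.normal(0, 30, n)
dnu = 135 * mass**0.5 * (1 - 0.04 * age) + rng.normal(0, 0.5, n)
X = np.column_stack([teff, dnu])
y = np.column_stack([mass, age])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[5750.0, 120.0]]))          # -> [[mass_est, age_est]]
```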
ERIC Educational Resources Information Center
Borges, Sidnei; Mello-Carpes, Pâmela Billig
2015-01-01
Physiology teaching-learning methods have passed through several changes over the years. The traditional model of education appears to be linked to the teacher, who resists the use of tools and methods designed to provide something different to the student, depriving the student of new things or even the motivation to realize the applicability of…
Evaluation of scour potential of cohesive soils : final report, August 2009.
DOT National Transportation Integrated Search
2009-08-01
Prediction of scour at bridge river crossings is an evolving process. Hydraulic models to estimate water velocity and, therefore, the shear stresses that erode soil are reasonably well developed. The weak link remains methods for estimating soil erod...
Link-N: The missing link towards intervertebral disc repair is species-specific
Bach, Frances C.; Laagland, Lisanne T.; Grant, Michael P.; Creemers, Laura B.; Ito, Keita; Meij, Björn P.; Mwale, Fackson
2017-01-01
Introduction Degeneration of the intervertebral disc (IVD) is a frequent cause for back pain in humans and dogs. Link-N stabilizes proteoglycan aggregates in cartilaginous tissues and exerts growth factor-like effects. The human variant of Link-N facilitates IVD regeneration in several species in vitro by inducing Smad1 signaling, but it is not clear whether this is species specific. Dogs with IVD disease could possibly benefit from Link-N treatment, but Link-N has not been tested on canine IVD cells. If Link-N appears to be effective in canines, this would facilitate translation of Link-N into the clinic using the dog as an in vivo large animal model for human IVD degeneration. Materials and methods This study’s objective was to determine the effect of the human and canine variant of Link-N and short (s) Link-N on canine chondrocyte-like cells (CLCs) and compare this to those on already studied species, i.e. human and bovine CLCs. Extracellular matrix (ECM) production was determined by measuring glycosaminoglycan (GAG) content and histological evaluation. Additionally, the micro-aggregates’ DNA content was measured. Phosphorylated (p) Smad1 and -2 levels were determined using ELISA. Results Human (s)Link-N induced GAG deposition in human and bovine CLCs, as expected. In contrast, canine (s)Link-N did not affect ECM production in human CLCs, while it mainly induced collagen type I and II deposition in bovine CLCs. In canine CLCs, both canine and human (s)Link-N induced negligible GAG deposition. Surprisingly, human and canine (s)Link-N did not induce Smad signaling in human and bovine CLCs. Human and canine (s)Link-N only mildly increased pSmad1 and Smad2 levels in canine CLCs. Conclusions Human and canine (s)Link-N exerted species-specific effects on CLCs from early degenerated IVDs. Both variants, however, lacked the potency as canine IVD regeneration agent. While these studies demonstrate the challenges of translational studies in large animal models, (s)Link-N still holds a regenerative potential for humans. PMID:29117254
Social Relations and Resident Health in Assisted Living: An Application of the Convoy Model
ERIC Educational Resources Information Center
Perkins, Molly M.; Ball, Mary M.; Kemp, Candace L.; Hollingsworth, Carole
2013-01-01
Purpose: This article, based on analysis of data from a mixed methods study, builds on a growing body of assisted living (AL) research focusing on the link between residents' social relationships and health. A key aim of this analysis, which uses the social convoy model as a conceptual and methodological framework, was to examine the relative…
Linking 3D spatial models of fuels and fire: Effects of spatial heterogeneity on fire behavior
Russell A. Parsons; William E. Mell; Peter McCauley
2011-01-01
Crown fire endangers firefighters and can have severe ecological consequences. Prediction of fire behavior in tree crowns is essential to informed decisions in fire management. Current methods used in fire management do not address variability in crown fuels. New mechanistic physics-based fire models address convective heat transfer with computational fluid dynamics (...
ERIC Educational Resources Information Center
Dickinson, Paul Gordon
2017-01-01
This paper evaluates the effect and potential of a new educational learning model called Peer to Peer (P2P). The study was focused on Laurea, Hyvinkaa's Finland campus and its response to bridging the gap between traditional educational methods and working reality, where modern technology plays an important role. The study describes and evaluates…
Audrey Addison; James A. Powell; Barbara J. Bentz; Diana L. Six
2015-01-01
The fates of individual species are often tied to synchronization of phenology; however, few methods have been developed for integrating phenological models involving linked species. In this paper, we focus on mountain pine beetle (MPB, Dendroctonus ponderosae) and its two obligate mutualistic fungi, Grosmannia clavigera and Ophiostoma montium. Growth rates of...
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend towards discovering the intrinsic anatomical differences across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (called the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (called the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness when compared with state-of-the-art pair-wise correspondence detection methods as well as a similar method using a static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.
Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K
2011-10-01
To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In a second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare svm-based survival models based on ranking constraints, based on regression constraints and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints above models only based on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods including only regression or both regression and ranking constraints on clinical data. On high dimensional data, the former model performs better. However, this approach does not have a theoretical link with standard statistical models for survival data. This link can be made by means of transformation models when ranking constraints are included. Copyright © 2011 Elsevier B.V. All rights reserved.
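Since the concordance index is the first comparison measure above, here is a small generic implementation of Harrell's c-index for right-censored data; the data below are hypothetical and this is not the authors' code.

```python
# Illustrative sketch: Harrell's concordance index for right-censored survival
# data (event = 1 for an observed failure, 0 for a censored observation).
import numpy as np

def concordance_index(time, event, risk_score):
    """Fraction of usable pairs in which the subject with the higher risk
    score is the one observed to fail earlier (ties count as one half)."""
    time, event, risk = map(np.asarray, (time, event, risk_score))
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair is usable if subject i is observed to fail before time j
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

time = [5, 8, 12, 20, 25]
event = [1, 1, 0, 1, 0]
risk = [2.1, 1.5, 0.9, 0.7, 0.2]     # hypothetical prognostic index
print(concordance_index(time, event, risk))
```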
Link Analysis in the Mission Planning Lab
NASA Technical Reports Server (NTRS)
McCarthy, Jessica A.; Cervantes, Benjamin W.; Daugherty, Sarah C.; Arroyo, Felipe; Mago, Divyang
2011-01-01
The legacy communications link analysis software currently used at Wallops Flight Facility involves processes that are different for command destruct, radar, and telemetry. There is a clear advantage to developing an easy-to-use tool that combines all the processes in one application. Link Analysis in the Mission Planning Lab (MPL) uses custom software and algorithms integrated with Analytical Graphics Inc. Satellite Toolkit (AGI STK). The MPL link analysis tool uses pre/post-mission data to conduct a dynamic link analysis between ground assets and the launch vehicle. Just as the legacy methods do, the MPL link analysis tool calculates signal strength and signal-to-noise according to the accepted processes for command destruct, radar, and telemetry assets. Graphs and other custom data are generated rapidly in formats for reports and presentations. STK is used for analysis as well as to depict plume angles and antenna gain patterns in 3D. The MPL has developed two interfaces with the STK software (see figure). The first interface is an HTML utility, which was developed in Visual Basic to enhance analysis for plume modeling and to offer a more user friendly, flexible tool. A graphical user interface (GUI) written in MATLAB (see figure upper right-hand corner) is also used to quickly depict link budget information for multiple ground assets. This new method yields a dramatic decrease in the time it takes to provide launch managers with the required link budgets to make critical pre-mission decisions. The software code used for these two custom utilities is a product of NASA's MPL.
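A highly simplified sketch of the kind of calculation behind such link budgets, using free-space path loss only; the gains, frequency, range and threshold below are illustrative placeholders, not Wallops values, and real analyses add pointing, polarization, atmospheric and implementation losses.

```python
# Simplified link-budget sketch (illustrative numbers, free-space loss only).
import math

def free_space_path_loss_db(distance_m, freq_hz):
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def link_margin_db(eirp_dbw, rx_gain_dbi, distance_m, freq_hz,
                   system_losses_db, required_power_dbw):
    received = (eirp_dbw + rx_gain_dbi
                - free_space_path_loss_db(distance_m, freq_hz)
                - system_losses_db)
    return received - required_power_dbw

# Hypothetical S-band telemetry pass at 400 km slant range.
print(link_margin_db(eirp_dbw=10.0, rx_gain_dbi=35.0, distance_m=400e3,
                     freq_hz=2.25e9, system_losses_db=3.0,
                     required_power_dbw=-130.0))   # margin in dB
```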
Connan, O; Maro, D; Hébert, D; Solier, L; Caldeira Ideas, P; Laguionie, P; St-Amant, N
2015-10-01
The behaviour of tritium in the environment is linked to the water cycle. We compare three methods of calculating the tritium evapotranspiration flux from grassland cover. The gradient and eddy covariance methods, together with a method based on the theoretical Penman-Monteith model, were tested in a study carried out in 2013 in an environment characterised by high levels of tritium activity. The results show that the three methods gave similar results. The various constraints applying to each method are discussed. The results show a tritium evapotranspiration flux of around 15 mBq m^-2 s^-1 in this environment. These results will be used to improve the input parameters for the general models of tritium transfers in the environment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Toward link predictability of complex networks
Lü, Linyuan; Pan, Liming; Zhou, Tao; Zhang, Yi-Cheng; Stanley, H. Eugene
2015-01-01
The organization of real networks usually embodies both regularities and irregularities, and, in principle, the former can be modeled. The extent to which the formation of a network can be explained coincides with our ability to predict missing links. To understand network organization, we should be able to estimate link predictability. We assume that the regularity of a network is reflected in the consistency of structural features before and after a random removal of a small set of links. Based on the perturbation of the adjacency matrix, we propose a universal structural consistency index that is free of prior knowledge of network organization. Extensive experiments on disparate real-world networks demonstrate that (i) structural consistency is a good estimation of link predictability and (ii) a derivative algorithm outperforms state-of-the-art link prediction methods in both accuracy and robustness. This analysis has further applications in evaluating link prediction algorithms and monitoring sudden changes in evolving network mechanisms. It will provide unique fundamental insights into the above-mentioned academic research fields, and will foster the development of advanced information filtering technologies of interest to information technology practitioners. PMID:25659742
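A compact sketch of the structural consistency calculation as it is described above: randomly remove a small fraction of links, apply a first-order eigenvalue perturbation to the remaining adjacency matrix, and measure the fraction of removed links recovered among the top-ranked unobserved pairs. Details such as averaging over repeated removals are omitted, so this should be read as an illustration of the idea, not the authors' implementation.

```python
# Sketch of a structural-consistency estimate via adjacency-matrix perturbation.
import numpy as np
import networkx as nx   # only used to build a small test network

def structural_consistency(A, p_remove=0.1, seed=None):
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    edges = [(i, j) for i, j in zip(*iu) if A[i, j] > 0]
    n_remove = max(1, int(p_remove * len(edges)))
    removed = [edges[k] for k in rng.choice(len(edges), n_remove, replace=False)]

    # A = A_R + dA: A_R is the retained part, dA the perturbation (removed links)
    dA = np.zeros_like(A)
    for i, j in removed:
        dA[i, j] = dA[j, i] = 1.0
    AR = A - dA

    # First-order perturbation of the eigenvalues of A_R: dlam_k = x_k^T dA x_k
    vals, vecs = np.linalg.eigh(AR)
    dvals = np.einsum('ki,ij,jk->k', vecs.T, dA, vecs)
    A_tilde = (vecs * (vals + dvals)) @ vecs.T           # perturbed matrix

    # Rank the node pairs that are not observed links of A_R
    candidates = [(i, j) for i, j in zip(*iu) if AR[i, j] == 0]
    scores = np.array([A_tilde[i, j] for i, j in candidates])
    top = {candidates[k] for k in np.argsort(scores)[::-1][:n_remove]}

    # consistency: fraction of removed links recovered among the top-ranked pairs
    return len(top & set(removed)) / n_remove

A = nx.to_numpy_array(nx.karate_club_graph())
print(structural_consistency(A, p_remove=0.1, seed=0))
```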
Rank-based methods for modeling dependence between loss triangles.
Côté, Marie-Pier; Genest, Christian; Abdallah, Anas
2016-01-01
In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.
VMT Mix Modeling for Mobile Source Emissions Forecasting: Formulation and Empirical Application
DOT National Transportation Integrated Search
2000-05-01
The purpose of the current report is to propose and implement a methodology for obtaining improved link-specific vehicle miles of travel (VMT) mix values compared to those obtained from existent methods. Specifically, the research is developing a fra...
Global Precipitation Measurement: Methods, Datasets and Applications
NASA Technical Reports Server (NTRS)
Tapiador, Francisco; Turk, Francis J.; Petersen, Walt; Hou, Arthur Y.; Garcia-Ortega, Eduardo; Machado, Luiz, A. T.; Angelis, Carlos F.; Salio, Paola; Kidd, Chris; Huffman, George J.;
2011-01-01
This paper reviews the many aspects of precipitation measurement that are relevant to providing an accurate global assessment of this important environmental parameter. Methods discussed include ground data, satellite estimates and numerical models. First, the methods for measuring, estimating, and modeling precipitation are discussed. Then, the most relevant datasets gathering precipitation information from those three sources are presented. The third part of the paper illustrates a number of the many applications of those measurements and databases. The aim of the paper is to organize the many links and feedbacks between precipitation measurement, estimation and modeling, indicating the uncertainties and limitations of each technique in order to identify areas requiring further attention, and to show the limits within which datasets can be used.
Ding, Ming; Zhu, Qianlong
2016-01-01
Hardware protection and control action are two kinds of low voltage ride-through technical proposals widely used in a permanent magnet synchronous generator (PMSG). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive powers are taken into account. First, the post-fault DC link voltage is selected as a concentrated expression of unit parameters, incoming wind and electrical distance to a fault point to reflect the transient characteristics of PMSGs. Next, we provide an effective method for calculating the post-fault DC link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC link voltages without any clustering algorithm. Finally, PMSGs of the same group are equivalent as one rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show the proposed equivalent model can be used to analyze the offline electromechanical transients in power systems.
Complexity and dynamics of switched human balance control during quiet standing.
Nema, Salam; Kowalczyk, Piotr; Loram, Ian
2015-10-01
In this paper, we use a combination of numerical simulations, time series analysis, and complexity measures to investigate the dynamics of switched systems with noise, which are often used as models of human balance control during quiet standing. We link the results with complexity measures found in experimental data of human sway motion during quiet standing. The control model ensuring balance, which we use, is based on an act-and-wait control concept; that is, a human controller is switched on when a certain sway angle is reached. Otherwise, there is no active control present. Given time series data, we determine what a typical pattern of the control strategy looks like in our model system. We detect the switched nonlinearity in the system using a frequency analysis method in the absence of noise. We also analyse the effect of time delay on the existence of limit cycles in the system in the absence of noise. We perform entropy and detrended fluctuation analyses with a view to linking the switchings (and the dead zone) with the occurrences of complexity in the model system in the presence of noise. Finally, we perform the entropy and detrended fluctuation analyses on experimental data and link the results with the numerical findings in our model example.
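For concreteness, a generic detrended fluctuation analysis, one of the complexity measures named above, can be sketched as follows; this is the standard algorithm, not the authors' code, and window sizes are illustrative.

```python
# Sketch of standard detrended fluctuation analysis (DFA): integrate the
# signal, detrend it in windows of increasing size, and fit the scaling
# exponent alpha from log F(s) versus log s.
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                          # integrated profile
    flucts = []
    for s in scales:
        rms = []
        for k in range(len(y) // s):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
    return alpha

print(dfa_exponent(np.random.randn(4096)))   # ~0.5 expected for white noise
```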
Modeling Clinical Information Needs in the Context of a Specific Patient
Price, Susan L.
2000-01-01
Investigators have tried various approaches to link clinical information directly to information sources that may contain answers to clinical questions. Developing a model of the clinical information needs that may arise in the context of viewing information about a specific patient is a preliminary step toward finding an efficient, useful solution to the information retrieval problem. This poster illustrates a method of modeling clinical information needs in the context of a specific patient that is adapted from entity-relationship models used in database design.
Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps
NASA Astrophysics Data System (ADS)
Carrillo Lopez, J.; Gallardo, L. A.
2016-12-01
Inverse problems in Earth sciences are inherently non-unique. To improve models and reduce the number of solutions, we need to provide extra information. In a geological context, this information could be a priori information such as geological constraints, well-log data, or smoothness, or it could come from measurements of different kinds of data. Joint inversion provides an approach to improve the solution and reduce the errors due to the assumptions of each method. To do that, we need a link between two or more models. Some approaches have been explored successfully in recent years. For example, Gallardo and Meju (2003), Gallardo and Meju (2004, 2011), and Gallardo et al. (2012) used the directions of property gradients to measure the similarity between models by minimizing their cross-gradients. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as a link. Correspondence maps could characterize specific Earth systems better because they consider the relation between properties. We implemented a code in Fortran to perform a two-dimensional inversion of magnetotelluric and gravity data, two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversion. Finally, we applied this technique to magnetotelluric and gravity data from the geothermal zone located in Cerro Prieto, México.
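For reference, the structural link used in the cited cross-gradient approach of Gallardo and Meju is usually written as the cross-gradient function of the two property models; in a 2D section only the out-of-plane component survives. This is the commonly quoted form, not a formula taken from this abstract.

```latex
% Cross-gradient function between two property models m_1 and m_2; joint
% inversion drives t towards zero so that property changes share directions.
\mathbf{t}(x,z) = \nabla m_1(x,z) \times \nabla m_2(x,z)
% In a 2-D (x,z) section only the out-of-plane component is non-zero:
% t_y = \partial_z m_1 \,\partial_x m_2 - \partial_x m_1 \,\partial_z m_2 .
```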
Koukaras, Emmanuel N; Papadimitriou, Sofia A; Bikiaris, Dimitrios N; Froudakis, George E
2012-10-01
This work reports details pertaining to the formation of chitosan nanoparticles that we prepare by the ionic gelation method. The molecular interactions of the ionic cross-linking of chitosan with tripolyphosphate have been investigated and elucidated by means of all-electron density functional theory. Solvent effects have been taken into account using implicit models. We have identified primary-interaction ionic cross-linking configurations that we define as H-link, T-link, and M-link, and we have quantified the corresponding interaction energies. H-links, which display high interaction energies and are also spatially broadly accessible, are the most probable cross-linking configurations. At close range, proton transfer has been identified, with maximum interaction energies ranging from 12.3 up to 68.3 kcal/mol depending on the protonation of the tripolyphosphate polyanion and the relative coordination of chitosan with tripolyphosphate. On the basis of our results for the linking types (interaction energies and torsion bias), we propose a simple mechanism for their impact on the chitosan/TPP nanoparticle formation process. We introduce the β ratio, which is derived from the commonly used α ratio but is more fundamental since it additionally takes into account structural details of the oligomers.
Entropic benefit of a cross-link in protein association.
Zaman, Muhammad H; Berry, R Stephen; Sosnick, Tobin R
2002-08-01
We introduce a method to estimate the loss of configurational entropy upon insertion of a cross-link to a dimeric system. First, a clear distinction is established between the loss of entropy upon tethering and binding, two quantities that are often considered to be equivalent. By comparing the probability distribution of the center-to-center distances for untethered and cross-linked versions, we are able to calculate the loss of translational entropy upon cross-linking. The distribution function for the untethered helices is calculated from the probability that a given helix is closer to its partner than to all other helices, the "Nearest Neighbor" method. This method requires no assumptions about the nature of the solvent, and hence resolves difficulties normally associated with calculations for systems in liquids. Analysis of the restriction of angular freedom upon tethering indicates that the loss of rotational entropy is negligible. The method is applied in the context of the folding of a ten turn helical coiled coil with the tether modeled as a Gaussian chain or a flexible amino acid chain. After correcting for loop closure entropy in the docked state, we estimate the introduction of a six-residue tether in the coiled coil results in an effective concentration of the chain to be about 4 or 100 mM, depending upon whether the helices are denatured or pre-folded prior to their association. Thus, tethering results in significant stabilization for systems with millimolar or stronger dissociation constants. Copyright 2002 Wiley-Liss, Inc.
Paleoclimate reconstruction through Bayesian data assimilation
NASA Astrophysics Data System (ADS)
Fer, I.; Raiho, A.; Rollinson, C.; Dietze, M.
2017-12-01
Methods of paleoclimate reconstruction from plant-based proxy data rely on the assumption of a static vegetation-climate link, which is often established between modern climate and vegetation. This approach might result in biased climate reconstructions because it does not account for vegetation dynamics. Predictive tools such as process-based dynamic vegetation models (DVMs) and their Bayesian inversion could be used to construct the link between plant-based proxy data and palaeoclimate more realistically. In other words, given the proxy data, it is possible to infer the climate that could result in that particular vegetation composition by comparing the DVM outputs to the proxy data within a Bayesian state data assimilation framework. In this study, using fossil pollen data from five sites across the northern hardwood region of the US, we assimilate fractional composition and aboveground biomass into the dynamic vegetation models LINKAGES, LPJ-GUESS and ED2. To do this, starting from 4 Global Climate Model outputs, we generate an ensemble of downscaled meteorological drivers for the period 850-2015. Then, as a first pass, we weight these ensembles based on their fidelity with independent paleoclimate proxies. Next, we run the models with this ensemble of drivers and, comparing the ensemble model output to the vegetation data, adjust the model state estimates towards the data. At each iteration, we also reweight the climate values that make the model and data consistent, producing a reconstructed climate time-series dataset. We validated the method using present-day datasets, as well as a synthetic dataset, and then assessed the consistency of results across ecosystem models. Our method allows the combination of multiple data types to reconstruct the paleoclimate, with associated uncertainty estimates, based on ecophysiological and ecological processes rather than phenomenological correlations with proxy data.
Daylighting simulation: methods, algorithms, and resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, William L.
This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and component properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of the printed version, a list of the links is explicitly provided. This document exists in HTML form at the URL address: http://eande.lbl.gov/Task21/dlalgorithms.html. An equivalent downloadable PDF version, also with live links, is at the URL address: http://eande.lbl.gov/Task21/dlalgorithms.pdf. A printed report can be derived directly from either of the electronic versions by simply printing either of them. In addition to the live links in the electronic forms, all report forms, electronic and paper, also have explicitly listed link addresses so that they can be followed up or referenced manually.
ERIC Educational Resources Information Center
Zeidenberg, Matthew; Scott, Marc; Belfield, Clive
2015-01-01
Of the copious research on the labor market returns to college, very little has adequately modeled the pathways of non-completers or compared their outcomes with those of award holders. In this paper, we present a novel method for linking non-completers with completers according to their program of study. This method allows us to calculate the…
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
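For orientation, the composite scaled sensitivity for parameter b_j is commonly defined roughly as below (following the conventions of Hill and co-workers); the exact weighting convention used in the cited DVRFS work may differ.

```latex
% Composite scaled sensitivity of parameter b_j, with ND simulated equivalents
% of observations y'_i and observation weights \omega_i (a common definition;
% conventions vary):
\mathrm{css}_j \;=\; \left[\frac{1}{ND}\sum_{i=1}^{ND}
  \left(\frac{\partial y_i'}{\partial b_j}\, b_j\, \omega_i^{1/2}\right)^{2}\right]^{1/2}
```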
Retrieving hydrological connectivity from empirical causality in karst systems
NASA Astrophysics Data System (ADS)
Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier
2017-04-01
Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, if one attempts to model a karst, its hidden behavior complicates the choice of the most suitable model. Therefore, both intensive investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase space reconstruction and is suitable for nonlinear dynamics. As an empirical causation detection method, it could be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if one can link causal relationships to physical processes, the method should show great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
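A minimal sketch of the CCM procedure, assuming the standard Sugihara-style recipe (time-delay embedding plus nearest-neighbour cross-mapping); this is not the implementation used in the study, and the toy coupled-map system at the end is only a stand-in for karst time series.

```python
# Sketch of convergent cross mapping (CCM) on a toy unidirectionally coupled
# system: y is an autonomous logistic map that weakly forces x.
import numpy as np

def embed(x, E=3, tau=1):
    """Time-delay embedding of a 1-D series into E dimensions with lag tau."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(E)])

def ccm_skill(x, y, E=3, tau=1):
    """Skill (Pearson r) of estimating y from the shadow manifold of x.
    High skill suggests that y causally forces x."""
    Mx = embed(x, E, tau)
    y_target = y[(E - 1) * tau:]
    preds = np.empty(len(Mx))
    for t in range(len(Mx)):
        d = np.linalg.norm(Mx - Mx[t], axis=1)
        d[t] = np.inf                          # exclude the point itself
        nn = np.argsort(d)[:E + 1]             # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[t] = np.sum(w * y_target[nn]) / np.sum(w)
    return np.corrcoef(preds, y_target)[0, 1]

# Since y forces x, the shadow manifold of x should cross-map y with the
# higher skill, and the reverse direction should be weaker.
n = 2000
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    y[t + 1] = y[t] * (3.7 - 3.7 * y[t])
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t] - 0.05 * y[t])
print("x xmap y:", ccm_skill(x[100:], y[100:]))
print("y xmap x:", ccm_skill(y[100:], x[100:]))
```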
Chapter 5: Application of state-and-transition models to evaluate wildlife habitat
Anita T. Morzillo; Pamela Comeleo; Blair Csuti; Stephanie Lee
2014-01-01
Wildlife habitat analysis often is a central focus of natural resources management and policy. State-and-transition models (STMs) allow for simulation of landscape level ecological processes, and for managers to test "what if" scenarios of how those processes may affect wildlife habitat. This chapter describes the methods used to link STM output to wildlife habitat to...
NASA Technical Reports Server (NTRS)
Gibson, Jim; Jordan, Joe; Grant, Terry
1990-01-01
The Local Area Network Extensible Simulator (LANES) computer program provides a method for simulating the performance of high-speed local-area-network (LAN) technology. It was developed as a design and analysis software tool for networking computers on board the proposed Space Station. The load, network, link, and physical layers of a layered network architecture are all modeled. Networks are mathematically modeled according to two different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.
2014-10-01
...offer a practical solution to calculating the grain-scale heterogeneity present in the deformation field. Consequently, crystal plasticity models...process/performance simulation codes (e.g., crystal plasticity finite element method). Subject terms: ICME; microstructure informatics; higher... (iii) protocols for direct and efficient linking of materials models/databases into process/performance simulation codes (e.g., crystal plasticity finite element method).
Assessment and Learning of Mathematics.
ERIC Educational Resources Information Center
Leder, Gilah C., Ed.
This book addresses the link between student learning of mathematics, the teaching method adopted in the mathematics classroom, and the assessment procedures used to determine and measure student knowledge. Fifteen chapters address issues that include a review of different models of mathematics learning and assessment practices, three contrasting…
The science of complexity and the role of mathematics
NASA Astrophysics Data System (ADS)
Bountis, T.; Johnson, J.; Provata, A.; Tsironis, G.
2016-09-01
In the middle of the second decade of the 21st century, Complexity Science has reached a turning point. Its rapid advancement over the last 30 years has led to remarkable new concepts, methods and techniques, whose applications to complex systems of the physical, biological and social sciences has produced a great number of exciting results. The approach has so far depended almost exclusively on the solution of a wide variety of mathematical models by sophisticated numerical techniques and extensive simulations that have inspired a new generation of researchers interested in complex systems. Still, the impact of Complexity beyond the natural sciences, its applications to Medicine, Technology, Economics, Society and Policy are only now beginning to be explored. Furthermore, its basic principles and methods have so far remained within the realm of high level research institutions, out of reach of society's urgent need for practical applications. To address these issues, evaluate the current situation and bring Complexity Science closer to university students, a series of Ph.D. Schools on Mathematical Modeling of Complex Systems was launched, starting in July 2011 at the University of Patras, Greece (see
Equivalent-Continuum Modeling of Nano-Structured Materials
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.
2001-01-01
A method has been developed for modeling structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with an equivalent-continuum model. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As an important example with direct application to the development and characterization of single-walled carbon nanotubes, the model has been applied to determine the effective continuum geometry of a graphene sheet. A representative volume element of the equivalent-continuum model has been developed with an effective thickness. This effective thickness has been shown to be similar to, but slightly smaller than, the interatomic spacing of graphite.
A Cohesive Zone Approach for Fatigue-Driven Delamination Analysis in Composite Materials
NASA Astrophysics Data System (ADS)
Amiri-Rad, Ahmad; Mashayekhi, Mohammad
2017-08-01
A new model for the prediction of fatigue-driven delamination in laminated composites is proposed using cohesive interface elements. The presented model provides a link between the damage evolution rate of the cohesive elements and the crack growth rate of the Paris law. This is beneficial since no additional material parameters are required and the well-known Paris law constants are used. The link between the cohesive zone method and fracture mechanics is achieved without the use of an effective length, which has led to more accurate results. The problem of the unknown failure path in the calculation of the energy release rate is solved by imposing a condition on the damage model which leads to a completely vertical failure path. A global measure of the energy release rate is used for the whole cohesive zone, which is computationally more efficient compared to previous similar models. The performance of the proposed model is investigated by simulation of well-known delamination tests and comparison against experimental data from the literature.
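For orientation, a Paris-type law expressed in terms of the cyclic energy release rate, the form usually linked to cohesive-zone fatigue models, reads as follows; the exact normalization adopted by the authors may differ.

```latex
% Paris-type crack growth law in energy-release-rate form, with C and m the
% Paris constants and \Delta G the cyclic energy release rate range:
\frac{\mathrm{d}a}{\mathrm{d}N} \;=\; C\,(\Delta G)^{m}
```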
Normalized burn ratios link fire severity with patterns of avian occurrence
Rose, Eli T.; Simons, Theodore R.; Klein, Rob; McKerrow, Alexa
2016-01-01
Context: Remotely sensed differenced normalized burn ratios (DNBR) provide an index of fire severity across the footprint of a fire. We asked whether this index was useful for explaining patterns of bird occurrence within fire-adapted xeric pine-oak forests of the southern Appalachian Mountains. Objectives: We evaluated the use of DNBR indices for linking ecosystem process with patterns of bird occurrence. We compared field-based and remotely sensed fire severity indices and used each to develop occupancy models for six bird species to identify patterns of bird occurrence following fire. Methods: We identified and sampled 228 points within fires that recently burned within Great Smoky Mountains National Park. We performed avian point counts and field-assessed fire severity at each bird census point. We also used Landsat™ imagery acquired before and after each fire to quantify fire severity using DNBR. We used non-parametric methods to quantify agreement between fire severity indices, and evaluated single-season occupancy models incorporating fire severity summarized at different spatial scales. Results: Agreement between field-derived and remotely sensed measures of fire severity was influenced by vegetation type. Although occurrence models using field-derived indices of fire severity outperformed those using DNBR, summarizing DNBR at multiple spatial scales provided additional insights into patterns of occurrence associated with different sized patches of high severity fire. Conclusions: DNBR is useful for linking the effects of fire severity to patterns of bird occurrence, and informing how high severity fire shapes patterns of bird species occurrence on the landscape.
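For readers unfamiliar with the index, the normalized burn ratio and its differenced (pre- minus post-fire) form are standard quantities computed from the near-infrared (NIR) and shortwave-infrared (SWIR) bands of Landsat imagery; scaling conventions (e.g., multiplying by 1000) vary between studies.

```latex
% Normalized burn ratio and the differenced (pre- minus post-fire) index:
\mathrm{NBR} = \frac{\mathrm{NIR} - \mathrm{SWIR}}{\mathrm{NIR} + \mathrm{SWIR}},
\qquad
\mathrm{dNBR} = \mathrm{NBR}_{\text{prefire}} - \mathrm{NBR}_{\text{postfire}}
```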
New model helps find missing link between financial and clinical health care management.
Dasso, E; Wilson, T
2001-01-01
U.S. health care is missing a link between the financial managers and clinical health managers of defined patient populations. Utilization and cost management try to bridge the gap by focusing on restricted access to care or tightly managed provider reimbursement to control costs. But frequently, they do not take clinical outcomes or health status into consideration. Take a look at another method based on the science of epidemiology that brings a more balanced knowledge of the clinical world to financial managers and more financial insight to clinicians.
Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors
NASA Astrophysics Data System (ADS)
Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.
2007-12-01
Deep ice cores extracted from Antarctica or Greenland record a wide range of past climatic events. To improve understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is crucial. Until now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and consequently cannot honor the full set of available age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. The method starts from first guesses of two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions on both. We highlight two major benefits of this new method: the ability to honor a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. The inverse approach relies on a Bayesian framework. To respect the positivity constraint on the sought correction functions, we assume lognormal probability distributions for the background errors and also for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. Confidence intervals based on the posterior covariance matrix are estimated for the correction functions and, for the first time, for the overall output chronologies.
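A generic form of the misfit that such a Bayesian, lognormal-error inversion minimizes can be written as follows; the notation (correction functions c, first guess c_b, background and observation error covariances B and R, forward age model H, observation vector y) is generic and not necessarily the paper's exact formulation.

```latex
J(\mathbf{c}) = \tfrac{1}{2}\,(\ln\mathbf{c}-\ln\mathbf{c}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\ln\mathbf{c}-\ln\mathbf{c}_b)
              + \tfrac{1}{2}\,\bigl(\mathbf{y}-\mathcal{H}(\mathbf{c})\bigr)^{\mathsf{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{c})\bigr)
```

Working in the logarithm of the correction functions is what enforces positivity while keeping the problem Gaussian in the transformed variables.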
NASA Astrophysics Data System (ADS)
Shimada, Satoshi; Azuma, Shouzou; Teranaka, Sayaka; Kojima, Akira; Majima, Yukie; Maekawa, Yasuko
We developed a system in which knowledge can be discovered and shared cooperatively within an organization, based on the SECI model of knowledge management. The system realizes three processes by the following method: (1) a video demonstrating a skill is segmented into a number of scenes according to its content, and tacit knowledge is shared within each scene; (2) tacit knowledge is extracted through a bulletin board linked to each scene; (3) knowledge is acquired by repeatedly viewing a video scene together with comments that describe the technical content to be practiced. We conducted experiments in which the system was used by nurses working in general hospitals. Experimental results show that practical nursing know-how can be collected by using a bulletin board linked to video scenes. The results of this study confirm that the tacit knowledge underlying nurses' empirical nursing skills can be expressed with video images serving as cues.
Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix
2016-03-01
To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common-metric parameters is simple, for example via a Web application (http://www.common-metrics.org), and offers a long-term perspective for improving the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
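As a reminder of how a Bland-Altman comparison of two score sets reduces to a bias and 95% limits of agreement, here is a minimal sketch with hypothetical latent-score vectors (not the PHQ-9 data used in the study):

```python
import numpy as np

# Hypothetical latent depression scores from two estimation approaches
# (e.g., common-metric parameters vs. a re-estimated, linked model).
rng = np.random.default_rng(0)
theta_common = rng.normal(0, 1, 500)
theta_relinked = theta_common + rng.normal(0.02, 0.1, 500)

diff = theta_relinked - theta_common
mean_diff = diff.mean()                 # systematic bias between the two methods
loa = 1.96 * diff.std(ddof=1)           # half-width of the 95% limits of agreement

print(f"bias = {mean_diff:.3f}, limits of agreement = "
      f"[{mean_diff - loa:.3f}, {mean_diff + loa:.3f}]")
```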
Extension of D-H parameter method to hybrid manipulators used in robot-assisted surgery.
Singh, Amanpreet; Singla, Ashish; Soni, Sanjeev
2015-10-01
The main focus of this work is to extend the applicability of the D-H parameter method to develop a kinematic model of a hybrid manipulator. A hybrid manipulator is a combination of open- and closed-loop chains and contains planar and spatial links. It has been found in the literature that the D-H parameter method leads to ambiguities when dealing with closed-loop chains. In this work, it has been observed that the D-H parameter method, when applied to a hybrid manipulator, results in an orientational inconsistency, because of which the method cannot be used to develop the kinematic model. In this article, the concept of dummy frames is proposed to resolve the orientational inconsistency and to develop the kinematic model of a hybrid manipulator. Moreover, a prototype of a 7-degree-of-freedom hybrid manipulator, known as a surgeon-side manipulator, which assists the surgeon during surgery, is also developed to validate the kinematic model derived in this work. © IMechE 2015.
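For reference, the standard D-H homogeneous transform between consecutive frames, which the article augments with dummy frames for the closed-loop portion (not shown here), can be sketched as follows; the joint values in the usage example are arbitrary.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform from frame i-1 to frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics of a hypothetical two-joint planar sub-chain:
# multiply the per-joint transforms in order.
T = dh_transform(np.pi / 4, 0.0, 0.30, 0.0) @ dh_transform(-np.pi / 6, 0.0, 0.25, 0.0)
print(T[:3, 3])   # position of the end frame
```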
A simplified dynamic model of the T700 turboshaft engine
NASA Technical Reports Server (NTRS)
Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.
1992-01-01
A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real time diagnostic scheme for fault detection and diagnostics, as well as for open loop engine dynamics studies and closed loop control analysis utilizing a user generated control law.
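One simple way to link point designs into a single simplified model is to blend the local state-space matrices between neighboring operating points; the sketch below uses two hypothetical operating points and placeholder matrices and only illustrates the linking idea, not the identification procedure applied to the T700.

```python
import numpy as np

# Local linear models identified at two hypothetical operating points
# (power settings p0 and p1); the A and B matrices are placeholders.
p0, p1 = 0.4, 0.8
A0, B0 = np.array([[-1.0, 0.2], [0.0, -2.0]]), np.array([[0.5], [1.0]])
A1, B1 = np.array([[-1.5, 0.1], [0.0, -2.5]]), np.array([[0.7], [1.2]])

def local_model(p):
    """Linearly blend the state-space matrices for operating point p."""
    w = np.clip((p - p0) / (p1 - p0), 0.0, 1.0)
    return (1 - w) * A0 + w * A1, (1 - w) * B0 + w * B1

# Simulate x' = A(p) x + B(p) u with forward Euler at an intermediate point.
A, B = local_model(0.6)
x, u, dt = np.zeros((2, 1)), np.array([[1.0]]), 0.01
for _ in range(500):
    x = x + dt * (A @ x + B @ u)
print(x.ravel())   # approaches the local steady state for this operating point
```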
NASA Astrophysics Data System (ADS)
Stephanou, Pavlos S.; Schweizer, Thomas; Kröger, Martin
2017-04-01
Our experimental data unambiguously show (i) a damping behavior (the appearance of an undershoot following the overshoot) in the transient shear viscosity of a concentrated polymeric solution, and (ii) the absence of a corresponding behavior in the transient normal stress coefficients. Both trends are shown to be quantitatively captured by the bead-link chain kinetic theory for concentrated polymer solutions and entangled polymer melts proposed by Curtiss and Bird, supplemented by a non-constant link tension coefficient that we relate to the nematic order parameter. The observed phenomena are attributed to the tumbling behavior of the links, triggered by rotational fluctuations, on top of reptation. Using model parameters deduced from stationary data, we calculate the transient behavior of the stress tensor for this "tumbling-snake" model after startup of shear flow efficiently via simple Brownian dynamics. The unaltered method is capable of handling arbitrary homogeneous flows and has the promising capacity to improve our understanding of the transient behavior of concentrated polymer solutions.
Cross-linkable liposomes stabilize a magnetic resonance contrast-enhancing polymeric fastener.
Smith, Cartney E; Kong, Hyunjoon
2014-04-08
Liposomes are commonly used to deliver drugs and contrast agents to their target site in a controlled manner. One of the greatest obstacles in the performance of such delivery vehicles is their stability in the presence of serum. Here, we demonstrate a method to stabilize a class of liposomes that load gadolinium, a magnetic resonance (MR) contrast agent, as a model cargo on their surfaces. We hypothesized that the sequential adsorption of a gadolinium-binding chitosan fastener on the liposome surface followed by covalent cross-linking of the lipid bilayer would provide enhanced stability and improved MR signal in the presence of human serum. To investigate this hypothesis, liposomes composed of diyne-containing lipids were assembled and functionalized via chitosan conjugated with a hydrophobic anchor and diethylenetriaminepentaacetic acid (DTPA). This postadsorption cross-linking strategy served to stabilize the thermodynamically favorable association between liposome and polymeric fastener. Furthermore, the chitosan-coated, cross-linked liposomes proved more effective as delivery vehicles of gadolinium than uncross-linked liposomes due to the reduced liposome degradation and chitosan desorption. Overall, this study demonstrates a useful method to stabilize a broad class of particles used for systemic delivery of various molecular payloads.
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
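The core idea, packet loss acting as random sampling followed by sparse reconstruction at the receiver, can be sketched with a DCT-sparse signal and a basic orthogonal matching pursuit; the signal, loss rate, and sparsity level are hypothetical, and the paper's specific reconstruction algorithm and interleaving scheme are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 256, 8                                  # signal length, sparsity (hypothetical)

# Orthonormal DCT-II synthesis basis: x = Psi @ s with s sparse.
idx = np.arange(n)
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * idx[:, None] + 1) * idx[None, :] / (2 * n))
Psi[:, 0] /= np.sqrt(2.0)

s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
x = Psi @ s                                    # transmitted signal

keep = rng.random(n) > 0.5                     # packet loss modeled as random sampling
y, A = x[keep], Psi[keep, :]                   # received samples and sensing matrix

# Orthogonal matching pursuit: greedily pick atoms, refit by least squares.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
s_hat = np.zeros(n)
s_hat[support] = coef
print("relative error:", np.linalg.norm(Psi @ s_hat - x) / np.linalg.norm(x))
```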
Basic Science Evidence for the Link Between Erectile Dysfunction and Cardiometabolic Dysfunction
Musicki, Biljana; Bella, Anthony J.; Bivalacqua, Trinity J.; Davies, Kelvin P.; DiSanto, Michael E.; Gonzalez-Cadavid, Nestor F.; Hannan, Johanna L.; Kim, Noel N.; Podlasek, Carol A.; Wingard, Christopher J.; Burnett, Arthur L.
2016-01-01
Introduction Although clinical evidence supports an association between cardiovascular/metabolic diseases (CVMD) and erectile dysfunction (ED), scientific evidence for this link is incompletely elucidated. Aim This study aims to provide scientific evidence for the link between CVMD and ED. Methods In this White Paper, the Basic Science Committee of the Sexual Medicine Society of North America assessed the current literature on basic scientific support for a mechanistic link between ED and CVMD, and deficiencies in this regard with a critical assessment of current preclinical models of disease. Results A link exists between ED and CVMD on several grounds: the endothelium (endothelium-derived nitric oxide and oxidative stress imbalance); smooth muscle (SM) (SM abundance and altered molecular regulation of SM contractility); autonomic innervation (autonomic neuropathy and decreased neuronal-derived nitric oxide); hormones (impaired testosterone release and actions); and metabolics (hyperlipidemia, advanced glycation end product formation). Conclusion Basic science evidence supports the link between ED and CVMD. The Committee also highlighted gaps in knowledge and provided recommendations for guiding further scientific study defining this risk relationship. This endeavor serves to develop novel strategic directions for therapeutic interventions. PMID:26646025
Jang, Jinah; Seol, Young-Joon; Kim, Hyeon Ji; Kundu, Joydip; Kim, Sung Won; Cho, Dong-Woo
2014-09-01
Effective cross-linking of alginate gel was achieved through reaction with calcium carbonate (CaCO3). We used human chondrocytes as a model cell to study the effects of cross-linking density. Cross-linked alginate hydrogels with three different pore-size ranges were fabricated. The morphological, mechanical, and rheological properties of the various alginate hydrogels were characterized, and the biosynthetic responses of cells encapsulated in each gel to the variation in cross-linking density were investigated. The desired outer shape of the structure was maintained when the alginate solution was cross-linked with the applied method. The properties of the alginate hydrogel could be tailored by applying various concentrations of CaCO3. The rate of synthesis of GAGs and collagens was significantly higher for human chondrocytes encapsulated in the smaller-pore structure than in the larger-pore structure. The expression of chondrogenic markers, including collagen type II and aggrecan, was enhanced in the smaller-pore structure. It was found that proper structural morphology is a critical factor in enhancing performance and tissue regeneration. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Mechanisms Linking Health Literacy to Behavior and Health Status
Osborn, Chandra Y.; Paasche-Orlow, Michael K.; Bailey, Stacy Cooper; Wolf, Michael S.
2011-01-01
Objective To examine the mechanisms linking health literacy to physical activity and self-reported health. Methods From 2005–2007, patients (N=330) with hypertension were recruited from safety net clinics. Path analytic models tested the pathways linking health literacy to physical activity and self-reported health. Results There were significant paths from health literacy to knowledge (r=0.22, P<0.001), knowledge to self-efficacy (r=0.13, P<0.01), self-efficacy to physical activity (r=0.17, P<0.01), and physical activity to health status (r=0.17, P<0.01). Conclusions Health education interventions should be literacy sensitive and aim to enhance patient health knowledge and self-efficacy to promote self-care behavior and desirable health outcomes. PMID:20950164
DOT National Transportation Integrated Search
2010-06-01
The purpose of this project is to conduct a pilot application of the Network Robustness Index (NRI) for the Chittenden County Regional Transportation Model. Using the results, improvements to the method to increase its effectiveness for more wi...
Chemically induced vascular toxicity during embryonic development can result in a wide range of adverse prenatal outcomes. We used information from genetic mouse models linked to phenotypic outcomes and a vascular toxicity knowledge base to construct an embryonic vascular disrupt...
Background Adverse cardiovascular events have been linked with PM2.5 exposure obtained primarily from air quality monitors, which rarely co-locate with participant residences. Modeled PM2.5 predictions at finer resolution may more accurately predict residential exposure; however...
Estimating 1 min rain rate distributions from numerical weather prediction
NASA Astrophysics Data System (ADS)
Paulson, Kevin S.
2017-01-01
Internationally recognized prognostic models of rain fade on terrestrial and Earth-space EHF links rely fundamentally on distributions of 1 min rain rates. Currently, in Rec. ITU-R P.837-6, these distributions are generated using the Salonen-Poiares Baptista method where 1 min rain rate distributions are estimated from long-term average annual accumulations provided by numerical weather prediction (NWP). This paper investigates an alternative to this method based on the distribution of 6 h accumulations available from the same NWPs. Rain rate fields covering the UK, produced by the Nimrod network of radars, are integrated to estimate the accumulations provided by NWP, and these are linked to distributions of fine-scale rain rates. The proposed method makes better use of the available data. It is verified on 15 NWP regions spanning the UK, and the extension to other regions is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merkley, Eric D.; Cort, John R.; Adkins, Joshua N.
2013-09-01
Multiprotein complexes, rather than individual proteins, make up a large part of the biological macromolecular machinery of a cell. Understanding the structure and organization of these complexes is critical to understanding cellular function. Chemical cross-linking coupled with mass spectrometry is emerging as a complementary technique to traditional structural biology methods and can provide low-resolution structural information for a multitude of purposes, such as distance constraints in computational modeling of protein complexes. In this review, we discuss the experimental considerations for successful application of chemical cross-linking-mass spectrometry in biological studies and highlight three examples of such studies from the recent literature. These examples (as well as many others) illustrate the utility of a chemical cross-linking-mass spectrometry approach in facilitating structural analysis of large and challenging complexes.
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices expressed in terms of Pearson correlation, which determines the importance of each member for the behaviour of the vehicle suspension. The dynamic analysis of the Macpherson suspension system is formulated using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism increases the speed of the analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, the Renault Logan, in order to analyse the accuracy of the modified geometry model. An experimental four-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung-mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed within this method. The results indicate that the Pearson correlation coefficient is an efficient tool for analysing the vehicle suspension and leads to a better design of the Macpherson suspension system.
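The sensitivity-ranking step reduces to computing Pearson correlations between sampled member parameters and a vehicle response and sorting by magnitude; the sketch below uses hypothetical variables and member names, not the paper's suspension data.

```python
import numpy as np

# Hypothetical samples of suspension design variables and a response
# (e.g., RMS sprung-mass acceleration); values are illustrative only.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))                       # 4 member parameters
response = 0.8 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 0.2, 200)

names = ["control arm", "strut", "tie rod", "stabilizer link"]
corr = [np.corrcoef(X[:, j], response)[0, 1] for j in range(X.shape[1])]

# Rank members by the magnitude of their Pearson correlation with the response.
for name, r in sorted(zip(names, corr), key=lambda t: -abs(t[1])):
    print(f"{name:16s}  r = {r:+.2f}")
```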
Electrical and fluid transport in consolidated sphere packs
NASA Astrophysics Data System (ADS)
Zhan, Xin; Schwartz, Lawrence M.; Toksöz, M. Nafi
2015-05-01
We calculate geometrical and transport properties (electrical conductivity, permeability, specific surface area, and surface conductivity) of a family of model granular porous media from an image based representation of its microstructure. The models are based on the packing described by Finney and cover a wide range of porosities. Finite difference methods are applied to solve for electrical conductivity and hydraulic permeability. Two image processing methods are used to identify the pore-grain interface and to test correlations linking permeability to electrical conductivity. A three phase conductivity model is developed to compute surface conductivity associated with the grain-pore interface. Our results compare well against empirical models over the entire porosity range studied. We conclude by examining the influence of image resolution on our calculations.
Numerical analysis of nonminimum phase zero for nonuniform link design
NASA Technical Reports Server (NTRS)
Girvin, Douglas L.; Book, Wayne J.
1991-01-01
As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in control. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. This research targets a single-link manipulator operating in the horizontal plane and modeled as a Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems, and develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero location fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to the inverse dynamic algorithms that are currently under development.
2012-01-01
Computational approaches to generate hypotheses from biomedical literature have been studied intensively in recent years. Nevertheless, it remains a challenge to automatically discover novel, cross-silo biomedical hypotheses from large-scale literature repositories. To address this challenge, we first model a biomedical literature repository as a comprehensive network of biomedical concepts and formulate hypothesis generation as a process of link discovery on the concept network. We extract the relevant information from the biomedical literature corpus and generate a concept network and a concept-author map on a cluster using the MapReduce framework. We extract a set of heterogeneous features such as random-walk-based features, neighborhood features, and common-author features. Because the number of candidate links to consider for link discovery in our concept network is large, the features are extracted on a cluster with the MapReduce framework to address the scalability problem. We further model link discovery as a classification problem carried out on a training data set automatically extracted from two network snapshots taken in two consecutive time periods. A set of heterogeneous features, covering both topological and semantic features derived from the concept network, has been studied with respect to its impact on the accuracy of the proposed supervised link discovery process. A case study of hypothesis generation based on the proposed method is presented in the paper. PMID:22759614
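A toy version of the snapshot-based supervised link-discovery setup, using simple topological features and a random graph in place of the concept network and MapReduce pipeline, might look like this (networkx and scikit-learn are assumed here, not named by the paper):

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical concept-network snapshot at time t0 and the links that appear
# by t1; in the paper these come from the literature corpus, not a random graph.
g_t0 = nx.erdos_renyi_graph(60, 0.08, seed=3)
new_links = list(nx.non_edges(g_t0))[:5]          # pretend these concepts get linked
g_t1 = g_t0.copy()
g_t1.add_edges_from(new_links)

def features(g, u, v):
    """A few simple topological features for a candidate concept pair."""
    cn = len(list(nx.common_neighbors(g, u, v)))
    jac = next(nx.jaccard_coefficient(g, [(u, v)]))[2]
    return [cn, jac, g.degree[u] * g.degree[v]]   # preferential-attachment term

# Candidate pairs = non-edges at t0; label = 1 if the link exists at t1.
pairs = list(nx.non_edges(g_t0))
X = np.array([features(g_t0, u, v) for u, v in pairs])
y = np.array([int(g_t1.has_edge(u, v)) for u, v in pairs])

clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
ranked = sorted(zip(clf.predict_proba(X)[:, 1], pairs), reverse=True)
print("top-scored candidate links:", [p for _, p in ranked[:5]])
```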
Design of Broadband High Dynamic-Range Fiber Optic Links
NASA Astrophysics Data System (ADS)
Monsurrò, P.; Tommasino, P.; Trifiletti, A.; Vannucci, A.
2018-04-01
An analytic design-oriented model of microwave optical links has been developed. The core of the model is the non-linear and noise model of a Mach-Zehnder LiNbO3 interferometer. Both a 100 MHz-20 GHz link and a linearized microwave link, comprising an auxiliary modulator, have been designed and prototyped by using the model.
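The nonlinearity at the core of such link models is the quasi-static Mach-Zehnder transfer characteristic; a minimal sketch, with hypothetical bias point, V_pi, and drive amplitude, is shown below.

```python
import numpy as np

# Quasi-static Mach-Zehnder modulator transfer characteristic:
# P_out = (P_in / 2) * (1 + cos(pi * V / Vpi + phi_bias)).
Pin, Vpi, phi_bias = 1e-3, 5.0, -np.pi / 2        # quadrature bias (hypothetical values)

t = np.linspace(0.0, 1e-9, 2000)
v_rf = 0.5 * np.sin(2 * np.pi * 5e9 * t)          # small 5 GHz drive (hypothetical)
p_out = 0.5 * Pin * (1 + np.cos(np.pi * v_rf / Vpi + phi_bias))

# At quadrature the response is nearly linear in v_rf; the residual odd-order
# terms are what a linearized link with an auxiliary modulator aims to suppress.
gain = (p_out.max() - p_out.min()) / (v_rf.max() - v_rf.min())
print(f"small-signal slope ~ {gain:.2e} W/V")
```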
Barrès, Victor; Simons, Arthur; Arbib, Michael
2013-01-01
Our previous work developed Synthetic Brain Imaging to link neural and schema network models of cognition and behavior to PET and fMRI studies of brain function. We here extend this approach to Synthetic Event-Related Potentials (Synthetic ERP). Although the method is of general applicability, we focus on ERP correlates of language processing in the human brain. The method has two components: Phase 1: To generate cortical electro-magnetic source activity from neural or schema network models; and Phase 2: To generate known neurolinguistic ERP data (ERP scalp voltage topographies and waveforms) from putative cortical source distributions and activities within a realistic anatomical model of the human brain and head. To illustrate the challenges of Phase 2 of the methodology, spatiotemporal information from Friederici's 2002 model of auditory language comprehension was used to define cortical regions and time courses of activation for implementation within a forward model of ERP data. The cortical regions from the 2002 model were modeled using atlas-based masks overlaid on the MNI high definition single subject cortical mesh. The electromagnetic contribution of each region was modeled using current dipoles whose position and orientation were constrained by the cortical geometry. In linking neural network computation via EEG forward modeling to empirical results in neurolinguistics, we emphasize the need for neural network models to link their architecture to geometrically sound models of the cortical surface, and the need for conceptual models to refine and adopt brain-atlas based approaches to allow precise brain anchoring of their modules. The detailed analysis of Phase 2 sets the stage for a brief introduction to Phase 1 of the program, including the case for a schema-theoretic approach to language production and perception presented in detail elsewhere. Unlike Dynamic Causal Modeling (DCM) and Bojak's mean field model, Synthetic ERP builds on models of networks that mediate the relation between the brain's inputs, outputs, and internal states in executing a specific task. The neural networks used for Synthetic ERP must include neuroanatomically realistic placement and orientation of the cortical pyramidal neurons. These constraints pose exciting challenges for future work in neural network modeling that is applicable to systems and cognitive neuroscience. Copyright © 2012 Elsevier Ltd. All rights reserved.
Jordan, John B; Whittington, Douglas A; Bartberger, Michael D; Sickmier, E Allen; Chen, Kui; Cheng, Yuan; Judd, Ted
2016-04-28
Fragment-based drug discovery (FBDD) has become a widely used tool in small-molecule drug discovery efforts. One of the most commonly used biophysical methods in detecting weak binding of fragments is nuclear magnetic resonance (NMR) spectroscopy. In particular, FBDD performed with (19)F NMR-based methods has been shown to provide several advantages over (1)H NMR using traditional magnetization-transfer and/or two-dimensional methods. Here, we demonstrate the utility and power of (19)F-based fragment screening by detailing the identification of a second-site fragment through (19)F NMR screening that binds to a specific pocket of the aspartic acid protease, β-secretase (BACE-1). The identification of this second-site fragment allowed the undertaking of a fragment-linking approach, which ultimately yielded a molecule exhibiting a more than 360-fold increase in potency while maintaining reasonable ligand efficiency and gaining much improved selectivity over cathepsin-D (CatD). X-ray crystallographic studies of the molecules demonstrated that the linked fragments exhibited binding modes consistent with those predicted from the targeted screening approach, through-space NMR data, and molecular modeling.
Yan, Zheping; Xu, Da; Chen, Tao; Zhang, Wei; Liu, Yibo
2018-01-01
Unmanned underwater vehicles (UUVs) have rapidly developed as mobile sensor networks recently in the investigation, survey, and exploration of the underwater environment. The goal of this paper is to develop a practical and efficient formation control method to improve work efficiency of multi-UUV sensor networks. Distributed leader-follower formation controllers are designed based on a state feedback and consensus algorithm. Considering that each vehicle is subject to model uncertainties and current disturbances, a second-order integral UUV model with a nonlinear function is established using the state feedback linearized method under current disturbances. For unstable communication among UUVs, communication failure and acoustic link noise interference are considered. Two-layer random switching communication topologies are proposed to solve the problem of communication failure. For acoustic link noise interference, accurate representation of valid communication information and noise stripping when designing controllers is necessary. Effective communication topology weights are designed to represent the validity of communication information interfered by noise. Utilizing state feedback and noise stripping, sufficient conditions for design formation controllers are proposed to ensure UUV formation achieves consensus under model uncertainties, current disturbances, and unstable communication. The stability of formation controllers is proven by the Lyapunov-Razumikhin theorem, and the validity is verified by simulation results. PMID:29473919
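The leader-follower consensus idea can be sketched for single-integrator followers tracking a leader with formation offsets; the graph, gains, and offsets below are hypothetical, and the model uncertainties, current disturbances, noisy acoustic links, and switching topologies handled in the paper are omitted.

```python
import numpy as np

# Leader-follower consensus formation sketch for single-integrator followers.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)            # follower communication graph
b = np.array([1.0, 0.0, 0.0, 1.0])                   # followers 0 and 3 hear the leader
offsets = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)  # square formation

leader = np.array([0.0, 0.0])
x = np.random.default_rng(4).normal(0.0, 3.0, size=(4, 2))  # initial follower positions
deg = A.sum(axis=1, keepdims=True)
dt = 0.05

for _ in range(400):
    e = x - (leader + offsets)                       # formation tracking errors
    u = -(deg * e - A @ e) - b[:, None] * e          # consensus term + leader pinning
    x = x + dt * u

print(np.round(x - leader, 3))                       # converges to the prescribed offsets
```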
Zhang, Xiaoshuai; Yang, Xiaowei; Yuan, Zhongshang; Liu, Yanxun; Li, Fangyu; Peng, Bin; Zhu, Dianwen; Zhao, Jinghua; Xue, Fuzhong
2013-01-01
For genome-wide association data analysis, two genes in any pathway, two SNPs in the two linked gene regions respectively or in the two linked exons respectively within one gene are often correlated with each other. We therefore proposed the concept of gene-gene co-association, which refers to the effects not only due to the traditional interaction under nearly independent condition but the correlation between two genes. Furthermore, we constructed a novel statistic for detecting gene-gene co-association based on Partial Least Squares Path Modeling (PLSPM). Through simulation, the relationship between traditional interaction and co-association was highlighted under three different types of co-association. Both simulation and real data analysis demonstrated that the proposed PLSPM-based statistic has better performance than single SNP-based logistic model, PCA-based logistic model, and other gene-based methods. PMID:23620809
Solutions of burnt-bridge models for molecular motor transport.
Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B; Artyomov, Maxim N
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called "bridges"), is investigated theoretically by analyzing discrete-state stochastic "burnt-bridge" models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed ("burned") with a probability p , creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
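A minimal Monte Carlo realization of a discrete-time burnt-bridge walk (periodic bridges, burning with probability p) is sketched below; the lattice parameters are illustrative, and the directional-burning convention used here is just one simple variant of the dynamics analyzed in the paper.

```python
import random

def burnt_bridge_velocity(p=0.5, spacing=10, steps=200_000, seed=5):
    """Mean velocity of an unbiased hopper on a 1D lattice in which every
    `spacing`-th link is a bridge. In this simple variant a bridge burns (with
    probability p) when crossed left-to-right, and a burnt link can no longer
    be crossed, which rectifies the otherwise symmetric diffusion."""
    rng = random.Random(seed)
    x, burnt = 0, set()
    for _ in range(steps):
        step = rng.choice((-1, 1))
        link = x if step == 1 else x - 1              # link being crossed
        if link in burnt:
            continue                                  # burnt bridges block the walker
        if step == 1 and link % spacing == 0 and rng.random() < p:
            burnt.add(link)                           # bridge destroyed after crossing
        x += step
    return x / steps

for p in (0.1, 0.5, 1.0):
    print(f"p = {p:.1f}   v ~ {burnt_bridge_velocity(p=p):.3f}")
```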
Exact Solutions of Burnt-Bridge Models for Molecular Motor Transport
NASA Astrophysics Data System (ADS)
Morozov, Alexander; Pronina, Ekaterina; Kolomeisky, Anatoly; Artyomov, Maxim
2007-03-01
Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called ``bridges''), is investigated theoretically by analyzing discrete-state stochastic ``burnt-bridge'' models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (``burned'') with a probability p, creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For general case of p<1 a new theoretical method is developed, and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics, periodic and random distribution of bridges and different burning dynamics are analyzed and compared. Theoretical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
ALARA: The next link in a chain of activation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, P.P.H.; Henderson, D.L.
1996-12-31
The Adaptive Laplace and Analytic Radioactivity Analysis [ALARA] code has been developed as the next link in the chain of DKR radioactivity codes. Its methods address the criticisms of DKR while retaining its best features. While DKR ignored loops in the transmutation/decay scheme to preserve the exactness of the mathematical solution, ALARA incorporates new computational approaches without jeopardizing the most important features of DKR's physical modelling and mathematical methods. The physical model uses "straightened-loop, linear chains" to achieve the same accuracy in the loop solutions as is demanded in the rest of the scheme. In cases where a chain has no loops, the exact DKR solution is used. Otherwise, ALARA adaptively chooses between a direct Laplace inversion technique and a Laplace expansion inversion technique to optimize the accuracy and speed of the solution. All of these methods result in matrix solutions which allow the fastest and most accurate solution of exact pulsing histories. Since the entire history is solved for each chain as it is created, ALARA achieves the optimum combination of high accuracy, high speed and low memory usage. 8 refs., 2 figs.
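For a straightened linear chain with distinct total destruction rates, the underlying analytic solution that this family of codes evaluates is the Bateman-type expression below, written here in generic notation; ALARA's adaptive Laplace treatment of loops and near-degenerate rates is not reproduced.

```latex
N_n(t) = N_1(0)\left(\prod_{i=1}^{n-1}\lambda_i\right)
         \sum_{i=1}^{n}\frac{e^{-\lambda_i t}}{\prod_{j=1,\, j\neq i}^{n}(\lambda_j-\lambda_i)}
```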
RF wave simulation for cold edge plasmas using the MFEM library
NASA Astrophysics Data System (ADS)
Shiraiwa, S.; Wright, J. C.; Bonoli, P. T.; Kolev, T.; Stowell, M.
2017-10-01
A newly developed generic electro-magnetic (EM) simulation tool for modeling RF wave propagation in SOL plasmas is presented. The primary motivation of this development is to extend the domain partitioning approach for incorporating arbitrarily shaped SOL plasmas and antennas into the TORIC core ICRF solver, previously demonstrated in 2D geometry [S. Shiraiwa, et al., "HISTORIC: extending core ICRF wave simulation to include realistic SOL plasmas", Nucl. Fusion, in press], to larger and more complicated simulations by including a realistic 3D antenna and integrating an RF rectified sheath potential model. Such an extension requires a scalable, high-fidelity 3D edge plasma wave simulation. We used the MFEM [
Method of locating underground mines fires
Laage, Linneas; Pomroy, William
1992-01-01
An improved method of locating an underground mine fire by comparing the pattern of measured combustion product arrival times at detector locations with a real-time, computer-generated array of simulated patterns. A number of electronic fire detection devices are linked through telemetry to a control station on the surface. The mine's ventilation is modeled on a digital computer using network analysis software. The time required to locate a fire consists of the time required to model the mine's ventilation, generate the arrival time array, scan the array, and match measured arrival time patterns to the simulated patterns.
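The matching step can be sketched as a least-squares comparison of the measured arrival-time vector against each row of the simulated arrival-time array; all numbers below are hypothetical, and detectors that have not yet alarmed would need masking in a fuller implementation.

```python
import numpy as np

# Hypothetical simulated arrival-time array: rows = candidate fire locations,
# columns = detector stations (values would come from the ventilation model).
simulated = np.array([
    [120.0, 300.0, 450.0, 610.0],
    [ 90.0, 150.0, 380.0, 520.0],
    [200.0, 240.0, 210.0, 700.0],
])
measured = np.array([95.0, 160.0, 370.0, 530.0])    # observed arrival times (s)

# Match measured pattern to the closest simulated pattern by squared residual.
residuals = ((simulated - measured) ** 2).sum(axis=1)
best = int(np.argmin(residuals))
print(f"best-matching candidate location: {best}, residual = {residuals[best]:.0f}")
```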
Optimal investments in digital communication systems in primary exchange area
NASA Astrophysics Data System (ADS)
Garcia, R.; Hornung, R.
1980-11-01
Integer linear optimization theory, following Gomory's method, was applied to the model planning of telecommunication networks in which all future investments are made in digital systems only. The integer decision variables are the numbers of digital systems that can be installed on cable or radio-relay links. The objective function is the total cost of extending the existing line capacity to meet the demand between primary and local exchanges. Traffic volume constraints and flow conservation at transit nodes complete the model. Results on computing time and method efficiency are illustrated by an example.
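A toy version of such a capacity-expansion integer program is sketched below; the costs, capacities, and demand are hypothetical, and the solver used here (SciPy's HiGHS-based milp) applies branch-and-bound with cutting planes rather than the original Gomory procedure described in the paper.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy capacity-expansion problem (all numbers hypothetical):
# x[0], x[1] = number of digital systems installed on a cable link and on a
# radio-relay link, each providing a fixed number of circuits.
cost = np.array([120.0, 90.0])                 # cost per installed system
capacity = np.array([[480.0, 300.0]])          # circuits provided per system
demand = 2000.0                                # circuits required between exchanges

res = milp(
    c=cost,                                    # minimize total extension cost
    constraints=LinearConstraint(capacity, lb=demand),
    integrality=np.ones(2),                    # decision variables are integers
    bounds=Bounds(lb=0),                       # non-negative installations
)
print(res.x, res.fun)                          # optimal systems per link and total cost
```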
Yang-Baxter and other relations for free-fermion and Ising models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, B.
1987-02-01
Eight-vertex, free fermion, and Ising models are formulated using a convention that emphasizes the algebra of the local transition operators that arise in the quantum inverse method. Equivalence classes of models are investigated, with particular emphasis on the role of the star-triangle relations. Using these results, a natural and symmetrical parametrization is introduced and Yang-Baxter relations are constructed in an elementary way. The paper concludes with a consideration of duality, which links the present work to a recent paper of Baxter on the free fermion model.
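For reference, the Yang-Baxter relation in its standard operator form with an additive spectral parameter (generic notation, not the paper's specific parametrization) reads:

```latex
R_{12}(u)\, R_{13}(u+v)\, R_{23}(v) = R_{23}(v)\, R_{13}(u+v)\, R_{12}(u)
```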
Linking Reflection and Technical Competence: The Logbook as an Instrument in Teacher Education.
ERIC Educational Resources Information Center
Korthagen, Fred A. J.
1999-01-01
Describes a framework for integrating reflection and teacher competency development into teacher education programs, introducing a spiral model for reflection, standard reflection questions, and a method of structuring logbooks, all designed to develop a competency for self-directed professional growth in interpersonal classroom behavior. An…
AN IMPROVED STRATEGY FOR REGRESSION OF BIOPHYSICAL VARIABLES AND LANDSAT ETM+ DATA. (R828309)
Empirical models are important tools for relating field-measured biophysical variables to remote sensing data. Regression analysis has been a popular empirical method of linking these two types of data to provide continuous estimates for variables such as biomass, percent wood...
NASA Technical Reports Server (NTRS)
Devasirvatham, D. M. J.; Hodge, D. B.
1981-01-01
A model of the microwave and millimeter wave link in the presence of atmospheric turbulence is presented with emphasis on satellite communications systems. The analysis is based on standard methods of statistical theory. The results are directly usable by the design engineer.
Leader Positivity and Follower Creativity: An Experimental Analysis
ERIC Educational Resources Information Center
Avey, James B.; Richmond, F. Lynn; Nixon, Don R.
2012-01-01
Using an experimental research design, 191 working adults were randomly assigned to two experimental conditions in order to test a theoretical model linking leader and follower positive psychological capital (PsyCap). Multiple methods were used to gather information from the participants. We found when leader PsyCap was manipulated experimentally,…
Sleep Disruptions and Emotional Insecurity Are Pathways of Risk for Children
ERIC Educational Resources Information Center
El-Sheikh, Mona; Buckhalt, Joseph A.; Cummings, E. Mark; Keller, Peggy
2007-01-01
Background: Sleep problems are prevalent in American children. A critical need is to identify sources and processes related to sleep disruptions and their sequelae. We examined a model linking parental marital conflict and children's emotional insecurity, sleep disruptions, and their adjustment and academic problems. Method: One hundred and…
While environmental toxicity testing typically focuses on organism-level endpoints such as mortality, growth, and reproduction, risk assessment guidelines specify protection goals at the level of the population and above. One method of linking these different levels of biological...
Pain, motor and gait assessment of murine osteoarthritis in a cruciate ligament transection model
Ruan, Merry ZC; Patel, Ronak M; Dawson, Brian C; Jiang, Ming-ming; Lee, Brendan HI
2013-01-01
Objective The major complaint of Osteoarthritis (OA) patients is pain. However, due to the nature of clinical studies and the limitation of animal studies, few studies have linked function impairment and behavioral changes in OA animal models to cartilage loss and histopathology. Our objective was to study surrogate markers of functional impairment in relation to cartilage loss and pathological changes in a post-traumatic mouse model of OA. Method We performed a battery of functional analyses in a mouse model of OA generated by cruciate ligament transection (CLT). The changes in functional analyses were linked to histological changes graded by OARSI standards, histological grading of synovitis, and volumetric changes of the articular cartilage and osteophytes quantified by phase contrast micro-CT. Results OA generated by CLT led to decreased time on rotarod, delayed response on hotplate analysis, and altered gait starting from 4 weeks after surgery. Activity in open field analysis did not change at 4, 8, or 12 weeks after CLT. The magnitude of behavioral changes was directly correlated with higher OARSI histological scores of OA, synovitis in the knee joints, cartilage volume loss, and osteophyte formation. Conclusion Our findings link functional analyses to histological grading, synovitis, comprehensive 3-dimensional assessment of cartilage volume and osteophyte formation. This serves as a reference for a mouse model in predicting outcomes of OA treatment. PMID:23973150
An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Saether, E.; Glaessgen, E.H.; Yamakov, V.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
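The statistical-averaging step at the heart of ESCM, replacing atom-to-node kinematic ties with averages of atomic displacements over local nodal volumes, can be sketched in one dimension as follows; the atom data and nodal partition are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical MD interface region: atom positions and displacements.
atoms_x = rng.uniform(0.0, 10.0, size=2000)              # 1D coordinates (nm)
atoms_u = 0.01 * atoms_x + rng.normal(0, 0.005, 2000)    # displacements with thermal noise

# FE nodes along the interface; each node owns a local atomic volume (a bin).
node_x = np.linspace(0.0, 10.0, 6)
edges = np.concatenate(([-np.inf], 0.5 * (node_x[1:] + node_x[:-1]), [np.inf]))
which = np.digitize(atoms_x, edges) - 1                   # atom -> nearest node

# Statistical average of atomic displacements per nodal volume: these means
# become the displacement boundary conditions handed to the FE model, and the
# FE solution in turn updates the MD boundary in the next iteration.
node_u = np.array([atoms_u[which == i].mean() for i in range(len(node_x))])
print(np.round(node_u, 4))
```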
G = MAT: linking transcription factor expression and DNA binding data.
Tretyakov, Konstantin; Laur, Sven; Vilo, Jaak
2011-01-31
Transcription factors are proteins that bind to motifs on the DNA and thus affect gene expression regulation. The qualitative description of the corresponding processes is therefore important for a better understanding of essential biological mechanisms. However, wet lab experiments targeted at the discovery of the regulatory interplay between transcription factors and binding sites are expensive. We propose a new, purely computational method for finding putative associations between transcription factors and motifs. This method is based on a linear model that combines sequence information with expression data. We present various methods for model parameter estimation and show, via experiments on simulated data, that these methods are reliable. Finally, we examine the performance of this model on biological data and conclude that it can indeed be used to discover meaningful associations. The developed software is available as a web tool and Scilab source code at http://biit.cs.ut.ee/gmat/.
Automated design of genetic toggle switches with predetermined bistability.
Chen, Shuobing; Zhang, Haoqian; Shi, Handuo; Ji, Weiyue; Feng, Jingchen; Gong, Yan; Yang, Zhenglin; Ouyang, Qi
2012-07-20
Synthetic biology aims to rationally construct biological devices with required functionalities. Methods that automate the design of genetic devices without post-hoc adjustment are therefore highly desired. Here we provide a method to predictably design genetic toggle switches with predetermined bistability. To accomplish this task, a biophysical model that links ribosome binding site (RBS) DNA sequence to toggle switch bistability was first developed by integrating a stochastic model with RBS design method. Then, to parametrize the model, a library of genetic toggle switch mutants was experimentally built, followed by establishing the equivalence between RBS DNA sequences and switch bistability. To test this equivalence, RBS nucleotide sequences for different specified bistabilities were in silico designed and experimentally verified. Results show that the deciphered equivalence is highly predictive for the toggle switch design with predetermined bistability. This method can be generalized to quantitative design of other probabilistic genetic devices in synthetic biology.
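The bistability being targeted can be illustrated with the classic deterministic two-repressor toggle model rather than the authors' stochastic, RBS-sequence-linked biophysical model; the parameters below are hypothetical.

```python
import numpy as np

def toggle_rhs(state, a1=15.0, a2=12.0, n=2.0):
    """Classic deterministic toggle switch: two mutually repressing genes."""
    u, v = state
    du = a1 / (1.0 + v**n) - u
    dv = a2 / (1.0 + u**n) - v
    return np.array([du, dv])

def settle(state, dt=0.01, steps=20000):
    """Crude forward-Euler integration to a steady state."""
    for _ in range(steps):
        state = state + dt * toggle_rhs(state)
    return state

# Starting from opposite corners of state space lands in two different
# stable states -- the signature of bistability.
print(np.round(settle(np.array([10.0, 0.1])), 2))   # high-u / low-v state
print(np.round(settle(np.array([0.1, 10.0])), 2))   # low-u / high-v state
```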
An Introduction to the BFS Method and Its Use to Model Binary NiAl Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Noebe, Ronald D.; Ferrante, J.; Amador, C.
1998-01-01
We introduce the Bozzolo-Ferrante-Smith (BFS) method for alloys as a computationally efficient tool for aiding in the process of alloy design. An intuitive description of the BFS method is provided, followed by a formal discussion of its implementation. The method is applied to the study of the defect structure of NiAl binary alloys. The groundwork is laid for a detailed progression to higher order NiAl-based alloys linking theoretical calculations and computer simulations based on the BFS method and experimental work validating each step of the alloy design process.
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
NASA Astrophysics Data System (ADS)
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. Then we propose the risk assessment model, the risk of decision-making errors and rank uncertainty degree to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
An evidential link prediction method and link predictability based on Shannon entropy
NASA Astrophysics Data System (ADS)
Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong
2017-09-01
Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new link prediction method based on similarity and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is crucial to consider link predictability as an intrinsic property of the network itself. Hence, this paper proposes a new link prediction approach called the evidential measure (EM), based on Dempster-Shafer theory. Moreover, it proposes a new method to measure link predictability via local information and Shannon entropy.
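As a rough, hedged illustration (not the paper's evidential measure or its Dempster-Shafer machinery), one can score non-observed links with a local similarity index and summarize the normalized score distribution with Shannon entropy as a crude predictability proxy; networkx and the example graph are assumptions of this sketch.

```python
import math
import networkx as nx

# Score non-observed links with common neighbors, then compute the Shannon
# entropy of the normalized score distribution: flatter distributions (higher
# entropy) give the predictor less to work with, i.e. lower predictability.
G = nx.karate_club_graph()
scores = [len(list(nx.common_neighbors(G, u, v))) + 1e-9
          for u, v in nx.non_edges(G)]

total = sum(scores)
probs = [s / total for s in scores]
entropy = -sum(p * math.log2(p) for p in probs)
print(f"candidate links: {len(scores)}, score entropy: {entropy:.2f} bits "
      f"(max {math.log2(len(scores)):.2f})")
```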
Software-defined Quantum Networking Ecosystem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Sadlier, Ronald
The software enables a user to perform modeling and simulation of software-defined quantum networks. The software addresses the problem of how to synchronize transmission of quantum and classical signals through multi-node networks and to demonstrate quantum information protocols such as quantum teleportation. The software approaches this problem by generating a graphical model of the underlying network and attributing properties to each node and link in the graph. The graphical model is then simulated using a combination of discrete-event simulators to calculate the expected state of each node and link in the graph at a future time. A user interacts with the software by providing an initial network model and instantiating methods for the nodes to transmit information with each other. This includes writing application scripts in Python that make use of the software library interfaces. A user then initiates the application scripts, which invoke the software simulation. The user then uses the built-in diagnostic tools to query the state of the simulation and to collect statistics on synchronization.
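The modeling pattern described, a graph whose nodes and links carry attributes driven by a discrete-event loop, can be sketched generically. The example below is not the OSTI software or its API; the node names, latency values, and event semantics are invented purely to illustrate the approach.

```python
# Illustrative sketch only: a graph model of a small network with per-link
# latency attributes and a tiny discrete-event loop that forwards a classical
# message hop by hop, delayed by each link's latency.
import heapq
import networkx as nx

net = nx.Graph()
net.add_edge("Alice", "Repeater", latency=0.5)   # per-link classical latency, ms
net.add_edge("Repeater", "Bob", latency=0.8)

events = [(0.0, "Alice")]        # (arrival time, node)
delivered = set()
while events:
    t, node = heapq.heappop(events)
    if node in delivered:
        continue
    delivered.add(node)
    print(f"t = {t:.2f} ms: message at {node}")
    for nbr in net.neighbors(node):
        if nbr not in delivered:
            heapq.heappush(events, (t + net[node][nbr]["latency"], nbr))
```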
Predicting missing links and identifying spurious links via likelihood analysis
NASA Astrophysics Data System (ADS)
Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun
2016-03-01
Real network data are often incomplete and noisy, so link prediction algorithms and spurious link identification algorithms can be applied. Thus far, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework in which a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. Such a method also finds applications in exploring the underlying network evolutionary mechanisms.
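The scoring rule can be made concrete with a toy Hamiltonian that is not one of the paper's: take H(G) = -J x (number of triangles), so P(G) is proportional to exp(-H(G)). Adding a non-observed link (u, v) then lowers H by J times the number of common neighbors of u and v, and the conditional probability of each candidate link follows by normalizing over all candidates.

```python
# Toy illustration (not the paper's Hamiltonians): with H(G) = -J * #triangles
# and P(G) proportional to exp(-H(G)), adding link (u, v) creates one triangle
# per common neighbor, so its relative weight is exp(J * |common neighbors|).
import math
import itertools
import networkx as nx

J = 1.0
G = nx.les_miserables_graph()
cand = [(u, v) for u, v in itertools.combinations(G, 2) if not G.has_edge(u, v)]

raw = {e: math.exp(J * len(list(nx.common_neighbors(G, *e)))) for e in cand}
Z = sum(raw.values())
scores = {e: w / Z for e, w in raw.items()}        # conditional probabilities

best = max(scores, key=scores.get)
print("most likely missing link:", best, f"p = {scores[best]:.3g}")
```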
Impacts of wireless charging lanes on travel time and energy consumption in a two-lane road system
NASA Astrophysics Data System (ADS)
He, Jia; Yang, Hai; Huang, Hai-Jun; Tang, Tie-Qiao
2018-06-01
In this paper, we propose a method to compare different energy consumption models and design a strategy to study the quantitative effects of a wireless charging lane (WCL) on each electric vehicle's (EV's) link travel time. We use the modified energy consumption model and the strategy to explore each EV's electricity consumption and link travel time in a two-lane system with a WCL. The numerical results show that EVs' charging behavior on the WCL causes drivers to execute lane-changing maneuvers frequently and that the WCL has prominent impacts on EV energy consumption and travel time: the capacity drops by 8%-17% while EV energy consumption increases by 3%-14% in the two-lane road system.
Intra-molecular cross-linking of acidic residues for protein structure studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruppa, Gary Hermann; Young, Malin M.; Novak, Petr
2005-03-01
Intra-molecular cross-linking has been suggested as a method of obtaining distance constraints that would be useful in developing structural models of proteins. Recent work published on intra-molecular cross-linking for protein structural studies has employed commercially available primary amine selective reagents that can cross-link lysine residues to other lysine residues or the amino terminus. Previous work using these cross-linkers has shown that for several proteins of known structure, the number of cross-links that can be obtained experimentally may be small compared to what would be expected from the known structure, due to the relative reactivity, distribution, and solvent accessibility of the lysines in the protein sequence. To overcome these limitations we have investigated the use of cross-linking reagents that can react with other reactive sidechains in proteins. We used 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC) to activate the carboxylic acid containing residues, aspartic acid (D), glutamic acid (E), and the carboxy terminus (O), for cross-linking reactions. Once activated, the DEO sidechains can react to form 'zero-length' cross-links with nearby primary amine containing residues, lysines (K) and the amino terminus (X), via the formation of a new amide bond. We also show that the EDC-activated DEO sidechains can be cross-linked to each other using dihydrazides, two hydrazide moieties connected by an alkyl cross-linker arm of variable length. Using these reagents, we have found three new 'zero-length' cross-links in ubiquitin consistent with its known structure (M1-E16, M1-E18, and K63-E64). Using the dihydrazide cross-linkers, we have unambiguously identified two new cross-links (D21-D32 and E24-D32). Using a library of dihydrazide cross-linkers with varying arm length, we have shown that there is a minimum arm length required for the DEO-DEO cross-links of 5.8 angstroms. These results show that additional structural information can be obtained by exploiting new cross-linker chemistry, increasing the probability that the protein target of choice will yield sufficient distance constraints to develop a structural model.
Computational physiology and the Physiome Project.
Crampin, Edmund J; Halstead, Matthew; Hunter, Peter; Nielsen, Poul; Noble, Denis; Smith, Nicolas; Tawhai, Merryn
2004-01-01
Bioengineering analyses of physiological systems use the computational solution of physical conservation laws on anatomically detailed geometric models to understand the physiological function of intact organs in terms of the properties and behaviour of the cells and tissues within the organ. By linking behaviour in a quantitative, mathematically defined sense across multiple scales of biological organization--from proteins to cells, tissues, organs and organ systems--these methods have the potential to link patient-specific knowledge at the two ends of these spatial scales. A genetic profile linked to cardiac ion channel mutations, for example, can be interpreted in relation to body surface ECG measurements via a mathematical model of the heart and torso, which includes the spatial distribution of cardiac ion channels throughout the myocardium and the individual kinetics for each of the approximately 50 types of ion channel, exchanger or pump known to be present in the heart. Similarly, linking molecular defects such as mutations of chloride ion channels in lung epithelial cells to the integrated function of the intact lung requires models that include the detailed anatomy of the lungs, the physics of air flow, blood flow and gas exchange, together with the large deformation mechanics of breathing. Organizing this large body of knowledge into a coherent framework for modelling requires the development of ontologies, markup languages for encoding models, and web-accessible distributed databases. In this article we review the state of the field at all the relevant levels, and the tools that are being developed to tackle such complexity. Integrative physiology is central to the interpretation of genomic and proteomic data, and is becoming a highly quantitative, computer-intensive discipline.
Wright, J K; Tschopp, J; Jaton, J C
1980-01-01
Pure dimers, trimers, tetramers and pentamers of rabbit non-immune IgG (immunoglobulin G) or antibody IgG were prepared by polymerization in the presence of the bifunctional cross-linking reagent dithiobis(succinimidyl propionate). Oligomerization was performed either in the presence of polysaccharide antigen and specific monomeric antibody (method A) or by random cross-linking of non-immune rabbit IgG in the absence of antigen (method B). By repeated gel-filtration chromatography, samples prepared by both methods exhibited a single band in analytical sodium dodecyl sulphate/polyacrylamide-gel electrophoresis. The electrophoretic mobilities of samples prepared by method A were slightly greater than those for the corresponding samples prepared by method B. This might suggest a role played by antigen in the orientation of IgG molecules within the clusters, which may be more compact than those formed by random cross-linking. The average numbers of cross-linker molecules per oligomer varied between 3 and 6 for clusters made by method A and between 1 and 3 for clusters made by method B. Ultracentrifugal analyses of the oligomers yielded sedimentation coefficients (S20,w) of 9.6S for the dimer, 11.2S for the trimer, 13.6S for the tetramer and 16.1S for the pentamer. Comparison of the observed sedimentation coefficients with those predicted by various hydrodynamic models suggested these oligomers possessed open and linear structures. Reduction of the cross-linking molecules converted oligomers into monomeric species of IgG. C.d. spectra of some oligomers studied in the range 200-250 nm were essentially the same as those of monomeric IgG molecules, thus strongly suggesting no major conformational changes in IgG molecules within clusters. These oligomers were found to be stable for up to 2 months when stored at -70 degrees C. PMID:7188424
Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia
2016-01-01
Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.
Accuracy test for link prediction in terms of similarity index: The case of WS and BA models
NASA Astrophysics Data System (ADS)
Ahn, Min-Woo; Jung, Woo-Sung
2015-07-01
Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
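The experiment outlined in this abstract, hiding links in model networks and reporting accuracy, is easy to reproduce in miniature. The snippet below is a simplified stand-in (one similarity index, sampled AUC, arbitrary network sizes), not the authors' full study with six indices and precision.

```python
# Minimal sketch of the kind of experiment described: hide 10% of the links
# of a model network, score node pairs with the common-neighbors index, and
# estimate AUC as the probability that a hidden link outscores a non-link.
import random
import networkx as nx

def auc_common_neighbors(G, frac=0.1, samples=5000, seed=0):
    rng = random.Random(seed)
    edges = list(G.edges())
    probe = rng.sample(edges, int(frac * len(edges)))   # hidden (probe) links
    train = G.copy()
    train.remove_edges_from(probe)

    nodes = list(G.nodes())
    def cn(u, v):
        return len(list(nx.common_neighbors(train, u, v)))

    hits = 0.0
    for _ in range(samples):
        u1, v1 = rng.choice(probe)                      # a hidden (true) link
        while True:                                     # a genuine non-link
            u2, v2 = rng.sample(nodes, 2)
            if not G.has_edge(u2, v2):
                break
        s1, s2 = cn(u1, v1), cn(u2, v2)
        hits += 1.0 if s1 > s2 else 0.5 if s1 == s2 else 0.0
    return hits / samples

ws = nx.watts_strogatz_graph(500, 10, 0.1, seed=1)
ba = nx.barabasi_albert_graph(500, 5, seed=1)
print("WS AUC:", auc_common_neighbors(ws))
print("BA AUC:", auc_common_neighbors(ba))
```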
Linked independent component analysis for multimodal data fusion.
Groves, Adrian R; Beckmann, Christian F; Smith, Steve M; Woolrich, Mark W
2011-02-01
In recent years, neuroimaging studies have increasingly been acquiring multiple modalities of data and searching for task- or disease-related changes in each modality separately. A major challenge in analysis is to find systematic approaches for fusing these differing data types together to automatically find patterns of related changes across multiple modalities, when they exist. Independent Component Analysis (ICA) is a popular unsupervised learning method that can be used to find the modes of variation in neuroimaging data across a group of subjects. When multimodal data is acquired for the subjects, ICA is typically performed separately on each modality, leading to incompatible decompositions across modalities. Using a modular Bayesian framework, we develop a novel "Linked ICA" model for simultaneously modelling and discovering common features across multiple modalities, which can potentially have completely different units, signal- and contrast-to-noise ratios, voxel counts, spatial smoothnesses and intensity distributions. Furthermore, this general model can be configured to allow tensor ICA or spatially-concatenated ICA decompositions, or a combination of both at the same time. Linked ICA automatically determines the optimal weighting of each modality, and also can detect single-modality structured components when present. This is a fully probabilistic approach, implemented using Variational Bayes. We evaluate the method on simulated multimodal data sets, as well as on a real data set of Alzheimer's patients and age-matched controls that combines two very different types of structural MRI data: morphological data (grey matter density) and diffusion data (fractional anisotropy, mean diffusivity, and tensor mode). Copyright © 2010 Elsevier Inc. All rights reserved.
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
Assessing Low-Intensity Relationships in Complex Networks
Spitz, Andreas; Gimmler, Anna; Stoeck, Thorsten; Zweig, Katharina Anna; Horvát, Emőke-Ágnes
2016-01-01
Many large network data sets are noisy and contain links representing low-intensity relationships that are difficult to differentiate from random interactions. This is especially relevant for high-throughput data from systems biology, large-scale ecological data, but also for Web 2.0 data on human interactions. In these networks with missing and spurious links, it is possible to refine the data based on the principle of structural similarity, which assesses the shared neighborhood of two nodes. By using similarity measures to globally rank all possible links and choosing the top-ranked pairs, true links can be validated, missing links inferred, and spurious observations removed. While many similarity measures have been proposed to this end, there is no general consensus on which one to use. In this article, we first contribute a set of benchmarks for complex networks from three different settings (e-commerce, systems biology, and social networks) and thus enable a quantitative performance analysis of classic node similarity measures. Based on this, we then propose a new methodology for link assessment called z* that assesses the statistical significance of the number of their common neighbors by comparison with the expected value in a suitably chosen random graph model and which is a consistently top-performing algorithm for all benchmarks. In addition to a global ranking of links, we also use this method to identify the most similar neighbors of each single node in a local ranking, thereby showing the versatility of the method in two distinct scenarios and augmenting its applicability. Finally, we perform an exploratory analysis on an oceanographic plankton data set and find that the distribution of microbes follows similar biogeographic rules as those of macroorganisms, a result that rejects the global dispersal hypothesis for microbes. PMID:27096435
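The published z* statistic compares common-neighbor counts against a suitably chosen random graph model; the sketch below substitutes the simplest possible baseline, an Erdős–Rényi graph with the same density, in which the count for a fixed pair is Binomial(n-2, p^2). It illustrates the z-score idea only and is not the paper's definition.

```python
# Illustrative sketch (not the published z* definition): assess a node pair by
# the z-score of its observed number of common neighbors against an
# Erdos-Renyi baseline with the same density, where the count is
# Binomial(n-2, p^2) with mean (n-2)p^2 and variance (n-2)p^2(1-p^2).
import math
import networkx as nx

def z_score_common_neighbors(G, u, v):
    n = G.number_of_nodes()
    p = nx.density(G)
    obs = len(list(nx.common_neighbors(G, u, v)))
    mu = (n - 2) * p**2
    sigma = math.sqrt((n - 2) * p**2 * (1 - p**2))
    return (obs - mu) / sigma

G = nx.karate_club_graph()
print(z_score_common_neighbors(G, 0, 33))   # hub-hub pair
print(z_score_common_neighbors(G, 14, 15))  # low-degree pair
```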
Percolation of spatially constrained Erdős-Rényi networks with degree correlations.
Schmeltzer, C; Soriano, J; Sokolov, I M; Rüdiger, S
2014-01-01
Motivated by experiments on activity in neuronal cultures [J. Soriano, M. Rodríguez Martínez, T. Tlusty, and E. Moses, Proc. Natl. Acad. Sci. 105, 13758 (2008)], we investigate the percolation transition and critical exponents of spatially embedded Erdős-Rényi networks with degree correlations. In our model networks, nodes are randomly distributed in a two-dimensional spatial domain, and the connection probability depends on Euclidean link length by a power law as well as on the degrees of linked nodes. Generally, spatial constraints lead to higher percolation thresholds in the sense that more links are needed to achieve global connectivity. However, degree correlations favor or do not favor percolation depending on the connectivity rules. We employ two construction methods to introduce degree correlations. In the first one, nodes stay homogeneously distributed and are connected via a distance- and degree-dependent probability. We observe that assortativity in the resulting network leads to a decrease of the percolation threshold. In the second construction method, nodes are first spatially segregated depending on their degree and afterwards connected with a distance-dependent probability. In this segregated model, we find a threshold increase that accompanies the rising assortativity. Additionally, when the network is constructed in a disassortative way, we observe that this property has little effect on the percolation transition.
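Without the degree-correlation machinery, the underlying spatial ensemble can be generated in a few lines. The sketch below implements only the homogeneous placement step: nodes scattered in the unit square and connected with a probability that decays with Euclidean distance as a power law, reporting the giant-component fraction. The prefactor and exponent are arbitrary choices, not values from the paper.

```python
# Sketch of the first (homogeneous) construction: scatter nodes in the unit
# square, connect each pair with probability min(1, c * d^(-alpha)) where d is
# Euclidean distance, then check the largest connected component.
import random
import math
import networkx as nx

def spatial_er(n=500, alpha=3.0, c=2e-4, seed=0):
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(n)}
    G = nx.Graph()
    G.add_nodes_from(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if rng.random() < min(1.0, c * d**(-alpha)):
                G.add_edge(i, j)
    return G

G = spatial_er()
giant = max(nx.connected_components(G), key=len)
print(f"mean degree {2 * G.number_of_edges() / G.number_of_nodes():.2f}, "
      f"giant component fraction {len(giant) / G.number_of_nodes():.2f}")
```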
Animal movement: Statistical models for telemetry data
Hooten, Mevin B.; Johnson, Devin S.; McClintock, Brett T.; Morales, Juan M.
2017-01-01
The study of animal movement has always been a key element in ecological science, because it is inherently linked to critical processes that scale from individuals to populations and communities to ecosystems. Rapid improvements in biotelemetry data collection and processing technology have given rise to a variety of statistical methods for characterizing animal movement. The book serves as a comprehensive reference for the types of statistical models used to study individual-based animal movement.
Modeling water quality, temperature, and flow in Link River, south-central Oregon
Sullivan, Annett B.; Rounds, Stewart A.
2016-09-09
The 2.1-km (1.3-mi) Link River connects Upper Klamath Lake to the Klamath River in south-central Oregon. A CE-QUAL-W2 flow and water-quality model of Link River was developed to provide a connection between an existing model of the upper Klamath River and any existing or future models of Upper Klamath Lake. Water-quality sampling at six locations in Link River was done during 2013–15 to support model development and to provide a better understanding of instream biogeochemical processes. The short reach and high velocities in Link River resulted in fast travel times and limited water-quality transformations, except for dissolved oxygen. Reaeration through the reach, especially at the falls in Link River, was particularly important in moderating dissolved oxygen concentrations that at times entered the reach at Link River Dam with marked supersaturation or subsaturation. This reaeration resulted in concentrations closer to saturation downstream at the mouth of Link River.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
Numerical analysis of right-half plane zeros for a single-link manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Girvin, Douglas Lynn
1992-01-01
The purpose of this research is to further develop an understanding of how nonminimum phase zero location is affected by structural link design. As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in controls problems. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. The research targets a single-link manipulator operating in the horizontal plane and modeled as an Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems, and to develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero locations fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to inverse dynamic algorithms currently under development at Georgia Tech.
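The defining check, whether any transfer-function zeros lie in the right half of the Laplace plane, is a one-liner once a numerator polynomial is available. The fragment below uses an arbitrary placeholder polynomial, not coefficients from the thesis' transfer-matrix beam model.

```python
# Generic illustration (placeholder coefficients, not the thesis' model):
# right-half-plane zeros are the numerator roots with positive real part.
import numpy as np

num = [1.0, -2.0, -15.0]          # s^2 - 2s - 15 = (s - 5)(s + 3)
zeros = np.roots(num)
rhp = zeros[zeros.real > 0]
print("zeros:", zeros, "-> nonminimum phase:", rhp.size > 0)
```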
Cognition and procedure representational requirements for predictive human performance models
NASA Technical Reports Server (NTRS)
Corker, K.
1992-01-01
Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that described procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulative and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods including procedural backtracking with concurrent search, temporal reasoning, and constraint checking for partial ordering of procedures. Finally, the representation is being linked to models of human decision making processes that include heuristic, propositional and prescriptive judgement models that are sensitive to the procedural content in which the valuative functions are being performed.
Method of making superconducting cylinders for flux detectors
Goodkind, J.M.; Stolfa, D.L.
1971-07-06
A method of making superconducting cylinders of the "weak link" type is provided. The method allows the weak link to be made much smaller than was heretofore possible, thereby greatly increasing sensitivity and operating temperature range when the cylinder is used in a flux detector. The resistance of the weak link is monitored continuously as metal is removed from the link by electrochemical action.
NASA Astrophysics Data System (ADS)
Ciminelli, Caterina; Dell'Olio, Francesco; Armenise, Mario N.; Iacomacci, Francesco; Pasquali, Franca; Formaro, Roberto
2017-11-01
A fiber optic digital link for on-board data handling is modeled, designed and optimized in this paper. Design requirements and constraints relevant to the link, which is set within the framework of novel on-board processing architectures, are discussed. Two possible link configurations are investigated, showing their advantages and disadvantages. An accurate mathematical model of each link component and of the entire system is reported, and results of link simulations based on those models are presented. Finally, some details on the optimized design are provided.
Xu, Lifeng; Henke, Michael; Zhu, Jun; Kurth, Winfried; Buck-Sorlin, Gerhard
2011-01-01
Background and Aims Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype–phenotype model, we present here a three-dimensional functional–structural plant model (FSPM) of rice, in which some model parameters are controlled by functions describing the effect of main-effect and epistatic QTLs. Methods The model simulates the growth and development of rice based on selected ecophysiological processes, such as photosynthesis (source process) and organ formation, growth and extension (sink processes). It was devised using GroIMP, an interactive modelling platform based on the Relational Growth Grammar formalism (RGG). RGG rules describe the course of organ initiation and extension resulting in final morphology. The link between the phenotype (as represented by the simulated rice plant) and the QTL genotype was implemented via a data interface between the rice FSPM and the QTLNetwork software, which computes predictions of QTLs from map data and measured trait data. Key Results Using plant height and grain yield, it is shown how QTL information for a given trait can be used in an FSPM, computing and visualizing the phenotypes of different lines of a mapping population. Furthermore, we demonstrate how modification of a particular trait feeds back on the entire plant phenotype via the physiological processes considered. Conclusions We linked a rice FSPM to a quantitative genetic model, thereby employing QTL information to refine model parameters and visualizing the dynamics of development of the entire phenotype as a result of ecophysiological processes, including the trait(s) for which genetic information is available. Possibilities for further extension of the model, for example for the purposes of ideotype breeding, are discussed. PMID:21247905
Logic modeling and the ridiculome under the rug
2012-01-01
Logic-derived modeling has been used to map biological networks and to study arbitrary functional interactions, and fine-grained kinetic modeling can accurately predict the detailed behavior of well-characterized molecular systems; at present, however, neither approach comes close to unraveling the full complexity of a cell. The current data revolution offers significant promises and challenges to both approaches - and could bring them together as it has spurred the development of new methods and tools that may help to bridge the many gaps between data, models, and mechanistic understanding. Have you used logic modeling in your research? It would not be surprising if many biologists would answer no to this hypothetical question. And it would not be true. In high school biology we already became familiar with cartoon diagrams that illustrate basic mechanisms of the molecular machinery operating inside cells. These are nothing else but simple logic models. If receptor and ligand are present, then receptor-ligand complexes form; if a receptor-ligand complex exists, then an enzyme gets activated; if the enzyme is active, then a second messenger is being produced; and so on. Such chains of causality are the essence of logic models (Figure 1a). Arbitrary events and mechanisms are abstracted; relationships are simplified and usually involve just two possible conditions and three possible consequences. The presence or absence of one or more molecule, activity, or function, [some icons in the cartoon] will determine whether another one of them will be produced (created, up-regulated, stimulated) [a 'positive' link] or destroyed (degraded, down-regulated, inhibited) [a 'negative' link], or be unaffected [there is no link]. The icons and links often do not follow a standardized format, but when we look at such a cartoon diagram, we believe that we 'understand' how the system works. Because our brain is easily able to process these relationships, these diagrams allow us to answer two fundamental types of questions related to the system: why (are certain things happening)? What if (we make some changes)? PMID:23171629
Willey, Barbara; Waiswa, Peter; Kajjo, Darious; Munos, Melinda; Akuze, Joseph; Allen, Elizabeth; Marchant, Tanya
2018-06-01
Improving maternal and newborn health requires improvements in the quality of facility-based care. This is challenging to measure: routine data may be unreliable; respondents in population surveys may be unable to accurately report on quality indicators; and facility assessments lack population level denominators. We explored methods for linking access to skilled birth attendance (SBA) from household surveys to data on provision of care from facility surveys with the aim of estimating population level effective coverage reflecting access to quality care. We used data from Mayuge District, Uganda. Data from household surveys on access to SBA were linked to health facility assessment census data on readiness to provide basic emergency obstetric and newborn care (BEmONC) in the same district. One individual- and two ecological-linking methods were applied. All methods used household survey reports on where care at birth was accessed. The individual-linking method linked this to data about facility readiness from the specific facility where each woman delivered. The first ecological-linking approach used a district-wide mean estimate of facility readiness. The second used an estimate of facility readiness adjusted by level of health facility accessed. Absolute differences between estimates derived from the different linking methods were calculated, and agreement examined using Lin's concordance correlation coefficient. A total of 1177 women resident in Mayuge reported a birth during 2012-13. Of these, 664 took place in facilities within Mayuge, and were eligible for linking to the census of the district's 38 facilities. 55% were assisted by a SBA in a facility. Using the individual-linking method, effective coverage of births that took place with an SBA in a facility ready to provide BEmONC was just 10% (95% confidence interval CI 3-17). The absolute difference between the individual- and ecological-level linking method adjusting for facility level was one percentage point (11%), and tests suggested good agreement. The ecological method using the district-wide estimate demonstrated poor agreement. The proportion of women accessing appropriately equipped facilities for care at birth is far lower than the coverage of facility delivery. To realise the life-saving potential of health services, countries need evidence to inform actions that address gaps in the provision of quality care. Linking household and facility-based information provides a simple but innovative method for estimating quality of care at the population level. These encouraging findings suggest that linking data sets can result in meaningful evidence even when the exact location of care seeking is not known.
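The individual-level linking step described here amounts to a record merge: each birth carries an identifier for the facility where care was sought, which is joined to that facility's readiness indicator. The sketch below illustrates the idea with invented column names and toy data; it is not the study's analysis code.

```python
# Schematic of the individual-level linking step (column names and data are
# invented): merge each birth record with the readiness flag of the facility
# where care was sought; effective coverage is the share of births that were
# SBA-attended in a facility flagged as ready.
import pandas as pd

births = pd.DataFrame({
    "woman_id": [1, 2, 3, 4],
    "facility_id": ["F1", "F2", "F1", None],   # None = home birth
    "sba": [1, 1, 1, 0],
})
facilities = pd.DataFrame({
    "facility_id": ["F1", "F2", "F3"],
    "bemonc_ready": [1, 0, 1],
})

linked = births.merge(facilities, on="facility_id", how="left")
linked["effective"] = (linked["sba"] == 1) & (linked["bemonc_ready"] == 1)
print("effective coverage:", linked["effective"].mean())
```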
NASA Astrophysics Data System (ADS)
Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.
2009-12-01
A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
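The contrast the authors draw between block Gauss-Seidel and Newton-type coupling can be illustrated on a toy problem in which two scalar "black-box" solvers exchange an interface value. The functions, tolerances and iteration limits below are arbitrary; this is a sketch of the general fixed-point-versus-Newton idea, not the ABN algorithm itself.

```python
# Toy illustration of coupling two black-box solvers (not the paper's ABN
# algorithm): the matching condition is the residual r(u) = S2(S1(u)) - u.
# Block Gauss-Seidel iterates the fixed point directly; Newton on r(u) with a
# finite-difference derivative typically needs far fewer iterations.
import math

def solver1(u):              # black box 1: returns its interface value given u
    return math.tanh(2.0 * u) + 0.5

def solver2(w):              # black box 2: returns its interface value given w
    return 0.8 * math.cos(w)

def residual(u):
    return solver2(solver1(u)) - u

# Block Gauss-Seidel (fixed-point) iteration
u = 0.0
for k in range(1000):
    u_new = solver2(solver1(u))
    converged = abs(u_new - u) < 1e-10
    u = u_new
    if converged:
        break
print("Gauss-Seidel:", k + 1, "iterations, u =", round(u, 8))

# Newton iteration on the interface residual
u, h = 0.0, 1e-6
for k in range(50):
    r = residual(u)
    if abs(r) < 1e-10:
        break
    drdu = (residual(u + h) - r) / h
    u -= r / drdu
print("Newton:     ", k + 1, "iterations, u =", round(u, 8))
```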
Cross-linked polyvinyl alcohol films as alkaline battery separators
NASA Technical Reports Server (NTRS)
Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.
1983-01-01
Cross-linking methods have been investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. Then pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide-zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.
Cross-linked polyvinyl alcohol films as alkaline battery separators
NASA Technical Reports Server (NTRS)
Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.
1982-01-01
Cross-linking methods were investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. The pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide - zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.
ERIC Educational Resources Information Center
Ojanen, Tiina; Little, Todd D.
2010-01-01
This special section was inspired by the recent increased interest and methodological advances in the assessment of context-specificity in child and adolescent social development. While the effects of groups, situations, and social relationships on cognitive, affective and behavioral development have long been acknowledged in theoretical…
The US EPA ToxCast program is using in vitro HTS (High-Throughput Screening) methods to profile and model bioactivity of environmental chemicals. The main goals of the ToxCast program are to generate predictive signatures of toxicity, and ultimately provide rapid and cost-effecti...
Comparison of dew point temperature estimation methods in Southwestern Georgia
Marcus D. Williams; Scott L. Goodrick; Andrew Grundstein; Marshall Shepherd
2015-01-01
Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew point temperatures were estimated and evaluated at 14 stations in...
ERIC Educational Resources Information Center
Rinehart, Steven D.; Ahern, Terence C.
2016-01-01
Computer applications related to reading instruction have become commonplace in schools and link with established components of the reading process, emergent skills, decoding, comprehension, vocabulary, and fluency. This article focuses on computer technology in conjunction with durable methods for building oral reading fluency when readers…
Linking Combat Systems Capabilities and Ship Design Through Modeling and Computer Simulation
2013-09-01
Table of contents fragments (extraction residue): C. Overview of Five-Parameter Method; 1. Lift/Drag Ratio (L/D Ratio); Parameters for Testing: 1. Lift/Drag Ratio (calculated value), 2. Overall Propulsion; G. Metric Conversions (Jane's data); H. Decomposition: Lift-to-Drag Ratio and ...
Graph Structure in Three National Academic Webs: Power Laws with Anomalies.
ERIC Educational Resources Information Center
Thelwall, Mike; Wilkinson, David
2003-01-01
Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)
Strategic combinations and tiered application of alternative testing methods to replace or minimize the use of animal models is attracting much attention. With the advancement of high throughput screening (HTS) assays and legacy databases providing in vivo testing results, suffic...
Lessons Learned from the Whole Child and Coordinated School Health Approaches
ERIC Educational Resources Information Center
Rasberry, Catherine N.; Slade, Sean; Lohrmann, David K.; Valois, Robert F.
2015-01-01
Background: The new Whole School, Whole Community, Whole Child (WSCC) model, designed to depict links between health and learning, is founded on concepts of coordinated school health (CSH) and a whole child approach to education. Methods: The existing literature, including scientific articles and key publications from national agencies and…
Mark-recapture with multiple, non-invasive marks.
Bonner, Simon J; Holmberg, Jason
2013-09-01
Non-invasive marks, including pigmentation patterns, acquired scars, and genetic markers, are often used to identify individuals in mark-recapture experiments. If animals in a population can be identified from multiple, non-invasive marks then some individuals may be counted twice in the observed data. Analyzing the observed histories without accounting for these errors will provide incorrect inference about the population dynamics. Previous approaches to this problem include modeling data from only one mark and combining estimators obtained from each mark separately assuming that they are independent. Motivated by the analysis of data from the ECOCEAN online whale shark (Rhincodon typus) catalog, we describe a Bayesian method to analyze data from multiple, non-invasive marks that is based on the latent-multinomial model of Link et al. (2010, Biometrics 66, 178-185). Further to this, we describe a simplification of the Markov chain Monte Carlo algorithm of Link et al. (2010, Biometrics 66, 178-185) that leads to more efficient computation. We present results from the analysis of the ECOCEAN whale shark data and from simulation studies comparing our method with the previous approaches. © 2013, The International Biometric Society.
Predicting links based on knowledge dissemination in complex network
NASA Astrophysics Data System (ADS)
Zhou, Wen; Jia, Yifan
2017-04-01
Link prediction is the task of mining the missing links in networks or predicting the next vertex pair to be connected by a link. Many link prediction methods were inspired by the evolutionary processes of networks. In this paper, a new mechanism for the formation of complex networks called knowledge dissemination (KD) is proposed, under the assumption that knowledge disseminates through the paths of a network. Accordingly, a new link prediction method, knowledge dissemination based link prediction (KDLP), is proposed to test KD. KDLP characterizes vertex similarity based on knowledge quantity (KQ), which measures the importance of a vertex through its H-index. Extensive numerical simulations on six real-world networks demonstrate that KDLP is a strong link prediction method that achieves higher prediction accuracy than four well-known similarity measures, including common neighbors, the local path index, average commute time and the matrix forest index. Furthermore, based on the common conclusion that an excellent link prediction method reveals a good evolving mechanism, the experimental results suggest that KD is a plausible network evolving mechanism for the formation of complex networks.
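The abstract measures a vertex's knowledge quantity through an H-index. A common node-level variant is the largest h such that the node has at least h neighbors of degree at least h; the snippet below computes that quantity as a plausible stand-in, with the caveat that the paper's exact KQ definition and how it enters the similarity score may differ.

```python
# Rough sketch (the paper's exact KQ definition may differ): a node-level
# H-index computed from neighbor degrees, used here only as a simple
# importance score for vertices.
import networkx as nx

def h_index(G, node):
    degs = sorted((G.degree(nbr) for nbr in G.neighbors(node)), reverse=True)
    h = 0
    for i, d in enumerate(degs, start=1):
        if d >= i:
            h = i
        else:
            break
    return h

G = nx.karate_club_graph()
kq = {v: h_index(G, v) for v in G}
print(sorted(kq.items(), key=lambda kv: -kv[1])[:5])   # highest-KQ vertices
```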
3D road marking reconstruction from street-level calibrated stereo pairs
NASA Astrophysics Data System (ADS)
Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier
This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.
Predicting missing links via correlation between nodes
NASA Astrophysics Data System (ADS)
Liao, Hao; Zeng, An; Zhang, Yi-Cheng
2015-10-01
As a fundamental problem in many different fields, link prediction aims to estimate the likelihood of an existing link between two nodes based on the observed information. Since this problem is related to many applications ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently and many methods have been proposed so far. The essential challenge of link prediction is to estimate the similarity between nodes. Most of the existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to calculate similarity based on high order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform the existing methods, especially in sparse networks.
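The core of the proposal, Pearson correlation between node profiles, can be prototyped directly with numpy. The sketch below builds two-hop path-count profiles from the adjacency matrix and ranks non-adjacent pairs by the correlation of their rows; the fusion with the resource-allocation index reported in the paper is not included, and the choice of test graph is arbitrary.

```python
# Minimal sketch of correlation-based similarity (not the paper's full hybrid
# with resource allocation): use each node's row of A^2 (two-hop path counts)
# as a profile vector and score node pairs by the Pearson correlation
# coefficient of their profiles.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
profiles = A @ A                       # two-hop path counts as node profiles
C = np.corrcoef(profiles)              # C[u, v] = Pearson correlation of rows u, v

# Rank non-adjacent pairs by correlation
n = A.shape[0]
pairs = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v] == 0]
pairs.sort(key=lambda p: C[p], reverse=True)
print("top predicted links:", pairs[:5])
```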
Cascaded Optimization for a Persistent Data Ferrying Unmanned Aircraft
NASA Astrophysics Data System (ADS)
Carfang, Anthony
This dissertation develops and assesses a cascaded method for designing optimal periodic trajectories and link schedules for an unmanned aircraft to ferry data between stationary ground nodes. This results in a fast solution method without the need to artificially constrain system dynamics. Focusing on a fundamental ferrying problem that involves one source and one destination, but includes complex vehicle and Radio-Frequency (RF) dynamics, a cascaded structure to the system dynamics is uncovered. This structure is exploited by reformulating the nonlinear optimization problem into one that reduces the independent control to the vehicle's motion, while the link scheduling control is folded into the objective function and implemented as an optimal policy that depends on candidate motion control. This formulation is proven to maintain optimality while reducing computation time in comparison to traditional ferry optimization methods. The discrete link scheduling problem takes the form of a combinatorial optimization problem that is known to be NP-Hard. A derived necessary condition for optimality guides the development of several heuristic algorithms, specifically the Most-Data-First Algorithm and the Knapsack Adaptation. These heuristics are extended to larger ferrying scenarios, and assessed analytically and through Monte Carlo simulation, showing better throughput performance in the same order of magnitude of computation time in comparison to other common link scheduling policies. The cascaded optimization method is implemented with a novel embedded software system on a small, unmanned aircraft to validate the simulation results with field experiments. To address the sensitivity of results on trajectory tracking performance, a system that combines motion and link control with waypoint-based navigation is developed and assessed through field experiments. The data ferrying algorithms are further extended by incorporating a Gaussian process to opportunistically learn the RF environment. By continuously improving RF models, the cascaded planner can continually improve the ferrying system's overall performance.
Neighborhood linking social capital as a predictor of drug abuse: A Swedish national cohort study
Sundquist, Jan; Sjöstedt, Cecilia; Winkleby, Marilyn; Li, Xinjun; Kendler, Kenneth S.; Sundquist, Kristina
2016-01-01
Aims This study examines the association between the incidence of drug abuse (DA) and linking (communal) social capital, a theoretical concept describing the amount of trust between individuals and societal institutions. Methods We present results from an 8-year population-based cohort study that followed all residents in Sweden, aged 15–44, from 2003 through 2010, for a total of 1,700,896 men and 1,642,798 women. Linking social capital was conceptualized as the proportion of people in a geographically defined neighborhood who voted in local government elections. Multilevel logistic regression was used to estimate odds ratios (ORs) and between-neighborhood variance. Results We found robust associations between linking social capital and DA in men and women. For men, the OR for DA in the crude model was 2.11 [95% confidence interval (CI) 2.02–2.21] for those living in neighborhoods with the lowest vs. highest level of social capital. After accounting for neighborhood level deprivation, the OR fell to 1.59 (1.51–1-68). The ORs remained significant after accounting for age, family income, marital status, country of birth, education level, and region of residence, and after further accounting for comorbidities and family history of comorbidities and family history of DA. For women, the OR decreased from 2.15 (2.03–2.27) in the crude model to 1.31 (1.22–1.40) in the final model, adjusted for multiple neighborhood-level, individual-level variables, and family history for DA. Conclusions Our study suggests that low linking social capital may have significant independent effects on DA. PMID:27416013
Water bag modeling of a multispecies plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, P.; Gravier, E.; Besse, N.
2011-03-15
We report in the present paper a new modeling method to study multiple-species dynamics in magnetized plasmas. The method is based on gyrowater bag modeling, which consists in using a multistep-like distribution function along the velocity direction parallel to the magnetic field. The choice of a water bag representation allows an elegant link between kinetic and fluid descriptions of a plasma. The gyrowater bag model has recently been adapted to the context of strongly magnetized plasmas. We present its extension to the case of multi-ion-species magnetized plasmas, each ion species being modeled via a multiwater bag distribution function. The water bag modelization is discussed in detail, under the simplification of a cylindrical geometry that is convenient for linear plasma devices. As an illustration, results obtained in the linear framework for ion temperature gradient instabilities are presented and shown to agree qualitatively with earlier works.
NASA Astrophysics Data System (ADS)
Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun
2018-03-01
Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, which are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfall on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These test results indicate that the new model can be operated in a highly efficient way and produces more reliable results owing to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at the regional scale.
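As a rough illustration of the Monte Carlo step described above, the sketch below estimates P(Fs < 1) for one pixel. The uniform sampling intervals and the simple infinite-slope safety-factor formula (no pore pressure) are assumptions for illustration, not the paper's exact physical model.

```python
import numpy as np

rng = np.random.default_rng(0)

def landslide_probability(c_range, phi_range, gamma=19e3, z=2.0,
                          slope_deg=35.0, n=10_000):
    """Monte Carlo estimate of P(Fs < 1) for a single pixel.

    Cohesion c (Pa) and friction angle phi (deg) are drawn uniformly from
    the given uncertainty intervals; a simple infinite-slope safety factor
    (no pore pressure) stands in for the physical model.
    """
    theta = np.radians(slope_deg)
    c = rng.uniform(*c_range, n)
    phi = np.radians(rng.uniform(*phi_range, n))
    fs = c / (gamma * z * np.sin(theta) * np.cos(theta)) + np.tan(phi) / np.tan(theta)
    return np.mean(fs < 1.0)   # fraction of simulations in which the slope fails

# Hypothetical uncertainty intervals for one pixel
print(landslide_probability(c_range=(2e3, 8e3), phi_range=(20.0, 30.0)))
```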
Measuring Link-Resolver Success: Comparing 360 Link with a Local Implementation of WebBridge
ERIC Educational Resources Information Center
Herrera, Gail
2011-01-01
This study reviewed link resolver success comparing 360 Link and a local implementation of WebBridge. Two methods were used: (1) comparing article-level access and (2) examining technical issues for 384 randomly sampled OpenURLs. Google Analytics was used to collect user-generated OpenURLs. For both methods, 360 Link out-performed the local…
Paller, Vachel Gay V; Besana, Cyrelle M; Valdez, Isabel Kristine M
2017-12-01
Toxocariasis is a zoonotic disease usually caused by the dog and cat roundworms Toxocara canis and T. cati. Detection and diagnosis are difficult in paratenic and accidental hosts, including humans, as the parasites cannot be detected through conventional methods such as fecal examination. Diagnosis therefore relies on immunological and molecular methods such as enzyme-linked immunosorbent assay (ELISA) and Western blot, which are both time-consuming and require sophisticated equipment. In the Philippines, only a few studies are available on Toxocara seroprevalence. Therefore, there is a need to adapt methods for serodiagnosis of Toxocara infection in humans to the Philippine setting. A dot enzyme-linked immunosorbent assay (dot-ELISA) was standardized using T. canis excretory-secretory antigens. Test sera were collected from laboratory rats (Sprague-Dawley strain) experimentally infected with embryonated eggs of T. canis and Ascaris suum, as well as rice field rats naturally infected with Taenia taeniaeformis and Nippostrongylus sp. The optimum conditions were 20 µg/ml antigen concentration and 1:10 serum dilution. The sensitivity, specificity, positive, and negative predictive values were 90% (95% CI 55.5-99.7%), 100% (95% CI 69.2-100.0%), 100% (95% CI 66.4-100%), and 90.9% (95% CI 58.7-99.8%), respectively. Dot-ELISA has the potential to be developed as a cheaper, simpler, and more practical method for the detection of anti-Toxocara antibodies in accidental hosts. This is a preliminary study conducted on experimental animals before optimization and standardization for human serum samples.
The complex links between governance and biodiversity.
Barrett, Christopher B; Gibson, Clark C; Hoffman, Barak; McCubbins, Mathew D
2006-10-01
We argue that two problems weaken the claims of those who link corruption and the exploitation of natural resources. The first is conceptual and the second is methodological. Studies that use national-level indicators of corruption fail to note that corruption comes in many forms, at multiple levels, that may affect resource use quite differently: negatively, positively, or not at all. Without a clear causal model of the mechanism by which corruption affects resources, one should treat with caution any estimated relationship between corruption and the state of natural resources. Simple, atheoretical models linking corruption measures and natural resource use typically do not account for other important control variables pivotal to the relationship between humans and natural resources. By way of illustration of these two general concerns, we used statistical methods to demonstrate that the findings of a recent, well-known study that posits a link between corruption and decreases in forests and elephants are not robust to simple conceptual and methodological refinements. In particular, once we controlled for a few plausible anthropogenic and biophysical conditioning factors, estimated the effects in changes rather than levels so as not to confound cross-sectional and longitudinal variation, and incorporated additional observations from the same data sources, corruption levels no longer had any explanatory power.
Practices and discourses of ubuntu: Implications for an African model of disability?
2017-01-01
Background Southern African scholars and activists working in disability studies have argued that ubuntu or unhu is a part of their world view. Objectives Thinking seriously about ubuntu, as a shared collective humanness or social ethics, means to examine how Africans have framed a struggle for this shared humanity in terms of decolonisation and activism. Method Three examples of applications of ubuntu are given, with two mainly linked to making explicit umaka. Firstly, ubuntu is linked to making visible the invisible inequalities for a common humanity in South Africa. Secondly, it becomes correlated to the expression of environmental justice in West and East African countries. Results An African model of disability that encapsulates ubuntu is correlated to how Africans have illustrated a social ethics of a common humanity in their grassroots struggles against oppression and disablement in the 20th century. Ubuntu also locates disability politically within the wider environment and practices of sustainability which are now important to the post-2015 agenda, the Convention on the Rights of Persons with Disabilities (CRPD) and the (UN) Sustainable Development Goals linked to climate change. Conclusion A different kind of political action linked to social justice seems to be evolving in line with ubuntu. This has implications for the future of disability studies. PMID:28730067
A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
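A minimal sketch of the statistical-averaging idea behind ESCM is given below: each interface FE node receives the mean displacement of the atoms inside a local volume around it, rather than a one-to-one atom/node linkage. The spherical averaging volume, fixed radius, array layout and function name are illustrative assumptions, not NASA's implementation.

```python
import numpy as np

def nodal_average_displacements(atom_xyz, atom_disp, node_xyz, radius):
    """Average atomistic displacements in a local volume around each FE node.

    atom_xyz, atom_disp: (n_atoms, 3) positions and displacements from MD
    node_xyz:            (n_nodes, 3) interface FE node positions
    radius:              radius of the (assumed spherical) averaging volume
    Returns the (n_nodes, 3) averaged displacements used as FE boundary data.
    """
    node_disp = np.zeros_like(node_xyz, dtype=float)
    for i, xn in enumerate(node_xyz):
        d = np.linalg.norm(atom_xyz - xn, axis=1)
        inside = d < radius
        if inside.any():
            node_disp[i] = atom_disp[inside].mean(axis=0)
    return node_disp
```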
2013-01-01
Chemical cross-linking of proteins combined with mass spectrometry provides an attractive and novel method for the analysis of native protein structures and protein complexes. Analysis of the data, however, is complex. Only a small number of cross-linked peptides are produced during sample preparation and must be identified against a background of more abundant native peptides. To facilitate the search and identification of cross-linked peptides, we have developed a novel software suite, named Hekate. Hekate is a suite of tools that address the challenges involved in analyzing protein cross-linking experiments when combined with mass spectrometry. The software is an integrated pipeline for the automation of the data analysis workflow and provides a novel scoring system based on principles of linear peptide analysis. In addition, it provides a tool for the visualization of identified cross-links using three-dimensional models, which is particularly useful when combining chemical cross-linking with other structural techniques. Hekate was validated by the comparative analysis of cytochrome c (bovine heart) against previously reported data [1]. Further validation was carried out on known structural elements of DNA polymerase III, the catalytic α-subunit of the Escherichia coli DNA replisome, along with new insight into the previously uncharacterized C-terminal domain of the protein. PMID:24010795
NASA Astrophysics Data System (ADS)
Liu, Shuxin; Ji, Xinsheng; Liu, Caixia; Bai, Yi
2017-01-01
Many link prediction methods have been proposed for predicting the likelihood that a link exists between two nodes in complex networks. Among these methods, similarity indices are receiving close attention. Most similarity-based methods assume that the contribution of links with different topological structures is the same in the similarity calculations. This paper proposes a local weighted method, which weights the strength of connection between each pair of nodes. Based on the local weighted method, six local weighted similarity indices extended from unweighted similarity indices (including Common Neighbor (CN), Adamic-Adar (AA), Resource Allocation (RA), Salton, Jaccard and Local Path (LP) index) are proposed. Empirical study has shown that the local weighted method can significantly improve the prediction accuracy of these unweighted similarity indices and that in sparse and weakly clustered networks, the indices perform even better.
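A minimal sketch of one possible local weighted similarity index is shown below: a weighted Common Neighbor score in which each common neighbor contributes the sum of its two link weights instead of 1. The specific weighting, the example graph and the candidate ranking are illustrative and may differ from the paper's definitions.

```python
import itertools
import networkx as nx

def weighted_common_neighbors(G, u, v):
    """A weighted variant of the Common Neighbor (CN) index.

    Each common neighbor z contributes w(u, z) + w(z, v); unweighted edges
    default to weight 1.0, so the score reduces to 2 * CN on such graphs.
    """
    return sum(G[u][z].get("weight", 1.0) + G[z][v].get("weight", 1.0)
               for z in nx.common_neighbors(G, u, v))

G = nx.karate_club_graph()
# Score all currently unlinked pairs and list the top candidate links.
scores = {(u, v): weighted_common_neighbors(G, u, v)
          for u, v in itertools.combinations(G.nodes, 2) if not G.has_edge(u, v)}
print(sorted(scores, key=scores.get, reverse=True)[:5])
```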
Kansei, surfaces and perception engineering
NASA Astrophysics Data System (ADS)
Rosen, B.-G.; Eriksson, L.; Bergman, M.
2016-09-01
The aesthetic and pleasing properties of a product are important and add significantly to the meaning and relevance of a product. Customer sensation and perception are largely about psychological factors. There has been a strong industrial and academic need and interest for methods and tools to quantify and link product properties to the human response but a lack of studies of the impact of surfaces. In this study, affective surface engineering is used to illustrate and model the link between customer expectations and perception to controllable product surface properties. The results highlight the use of the soft metrology concept for linking physical and human factors contributing to the perception of products. Examples of surface applications of the Kansei methodology are presented from sauna bath, health care, architectural and hygiene tissue application areas to illustrate, discuss and confirm the strength of the methodology. In the conclusions of the study, future research in soft metrology is proposed to allow understanding and modelling of product perception and sensations in combination with a development of the Kansei surface engineering methodology and software tools.
Myers, Mary; Parchen, Debra; Geraci, Marilla; Brenholtz, Roger; Knisely-Carrigan, Denise; Hastings, Clare
2013-10-01
Sustaining change in the behaviors and habits of experienced practicing nurses can be frustrating and daunting, even when changes are based on evidence. Partnering with an active shared governance structure to communicate change and elicit feedback is an established method to foster partnership, equity, accountability, and ownership. Few recent exemplars in the literature link shared governance, change management, and evidence-based practice to transitions in care models. This article describes an innovative staff-driven approach used by nurses in a shared governance performance improvement committee to use evidence-based practice in determining the best methods to evaluate the implementation of a new model of care.
An Open Source modular platform for hydrological model implementation
NASA Astrophysics Data System (ADS)
Kolberg, Sjur; Bruland, Oddbjørn
2010-05-01
An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters, etc. ENKI is designed to meet three different levels of involvement in model construction:
• Model application: running and evaluating a given model; regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation; and uncertainty analysis directed towards input or parameter uncertainty. The user need not know the model's composition of subroutines, the internal variables in the model, or how method modules are created.
• Model analysis: linking together different process methods, including parallel setup of alternative methods for solving the same task, and investigating the effect of different spatial discretization schemes. The user need not write or compile computer code, or handle file IO for each module.
• Routine implementation and testing: implementation of new process-simulating methods/equations, specialised objective functions or quality control routines, and testing of these in an existing framework. The user need not implement a user or model interface for the new routine, IO handling, administration of model setup and runs, or calibration and validation routines.
From being developed for Norway's largest hydropower producer, Statkraft, ENKI is now being turned into an Open Source project. At the time of writing, the licence and the project administration are not established. Also, it remains to port the application to other compilers and computer platforms. However, we hope that ENKI will prove useful for both academic and operational users.
Modeling the Water - Quality Effects of Changes to the Klamath River Upstream of Keno Dam, Oregon
Sullivan, Annett B.; Sogutlugil, I. Ertugrul; Rounds, Stewart A.; Deas, Michael L.
2013-01-01
The Link River to Keno Dam (Link-Keno) reach of the Klamath River, Oregon, generally has periods of water-quality impairment during summer, including low dissolved oxygen, elevated concentrations of ammonia and algae, and high pH. Efforts are underway to improve water quality in this reach through a Total Maximum Daily Load (TMDL) program and other management and operational actions. To assist in planning, a hydrodynamic and water-quality model was used in this study to provide insight about how various actions could affect water quality in the reach. These model scenarios used a previously developed and calibrated CE-QUAL-W2 model of the Link-Keno reach developed by the U.S. Geological Survey (USGS), Watercourse Engineering Inc., and the Bureau of Reclamation for calendar years 2006-09 (referred to as the "USGS model" in this report). Another model of the same river reach was previously developed by Tetra Tech, Inc. and the Oregon Department of Environmental Quality for years 2000 and 2002 and was used in the TMDL process; that model is referred to as the "TMDL model" in this report. This report includes scenarios that (1) assess the effect of TMDL allocations on water quality, (2) provide insight on certain aspects of the TMDL model, (3) assess various methods to improve water quality in this reach, and (4) examine possible water-quality effects of a future warmer climate. Results presented in this report for the first 5 scenarios supersede or augment those that were previously published (scenarios 1 and 2 in Sullivan and others [2011], 3 through 5 in Sullivan and others [2012]); those previous results are still valid, but the results for those scenarios in this report are more current.
Wienke, B R; O'Leary, T R
2008-05-01
Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed-gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed-gas risks, the USS Perry deep rebreather (RB) exploration dive, a world-record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed-gas diving, in both the recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed-gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and an additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep-stop data. Supercomputing resources are requisite to connect model and data in application.
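The fitting step named in the abstract, a Levenberg-Marquardt minimization with an L2 error norm, can be sketched as below. The exponential risk function, the exposure metric and the numbers are hypothetical stand-ins; the actual RGBM risk functions and LANL Data Bank profiles are far more elaborate.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative risk function r(x; a, b) = 1 - exp(-a * x**b); hypothetical form.
def residuals(params, exposure, observed_risk):
    a, b = params
    return 1.0 - np.exp(-a * exposure**b) - observed_risk

exposure = np.array([0.5, 1.0, 1.5, 2.0, 3.0])            # hypothetical dive severity metric
observed = np.array([0.001, 0.003, 0.006, 0.010, 0.022])  # hypothetical incidence rates

fit = least_squares(residuals, x0=[0.01, 1.0], args=(exposure, observed),
                    method="lm")   # Levenberg-Marquardt, L2 error norm
print(fit.x)
```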
NASA Astrophysics Data System (ADS)
Pozzi, W.; Fekete, B.; Piasecki, M.; McGuinness, D.; Fox, P.; Lawford, R.; Vorosmarty, C.; Houser, P.; Imam, B.
2008-12-01
The inadequacies of water cycle observations for monitoring long-term changes in the global water system, as well as their feedback into the climate system, pose a major constraint on sustainable development of water resources and improvement of water management practices. Hence, the Group on Earth Observations (GEO) has established Task WA-08-01, "Integration of in situ and satellite data for water cycle monitoring," an integrative initiative combining different types of satellite and in situ observations related to key variables of the water cycle with model outputs for improved accuracy and global coverage. This presentation proposes development of the Rapid, Integrated Monitoring System for the Water Cycle (Global-RIMS)--already employed by the GEO Global Terrestrial Network for Hydrology (GTN-H)--as either one of the main components or linked with the Asian system to constitute the modeling system of GEOSS for water cycle monitoring. We further propose expanded, augmented capability to run multiple grids to embrace some of the heterogeneous methods and formats of the Earth Science, Hydrology, and Hydraulic Engineering communities. Different methodologies are employed by the Earth Science (land surface modeling), Hydrological (GIS), and Hydraulic Engineering communities, with each community employing models that require different input data. Data will be routed as input variables to the models through web services, allowing satellite and in situ data to be integrated together within the modeling framework. Semantic data integration will provide the automation to enable this system to operate in near-real-time. Multiple data collections for ground water, precipitation, soil moisture satellite data such as SMAP, and lake data will require multiple low-level ontologies, and an upper-level ontology will permit user-friendly water management knowledge to be synthesized. These ontologies will have to have overlapping terms mapped and linked together so that they can cover an even wider net of data sources. The goal is to develop the means to link together the upper-level and lower-level ontologies and to have these registered within the GEOSS Registry. Actual operational ontologies that would link to models or to data collections containing input variables required by models would have to be nested underneath this top-level ontology, analogous to the mapping that has been carried out among ontologies within GEON.
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph theoretic methods are applied to compute the reliability for several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. A comparison is conducted of ring networks and braided networks. The case is covered where just the links fail and the case where both links and nodes fail. Two different failure modes for the links are considered. For one failure mode, the link no longer carries messages. For the other failure mode, the link delivers incorrect messages. There is a description and comparison of link-redundancy versus path-redundancy as methods to achieve reliability. All the computations are carried out by means of a fault tree program.
A system dynamics approach to develop a recovery model in the Malaysian automotive industry
NASA Astrophysics Data System (ADS)
Mohamad-Ali, N.; Ghazilla, R. A. R.; Abdul-Rashid, S. H.; Sakundarini, N.; Ahmad-Yazid, A.; Stephenie, L.
2017-06-01
Design strategies play a significant role in enhancing recovery effectiveness at the end of the product life cycle. A review of previous studies shows that many factors are involved in enhancing recovery effectiveness, but few works link design-strategy factors in a holistic and dynamic view. The proposed method is explained, and an initial end-of-life vehicle (ELV) recovery model, illustrated with graphical and numerical data, is presented. However, the model is limited by the authors' understanding and preliminary data, and requires collaboration between designers and other stakeholders to develop a model based on the actual situation.
Dynamic analysis of space structures including elastic, multibody, and control behavior
NASA Technical Reports Server (NTRS)
Pinson, Larry; Soosaar, Keto
1989-01-01
The problem is to develop analysis methods, modeling strategies, and simulation tools to predict with assurance the on-orbit performance and integrity of large complex space structures that cannot be verified on the ground. The problem must incorporate large reliable structural models, multi-body flexible dynamics, multi-tier controller interaction, environmental models including 1g and atmosphere, various on-board disturbances, and linkage to mission-level performance codes. All areas are in serious need of work, but the weakest link is multi-body flexible dynamics.
NASA Astrophysics Data System (ADS)
Varotsos, G. K.; Nistazakis, H. E.; Petkovic, M. I.; Djordjevic, G. T.; Tombras, G. S.
2017-11-01
Over the last years, terrestrial free-space optical (FSO) communication systems have attracted increasing scientific and commercial interest in response to the growing demands for ultra-high-bandwidth, cost-effective and secure wireless data transmissions. However, due to the signal propagation through the atmosphere, the performance of such links depends strongly on atmospheric conditions such as weather phenomena and the turbulence effect. Additionally, their operation is affected significantly by the pointing errors effect, which is caused by the misalignment of the optical beam between the transmitter and the receiver. In order to address this significant performance degradation, several statistical models have been proposed, while particular attention has also been given to diversity methods. Here, the turbulence-induced fading of the received optical signal irradiance is studied through the Málaga (M) distribution, which is an accurate model suitable for weak to strong turbulence conditions and unifies most of the well-known, previously emerged models. Thus, taking into account the atmospheric turbulence conditions along with the pointing errors effect with nonzero boresight and the modulation technique that is used, we derive mathematical expressions for the estimation of the average bit error rate performance of SIMO FSO links. Finally, proper numerical results are given to verify our derived expressions, and Monte Carlo simulations are also provided to further validate the accuracy of the proposed analysis and the obtained mathematical expressions.
Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai
2016-01-01
This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209
NASA Astrophysics Data System (ADS)
Dávila, Alán; Escudero, Christian; López, Jorge, Dr.
2004-10-01
Several methods have been developed to study phase transitions in nuclear fragmentation. The one used in this research is percolation. This method allows us to fit the resulting data to heavy-ion collision experiments. In systems such as atomic nuclei or molecules, energy is put into the system. The system's particles move away from each other until their links are broken; some particles will still be linked. The fragments' distribution is found to be a power law, so we are witnessing a critical phenomenon. In our model the particles are represented as occupied spaces in a cubical array. Each particle has a bond to each of its 6 neighbors. Each bond can be active, if the two particles are linked, or inactive if they are not. When two or more particles are linked, a fragment is formed. The probability for a specific link to be broken cannot be calculated, so the probability for a bond to be active is used as a parameter when trying to fit the data. For a given probability p, several arrays are generated and the fragments are counted. The fragments' distribution is then fitted to a power law. The probability that generates the best fit is the critical probability that indicates a phase transition. The best fit is found by seeking the fragments' distribution that gives the minimal chi-squared when compared to a power law. As additional evidence of criticality, the entropy and normalized variance of the mass are also calculated for each probability.
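A compact sketch of the bond-percolation bookkeeping described above is given below: sites on a fully occupied cubic lattice, each of the six nearest-neighbor bonds active with probability p, and fragments counted with a union-find. The lattice size, the open boundaries and the single probability value are illustrative; the power-law and chi-squared fit over a range of p is omitted.

```python
import numpy as np
from collections import Counter

def fragment_distribution(L, p, rng):
    """Bond percolation on an L x L x L fully occupied cubic lattice.

    Each nearest-neighbor bond is active with probability p; fragments are
    the connected clusters, and their size distribution is returned.
    """
    idx = np.arange(L**3).reshape(L, L, L)
    parent = np.arange(L**3)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for axis in range(3):                       # bonds along x, y, z
        neighbor = np.roll(idx, -1, axis=axis)
        mask = rng.random(idx.shape) < p
        sl = [slice(None)] * 3
        sl[axis] = slice(0, L - 1)              # open boundaries: no wrap-around bonds
        for i, j in zip(idx[tuple(sl)][mask[tuple(sl)]].ravel(),
                        neighbor[tuple(sl)][mask[tuple(sl)]].ravel()):
            union(i, j)

    sizes = Counter(find(i) for i in range(L**3))
    return Counter(sizes.values())              # {fragment size: multiplicity}

rng = np.random.default_rng(1)
print(fragment_distribution(L=10, p=0.25, rng=rng))
```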
Biological life-support systems
NASA Technical Reports Server (NTRS)
Shepelev, Y. Y.
1975-01-01
The establishment of human living environments by biologic methods, utilizing the appropriate functions of autotrophic and heterotrophic organisms is examined. Natural biologic systems discussed in terms of modeling biologic life support systems (BLSS), the structure of biologic life support systems, and the development of individual functional links in biologic life support systems are among the factors considered. Experimental modeling of BLSS in order to determine functional characteristics, mechanisms by which stability is maintained, and principles underlying control and regulation is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dizier, M.H.; Eliaou, J.F.; Babron, M.C.
In order to investigate the HLA component involved in rheumatoid arthritis (RA), the authors tested genetic models by the marker association-segregation χ² (MASC) method, using the HLA genotypic distribution observed in a sample of 97 RA patients. First they tested models assuming the involvement of a susceptibility gene linked to the DR locus. They showed that the present data are compatible with a simple model assuming the effect of a recessive allele of a biallelic locus linked to the DR locus, without any assumption of a synergistic effect. Then they considered models assuming the direct involvement of the DR allele products, and tested the unifying-shared-epitope hypothesis, which has been proposed. Under this hypothesis the DR alleles are assumed to be directly involved in the susceptibility to the disease because of the presence of similar or identical amino acid sequences in positions 70-74 of the third hypervariable region of the DRB1 molecules, shared by the RA-associated DR alleles DR4Dw4, DR4Dw14, and DR1. This hypothesis was strongly rejected with the present data. In the case of the direct involvement of the DR alleles, hypotheses more complex than the unifying-shared-epitope hypothesis would have to be considered. 28 refs., 2 tabs.
A Bayesian method for inferring transmission chains in a partially observed epidemic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef M.; Ray, Jaideep
2008-10-01
We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.
Trajectory prediction of saccadic eye movements using a compressed exponential model
Han, Peng; Saunders, Daniel R.; Woods, Russell L.; Luo, Gang
2013-01-01
Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment between the gaze direction and image manipulation during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3° to 30°, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R2 > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8°. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2° dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays. PMID:23902753
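A minimal sketch of fitting a compressed exponential to the early samples of a saccade and extrapolating ahead by an assumed 10 ms delay is shown below. The parameterization x(t) = x0 + A(1 - exp(-(t/tau)^beta)), the sampling rate and all numbers are illustrative and may differ from the paper's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def compressed_exponential(t, amplitude, tau, beta, x0):
    """Eye displacement model x(t) = x0 + A * (1 - exp(-(t/tau)**beta)).

    A compressed exponential has beta > 1; the exact parameterization used
    in the paper may differ from this common form.
    """
    return x0 + amplitude * (1.0 - np.exp(-(t / tau) ** beta))

# Hypothetical 1 kHz samples from the early part of a 12-degree saccade.
t = np.arange(0, 0.030, 0.001)
x = (compressed_exponential(t, 12.0, 0.012, 2.0, 0.0)
     + np.random.default_rng(2).normal(0, 0.05, t.size))

popt, _ = curve_fit(compressed_exponential, t, x, p0=[10.0, 0.01, 1.5, 0.0])
# Predict the displacement one assumed 10 ms transmission delay ahead.
predicted = compressed_exponential(t[-1] + 0.010, *popt)
print(popt, predicted)
```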
Vortex instability in turbulent free-space propagation
NASA Astrophysics Data System (ADS)
Lavery, Martin P. J.
2018-04-01
The spatial structuring of optical fields is integral within many next-generation optical metrology and communication techniques. A verifiable physical model of the propagation of these optical fields in a turbulent environment is important for developing effective mitigation techniques for the modal degradation that occurs in a free-space link. We present a method to simulate this modal degradation that agrees with recently reported experimental findings. A 1.5 km free-space link is emulated by decomposing the optical turbulence that accumulates over a long-distance link into many, weakly perturbing steps of 10 m. This simulation shows that the high-order vortex at the centre of the helical phase profiles in modes that carry orbital angular momentum of |ℓ| ≥ 2ℏ is unstable and fractures into many vortices when the modes propagate over the link. This splitting presents issues for the application of turbulence mitigation techniques. The usefulness of pre-correction, post-correction, and complex field conjugation techniques is discussed.
Zhuang, Chen; Shi, Chengmei; Tao, Furong; Cui, Yuezhi
2017-12-01
The functionalized cellulose ester MCN was first synthesized and used to cross-link gelatin by amidation between -NH2 groups in gelatin and active ester groups in MCN, forming a composite polymer network, Gel-MCN, which was confirmed by the Van Slyke method, FTIR, XRD and TGA-DTG spectra. The model drug omeprazole was loaded into the Gel-MCN composites mainly by electrostatic interaction and hydrogen bonds, which were confirmed by FTIR, XRD and TGA-DSC. The thermal stability, anti-biodegradability, mechanical properties and surface hydrophobicity of the composites with different cross-linking extents and drug loadings were systematically investigated. SEM images demonstrated the honeycomb structural cells of the cross-linked gelatin networks, which ensured drug entrapment. The drug release mechanism was dominated by a combined effect of diffusion and degradation, and the release rate decreased as the cross-linking degree increased. The developed drug delivery system has profound significance for improving pesticide effect and the bioavailability of drugs. Copyright © 2017. Published by Elsevier B.V.
Design and modelling of a link monitoring mechanism for the Common Data Link (CDL)
NASA Astrophysics Data System (ADS)
Eichelberger, John W., III
1994-09-01
The Common Data Link (CDL) is a full duplex, point-to-point microwave communications system used in imagery and signals intelligence collection systems. It provides a link between two remote Local Area Networks (LAN's) aboard collection and surface platforms. In a hostile environment, there is an overwhelming need to dynamically monitor the link and thus, limit the impact of jamming. This work describes steps taken to design, model, and evaluate a link monitoring system suitable for the CDL. The monitoring system is based on features and monitoring constructs of the Link Control Protocol (LCP) in the Point-to-Point Protocol (PPP) suite. The CDL model is based on a system of two remote Fiber Distributed Data Interface (FDDI) LAN's. In particular, the policies and mechanisms associated with monitoring are described in detail. An implementation of the required mechanisms using the OPNET network engineering tool is described. Performance data related to monitoring parameters is reported. Finally, integration of the FDDI-CDL model with the OPNET Internet model is described.
NASA Astrophysics Data System (ADS)
Satyaramesh, P. V.; RadhaKrishna, C.
2013-06-01
A generalized pricing structure for the procurement of power under frequency ancillary services is developed in this paper. It is a frequency-linked price model suitable for a deregulated market environment. This model takes into consideration the governor characteristics and frequency characteristics of generators as additional parameters in the load flow method. The main objective of the new approach proposed in this paper is to establish a bidding price structure for frequency regulation services in competitive ancillary electricity markets under steady-state conditions. A large body of literature calculates frequency deviations with respect to load changes by using dynamic simulation methods; in this paper, by contrast, the model computes the frequency deviations for additional power requirements under steady state while considering the power system network topology. An attempt is also made in this paper to develop an optimal bidding price structure for frequency-regulated systems. It gives a signal to traders or bidders that the power demand can be assessed more accurately, much closer to real time, and helps participants bid more accurate quantities on the day-ahead market. Recent trends in frequency-linked price models in Indian power systems, and the issues requiring attention, are also dealt with in this paper. Test calculations have been performed on a 30-bus system. The paper also explains the adaptability of this model to the practical Indian power system. The results presented are analyzed and useful conclusions are drawn.
NASA Astrophysics Data System (ADS)
Derakhshani, S. M.; Schott, D. L.; Lodewijks, G.
2013-06-01
Dust emissions can have significant effects on human health, the environment and industrial equipment. Understanding the dust generation process helps in selecting a suitable dust-prevention approach and is also useful for evaluating the environmental impact of dust emission. To describe these processes, numerical methods such as Computational Fluid Dynamics (CFD) are widely used; nowadays, however, particle-based methods like the Discrete Element Method (DEM) allow researchers to model the interaction between particles and fluid flow. In this study, air flow over a stockpile, dust emission, erosion and surface deformation of granular material in the form of a stockpile are studied by using DEM and CFD as a coupled method. Two- and three-dimensional simulations are developed for the CFD and DEM methods, respectively, to minimize CPU time. The standard κ-ɛ turbulence model is used for a fully developed turbulent flow. The continuous gas phase and the discrete particle phase are linked to each other through gas-particle void fractions and momentum transfer. In addition to stockpile deformation, dust dispersion is studied, and finally the accuracy of the stockpile deformation results obtained by CFD-DEM modelling is validated by their agreement with existing experimental data.
Alkallas, Rached; Fish, Lisa; Goodarzi, Hani; Najafabadi, Hamed S
2017-10-13
The abundance of mRNA is mainly determined by the rates of RNA transcription and decay. Here, we present a method for unbiased estimation of differential mRNA decay rate from RNA-sequencing data by modeling the kinetics of mRNA metabolism. We show that in all primary human tissues tested, and particularly in the central nervous system, many pathways are regulated at the mRNA stability level. We present a parsimonious regulatory model consisting of two RNA-binding proteins and four microRNAs that modulate the mRNA stability landscape of the brain, which suggests a new link between RBFOX proteins and Alzheimer's disease. We show that downregulation of RBFOX1 leads to destabilization of mRNAs encoding for synaptic transmission proteins, which may contribute to the loss of synaptic function in Alzheimer's disease. RBFOX1 downregulation is more likely to occur in older and female individuals, consistent with the association of Alzheimer's disease with age and gender. "mRNA abundance is determined by the rates of transcription and decay. Here, the authors propose a method for estimating the rate of differential mRNA decay from RNA-seq data and model mRNA stability in the brain, suggesting a link between mRNA stability and Alzheimer's disease."
Translating three states of knowledge--discovery, invention, and innovation
2010-01-01
Background Knowledge Translation (KT) has historically focused on the proper use of knowledge in healthcare delivery. A knowledge base has been created through empirical research and resides in scholarly literature. Some knowledge is amenable to direct application by stakeholders who are engaged during or after the research process, as shown by the Knowledge to Action (KTA) model. Other knowledge requires multiple transformations before achieving utility for end users. For example, conceptual knowledge generated through science or engineering may become embodied as a technology-based invention through development methods. The invention may then be integrated within an innovative device or service through production methods. To what extent is KT relevant to these transformations? How might the KTA model accommodate these additional development and production activities while preserving the KT concepts? Discussion Stakeholders adopt and use knowledge that has perceived utility, such as a solution to a problem. Achieving a technology-based solution involves three methods that generate knowledge in three states, analogous to the three classic states of matter. Research activity generates discoveries that are intangible and highly malleable like a gas; development activity transforms discoveries into inventions that are moderately tangible yet still malleable like a liquid; and production activity transforms inventions into innovations that are tangible and immutable like a solid. The paper demonstrates how the KTA model can accommodate all three types of activity and address all three states of knowledge. Linking the three activities in one model also illustrates the importance of engaging the relevant stakeholders prior to initiating any knowledge-related activities. Summary Science and engineering focused on technology-based devices or services change the state of knowledge through three successive activities. Achieving knowledge implementation requires methods that accommodate these three activities and knowledge states. Accomplishing beneficial societal impacts from technology-based knowledge involves the successful progression through all three activities, and the effective communication of each successive knowledge state to the relevant stakeholders. The KTA model appears suitable for structuring and linking these processes. PMID:20205873
Exploring the future with anticipatory networks
NASA Astrophysics Data System (ADS)
Skulimowski, A. M. J.
2013-01-01
This paper presents a theory of anticipatory networks that originates from anticipatory models of consequences in multicriteria decision problems. When making a decision, the decision maker takes into account the anticipated outcomes of each future decision problem linked by the causal relations with the present one. In a network of linked decision problems, the causal relations are defined between time-ordered nodes. The scenarios of future consequences of each decision are modeled by multiple vertices starting from an appropriate node. The network is supplemented by one or more relations of anticipation, or future feedback, which describe a situation where decision makers take into account the anticipated results of some future optimization problems while making their choice. So arises a multigraph of decision problems linked causally and by one or more anticipation relation, termed here the anticipatory network. We will present the properties of anticipatory networks and propose a method of reducing, transforming and using them to solve current decision problems. Furthermore, it will be shown that most anticipatory networks can be regarded as superanticipatory systems, i.e. systems that are anticipatory in the Rosen sense and contain a future model of at least one other anticipatory system. The anticipatory networks can also be applied to filter the set of future scenarios in a foresight exercise.
Coarse-Grained Models for Protein-Cell Membrane Interactions
Bradley, Ryan; Radhakrishnan, Ravi
2015-01-01
The physiological properties of biological soft matter are the product of collective interactions, which span many time and length scales. Recent computational modeling efforts have helped illuminate experiments that characterize the ways in which proteins modulate membrane physics. Linking these models across time and length scales in a multiscale model explains how atomistic information propagates to larger scales. This paper reviews continuum modeling and coarse-grained molecular dynamics methods, which connect atomistic simulations and single-molecule experiments with the observed microscopic or mesoscale properties of soft-matter systems essential to our understanding of cells, particularly those involved in sculpting and remodeling cell membranes. PMID:26613047
Meta-analyses of Theory use in Medication Adherence Intervention Research
Conn, Vicki S.; Enriquez, Maithe; Ruppar, Todd M.; Chan, Keith C.
2016-01-01
Objective This systematic review applied meta-analytic procedures to integrate primary research that examined theory- or model-linked medication adherence interventions. Methods Extensive literature searching strategies were used to locate trials testing interventions with medication adherence behavior outcomes measured by electronic event monitoring, pharmacy refills, pill counts, and self-reports. Random-effects model analysis was used to calculate standardized mean difference effect sizes for medication adherence outcomes. Results Codable data were extracted from 146 comparisons with 19,348 participants. The most common theories and models were social cognitive theory and motivational interviewing. The overall weighted effect size for all interventions comparing treatment and control participants was 0.294. The effect size for interventions based on single-theories was 0.323 and for multiple-theory interventions was 0.214. Effect sizes for individual theories and models ranged from 0.041 to 0.447. The largest effect sizes were for interventions based on the health belief model (0.477) and adult learning theory (0.443). The smallest effect sizes were for interventions based on PRECEDE (0.041) and self-regulation (0.118). Conclusion These findings suggest that theory- and model-linked interventions have a significant but modest effect on medication adherence outcomes. PMID:26931748
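The random-effects pooling of standardized mean differences reported above can be sketched with the common DerSimonian-Laird estimator, as below. The example effect sizes and variances are hypothetical, and the review's actual software, moderator analyses and bias corrections are not reproduced.

```python
import numpy as np

def random_effects_pooled(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardized mean differences.

    effects:   per-study SMD estimates (e.g. Hedges' g)
    variances: per-study sampling variances
    Returns (pooled effect, its standard error, tau^2).
    """
    d = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    q = np.sum(w * (d - np.sum(w * d) / np.sum(w)) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical adherence-intervention effect sizes and their variances
print(random_effects_pooled([0.21, 0.45, 0.10, 0.38], [0.02, 0.05, 0.01, 0.03]))
```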
Link prediction in the network of global virtual water trade
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca
2016-04-01
Through the international food-trade, water resources are 'virtually' transferred from the country of production to the country of consumption. The international food-trade thus implies a network of virtual water flows from exporting to importing countries (i.e., nodes). Given the dynamical behavior of the network, where food-trade relations (i.e., links) are created and dismissed every year, link prediction becomes a challenge. In this study, we propose a novel methodology for link prediction in the virtual water network. The model aims at identifying the main factors (among 17 different variables) driving the creation of a food-trade relation between any two countries over the period from 1986 to 2011. Furthermore, the model can be exploited to investigate the network configuration in the future, under different possible (climatic and demographic) scenarios. The model grounds the existence of a link between any two nodes on the link weight (i.e., the virtual water flow): a link exists when the nodes exchange a minimum (fixed) volume of virtual water. Starting from a set of potential links between any two nodes, we fit the associated virtual water flows (both the real and the null ones) by means of multivariate linear regressions. Then, links with estimated flows higher than a minimum value (i.e., a threshold) are considered active links, while the others are non-active ones. The discrimination between active and non-active links through the threshold introduces an error (called the link-prediction error) because some real links are lost (i.e., missed links) and some non-existing links (i.e., spurious links) are inevitably introduced in the network. The major drivers are those significantly minimizing the link-prediction error. Once the structure of the unweighted virtual water network is known, we again apply linear regressions to assess the major factors driving the fluxes traded along (modelled) active links. Results indicate that, on the one hand, population and fertilizer use, together with link properties (such as the distance between nodes), are the major factors driving link creation; on the other hand, population, distance, and gross domestic product are essential to model the flux entity. The results are promising, since the model is able to correctly predict 85% of the 16,422 food-trade links (15% are missed), while spuriously adding to the real network only 5% of non-existing links. The link-prediction error, evaluated as the sum of the percentages of missed and spurious links, is around 20% and is constant over the study period. Only 0.01% of the global virtual water flow is traded along missed links, and an even lower flow is added by the spurious links (0.003%).
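The regress-then-threshold idea can be sketched as below, assuming an ordinary least-squares fit of flows on candidate drivers followed by a flow threshold; the feature names, toy data and threshold value are hypothetical and do not reproduce the study's driver set.

```python
import numpy as np

def link_prediction_error(X, flows, threshold, is_active):
    """Fit flows on candidate-link features, then threshold to predict links.

    X:         feature matrix for all potential country pairs (candidate drivers)
    flows:     observed virtual water flow for each pair (0 for non-links)
    threshold: minimum modelled flow for a pair to count as an active link
    is_active: boolean array marking the real links
    Returns the missed- and spurious-link fractions, whose sum is the
    link-prediction error to be minimised over candidate driver sets.
    """
    design = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(design, flows, rcond=None)
    pred_active = design @ beta > threshold
    missed = np.mean(~pred_active[is_active])        # real links not predicted
    spurious = np.mean(pred_active[~is_active])      # predicted but not real
    return missed, spurious

# Toy example with two hypothetical drivers (e.g. population, distance)
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))
flows = np.maximum(0, 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.5, 500))
print(link_prediction_error(X, flows, threshold=1.0, is_active=flows > 0.8))
```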
NASA Astrophysics Data System (ADS)
Quiers, M.; Perrette, Y.; Etienne, D.; Develle, A. L.; Jacq, K.
2017-12-01
The use of organic proxies increases in paleoenvironmental reconstructions from natural archives. Major advances have been achieved by the development of new highly informative molecular proxies usually linked to specific compounds. While studies focused on targeted compounds, offering a high information degree, advances on bulk organic matter are limited. However, this bulk is the main contributor to carbon cycle and has been shown to be a driver of many mineral or organic compounds transfer and record. Development of target proxies need complementary information on bulk organic matter to understand biases link to controlling factors or analytical methods, and provide a robust interpretation. Fluorescence methods have often been employed to characterize and quantify organic matter. However, these technics are mainly developed for liquid samples, inducing material and resolution loss when working on natural archives (either stalagmite or sediments). High-resolution solid phase fluorescence (SPF) was developed on speleothems. This method allows now to analyse organic matter quality and quantity if procedure to constrain the optical density are adopted. In fact, a calibration method using liquid phase fluorescence (LPF) was developed for speleothem, allowing to quantify organic carbon at high-resolution. We report here an application of such a procedure SPF/LPF measurements on lake sediments. In order to avoid sediment matrix effects on the fluorescence signal, a calibration using LPF measurements was realised. First results using this method provided organic matter quality record of different organic matter compounds (humic-like, protein-like and chlorophylle-like compounds) at high resolution for the sediment core. High resolution organic matter fluxes are obtained in a second time, applying pragmatic chemometrics model (non linear models, partial least square models) on high resolution fluorescence data. SPF method can be considered as a promising tool for high resolution record on organic matter quality and quantity. Potential application of this method will be evocated (lake ecosystem dynamic, changes in trophic levels)
A physically based analytical model of flood frequency curves
NASA Astrophysics Data System (ADS)
Basso, S.; Schirmer, M.; Botter, G.
2016-09-01
Predicting magnitude and frequency of floods is a key issue in hydrology, with implications in many fields ranging from river science and geomorphology to the insurance industry. In this paper, a novel physically based approach is proposed to estimate the recurrence intervals of seasonal flow maxima. The method links the extremal distribution of streamflows to the stochastic dynamics of daily discharge, providing an analytical expression of the seasonal flood frequency curve. The parameters involved in the formulation embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which is linked to the antecedent wetness condition in the watershed, needs to be calibrated on the observed maxima. The performance of the method is discussed through a set of applications in four rivers featuring heterogeneous daily flow regimes. The model provides reliable estimates of seasonal maximum flows in different climatic settings and is able to capture diverse shapes of flood frequency curves emerging in erratic and persistent flow regimes. The proposed method exploits experimental information on the full range of discharges experienced by rivers. As a consequence, model performances do not deteriorate when the magnitude of events with return times longer than the available sample size is estimated. The approach provides a framework for the prediction of floods based on short data series of rainfall and daily streamflows that may be especially valuable in data scarce regions of the world.
Bayesian microsaccade detection
Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji
2017-01-01
Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
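For context, the "default method" that BMD is compared against can be sketched in a few lines: smooth the eye-position trace, differentiate it, and flag samples whose speed exceeds a robust noise-based threshold. The sampling rate, threshold factor, and synthetic trace below are illustrative assumptions, and this is not the Bayesian algorithm itself.

```python
import numpy as np

# Velocity-threshold microsaccade detection on a synthetic 1-D eye trace.
fs = 1000.0                                   # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)

pos = np.cumsum(rng.normal(0, 0.002, t.size))    # random-walk drift (deg)
pos[1000:1010] += np.linspace(0.0, 0.3, 10)      # injected ~0.3 deg microsaccade
pos[1010:] += 0.3                                # position stays at the new level
pos += rng.normal(0, 0.01, t.size)               # measurement noise

kernel = np.ones(5) / 5                          # short moving-average smoother
pos_s = np.convolve(pos, kernel, mode="same")
vel = np.gradient(pos_s) * fs                    # velocity in deg/s

# robust (median-based) estimate of the velocity noise level
sigma = np.median(np.abs(vel - np.median(vel))) / 0.6745
threshold = 6.0 * sigma                          # assumed threshold factor
flags = np.abs(vel) > threshold

print(f"threshold = {threshold:.1f} deg/s, flagged samples: {flags.sum()}")
```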
FARVATX: FAmily-based Rare Variant Association Test for X-linked genes
Choi, Sungkyoung; Lee, Sungyoung; Qiao, Dandi; Hardin, Megan; Cho, Michael H.; Silverman, Edwin K; Park, Taesung; Won, Sungho
2016-01-01
Although the X chromosome has many genes that are functionally related to human diseases, the complicated biological properties of the X chromosome have prevented efficient genetic association analyses, and only a few significantly associated X-linked variants have been reported for complex traits. For instance, dosage compensation of X-linked genes is often achieved via the inactivation of one allele in each X-linked variant in females; however, some X-linked variants can escape this X chromosome inactivation. Efficient genetic analyses cannot be conducted without prior knowledge about the gene expression process of X-linked variants, and misspecified information can lead to power loss. In this report, we propose new statistical methods for rare X-linked variant genetic association analysis of dichotomous phenotypes with family-based samples. The proposed methods are computationally efficient and can complete X-linked analyses within a few hours. Simulation studies demonstrate the statistical efficiency of the proposed methods, which were then applied to rare-variant association analysis of the X chromosome in chronic obstructive pulmonary disease (COPD). Some promising significant X-linked genes were identified, illustrating the practical importance of the proposed methods. PMID:27325607
Ferguson, Eamonn; Gallagher, Laura
2007-11-01
People respond differently when information is framed either positively or negatively (frame valence). Two prominent models propose that the effects of valence are moderated by (1) the method of framing (attributes vs. goals: Levin, Schneider, & Gaeth, 1998) and (2) perceived risk (Rothman & Salovey, 1997). This experiment (N=200) explores the joint influence of both of these moderators with respect to decisions about a flu vaccination. The study extends previous work by integrating these two models and exploring the moderating effects of two different aspects of perceived risk (personal outcome effectiveness and procedural risk). The results show that personal outcome effectiveness indirectly links frames to intentions. Procedural risk moderates the relationship between valence and method in a manner consistent with predictions from Levin et al. Partial support for the model proposed by Rothman and Salovey is observed for goal frames only.
Optimal accelerometer placement on a robot arm for pose estimation
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.
2017-05-01
The performance of robots to carry out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with angle joint encoders that are used to estimate the position and orientation (or the pose) of its end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every, or any joint. In this paper we study the use of inertial sensors, in particular accelerometers, placed on the robot that can be used to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement and Signal-to-Noise Ratio (SNR) are included in our study of their effects for robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated pose estimation gains resulting from using increasingly large numbers of sensors. Monte-Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with increasing number of accelerometers, whereas for a SNR model that scales inversely to the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.
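The Monte Carlo error metric described above can be illustrated with a deliberately simplified sketch: assume each link-mounted accelerometer provides a noisy, gravity-referenced estimate of its link's absolute inclination, reconstruct the end-effector position of a planar two-link arm from those estimates, and average the position error over many trials. Link lengths, noise level, and the error metric below are assumptions for illustration only, not the configurations studied in the paper.

```python
import numpy as np

# Monte Carlo estimate of end-effector error for a planar two-link arm,
# assuming noisy accelerometer-derived absolute link inclinations.
rng = np.random.default_rng(1)
L1, L2 = 0.4, 0.3              # link lengths (m), assumed
sigma = np.deg2rad(1.0)        # inclination noise (rad), assumed
n_trials = 10000

def end_effector(a1, a2):
    """Forward kinematics from absolute link inclinations a1, a2."""
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a2),
                     L1 * np.sin(a1) + L2 * np.sin(a2)])

true_a1, true_a2 = np.deg2rad(40.0), np.deg2rad(100.0)
p_true = end_effector(true_a1, true_a2)

errors = np.empty(n_trials)
for i in range(n_trials):
    a1_meas = true_a1 + sigma * rng.normal()
    a2_meas = true_a2 + sigma * rng.normal()
    errors[i] = np.linalg.norm(end_effector(a1_meas, a2_meas) - p_true)

print(f"expected end-effector error: {errors.mean() * 1000:.2f} mm")
```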
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin Mingde; Marshall, Craig T.; Qi, Yi
Purpose: The use of preclinical rodent models of disease continues to grow because these models help elucidate pathogenic mechanisms and provide robust test beds for drug development. Among the major anatomic and physiologic indicators of disease progression and genetic or drug modification of responses are measurements of blood vessel caliber and flow. Moreover, cardiopulmonary blood flow is a critical indicator of gas exchange. Current methods of measuring cardiopulmonary blood flow suffer from some or all of the following limitations: they produce relative values, are limited to global measurements, do not provide vasculature visualization, are not able to measure acute changes, are invasive, or require euthanasia. Methods: In this study, high-spatial and high-temporal resolution x-ray digital subtraction angiography (DSA) was used to obtain vasculature visualization, quantitative blood flow in absolute metrics (ml/min instead of arbitrary units or velocity), and relative blood volume dynamics from discrete regions of interest on a pixel-by-pixel basis (100×100 μm²). Results: A series of calibrations linked the DSA flow measurements to standard physiological measurement using thermodilution and Fick's method for cardiac output (CO), which in eight anesthetized Fischer-344 rats was found to be 37.0±5.1 ml/min. Phantom experiments were conducted to calibrate the radiographic density to vessel thickness, allowing a link of DSA cardiac output measurements to cardiopulmonary blood flow measurements in discrete regions of interest. The scaling factor linking relative DSA cardiac output measurements to Fick's absolute measurements was found to be 18.90×CO_DSA = CO_Fick. Conclusions: This calibrated DSA approach allows repeated simultaneous visualization of vasculature and measurement of blood flow dynamics on a regional level in the living rat.
Modeling highway travel time distribution with conditional probability models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
ABSTRACT Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
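The convolution building block mentioned in the abstract is easy to illustrate: if two successive links had independent travel-time distributions, the route distribution would simply be their convolution (the paper goes further and conditions the downstream distribution on the upstream travel time). The discrete travel-time probability mass functions below are illustrative assumptions.

```python
import numpy as np

# Route travel-time distribution as the convolution of two independent
# link travel-time PMFs on a 1-minute grid (assumed values).
link1 = np.array([0.0, 0.1, 0.5, 0.3, 0.1])    # P(T1 = 0..4 min)
link2 = np.array([0.0, 0.2, 0.6, 0.2])         # P(T2 = 0..3 min)

route = np.convolve(link1, link2)              # P(T1 + T2 = 0..7 min)
mean_route = np.dot(np.arange(route.size), route)

print("route travel-time PMF:", np.round(route, 3))
print(f"mean route travel time: {mean_route:.2f} min")
```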
Tang, Xiaoming; Qu, Hongchun; Wang, Ping; Zhao, Meng
2015-03-01
This paper investigates an off-line synthesis approach of model predictive control (MPC) for a class of networked control systems (NCSs) with network-induced delays. A new augmented model, which can readily accommodate a time-varying control law, is proposed to describe the NCS, in which bounded deterministic network-induced delays may occur in both the sensor-to-controller (S-C) and controller-to-actuator (C-A) links. Based on this augmented model, a sufficient condition for closed-loop stability is derived by applying the Lyapunov method. The off-line synthesis approach of model predictive control is addressed using the stability results of the system, which explicitly considers the satisfaction of input and state constraints. A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Generation of muscular dystrophy model rats with a CRISPR/Cas system.
Nakamura, Katsuyuki; Fujii, Wataru; Tsuboi, Masaya; Tanihata, Jun; Teramoto, Naomi; Takeuchi, Shiho; Naito, Kunihiko; Yamanouchi, Keitaro; Nishihara, Masugi
2014-07-09
Duchenne muscular dystrophy (DMD) is an X-linked lethal muscle disorder caused by mutations in the Dmd gene encoding Dystrophin. DMD model animals, such as mdx mice and canine X-linked muscular dystrophy dogs, have been widely utilized in the development of a treatment for DMD. Here, we demonstrate the generation of Dmd-mutated rats using a clustered regularly interspaced short palindromic repeats (CRISPR)/Cas system, an RNA-based genome engineering technique that is also adaptable to rats. We simultaneously targeted two exons in the rat Dmd gene, which resulted in the absence of Dystrophin expression in the F0 generation. Dmd-mutated rats exhibited a decline in muscle strength, and the emergence of degenerative/regenerative phenotypes in the skeletal muscle, heart, and diaphragm. These mutations were heritable by the next generation, and F1 male rats exhibited similar phenotypes in their skeletal muscles. These model rats should prove to be useful for developing therapeutic methods to treat DMD.
Enzyme-linked immunosorbent assay detection and bioactivity of Cry1Ab protein fragments.
Albright, Vurtice C; Hellmich, Richard L; Coats, Joel R
2016-12-01
The continuing use of transgenic crops has led to an increased interest in the fate of insecticidal crystalline (Cry) proteins in the environment. Enzyme-linked immunosorbent assays (ELISAs) have emerged as the preferred detection method for Cry proteins in environmental matrices. Concerns exist that ELISAs are capable of detecting fragments of Cry proteins, which may lead to an overestimation of the concentration of these proteins in the environment. Five model systems were used to generate fragments of the Cry1Ab protein, which were then analyzed by ELISAs and bioassays. Fragments from 4 of the model systems were not detectable by ELISA and did not retain bioactivity. Fragments from the proteinase K model system were detectable by ELISA and retained bioactivity. In most cases, ELISAs appear to provide an accurate estimation of the amount of Cry proteins in the environment, as detectable fragments retained bioactivity and nondetectable fragments did not retain bioactivity. Environ Toxicol Chem 2016;35:3101-3112. © 2016 SETAC.
Some new exact solitary wave solutions of the van der Waals model arising in nature
NASA Astrophysics Data System (ADS)
Bibi, Sadaf; Ahmed, Naveed; Khan, Umar; Mohyud-Din, Syed Tauseef
2018-06-01
This work applies two well-known methods, namely the exponential rational function method (ERFM) and the generalized Kudryashov method (GKM), to seek new exact solutions of the van der Waals normal form for fluidized granular matter, which is linked with natural phenomena and industrial applications. New soliton solutions such as kink, periodic and solitary wave solutions are established, coupled with 2D and 3D graphical patterns for clarity of physical features. Our comparison reveals that the said methods outperform several existing methods. The worked-out solutions show that the suggested methods are simple and reliable compared to many other approaches that tackle nonlinear equations stemming from applied sciences.
NASA Astrophysics Data System (ADS)
Alexander, M. Joan; Stephan, Claudia
2015-04-01
In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, but they are poorly constrained by observational validation, and tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study the relationships of precipitation, latent heating/cooling and other cloud properties to the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for the use of these models for gravity wave parameterization development, since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep/large-area domains for studies of wave-mean flow interactions. This unique validated model tool permits quantitative studies of gravity wave driving of regional circulation and provides a new method for future development of realistic convective gravity wave parameterizations.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.; Waas, Anthony M.
2013-01-01
A mesh objective crack band model was implemented within the generalized method of cells micromechanics theory. This model was linked to a macroscale finite element model to predict post-peak strain softening in composite materials. Although a mesh objective theory was implemented at the microscale, it does not preclude pathological mesh dependence at the macroscale. To ensure mesh objectivity at both scales, the energy density and the energy release rate must be preserved identically across the two scales. This requires a consistent characteristic length or localization limiter. The effects of scaling (or not scaling) the dimensions of the microscale repeating unit cell (RUC), according to the macroscale element size, in a multiscale analysis were investigated using two examples. Additionally, the ramifications of the macroscale element shape, compared to the RUC, were studied.
Decentralising Curriculum Reform: The Link Teacher Model of In-Service Training.
ERIC Educational Resources Information Center
Wildy, Helen; And Others
1996-01-01
Presents a (Western Australian) case study of the link teacher model, a decentralized, "train the trainer" approach to inservice education. Discusses the model's perceived effectiveness, the link teachers' role, central authority support, and new experimentation opportunities. Combining centralized syllabus change with decentralized…
Optimal Linking Design for Response Model Parameters
ERIC Educational Resources Information Center
Barrett, Michelle D.; van der Linden, Wim J.
2017-01-01
Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.
The economic impact of drag in general aviation
NASA Technical Reports Server (NTRS)
Neal, R. D.
1975-01-01
General aviation aircraft fuel consumption and operating costs are closely linked to drag reduction methods. Improvements in airplane drag are envisioned for new models; their effects will be in the 5 to 10% range. Major improvements in fuel consumption over existing turbofan airplanes will be the combined results of improved aerodynamics plus additional effects from advanced turbofan engine designs.
Modeling the Effect of Nail Corrosion on the Lateral Strength of Joints
Samuel L. Zelinka; Douglas R. Rammer
2012-01-01
This article describes a theoretical method of linking fastener corrosion in wood connections to potential reduction in lateral shear strength. It builds upon published quantitative data of corrosion rates of metals in contact with treated wood for several different wood preservatives. These corrosion rates are then combined with yield theory equations to calculate a...
ERIC Educational Resources Information Center
Smith, Dedrick A.
2010-01-01
This dissertation reviews knowledge management's role in organizational maturity in project management. It draws a direct link between organizational knowledge channels, both informal and formal, and organizational project management maturity. The study uses a mixed method approach through online and telephone surveys that draws…
When Cannabis Is Available and Visible at School--A Multilevel Analysis of Students' Cannabis Use
ERIC Educational Resources Information Center
Kuntsche, Emmanuel
2010-01-01
Aims: To investigate the links between the visibility of cannabis use in school (measured by teachers' reports of students being under the influence of cannabis on school premises), the proportion of cannabis users in the class, perceived availability of cannabis, as well as adolescent cannabis use. Methods: A multilevel regression model was…
ERIC Educational Resources Information Center
Raffo, Carlo; O'Connor, Justin; Lovatt, Andy; Banks, Mark
2000-01-01
Presents arguments supporting a social model of learning linked to situated learning and cultural capital. Critiques training methods used in cultural industries (arts, publishing, broadcasting, design, fashion, restaurants). Uses case study evidence to demonstrate the inadequacies of formal training in this sector. (Contains 49 references.) (SK)
ERIC Educational Resources Information Center
Aspelmeier, Jeffery E.; Elliott, Ann N.; Smith, Christopher H.
2007-01-01
Objective: The present study tests a model linking attachment, childhood sexual abuse (CSA), and adult psychological functioning. It expands on previous work by assessing the degree to which attachment security moderates the relationship between a history of child sexual abuse and trauma-related symptoms in college females. Method: Self-reports of…
Preparing Graduate Students to Teach Math: Engaging with Activities and Viewing Teaching Models
ERIC Educational Resources Information Center
Mongillo, Maria Boeke
2016-01-01
Teacher self-efficacy is the belief a teacher holds that he or she can make a difference in student achievement, even when the student is difficult or unmotivated (Guskey & Passaro, 1994). It has been linked to positive teacher practices and student outcomes. This mixed methods study of preservice elementary and early childhood math teachers…
A great deal of academic, private sector, and government research has been initiated to apply advanced molecular biological methods to the discovery of toxicity pathways in wildlife and humans. One aim is the prediction of health outcomes based on the combination of refined chemi...
ERIC Educational Resources Information Center
Correa-Fernandez, Virmarie; Ji, Lingyun; Castro, Yessenia; Heppner, Whitney L.; Vidrine, Jennifer Irvin; Costello, Tracy J.; Mullen, Patricia Dolan; Cofta-Woerpel, Ludmila; Velasquez, Mary M.; Greisinger, Anthony; Cinciripini, Paul M.; Wetter, David W.
2012-01-01
Objective: Based on conceptual models of addiction and affect regulation, this study examined the mechanisms linking current major depressive syndrome (MDS) and anxiety syndrome (AS) to postpartum smoking relapse. Method: Data were collected in a randomized clinical trial from 251 women who quit smoking during pregnancy. Simple and multiple…
Nursing Home Staffing and Quality under the Nursing Home Reform Act
ERIC Educational Resources Information Center
Zhang, Xinzhi; Grabowski, David C.
2004-01-01
Purpose: We examine whether the Nursing Home Reform Act (NHRA) improved nursing home staffing and quality. Design and Methods: Data from 5,092 nursing homes were linked across the 1987 Medicare/Medicaid Automated Certification System and the 1993 Online Survey, Certification and Reporting system. A dummy-year model was used to examine the effects…
The Role of Education, Parents and Peers in Adolescent Heavy Episodic Drinking
ERIC Educational Resources Information Center
Vermeulen-Smit, Evelien; Ter Bogt, Tom F. M.; Verdurmen, Jacqueline E. E.; Van Dorsselaer, Saskia A. F. M.; Vollebergh, Wilma A. M.
2012-01-01
Heavy episodic drinking is more common among adolescents with a lower educational level. Aim: This study probed into the mechanism through which a lower educational level is linked to heavier adolescent drinking. Methods: Structural equation modelling was conducted using data from the 2005 Health Behaviour in School-aged Children Survey (n =…
ERIC Educational Resources Information Center
Kaldon, Carolyn R.; Zoblotsky, Todd A.
2014-01-01
Previous research has linked inquiry-based science instruction (i.e., science instruction that engages students in doing science rather than just learning about science) with greater gains in student learning than textbook-based methods (Vanosdall, Klentschy, Hedges & Weisbaum, 2007; Banilower, 2007; Ferguson 2009; Bredderman, 1983;…
NASA Astrophysics Data System (ADS)
Zunino, Andrea; Mosegaard, Klaus
2017-04-01
Reservoir properties of interest are linked only indirectly to the observable geophysical data recorded at the earth's surface. In this framework, seismic data represent one of the most reliable tools to study the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. As such, inference of those properties implies also taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method characterized by a multi-step, complex, nonlinear forward modeling that combines: 1) a rock physics model with 2) the solution of the full Zoeppritz equations and 3) a convolutional seismic forward modeling. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are directly sampled from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy can be particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and full Zoeppritz equations, and it allows the related uncertainty to be estimated. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This allows us a certain flexibility in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.
A Numerical Comparison of Lagrange and Kane's Methods of an Arm Segment
NASA Astrophysics Data System (ADS)
Rambely, Azmin Sham; Halim, Norhafiza Ab.; Ahmad, Rokiah Rozita
A 2-D model of a two-link kinematic chain is developed using two dynamics formulations of the equations of motion, namely Kane's and Lagrange's methods. The dynamics equations are reduced to first-order differential equations and solved using the modified Euler and fourth-order Runge-Kutta methods to approximate the shoulder and elbow joint angles during a smash performance in badminton. Results showed that Runge-Kutta produced a better and more exact approximation than modified Euler, and both dynamics formulations yielded good absolute errors.
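The integrator comparison can be reproduced in miniature on any system with a known solution, so the absolute errors of the two schemes are directly comparable. In the sketch below a linear oscillator stands in for the reduced first-order equations of the arm segment; it illustrates modified Euler (Heun) versus classical fourth-order Runge-Kutta, not the badminton-smash model itself.

```python
import numpy as np

def f(t, y):
    # y = [theta, omega]; simple harmonic oscillator theta'' = -theta
    return np.array([y[1], -y[0]])

def modified_euler_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

h, t_end = 0.01, 5.0
steps = int(t_end / h)
y_me = np.array([1.0, 0.0])     # same initial condition for both integrators
y_rk = np.array([1.0, 0.0])
for i in range(steps):
    t = i * h
    y_me = modified_euler_step(f, t, y_me, h)
    y_rk = rk4_step(f, t, y_rk, h)

exact = np.cos(t_end)           # analytic solution theta(t) = cos(t)
print(f"modified Euler error: {abs(y_me[0] - exact):.2e}")
print(f"RK4 error:            {abs(y_rk[0] - exact):.2e}")
```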
Authorship attribution based on Life-Like Network Automata.
Machicao, Jeaneth; Corrêa, Edilson A; Miranda, Gisele H B; Amancio, Diego R; Bruno, Odemir M
2018-01-01
Authorship attribution is a problem of considerable practical and technical interest. Several methods have been designed to infer the authorship of disputed documents in multiple contexts. While traditional statistical methods based solely on word counts and related measurements have provided a simple yet effective solution in particular cases, they are prone to manipulation. Recently, texts have been successfully modeled as networks, where words are represented by nodes linked according to textual similarity measurements. Such models are useful to identify informative topological patterns for the authorship recognition task. However, there is no consensus on which measurements should be used. Thus, we proposed a novel method to characterize text networks, by considering both topological and dynamical aspects of networks. Using concepts and methods from cellular automata theory, we devised a strategy to grasp informative spatio-temporal patterns from this model. Our experiments revealed an outperformance over structural analysis relying only on topological measurements, such as clustering coefficient, betweenness and shortest paths. The optimized results obtained here pave the way for a better characterization of textual networks.
Perez-Brena, Norma J.; Cookston, Jeffrey T.; Fabricius, William V.; Saenz, Delia
2013-01-01
A mixed-method study identified profiles of fathers who mentioned key dimensions of their parenting and linked profile membership to adolescents’ adjustment using data from 337 European American, Mexican American and Mexican immigrant fathers and their early adolescent children. Father narratives about what fathers do well as parents were thematically coded for the presence of five fathering dimensions: emotional quality (how well father and child get along), involvement (amount of time spent together), provisioning (the amount of resources provided), discipline (the amount and success in parental control), and role modeling (teaching life lessons through example). Next, latent class analysis was used to identify three patterns of the likelihood of mentioning certain fathering dimensions: an emotionally-involved group mentioned emotional quality and involvement; an affective-control group mentioned emotional quality, involvement, discipline and role modeling; and an affective-model group mentioned emotional quality and role modeling. Profiles were significantly associated with subsequent adolescents’ reports of adjustment such that adolescents of affective-control fathers reported significantly more externalizing behaviors than adolescents of emotionally-involved fathers. PMID:24883049
NASA Astrophysics Data System (ADS)
Jacobs-Crisioni, C.; Koopmans, C. C.
2016-07-01
This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness is defined as a function of variables in which revenue and broader societal benefits may play a role and can be based on empirically underpinned parameters that may differ according to private or public interests. The choice set is selected from an exhaustive set of links and presumably contains those investment options that best meet a private operator's objectives by balancing the revenues of additional fare against construction costs. The investment options consist of geographically plausible routes with potential detours. These routes are generated using a fine-meshed, regularly latticed network and shortest-path-finding methods. Additionally, two indicators of the geographic accuracy of the simulated networks are introduced. A historical case study is presented to demonstrate the model's first results. These results show that the modelled networks reproduce relevant results of the historically built network with reasonable accuracy.
Link-topic model for biomedical abbreviation disambiguation.
Kim, Seonho; Yoon, Juntae
2015-02-01
The ambiguity of biomedical abbreviations is one of the challenges in biomedical text mining systems. In particular, the handling of term variants and abbreviations without nearby definitions is a critical issue. In this study, we adopt the concepts of topic of document and word link to disambiguate biomedical abbreviations. We newly suggest the link topic model inspired by the latent Dirichlet allocation model, in which each document is perceived as a random mixture of topics, where each topic is characterized by a distribution over words. Thus, the most probable expansions with respect to abbreviations of a given abstract are determined by word-topic, document-topic, and word-link distributions estimated from a document collection through the link topic model. The model allows two distinct modes of word generation to incorporate semantic dependencies among words, particularly long form words of abbreviations and their sentential co-occurring words; a word can be generated either dependently on the long form of the abbreviation or independently. The semantic dependency between two words is defined as a link and a new random parameter for the link is assigned to each word as well as a topic parameter. Because the link status indicates whether the word constitutes a link with a given specific long form, it has the effect of determining whether a word forms a unigram or a skipping/consecutive bigram with respect to the long form. Furthermore, we place a constraint on the model so that a word has the same topic as a specific long form if it is generated in reference to the long form. Consequently, documents are generated from the two hidden parameters, i.e. topic and link, and the most probable expansion of a specific abbreviation is estimated from the parameters. Our model relaxes the bag-of-words assumption of the standard topic model in which the word order is neglected, and it captures a richer structure of text than does the standard topic model by considering unigrams and semantically associated bigrams simultaneously. The addition of semantic links improves the disambiguation accuracy without removing irrelevant contextual words and reduces the parameter space of massive skipping or consecutive bigrams. The link topic model achieves 98.42% disambiguation accuracy on 73,505 MEDLINE abstracts with respect to 21 three letter abbreviations and their 139 distinct long forms. Copyright © 2014 Elsevier Inc. All rights reserved.
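For contrast with the link topic model, the simplest disambiguation baseline chooses the long form whose training contexts share the most words with the new abstract. The toy corpus, abbreviation, and scoring below are purely illustrative assumptions; the sketch does not infer topic or link variables.

```python
from collections import Counter

# Naive bag-of-words baseline for abbreviation disambiguation (not the link
# topic model): score each candidate long form by word-count overlap between
# its training contexts and the new abstract. Corpus is hypothetical toy data.
training = {
    "principal component analysis": [
        "matrix eigenvectors variance dimensionality reduction of data",
    ],
    "patient-controlled analgesia": [
        "postoperative pain relief with a morphine pump and opioid dosing",
    ],
}

def overlap_score(context, documents):
    """Sum of word-count overlaps between the context and the documents."""
    ctx = Counter(context.lower().split())
    return sum(min(ctx[w], c)
               for doc in documents
               for w, c in Counter(doc.lower().split()).items())

new_abstract = "patients reported less postoperative pain with the morphine pump"
best = max(training, key=lambda long_form: overlap_score(new_abstract, training[long_form]))
print("predicted long form for the abbreviation 'PCA':", best)
```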
Advanced 3D Characterization and Reconstruction of Reactor Materials FY16 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fromm, Bradley; Hauch, Benjamin; Sridharan, Kumar
2016-12-01
A coordinated effort to link advanced materials characterization methods and computational modeling approaches is critical to future success for understanding and predicting the behavior of reactor materials that operate at extreme conditions. The difficulty and expense of working with nuclear materials have inhibited the use of modern characterization techniques on this class of materials. Likewise, mesoscale simulation efforts have been impeded due to insufficient experimental data necessary for initialization and validation of the computer models. The objective of this research is to develop methods to integrate advanced materials characterization techniques developed for reactor materials with state-of-the-art mesoscale modeling and simulation tools. Research to develop broad-ion beam sample preparation, high-resolution electron backscatter diffraction, and digital microstructure reconstruction techniques, and methods for integration of these techniques into mesoscale modeling tools, are detailed. Results for both irradiated and un-irradiated reactor materials are presented for FY14 - FY16 and final remarks are provided.
Soil organic carbon stocks in Alaska estimated with spatial and pedon data
Bliss, Norman B.; Maursetter, J.
2010-01-01
Temperatures in high-latitude ecosystems are increasing faster than the average rate of global warming, which may lead to a positive feedback for climate change by increasing the respiration rates of soil organic C. If a positive feedback is confirmed, soil C will represent a source of greenhouse gases that is not currently considered in international protocols to regulate C emissions. We present new estimates of the stocks of soil organic C in Alaska, calculated by linking spatial and field data developed by the USDA NRCS. The spatial data are from the State Soil Geographic database (STATSGO), and the field and laboratory data are from the National Soil Characterization Database, also known as the pedon database. The new estimates range from 32 to 53 Pg of soil organic C for Alaska, formed by linking the spatial and field data using the attributes of Soil Taxonomy. For modelers, we recommend an estimation method based on taxonomic subgroups with interpolation for missing areas, which yields an estimate of 48 Pg. This is a substantial increase over a magnitude of 13 Pg estimated from only the STATSGO data as originally distributed in 1994, but the increase reflects different estimation methods and is not a measure of the change in C on the landscape. Pedon samples were collected between 1952 and 2002, so the results do not represent a single point in time. The linked databases provide an improved basis for modeling the impacts of climate change on net ecosystem exchange.
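The linkage idea, joining spatial map units to pedon-derived carbon densities through a shared Soil Taxonomy attribute and then area-weighting to a total stock, can be sketched as a simple table join. All table contents and column names below are hypothetical and only illustrate the join-and-aggregate pattern, not the STATSGO or pedon database schemas.

```python
import pandas as pd

# Link a spatial map-unit table to pedon-derived carbon densities via a
# taxonomic attribute, then area-weight to a total stock (hypothetical data).
map_units = pd.DataFrame({
    "map_unit": ["MU1", "MU2", "MU3"],
    "subgroup": ["Typic Histoturbels", "Typic Aquiturbels", "Typic Histoturbels"],
    "area_km2": [1200.0, 800.0, 450.0],
})

pedon_means = pd.DataFrame({
    "subgroup": ["Typic Histoturbels", "Typic Aquiturbels"],
    "soc_kg_per_m2": [55.0, 32.0],      # mean organic C density from pedons
})

linked = map_units.merge(pedon_means, on="subgroup", how="left")
# kg C per map unit = area (km2) * 1e6 m2/km2 * density (kg/m2); 1 Pg = 1e12 kg
linked["soc_Pg"] = linked["area_km2"] * 1e6 * linked["soc_kg_per_m2"] / 1e12

print(linked)
print("total stock (Pg C):", linked["soc_Pg"].sum())
```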
NASA Technical Reports Server (NTRS)
Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.
1971-01-01
This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.
NASA Astrophysics Data System (ADS)
Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet
2015-09-01
The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) in the ASIS model a link is removed between two nodes if exactly one of the nodes is infected, to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.
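A discrete-time caricature of the adaptive link dynamics is easy to simulate and helps make the ASIS rules concrete: S-I links are broken to suppress spread and S-S links are created to strengthen the healthy part of the network. The probabilities, network size, and the discrete-time approximation below are illustrative assumptions, not the continuous-time models analysed in the paper.

```python
import numpy as np

# Discrete-time adaptive SIS sketch: infection/recovery plus S-I link breaking
# and S-S link creation. All rates and sizes are illustrative assumptions.
rng = np.random.default_rng(42)
N = 80
A = np.triu(rng.random((N, N)) < 0.06, k=1)
A = (A | A.T)                              # symmetric Erdos-Renyi start
np.fill_diagonal(A, False)

infected = np.zeros(N, dtype=bool)
infected[rng.choice(N, 4, replace=False)] = True

beta, delta = 0.06, 0.10                   # per-step infection / recovery prob.
zeta, xi = 0.05, 0.002                     # per-step S-I break / S-S create prob.

for step in range(300):
    susceptible = ~infected
    # spreading dynamics: count infected neighbours of each node
    n_inf_neigh = (A & infected).sum(axis=1)
    p_inf = 1.0 - (1.0 - beta) ** n_inf_neigh
    new_inf = susceptible & (rng.random(N) < p_inf)
    recovered = infected & (rng.random(N) < delta)

    # adaptive link dynamics: decide on the upper triangle, then mirror
    U = np.triu(np.ones((N, N), dtype=bool), k=1)
    si = np.logical_xor.outer(infected, infected)   # exactly one endpoint infected
    ss = np.outer(susceptible, susceptible)
    coin = rng.random((N, N))
    breaks = U & A & si & (coin < zeta)
    makes = U & ~A & ss & (coin < xi)
    A = (A & ~(breaks | breaks.T)) | makes | makes.T

    infected = (infected | new_inf) & ~recovered

print("final prevalence:", infected.mean(), "mean degree:", A.sum() / N)
```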
A General Method for Targeted Quantitative Cross-Linking Mass Spectrometry.
Chavez, Juan D; Eng, Jimmy K; Schweppe, Devin K; Cilia, Michelle; Rivera, Keith; Zhong, Xuefei; Wu, Xia; Allen, Terrence; Khurgel, Moshe; Kumar, Akhilesh; Lampropoulos, Athanasios; Larsson, Mårten; Maity, Shuvadeep; Morozov, Yaroslav; Pathmasiri, Wimal; Perez-Neut, Mathew; Pineyro-Ruiz, Coriness; Polina, Elizabeth; Post, Stephanie; Rider, Mark; Tokmina-Roszyk, Dorota; Tyson, Katherine; Vieira Parrine Sant'Ana, Debora; Bruce, James E
2016-01-01
Chemical cross-linking mass spectrometry (XL-MS) provides protein structural information by identifying covalently linked proximal amino acid residues on protein surfaces. The information gained by this technique is complementary to other structural biology methods such as X-ray crystallography, NMR and cryo-electron microscopy [1]. The extension of traditional quantitative proteomics methods with chemical cross-linking can provide information on the structural dynamics of protein structures and protein complexes. The identification and quantitation of cross-linked peptides remains challenging for the general community, requiring specialized expertise that ultimately limits more widespread adoption of the technique. We describe a general method for targeted quantitative mass spectrometric analysis of cross-linked peptide pairs. We report the adaptation of the widely used, open-source software package Skyline for the analysis of quantitative XL-MS data as a means for data analysis and sharing of methods. We demonstrate the utility and robustness of the method with a cross-laboratory study and present data that are supported by and validate previously published data on quantified cross-linked peptide pairs. This advance provides an easy-to-use resource so that any lab with access to an LC-MS system capable of performing targeted quantitative analysis can quickly and accurately measure dynamic changes in protein structure and protein interactions.
A Public Health Model for the Molecular Surveillance of HIV Transmission in San Diego, California
May, Susanne; Tweeten, Samantha; Drumright, Lydia; Pacold, Mary E.; Kosakovsky Pond, Sergei L.; Pesano, Rick L.; Lie, Yolanda S.; Richman, Douglas D.; Frost, Simon D.W.; Woelk, Christopher H.; Little, Susan J.
2009-01-01
Background Current public health efforts often use molecular technologies to identify and contain communicable disease networks, but not for HIV. Here, we investigate how molecular epidemiology can be used to identify highly related HIV networks within a population and how voluntary contact tracing of sexual partners can be used to selectively target these networks. Methods We evaluated the use of HIV-1 pol sequences obtained from participants of a community-recruited cohort (n=268) and a primary infection research cohort (n=369) to define highly related transmission clusters and the use of contact tracing to link other individuals (n=36) within these clusters. The presence of transmitted drug resistance was interpreted from the pol sequences (Calibrated Population Resistance v3.0). Results Phylogenetic clustering was conservatively defined when the genetic distance between any two pol sequences was <1%, which identified 34 distinct transmission clusters within the combined community-recruited and primary infection research cohorts containing 160 individuals. Although sequences from the epidemiologically linked partners represented approximately 5% of the total sequences, they clustered with 60% of the sequences that clustered from the combined cohorts (OR 21.7; p<0.01). Major resistance to at least one class of antiretroviral medication was found in 19% of clustering sequences. Conclusions Phylogenetic methods can be used to identify individuals who are within highly related transmission groups, and contact tracing of epidemiologically linked partners of recently infected individuals can be used to link into previously defined transmission groups. These methods could be used to implement selectively targeted prevention interventions. PMID:19098493
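The clustering rule itself, linking two sequences when their pairwise genetic distance is below 1% and taking connected components as transmission clusters, can be sketched on toy data. The short sequences and the plain Hamming distance below (in place of a model-based pol distance) are illustrative assumptions.

```python
# Cluster toy sequences with pairwise distance < 1% via connected components.
seqs = ["ACGTACGTACGTACGTACGT",
        "ACGTACGTACGTACGTACGA",   # 1 difference out of 20 (5%): not clustered
        "ACGTACGTACGTACGTACGT",   # identical to the first: clustered
        "TTTTACGTACGTACGTACGT"]   # 4 differences (20%): not clustered

def distance(a, b):
    """Fraction of mismatching positions (simple Hamming distance)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

n = len(seqs)
linked = [[distance(seqs[i], seqs[j]) < 0.01 for j in range(n)] for i in range(n)]

# connected components by simple label propagation until no label changes
labels = list(range(n))
changed = True
while changed:
    changed = False
    for i in range(n):
        for j in range(n):
            if linked[i][j] and labels[j] < labels[i]:
                labels[i] = labels[j]
                changed = True

print("cluster labels:", labels)   # sequences 0 and 2 share a cluster
```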
Using expert knowledge for test linking.
Bolsinova, Maria; Hoijtink, Herbert; Vermeulen, Jorine Adinda; Béguin, Anton
2017-12-01
Linking and equating procedures are used to make the results of different test forms comparable. In cases where no assumption of randomly equivalent groups can be made, some form of linking design is used. In practice the amount of data available to link the two tests is often very limited due to logistic and security reasons, which affects the precision of linking procedures. This study proposes to enhance the quality of linking procedures based on sparse data by using Bayesian methods which combine the information in the linking data with background information captured in informative prior distributions. We propose two methods for the elicitation of prior knowledge about the difference in difficulty of two tests from subject-matter experts and explain how these results can be used in the specification of priors. To illustrate the proposed methods and evaluate the quality of linking with and without informative priors, an empirical example of linking primary school mathematics tests is presented. The results suggest that informative priors can increase the precision of linking without decreasing the accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
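The core idea of combining sparse linking data with elicited expert knowledge can be illustrated with a conjugate normal-normal update on the difficulty difference between two test forms. The prior mean and standard deviation, the per-examinee score differences, and the known error standard deviation below are illustrative assumptions, not the elicitation procedure of the study.

```python
import numpy as np

# Conjugate normal-normal update: expert prior on the difficulty shift plus a
# small linking sample. All numbers are illustrative assumptions.
prior_mean, prior_sd = 0.30, 0.20          # experts: form B ~0.3 logits harder
obs = np.array([0.55, 0.10, 0.42, 0.25, 0.61, 0.05, 0.38])   # sparse linking data
obs_sd = 0.50                              # assumed known measurement SD

n = obs.size
post_var = 1.0 / (1.0 / prior_sd**2 + n / obs_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + obs.sum() / obs_sd**2)

print(f"posterior difficulty shift: {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")
```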
Simulation of Controller Pilot Data Link Communications over VHF Digital Link Mode 3
NASA Technical Reports Server (NTRS)
Bretmersky, Steven C.; Murawski, Robert; Nguyen, Thanh C.; Raghavan, Rajesh S.
2004-01-01
The Federal Aviation Administration (FAA) has established an operational plan for the future Air Traffic Management (ATM) system, in which the Controller Pilot Data Link Communications (CPDLC) is envisioned to evolve into digital messaging that will take on an ever increasing role in controller to pilot communications, significantly changing the way the National Airspace System (NAS) is operating. According to FAA, CPDLC represents the first phase of the transition from the current analog voice system to an International Civil Aviation Organization (ICAO) compliant system in which digital communication becomes the alternate and perhaps primary method of routine communication. The CPDLC application is an Air Traffic Service (ATS) application in which pilots and controllers exchange messages via an addressed data link. CPDLC includes a set of clearance, information, and request message elements that correspond to existing phraseology employed by current Air Traffic Control (ATC) procedures. These message elements encompass altitude assignments, crossing constraints, lateral deviations, route changes and clearances, speed assignments, radio frequency assignments, and various requests for information. The pilot is provided with the capability to respond to messages, to request clearances and information, to report information, and to declare/rescind an emergency. A 'free text' capability is also provided to exchange information not conforming to defined formats. This paper presents simulated results of the aeronautical telecommunication application Controller Pilot Data Link Communications over VHF Digital Link Mode 3 (VDL Mode 3). The objective of this simulation study was to determine the impact of CPDLC traffic loads, in terms of timely message delivery and capacity of the VDL Mode 3 subnetwork. The traffic model is based on and is used for generating air/ground messages with different priorities. Communication is modeled for the en route domain of the Cleveland Center air traffic (ZOB ARTCC).
A Typology for Modeling Processes in Clinical Guidelines and Protocols
NASA Astrophysics Data System (ADS)
Tu, Samson W.; Musen, Mark A.
We analyzed the graphical representations that are used by various guideline-modeling methods to express process information embodied in clinical guidelines and protocols. From this analysis, we distilled four modeling formalisms and the processes they typically model: (1) flowcharts for capturing problem-solving processes, (2) disease-state maps that link decision points in managing patient problems over time, (3) plans that specify sequences of activities that contribute toward a goal, and (4) workflow specifications that model care processes in an organization. We characterized the four approaches and showed that each captures some aspect of what a guideline may specify. We believe that a general guideline-modeling system must provide explicit representation for each type of process.
Linking stressors and ecological responses
Gentile, J.H.; Solomon, K.R.; Butcher, J.B.; Harrass, M.; Landis, W.G.; Power, M.; Rattner, B.A.; Warren-Hicks, W.J.; Wenger, R.; Foran, Jeffery A.; Ferenc, Susan A.
1999-01-01
To characterize risk, it is necessary to quantify the linkages and interactions between chemical, physical and biological stressors and endpoints in the conceptual framework for ecological risk assessment (ERA). This can present challenges in a multiple-stressor analysis, and it will not always be possible to develop a quantitative stressor-response profile. This review commences with a conceptual representation of the problem of developing a linkage analysis for multiple stressors and responses. The remainder of the review surveys a variety of mathematical and statistical methods (e.g., ranking methods, matrix models, multivariate dose-response for mixtures, indices, visualization, simulation modeling and decision-oriented methods) for accomplishing the linkage analysis for multiple stressors. Describing the relationships between multiple stressors and ecological effects is a critical component of 'effects assessment' in the ecological risk assessment framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teng, Chunlin; Xiao, Hanxi; Cai, Qing
Two new 3D network organic-inorganic hybrid supramolecular complexes, ([Na₆(CoEDTA)₂(H₂O)₁₃]·(H₂SiW₁₂O₄₀)·xH₂O)n (1) and [CoH₄EDTA(H₂O)]₂(SiW₁₂O₄₀)·15H₂O (2) (H₄EDTA = ethylenediaminetetraacetic acid), have been successfully synthesized by the solution method and characterized by infrared spectroscopy (IR), thermogravimetric-differential thermal analysis (TG-DTA), cyclic voltammetry (CV) and single-crystal X-ray diffraction (XRD). Both complexes are supramolecules, but with different linking modes, and they represent two representative models of supramolecule. Complex (1) is a 3D infinite network supramolecular coordination polymer with a rare multi-metal sodium-cobalt-containing structure, which is mainly linked through coordinate-covalent bonds. Complex (2) is a normal supramolecule, linked by non-covalent interactions such as H-bonding, electrostatic interactions and van der Waals forces. Both complex (1) and complex (2) exhibit good catalytic activities for the catalytic oxidation of methanol: when the initial concentration of methanol is 3.0 g m⁻³, the flow rate is 10 mL min⁻¹, and the mass of catalyst is 0.2 g, the maximum elimination rates of methanol are 85% (150 °C) for complex (1) and 92% (120 °C) for complex (2). - Graphical abstract: Two new organic-inorganic hybrid supramolecular complexes based on Co-EDTA and Keggin polyanions have been successfully synthesized at different pH values by the solution method. They represent two representative models of supramolecule. Complex (1) is an infinite coordination polymer with a rare multi-metal sodium-cobalt-containing structure, which is mainly linked through covalent bonds. Complex (2) is a normal supramolecule, linked by non-covalent interactions (H-bonding, electrostatic interactions and van der Waals forces). - Highlights: • Two supramolecules are linked by covalent or non-covalent interactions. • They represent two representative models of supramolecule. • A rare multi-metal infinite supramolecular coordination polymer was formed. • They exhibit good catalytic activities for the catalytic oxidation of methanol.
Literature-based concept profiles for gene annotation: the issue of weighting.
Jelier, Rob; Schuemie, Martijn J; Roes, Peter-Jan; van Mulligen, Erik M; Kors, Jan A
2008-05-01
Text-mining has been used to link biomedical concepts, such as genes or biological processes, to each other for annotation purposes or the generation of new hypotheses. To relate two concepts to each other several authors have used the vector space model, as vectors can be compared efficiently and transparently. Using this model, a concept is characterized by a list of associated concepts, together with weights that indicate the strength of the association. The associated concepts in the vectors and their weights are derived from a set of documents linked to the concept of interest. An important issue with this approach is the determination of the weights of the associated concepts. Various schemes have been proposed to determine these weights, but no comparative studies of the different approaches are available. Here we compare several weighting approaches in a large scale classification experiment. Three different techniques were evaluated: (1) weighting based on averaging, an empirical approach; (2) the log likelihood ratio, a test-based measure; (3) the uncertainty coefficient, an information-theory based measure. The weighting schemes were applied in a system that annotates genes with Gene Ontology codes. As the gold standard for our study we used the annotations provided by the Gene Ontology Annotation project. Classification performance was evaluated by means of the receiver operating characteristics (ROC) curve using the area under the curve (AUC) as the measure of performance. All methods performed well with median AUC scores greater than 0.84, and scored considerably higher than a binary approach without any weighting. Especially for the more specific Gene Ontology codes excellent performance was observed. The differences between the methods were small when considering the whole experiment. However, the number of documents that were linked to a concept proved to be an important variable. When larger amounts of texts were available for the generation of the concepts' vectors, the performance of the methods diverged considerably, with the uncertainty coefficient then outperforming the two other methods.
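One of the compared schemes, the log likelihood ratio, can be written down directly for the 2x2 table of document counts relating a gene to an associated concept. The counts below are made up; the function implements the standard Dunning G² statistic as an example of how an association weight can be derived from document co-occurrence counts.

```python
import math

def g2(k11, k12, k21, k22):
    """Log likelihood ratio (Dunning G^2) for a 2x2 contingency table."""
    def h(*counts):
        total = sum(counts)
        return sum(c * math.log(c / total) for c in counts if c > 0)
    return 2.0 * (h(k11, k12, k21, k22) - h(k11 + k12, k21 + k22)
                  - h(k11 + k21, k12 + k22) + h(k11 + k12 + k21 + k22))

# k11: documents mentioning both gene and concept, k12: gene only,
# k21: concept only, k22: neither (within the collection); made-up counts
weight = g2(40, 60, 200, 9700)
print(f"log likelihood ratio weight: {weight:.1f}")
```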
Li, Haitao; Ning, Xin; Li, Wenzhuo
2017-03-01
In order to improve the reliability and reduce the power consumption of a high speed BLDC motor system, this paper presents a model free adaptive control (MFAC) based position sensorless drive with only a dc-link current sensor. The initial commutation points are obtained by detecting the phase of the EMF zero-crossing point and then delaying it by 30 electrical degrees. To address the commutation error caused by the low pass filter (LPF) and other factors, the relationship between the commutation error angle and the dc-link current is analyzed, a corresponding MFAC based control method is proposed, and the commutation error is corrected by the controller in real time. Both the simulation and experimental results show that the proposed correction method can achieve the ideal commutation effect over the entire operating speed range. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
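The abstract does not spell out the MFAC law it uses. A minimal sketch of the standard compact-form MFAC update, applied to a made-up first-order model of the commutation error, might look like the following; the gains, the plant, and its -0.5 input gain are all assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, mu = 0.5, 1.0   # pseudo-partial-derivative (PPD) estimator gain and regularizer
rho, lam = 0.6, 0.1  # controller gain and regularizer

def plant(y_prev, u_prev):
    # Hypothetical stand-in plant: the measured commutation error decays on its
    # own and is reduced further by the correction angle u (input gain -0.5).
    return 0.8 * y_prev - 0.5 * u_prev + 0.02 * rng.normal()

y, u, u_prev = 2.0, 0.0, 0.0   # initial error (elec. deg.), correction input
phi = -1.0                     # PPD initialized with the known control direction
for k in range(200):
    y_new = plant(y, u)
    dy, du = y_new - y, u - u_prev
    # compact-form MFAC: update the PPD estimate, then the control input
    phi = phi + eta * du / (mu + du ** 2) * (dy - phi * du)
    u_new = u + rho * phi / (lam + phi ** 2) * (0.0 - y_new)
    u_prev, u, y = u, u_new, y_new
print("final commutation error estimate:", round(y, 4))
```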
Effect of different methods of pulse width modulation on power losses in an induction motor
NASA Astrophysics Data System (ADS)
Gulyaev, Alexander; Fokin, Dmitrii; Shuharev, Sergey; Ten, Evgenii
2017-10-01
We consider the calculation of modulation power losses in a system "induction motor-inverter" for various pulse width modulation (PWM) methods of the supply voltage. Presented values of modulation power losses are the result of modeling a system "DC link - two-level three-phase voltage inverter - induction motor - load". In this study the power losses in a system "induction motor - inverter" are computed, as well as losses caused by higher harmonics of the PWM supply voltage, followed by determination of the active power consumed from the DC link for a specified value of mechanical power on the induction motor shaft. Mechanical power was determined by the rotation speed and the torque on the motor shaft in various quasi-sinusoidal supply voltage PWM modes. These calculations reveal which PWM mode yields the best coefficient of performance (COP) for a variable frequency drive (VFD) with an independent voltage inverter supplying the induction motor.
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
Gamberg, Leonard; Schlegel, Marc
2010-01-18
In the factorized picture of semi-inclusive hadronic processes the naive time reversal-odd parton distributions exist by virtue of the gauge link which renders them color gauge invariant. The link characterizes the dynamical effect of initial/final-state interactions of the active parton due to soft gluon exchanges with the target remnant. Though these interactions are non-perturbative, studies of final-state interactions have been approximated by the perturbative one-gluon approximation in Abelian models. We include higher-order contributions by applying non-perturbative eikonal methods incorporating color degrees of freedom in a calculation of the Boer-Mulders function of the pion. Lastly, using this framework we explore under what conditions the Boer-Mulders function can be described in terms of factorization of final state interactions and a spatial distribution in impact parameter space.
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydrodynamics (YHD) are selected. Then a hybrid model based on the improved FLNN integrating with partial least square (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least square method rather than the error back propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
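The exact SNEWHIOC-FLNN construction is not given in the abstract; the sketch below illustrates the general recipe under stated assumptions: a trigonometric functional-link expansion attached to the original inputs with a small weight, correlation-based selection of expanded variables, and consequent weights fitted by partial least squares (scikit-learn's PLSRegression) instead of backpropagation. The data are synthetic and the 0.1 correlation threshold is arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a UCI-style regression set: 200 samples, 5 inputs.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

def functional_link_expand(X, scale=0.1):
    """Each expanded variable is the original input plus a small-weighted
    trigonometric term (one reading of the 'small norm of expanded weights',
    not the paper's exact construction)."""
    expanded = [X]
    for k in (1, 2):
        expanded.append(X + scale * np.sin(k * np.pi * X))
        expanded.append(X + scale * np.cos(k * np.pi * X))
    return np.hstack(expanded)

Z = functional_link_expand(X)

# Keep only expanded variables that correlate with the output.
corr = np.array([abs(np.corrcoef(Z[:, j], y)[0, 1]) for j in range(Z.shape[1])])
selected = Z[:, corr > 0.1]

# Consequent weights via partial least squares rather than backpropagation.
pls = PLSRegression(n_components=min(5, selected.shape[1])).fit(selected, y)
pred = pls.predict(selected).ravel()
print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 3))
```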
Tseng, Zhijie Jack; Flynn, John J.
2015-01-01
Performance of the masticatory system directly influences feeding and survival, so adaptive hypotheses often are proposed to explain craniodental evolution via functional morphology changes. However, the prevalence of “many-to-one” association of cranial forms and functions in vertebrates suggests a complex interplay of ecological and evolutionary histories, resulting in redundant morphology-diet linkages. Here we examine the link between cranial biomechanical properties for taxa with different dietary preferences in crown clade Carnivora, the most diverse clade of carnivorous mammals. We test whether hypercarnivores and generalists can be distinguished based on cranial mechanical simulation models, and how such diet-biomechanics linkages relate to morphology. Comparative finite element and geometric morphometrics analyses document that predicted bite force is positively allometric relative to skull strain energy; this is achieved in part by increased stiffness in larger skull models and shape changes that resist deformation and displacement. Size-standardized strain energy levels do not reflect feeding preferences; instead, caniform models have higher strain energy than feliform models. This caniform-feliform split is reinforced by a sensitivity analysis using published models for six additional taxa. Nevertheless, combined bite force-strain energy curves distinguish hypercarnivorous versus generalist feeders. These findings indicate that the link between cranial biomechanical properties and carnivoran feeding preference can be clearly defined and characterized, despite phylogenetic and allometric effects. Application of this diet-biomechanics linkage model to an analysis of an extinct stem carnivoramorphan and an outgroup creodont species provides biomechanical evidence for the evolution of taxa into distinct hypercarnivorous and generalist feeding styles prior to the appearance of crown carnivoran clades with similar feeding preferences. PMID:25923776
Atomistic to continuum modeling of solidification microstructures
Karma, Alain; Tourret, Damien
2015-09-26
We summarize recent advances in modeling of solidification microstructures using computational methods that bridge atomistic to continuum scales. We first discuss progress in atomistic modeling of equilibrium and non-equilibrium solid–liquid interface properties influencing microstructure formation, as well as interface coalescence phenomena influencing the late stages of solidification. The latter is relevant in the context of hot tearing reviewed in the article by M. Rappaz in this issue. We then discuss progress to model microstructures on a continuum scale using phase-field methods. We focus on selected examples in which modeling of 3D cellular and dendritic microstructures has been directly linked to experimental observations. Finally, we discuss a recently introduced coarse-grained dendritic needle network approach to simulate the formation of well-developed dendritic microstructures. The approach reliably bridges the well-separated scales traditionally simulated by phase-field and grain structure models, hence opening new avenues for quantitative modeling of complex intra- and inter-grain dynamical interactions on a grain scale.
Development of 3-D lithostratigraphic and confidence models at Yucca Mountain, Nevada
Buesch, D.C.; Nelson, J.E.; Dickerson, R.P.; Spengler, R.W.
1993-01-01
Computerized three-dimensional geologic models of potential high-level nuclear waste repositories such as Yucca Mountain, Nevada, are important for visualizing the complex interplay of (1) thickness and facies variations in lithostratigraphic units and (2) the disruption of these units by faults. The concept of a 'model of confidence' in the lithostratigraphic model is introduced to show where data are located versus regions where interpolations are included. Models of confidence can be based on (1) expert judgment, (2) geostatistical analysis, or (3) a simplified combination of these two methods. Linking of lithostratigraphic models and models of confidence provides guidelines for future characterization and modeling activities, as well as for design and construction of the Exploratory Studies Facility.
Social Relationships, Leisure Activity, and Health in Older Adults
Chang, Po-Ju; Wray, Linda; Lin, Yeqiang
2015-01-01
Objective Although the link between enhanced social relationships and better health has generally been well established, few studies have examined the role of leisure activity in this link. This study examined how leisure influences the link between social relationships and health in older age. Methods Using data from the 2006 and 2010 waves of the nationally representative U.S. Health and Retirement Study and structural equation modelling analyses, we examined data on 2,965 older participants to determine if leisure activities mediated the link between social relationships and health in 2010, controlling for race, education level, and health in 2006. Results The results demonstrated that leisure activities mediate the link between social relationships and health in these age groups. Perceptions of positive social relationships were associated with greater involvement in leisure activities, and greater involvement in leisure activities was associated with better health in older age. Discussion & Conclusions The contribution of leisure to health in these age groups is receiving increasing attention, and the results of this study add to the literature on this topic, by identifying the mediating effect of leisure activity on the link between social relationships and health. Future studies aimed at increasing leisure activity may contribute to improved health outcomes in older adults. PMID:24884905
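A toy illustration of the mediation logic (not the paper's structural equation models): estimate the a and b paths by ordinary least squares on simulated data and bootstrap the indirect effect a*b. All coefficients, the sample, and the variable names are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated mediation chain: social relationships -> leisure activity -> health.
n = 2965
social = rng.normal(size=n)
leisure = 0.4 * social + rng.normal(size=n)                   # path a
health = 0.3 * leisure + 0.1 * social + rng.normal(size=n)    # paths b and c'

def ols_slope(x, y, covar=None):
    cols = [np.ones_like(x), x] if covar is None else [np.ones_like(x), x, covar]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[1]

a = ols_slope(social, leisure)
b = ols_slope(leisure, health, covar=social)
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(ols_slope(social[idx], leisure[idx]) *
                ols_slope(leisure[idx], health[idx], covar=social[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {a * b:.3f}, 95% bootstrap CI [{low:.3f}, {high:.3f}]")
```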
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
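A compact sketch of the two transformed-outcome models compared above, run on synthetic skewed cost data: OLS on sqrt(cost) versus OLS on log(cost), scored by RMSE and mean absolute prediction error. The GLM variants (Gamma with log or square-root link) are omitted here, no retransformation correction is applied to the log model, and the covariates are invented stand-ins for the risk adjusters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic skewed cost data with a few binary risk adjusters.
n = 5000
X = np.column_stack([np.ones(n), rng.integers(0, 2, size=(n, 4))])
true_beta = np.array([6.0, 0.6, 0.3, 0.8, 0.4])
cost = np.exp(X @ true_beta + rng.normal(scale=1.0, size=n))   # long right tail

def fit_transformed(X, cost, transform, inverse):
    beta, *_ = np.linalg.lstsq(X, transform(cost), rcond=None)
    return inverse(X @ beta)

# Square-root Normal model: OLS on sqrt(cost), back-transform by squaring.
pred_sqrt = fit_transformed(X, cost, np.sqrt, lambda z: z ** 2)
# Log Normal model: OLS on log(cost); naive back-transform shown here.
pred_log = fit_transformed(X, cost, np.log, np.exp)

for name, pred in [("sqrt-normal", pred_sqrt), ("log-normal", pred_log)]:
    rmse = np.sqrt(np.mean((cost - pred) ** 2))
    mape = np.mean(np.abs(cost - pred))   # mean absolute prediction error
    print(f"{name}: RMSE={rmse:.1f}  MAPE={mape:.1f}")
```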
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekar, Venkateswaran; Fiondella, Lance; Chatterjee, Samrat
Transportation networks are critical to the social and economic function of nations. Given the continuing increase in the populations of cities throughout the world, the criticality of transportation infrastructure is expected to increase. Thus, it is ever more important to mitigate congestion as well as to assess the impact disruptions would have on individuals who depend on transportation for their work and livelihood. Moreover, several government organizations are responsible for ensuring transportation networks are available despite the constant threat of natural disasters and terrorist activities. Most of the previous transportation network vulnerability research has been performed in the context of static traffic models, many of which are formulated as traditional optimization problems. However, transportation networks are dynamic because their usage varies over time. Thus, more appropriate methods to characterize the vulnerability of transportation networks should consider their dynamic properties. This paper presents a quantitative approach to assess the vulnerability of a transportation network to disruptions with methods from traffic simulation. Our approach can prioritize the critical links over time and is generalizable to the case where both link and node disruptions are of concern. We illustrate the approach through a series of examples. Our results demonstrate that the approach provides quantitative insight into the time varying criticality of links. Such an approach could be used as the objective function of less traditional optimization methods that use simulation and other techniques to evaluate the relative utility of a particular network defense to reduce vulnerability and increase resilience.
NASA Astrophysics Data System (ADS)
Kemp, C.; Car, N. J.
2016-12-01
Geoscience Australia (GA) is a government agency that provides advice on the geology and geography of Australia. It is the custodian of many digital and physical datasets of national significance. For several years GA has been implementing an enterprise approach to provenance management. The goal is transparency and reproducibility for all of GA's information products, an objective supported at the highest levels and explicitly listed in its Science Principles. Currently GA is finalising a set of enterprise tools to assist with provenance management and rolling out provenance reporting to different science areas. GA has adopted or developed: provenance storage systems; provenance collection code libraries (for use within automated systems); reporting interfaces (for manual use); and provenance representation capability within legacy catalogues. Using these tools within GA's science areas involves modelling the scenario first and then assessing whether the area has its data managed in such a way that allows links to data within provenance to be resolvable in perpetuity. We don't just want to represent provenance (demonstrating transparency); we want to access data via provenance (allowing for reproducibility). A subtask of GA's current work is to link physical samples to information products (datasets, reports, papers) by uniquely and persistently identifying samples using International GeoSample Numbers and then modelling automated & manual laboratory workflows and associated tasks, such as data delivery to corporate databases, using the W3C's PROV Data Model. We use PROV DM throughout our modelling and systems. We are also moving to deliver all sample and digital dataset metadata across the agency in the Web Ontology Language (OWL) and to expose it via Linked Data methods in order to allow Semantic Web querying of multiple systems, allowing provenance to be leveraged through a single method and query point. Through the Science First Transformation Program GA is undergoing a significant rethinking of its data architecture, curation and access to support the Digital Science capability, for which provenance management is an output.
Morphogenesis of early stage melanoma
NASA Astrophysics Data System (ADS)
Chatelain, Clément; Amar, Martine Ben
2015-08-01
Melanoma early detection is possible by simple skin examination and can ensure a high survival probability when successful. However it requires efficient methods for distinguishing malignant lesions from common moles. This paper provides an overview first of the biological and physical mechanisms controlling melanoma early evolution, and then of the clinical tools available today for detecting melanoma in vivo at an early stage. It highlights the lack of diagnosis methods rationally linking macroscopic observables to the microscopic properties of the tissue, which define the malignancy of the tumor. The possible inputs of multiscale models for improving these methods are briefly discussed.
Bartrolí, J; Martin, M J; Rigola, M
2001-10-16
The great complexity of the nitrogen cycle, including anthropogenic contributions, makes it necessary to carry out local studies, which allow us to identify the specific cause-effect links in a particular society. Models of local societies that are based on methods such as Substance Flow Analysis (SFA), which study and characterise the performance of metabolic exchanges between human society and the environment, are useful tools for directing local policy towards sustainable management of the nitrogen cycle. In this paper, the selection of geographical boundaries for SFA application is discussed. Data availability and accuracy, and the possibility of linking the results with instructions for decision making, are critical aspects for proper scale selection. The experience obtained in the construction of the model for Catalonia is used to draw attention to the difficulties found in regional studies.
NASA Technical Reports Server (NTRS)
Kwon, Dong-Soo
1991-01-01
All research results about flexible manipulator control were integrated to show a control scenario of a bracing manipulator. First, dynamic analysis of a flexible manipulator was done for modeling. Second, from the dynamic model, the inverse dynamic equation was derived, and the time domain inverse dynamic method was proposed for the calculation of the feedforward torque and the desired flexible coordinate trajectories. Third, a tracking controller was designed by combining the inverse dynamic feedforward control with the joint feedback control. The control scheme was applied to the tip position control of a single link flexible manipulator for zero and non-zero initial condition cases. Finally, the contact control scheme was added to the position tracking control. A control scenario of a bracing manipulator is provided and evaluated through simulation and experiment on a single link flexible manipulator.
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
ERIC Educational Resources Information Center
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
A Unified Approach to IRT Scale Linking and Scale Transformations. Research Report. RR-04-09
ERIC Educational Resources Information Center
von Davier, Matthias; von Davier, Alina A.
2004-01-01
This paper examines item response theory (IRT) scale transformations and IRT scale linking methods used in the Non-Equivalent Groups with Anchor Test (NEAT) design to equate two tests, X and Y. It proposes a unifying approach to the commonly used IRT linking methods: mean-mean, mean-var linking, concurrent calibration, Stocking and Lord and…
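For readers unfamiliar with the linking methods named above, here is a small numerical sketch of the mean/sigma and mean/mean linking coefficients, with made-up anchor-item parameters; scale X stands for the new form and scale Y for the base form.

```python
import numpy as np

# Item discriminations (a) and difficulties (b) for common anchor items on the
# new form (scale X) and the base form (scale Y); values are illustrative only.
a_X = np.array([0.9, 1.2, 0.8, 1.5, 1.1])
b_X = np.array([-1.0, 0.2, 0.8, -0.3, 1.4])
a_Y = np.array([1.0, 1.3, 0.9, 1.6, 1.2])
b_Y = np.array([-0.6, 0.5, 1.1, 0.0, 1.7])

# Mean/sigma linking: theta_Y = A * theta_X + B, so difficulties obey b_Y ~ A * b_X + B.
A_ms = b_Y.std(ddof=1) / b_X.std(ddof=1)
B_ms = b_Y.mean() - A_ms * b_X.mean()

# Mean/mean linking uses the discriminations for the slope instead.
A_mm = a_X.mean() / a_Y.mean()
B_mm = b_Y.mean() - A_mm * b_X.mean()

print(f"mean/sigma: A={A_ms:.3f}, B={B_ms:.3f}")
print(f"mean/mean : A={A_mm:.3f}, B={B_mm:.3f}")
# Anchor parameters transformed onto the base scale (mean/sigma shown):
print("b_X on scale Y:", np.round(A_ms * b_X + B_ms, 3))
print("a_X on scale Y:", np.round(a_X / A_ms, 3))
```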
Efficient network disintegration under incomplete information: the comic effect of link prediction
NASA Astrophysics Data System (ADS)
Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin
2016-03-01
The study of network disintegration has attracted much attention due to its wide applications, including suppressing the epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling the rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Besides, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized.
Efficient network disintegration under incomplete information: the comic effect of link prediction.
Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin
2016-03-10
The study of network disintegration has attracted much attention due to its wide applications, including suppressing the epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling the rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Besides, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the "comic effect" of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized.
Efficient network disintegration under incomplete information: the comic effect of link prediction
Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin
2016-01-01
The study of network disintegration has attracted much attention due to its wide applications, including suppressing the epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling the rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments in both synthetic and real networks suggest that, by using a link prediction method to recover partial missing links in advance, the method can largely improve the network disintegration performance. Besides, to our surprise, we find that when the size of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized. PMID:26960247
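A toy reconstruction of the overall idea (not the paper's algorithm or data): hide a fraction of links, recover candidate links with networkx's resource-allocation index, and compare an attack ranked on the observed network against one ranked on the link-prediction-reshaped network. All sizes and thresholds are illustrative.

```python
import random
import networkx as nx

random.seed(0)
true_net = nx.barabasi_albert_graph(300, 3, seed=0)

# Hide 10% of the links to mimic incomplete information.
observed = true_net.copy()
hidden = random.sample(list(observed.edges()), k=int(0.1 * observed.number_of_edges()))
observed.remove_edges_from(hidden)

# Predict missing links among unconnected pairs and add the top-scored ones.
scores = nx.resource_allocation_index(observed)
top = sorted(scores, key=lambda t: t[2], reverse=True)[:20]
reshaped = observed.copy()
reshaped.add_edges_from([(u, v) for u, v, _ in top])

def giant_after_attack(graph_for_ranking, k=5):
    """Rank critical nodes on `graph_for_ranking` (by degree), remove them from
    the TRUE network, and report the remaining giant-component fraction."""
    ranked = sorted(graph_for_ranking.degree, key=lambda t: t[1], reverse=True)
    targets = [n for n, _ in ranked[:k]]
    damaged = true_net.copy()
    damaged.remove_nodes_from(targets)
    return max(len(c) for c in nx.connected_components(damaged)) / true_net.number_of_nodes()

print("giant component after attack (observed ranking):", round(giant_after_attack(observed), 3))
print("giant component after attack (reshaped ranking):", round(giant_after_attack(reshaped), 3))
```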
Multi-locus analysis of genomic time series data from experimental evolution.
Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S
2015-04-01
Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
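For orientation, here is a toy generator of the kind of data such a method consumes: a single-locus Wright-Fisher trajectory with selection, sampled as pooled read counts at a few time points per replicate. This only mimics E&R data; it is not the multi-locus Gaussian-process inference described in the abstract, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def wright_fisher_trajectory(p0=0.05, s=0.1, N=500, generations=60,
                             sample_every=10, depth=80):
    """Toy haploid Wright-Fisher trajectory with selection coefficient s,
    returning simulated pooled-sequencing allele counts (coverage `depth`)
    every `sample_every` generations."""
    p = p0
    samples = []
    for g in range(generations + 1):
        if g % sample_every == 0:
            samples.append(int(rng.binomial(depth, p)))
        w_bar = p * (1 + s) + (1 - p)
        p_sel = p * (1 + s) / w_bar        # deterministic selection step
        p = rng.binomial(N, p_sel) / N     # binomial drift step
    return samples

for r in range(1, 4):
    print(f"replicate {r}: selected-allele read counts over time =",
          wright_fisher_trajectory())
```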
Multiple system modelling of waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eriksson, Ola, E-mail: ola.eriksson@hig.se; Department of Building, Energy and Environmental Engineering, University of Gaevle, SE 801 76 Gaevle; Bisaillon, Mattias, E-mail: mattias.bisaillon@profu.se
2011-12-15
Highlights: • Linking of models will provide a more complete, correct and credible picture of the systems. • The linking procedure is easy to perform and also leads to activation of project partners. • The simulation procedure is a bit more complicated and calls for the ability to run both models. - Abstract: Due to increased environmental awareness, planning and performance of waste management has become more and more complex. Therefore, waste management has from early on been subject to different types of modelling. Another field with long experience of modelling and a systems perspective is energy systems. The two modelling traditions have developed side by side, but so far there are very few attempts to combine them. Waste management systems can be linked together with energy systems through incineration plants. The models for waste management can be modelled on a quite detailed level whereas surrounding systems are modelled in a more simplistic way. This is a problem, as previous studies have shown that assumptions on the surrounding system often tend to be important for the conclusions. In this paper it is shown how two models, one for the district heating system (MARTES) and another one for the waste management system (ORWARE), can be linked together. The strengths and weaknesses of model linking are discussed when compared to simplistic assumptions on effects in the energy and waste management systems. It is concluded that the linking of models will provide a more complete, correct and credible picture of the consequences of different simultaneous changes in the systems. The linking procedure is easy to perform and also leads to activation of project partners. However, the simulation procedure is a bit more complicated and calls for the ability to run both models.
Kouyi, G Lipeme; Fraisse, D; Rivière, N; Guinot, V; Chocat, B
2009-01-01
Many investigations have been carried out in order to develop models which allow the linking of complex physical processes involved in urban flooding. The modelling of the interactions between overland flows on streets and flooding flows from rivers and sewer networks is one of the main objectives of recent and current research programs in hydraulics and urban hydrology. This paper outlines the original one-dimensional linking of heavy rainfall-runoff in urban areas and flooding flows from rivers and sewer networks under the RIVES project framework (Estimation of Scenario and Risks of Urban Floods). The first part of the paper highlights the capacity of Canoe software to simulate the street flows. In the second part, we show the original method of connection which enables the modelling of interactions between processes in urban flooding. Comparisons between simulated results and the results of Despotovic et al. or Gomez & Mur show a good agreement for the calibrated one-dimensional connection model. The connection operates like a manhole with the orifice/weir coefficients used as calibration parameters. The influence of flooding flows from the river was taken into account as a variable water depth boundary condition.
Bandyopadhyay, Pradipta; Kuntz, Irwin D
2009-01-01
The determination of protein structure using distance constraints is a new and promising field of study. One implementation involves attaching residues of a protein using a cross-linking agent, followed by protease digestion, analysis of the resulting peptides by mass spectroscopy, and finally sequence threading to detect the protein folds. In the present work, we carry out computational modeling of the kinetics of cross-linking reactions in proteins using the master equation approach. The rate constants of the cross-linking reactions are estimated using the pKas and the solvent-accessible surface areas of the residues involved. This model is tested with fibroblast growth factor (FGF) and cytochrome C. It is consistent with the initial experimental rate data for individual lysine residues for cytochrome C. Our model captures all observed cross-links for FGF and almost 90% of the observed cross-links for cytochrome C, although it also predicts cross-links that were not observed experimentally (false positives). However, the analysis of the false positive results is complicated by the fact that experimental detection of cross-links can be difficult and may depend on specific experimental conditions such as pH and ionic strength. Receiver operating characteristic plots showed that our model does a good job of predicting the observed cross-links. Molecular dynamics simulations showed that for cytochrome C, in general, the two lysines come closer for the observed cross-links as compared to the false positive ones. For FGF, no such clear pattern exists. The kinetic model and MD simulation can be used to study proposed cross-linking protocols.
Sun, Hokeun; Wang, Shuang
2013-05-30
The matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies especially epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed the penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for popularly applied matched designs in epigenetic studies that compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression ignoring matching is known to bring serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of using conditional logistic model over unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC) where we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients by using the Illumina Infinium HumanMethylation27 Beadchip. Several new CpG sites and genes known to be related to HCC were identified but were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
Negriff, Sonya; Brensilver, Matthew; Trickett, Penelope K.
2015-01-01
Purpose To test models linking pubertal timing, peer substance use, sexual behavior, and substance use for maltreated versus comparison adolescents. Three theoretical mechanisms were tested: 1) peer influence links early pubertal timing to later sexual behavior and substance use, 2) early maturers engage in substance use on their own and then select substance-using friends, or 3) early maturers initiate sexual behaviors which leads them to substance-using peers. Methods The data came from a longitudinal study of the effects of child maltreatment on adolescent development (303 maltreated and 151 comparison adolescents; age: 9–13 years at initial wave). Multiple-group structural equation models tested the hypotheses across three timepoints including variables of pubertal timing, perception of peer substance use, sexual behavior, and self-reported substance use. Results Early pubertal timing was associated with substance-using peers only for maltreated adolescents, indicating the mediation path from early pubertal timing through substance-using peers to subsequent adolescent substance use and sexual behavior only holds for maltreated adolescents. Mediation via sexual behavior was significant for both maltreated and comparison adolescents. This indicates that sexual behavior may be a more universal mechanism linking early maturation with risky friends regardless of adverse life experiences. Conclusions The findings are a step toward elucidating the developmental pathways from early puberty to risk behavior and identifying early experiences that may alter mediation effects. PMID:26003577
26 CFR 1.472-8 - Dollar-value method of pricing LIFO inventories.
Code of Federal Regulations, 2014 CFR
2014-04-01
... “link-chain” method will be approved for taxable years beginning after December 31, 1960, only in those... nature of the pool. A taxpayer using either an index or link-chain method shall attach to his income tax return for the first taxable year beginning after December 31, 1960, for which the index or link-chain...
26 CFR 1.472-8 - Dollar-value method of pricing LIFO inventories.
Code of Federal Regulations, 2013 CFR
2013-04-01
... “link-chain” method will be approved for taxable years beginning after December 31, 1960, only in those... nature of the pool. A taxpayer using either an index or link-chain method shall attach to his income tax return for the first taxable year beginning after December 31, 1960, for which the index or link-chain...
26 CFR 1.472-8 - Dollar-value method of pricing LIFO inventories.
Code of Federal Regulations, 2011 CFR
2011-04-01
... “link-chain” method will be approved for taxable years beginning after December 31, 1960, only in those... nature of the pool. A taxpayer using either an index or link-chain method shall attach to his income tax return for the first taxable year beginning after December 31, 1960, for which the index or link-chain...
26 CFR 1.472-8 - Dollar-value method of pricing LIFO inventories.
Code of Federal Regulations, 2010 CFR
2010-04-01
... “link-chain” method will be approved for taxable years beginning after December 31, 1960, only in those... nature of the pool. A taxpayer using either an index or link-chain method shall attach to his income tax return for the first taxable year beginning after December 31, 1960, for which the index or link-chain...
26 CFR 1.472-8 - Dollar-value method of pricing LIFO inventories.
Code of Federal Regulations, 2012 CFR
2012-04-01
... “link-chain” method will be approved for taxable years beginning after December 31, 1960, only in those... nature of the pool. A taxpayer using either an index or link-chain method shall attach to his income tax return for the first taxable year beginning after December 31, 1960, for which the index or link-chain...
Stanton, Neville A; Harvey, Catherine
2017-02-01
Risk assessments in Sociotechnical Systems (STS) tend to be based on error taxonomies, yet the term 'human error' does not sit easily with STS theories and concepts. A new break-link approach was proposed as an alternative risk assessment paradigm to reveal the effect of information communication failures between agents and tasks on the entire STS. A case study of the training of a Royal Navy crew detecting a low flying Hawk (simulating a sea-skimming missile) is presented using EAST to model the Hawk-Frigate STS in terms of social, information and task networks. By breaking 19 social links and 12 task links, 137 potential risks were identified. Discoveries included revealing the effect of risk moving around the system; reducing the risks to the Hawk increased the risks to the Frigate. Future research should examine the effects of compounded information communication failures on STS performance. Practitioner Summary: The paper presents a step-by-step walk-through of EAST to show how it can be used for risk assessment in sociotechnical systems. The 'broken-links' method takes a systemic, rather than taxonomic, approach to identify information communication failures in social and task networks.
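A minimal sketch of the broken-links idea on a made-up agent/task example: break each information link in turn and record which task inputs become unreachable. The agents, tasks and network are hypothetical and far simpler than an EAST social/information/task model.

```python
import networkx as nx

# Hypothetical social network of agents passing information needed by tasks.
social = nx.DiGraph()
social.add_edges_from([
    ("radar_operator", "picture_compiler"),
    ("picture_compiler", "warfare_officer"),
    ("warfare_officer", "weapons_director"),
])
# Each task needs information to flow from a source agent to a destination agent.
task_needs = {
    "classify_track": ("radar_operator", "warfare_officer"),
    "engage_target": ("radar_operator", "weapons_director"),
}

# Break each social link in turn and list the tasks whose inputs are blocked.
risks = []
for link in list(social.edges()):
    broken = social.copy()
    broken.remove_edge(*link)
    for task, (src, dst) in task_needs.items():
        if not nx.has_path(broken, src, dst):
            risks.append((link, task))

for (u, v), task in risks:
    print(f"breaking {u} -> {v} blocks information needed for '{task}'")
```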
NASA Astrophysics Data System (ADS)
McIntire, John P.; Osesina, O. Isaac; Bartley, Cecilia; Tudoreanu, M. Eduard; Havig, Paul R.; Geiselman, Eric E.
2012-06-01
Ensuring the proper and effective ways to visualize network data is important for many areas of academia, applied sciences, the military, and the public. Fields such as social network analysis, genetics, biochemistry, intelligence, cybersecurity, neural network modeling, transit systems, communications, etc. often deal with large, complex network datasets that can be difficult to interact with, study, and use. There have been surprisingly few human factors performance studies on the relative effectiveness of different graph drawings or network diagram techniques to convey information to a viewer. This is particularly true for weighted networks which include the strength of connections between nodes, not just information about which nodes are linked to other nodes. We describe a human factors study in which participants performed four separate network analysis tasks (finding a direct link between given nodes, finding an interconnected node between given nodes, estimating link strengths, and estimating the most densely interconnected nodes) on two different network visualizations: an adjacency matrix with a heat-map versus a node-link diagram. The results should help shed light on effective methods of visualizing network data for some representative analysis tasks, with the ultimate goal of improving usability and performance for viewers of network data displays.
NASA Astrophysics Data System (ADS)
Wang, Liping; Jiang, Yao; Li, Tiemin
2014-09-01
Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with the consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with the consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which will be helpful to increase the calculating speed. The accuracy of this machine when following a selected circle path is simulated. The influences of magnitude of the maximum acceleration and external loads on the running accuracy of the machine are investigated. The results show that the external loads will deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of the parallel kinematic machines and can also be used in their design optimization as well as selection of suitable running parameters.
Link Prediction in Evolving Networks Based on Popularity of Nodes.
Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian
2017-08-02
Link prediction aims to uncover the underlying relationship behind networks, which could be utilized to predict missing edges or identify spurious edges. The key issue of link prediction is to estimate the likelihood of potential links in networks. Most classical static-structure based methods ignore the temporal aspects of networks; limited by time-varying features, such approaches perform poorly in evolving networks. In this paper, we propose a hypothesis that the ability of each node to attract links depends not only on its structural importance, but also on its current popularity (activeness), since active nodes are much more likely to attract future links. Then a novel approach named the popularity based structural perturbation method (PBSPM) and its fast algorithm are proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Besides, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes, rather than edges between inactive nodes.
Sentiments Analysis of Reviews Based on ARCNN Model
NASA Astrophysics Data System (ADS)
Xu, Xiaoyu; Xu, Ming; Xu, Jian; Zheng, Ning; Yang, Tao
2017-10-01
Sentiment analysis of product reviews is designed to help customers understand the status of a product. Traditional sentiment analysis relies on the input of a fixed-length feature vector, which is the performance bottleneck of the basic encoder-decoder (codec) architecture. In this paper, we propose an attention mechanism combined with a BRNN-CNN model, referred to as the ARCNN model. To capture the semantic relations between words and to avoid the curse of dimensionality, we use the GloVe algorithm to train the vector representations of words. The ARCNN model is then proposed to handle deep feature training. Specifically, the BRNN handles variable-length inputs and preserves the time-series information, while the CNN captures deeper semantic connections. Moreover, the attention mechanism can automatically learn from the data and optimize the allocation of weights. Finally, a softmax classifier is designed to complete the sentiment classification of reviews. Experiments show that the proposed method can improve the accuracy of sentiment classification compared with benchmark methods.
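A rough PyTorch sketch of an attention-over-BRNN-CNN classifier in the spirit of the ARCNN description; the layer sizes, the exact wiring, and the random token inputs are assumptions, and in practice the embedding layer would be initialized from pretrained GloVe vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ARCNN(nn.Module):
    """Sketch of a BRNN + CNN + attention sentiment classifier (not the paper's code)."""
    def __init__(self, vocab_size, embed_dim=100, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # load GloVe here in practice
        self.brnn = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, hidden, kernel_size=3, padding=1)
        self.attn = nn.Linear(hidden, 1)                      # learned attention scores
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                             # token_ids: (batch, seq_len)
        h, _ = self.brnn(self.embed(token_ids))               # (batch, seq_len, 2*hidden)
        c = torch.relu(self.conv(h.transpose(1, 2))).transpose(1, 2)  # (batch, seq_len, hidden)
        weights = F.softmax(self.attn(c).squeeze(-1), dim=1)  # (batch, seq_len)
        pooled = torch.bmm(weights.unsqueeze(1), c).squeeze(1)  # attention-weighted sum
        return self.out(pooled)                               # logits for softmax classification

model = ARCNN(vocab_size=5000)
logits = model(torch.randint(0, 5000, (8, 40)))   # batch of 8 reviews, 40 tokens each
print(logits.shape)                               # torch.Size([8, 2])
```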
NASA Technical Reports Server (NTRS)
Young, S. Lee
1987-01-01
Intersatellite Link (ISL) applications can improve and expand communication satellite services in a number of ways. As the demand for orbital slots within prime regions of the geostationary arc increases, attention is being focused on ISLs as a method to utilize this resource more efficiently and circumvent saturation. Various GEO-to-GEO applications were determined that provide potential benefits over existing communication systems. A set of criteria was developed to assess the potential applications. Intersatellite link models, network system architectures, and payload configurations were developed. For each of the chosen ISL applications, ISL versus non-ISL satellite system architectures were derived. Both microwave and optical ISL implementation approaches were evaluated for payload sizing and cost analysis. The technological availability for ISL implementations was assessed. Critical subsystem technology areas were identified, and an estimate of the schedule and cost to advance the technology to the required state of readiness was made.
Biotea: semantics for Pubmed Central.
Garcia, Alexander; Lopez, Federico; Garcia, Leyla; Giraldo, Olga; Bucheli, Victor; Dumontier, Michel
2018-01-01
A significant portion of biomedical literature is represented in a manner that makes it difficult for consumers to find or aggregate content through a computational query. One approach to facilitate reuse of the scientific literature is to structure this information as linked data using standardized web technologies. In this paper we present the second version of Biotea, a semantic, linked data version of the open-access subset of PubMed Central that has been enhanced with specialized annotation pipelines that uses existing infrastructure from the National Center for Biomedical Ontology. We expose our models, services, software and datasets. Our infrastructure enables manual and semi-automatic annotation, resulting data are represented as RDF-based linked data and can be readily queried using the SPARQL query language. We illustrate the utility of our system with several use cases. Our datasets, methods and techniques are available at http://biotea.github.io.
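A small sketch of querying Biotea-style RDF with SPARQL via rdflib; the local file name and the specific predicates are assumptions made for illustration, and the real datasets and vocabularies are documented at http://biotea.github.io.

```python
from rdflib import Graph

# Parse a hypothetical local download of one annotated article in RDF/XML.
g = Graph()
g.parse("biotea_sample.rdf")

# Count annotations per article title (predicates assumed for illustration).
query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX oa:      <http://www.w3.org/ns/oa#>
SELECT ?article ?title (COUNT(?annotation) AS ?n)
WHERE {
  ?article dcterms:title ?title .
  ?annotation oa:hasTarget ?article .
}
GROUP BY ?article ?title
"""
for row in g.query(query):
    print(row.article, row.title, row.n)
```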
Kohn, K W; Ewig, R A
1979-03-28
DNA-protein crosslinks produced in mouse leukemia L1210 cells by trans-Pt(II)diamminedichloride were quantitated using the technique of DNA alkaline elution. DNA single-strand segments that were or were not linked to protein were separable into distinct components by alkaline elution after exposure of the cells to 2--15 kR of X-ray. Protein-linked DNA strands were separated on the basis of their retention on filters at pH 12, while free DNA strands of the size generated by 2--15 kR of X-ray passed rapidly through the filters. The retention of protein-linked DNA strands was attributable to adsorption of protein to the filter under the conditions of alkaline elution. The results obeyed a simple quantitative model according to which the frequency of DNA-protein crosslinks could be calculated.
Modeling of Adaptive Optics-Based Free-Space Communications Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilks, S C; Morris, J R; Brase, J M
2002-08-06
We introduce a wave-optics based simulation code written for air-optic laser communication links that includes a detailed model of an adaptive optics compensation system. We present the results obtained by this model, where the phase of a communications laser beam is corrected after it propagates through a turbulent atmosphere. The phase of the received laser beam is measured using a Shack-Hartmann wavefront sensor, and the correction method utilizes a MEMS mirror. Strehl improvement and the amount of power coupled to the receiving fiber for both 1 km horizontal and 28 km slant paths are presented.
A general method for targeted quantitative cross-linking mass spectrometry
USDA-ARS?s Scientific Manuscript database
Chemical cross-linking mass spectrometry (XL-MS) provides protein structural information by identifying covalently linked proximal amino acid residues on protein surfaces. The information gained by this technique is complementary to other structural biology methods such as x-ray crystallography, NM...
Efficient weighting strategy for enhancing synchronizability of complex networks
NASA Astrophysics Data System (ADS)
Wang, Youquan; Yu, Feng; Huang, Shucheng; Tu, Juanjuan; Chen, Yan
2018-04-01
Networks with a high propensity to synchronization are desired in many applications ranging from biology to engineering. In general, there are two ways to enhance the synchronizability of a network: link rewiring and/or link weighting. In this paper, we propose a new link weighting strategy based on the concept of the neighborhood subgroup. The neighborhood subgroup of a node i through node j in a network, i.e. Gi→j, means that node u belongs to Gi→j if node u belongs to the first-order neighbors of j (not including i). Our proposed weighting scheme uses local and global structural properties of the network, such as node degree, betweenness centrality and closeness centrality. We applied the method to scale-free and Watts-Strogatz networks of different structural properties and show the good performance of the proposed weighting scheme. Furthermore, as model networks cannot capture all essential features of real-world complex networks, we considered a number of undirected and unweighted real-world networks. To the best of our knowledge, the proposed weighting strategy outperformed the previously published weighting methods by enhancing the synchronizability of these real-world networks.
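The paper's weighting formula is not given in the abstract, but the evaluation criterion it targets can be illustrated: assign some centrality-based edge weights and compare the Laplacian eigenratio λ_N/λ_2 (smaller generally means easier to synchronize) before and after weighting. The inverse-betweenness weighting below is an assumption for illustration, not the proposed scheme.

```python
import networkx as nx
import numpy as np

def eigenratio(G, weight=None):
    """Synchronizability index lambda_N / lambda_2 of the (weighted) Laplacian."""
    L = nx.laplacian_matrix(G, weight=weight).toarray().astype(float)
    eigs = np.sort(np.linalg.eigvalsh(L))
    return eigs[-1] / eigs[1]

G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)

# Assumed illustrative weighting: de-emphasize links whose endpoints carry
# high betweenness (weights stay in (0, 1]).
bc = nx.betweenness_centrality(G)
for u, v in G.edges():
    G[u][v]["w"] = 1.0 / (1.0 + bc[u] + bc[v])

print("unweighted eigenratio:", round(eigenratio(G), 2))
print("weighted eigenratio  :", round(eigenratio(G, weight="w"), 2))
```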
A Method to Analyze How Various Parts of Clouds Influence Each Other's Brightness
NASA Technical Reports Server (NTRS)
Varnai, Tamas; Marshak, Alexander; Lau, William (Technical Monitor)
2001-01-01
This paper proposes a method for obtaining new information on 3D radiative effects that arise from horizontal radiative interactions in heterogeneous clouds. Unlike current radiative transfer models, it can not only calculate how 3D effects change radiative quantities at any given point, but can also determine which areas contribute to these 3D effects, to what degree, and through what mechanisms. After describing the proposed method, the paper illustrates its new capabilities both for detailed case studies and for the statistical processing of large datasets. Because the proposed method makes it possible, for the first time, to link a particular change in cloud properties to the resulting 3D effect, in future studies it can be used to develop new radiative transfer parameterizations that would consider 3D effects in practical applications currently limited to 1D theory-such as remote sensing of cloud properties and dynamical cloud modeling.
A revised version of the transfer matrix method to analyze one-dimensional structures
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1983-01-01
A new and general method to analyze both free and forced vibration characteristics of one-dimensional structures is discussed in this paper. This scheme links for the first time the classical transfer matrix method with the recently developed integrating matrix technique to integrate systems of differential equations. Two alternative approaches to the problem are presented. The first is based upon the lumped parameter model to account for the inertia properties of the structure. The second releases that constraint allowing a more precise description of the physical system. The free vibration of a straight uniform beam under different support conditions is analyzed to test the accuracy of the two models. Finally some results for the free vibration of a 12th order system representing a curved, rotating beam prove that the present method is conveniently extended to more complicated structural dynamics problems.
ERIC Educational Resources Information Center
Nowinski, Wieslaw L.; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G.; Marchenko, Yevgen; Volkau, Ihar
2009-01-01
Preparation of tests and student's assessment by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to "Terminologia…
NASA Astrophysics Data System (ADS)
Korayem, M. H.; Shafei, A. M.
2013-02-01
The goal of this paper is to describe the application of Gibbs-Appell (G-A) formulation and the assumed modes method to the mathematical modeling of N-viscoelastic link manipulators. The paper's focus is on obtaining accurate and complete equations of motion which encompass the most related structural properties of lightweight elastic manipulators. In this study, two important damping mechanisms, namely, the structural viscoelasticity (Kelvin-Voigt) effect (as internal damping) and the viscous air effect (as external damping) have been considered. To include the effects of shear and rotational inertia, the assumption of Timoshenko beam (TB) theory (TBT) has been applied. Gravity, torsion, and longitudinal elongation effects have also been included in the formulations. To systematically derive the equations of motion and improve the computational efficiency, a recursive algorithm has been used in the modeling of the system. In this algorithm, all the mathematical operations are carried out by only 3×3 and 3×1 matrices. Finally, a computational simulation for a manipulator with two elastic links is performed in order to verify the proposed method.
Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai
2016-01-01
Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai
2016-01-01
Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639
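To make the structure of such a Takagi-Sugeno fuzzy estimator concrete, here is a minimal batch sketch: inputs are clustered with K-means, each cluster centre gets a Gaussian membership function, and one affine (first-order) consequent per rule is fitted by weighted least squares. The evolving aspects of the paper's EFNN, recursive cluster updates and the weighted recursive least-squares estimator, are replaced here by a static fit, and the data are synthetic, so treat this purely as an illustration of the rule structure rather than the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_ts_fuzzy(X, y, n_rules=3):
    # One rule per K-means cluster; Gaussian antecedents, affine consequents.
    centers = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(X).cluster_centers_
    sigma = X.std(axis=0) + 1e-6                       # shared Gaussian widths (an assumption)
    Xa = np.hstack([X, np.ones((len(X), 1))])          # consequents use [x, 1]
    coefs = []
    for c in centers:
        w = np.exp(-0.5 * np.sum(((X - c) / sigma) ** 2, axis=1))   # rule firing strengths
        sw = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(sw[:, None] * Xa, sw * y, rcond=None)  # weighted LS fit
        coefs.append(theta)
    return centers, sigma, np.array(coefs)

def predict_ts_fuzzy(model, X):
    centers, sigma, coefs = model
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w = np.exp(-0.5 * np.sum(((X[:, None, :] - centers[None]) / sigma) ** 2, axis=2))
    w /= w.sum(axis=1, keepdims=True)                  # normalised firing strengths
    return np.sum(w * (Xa @ coefs.T), axis=1)          # weighted mix of the rules' outputs

# Hypothetical usage: rows of X = (volume, occupancy, speed), y = link travel time.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = 2.0 + X @ np.array([1.0, 3.0, -2.0]) + 0.05 * rng.standard_normal(200)
model = fit_ts_fuzzy(X, y)
print(np.mean(np.abs(predict_ts_fuzzy(model, X) - y)))  # in-sample mean absolute error
```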
Ayllón, Daniel; Grimm, Volker; Attinger, Sabine; Hauhs, Michael; Simmer, Clemens; Vereecken, Harry; Lischeid, Gunnar
2018-05-01
Terrestrial environmental systems are characterised by numerous feedback links between their different compartments. However, scientific research is organised into disciplines that focus on processes within the respective compartments rather than on interdisciplinary links. Major feedback mechanisms between compartments might therefore have been systematically overlooked so far. Without identifying these gaps, initiatives on future comprehensive environmental monitoring schemes and experimental platforms might fail. We performed a comprehensive overview of feedbacks between compartments currently represented in environmental sciences and explored to what degree missing links have already been acknowledged in the literature. We focused on process models, as they can be regarded as repositories of scientific knowledge that compile the findings of numerous single studies. In total, 118 simulation models from 23 model types were analysed. Missing processes linking different environmental compartments were identified based on a meta-review of 346 published reviews, model intercomparison studies, and model descriptions. Eight disciplines of environmental sciences were considered, and 396 linking processes were identified and ascribed to the physical, chemical or biological domain. There were significant differences between model types and scientific disciplines regarding implemented interdisciplinary links. The most widespread interdisciplinary links were between physical processes in meteorology, hydrology and soil science that drive or set the boundary conditions for other processes (e.g., ecological processes). In contrast, most chemical and biological processes were restricted to links within the same compartment. Integration of multiple environmental compartments and interdisciplinary knowledge was scarce in most model types. There was a strong bias in suggested future research foci and model extensions towards reinforcing existing interdisciplinary knowledge rather than opening up new interdisciplinary pathways. No clear pattern across disciplines exists with respect to suggested future research efforts. There is no evidence that environmental research is clearly converging towards more integrated approaches or towards an overarching environmental systems theory. Copyright © 2017 Elsevier B.V. All rights reserved.
Heat Transfer Processes Linking Fire Behavior and Tree Mortality
NASA Astrophysics Data System (ADS)
Michaletz, S. T.; Johnson, E. A.
2004-12-01
Traditional methods for predicting post-fire tree mortality employ statistical models that neglect the processes linking fire behavior to physiological mortality mechanisms. Here we present a physical process approach which predicts tree mortality by linking fireline intensity with lateral (vascular cambium) and apical (vegetative bud) meristem necrosis. We use a linefire plume model with independently validated conduction and lumped capacitance heat transfer analyses to predict lethal meristem temperatures in tree stems, branches, and buds. These models show that meristem necrosis in large diameter (Bi ≥ 0.3) stems/branches is governed by meristem height, bark thickness, and bark water content, while meristem necrosis in small diameter (Bi < 0.3) branches/buds is governed by meristem height, branch/bud size, branch/bud water content, and foliage architecture. To investigate the effects of interspecific variation in these properties, we compare model results for Picea glauca (Moench) Voss and Pinus contorta Loudon var. latifolia Engelm. at fireline intensities from 50 to 3000 kW m-1. Parameters are obtained from allometric models which relate stem/branch diameter to bark thickness and height, as well as bark and bud water content data collected in the southern Canadian Rocky Mountains. Variation in foliage architecture is quantified using forced convection heat transfer coefficients measured in a laminar flow wind tunnel at Re from 100 to 2000, typical for branches/buds in a linefire plume. Results indicate that in unfoliated stems/branches, P. glauca meristems are more protected due to thicker bark, whereas in foliated branches/buds, P. contorta meristems are more protected due to larger bud size and foliage architecture.
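For orientation, the Biot number criterion and the lumped-capacitance energy balance invoked above take the standard textbook form below; the symbols (convection coefficient h, characteristic length L_c, conductivity k, density ρ, volume V, specific heat c, surface area A_s) are generic notation, not necessarily the authors' parameterisation:

```latex
\mathrm{Bi} = \frac{h L_c}{k}, \qquad
\rho V c\,\frac{dT}{dt} = -\,h A_s\,(T - T_\infty)
\;\Longrightarrow\;
T(t) = T_\infty + (T_0 - T_\infty)\,e^{-h A_s t/(\rho V c)}
```

In this framing, large-diameter members (Bi ≥ 0.3) presumably require the full conduction analysis, while small-diameter branches and buds can be treated with the lumped model.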
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.
2010-03-01
The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort, a comprehensive HRA qualitative approach is being developed. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
Network Model-Assisted Inference from Respondent-Driven Sampling Data
Gile, Krista J.; Handcock, Mark S.
2015-01-01
Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328
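For context, a minimal sketch of the kind of existing design-based RDS estimator that such model-assisted approaches are compared against: a Volz-Heckathorn-style estimator that weights each respondent by the inverse of their reported network degree. This is explicitly not the paper's working-network-model estimator, and the data and variable names below are hypothetical.

```python
import numpy as np

# Inverse-degree weighting corrects (roughly) for the fact that high-degree
# people are more likely to be reached by link-tracing over the network.
def inverse_degree_estimate(y, degrees):
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(degrees, dtype=float)   # inverse-degree weights
    return np.sum(w * y) / np.sum(w)

# Hypothetical usage: y is a 0/1 trait (e.g., HIV status), degrees are the
# self-reported personal network sizes of the respondents.
y = [1, 0, 0, 1, 0, 0, 1]
degrees = [20, 5, 8, 30, 4, 6, 15]
print(inverse_degree_estimate(y, degrees))
```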
[Study of cuttings identification using laser-induced breakdown spectroscopy].
Tian, Ye; Wang, Zhen-nan; Hou, Hua-ming; Zhai, Xiao-wei; Ci, Xing-hua; Zheng, Rong-er
2012-08-01
Cuttings identification is one of the most important links in cuttings logging, which plays a significant role in oil drilling. In the present paper, LIBS was used for the identification of four kinds of cutting samples from a logging field, and multivariate analysis was then used in data processing. A whole-spectra model and a feature model were built for cuttings identification using the PLS-DA method. The accuracy of the whole-spectra model was 88.3%, slightly higher than that of the feature model at 86.7%. In terms of data size, however, feature extraction reduced the number of variables from 24,041 to 27, which markedly improved the efficiency of data processing. The obtained results demonstrate that LIBS combined with chemometric methods could be developed into a rapid and valid approach to cuttings identification and has great potential for use in the logging field.
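As an illustration of the PLS-DA classification step, the sketch below regresses one-hot class indicators on the spectra with partial least squares and assigns each spectrum to the class with the largest predicted indicator. The number of components, the synthetic spectra, and the class labels are all assumptions for illustration; the paper's feature-extraction step would simply replace X with the reduced 27-variable matrix.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_fit(X, labels, n_components=5):
    classes = np.unique(labels)
    Y = (labels[:, None] == classes[None, :]).astype(float)   # one-hot targets
    model = PLSRegression(n_components=n_components).fit(X, Y)
    return model, classes

def plsda_predict(model, classes, X):
    scores = model.predict(X)                 # continuous class indicators
    return classes[np.argmax(scores, axis=1)] # pick the strongest indicator

# Hypothetical usage: X holds spectra (rows = shots, columns = channels),
# labels holds the cutting type of each shot.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 300)) + np.repeat(np.eye(4), 10, axis=0) @ rng.standard_normal((4, 300))
labels = np.repeat(np.array(["A", "B", "C", "D"]), 10)
model, classes = plsda_fit(X, labels)
print(np.mean(plsda_predict(model, classes, X) == labels))    # training accuracy
```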
Rands, Sean A
2014-01-01
Pollinator decline has been linked to landscape change, through both habitat fragmentation and the loss of habitat suitable for the pollinators to live within. One method for exploring why landscape change should affect pollinator populations is to combine individual-level behavioural ecological techniques with larger-scale landscape ecology. A modelling framework is described that uses spatially-explicit individual-based models to explore the effects of individual behavioural rules within a landscape. The technique described gives a simple method for exploring the effects of the removal of wild corridors, and the creation of wild set-aside fields: interventions that are common to many national agricultural policies. The effects of these manipulations on central-place nesting pollinators are varied, and depend upon the behavioural rules that the pollinators are using to move through the environment. The value of this modelling framework is discussed, and future directions for exploration are identified.
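As an illustration of what such a spatially explicit individual-based model can look like at its simplest, the sketch below places a central-place forager on a habitat grid and lets it move by an unbiased random walk, the crudest possible behavioural rule. The grid layout, movement rule, and parameter values are assumptions for illustration, not the framework described in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
STEPS = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])   # four-neighbour moves

def simulate_forager(habitat, nest, n_steps=500):
    # Random-walk forager starting at the nest; counts visits to flower cells.
    pos = np.array(nest)
    upper = np.array(habitat.shape) - 1
    visits = 0
    for _ in range(n_steps):
        pos = np.clip(pos + STEPS[rng.integers(4)], 0, upper)
        visits += habitat[tuple(pos)]
    return visits

# Hypothetical landscape: a flower-rich set-aside field plus a thin corridor
# leading back towards the nest; delete the corridor line to test its effect.
habitat = np.zeros((50, 50), dtype=int)
habitat[10:20, 30:40] = 1      # set-aside field
habitat[15, 0:30] = 1          # corridor
print(simulate_forager(habitat, nest=(15, 0)))
```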
Characterization of YBa2Cu3O7, including critical current density Jc, by trapped magnetic field
NASA Technical Reports Server (NTRS)
Chen, In-Gann; Liu, Jianxiong; Weinstein, Roy; Lau, Kwong
1992-01-01
Spatial distributions of persistent magnetic field trapped by sintered and melt-textured ceramic-type high-temperature superconductor (HTS) samples have been studied. The trapped field can be reproduced by a model of the current consisting of two components: (1) a surface current Js and (2) a uniform volume current Jv. This Js + Jv model gives a satisfactory account of the spatial distribution of the magnetic field trapped by different types of HTS samples. The magnetic moment can be calculated, based on the Js + Jv model, and the result agrees well with that measured by standard vibrating sample magnetometer (VSM). As a consequence, Jc predicted by VSM methods agrees with Jc predicted from the Js + Jv model. The field mapping method described is also useful to reveal the granular structure of large HTS samples and regions of weak links.
Prediction of missing links and reconstruction of complex networks
NASA Astrophysics Data System (ADS)
Zhang, Cheng-Jun; Zeng, An
2016-04-01
Predicting missing links in complex networks is of great significance from both theoretical and practical points of view: it not only helps us understand the evolution of real systems but also relates to many applications in social, biological and online systems. In this paper, we study the features of different simple link prediction methods, revealing that they may lead to the distortion of networks’ structural and dynamical properties. Moreover, we find that high prediction accuracy does not necessarily correspond to high performance in preserving network properties when link prediction methods are used to reconstruct networks. Our work highlights the importance of considering the feedback effect of link prediction methods on network properties when designing the algorithms.
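A minimal example of the "simple link prediction methods" in question is a similarity index such as the common-neighbours score; the sketch below ranks all non-edges of a test graph by this score. The choice of index, graph, and removed edge are illustrative assumptions, not the paper's experimental setup.

```python
import itertools
import networkx as nx

def common_neighbour_scores(G):
    # Score every non-edge by its number of common neighbours; the top-ranked
    # pairs are proposed as "missing" links for network reconstruction.
    scores = []
    for u, v in itertools.combinations(G.nodes, 2):
        if not G.has_edge(u, v):
            cn = len(list(nx.common_neighbors(G, u, v)))
            scores.append((u, v, cn))
    return sorted(scores, key=lambda t: t[2], reverse=True)

# Hypothetical test: remove one edge from a small benchmark graph and check
# whether the predictor ranks it highly.
G = nx.karate_club_graph()
G.remove_edge(0, 2)
print(common_neighbour_scores(G)[:5])
```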
The Formation Mechanism of Hydrogels.
Lu, Liyan; Yuan, Shiliang; Wang, Jing; Shen, Yun; Deng, Shuwen; Xie, Luyang; Yang, Qixiang
2017-06-12
Hydrogels are degradable polymeric networks in which cross-links play a vital role in structure formation and degradation. Cross-linking is a stabilization process in polymer chemistry that leads to the multi-dimensional extension of polymeric chains, resulting in network structures. Through cross-linking, hydrogels are formed into stable structures that differ from their raw materials. Generally, hydrogels can be prepared from either synthetic or natural polymers. Based on the type of cross-link junction, hydrogels can be categorized into two groups: the chemically cross-linked and the physically cross-linked. Chemically cross-linked gels have permanent junctions, in which covalent bonds are present between different polymer chains, leading to excellent mechanical strength. Although chemical cross-linking is a highly resourceful method for forming hydrogels, the cross-linkers used in hydrogel preparation should be extracted from the hydrogels before use because of their reported toxicity. In physically cross-linked gels, by contrast, dissolution is prevented by physical interactions such as ionic interactions, hydrogen bonds or hydrophobic interactions. Physical cross-linking methods for preparing hydrogels are therefore the alternative solution to cross-linker toxicity. Both approaches are discussed in this essay. Copyright © Bentham Science Publishers.
Elashoff, Robert M.; Li, Gang; Li, Ning
2009-01-01
In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease. PMID:18162112
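Written out in generic notation (mine, not necessarily the authors' exact specification), a joint model of this kind pairs a linear mixed-effects trajectory with cause-specific hazards that share latent random effects:

```latex
\begin{aligned}
Y_i(t) &= X_i^{\top}(t)\,\beta + Z_i^{\top}(t)\,b_i + \varepsilon_i(t), & \varepsilon_i(t) &\sim N(0,\sigma^2),\\
\lambda_{ik}(t \mid u_i) &= \lambda_{0k}(t)\,\exp\!\bigl(W_i^{\top}\gamma_k + \nu_k u_i\bigr), & k &= 1,\dots,K,
\end{aligned}
```

where the longitudinal random effects b_i and the frailty u_i are taken to be correlated or shared, which is what ties dropout and the competing failure types to the longitudinal trajectory; an EM algorithm can then treat these latent effects as missing data.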
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated the simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure, proving resilient to multiple sources of variability in experimental data, including measurement variability, speaker variability, and contextual variability. Prospects for extending our modelling paradigm to acoustic data are also discussed. PMID:25996153