Semantic Networks and Social Networks
ERIC Educational Resources Information Center
Downes, Stephen
2005-01-01
Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…
Advances in Artificial Neural Networks - Methodological Development and Application
USDA-ARS's Scientific Manuscript database
Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...
A multi-criteria decision aid methodology to design electric vehicles public charging networks
NASA Astrophysics Data System (ADS)
Raposo, João; Rodrigues, Ana; Silva, Carlos; Dentinho, Tomaz
2015-05-01
This article presents a new multi-criteria decision aid methodology, dynamic-PROMETHEE, here used to design electric vehicle charging networks. In applying this methodology to a Portuguese city, results suggest that it is effective in designing electric vehicle charging networks, generating time- and policy-based scenarios, and considering supply, demand and the city's urban structure. Dynamic-PROMETHEE adds to PROMETHEE's already known characteristics other useful features, such as decision memory over time, versatility and adaptability. The case study, used here to present dynamic-PROMETHEE, served as inspiration and basis for creating this new methodology. It can be used to model different problems and scenarios that present similar requirements.
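As a concrete illustration of the outranking machinery that dynamic-PROMETHEE builds on, here is a minimal sketch of classical PROMETHEE II. The candidate sites, criteria, weights, and the linear preference function are illustrative assumptions, not data from the paper.

```python
import numpy as np

def promethee_ii(scores, weights, p=0.5):
    """scores: (n_alternatives, n_criteria), higher is better; p: linear preference threshold."""
    n = scores.shape[0]
    phi_plus = np.zeros(n)
    phi_minus = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]            # criterion-wise advantage of a over b
            pref = np.clip(d / p, 0.0, 1.0)      # linear (V-shape) preference function
            pi_ab = weights @ pref               # weighted aggregated preference index
            phi_plus[a] += pi_ab / (n - 1)       # leaving flow
            phi_minus[b] += pi_ab / (n - 1)      # entering flow
    return phi_plus - phi_minus                  # net flow: rank by descending value

# Three hypothetical charging-site layouts scored on demand coverage,
# grid capacity headroom, and (inverted) land cost.
scores = np.array([[0.8, 0.4, 0.6],
                   [0.5, 0.9, 0.7],
                   [0.6, 0.6, 0.9]])
weights = np.array([0.5, 0.3, 0.2])
print(promethee_ii(scores, weights))             # highest net flow = preferred layout
```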
Vein matching using artificial neural network in vein authentication systems
NASA Astrophysics Data System (ADS)
Noori Hoshyar, Azadeh; Sulaiman, Riza
2011-10-01
Personal identification technology for security systems is developing rapidly. Traditional authentication tokens such as keys, passwords and cards are not safe enough, because they can be stolen or easily forgotten. Biometrics, as a maturing technology, has been applied to a wide range of systems. According to different researchers, the vein is a good candidate among biometric traits such as fingerprint, hand geometry, voice and DNA for authentication systems. Vein authentication systems can be designed with different methodologies, all of which include a matching stage that is crucial for the final verification of the system. A neural network is an effective methodology for matching and recognizing individuals in authentication systems. Therefore, this paper explains and implements a neural network methodology for a finger vein authentication system. The network is trained in Matlab to match the vein features of the authentication system. The network simulation shows a matching quality of 95%, which is good performance for authentication system matching.
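The abstract's matching stage is trained in Matlab; the following is a hedged scikit-learn analog of such a matching network, with hypothetical vein-feature vectors standing in for the paper's features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32))        # hypothetical vein-feature vectors
y = rng.integers(0, 2, size=400)      # 1 = genuine pair, 0 = impostor pair

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
# On these random features accuracy is chance level; discriminative vein
# features are what make the ~95% matching quality reported attainable.
print("matching accuracy:", clf.score(X_te, y_te))
```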
Three-dimensional stochastic adjustment of volcano geodetic network in Arenal volcano, Costa Rica
NASA Astrophysics Data System (ADS)
Muller, C.; van der Laat, R.; Cattin, P.-H.; Del Potro, R.
2009-04-01
Volcano geodetic networks are a key instrument to understanding magmatic processes and, thus, forecasting potentially hazardous activity. These networks are extensively used on volcanoes worldwide and generally comprise a number of different traditional and modern geodetic surveying techniques such as levelling, distances, triangulation and GNSS. However, in most cases, data from the different methodologies are surveyed, adjusted and analysed independently. Experience shows that the problem with this procedure is the mismatch between the excellent correlation of position values within a single technique and the low cross-correlation of such values between different techniques, or when the same network is surveyed again shortly afterwards with the same technique. Moreover, maintaining a separate independent network for each geodetic surveying technique strongly increases the logistics, and thus the cost, of each measurement campaign. It is therefore important to develop geodetic networks that combine the different geodetic surveying techniques, and to adjust the geodetic data together in order to better quantify the uncertainties associated with the measured displacements. In order to overcome the lack of inter-methodology data integration, the Geomatic Institute of the University of Applied Sciences of Western Switzerland (HEIG-VD) has developed a methodology that uses TRINET+, a 3D stochastic adjustment software for redundant geodetic networks. The methodology consists of using each geodetic measurement technique for its strengths relative to the other methodologies, and combining the measurements in a single network allows more cost-effective surveying. The geodetic data are then adjusted and analysed in the same reference frame. The adjustment methodology is based on the least-squares method and links the data with the geometry. TRINET+ also allows a priori simulations of the network to be run, hence testing the quality and resolution to be expected for a given network even before it is built. Moreover, a posteriori analysis enables identifying, and hence dismissing, measurement errors (antenna height, atmospheric effects, etc.). Here we present a preliminary effort to apply this technique to volcano deformation. A geodetic network has been developed on the western flank of the Arenal volcano in Costa Rica. It is surveyed with GNSS, angular and EDM (Electronic Distance Measurement) measurements. Three measurement campaigns were carried out between February and June 2008. The results show consistent and accurate output of deformation and uncertainty for each of the 12 benchmarks surveyed. The three campaigns also prove the repeatability and consistency of the statistical indicators and the displacement vectors. Although this methodology has only recently been applied to volcanoes, we suggest that, given its cost-effective, high-quality results, it has the potential to be incorporated into the design and analysis of volcano geodetic networks worldwide.
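The combined adjustment TRINET+ performs is, at its core, a least-squares (Gauss-Markov) adjustment linking heterogeneous observations to station coordinates. A generic sketch with toy numbers (not Arenal data) follows; it also shows how the parameter covariance yields the displacement uncertainties the abstract mentions.

```python
import numpy as np

A = np.array([[1.0, 0.0],   # design matrix: each row links one observation
              [0.0, 1.0],   # (levelling, EDM distance, GNSS component, ...)
              [1.0, 1.0]])  # to the unknown coordinate corrections
l = np.array([0.012, -0.004, 0.009])                       # reduced observations (m)
P = np.diag([1 / 0.002**2, 1 / 0.003**2, 1 / 0.005**2])    # weights = 1 / sigma^2

N = A.T @ P @ A                          # normal matrix
x = np.linalg.solve(N, A.T @ P @ l)      # adjusted parameters
v = A @ x - l                            # residuals (basis for outlier screening)
dof = A.shape[0] - A.shape[1]
s0_sq = (v @ P @ v) / dof                # a-posteriori variance factor
Qx = s0_sq * np.linalg.inv(N)            # covariance of the parameters
print(x, np.sqrt(np.diag(Qx)))           # displacements and their uncertainties
```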
Statistical Model Applied to NetFlow for Network Intrusion Detection
NASA Astrophysics Data System (ADS)
Proto, André; Alexandre, Leandro A.; Batista, Maira L.; Oliveira, Isabela L.; Cansian, Adriano M.
Computers and network services have become ubiquitous. As a consequence, illicit events have grown in number, and computer and network security has become essential in any computing environment. Many methodologies have been created to identify such events; however, with the increasing number of users and services on the Internet, monitoring a large network environment is difficult. This paper proposes a methodology for event detection in large-scale networks. The proposal approaches anomaly detection using the NetFlow protocol and statistical methods, monitoring the environment within a timeframe suited to the application.
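In the spirit of the statistical model described (the paper's exact statistics are not given here), a minimal anomaly detector flags time windows whose NetFlow record counts deviate strongly from a robust baseline; the threshold and toy counts are assumptions.

```python
import numpy as np

def flag_anomalies(counts, k=3.5):
    """counts: NetFlow records per time window; returns indices of anomalous windows."""
    med = np.median(counts)
    mad = max(np.median(np.abs(counts - med)), 1e-9)   # robust spread, guard against 0
    robust_z = 0.6745 * (counts - med) / mad           # ~N(0,1) under normal traffic
    return np.where(np.abs(robust_z) > k)[0]

counts = np.array([980, 1010, 1005, 995, 4300, 1002, 990])  # toy per-minute flow counts
print(flag_anomalies(counts))    # -> [4], the burst window
```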
Value-Creating Networks: Organizational Issues and Challenges
ERIC Educational Resources Information Center
Allee, Verna
2009-01-01
Purpose: The purpose of this paper is to provide examples of evaluating value-creating networks and to address the organizational issues and challenges of a network orientation. Design/methodology/approach: Value network analysis was first developed in 1993 and was adapted in 1997 for intangible asset management. It has been applied from shopfloor…
A negotiation methodology and its application to cogeneration planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S.M.; Liu, C.C.; Luu, S.
Power system planning has become a complex process in utilities today. This paper presents a methodology for integrated planning with multiple objectives. The methodology uses a graphical representation (Goal-Decision Network) to capture the planning knowledge. The planning process is viewed as a negotiation process that applies three negotiation operators to search for beneficial decisions in a GDN. Also, the negotiation framework is applied to the problem of planning for cogeneration interconnection. The simulation results are presented to illustrate the cogeneration planning process.
Hosseini, Marjan; Kerachian, Reza
2017-09-01
This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for the uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, namely the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.
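The OWA operator at the heart of the multi-criteria step weights the sorted criterion scores rather than fixed criteria, which lets a single weight vector span "and-like" to "or-like" aggregation. A minimal sketch with hypothetical station scores:

```python
import numpy as np

def owa(scores, weights):
    """Apply OWA weights to the sorted (descending) scores; weights sum to 1."""
    return np.sort(scores)[::-1] @ np.asarray(weights)

station_scores = np.array([0.9, 0.4, 0.7])   # e.g. redundancy, data quality, access cost
w_optimistic = [0.6, 0.3, 0.1]               # emphasizes the best scores ("or-like")
w_pessimistic = [0.1, 0.3, 0.6]              # emphasizes the worst scores ("and-like")
print(owa(station_scores, w_optimistic), owa(station_scores, w_pessimistic))
```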
Quantitative Analysis of the Interdisciplinarity of Applied Mathematics.
Xie, Zheng; Duan, Xiaojun; Ouyang, Zhenzheng; Zhang, Pengyuan
2015-01-01
The increasing use of mathematical techniques in scientific research leads to the interdisciplinarity of applied mathematics. This viewpoint is validated quantitatively here by statistical and network analysis on the corpus PNAS 1999-2013. A network giving a panoramic view of the interdisciplinary relationships between disciplines is built from the corpus. Specific network indicators show the hub role of applied mathematics in interdisciplinary research. Statistical analysis of the corpus content finds that algorithms, a primary topic of applied mathematics, positively correlates with, increasingly co-occurs with, and has a long-run equilibrium relationship with certain typical research paradigms and methodologies. This finding can be understood as an intrinsic cause of the interdisciplinarity of applied mathematics.
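A hedged sketch of the kind of network indicators used to identify hubs: build a small discipline co-occurrence network (toy data, not the PNAS corpus) and compute centralities with networkx.

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([          # edge weight = number of shared papers (toy)
    ("applied mathematics", "biology", 120),
    ("applied mathematics", "physics", 90),
    ("applied mathematics", "economics", 40),
    ("biology", "chemistry", 70),
    ("physics", "chemistry", 30),
])
print(nx.degree_centrality(G))       # local connectivity of each discipline
print(nx.betweenness_centrality(G))  # brokerage: hub disciplines score high
```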
ERIC Educational Resources Information Center
Putnik, Goran; Costa, Eric; Alves, Cátia; Castro, Hélio; Varela, Leonilde; Shah, Vaibhav
2016-01-01
Social network-based engineering education (SNEE) is designed and implemented as a model of Education 3.0 paradigm. SNEE represents a new learning methodology, which is based on the concept of social networks and represents an extended model of project-led education. The concept of social networks was applied in the real-life experiment,…
Semantic Social Network Portal for Collaborative Online Communities
ERIC Educational Resources Information Center
Neumann, Marco; O'Murchu, Ina; Breslin, John; Decker, Stefan; Hogan, Deirdre; MacDonaill, Ciaran
2005-01-01
Purpose: The motivation for this investigation is to apply social networking features to a semantic network portal, which supports the efforts in enterprise training units to up-skill the employee in the company, and facilitates the creation and reuse of knowledge in online communities. Design/methodology/approach: The paper provides an overview…
A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks.
Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua
2015-12-17
Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed as part of network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and the Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. Firstly, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Secondly, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed, and the Node Minimal Effort (NME) is defined to quantify attack cost and derive system security levels. Thirdly, to calculate the NME of an attack graph while taking the dynamic factors of SDN-MNs into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism.
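TOPSIS, the ranking technique the paper integrates with AHP, can be sketched compactly; the decision matrix, weights, and criterion directions below are illustrative, not the paper's attack-graph quantities.

```python
import numpy as np

def topsis(X, w, benefit):
    """X: (alternatives x criteria); w: weights; benefit: True where larger is better."""
    V = X / np.linalg.norm(X, axis=0) * w            # normalize and weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness in [0, 1]; rank descending

X = np.array([[0.7, 3.0, 2.0],    # e.g. exploit likelihood, attacker effort, impact
              [0.4, 5.0, 4.0],
              [0.9, 2.0, 3.0]])
w = np.array([0.5, 0.2, 0.3])
print(topsis(X, w, benefit=np.array([True, False, True])))
```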
Who "owns" the network: a case study of new media artists' use of high-bandwidth networks
NASA Astrophysics Data System (ADS)
Lesage, F.
The objective of this paper is to give a brief overview of a research project dealing with the social construction of the use of information and communication technologies among new media artists interested in online collaboration. It outlines the theoretical and methodological tools applied to the case study of the MARCEL Network.
Díaz Córdova, Diego
2016-01-01
The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social network analysis. To illustrate these methods in action, two cases based on materials typical of the anthropology of food are presented. For the first strategy, fieldwork on meal recall carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) was used; for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.
GFD-Net: A novel semantic similarity methodology for the analysis of gene networks.
Díaz-Montaña, Juan J; Díaz-Díaz, Norberto; Gómez-Vela, Francisco
2017-04-01
Since the popularization of biological network inference methods, it has become crucial to create methods to validate the resulting models. Here we present GFD-Net, the first methodology that applies the concept of semantic similarity to gene network analysis. GFD-Net combines the concept of semantic similarity with the use of gene network topology to analyze the functional dissimilarity of gene networks based on Gene Ontology (GO). The main innovation of GFD-Net lies in the way semantic similarity is used to analyze gene networks while taking the network topology into account. GFD-Net selects a functionality for each gene (specified by a GO term), weights each edge according to the dissimilarity between the nodes at its ends, and calculates a quantitative measure of the network functional dissimilarity, i.e. a quantitative value of the degree of dissimilarity between the connected genes. The robustness of GFD-Net as a gene network validation tool was demonstrated by performing a ROC analysis on several network repositories. Furthermore, a well-known network was analyzed, showing that GFD-Net can also be used to infer knowledge. The relevance of GFD-Net becomes more evident in Section "GFD-Net applied to the study of human diseases", where an example of how GFD-Net can be applied to the study of human diseases is presented. GFD-Net is available as an open-source Cytoscape app that offers a user-friendly interface to configure and execute the algorithm, as well as the ability to visualize and interact with the results (http://apps.cytoscape.org/apps/gfdnet).
Economic development evaluation based on science and patents
NASA Astrophysics Data System (ADS)
Jokanović, Bojana; Lalic, Bojan; Milovančević, Miloš; Simeunović, Nenad; Marković, Dusan
2017-09-01
Economic development can be driven by many factors, and science and technology factors can influence it drastically. Since economic analysis can be very challenging because of high nonlinearity, the main aim of this study was to apply a computational intelligence methodology, the artificial neural network approach, to estimate economic development based on different science and technology factors. Gross domestic product (GDP) was used as the measure of economic development, and patents in different fields were used as the science and technology factors. It was found that patents in the field of electrical engineering have the highest influence on economic development, i.e. on GDP.
The application of complex network time series analysis in turbulent heated jets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charakopoulos, A. K.; Karakasidis, T. E., E-mail: thkarak@uth.gr; Liakopoulos, A.
2014-06-15
In the present study, we applied the methodology of complex network-based time series analysis to experimental temperature time series from a vertical turbulent heated jet. More specifically, we approach the hydrodynamic problem of discriminating time series corresponding to various regions relative to the jet axis, i.e., distinguishing time series corresponding to regions close to the jet axis from time series originating in regions with a different dynamical regime, based on the constructed network properties. Applying the phase-space transformation method (k nearest neighbors) and also the visibility algorithm, we transformed time series into networks and evaluated topological properties of the networks such as degree distribution, average path length, diameter, modularity, and clustering coefficient. The results show that the complex network approach allows distinguishing, identifying, and exploring in detail various dynamical regions of the jet flow, and associating them with the corresponding physical behavior. In addition, in order to reject the hypothesis that the studied networks originate from a stochastic process, we generated random networks and compared their statistical properties with those originating from the experimental data. As far as the efficiency of the two methods for network construction is concerned, we conclude that both methodologies lead to network properties that present almost the same qualitative behavior and allow us to reveal the underlying system dynamics.
Effective network inference through multivariate information transfer estimation
NASA Astrophysics Data System (ADS)
Dahlqvist, Carl-Henrik; Gnabo, Jean-Yves
2018-06-01
Network representation has steadily gained popularity over the past decades. In many disciplines, such as finance, genetics, neuroscience or human travel, to name a few, the network may not be directly observable and needs to be inferred from time-series data, raising the issue of separating direct interactions between two entities forming the network from indirect interactions coming through its remaining part. Drawing on recent contributions proposing strategies to deal with this problem, such as the so-called "global silencing" approach of Barzel and Barabasi or the "network deconvolution" of Feizi et al. (2013), we propose a novel methodology to infer an effective network structure from multivariate conditional information transfers. Its core principle is to test the information transfer between two nodes through a step-wise approach, conditioning the transfer for each pair on a specific set of relevant nodes identified by our algorithm from the rest of the network. The methodology is model-free and can be applied to high-dimensional networks with both inter-lag and intra-lag relationships. It outperforms state-of-the-art approaches in eliminating redundancies and, more generally, in retrieving simulated artificial networks in our Monte Carlo experiments. We apply the method to stock market data at different frequencies (15 min, 1 h, 1 day) to retrieve the network of the largest US financial institutions, and then document how banks' centrality measures relate to their systemic vulnerability.
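A schematic of the core test: estimate the information transfer from X to Y conditioned on other nodes Z, here with a simple plug-in estimator on coarsely discretized series and one lag. The binning and lag structure are simplifying assumptions; the paper's estimator and conditioning-set search are richer.

```python
import numpy as np
from collections import Counter

def cond_transfer_entropy(x, y, z, bins=3):
    """Plug-in estimate of TE(X -> Y | Z) with one lag on discretized series."""
    disc = lambda s: np.digitize(s, np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1]))
    x, y, z = disc(x), disc(y), disc(z)
    quads = Counter(zip(y[1:], y[:-1], z[:-1], x[:-1]))     # (y_t+1, y_t, z_t, x_t)
    trips = Counter(zip(y[1:], y[:-1], z[:-1]))             # without the source x
    cond_full = Counter(zip(y[:-1], z[:-1], x[:-1]))
    cond_red = Counter(zip(y[:-1], z[:-1]))
    n, te = len(y) - 1, 0.0
    for (y1, y0, z0, x0), c in quads.items():
        p_full = c / cond_full[(y0, z0, x0)]                # p(y1 | y0, z0, x0)
        p_red = trips[(y1, y0, z0)] / cond_red[(y0, z0)]    # p(y1 | y0, z0)
        te += (c / n) * np.log2(p_full / p_red)
    return te

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)   # y is driven by lagged x
z = rng.normal(size=2000)                         # irrelevant third node
print(cond_transfer_entropy(x, y, z))   # clearly positive: direct link x -> y
print(cond_transfer_entropy(z, y, x))   # near zero once x is conditioned on
```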
Improved classification of drainage networks using junction angles and secondary tributary lengths
NASA Astrophysics Data System (ADS)
Jung, Kichul; Marpu, Prashanth R.; Ouarda, Taha B. M. J.
2015-06-01
River networks in different regions have distinct characteristics generated by geological processes. These differences enable classification of drainage networks using several measures based on many features of the networks. In this study, we propose a new approach that uses only the junction angles, together with secondary tributary lengths, to directly classify different network types. The methodology is based on observations of 50 predefined channel networks. The cumulative distributions of secondary tributary lengths for different ranges of junction angles are used to obtain descriptive values, defined through a power-law representation. The averages of these values for the known networks are used to represent the classes, and any unclassified network can then be classified by the similarity of its representative values to those of the known classes. The methodology is applied to 10 networks in the United Arab Emirates and Oman and five networks in the USA, and the results are validated against the classification obtained with other methods.
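A sketch of the classification idea as described: fit a power-law descriptor to the cumulative distribution of secondary tributary lengths within each junction-angle range, then assign an unclassified network to the class with the nearest descriptor vector. Bin edges, class means, and data are all assumptions.

```python
import numpy as np

def descriptor(angles, lengths, edges=(0, 60, 90, 180)):
    """One power-law slope per junction-angle bin, via a log-log fit of the CCDF."""
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        L = np.sort(lengths[(angles >= lo) & (angles < hi)])
        ccdf = 1.0 - np.arange(len(L)) / len(L)      # empirical P(length >= L)
        feats.append(np.polyfit(np.log(L), np.log(ccdf), 1)[0])
    return np.array(feats)

def classify(unknown, class_means):
    names = list(class_means)
    d = [np.linalg.norm(unknown - class_means[k]) for k in names]
    return names[int(np.argmin(d))]

rng = np.random.default_rng(2)
angles = rng.uniform(0, 180, 500)                   # junction angles (degrees)
lengths = rng.pareto(1.5, 500) + 1.0                # secondary tributary lengths
reps = {"dendritic": np.array([-1.2, -1.4, -1.6]),  # hypothetical class means
        "trellis":   np.array([-0.6, -2.0, -1.1])}
print(classify(descriptor(angles, lengths), reps))
```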
Architecture for networked electronic patient record systems.
Takeda, H; Matsumura, Y; Kuwata, S; Nakano, H; Sakamoto, N; Yamamoto, R
2000-11-01
There have been two major approaches to the development of networked electronic patient record (EPR) architecture. One uses object-oriented methodologies for constructing the model, examples of which include the GEHR project, Synapses and HL7 RIM. The second approach uses document-oriented methodologies, as applied, for example, in HL7 PRA. It is practically beneficial to combine the advantages of both approaches and to add security technologies such as PKI. In recognition of the similarity with electronic commerce, a certificate authority acting as a trusted third party will be organised to establish the networked EPR system. This paper describes a Japanese functional model that has been developed and proposes a document-object-oriented architecture, which is compared with other existing models.
Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.
Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam
2016-01-01
We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data, using established information and fundamental mathematical theory. We have employed the Recurrent Neural Network formalism to accurately extract the underlying dynamics present in the time-series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology was first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we implemented our proposed framework on experimental (in vivo) datasets. Finally, we investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we ran our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum deterioration of just over 15% in the worst case.
Functional approximation using artificial neural networks in structural mechanics
NASA Technical Reports Server (NTRS)
Alam, Javed; Berke, Laszlo
1993-01-01
The artificial neural networks (ANN) methodology is an outgrowth of research in artificial intelligence. In this study, the feed-forward network model that was proposed by Rumelhart, Hinton, and Williams was applied to the mapping of functions that are encountered in structural mechanics problems. Several different network configurations were chosen to train the available data for problems in materials characterization and structural analysis of plates and shells. By using the recall process, the accuracy of these trained networks was assessed.
Deformable image registration using convolutional neural networks
NASA Astrophysics Data System (ADS)
Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.
2018-03-01
Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
Design of a Competitive and Collaborative Learning Strategy in a Communication Networks Course
ERIC Educational Resources Information Center
Regueras, L. M.; Verdu, E.; Verdu, M. J.; de Castro, J. P.
2011-01-01
In this paper, an educational methodology based on collaborative and competitive learning is proposed. The suggested approach has been successfully applied to an undergraduate communication networks course, which is part of the core curriculum of the three-year degree in telecommunications engineering at the University of Valladolid in Spain. This…
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
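A hedged sketch of a simple input-sensitivity test in the spirit of the paper: permute one input variable at a time and measure the drop in model score, which works for numeric and label-encoded nominal inputs alike. The model and data are toy stand-ins, not the TRISS variables.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 2.0 * X[:, 2] > 0).astype(int)    # only inputs 0 and 2 matter

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
base = model.score(X, y)
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])         # break this input's link to the outcome
    print(f"input {j}: score drop = {base - model.score(Xp, y):.3f}")
```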
2012-01-01
…networks has become fast, cheap, and easy (Shapiro, 1971; Trigg & Weiser, 1986). Modern information and communication technologies, such as the internet… However, once the model is learned, inference time is not subject to this constraint. Therefore, applying the model in end-user applications is fast… products that facilitate the fast collection and assessment of these networks. For the purpose of analyzing socio-technical networks of geopolitical…
Policy Mobilities and Methodology: A Proposition for Inventive Methods in Education Policy Studies
ERIC Educational Resources Information Center
Gulson, Kalervo N.; Lewis, Steven; Lingard, Bob; Lubienski, Christopher; Takayama, Keita; Webb, P. Taylor
2017-01-01
The argument of this paper is that new methodologies associated with the emerging field of "policy mobilities" can be applied, and are in fact required, to examine and research the networked and relational, or "topological", nature of globalised education policy, which cuts across the new spaces of policymaking and new modes of…
A neural network based methodology to predict site-specific spectral acceleration values
NASA Astrophysics Data System (ADS)
Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.
2010-12-01
A general neural network based methodology that has the potential to replace the computationally intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed-forward backpropagation neural network with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi, using the necessary geological and geotechnical data, for earthquakes considered to originate from the central seismic gap of the Himalayan belt. Surface-level ground motions and the corresponding site-specific response spectra are obtained using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as the target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and can also be updated by including more parameters depending on the state of the art in the subject.
Pendular behavior of public transport networks
NASA Astrophysics Data System (ADS)
Izawa, Mirian M.; Oliveira, Fernando A.; Cajueiro, Daniel O.; Mello, Bernardo A.
2017-07-01
In this paper, we propose a methodology that bears close resemblance to Fourier analysis of the first harmonic to study networks subject to pendular behavior. In this context, pendular behavior is characterized by people's movement from home to work in the morning and in the opposite direction in the afternoon. Pendular behavior is a relevant phenomenon in public transport networks because it may reduce the overall efficiency of the system as a result of asymmetric utilization in different directions. We apply this methodology to the bus transport system of Brasília, a city whose commercial and residential activities are located in distinct boroughs. We show that this methodology can be used to characterize the pendular behavior of the system, identifying the most critical nodes and the times of day when demand on the system is most severe.
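A minimal sketch of the first-harmonic idea: for each stop, extract the one-cycle-per-day Fourier component of its daily boarding profile; the amplitude measures pendularity strength and the phase separates morning-peak from afternoon-peak nodes. The 24-bin hourly profile is an assumed discretization.

```python
import numpy as np

def first_harmonic(profile):
    """profile: boardings per hour over one day (length 24)."""
    c = np.fft.rfft(profile)[1]                          # k = 1: one cycle per day
    amplitude = 2.0 * np.abs(c) / len(profile)           # pendularity strength
    peak_hour = (-np.angle(c)) * 24 / (2 * np.pi) % 24   # hour of the harmonic peak
    return amplitude, peak_hour

hours = np.arange(24)
residential = 50 + 40 * np.cos(2 * np.pi * (hours - 8) / 24)   # boardings peak ~ 08:00
commercial = 50 + 40 * np.cos(2 * np.pi * (hours - 18) / 24)   # boardings peak ~ 18:00
print(first_harmonic(residential))   # ~ (40.0, 8.0)
print(first_harmonic(commercial))    # ~ (40.0, 18.0)
```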
Modeling the resilience of critical infrastructure: the role of network dependencies.
Guidotti, Roberto; Chmielewski, Hana; Unnikrishnan, Vipin; Gardoni, Paolo; McAllister, Therese; van de Lindt, John
2016-01-01
Water and wastewater networks, electric power networks, transportation networks, communication networks, and information technology networks are among the critical infrastructure in our communities; their disruption during and after hazard events greatly affects communities' well-being, economic security, social welfare, and public health. In addition, a disruption in one network may cause disruption to other networks and lead to their reduced functionality. This paper presents a unified theoretical methodology for the modeling of dependent/interdependent infrastructure networks and incorporates it in a six-step probabilistic procedure to assess their resilience. Both the methodology and the procedure are general, can be applied to any infrastructure network and hazard, and can model different types of dependencies between networks. As an illustration, the paper models the direct effects of seismic events on the functionality of a potable water distribution network and the cascading effects of damage to the electric power network (EPN) on the potable water distribution network (WN). The results quantify the loss of functionality and the delay in the recovery process due to the dependency of the WN on the EPN, and they show the importance of capturing dependencies between networks when modeling the resilience of critical infrastructure.
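A schematic of the cross-network dependency mechanism (not the paper's six-step procedure itself): water-network nodes that depend on failed power-network substations are removed, and service is checked by connectivity. Topologies and the dependency map are toy.

```python
import networkx as nx

epn = nx.Graph([("sub1", "sub2"), ("sub2", "sub3")])     # power network (context)
wn = nx.Graph([("pumpA", "tankA"), ("pumpB", "tankA"),
               ("tankA", "district")])                   # water network
depends_on = {"pumpA": "sub1", "pumpB": "sub3"}          # WN node -> EPN node it needs

def district_served(failed_epn):
    g = wn.copy()
    g.remove_nodes_from(n for n, s in depends_on.items() if s in failed_epn)
    pumps = [p for p in ("pumpA", "pumpB") if p in g]
    return any(nx.has_path(g, p, "district") for p in pumps)

print(district_served(set()))              # True: fully functional
print(district_served({"sub1"}))           # True: pumpB still serves the district
print(district_served({"sub1", "sub3"}))   # False: cascading loss of service
```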
Real-time hydraulic interval state estimation for water transport networks: a case study
NASA Astrophysics Data System (ADS)
Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.
2018-03-01
Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
Hilgers, Ralf-Dieter; Bogdan, Malgorzata; Burman, Carl-Fredrik; Dette, Holger; Karlsson, Mats; König, Franz; Male, Christoph; Mentré, France; Molenberghs, Geert; Senn, Stephen
2018-05-11
IDeAl (Integrated designs and analysis of small population clinical trials) is an EU-funded project developing new statistical design and analysis methodologies for clinical trials in small population groups. Here we provide an overview of the IDeAl findings and give recommendations to applied researchers. The description of the findings is broken down by the nine scientific IDeAl work packages and summarizes results from the project's more than 60 publications to date in peer-reviewed journals. In addition, we applied text mining to evaluate the publications and the work packages' output in relation to the design and analysis terms derived from the IRDiRC task force report on small population clinical trials. The results are summarized, describing the developments from an applied viewpoint. The main results presented here are 33 practical recommendations drawn from the work, giving researchers comprehensive guidance to the improved methodology. In particular, the findings will help in designing and analysing efficient clinical trials in rare diseases with a limited number of patients available. We developed a network representation relating the hot topics developed by the IRDiRC task force on small population clinical trials to IDeAl's work, as well as relating the important methodologies that, by IDeAl's definition, are necessary to consider in the design and analysis of small-population clinical trials. These network representations establish a new perspective on the design and analysis of small-population clinical trials. IDeAl has provided a large number of options to refine the statistical methodology for small-population clinical trials from various perspectives. The 33 recommendations, related to the work packages, help the researcher design small-population clinical trials. The route to improvement is displayed in the IDeAl network, which represents the important statistical methodological skills necessary for the design and analysis of small-population clinical trials. The methods are ready for use.
Evaluating multiple determinants of the structure of plant-animal mutualistic networks.
Vázquez, Diego P; Chacoff, Natacha P; Cagnolo, Luciano
2009-08-01
The structure of mutualistic networks is likely to result from the simultaneous influence of neutrality and of the constraints imposed by complementarity in species phenotypes, phenologies, spatial distributions, and phylogenetic relationships, as well as sampling artifacts. We develop a conceptual and methodological framework to evaluate the relative contributions of these potential determinants. Applying this approach to the analysis of a plant-pollinator network, we show that information on relative abundance and phenology suffices to predict several aggregate network properties (connectance, nestedness, interaction evenness, and interaction asymmetry). However, such information falls short of predicting the detailed network structure (the frequency of pairwise interactions), leaving a large amount of variation unexplained. Taken together, our results suggest that both relative species abundance and complementarity in spatiotemporal distribution contribute substantially to generating the observed network patterns, but that this information is by no means sufficient to predict the occurrence and frequency of pairwise interactions. Future studies could use our methodological framework to evaluate the generality of our findings in a representative sample of study systems with contrasting ecological conditions.
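A hedged sketch of the neutrality-plus-constraints benchmark: build an interaction probability matrix from relative abundances and phenological overlap, draw random networks from it, and compare an aggregate property such as connectance with the observed value. All inputs are toy.

```python
import numpy as np

rng = np.random.default_rng(3)
plant_abund = np.array([0.5, 0.3, 0.2])   # relative abundances (toy)
poll_abund = np.array([0.4, 0.4, 0.2])
pheno = np.array([[1, 1, 0],              # 1 where flight season overlaps flowering
                  [1, 1, 1],
                  [0, 1, 1]])

P = np.outer(plant_abund, poll_abund) * pheno
P = P / P.sum()                           # interaction probability matrix

draws = rng.multinomial(100, P.ravel(), size=1000).reshape(1000, 3, 3)
null_connectance = (draws > 0).mean(axis=(1, 2))      # connectance of each random net
observed = 0.56                                       # hypothetical observed value
print(null_connectance.mean(),
      np.quantile(null_connectance, [0.025, 0.975]),  # null interval to compare against
      observed)
```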
WGCNA: an R package for weighted correlation network analysis.
Langfelder, Peter; Horvath, Steve
2008-12-29
Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
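WGCNA itself is an R package; the following is a hedged NumPy/SciPy analog of its first steps only (soft-thresholded correlation adjacency, a dissimilarity, and tree cutting into modules), omitting refinements such as the topological overlap measure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
drivers = rng.normal(size=(50, 3))                  # three latent module drivers
expr = np.repeat(drivers, 10, axis=1) + 0.5 * rng.normal(size=(50, 30))

beta = 6                                            # soft-threshold power
adj = np.abs(np.corrcoef(expr.T)) ** beta           # weighted adjacency (genes x genes)
dissim = 1.0 - adj
np.fill_diagonal(dissim, 0.0)
Z = linkage(squareform(dissim, checks=False), method="average")
modules = fcluster(Z, t=3, criterion="maxclust")    # cut tree into three modules
print(modules)                                      # recovers the planted gene modules
```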
Network representations of angular regions for electromagnetic scattering
2017-01-01
Network modeling in electromagnetics is an effective technique for treating scattering problems involving canonical and complex structures. Geometries constituted of angular regions (wedges) together with planar layers can now be approached with the Generalized Wiener-Hopf Technique supported by network representation in the spectral domain. While the network representations in spectral planes are of great importance in themselves, the aim of this paper is to present a theoretical basis and a general procedure for the formulation of complex scattering problems using network representation for the Generalized Wiener-Hopf Technique, starting basically from the wave equation. In particular, while the spectral network representations are relatively well known for planar layers, the network modelling of an angular region requires a new theory, which is developed in this paper. With this theory we complete the formulation of a network methodology whose effectiveness is demonstrated by application to a complex scattering problem, with practical solutions given in terms of GTD/UTD diffraction coefficients and total far fields for engineering applications. The methodology can be applied to other fields of physics.
Szaleniec, Maciej
2012-01-01
Artificial Neural Networks (ANNs) are introduced as robust and versatile tools in quantitative structure-activity relationship (QSAR) modeling. Their application to the modeling of enzyme reactivity is discussed, along with methodological issues. Methods of input variable selection, optimization of network internal structure, data set division and model validation are discussed. The application of ANNs in the modeling of enzyme activity over the last 20 years is briefly recounted. The discussed methodology is exemplified by the case of ethylbenzene dehydrogenase (EBDH). Intelligent Problem Solver and genetic algorithms are applied for input vector selection, whereas k-means clustering is used to partition the data into training and test cases. The obtained models exhibit high correlation between the predicted and experimental values (R² > 0.9). Sensitivity analyses and study of the response curves are used as tools for the physicochemical interpretation of the models in terms of the EBDH reaction mechanism. Neural networks are shown to be a versatile tool for the construction of robust QSAR models that can be applied to a range of aspects important in drug design and the prediction of biological activity.
Fuzzy neural network methodology applied to medical diagnosis
NASA Technical Reports Server (NTRS)
Gorzalczany, Marian B.; Deutsch-Mcleish, Mary
1992-01-01
This paper presents a technique for building expert systems that combines the fuzzy-set approach with artificial neural network structures. This technique can effectively deal with two types of medical knowledge: a nonfuzzy one and a fuzzy one which usually contributes to the process of medical diagnosis. Nonfuzzy numerical data is obtained from medical tests. Fuzzy linguistic rules describing the diagnosis process are provided by a human expert. The proposed method has been successfully applied in veterinary medicine as a support system in the diagnosis of canine liver diseases.
An Interdisciplinary Approach for Designing Kinetic Models of the Ras/MAPK Signaling Pathway.
Reis, Marcelo S; Noël, Vincent; Dias, Matheus H; Albuquerque, Layra L; Guimarães, Amanda S; Wu, Lulu; Barrera, Junior; Armelin, Hugo A
2017-01-01
We present in this article a methodology for designing kinetic models of molecular signaling networks, applied as an example to the modeling of one of the Ras/MAPK signaling pathways in the mouse Y1 adrenocortical cell line. The methodology is interdisciplinary, in that it was developed so that the dry and wet lab teams worked together throughout the whole modeling process.
Alexakis, Dimitrios D; Mexis, Filippos-Dimitrios K; Vozinaki, Anthi-Eirini K; Daliakopoulos, Ioannis N; Tsanis, Ioannis K
2017-06-21
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies.
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added to the fault feature layer in the Bayesian network, while the other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records, so diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data are used. A case study has demonstrated that some information from human observation or system repair records is very helpful for fault diagnosis. The method is effective and efficient in diagnosing faults based on uncertain, incomplete information.
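A hedged sketch of the feature pipeline: decompose each vibration signal with EEMD (via the PyEMD package, an assumed stand-in for the paper's implementation), use normalized per-IMF energies as fault features, and feed them to a simple Bayesian classifier standing in for the full diagnostic Bayesian network.

```python
import numpy as np
from PyEMD import EEMD                      # pip install EMD-signal (assumed tooling)
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)

def make_signal(fault):
    s = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)
    return s + (0.8 * np.sin(2 * np.pi * 300 * t) if fault else 0.0)

def imf_energies(signal):
    imfs = EEMD().eemd(signal, max_imf=6)
    e = np.array([np.sum(imf ** 2) for imf in imfs])
    return e / e.sum()                      # normalized energy per IMF as features

labels = [0, 0, 0, 1, 1, 1]                 # toy healthy/faulty examples
feats = [imf_energies(make_signal(f)) for f in labels]
k = min(len(f) for f in feats)              # EEMD may return varying IMF counts
X = np.array([f[:k] for f in feats])
print(GaussianNB().fit(X, labels).predict(X))   # sanity check on the training set
```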
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-09
Notice of a proposed information collection: a survey and interviews to understand how NIH programs apply methodologies to improve their research programs (MIRP), covering network leadership, program administrators, and research site staff, in compliance with Section 3506(c)(2)(A) of the Paperwork Reduction Act (estimated survey burden: 2,500 respondents, 30 minutes each, 1,250 hours).
Seismic activity prediction using computational intelligence techniques in northern Pakistan
NASA Astrophysics Data System (ADS)
Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat
2017-10-01
An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology rests on an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed from past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters for use in prediction. Multiple computationally intelligent models are developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, a recurrent neural network, a random forest, a multilayer perceptron, a radial basis neural network, and a support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78%, for northern Pakistan.
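The parameter-screening step above lends itself to a compact illustration. The sketch below ranks candidate predictors by mutual information, a standard estimator of information gain; the data arrays are random stand-ins, not the study's seismic catalogue.

```python
# Illustrative information-gain feature screening; mutual information
# serves as the information-gain estimate. Data are placeholders.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

X = np.random.rand(500, 8)          # 8 seismic indicator parameters
y = np.random.randint(0, 2, 500)    # earthquake occurred (1) or not (0)

gain = mutual_info_classif(X, y, random_state=0)
selected = np.argsort(gain)[-6:]    # keep the six most informative parameters
print(sorted(selected))
```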
Multiobjective assessment of distributed energy storage location in electricity networks
NASA Astrophysics Data System (ADS)
Ribeiro Gonçalves, José António; Neves, Luís Pires; Martins, António Gomes
2017-07-01
This paper presents a methodology to inform a decision maker about the economic and technical impacts of possible management schemes for storage units when choosing the best location of distributed storage devices, using a multiobjective optimisation approach based on genetic algorithms. The methodology was applied to a case study, a known distribution network model in which the installation of distributed storage units was tested using lithium-ion batteries. The results show a significant influence of the charging/discharging profile of the batteries on the choice of their best location, as well as the relevance these choices may have for different network management objectives, for example reducing network energy losses or minimising voltage deviations. The results also show that an energy-only service is difficult to make cost-effective with the tested systems, owing both to capital cost and to the efficiency of conversion.
Jacobs, Wura; Goodson, Patricia; Barry, Adam E; McLeroy, Kenneth R
2016-05-01
Despite previous research indicating that adolescents' alcohol, tobacco, and other drug (ATOD) use depends on their sex and the sex composition of their social network, few social network studies consider sex differences and network sex composition as determinants of adolescents' ATOD use behavior. This systematic literature review of how social network analytic studies examine adolescent ATOD use behavior is guided by the following research questions: (1) How do studies conceptualize sex and network sex composition? (2) What types of network affiliations are employed to characterize adolescent networks? (3) What is the methodological quality of included studies? After searching several electronic databases (PsycINFO, EBSCO, and Communication Abstracts) and applying our inclusion/exclusion criteria, 48 studies were included in the review. Overall, few studies considered the sex composition of the networks in which adolescents are embedded as a determinant of adolescent ATOD use. Although the included studies all exhibited high methodological quality, the majority used only friendship networks to characterize adolescent social networks and consequently failed to capture the influence of other network types, such as romantic networks. School-based prevention programs could be strengthened by (1) selecting and targeting peer leaders based on sex, and (2) leveraging other types of social networks beyond simply friendships. © 2016, American School Health Association.
Kaonga, Nadi Nina; Labrique, Alain; Mechael, Patricia; Akosah, Eric; Ohemeng-Dapaah, Seth; Sakyi Baah, Joseph; Kodie, Richmond; Kanter, Andrew S; Levine, Orin
2013-04-03
Background The network structure of an organization influences how well or poorly an organization communicates and manages its resources. In the Millennium Villages Project site in Bonsaaso, Ghana, a mobile phone closed user group has been introduced for use by the Bonsaaso Millennium Villages Project Health Team and other key individuals. No assessment of the benefits or barriers of the use of the closed user group had been carried out. Objective The purpose of this research was to make the case for social network analysis methods to be applied in health systems research, specifically related to mobile health. Methods This study used mobile phone voice records from, interviews with, and call journals kept by a mobile phone closed user group consisting of the Bonsaaso Millennium Villages Project Health Team. Social network analysis methodology complemented by a qualitative component was used. Monthly voice data of the closed user group from Airtel Bharti Ghana were analyzed using UCINET, and visual depictions of the network were created using NetDraw. Interviews and call journals kept by informants were analyzed using NVivo. Results The methodology was successful in helping identify effective organizational structure. Members of the Health Management Team were the more central players in the network, rather than the Community Health Nurses (who might have been expected to be central). Conclusions Social network analysis methodology can be used to determine the most productive structure for an organization or team, identify gaps in communication, identify key actors with greatest influence, and more. In conclusion, this methodology can be a useful analytical tool, especially in the context of mobile health, health services, and operational and managerial research. PMID:23552721
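The study ran its centrality analysis in UCINET; a minimal equivalent with networkx looks like the sketch below. The call records are invented placeholders standing in for the closed user group's voice data.

```python
# Hedged sketch of the centrality analysis (done in UCINET in the study),
# here with networkx; call records are invented placeholders.
import networkx as nx

calls = [("nurse_A", "manager"), ("nurse_B", "manager"),
         ("manager", "director"), ("nurse_A", "nurse_B")]
G = nx.Graph(calls)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
central = max(degree, key=degree.get)   # most central actor in the group
print(central, degree[central], betweenness[central])
```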
Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia
2015-01-01
The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach of image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
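The 2D-FFT orientation analysis described above can be sketched as follows; the dominant-angle estimate uses doubled angles (orientation is defined modulo pi), and the circular-variance-style anisotropy index is our assumption rather than the paper's exact definition.

```python
# Sketch of 2D-FFT fiber-orientation analysis on an SHG-like image;
# the anisotropy index here is an assumed, common formulation.
import numpy as np

def spectral_orientation(image: np.ndarray):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = spec.shape
    y, x = np.indices(spec.shape)
    ang = np.arctan2(y - ny // 2, x - nx // 2)   # angle of each pixel
    # mean direction over doubled angles (orientation is mod pi)
    c = np.sum(spec * np.cos(2 * ang))
    s = np.sum(spec * np.sin(2 * ang))
    angle = 0.5 * np.arctan2(s, c)               # dominant spectral angle;
                                                 # fibers run perpendicular
    anisotropy = np.hypot(c, s) / spec.sum()     # 0 = isotropic, 1 = aligned
    return angle, anisotropy

angle, aniso = spectral_orientation(np.random.rand(256, 256))
```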
Unique sodium phosphosilicate glasses designed through extended topological constraint theory.
Zeng, Huidan; Jiang, Qi; Liu, Zhao; Li, Xiang; Ren, Jing; Chen, Guorong; Liu, Fude; Peng, Shou
2014-05-15
Sodium phosphosilicate glasses exhibit unique properties with mixed network formers and have various potential applications. However, a proper understanding of the network structures and a property-oriented design methodology based on compositional changes are lacking. In this study, we developed an extended topological constraint theory and applied it successfully to analyze the composition dependence of the glass transition temperature (Tg) and hardness of sodium phosphosilicate glasses. It was found that the hardness and Tg of the glasses do not always increase with the content of SiO2; maximum hardness and Tg occur at a certain SiO2 content. In particular, a unique glass (20Na2O-17SiO2-63P2O5) exhibits a low glass transition temperature (589 K) but still has relatively high hardness (4.42 GPa), mainly due to the high fraction of the highly coordinated network former Si((6)). Because of its convenient forming and manufacturing, this kind of phosphosilicate glass has many valuable applications in optical fibers, optical amplifiers, biomaterials, and fuel cells. The methodology can also be applied to other types of phosphosilicate glasses with similar structures.
Evolving RBF neural networks for adaptive soft-sensor design.
Alexandridis, Alex
2013-12-01
This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: On the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC Motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
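The second adaptation level, recursive least squares with exponential forgetting, admits a compact sketch. Variable names are illustrative, and the center-adaptation level of the algorithm is omitted.

```python
# Sketch of RLS with exponential forgetting for the RBF output weights;
# the hidden-layer (center) adaptation level is omitted.
import numpy as np

def rls_update(w, P, phi, y, lam=0.99):
    """One RLS step: phi = hidden-layer output vector, y = measured target."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)        # gain vector
    e = y - w @ phi                      # prediction error
    w = w + k * e                        # weight update
    P = (P - np.outer(k, Pphi)) / lam    # covariance update with forgetting
    return w, P

n = 10
w, P = np.zeros(n), np.eye(n) * 1e3      # standard RLS initialization
phi, y = np.random.rand(n), 1.0
w, P = rls_update(w, P, phi, y)
```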
Plazas-Nossa, Leonardo; Hofer, Thomas; Gruber, Günter; Torres, Andres
2017-02-01
This work proposes a methodology for forecasting online water quality data provided by UV-Vis spectrometry. To this end, a combination of principal component analysis (PCA), to reduce the dimensionality of the data set, and artificial neural networks (ANNs), for forecasting, was used. The results obtained were compared with those obtained using the discrete Fourier transform (DFT). The proposed methodology was applied to four absorbance time series data sets comprising a total of 5705 UV-Vis spectra. Absolute percentage errors obtained by applying the proposed PCA/ANN methodology vary between 10% and 13% for all four study sites. In general terms, the results were hardly generalizable, as they appeared to be highly dependent on the specific dynamics of the water system; however, some trends can be outlined. The PCA/ANN methodology gives better results than the PCA/DFT forecasting procedure over specific spectral ranges under the following conditions: (i) for the Salitre wastewater treatment plant (WWTP) (first hour) and Graz West R05 (first 18 min), from the last part of the UV range through the whole visible range; (ii) for the Gibraltar pumping station (first 6 min), for the whole UV-Vis absorbance spectrum; and (iii) for the San Fernando WWTP (first 24 min), from the whole UV range to the middle of the visible range.
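A minimal sketch of the PCA/ANN chain: compress the spectra with PCA, regress the next time step from the current one in score space, and map the forecast back to a spectrum. The spectra matrix is a random stand-in for the UV-Vis time series, and the layer sizes are arbitrary.

```python
# Sketch of PCA-compressed one-step-ahead spectral forecasting with an MLP.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

spectra = np.random.rand(500, 220)            # time steps x wavelengths
pca = PCA(n_components=5)
scores = pca.fit_transform(spectra)           # reduced representation

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(scores[:-1], scores[1:])            # next step from current step

next_scores = model.predict(scores[-1:])
next_spectrum = pca.inverse_transform(next_scores)   # forecast spectrum
```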
Measuring political polarization: Twitter shows the two sides of Venezuela
NASA Astrophysics Data System (ADS)
Morales, A. J.; Borondo, J.; Losada, J. C.; Benito, R. M.
2015-03-01
We say that a population is perfectly polarized when it is divided into two groups of the same size holding opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions, in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.
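An illustrative polarization index in the spirit of the abstract is sketched below: it returns 1 for a population split into two equal-size groups with opposite extreme opinions and 0 for a one-sided population. This particular functional form is our assumption, not the paper's exact index.

```python
# Assumed illustrative index: 1 = perfectly polarized, 0 = one-sided.
# Opinions are taken to lie in [-1, 1]; this is not the paper's exact formula.
import numpy as np

def polarization_index(opinions: np.ndarray) -> float:
    neg, pos = opinions[opinions < 0], opinions[opinions >= 0]
    if len(neg) == 0 or len(pos) == 0:
        return 0.0
    balance = 1 - abs(len(pos) - len(neg)) / len(opinions)  # group-size balance
    separation = (pos.mean() - neg.mean()) / 2              # opinion gap in [0, 1]
    return balance * separation

print(polarization_index(np.concatenate([-np.ones(500), np.ones(500)])))  # 1.0
```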
Rain/No-Rain Identification from Bispectral Satellite Information using Deep Neural Networks
NASA Astrophysics Data System (ADS)
Tao, Y.
2016-12-01
Satellite-based precipitation estimation products have the advantages of high resolution and global coverage. However, they still suffer from insufficient accuracy. Accurate precipitation estimation from satellite data depends on two key factors: sufficient precipitation information in the satellite observations and proper methodologies to extract that information effectively. This study applies state-of-the-art machine learning methodologies to bispectral satellite information for rain/no-rain detection. Specifically, we use deep neural networks to extract features from the infrared and water vapor channels and connect them to precipitation identification. To evaluate the effectiveness of the methodology, we first apply it to infrared data only (Model DL-IR only), the most commonly used input for satellite-based precipitation estimation. We then incorporate water vapor data (Model DL-IR+WV) to further improve prediction performance. The radar Stage IV dataset is used as the ground measurement for parameter calibration. The operational product Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) is used as a reference to compare the performance of both models in winter and summer seasons. The experiments show significant improvement for both models in precipitation identification. The overall performance gains in the Critical Success Index (CSI) over the verification periods are 21.60% and 43.66% for Model DL-IR only and Model DL-IR+WV, respectively, compared to PERSIANN-CCS. Moreover, specific case studies show that the water vapor channel information and the deep neural networks effectively help recover a large number of missing precipitation pixels under warm clouds while reducing false alarms under cold clouds.
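The Critical Success Index used to score both models is a standard verification formula; a minimal implementation over binary rain masks follows.

```python
# Critical Success Index (CSI) = hits / (hits + misses + false alarms),
# computed here from binary rain/no-rain maps.
import numpy as np

def critical_success_index(pred: np.ndarray, obs: np.ndarray) -> float:
    hits = np.sum((pred == 1) & (obs == 1))
    misses = np.sum((pred == 0) & (obs == 1))
    false_alarms = np.sum((pred == 1) & (obs == 0))
    return hits / (hits + misses + false_alarms)

# e.g., pixel-wise rain masks from a model and from Stage IV radar
print(critical_success_index(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 1])))
```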
Discovering SIFIs in Interbank Communities
Pecora, Nicolò; Rovira Kaltwasser, Pablo; Spelta, Alessandro
2016-01-01
This paper proposes a new methodology based on non-negative matrix factorization to detect communities and to identify central nodes in a network as well as within communities. The method is specifically designed for directed weighted networks and, consequently, it has been applied to the interbank network derived from the e-MID interbank market. In an interbank network indeed links are directed, representing flows of funds between lenders and borrowers. Besides distinguishing between Systemically Important Borrowers and Lenders, the technique complements the detection of systemically important banks, revealing the community structure of the network, that proxies the most plausible areas of contagion of institutions’ distress. PMID:28002445
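A hedged sketch of the NMF step: factorize the directed weighted adjacency matrix and read lender-side and borrower-side communities from the factor loadings. Assigning each bank to its strongest factor is a common reading of such methods, not necessarily the authors' exact procedure, and the matrix is a random stand-in for e-MID data.

```python
# Sketch of NMF community detection on a directed weighted adjacency matrix;
# the hard community assignment below is an assumed simplification.
import numpy as np
from sklearn.decomposition import NMF

W = np.random.rand(30, 30)              # interbank lending volumes (stand-in)
model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
F = model.fit_transform(W)              # lender-side factor loadings
H = model.components_                   # borrower-side factor loadings

lender_community = F.argmax(axis=1)     # community of each bank as lender
borrower_community = H.argmax(axis=0)   # community of each bank as borrower
```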
Levack, William M; Meyer, Thorsten; Negrini, Stefano; Malmivaara, Antti
2017-10-01
Cochrane Rehabilitation aims to improve the application of evidence-based practice in rehabilitation. It also aims to support Cochrane in the production of reliable, clinically meaningful syntheses of evidence related to the practice of rehabilitation, while accommodating the many methodological challenges facing the field. To this end, Cochrane Rehabilitation established a Methodology Committee to examine, explore and find solutions for the methodological challenges related to evidence synthesis and knowledge translation in rehabilitation. We conducted an international online survey via Cochrane Rehabilitation networks to canvass opinions regarding the future work priorities for this committee and to seek information on people's current capabilities to assist with this work. The survey findings indicated strongest interest in work on how reviewers have interpreted and applied Cochrane methods in reviews on rehabilitation topics in the past, and on gathering a collection of existing publications on review methods for undertaking systematic reviews relevant to rehabilitation. Many people are already interested in contributing to the work of the Methodology Committee and there is a large amount of expertise for this work in the extended Cochrane Rehabilitation network already.
Bio-inspired algorithms applied to molecular docking simulations.
Heberlé, G; de Azevedo, W F
2011-01-01
Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.
Naegle, Kristen M; Welsch, Roy E; Yaffe, Michael B; White, Forest M; Lauffenburger, Douglas A
2011-07-01
Advances in proteomic technologies continue to substantially accelerate capability for generating experimental data on protein levels, states, and activities in biological samples. For example, studies on receptor tyrosine kinase signaling networks can now capture the phosphorylation state of hundreds to thousands of proteins across multiple conditions. However, little is known about the function of many of these protein modifications, or the enzymes responsible for modifying them. To address this challenge, we have developed an approach that enhances the power of clustering techniques to infer functional and regulatory meaning of protein states in cell signaling networks. We have created a new computational framework for applying clustering to biological data in order to overcome the typical dependence on specific a priori assumptions and expert knowledge concerning the technical aspects of clustering. Multiple clustering analysis methodology ('MCAM') employs an array of diverse data transformations, distance metrics, set sizes, and clustering algorithms, in a combinatorial fashion, to create a suite of clustering sets. These sets are then evaluated based on their ability to produce biological insights through statistical enrichment of metadata relating to knowledge concerning protein functions, kinase substrates, and sequence motifs. We applied MCAM to a set of dynamic phosphorylation measurements of the ERBB network to explore the relationships between algorithmic parameters and the biological meaning that could be inferred, and report on interesting biological predictions. Further, we applied MCAM to multiple phosphoproteomic datasets for the ERBB network, which allowed us to compare independent and incomplete overlapping measurements of phosphorylation sites in the network. We report specific and global differences of the ERBB network stimulated with different ligands and with changes in HER2 expression. Overall, we offer MCAM as a broadly applicable approach for analysis of proteomic data which may help increase the current understanding of molecular networks in a variety of biological problems. © 2011 Naegle et al.
Marin-Garcia, Ignacio; Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Perez-Jimenez, Rafael
2017-01-01
Visible Light Communications (VLC) is a cutting-edge data communication technology being considered for a wide range of applications, such as inter-vehicle communication and Local Area Network (LAN) communication. As a novel technology, some aspects of the implementation of VLC have not been deeply considered or tested. Among these aspects, security and its implementation may become an obstacle to VLC's broad usage. In this article, we use the well-known Risk Matrix methodology to determine the relative risk that several common attacks pose to a VLC network. Four examples, a War Driving, a Queensland-like Denial of Service, a Preshared Key Cracking, and an Evil Twin attack, illustrate the application of the methodology to a VLC implementation. The attacks used also cover the different areas delimited by the attack taxonomy used in this work. By defining and determining which attacks present a greater risk, the results of this work indicate which areas should receive investment to increase the safety of VLC networks. PMID:29186184
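The risk matrix computation itself is simple; the sketch below scores each attack as likelihood times impact on ordinal scales. The example scores and thresholds are invented for illustration, not taken from the article.

```python
# Minimal risk-matrix classification: risk = likelihood x impact on
# ordinal 1-5 scales; all scores and thresholds here are assumed examples.
ATTACKS = {
    "war driving": (4, 2),
    "queensland-like DoS": (2, 4),
    "preshared key cracking": (2, 5),
    "evil twin": (3, 4),
}

def risk_level(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    return "high" if score >= 15 else "medium" if score >= 6 else "low"

for name, (likelihood, impact) in ATTACKS.items():
    print(f"{name}: {risk_level(likelihood, impact)}")
```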
Batalle, Dafnis; Muñoz-Moreno, Emma; Figueras, Francesc; Bargallo, Nuria; Eixarch, Elisenda; Gratacos, Eduard
2013-12-01
Obtaining individual biomarkers for the prediction of altered neurological outcome is a challenge of modern medicine and neuroscience. Connectomics based on magnetic resonance imaging (MRI) stands as a good candidate for exhaustively extracting information from MRI by integrating it into a few network features that can be used as individual biomarkers of neurological outcome. However, this approach typically requires diffusion and/or functional MRI to extract individual brain networks, which involve long acquisition times and extreme sensitivity to motion artifacts, critical problems when scanning fetuses and infants. Extraction of individual networks based on the morphological similarity of gray matter is a new approach that benefits from the power of graph theory analysis to describe gray matter morphology as a large-scale morphological network, obtained from a typical clinical anatomical acquisition such as T1-weighted MRI. In the present paper we propose a methodology to normalize these large-scale morphological networks to a brain network with standardized size based on a parcellation scheme. The proposed methodology was applied to reconstruct individual brain networks of 63 one-year-old infants, 41 infants with intrauterine growth restriction (IUGR) and 22 controls, showing altered network features in the IUGR group and their association with neurodevelopmental outcome at two years of age, assessed with the Bayley Scales of Infant and Toddler Development, third edition, by means of ordinal regression analysis of the network features. Although it must be more widely assessed, this methodology stands as a good candidate for the development of biomarkers of altered neurodevelopment in the pediatric population. © 2013 Elsevier Inc. All rights reserved.
An extensive assessment of network alignment algorithms for comparison of brain connectomes.
Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario
2017-06-06
Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neuroscience. The modeling and analysis of connectomes is therefore a growing area. Here we focus on the representation of connectomes using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network; this process is referred to as parcellation. Atlas-based parcellations have known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and to align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling the brain using graph theory and subsequently comparing the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally; then we test six existing state-of-the-art network aligners on diffusion MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures. We also evaluate the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes. The analysis shows that MAGNA++ is the best global alignment algorithm. This paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation of brain connectomes. The methodology has been tested on several brain datasets.
The Use of Multi-Criteria Evaluation and Network Analysis in the Area Development Planning Process
2013-03-01
The purpose of this research was to develop improvements to the area development planning process. These plans are used to improve operations within an installation sub-section by altering the physical layout of facilities. One methodology was developed based on applying network analysis concepts to ... layouts. The alternative layout scoring process, based on multi-criteria evaluation, returns a quantitative score for each alternative layout and a ...
Backbone of complex networks of corporations: the flow of control.
Glattfelder, J B; Battiston, S
2009-09-01
We present a methodology to extract the backbone of complex networks based on the weight and direction of links, as well as on nontopological properties of nodes. We show how the methodology can be applied in general to networks in which mass or energy is flowing along the links. In particular, the procedure enables us to address important questions in economics, namely, how control and wealth are structured and concentrated across national markets. We report on the first cross-country investigation of ownership networks, focusing on the stock markets of 48 countries around the world. On the one hand, our analysis confirms results expected on the basis of the literature on corporate control, namely, that in Anglo-Saxon countries control tends to be dispersed among numerous shareholders. On the other hand, it also reveals that in the same countries, control is found to be highly concentrated at the global level, namely, lying in the hands of very few important shareholders. Interestingly, the exact opposite is observed for European countries. These results have previously not been reported as they are not observable without the kind of network analysis developed here.
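A greatly simplified backbone extraction is sketched below: for each node, keep only its strongest outgoing ownership link. The published method also incorporates non-topological node properties, so this sketch shows the network-pruning idea only.

```python
# Greatly simplified backbone extraction: keep each node's strongest
# outgoing weighted link. The actual method also uses non-topological
# node properties, which are omitted here.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([("A", "B", 0.6), ("A", "C", 0.1),
                           ("B", "C", 0.8), ("C", "A", 0.3)])

backbone = nx.DiGraph()
for node in G:
    out = sorted(G.out_edges(node, data="weight"),
                 key=lambda e: e[2], reverse=True)
    if out:
        u, v, w = out[0]                 # strongest link carries most control
        backbone.add_edge(u, v, weight=w)
print(backbone.edges(data=True))
```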
Dynamic modeling and optimization for space logistics using time-expanded networks
NASA Astrophysics Data System (ADS)
Ho, Koki; de Weck, Olivier L.; Hoffman, Jeffrey A.; Shishko, Robert
2014-12-01
This research develops a dynamic logistics network formulation for lifecycle optimization of mission sequences as a system-level integrated method to find an optimal combination of technologies to be used at each stage of the campaign. This formulation can find the optimal transportation architecture while considering its technology trades over time. The proposed methodologies are inspired by ground logistics analysis techniques based on linear programming network optimization. In particular, the time-expanded network and its extension are developed for dynamic space logistics network optimization, trading solution quality against computational load. In this paper, the methodologies are applied to a human Mars exploration architecture design problem. The results reveal multiple dynamic system-level trades over time and give recommendations on the optimal strategy for the human Mars exploration architecture. The trades considered include those between In-Situ Resource Utilization (ISRU) and propulsion technologies, as well as orbit and depot location selections over time. This research serves as a precursor for eventual permanent settlement and colonization of other planets by humans and our becoming a multi-planet species.
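A toy time-expanded network can be posed as a min-cost flow and solved with networkx, as sketched below; nodes are (location, time) pairs, and all arc costs, capacities, and demands are invented. The study's formulation additionally models propellant mass and ISRU trades, which this sketch omits.

```python
# Toy time-expanded network: nodes are (location, time) pairs, and a
# min-cost flow routes payload through them. All arc data are invented.
import networkx as nx

G = nx.DiGraph()
G.add_node(("Earth", 0), demand=-10)     # 10 units of payload depart ...
G.add_node(("Mars", 2), demand=10)       # ... and must reach Mars by t = 2
for t in range(3):
    G.add_edge(("Earth", t), ("Earth", t + 1), weight=0)   # waiting arcs
G.add_edge(("Earth", 0), ("LEO", 1), weight=3, capacity=10)
G.add_edge(("Earth", 1), ("LEO", 2), weight=3, capacity=10)
G.add_edge(("LEO", 1), ("Mars", 2), weight=8, capacity=10)

flow = nx.min_cost_flow(G)               # optimal routing over time
```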
Auditing SNOMED Relationships Using a Converse Abstraction Network
Wei, Duo; Halper, Michael; Elhanan, Gai; Chen, Yan; Perl, Yehoshua; Geller, James; Spackman, Kent A.
2009-01-01
In SNOMED CT, a given kind of attribute relationship is defined between two hierarchies, a source and a target. Certain hierarchies (or subhierarchies) serve only as targets, with no outgoing relationships of their own. However, converse relationships—those pointing in a direction opposite to the defined relationships—while not explicitly represented in SNOMED’s inferred view, can be utilized in forming an alternative view of a source. In particular, they can help shed light on a source hierarchy’s overall relationship structure. Toward this end, an abstraction network, called the converse abstraction network (CAN), derived automatically from a given SNOMED hierarchy is presented. An auditing methodology based on the CAN is formulated. The methodology is applied to SNOMED’s Device subhierarchy and the related device relationships of the Procedure hierarchy. The results indicate that the CAN is useful in finding opportunities for refining and improving SNOMED. PMID:20351941
Games network and application to PAs system.
Chettaoui, C; Delaplace, F; Manceny, M; Malo, M
2007-02-01
In this article, we present a game-theory-based framework, named games network, for modeling biological interactions. After introducing the theory, we describe more precisely the methodology for modeling biological interactions. We then apply it to the plasminogen activator system (PAs), a signal transduction pathway involved in cancer cell migration. The games network theory extends game theory by including the locality of interactions. Each game in a games network represents local interactions between biological agents. The PAs system is implicated in cytoskeleton modifications via regulation of actin and microtubules, which in turn favors cell migration. The games network model has given us a better understanding of the regulation involved in the PAs system.
A data fusion-based methodology for optimal redesign of groundwater monitoring networks
NASA Astrophysics Data System (ADS)
Hosseini, Marjan; Kerachian, Reza
2017-09-01
In this paper, a new data fusion-based methodology is presented for spatio-temporal (S-T) redesigning of Groundwater Level Monitoring Networks (GLMNs). The kriged maps of three different criteria (i.e. marginal entropy of water table levels, estimation error variances of mean values of water table levels, and estimation values of long-term changes in water level) are combined for determining monitoring sub-areas of high and low priorities in order to consider different spatial patterns for each sub-area. The best spatial sampling scheme is selected by applying a new method, in which a regular hexagonal gridding pattern and the Thiessen polygon approach are respectively utilized in sub-areas of high and low monitoring priorities. An Artificial Neural Network (ANN) and a S-T kriging models are used to simulate water level fluctuations. To improve the accuracy of the predictions, results of the ANN and S-T kriging models are combined using a data fusion technique. The concept of Value of Information (VOI) is utilized to determine two stations with maximum information values in both sub-areas with high and low monitoring priorities. The observed groundwater level data of these two stations are considered for the power of trend detection, estimating periodic fluctuations and mean values of the stationary components, which are used for determining non-uniform sampling frequencies for sub-areas. The proposed methodology is applied to the Dehgolan plain in northwestern Iran. The results show that a new sampling configuration with 35 and 7 monitoring stations and sampling intervals of 20 and 32 days, respectively in sub-areas with high and low monitoring priorities, leads to a more efficient monitoring network than the existing one containing 52 monitoring stations and monthly temporal sampling.
The added value of thorough economic evaluation of telemedicine networks.
Le Goff-Pronost, Myriam; Sicotte, Claude
2010-02-01
This paper proposes a thorough framework for the economic evaluation of telemedicine networks. A standard cost analysis methodology was used as the initial base, similar to the evaluation method currently being applied to telemedicine, and to which we suggest adding subsequent stages that enhance the scope and sophistication of the analytical methodology. We completed the methodology with a longitudinal and stakeholder analysis, followed by the calculation of a break-even threshold, a calculation of the economic outcome based on net present value (NPV), an estimate of the social gain through external effects, and an assessment of the probability of social benefits. In order to illustrate the advantages, constraints and limitations of the proposed framework, we tested it in a paediatric cardiology tele-expertise network. The results demonstrate that the project threshold was not reached after the 4 years of the study. Also, the calculation of the project's NPV remained negative. However, the additional analytical steps of the proposed framework allowed us to highlight alternatives that can make this service economically viable. These included: use over an extended period of time, extending the network to other telemedicine specialties, or including it in the services offered by other community hospitals. In sum, the results presented here demonstrate the usefulness of an economic evaluation framework as a way of offering decision makers the tools they need to make comprehensive evaluations of telemedicine networks.
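The NPV and break-even stages of the framework reduce to textbook formulas; the sketch below uses invented cash flows and a 5% discount rate for illustration.

```python
# Net present value and discounted break-even; all figures are invented.
def npv(rate: float, cash_flows: list) -> float:
    """Net present value; cash_flows[0] is the year-0 (investment) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-120_000, 25_000, 30_000, 30_000, 32_000]  # 4-year project
print(npv(0.05, flows))        # negative, as for the network in the case study

# break-even: first year where the cumulative discounted flow turns positive
cum, year = 0.0, None
for t, cf in enumerate(flows):
    cum += cf / (1 + 0.05) ** t
    if cum >= 0 and year is None:
        year = t
print(year)                    # None -> threshold not reached in the horizon
```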
Coevolution of Epidemics, Social Networks, and Individual Behavior: A Case Study
NASA Astrophysics Data System (ADS)
Chen, Jiangzhuo; Marathe, Achla; Marathe, Madhav
This research shows how a limited supply of antivirals can be distributed optimally between the hospitals and the market so that the attack rate is minimized and enough revenue is generated to recover the cost of the antivirals. Results using an individual based model find that prevalence elastic demand behavior delays the epidemic and change in the social contact network induced by isolation reduces the peak of the epidemic significantly. A microeconomic analysis methodology combining behavioral economics and agent-based simulation is a major contribution of this work. In this paper we apply this methodology to analyze the fairness of the stockpile distribution, and the response of human behavior to disease prevalence level and its interaction with the market.
Validation and quantification of uncertainty in coupled climate models using network analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bracco, Annalisa
We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation and is substantially new within the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify “areas”, i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e. the ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested, and it has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area identification algorithm to account for autocorrelation in the data. The new methodology based on complex network analysis has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956-2005 were constrained towards observations or reanalysis data sets, and their differences quantified using two metrics. Projected changes from 2051 to 2300 under the scenario with the highest representative and extended concentration pathways (RCP8.5 and ECP8.5) were then determined. The network of models capable of reproducing the major climate modes well in the recent past changes little during this century. In contrast, among those models, the uncertainties in the projections after 2100 remain substantial and are primarily associated with divergences in the representation of the modes of variability, particularly of the El Niño Southern Oscillation (ENSO), and their connectivity, and therefore with their intrinsic predictability, more so than with differences in the mean-state evolution. Additionally, we evaluated the relation between the size and the ‘strength’ of the area identified by the network analysis as corresponding to ENSO, noting that only a small subset of models can realistically reproduce the observations.
Actor-network theory: a tool to support ethical analysis of commercial genetic testing.
Williams-Jones, Bryn; Graham, Janice E
2003-12-01
Social, ethical and policy analysis of the issues arising from gene patenting and commercial genetic testing is enhanced by the application of science and technology studies, and Actor-Network Theory (ANT) in particular. We suggest the potential for transferring ANT's flexible nature to an applied heuristic methodology for gathering empirical information and for analysing the complex networks involved in the development of genetic technologies. Three concepts are explored in this paper--actor-networks, translation, and drift--and applied to the case of Myriad Genetics and their commercial BRACAnalysis genetic susceptibility test for hereditary breast cancer. Treating this test as an active participant in socio-technical networks clarifies the extent to which it interacts with, shapes and is shaped by people, other technologies, and institutions. Such an understanding enables more sophisticated and nuanced technology assessment, academic analysis, as well as public debate about the social, ethical and policy implications of the commercialization of new genetic technologies.
Charge transport network dynamics in molecular aggregates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Nicholas E.; Chen, Lin X.; Ratner, Mark A.
2016-07-20
Due to the nonperiodic nature of charge transport in disordered systems, generating insight into static charge transport networks, as well as analyzing the network dynamics, can be challenging. Here, we apply time-dependent network analysis to scrutinize the charge transport networks of two representative molecular semiconductors: a rigid n-type molecule, perylenediimide, and a flexible p-type molecule, bBDT(TDPP)2. Simulations reveal the relevant timescale for local transfer integral decorrelation to be ~100 fs, which is shown to be faster than that of a crystalline morphology of the same molecule. Using a simple graph metric, global network changes are observed over timescales competitive with charge carrier lifetimes. These insights demonstrate that static charge transport networks are qualitatively inadequate, whereas average networks often overestimate network connectivity. Finally, a simple methodology for tracking dynamic charge transport properties is proposed.
Thompson-Bean, E; Das, R; McDaid, A
2016-10-31
We present a novel methodology for the design and manufacture of complex biologically inspired soft robotic fluidic actuators. The methodology is applied to the design and manufacture of a prosthetic for the hand. Real human hands are scanned to produce a 3D model of a finger, and pneumatic networks are implemented within it to produce a biomimetic bending motion. The finger is then partitioned into material sections, and a genetic algorithm based optimization, using finite element analysis, is employed to discover the optimal material for each section. This is based on two biomimetic performance criteria. Two sets of optimizations using two material sets are performed. Promising optimized material arrangements are fabricated using two techniques to validate the optimization routine, and the fabricated and simulated results are compared. We find that the optimization is successful in producing biomimetic soft robotic fingers and that fabrication of the fingers is possible. Limitations and paths for development are discussed. This methodology can be applied to other fluidic soft robotic devices.
Han, Ping; Luan, Feng; Yan, Xizu; Gao, Yuan; Liu, Huitao
2012-01-01
A method for the separation and determination of honokiol and magnolol in Magnolia officinalis and its medicinal preparation is developed by capillary zone electrophoresis and response surface methodology. The concentration of borate, content of organic modifier, and applied voltage are selected as variables. The optimized conditions (i.e., 16 mmol/L sodium tetraborate at pH 10.0, 11% methanol, applied voltage of 25 kV and UV detection at 210 nm) are obtained and successfully applied to the analysis of honokiol and magnolol in Magnolia officinalis and Huoxiang Zhengqi Liquid. Good separation is achieved within 6 min. The limits of detection are 1.67 µg/mL for honokiol and 0.83 µg/mL for magnolol, respectively. In addition, an artificial neural network with “3-7-1” structure based on the ratio of peak resolution to the migration time of the later component (Rs/t) given by Box-Behnken design is also reported, and the predicted results are in good agreement with the values given by the mathematic software and the experimental results. PMID:22291059
Yabalak, Erdal
2018-05-18
This study investigated the mineralization of ticarcillin in artificially prepared aqueous solution, representing ticarcillin-contaminated waters, which constitute a serious problem for human health. Removals of 81.99% of total organic carbon, 79.65% of chemical oxygen demand, and 94.35% of ticarcillin were achieved using the eco-friendly, time-saving, powerful and easy-to-apply subcritical water oxidation method in the presence of a safe-to-use oxidizing agent, hydrogen peroxide. Central composite design, which belongs to response surface methodology, was applied to design the degradation experiments, optimize the method, and evaluate the effects of the system variables, namely temperature, hydrogen peroxide concentration, and treatment time, on the responses. In addition, theoretical equations were proposed for each removal process. ANOVA tests were used to evaluate the reliability of the fitted models. F values of 245.79, 88.74, and 48.22 were found for total organic carbon removal, chemical oxygen demand removal, and ticarcillin removal, respectively. Moreover, artificial neural network modeling was applied to estimate the response in each case, and its prediction and optimization performance was statistically examined and compared to that of the central composite design.
Stabilization of perturbed Boolean network attractors through compensatory interactions
2014-01-01
Background Understanding and ameliorating the effects of network damage are of significant interest, due in part to the variety of applications in which network damage is relevant. For example, the effects of genetic mutations can cascade through within-cell signaling and regulatory networks and alter the behavior of cells, possibly leading to a wide variety of diseases. The typical approach to mitigating network perturbations is to consider the compensatory activation or deactivation of system components. Here, we propose a complementary approach wherein interactions are instead modified to alter key regulatory functions and prevent the network damage from triggering a deregulatory cascade. Results We implement this approach in a Boolean dynamic framework, which has been shown to effectively model the behavior of biological regulatory and signaling networks. We show that the method can stabilize any single state (e.g., fixed point attractors or time-averaged representations of multi-state attractors) to be an attractor of the repaired network. We show that the approach is minimalistic in that few modifications are required to provide stability to a chosen attractor and specific in that interventions do not have undesired effects on the attractor. We apply the approach to random Boolean networks, and further show that the method can in some cases successfully repair synchronous limit cycles. We also apply the methodology to case studies from drought-induced signaling in plants and T-LGL leukemia and find that it is successful in both stabilizing desired behavior and in eliminating undesired outcomes. Code is made freely available through the software package BooleanNet. Conclusions The methodology introduced in this report offers a complementary way to manipulating node expression levels. A comprehensive approach to evaluating network manipulation should take an "all of the above" perspective; we anticipate that theoretical studies of interaction modification, coupled with empirical advances, will ultimately provide researchers with greater flexibility in influencing system behavior. PMID:24885780
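A self-contained toy of the underlying Boolean dynamic framework is sketched below: synchronous updates iterated until the trajectory repeats, which exposes the attractor. The authors used the BooleanNet package; this sketch shows only the dynamics, not the compensatory-interaction repair algorithm, and the three-node rule set is invented.

```python
# Minimal synchronous Boolean network and attractor search; the rule set
# is an invented three-node example, not a network from the paper.
RULES = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: not s["A"],
}

def step(state):
    """One synchronous update of all nodes."""
    return {n: f(state) for n, f in RULES.items()}

state, seen = {"A": True, "B": False, "C": False}, []
while state not in seen:            # iterate until the trajectory repeats
    seen.append(state)
    state = step(state)
print(seen[seen.index(state):])     # the attractor (fixed point or cycle)
```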
Using Networks To Understand Medical Data: The Case of Class III Malocclusions
Scala, Antonio; Auconi, Pietro; Scazzocchio, Marco; Caldarelli, Guido; McNamara, James A.; Franchi, Lorenzo
2012-01-01
A system of elements that interact or regulate each other can be represented by a mathematical object called a network. While network analysis has been successfully applied to high-throughput biological systems, less has been done regarding its application in more applied fields of medicine; here we show an application based on standard medical diagnostic data. We apply network analysis to Class III malocclusion, one of the orofacial anomalies most difficult to understand and treat. We hypothesize that different interactions of the skeletal components can contribute to pathological disequilibrium; in order to test this hypothesis, we apply network analysis to 532 young female Class III patients. The topology of the Class III malocclusion obtained by network analysis shows a strong co-occurrence of abnormal skeletal features. The pattern of these occurrences influences the vertical and horizontal balance of disharmony in skeletal form and position. Patients with more unbalanced orthodontic phenotypes show a preponderance of pathological skeletal nodes and minor relevance of adaptive dentoalveolar equilibrating nodes. Furthermore, by applying Power Graph analysis we identify functional modules among orthodontic nodes. These modules correspond to groups of tightly inter-related features and presumably constitute the key regulators of plasticity and the sites of unbalance of the growing dentofacial Class III system. The data of the present study show that, at their most basic abstraction level, orofacial characteristics can be represented as graphs, using nodes to represent orthodontic characteristics and edges to represent their various types of interactions. The application of this mathematical model could improve the interpretation of quantitative, patient-specific information and help to better target therapy. Last but not least, the methodology we have applied in analyzing orthodontic features can easily be applied to other fields of medical science. PMID:23028552
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua Daniel; Carr, Christina; Pettit, Erin C.
We apply a fully autonomous icequake detection methodology to a single day of high-sample-rate (200 Hz) seismic network data recorded from the terminus of Taylor Glacier, Antarctica, that temporally coincided with a brine release episode near Blood Falls (May 13, 2014). We demonstrate a statistically validated procedure to assemble waveforms triggered by icequakes into populations of clusters linked by intra-event waveform similarity. Our processing methodology implements a noise-adaptive power detector coupled with a complete-linkage clustering algorithm and a noise-adaptive correlation detector. This detector chain reveals a population of 20 multiplet sequences that includes ~150 icequakes and produces zero false alarms on the concurrent, diurnally variable noise. Our results are very promising for identifying changes in background seismicity associated with the presence or absence of brine release episodes. We thereby suggest that our methodology could be applied to longer time periods to establish a brine-release monitoring program for Blood Falls based on icequake detections.
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
Applying graphs and complex networks to football metric interpretation.
Arriaza-Ardiles, E; Martín-González, J M; Zuniga, M D; Sánchez-Flores, J; de Saa, Y; García-Manso, J M
2018-02-01
This work presents a methodology for analysing the interactions between players in a football team from the point of view of graph theory and complex networks. We model the complex network of passing interactions between players of the same team in 32 official matches of the Liga de Fútbol Profesional (Spain), using a passing/reception graph. This methodology allows us to understand the play structure of the team by analysing the offensive phases of game-play. We utilise two different strategies for characterising the contribution of the players to the team: the clustering coefficient, and centrality metrics (closeness and betweenness). We show the application of this methodology by analysing the performance of a professional Spanish team according to these metrics and the distribution of passing/reception in the field. Keeping in mind the dynamic nature of collective sports, in the future we will incorporate metrics which allow us to analyse the performance of the team according to the circumstances of game-play and to different contextual variables, such as the utilisation of field space, time, and the ball, in specific tactical situations. Copyright © 2017 Elsevier B.V. All rights reserved.
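The passing-network metrics named above map directly onto networkx calls, as sketched below; the pass list is an invented stand-in for real match data, with edge weights counting repeated passes.

```python
# Sketch of the passing-network metrics with networkx; the pass list is
# an invented placeholder for real match data.
import networkx as nx

passes = [("GK", "DF1"), ("DF1", "MF1"), ("MF1", "FW1"),
          ("MF1", "DF1"), ("DF1", "MF2"), ("MF2", "FW1")]
G = nx.DiGraph()
for u, v in passes:
    w = G.get_edge_data(u, v, {"weight": 0})["weight"]
    G.add_edge(u, v, weight=w + 1)       # edge weight = number of passes

clustering = nx.clustering(G.to_undirected())
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G, weight="weight")
print(max(betweenness, key=betweenness.get))   # key playmaker candidate
```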
Reduction of streamflow monitoring networks by a reference point approach
NASA Astrophysics Data System (ADS)
Cetinkaya, Cem P.; Harmancioglu, Nilgun B.
2014-05-01
Adoption of an integrated approach to water management strongly forces policy and decision-makers to focus on hydrometric monitoring systems as well. Existing hydrometric networks need to be assessed and revised against the requirements on water quantity data to support integrated management. One of the questions that a network assessment study should resolve is whether a current monitoring system can be consolidated in view of the increased expenditures in time, money and effort imposed on the monitoring activity. Within the last decade, governmental monitoring agencies in Turkey have foreseen an audit of all their basin networks in view of prevailing economic pressures. In particular, they question how to decide whether monitoring should be continued or terminated at a particular site in a network. The present study was initiated to address this question by examining the applicability of a method called the “reference point approach” (RPA) for network assessment and reduction purposes. The main objective of the study is to develop an easily applicable and flexible network reduction methodology, focusing mainly on the assessment of the “performance” of existing streamflow monitoring networks in view of variable operational purposes. The methodology is applied to 13 hydrometric stations in the Gediz Basin, along the Aegean coast of Turkey. The results have shown that the simplicity of the method, in contrast to more complicated computational techniques, is an asset that facilitates the involvement of decision makers in applying the methodology, allowing a more interactive assessment procedure between the monitoring agency and the network designer. The method permits ranking of hydrometric stations with regard to multiple objectives of monitoring and the desired attributes of the basin network. Another distinctive feature of the approach is that it also assists decision making in cases with limited data and metadata. These features of RPA highlight its advantages over existing network assessment and reduction methods.
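One plausible reading of such a reference-point ranking is sketched below: station scores on several monitoring criteria are normalized, and stations are ranked by weighted distance to an ideal point. The criteria, weights, and scores are invented; the actual RPA formulation may differ.

```python
# Sketch of a reference-point ranking for monitoring stations. Criteria,
# weights, and scores are invented; the actual RPA weighting may differ.
import numpy as np

stations = ["S1", "S2", "S3", "S4"]
# columns: record length (yr), basin coverage score, data quality score
scores = np.array([[45, 0.9, 0.8],
                   [12, 0.4, 0.9],
                   [30, 0.7, 0.5],
                   [ 8, 0.2, 0.6]], float)
weights = np.array([0.5, 0.3, 0.2])

norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))  # higher = better
ideal = np.ones(norm.shape[1])                 # the reference point
dist = np.sqrt(((weights * (ideal - norm)) ** 2).sum(axis=1))

for name, d in sorted(zip(stations, dist), key=lambda s: s[1]):
    print(name, round(d, 3))                   # closest to ideal ranks first
```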
High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.
Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent
2016-08-01
Spiking neural networks (SNN) are the latest generation of neural networks, which attempt to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is most efficiently exploited. Liquid state machines (LSM) have arisen as a strategic technique for implementing recurrent SNN designs with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSM by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.
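The hardware trick behind probabilistic (stochastic) computing can be shown in a few lines: a value p in [0, 1] is encoded as a random bitstream with P(bit = 1) = p, so multiplying two independent values needs only a bitwise AND, one gate per bit. A software sketch, with stream length trading accuracy for time:

```python
# Stochastic-computing sketch: multiplication as a bitwise AND of
# Bernoulli bitstreams. Stream length is an illustrative choice.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                      # stream length

def encode(p, n=N):
    return rng.random(n) < p     # Bernoulli bitstream with mean p

a, b = 0.6, 0.35
product_stream = encode(a) & encode(b)   # one AND gate per bit in hardware
print(product_stream.mean())             # ~0.21 = 0.6 * 0.35
```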
Motif-Synchronization: A new method for analysis of dynamic brain networks with EEG
NASA Astrophysics Data System (ADS)
Rosário, R. S.; Cardoso, P. T.; Muñoz, M. A.; Montoya, P.; Miranda, J. G. V.
2015-12-01
The major aim of this work was to propose a new association method known as Motif-Synchronization. This method was developed to provide information about the synchronization degree and direction between two nodes of a network by counting the number of occurrences of some patterns between any two time series. The second objective of this work was to present a new methodology for the analysis of dynamic brain networks, by combining the Time-Varying Graph (TVG) method with a directional association method. We further applied the new algorithms to a set of human electroencephalogram (EEG) signals to perform a dynamic analysis of the brain functional networks (BFN).
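A rough sketch of the counting idea, under the assumption that the "patterns" are ordinal motifs: each series is mapped to a sequence of permutation patterns, and synchronization is scored by how often a motif in one series reappears in the other within a short lag window; the asymmetry of the two counts indicates direction. Motif length and lag window are illustrative choices, not the paper's.

```python
# Illustrative motif-based synchronization count between two series.
import numpy as np

def motif_sequence(x, m=4):
    """Ordinal (permutation) pattern for each length-m window."""
    windows = np.lib.stride_tricks.sliding_window_view(x, m)
    return [tuple(np.argsort(w)) for w in windows]

def sync_count(mx, my, max_lag=5):
    """How often a motif in mx reappears in my within 1..max_lag steps."""
    hits = 0
    for t in range(len(mx)):
        for lag in range(1, max_lag + 1):
            if t + lag < len(my) and mx[t] == my[t + lag]:
                hits += 1
                break
    return hits

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = np.roll(x, 3) + 0.3 * rng.standard_normal(500)   # y lags x by 3 samples

mx, my = motif_sequence(x), motif_sequence(y)
print(sync_count(mx, my), sync_count(my, mx))   # larger first count: x -> y
```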
Prediction of road accidents: A Bayesian hierarchical approach.
Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H
2013-03-01
In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any road network provided that the required data are available. Copyright © 2012 Elsevier Ltd. All rights reserved.
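Step (1), gamma-updating of occurrence rates, has a simple closed form worth spelling out: with a Gamma prior on a Poisson rate, observing k accidents over exposure n yields a Gamma posterior with updated parameters. The numbers below are invented.

```python
# Closed-form gamma updating of a Poisson accident rate. Numbers invented.
from scipy import stats

alpha, beta = 2.0, 4.0    # Gamma prior: mean rate alpha/beta = 0.5 per unit exposure
k, n = 7, 10.0            # observed injury accidents and exposure on a segment

posterior = stats.gamma(a=alpha + k, scale=1.0 / (beta + n))
print(posterior.mean())          # (2 + 7) / (4 + 10) ~= 0.64
print(posterior.interval(0.9))   # 90% credible interval for the rate
```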
Spatial modeling of potential woody biomass flow
Woodam Chung; Nathaniel Anderson
2012-01-01
The flow of woody biomass to end users is determined by economic factors, especially the amount available across a landscape and the cost of delivery to bioenergy facilities. The objective of this study was to develop a methodology to quantify landscape-level stocks and potential biomass flows using currently available spatial databases and a road network analysis tool. We applied this...
Mathematics Lectures as Narratives: Insights from Network Graph Methodology
ERIC Educational Resources Information Center
Weinberg, Aaron; Wiesner, Emilie; Fukawa-Connelly, Tim
2016-01-01
Although lecture is the traditional method of university mathematics instruction, there has been little empirical research that describes the general structure of lectures. In this paper, we adapt ideas from narrative analysis and apply them to an upper-level mathematics lecture. We develop a framework that enables us to conceptualize the lecture…
NASA Astrophysics Data System (ADS)
Pei, Jin-Song; Mai, Eric C.
2007-04-01
This paper introduces a continuing effort towards the development of a heuristic initialization methodology for constructing multilayer feedforward neural networks to model nonlinear functions. In this and the previous studies that this work builds upon, including the one presented at SPIE 2006, the authors do not presume to provide a universal method to approximate arbitrary functions; rather, the focus is on the development of a rational and unambiguous initialization procedure that applies to the approximation of nonlinear functions in the specific domain of engineering mechanics. The applications of this exploratory work can be numerous, including those associated with potential correlation and interpretation of the inner workings of neural networks, such as damage detection. The goal of this study is fulfilled by utilizing the governing physics and mathematics of nonlinear functions and the strength of the sigmoidal basis function. A step-by-step graphical procedure utilizing a few neural network prototypes as "templates" to approximate commonly seen memoryless nonlinear functions of one or two variables is further developed in this study. Decomposition of complex nonlinear functions into a summation of simpler nonlinear functions is utilized to exploit this prototype-based initialization methodology. Training examples are presented to demonstrate the rationality and efficiency of the proposed methodology when compared with the popular Nguyen-Widrow initialization algorithm. Future work is also identified.
A multiscale method for a robust detection of the default mode network
NASA Astrophysics Data System (ADS)
Baquero, Katherine; Gómez, Francisco; Cifuentes, Christian; Guldenmund, Pieter; Demertzi, Athena; Vanhaudenhuyse, Audrey; Gosseries, Olivia; Tshibanda, Jean-Flory; Noirhomme, Quentin; Laureys, Steven; Soddu, Andrea; Romero, Eduardo
2013-11-01
The Default Mode Network (DMN) is a resting state network widely used for the analysis and diagnosis of mental disorders. It is normally detected in fMRI data, but for its detection in data corrupted by motion artefacts or low neuronal activity, the use of a robust analysis method is mandatory. In fMRI it has been shown that the signal-to-noise ratio (SNR) and the detection sensitivity of neuronal regions increase with different smoothing kernel sizes. Here we propose to use a multiscale decomposition based on a linear scale-space representation for the detection of the DMN. Three main points are proposed in this methodology: first, the use of fMRI data at different smoothing scale-spaces; second, detection of independent neuronal components of the DMN at each scale by using standard preprocessing methods and ICA decomposition at scale level; and finally, a weighted contribution of each scale by the Goodness of Fit measurement. This method was applied to a group of control subjects and was compared with a standard preprocessing baseline. The detection of the DMN was improved at single subject level and at group level. Based on these results, we suggest using this methodology to enhance the detection of the DMN in data perturbed with artefacts or in subjects with low neuronal activity. Furthermore, the multiscale method could be extended for the detection of other resting state neuronal networks.
Galán, S F; Aguado, F; Díez, F J; Mira, J
2002-07-01
The spread of cancer is a non-deterministic dynamic process. As a consequence, the design of an assistant system for the diagnosis and prognosis of the extent of a cancer should be based on a representation method that deals with both uncertainty and time. The ultimate goal is to know the stage of development of a cancer in a patient before selecting the appropriate treatment. A network of probabilistic events in discrete time (NPEDT) is a type of Bayesian network for temporal reasoning that models the causal mechanisms associated with the time evolution of a process. This paper describes NasoNet, a system that applies NPEDTs to the diagnosis and prognosis of nasopharyngeal cancer. We have made use of temporal noisy gates to model the dynamic causal interactions that take place in the domain. The methodology we describe is general enough to be applied to any other type of cancer.
The relation between global migration and trade networks
NASA Astrophysics Data System (ADS)
Sgrignoli, Paolo; Metulini, Rodolfo; Schiavo, Stefano; Riccaboni, Massimo
2015-01-01
In this paper we develop a methodology to analyze and compare multiple global networks, focusing our analysis on the relation between human migration and trade. First, we identify the subset of products for which the presence of a community of migrants significantly increases trade intensity, where, to assure comparability across networks, we apply a hypergeometric filter that lets us identify those links whose intensity is significantly higher than expected. Next, proposing a new way to define country neighbors based on the most intense links in the trade network, we use spatial econometrics techniques to measure the effect of migration on international trade, while controlling for network interdependences. Overall, we find that migration significantly boosts trade across countries and we are able to identify product categories for which this effect is particularly strong.
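The hypergeometric filter can be sketched directly with scipy: a link is retained when the observed flow is improbably large given the row total, column total, and grand total of the flow matrix. All counts and the significance threshold below are illustrative.

```python
# Hypergeometric significance filter for one network link. Counts invented.
from scipy.stats import hypergeom

total = 10_000      # total flow units in the network
out_i = 400         # units leaving node i
in_j = 250          # units entering node j
observed = 35       # units on the link i -> j (expected ~10 at random)

# P(X >= observed) if i's out-flow were allocated at random across the total
p_value = hypergeom.sf(observed - 1, total, in_j, out_i)
print(p_value, p_value < 0.01)   # tiny p-value: keep the link
```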
Munksgaard, Rasmus; Demant, Jakob; Branwen, Gwern
2016-09-01
The development of cryptomarkets has gained increasing attention from academics, including a growing scientific literature on the distribution of illegal goods using cryptomarkets. Dolliver's 2015 article "Evaluating drug trafficking on the Tor Network: Silk Road 2, the Sequel" addresses this theme by evaluating drug trafficking on one of the most well-known cryptomarkets, Silk Road 2.0. The research on cryptomarkets in general-particularly in Dolliver's article-poses a number of new methodological questions. This commentary is structured around a replication of Dolliver's original study. The replication study is not based on Dolliver's original dataset, but on a second dataset collected applying the same methodology. We have found that the results produced by Dolliver differ greatly from our replicated study. While a margin of error is to be expected, the inconsistencies we found are too great to attribute to anything other than methodological issues. The analysis and conclusions drawn from studies using these methods are promising and insightful. However, based on the replication of Dolliver's study, we suggest that researchers using these methodologies make their datasets available to other researchers, and that methodology and dataset metrics (e.g. number of downloaded pages, error logs) be described thoroughly in the context of web-o-metrics and web crawling. Copyright © 2016 Elsevier B.V. All rights reserved.
Developing Visualization Techniques for Semantics-based Information Networks
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Hall, David R.
2003-01-01
Information systems incorporating complex network structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.
Inferring and Calibrating Triadic Closure in a Dynamic Network
NASA Astrophysics Data System (ADS)
Mantzaris, Alexander V.; Higham, Desmond J.
In the social sciences, the hypothesis of triadic closure contends that new links in a social contact network arise preferentially between those who currently share neighbours. Here, in a proof-of-principle study, we show how to calibrate a recently proposed evolving network model to time-dependent connectivity data. The probabilistic edge birth rate in the model contains a triadic closure term, so we are also able to assess statistically the evidence for this effect. The approach is shown to work on data generated synthetically from the model. We then apply this methodology to some real, large-scale data that records the build up of connections in a business-related social networking site, and find evidence for triadic closure.
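The model ingredient being calibrated can be sketched as follows: each non-edge (i, j) is born with a probability combining a background rate delta and a triadic closure term epsilon times the current number of common neighbours; delta and epsilon are the quantities one would fit to time-stamped data. Parameter values here are invented.

```python
# One time step of an edge-birth rule with a triadic closure term.
# delta (background rate) and epsilon (closure strength) are the
# parameters one would calibrate to time-stamped connectivity data.
import numpy as np
import networkx as nx

def edge_birth_probs(G, delta=0.001, epsilon=0.02):
    probs = {}
    for i, j in nx.non_edges(G):
        cn = len(list(nx.common_neighbors(G, i, j)))
        probs[(i, j)] = min(delta + epsilon * cn, 1.0)
    return probs

rng = np.random.default_rng(3)
G = nx.erdos_renyi_graph(30, 0.1, seed=3)
born = [e for e, p in edge_birth_probs(G).items() if rng.random() < p]
G.add_edges_from(born)
print(len(born), "new edges, biased toward pairs with common neighbours")
```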
Default cascades in complex networks: topology and systemic risk.
Roukny, Tarik; Bersini, Hugues; Pirotte, Hugues; Caldarelli, Guido; Battiston, Stefano
2013-09-26
The recent crisis has brought to the fore a crucial question that remains still open: what would be the optimal architecture of financial systems? We investigate the stability of several benchmark topologies in a simple default cascading dynamics in bank networks. We analyze the interplay of several crucial drivers, i.e., network topology, banks' capital ratios, market illiquidity, and random vs targeted shocks. We find that, in general, topology matters only--but substantially--when the market is illiquid. No single topology is always superior to others. In particular, scale-free networks can be both more robust and more fragile than homogeneous architectures. This finding has important policy implications. We also apply our methodology to a comprehensive dataset of an interbank market from 1999 to 2011.
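A stripped-down cascade of the kind studied can be simulated in a few lines: a bank defaults once losses on its exposures to already-defaulted counterparties exceed its capital buffer. The balance sheets below are invented, and the paper's illiquidity and shock-targeting mechanisms are omitted.

```python
# Threshold default cascade on an invented interbank network.
import networkx as nx

def cascade(G, capital, shocked):
    """Edge (i, j, w): bank i holds an exposure of size w to bank j."""
    defaulted = set(shocked)
    changed = True
    while changed:
        changed = False
        for bank in G:
            if bank in defaulted:
                continue
            loss = sum(w for _, j, w in G.out_edges(bank, data="weight")
                       if j in defaulted)
            if loss > capital[bank]:           # losses exceed capital buffer
                defaulted.add(bank)
                changed = True
    return defaulted

G = nx.DiGraph()
G.add_weighted_edges_from([("A", "B", 5), ("B", "C", 4), ("C", "A", 3),
                           ("A", "C", 2), ("D", "A", 6)])
capital = {"A": 1, "B": 3, "C": 3, "D": 4}
print(cascade(G, capital, shocked={"C"}))      # the shock propagates to all banks
```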
How can social network analysis contribute to social behavior research in applied ethology?
Makagon, Maja M; McCowan, Brenda; Mench, Joy A
2012-05-01
Social network analysis is increasingly used by behavioral ecologists and primatologists to describe the patterns and quality of interactions among individuals. We provide an overview of this methodology, with examples illustrating how it can be used to study social behavior in applied contexts. Like most kinds of social interaction analyses, social network analysis provides information about direct relationships (e.g. dominant-subordinate relationships). However, it also generates a more global model of social organization that determines how individual patterns of social interaction relate to individual and group characteristics. A particular strength of this approach is that it provides standardized mathematical methods for calculating metrics of sociality across levels of social organization, from the population and group levels to the individual level. At the group level these metrics can be used to track changes in social network structures over time, evaluate the effect of the environment on social network structure, or compare social structures across groups, populations or species. At the individual level, the metrics allow quantification of the heterogeneity of social experience within groups and identification of individuals who may play especially important roles in maintaining social stability or information flow throughout the network.
ERIC Educational Resources Information Center
Brewe, Eric; Bruun, Jesper; Bearden, Ian G.
2016-01-01
We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…
Assessing the Climate Resilience of Transport Infrastructure Investments in Tanzania
NASA Astrophysics Data System (ADS)
Hall, J. W.; Pant, R.; Koks, E.; Thacker, S.; Russell, T.
2017-12-01
Whilst there is an urgent need for infrastructure investment in developing countries, there is a risk that poorly planned and built infrastructure will introduce new vulnerabilities. As climate change increases the magnitude and frequency of natural hazard events, disruptive infrastructure failures are likely to become more frequent. Therefore, it is important that infrastructure planning and investment are underpinned by climate risk assessment that can inform adaptation planning. Tanzania's rapid economic growth is placing considerable strain on the country's transportation infrastructure (roads, railways, shipping and aviation), especially at the port of Dar es Salaam and its linking transport corridors. A growing number of natural hazard events, in particular flooding, are impacting the reliability of this already over-used network. Here we report on a new methodology to analyse vulnerabilities and risks due to failures of key locations in the intermodal transport network of Tanzania, including strategic connectivity to neighboring countries. To perform the national-scale risk analysis we will utilize a system-of-systems methodology. The main components of this general risk assessment, when applied to transportation systems, include: (1) assembling data on spatially coherent extreme hazards and intermodal transportation networks; (2) intersecting hazards with transport network models to initiate failure conditions that trigger failure propagation across interdependent networks; (3) quantifying failure outcomes in terms of social impacts (customers/passengers disrupted) and/or macroeconomic consequences (across multiple sectors); and (4) simulating, testing and collecting multiple failure scenarios to perform an exhaustive risk assessment in terms of probabilities and consequences. The methodology is being used to pinpoint vulnerability and reduce climate risks to transport infrastructure investments.
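Steps (2) and (3) reduce, in the simplest case, to removing hazard-exposed links from a network model and re-computing connectivity and travel cost, as in the sketch below; the toy network, travel times, and flooded link are invented stand-ins.

```python
# Remove flood-exposed links and quantify the disruption. Data invented.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("Mwanza", "Dodoma", 12), ("Dodoma", "Morogoro", 6),
                           ("Morogoro", "DarEsSalaam", 3), ("Dodoma", "Iringa", 7),
                           ("Iringa", "Morogoro", 5)], weight="hours")

flooded = [("Dodoma", "Morogoro")]          # links inside the hazard footprint
baseline = nx.shortest_path_length(G, "Mwanza", "DarEsSalaam", weight="hours")

H = G.copy()
H.remove_edges_from(flooded)
if nx.has_path(H, "Mwanza", "DarEsSalaam"):
    disrupted = nx.shortest_path_length(H, "Mwanza", "DarEsSalaam", weight="hours")
    print(f"travel time {baseline}h -> {disrupted}h")   # 21h -> 27h here
else:
    print("origin-destination pair disconnected")
```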
Consensus between Pipelines in Structural Brain Networks
Parker, Christopher S.; Deligianni, Fani; Cardoso, M. Jorge; Daga, Pankaj; Modat, Marc; Dayan, Michael; Clark, Chris A.
2014-01-01
Structural brain networks may be reconstructed from diffusion MRI tractography data and have great potential to further our understanding of the topological organisation of brain structure in health and disease. Network reconstruction is complex and involves a series of processing methods including anatomical parcellation, registration, fiber orientation estimation and whole-brain fiber tractography. Methodological choices at each stage can affect the anatomical accuracy and graph theoretical properties of the reconstructed networks, meaning applying different combinations in a network reconstruction pipeline may produce substantially different networks. Furthermore, the choice of which connections are considered important is unclear. In this study, we assessed the similarity between structural networks obtained using two independent state-of-the-art reconstruction pipelines. We aimed to quantify network similarity and identify the core connections emerging most robustly in both pipelines. Similarity of network connections was compared between pipelines employing different atlases by merging parcels to a common and equivalent node scale. We found a high agreement between the networks across a range of fiber density thresholds. In addition, we identified a robust core of highly connected regions coinciding with a peak in similarity across network density thresholds, and replicated these results with atlases at different node scales. The binary network properties of these core connections were similar between pipelines but showed some differences in atlases across node scales. This study demonstrates the utility of applying multiple structural network reconstruction pipelines to diffusion data in order to identify the most important connections for further study. PMID:25356977
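Between-pipeline agreement at matched network densities can be quantified as the Jaccard overlap of the strongest edges of each reconstruction, as sketched below with random stand-in connectivity matrices; this mirrors the thresholding logic, not the study's actual pipelines.

```python
# Edge-set similarity between two reconstructions at matched densities.
import numpy as np

rng = np.random.default_rng(4)
n = 90                                   # nodes at a common parcellation scale
base = rng.random((n, n)); base = (base + base.T) / 2
A = base + 0.1 * rng.random((n, n))      # pipeline 1 (correlated with base)
B = base + 0.1 * rng.random((n, n))      # pipeline 2

def edge_set(W, density):
    iu = np.triu_indices_from(W, k=1)
    w = W[iu]
    k = int(density * len(w))            # keep the k strongest connections
    keep = np.argsort(w)[-k:]
    return set(zip(iu[0][keep], iu[1][keep]))

for density in (0.05, 0.10, 0.20):
    ea, eb = edge_set(A, density), edge_set(B, density)
    print(density, round(len(ea & eb) / len(ea | eb), 3))   # Jaccard overlap
```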
Tomlinson, Samuel B.; Bermudez, Camilo; Conley, Chiara; Brown, Merritt W.; Porter, Brenda E.; Marsh, Eric D.
2016-01-01
Synchronized cortical activity is implicated in both normative cognitive functioning and many neurologic disorders. For epilepsy patients with intractable seizures, irregular synchronization within the epileptogenic zone (EZ) is believed to provide the network substrate through which seizures initiate and propagate. Mapping the EZ prior to epilepsy surgery is critical for detecting seizure networks in order to achieve postsurgical seizure control. However, automated techniques for characterizing epileptic networks have yet to gain traction in the clinical setting. Recent advances in signal processing and spike detection have made it possible to examine the spatiotemporal propagation of interictal spike discharges across the epileptic cortex. In this study, we present a novel methodology for detecting, extracting, and visualizing spike propagation and demonstrate its potential utility as a biomarker for the EZ. Eighteen presurgical intracranial EEG recordings were obtained from pediatric patients ultimately experiencing favorable (i.e., seizure-free, n = 9) or unfavorable (i.e., seizure-persistent, n = 9) surgical outcomes. Novel algorithms were applied to extract multichannel spike discharges and visualize their spatiotemporal propagation. Quantitative analysis of spike propagation was performed using trajectory clustering and spatial autocorrelation techniques. Comparison of interictal propagation patterns revealed an increase in trajectory organization (i.e., spatial autocorrelation) among Sz-Free patients compared with Sz-Persist patients. The pathophysiological basis and clinical implications of these findings are considered. PMID:28066315
Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network.
Diniz, Pedro Henrique Bandeira; Valente, Thales Levi Azevedo; Diniz, João Otávio Bandeira; Silva, Aristófanes Corrêa; Gattass, Marcelo; Ventura, Nina; Muniz, Bernardo Carvalho; Gasparetto, Emerson Leandro
2018-04-19
White matter lesions are non-static brain lesions that have a prevalence rate of up to 98% in the elderly population. Because they may be associated with several brain diseases, it is important that they are detected as soon as possible. Magnetic Resonance Imaging (MRI) provides three-dimensional data with the possibility to detect and emphasize contrast differences in soft tissues, providing rich information about human soft tissue anatomy. However, the amount of data provided by these images is far too much for manual analysis/interpretation, representing a difficult and time-consuming task for specialists. This work presents a computational methodology capable of detecting regions of white matter lesions of the brain in MRI of the FLAIR modality. The techniques highlighted in this methodology are SLIC0 clustering for candidate segmentation and convolutional neural networks for candidate classification. The methodology proposed here consists of four steps: (1) image acquisition, (2) image preprocessing, (3) candidate segmentation and (4) candidate classification. The methodology was applied to 91 magnetic resonance images provided by DASA and, without any false-positive reduction technique, achieved an accuracy of 98.73%, specificity of 98.77% and sensitivity of 78.79%, with a false-positive rate of 0.005, in the detection of white matter lesion regions. This demonstrates the feasibility of using SLIC0 and convolutional neural network techniques to detect white matter lesion regions in brain MRI. Copyright © 2018. Published by Elsevier B.V.
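Steps (3) and (4) might look roughly like the sketch below: SLIC0 superpixels propose candidates, and a small CNN scores a patch around each candidate. The random image, patch size, and network are placeholders, not the paper's data or architecture.

```python
# Sketch of candidate segmentation (SLIC0) and classification (small CNN).
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic

slice_img = np.random.rand(256, 256).astype(np.float32)   # stand-in FLAIR slice
segments = slic(slice_img, n_segments=300, compactness=0.1,
                slic_zero=True, channel_axis=None)         # SLIC0 variant

class CandidateCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 2))        # lesion / not lesion
    def forward(self, x):
        return self.net(x)

model = CandidateCNN().eval()                              # untrained demo
for label in np.unique(segments)[:5]:                      # score a few candidates
    ys, xs = np.where(segments == label)
    cy, cx = int(ys.mean()), int(xs.mean())
    patch = np.zeros((32, 32), np.float32)                 # zero-padded patch
    y0, x0 = max(cy - 16, 0), max(cx - 16, 0)
    crop = slice_img[y0:y0 + 32, x0:x0 + 32]
    patch[:crop.shape[0], :crop.shape[1]] = crop
    logits = model(torch.from_numpy(patch)[None, None])
    print(label, logits.softmax(-1)[0, 1].item())          # P(lesion)
```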
Seismic Hazard Analysis on a Complex, Interconnected Fault Network
NASA Astrophysics Data System (ADS)
Page, M. T.; Field, E. H.; Milner, K. R.
2017-12-01
In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example the Kaikoura earthquake, as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al., 2017), have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.
Building the Material Flow Networks of Aluminum in the 2007 U.S. Economy.
Chen, Wei-Qiang; Graedel, T E; Nuss, Philip; Ohno, Hajime
2016-04-05
Based on the combination of the U.S. economic input-output table and the stocks and flows framework for characterizing anthropogenic metal cycles, this study presents a methodology for building material flow networks of bulk metals in the U.S. economy and applies it to aluminum. The results, which we term the Input-Output Material Flow Networks (IO-MFNs), achieve a complete picture of aluminum flow in the entire U.S. economy and for any chosen industrial sector (illustrated for the Automobile Manufacturing sector). The results are compared with information from our former study on U.S. aluminum stocks and flows to demonstrate the robustness and value of this new methodology. We find that the IO-MFN approach has the following advantages: (1) it helps to uncover the network of material flows in the manufacturing stage in the life cycle of metals; (2) it provides a method that may be less time-consuming but more complete and accurate in estimating new scrap generation, process loss, domestic final demand, and trade of final products of metals, than existing material flow analysis approaches; and, most importantly, (3) it enables the analysis of the material flows of metals in the U.S. economy from a network perspective, rather than merely that of a life cycle chain.
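The core construction, turning monetary input-output flows into metal mass flows on a directed graph, can be sketched as below; sectors, dollar flows, and aluminum-content coefficients are all invented for illustration.

```python
# Sketch of an input-output material flow network. All numbers invented.
import networkx as nx
import numpy as np

sectors = ["Alumina", "Smelting", "Rolling", "AutoMfg", "Construction"]
dollars = np.array([[0, 90,  0,  0,  0],      # monetary IO flows ($M)
                    [0,  0, 60, 10,  5],
                    [0,  0,  0, 40, 15],
                    [0,  0,  0,  0,  0],
                    [0,  0,  0,  0,  0]], float)
kt_per_musd = np.array([0.5, 0.5, 0.4, 0.3, 0.3])  # Al content of each seller's output

G = nx.DiGraph()
for i, src in enumerate(sectors):
    for j, dst in enumerate(sectors):
        if dollars[i, j] > 0:                      # scale dollars to mass
            G.add_edge(src, dst, kt=dollars[i, j] * kt_per_musd[i])

for u, v, d in G.edges(data=True):
    print(f"{u} -> {v}: {d['kt']:.1f} kt Al")
```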
Structure and function of complex brain networks
Sporns, Olaf
2013-01-01
An increasing number of theoretical and empirical studies approach the function of the human brain from a network perspective. The analysis of brain networks is made feasible by the development of new imaging acquisition methods as well as new tools from graph theory and dynamical systems. This review surveys some of these methodological advances and summarizes recent findings on the architecture of structural and functional brain networks. Studies of the structural connectome reveal several modules or network communities that are interlinked by hub regions mediating communication processes between modules. Recent network analyses have shown that network hubs form a densely linked collective called a “rich club,” centrally positioned for attracting and dispersing signal traffic. In parallel, recordings of resting and task-evoked neural activity have revealed distinct resting-state networks that contribute to functions in distinct cognitive domains. Network methods are increasingly applied in a clinical context, and their promise for elucidating neural substrates of brain and mental disorders is discussed. PMID:24174898
Correlations in the degeneracy of structurally controllable topologies for networks
NASA Astrophysics Data System (ADS)
Campbell, Colin; Aucott, Steven; Ruths, Justin; Ruths, Derek; Shea, Katriona; Albert, Réka
2017-04-01
Many dynamic systems display complex emergent phenomena. By directly controlling a subset of system components (nodes) via external intervention it is possible to indirectly control every other component in the system. When the system is linear or can be approximated sufficiently well by a linear model, methods exist to identify the number and connectivity of a minimum set of external inputs (constituting a so-called minimal control topology, or MCT). In general, many MCTs exist for a given network; here we characterize a broad ensemble of empirical networks in terms of the fraction of nodes and edges that are always, sometimes, or never a part of an MCT. We study the relationships between the measures, and apply the methodology to the T-LGL leukemia signaling network as a case study. We show that the properties introduced in this report can be used to predict key components of biological networks, with potentially broad applications to network medicine.
Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor
2016-01-01
Analysis of user interactions in online communities could improve our understanding of health-related behaviors and inform the design of technological solutions that support behavior change. However, to achieve this we would need methods that provide granular perspective, yet are scalable. In this paper, we present a methodology for high-throughput semantic and network analysis of large social media datasets, combining semi-automated text categorization with social network analytics. We apply this method to derive content-specific network visualizations of 16,492 user interactions in an online community for smoking cessation. Performance of the categorization system was reasonable (average F-measure of 0.74, with system-rater reliability approaching rater-rater reliability). The resulting semantically specific network analysis of user interactions reveals content- and behavior-specific network topologies. Implications for socio-behavioral health and wellness platforms are also discussed.
Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial.
Kulin, Merima; Fortuna, Carolina; De Poorter, Eli; Deschrijver, Dirk; Moerman, Ingrid
2016-06-01
Data science or "data-driven research" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.
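The device-type example in step (iv) boils down to supervised classification on per-device traffic features; a synthetic stand-in (not the tutorial's dataset or scripts) follows.

```python
# Classify device types from simple traffic features. Data synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 600
# features: mean packet size (B), mean inter-arrival (ms), flows per hour
sensors = np.column_stack([rng.normal(80, 10, n), rng.normal(900, 150, n),
                           rng.normal(4, 1, n)])
phones = np.column_stack([rng.normal(600, 120, n), rng.normal(40, 15, n),
                          rng.normal(120, 30, n)])
X = np.vstack([sensors, phones])
y = np.array([0] * n + [1] * n)            # 0 = sensor, 1 = phone

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(accuracy_score(yte, clf.predict(Xte)))
```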
Auditing Complex Concepts in Overlapping Subsets of SNOMED
Wang, Yue; Wei, Duo; Xu, Junchuan; Elhanan, Gai; Perl, Yehoshua; Halper, Michael; Chen, Yan; Spackman, Kent A.; Hripcsak, George
2008-01-01
Limited resources and the sheer volume of concepts make auditing a large terminology, such as SNOMED CT, a daunting task. It is essential to devise techniques that can aid an auditor by automatically identifying concepts that deserve attention. A methodology for this purpose based on a previously introduced abstraction network (called the p-area taxonomy) for a SNOMED CT hierarchy is presented. The methodology algorithmically gathers concepts appearing in certain overlapping subsets, defined exclusively with respect to the p-area taxonomy, for review. The results of applying the methodology to SNOMED’s Specimen hierarchy are presented. These results are compared against a control sample composed of concepts residing in subsets without the overlaps. With the use of the double bootstrap, the concept group produced by our methodology is shown to yield a statistically significant higher proportion of error discoveries. PMID:18998838
Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred
2013-01-01
Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
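The starting point, a plain bivariate Granger test, is available in statsmodels; the factor-model extension for high-dimensional co-moving series is not shown. Data below are synthetic, with x driving y.

```python
# Bivariate Granger causality test with statsmodels. Synthetic data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)
n = 500
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.standard_normal()  # x drives y

# tests whether the second column (x) helps predict the first (y)
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
print(res[1][0]["ssr_ftest"])   # (F, p, df_denom, df_num) at lag 1
```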
Assessment of distributed photovoltaic electric-power systems
NASA Astrophysics Data System (ADS)
Neal, R. W.; Deduck, P. F.; Marshall, R. N.
1982-10-01
A methodology was developed to assess the potential impacts of distributed photovoltaic (PV) systems on electric utility systems, including subtransmission and distribution networks, and was applied to several illustrative examples. The investigations focused upon five specific utilities. Impacts upon utility system operations and generation mix were assessed using accepted utility planning methods in combination with models that simulate PV system performance and life cycle economics. Impacts on the utility subtransmission and distribution systems were also investigated. The economic potential of distributed PV systems was investigated for ownership by the utility as well as by the individual utility customer.
Integrated Genomic and Network-Based Analyses of Complex Diseases and Human Disease Network.
Al-Harazi, Olfat; Al Insaif, Sadiq; Al-Ajlan, Monirah A; Kaya, Namik; Dzimiri, Nduna; Colak, Dilek
2016-06-20
A disease phenotype generally reflects various pathobiological processes that interact in a complex network. The highly interconnected nature of the human protein interaction network (interactome) indicates that, at the molecular level, it is difficult to consider diseases as being independent of one another. Recently, genome-wide molecular measurements, data mining and bioinformatics approaches have provided the means to explore human diseases from a molecular basis. The exploration of diseases and a system of disease relationships based on the integration of genome-wide molecular data with the human interactome could offer a powerful perspective for understanding the molecular architecture of diseases. Recently, subnetwork markers have proven to be more robust and reliable than individual biomarker genes selected based on gene expression profiles alone, and achieve higher accuracy in disease classification. We have applied one of these methodologies to idiopathic dilated cardiomyopathy (IDCM) data that we have generated using a microarray and identified significant subnetworks associated with the disease. In this paper, we review the recent endeavours in this direction, and summarize the existing methodologies and computational tools for network-based analysis of complex diseases and molecular relationships among apparently different disorders and human disease network. We also discuss the future research trends and topics of this promising field. Copyright © 2015 Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, and Genetics Society of China. Published by Elsevier Ltd. All rights reserved.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
Space network scheduling benchmark: A proof-of-concept process for technology transfer
NASA Technical Reports Server (NTRS)
Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy
1993-01-01
This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system is addressed including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement of the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.
Ugena, L.; Moncayo, S.; Manzoor, S.; Rosales, D.
2016-01-01
The detection of adulteration of fuels and their use in criminal scenes like arson is of high interest in forensic investigations. In this work, a method based on gas chromatography (GC) and neural networks (NN) has been developed and applied to the identification and discrimination of brands of fuels such as gasoline and diesel, without the need to determine the composition of the samples. The study included five main brands of fuels from Spain, collected from fifteen different local petrol stations. The methodology allowed the identification of the gasoline and diesel brands with an accuracy close to 100%, without any false positives or false negatives. The success rates for three blind samples were 73.3%, 80%, and 100%, respectively. The results obtained demonstrate the potential of this methodology to help in resolving criminal situations. PMID:27375919
WATER SUPPLY PIPE REPLACEMENT CONSIDERING SUSTAINABLE TRANSITION TO POPULATION DECREASED SOCIETY
NASA Astrophysics Data System (ADS)
Hosoi, Yoshihiko; Iwasaki, Yoji; Aklog, Dagnachew; Masuda, Takanori
Social infrastructures in Japan are aging while the population is decreasing. Aged social infrastructures must be renewed and, at the same time, moved into a new framework suitable for a society with a decreasing population. Furthermore, they have to continue to supply sufficient services even during the transition period in which renewal projects are carried out. The authors propose sustainable soft-landing management of infrastructures and apply it to water supply pipe replacement in this study. A methodology was developed for replacing aged pipes that not only aims at a new water supply network suited to a decreased population, but also ensures supply service and feasibility while the project is carried out. It is applied to a model water supply network and the results are discussed.
Analysis of Layered Social Networks
2006-09-01
Recoverable fragments of this abstract refer to network layers (religious, financial, commercial, military, infrastructure) and note that, assuming the methodology can be applied to the transmission of information, the matrix powers (p > 2) capture a variety of walks.
Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor
2013-01-01
Unhealthy behaviors increase individual health risks and are a socioeconomic burden. Harnessing social influence is perceived as fundamental for interventions to influence health-related behaviors. However, the mechanisms through which social influence occurs are poorly understood. Online social networks provide the opportunity to understand these mechanisms as they digitally archive communication between members. In this paper, we present a methodology for content-based social network analysis, combining qualitative coding, automated text analysis, and formal network analysis such that network structure is determined by the content of messages exchanged between members. We apply this approach to characterize the communication between members of QuitNet, an online social network for smoking cessation. Results indicate that the method identifies meaningful theme-based social sub-networks. Modeling social network data using this method can provide us with theme-specific insights such as the identities of opinion leaders and sub-community clusters. Implications for design of targeted social interventions are discussed.
Empirical Reference Distributions for Networks of Different Size
Smith, Anna; Calder, Catherine A.; Browning, Christopher R.
2016-01-01
Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
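The simplest version of the adjustment, normalizing an observed statistic against same-size, same-density Bernoulli random graphs, is sketched below; the paper's mixture-model refinement is omitted.

```python
# z-score a network statistic against a Bernoulli (Erdos-Renyi) reference.
import numpy as np
import networkx as nx

def zscore_vs_reference(G, stat=nx.average_clustering, reps=200, seed=0):
    n, m = G.number_of_nodes(), G.number_of_edges()
    p = 2 * m / (n * (n - 1))                 # match size and density
    rng = np.random.default_rng(seed)
    ref = [stat(nx.gnp_random_graph(n, p, seed=int(s)))
           for s in rng.integers(0, 2**31, reps)]
    return (stat(G) - np.mean(ref)) / np.std(ref)

small = nx.watts_strogatz_graph(50, 6, 0.1, seed=1)
large = nx.watts_strogatz_graph(500, 6, 0.1, seed=1)
print(zscore_vs_reference(small), zscore_vs_reference(large))  # now comparable
```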
Optimized planning methodologies of ASON implementation
NASA Astrophysics Data System (ADS)
Zhou, Michael M.; Tamil, Lakshman S.
2005-02-01
Advanced network planning concerns effective network-resource allocation for a dynamic and open business environment. Planning methodologies for ASON implementation based on qualitative analysis and mathematical modeling are presented in this paper. The methodology includes methods for rationalizing technology and architecture, building network and nodal models, and developing dynamic programming for multi-period deployment. The multi-layered nodal architecture proposed here can accommodate various nodal configurations for a multi-plane optical network, and the network modeling presented here computes the required network elements for optimizing resource allocation.
The inland water macro-invertebrate occurrences in Flanders, Belgium.
Vannevel, Rudy; Brosens, Dimitri; De Cooman, Ward; Gabriels, Wim; Lavens, Frank; Mertens, Joost; Vervaeke, Bart
2018-01-01
The Flanders Environment Agency (VMM) has been performing biological water quality assessments on inland waters in Flanders (Belgium) since 1989 and sediment quality assessments since 2000. The water quality monitoring network is a combined physico-chemical and biological network, the biological component focusing on macro-invertebrates. The sediment monitoring programme produces biological data to assess the sediment quality. Both monitoring programmes aim to provide index values, applying a similar conceptual methodology based on the presence of macro-invertebrates. The biological data obtained from both monitoring networks are consolidated in the VMM macro-invertebrates database and include identifications at family and genus level of the freshwater phyla Coelenterata, Platyhelminthes, Annelida, Mollusca, and Arthropoda. This paper discusses the content of this database, and the dataset published thereof: 282,309 records of 210 observed taxa from 4,140 monitoring sites located on 657 different water bodies, collected during 22,663 events. This paper provides some background information on the methodology, temporal and spatial coverage, and taxonomy, and describes the content of the dataset. The data are distributed as open data under the Creative Commons CC-BY license.
Network analysis applications in hydrology
NASA Astrophysics Data System (ADS)
Price, Katie
2017-04-01
Applied network theory has seen pronounced expansion in recent years, in fields such as epidemiology, computer science, and sociology. Concurrent development of analytical methods and frameworks has increased the possibilities and tools available to researchers seeking to apply network theory to a variety of problems. While water and nutrient fluxes through stream systems clearly demonstrate a directional network structure, the hydrological applications of network theory remain underexplored. This presentation covers a review of network applications in hydrology, followed by an overview of promising network analytical tools that potentially offer new insights into conceptual modeling of hydrologic systems, identifying behavioral transition zones in stream networks and thresholds of dynamical system response. Network applications were tested along an urbanization gradient in two watersheds in Atlanta, Georgia, USA: Peachtree Creek and Proctor Creek. Peachtree Creek contains a nest of five long-term USGS streamflow and water quality gages, allowing network application of long-term flow statistics, and the watershed spans a range of suburban and heavily urbanized conditions. Summary flow statistics and water quality metrics were analyzed using a suite of network analysis techniques to test the conceptual modeling and predictive potential of the methodologies. Storm events and low flow dynamics during Summer 2016 were analyzed using multiple network approaches, with an emphasis on tomogravity methods. Results indicate that network theory approaches offer novel perspectives for understanding long-term and event-based hydrological data. Key future directions for network applications include 1) optimizing data collection, 2) identifying "hotspots" of contaminant and overland flow influx to stream systems, 3) defining process domains, and 4) analyzing dynamic connectivity of various system components, including groundwater-surface water interactions.
Application of network methods for understanding evolutionary dynamics in discrete habitats.
Greenbaum, Gili; Fefferman, Nina H
2017-06-01
In populations occupying discrete habitat patches, gene flow between habitat patches may form an intricate population structure. In such structures, the evolutionary dynamics resulting from interaction of gene-flow patterns with other evolutionary forces may be exceedingly complex. Several models describing gene flow between discrete habitat patches have been presented in the population-genetics literature; however, these models have usually addressed relatively simple settings of habitable patches and have stopped short of providing general methodologies for addressing nontrivial gene-flow patterns. In the last decades, network theory - a branch of discrete mathematics concerned with complex interactions between discrete elements - has been applied to address several problems in population genetics by modelling gene flow between habitat patches using networks. Here, we present the idea and concepts of modelling complex gene flows in discrete habitats using networks. Our goal is to raise awareness to existing network theory applications in molecular ecology studies, as well as to outline the current and potential contribution of network methods to the understanding of evolutionary dynamics in discrete habitats. We review the main branches of network theory that have been, or that we believe potentially could be, applied to population genetics and molecular ecology research. We address applications to theoretical modelling and to empirical population-genetic studies, and we highlight future directions for extending the integration of network science with molecular ecology. © 2017 John Wiley & Sons Ltd.
Road detection in SAR images using a tensor voting algorithm
NASA Astrophysics Data System (ADS)
Shen, Dajiang; Hu, Chun; Yang, Bing; Tian, Jinwen; Liu, Jian
2007-11-01
In this paper, the problem of the detection of road networks in Synthetic Aperture Radar (SAR) images is addressed. Most previous methods extract roads by line detection followed by network reconstruction. Traditional algorithms used in the reconstruction process, such as MRFs, GA, and Level Set, are iterative. The tensor voting methodology we propose is non-iterative and insensitive to initialization. Furthermore, the only free parameter is the size of the neighborhood, related to the scale. The algorithm is verified to be effective when applied to road extraction from real Radarsat images.
System learning approach to assess sustainability and ...
This paper presents a methodology that combines the power of an Artificial Neural Network and Information Theory to forecast variables describing the condition of a regional system. The novelty and strength of this approach is in the application of Fisher information, a key method in Information Theory, to preserve trends in the historical data and prevent overfitting of projections. The methodology was applied to demographic, environmental, food and energy consumption, and agricultural production in the San Luis Basin regional system in Colorado, U.S.A. These variables are important for tracking conditions in human and natural systems. However, available data are often so far out of date that they limit the ability to manage these systems. Results indicate that the approaches developed provide viable tools for forecasting outcomes with the aim of assisting management toward sustainable trends. This methodology is also applicable for modeling different scenarios in other dynamic systems.
Cho, Yongrae; Kim, Minsung
2014-01-01
The volatility and uncertainty in the process of technological developments are growing faster than ever due to rapid technological innovations. Such phenomena result in integration among disparate technology fields. At this point, it is a critical research issue to understand the different roles and the propensity of each element technology for technological convergence. In particular, the network-based approach provides a holistic view in terms of technological linkage structures. Furthermore, the development of new indicators based on network visualization can reveal the dynamic patterns among disparate technologies in the process of technological convergence and provide insights for future technological developments. This research attempts to analyze and discover the patterns of the international patent classification codes of the United States Patent and Trademark Office's patent data in printed electronics, which is a representative technology in the technological convergence process. To this end, we apply ideas from physics as a new methodological approach to interpreting technological convergence. More specifically, the concepts of entropy and gravity are applied to measure the activities among patent citations and the binding forces among heterogeneous technologies during technological convergence. By applying the entropy and gravity indexes, we could distinguish the characteristic role of each technology in printed electronics. At the technological convergence stage, each technology exhibits idiosyncratic dynamics which tend to decrease technological differences and heterogeneity. Furthermore, through nonlinear regression analysis, we have found decreasing patterns of disparity over the total period studied in the evolution of technological convergence. This research has discovered the specific role of each element technology field and has consequently identified the co-evolutionary patterns of technological convergence. These new findings on the evolutionary patterns of technological convergence provide some implications for engineering and technology foresight research, as well as for corporate strategy and technology policy.
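The abstract does not give formulas for its entropy and gravity indexes, so the following sketch implements generic stand-ins: Shannon entropy over a technology's citation distribution, and a gravity-style attraction between two technology "masses". All IPC codes, counts, and the distance value are invented for illustration.

```python
# Hedged stand-ins for the two physics-inspired indicators named above.
import math

citations = {"H05K": 120, "G06F": 45, "B41J": 30, "H01L": 105}  # hypothetical

def entropy_index(counts):
    """Shannon entropy of a citation distribution (higher = more dispersed)."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c)

def gravity_index(mass_a, mass_b, distance):
    """Gravity-style binding force: product of activity 'masses' over a
    squared technological distance (e.g., 1 - citation similarity)."""
    return mass_a * mass_b / distance ** 2

print(f"entropy = {entropy_index(citations):.3f}")
print(f"gravity(H05K, H01L) = {gravity_index(120, 105, 0.4):.1f}")
```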
A tribal abstraction network for SNOMED CT target hierarchies without attribute relationships.
Ochs, Christopher; Geller, James; Perl, Yehoshua; Chen, Yan; Agrawal, Ankur; Case, James T; Hripcsak, George
2015-05-01
Large and complex terminologies, such as Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT), are prone to errors and inconsistencies. Abstraction networks are compact summarizations of the content and structure of a terminology. Abstraction networks have been shown to support terminology quality assurance. In this paper, we introduce an abstraction network derivation methodology which can be applied to SNOMED CT target hierarchies whose classes are defined using only hierarchical relationships (i.e., without attribute relationships) and similar description-logic-based terminologies. We introduce the tribal abstraction network (TAN), based on the notion of a tribe, a subhierarchy rooted at a child of the hierarchy root, assuming only the existence of concepts with multiple parents. The TAN summarizes a hierarchy that does not have attribute relationships using sets of concepts, called tribal units, that belong to exactly the same multiple tribes. Tribal units are further divided into refined tribal units which contain closely related concepts. A quality assurance methodology that utilizes TAN summarizations is introduced. A TAN is derived for the Observable entity hierarchy of SNOMED CT, summarizing its content. A TAN-based quality assurance review of the concepts of the hierarchy is performed, and erroneous concepts are shown to appear more frequently in large refined tribal units than in small refined tribal units. Furthermore, more erroneous concepts appear in large refined tribal units of more tribes than of fewer tribes. In this paper we introduce the TAN for summarizing SNOMED CT target hierarchies. A TAN was derived for the Observable entity hierarchy of SNOMED CT. A quality assurance methodology utilizing the TAN was introduced and demonstrated. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
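The tribe and tribal-unit definitions above translate naturally into graph operations. A minimal sketch, assuming a toy parent-to-child hierarchy rather than SNOMED CT:

```python
# Tribes = subhierarchies rooted at children of the root; tribal units group
# concepts that belong to exactly the same set of >= 2 tribes.
import networkx as nx
from collections import defaultdict

# Edges point from parent to child; the hierarchy is invented.
H = nx.DiGraph([
    ("root", "t1"), ("root", "t2"), ("root", "t3"),
    ("t1", "a"), ("t2", "a"),                # 'a' belongs to tribes t1, t2
    ("t1", "b"), ("t2", "b"), ("t3", "b"),   # 'b' belongs to all three
    ("t2", "c"), ("t3", "c"),
])

tribes = list(H.successors("root"))          # one tribe per child of the root
membership = {
    n: frozenset(t for t in tribes if n in nx.descendants(H, t))
    for n in H.nodes if n != "root" and n not in tribes
}

units = defaultdict(list)
for concept, tset in membership.items():
    if len(tset) >= 2:                       # only multi-tribe concepts
        units[tset].append(concept)

for tset, concepts in units.items():
    print(sorted(tset), "->", sorted(concepts))
```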
NASA Technical Reports Server (NTRS)
Blakeslee, R. J.; Bailey, J. C.; Pinto, O.; Athayde, A.; Renno, N.; Weidman, C. D.
2003-01-01
A four station Advanced Lightning Direction Finder (ALDF) network was established in the state of Rondonia in western Brazil in 1999 through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of-arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/Marshall Space Flight Center in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the Internet. The network, which is still operational, was deployed to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite that was launched in November 1997. The measurements are also being used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long time series of observations produced by this network will help establish a regional lightning climatological database, supplementing other databases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at the NASA/Marshall Space Flight Center have been applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The data will also be corrected for the network detection efficiency. The processing methodology and the results from the analysis of four years of network operations will be presented.
Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar
2017-01-01
In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of entire electricity power networks, involving generation, transmission, and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, no study has evaluated the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran, and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models are more reliable than the traditional Network DEA model.
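For orientation, the sketch below solves the classical input-oriented CCR DEA model that the paper's network and robust variants build upon; it is single-stage and non-robust, and the input/output data are invented.

```python
# Classical input-oriented CCR DEA as a linear program (simplified baseline,
# not the paper's robust NDEA).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # inputs,  shape (n_dmu, m)
Y = np.array([[1.0], [1.5], [1.2]])                  # outputs, shape (n_dmu, s)

def ccr_efficiency(o):
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables z = [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o], X.T]
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros(s), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (1 + n))
    return res.fun  # theta = 1 means efficient, < 1 means inefficient

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```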
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, in each step, selects the space-time point minimizing a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
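A minimal sketch of the core loop described above, combining a static Kalman filter with greedy sequential selection. The exponential covariance model, noise variance, and candidate points are placeholders rather than the paper's geostatistically estimated space-time covariance.

```python
# Greedy monitoring-network design: each step picks the candidate point whose
# scalar measurement most reduces the total estimate variance (trace of P).
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(30, 2))             # candidate monitoring points
d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
P = np.exp(-d / 3.0)                               # prior covariance (placeholder)
r = 0.1                                            # measurement noise variance

selected = []
for _ in range(5):                                 # design a 5-point network
    scores = []
    for k in range(len(pts)):
        if k in selected:
            scores.append(np.inf)
            continue
        # Kalman update for one scalar measurement at point k:
        # P_new = P - P[:,k] P[k,:] / (P[k,k] + r)
        gain = np.outer(P[:, k], P[k, :]) / (P[k, k] + r)
        scores.append(np.trace(P - gain))
    k_best = int(np.argmin(scores))
    selected.append(k_best)
    P = P - np.outer(P[:, k_best], P[k_best, :]) / (P[k_best, k_best] + r)
    print(f"picked point {k_best}, total variance = {np.trace(P):.2f}")
```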
Applying Model Based Systems Engineering to NASA's Space Communications Networks
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert
2013-01-01
System engineering practices for complex systems and networks now require that requirements, architecture, and concept of operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, since the networks are geographically distributed, and so are their subject matter experts, the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly interrelated level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from their top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders and internal organizations, and of impacts on all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to an accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based systems engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based Systems Engineering applied to integrating space communication networks and summarize its results and impact. We will highlight the insights gained by applying Model Based Systems Engineering and provide recommendations for its application and improvement.
An Inverse Neural Controller Based on the Applicability Domain of RBF Network Models
Alexandridis, Alex; Stogiannos, Marios; Papaioannou, Nikolaos; Zois, Elias; Sarimveis, Haralambos
2018-01-01
This paper presents a novel methodology of generic nature for controlling nonlinear systems, using inverse radial basis function neural network models, which may combine diverse data originating from various sources. The algorithm starts by applying the particle swarm optimization-based non-symmetric variant of the fuzzy means (PSO-NSFM) algorithm so that an approximation of the inverse system dynamics is obtained. PSO-NSFM offers models of high accuracy combined with small network structures. Next, the applicability domain concept is suitably tailored and embedded into the proposed control structure in order to ensure that extrapolation is avoided in the controller predictions. Finally, an error correction term, estimating the error produced by the unmodeled dynamics and/or unmeasured external disturbances, is included in the control scheme to increase robustness. The resulting controller guarantees bounded input-bounded state (BIBS) stability for the closed loop system when the open loop system is BIBS stable. The proposed methodology is evaluated on two different control problems, namely, the control of an experimental armature-controlled direct current (DC) motor and the stabilization of a highly nonlinear simulated inverted pendulum. For each one of these problems, appropriate case studies are tested, in which a conventional neural controller employing inverse models and a PID controller are also applied. The results reveal the ability of the proposed control scheme to handle and manipulate diverse data through a data fusion approach and illustrate the superiority of the method in terms of faster and less oscillatory responses. PMID:29361781
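A hedged sketch of the two ingredients named in the abstract: an RBF network prediction plus an applicability-domain check that refuses to extrapolate. Centers, widths, weights, and the distance threshold are illustrative stand-ins, not the PSO-NSFM training procedure.

```python
# RBF prediction gated by a simple applicability-domain test (distance to
# the nearest training center); all numbers are placeholders.
import numpy as np

rng = np.random.default_rng(1)
centers = rng.uniform(-1, 1, size=(20, 2))   # hidden-node centers
sigma = 0.4                                   # common RBF width
weights = rng.normal(size=20)                 # trained output weights (stub)
ad_radius = 0.3                               # applicability-domain threshold

def rbf_predict(x):
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
    return phi @ weights

def in_applicability_domain(x):
    """Accept x only if it lies close enough to some training center."""
    return np.min(np.linalg.norm(centers - x, axis=1)) <= ad_radius

x = np.array([0.1, -0.2])
if in_applicability_domain(x):
    print("control move:", rbf_predict(x))
else:
    print("outside applicability domain: fall back to a safe action")
```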
Classification of Alzheimer's Patients through Ubiquitous Computing.
Nieto-Reyes, Alicia; Duque, Rafael; Montaña, José Luis; Lage, Carmen
2017-07-21
Functional data analysis and artificial neural networks are the building blocks of the proposed methodology that distinguishes the movement patterns of Alzheimer's patients at different stages of the disease and classifies new patients to their appropriate stage of the disease. The movement patterns are obtained by the accelerometer device of Android smartphones that the patients carry while moving freely. The proposed methodology is relevant in that it is flexible on the type of data to which it is applied. To exemplify this, a novel real three-dimensional functional dataset is analyzed, where each datum is observed in a different time domain. Not only is each datum observed at a different frequency, but the domain of each datum also has a different length. The obtained classification success rate of 83% indicates the potential of the proposed methodology.
Mounts, W M; Liebman, M N
1997-07-01
We have developed a method for representing biological pathways and simulating their behavior based on the use of stochastic activity networks (SANs). SANs, an extension of the original Petri net, have been used traditionally to model flow systems including data-communications networks and manufacturing processes. We apply the methodology to the blood coagulation cascade, a biological flow system, and present the representation method as well as results of simulation studies based on published experimental data. In addition to describing the dynamic model, we also present the results of its utilization to perform simulations of clinical states including hemophilias A and B as well as sensitivity analysis of individual factors and their impact on thrombin production.
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability. Existing reliability models are restricted to particular types of methodologies and a restricted number of parameters. A number of techniques and methodologies may be used for reliability prediction, and parameter selection deserves attention when estimating reliability: the reliability of a system may increase or decrease depending on the parameters chosen. Thus, there is a need to identify the factors that most heavily affect system reliability. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS). Cost, time and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small- as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available for applying soft computing techniques to problems in medicine: clinical medicine applies fuzzy logic and neural network methodologies extensively, while basic medical science most frequently applies neural networks combined with genetic algorithms. Medical scientists have shown sustained interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when building new products, providing quality with savings of time, memory space and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the working of these techniques, assesses them for reliability prediction, and discusses the parameters considered in reliability estimation and prediction. This study can be used in estimating and predicting the reliability of various instruments in medical systems, software engineering, computer engineering and mechanical engineering. These concepts can be applied to both software and hardware to predict reliability using CBSE.
Policymaking in European healthy cities.
de Leeuw, Evelyne; Green, Geoff; Spanswick, Lucy; Palmer, Nicola
2015-06-01
This paper assesses policy development in, with and for Healthy Cities in the European Region of the World Health Organization. Materials for the assessment were sourced through case studies, a questionnaire and statistical databases. They were compiled in a realist synthesis methodology, applying theory-based evaluation principles. Non-response analyses were applied to ascertain the degree of representativeness of the high response rates for the entire network of Healthy Cities in Europe. Further measures of reliability and validity were applied, and it was found that our material was indicative of the entire network. European Healthy Cities are successful in developing local health policy across many sectors within and outside government. They were also successful in addressing 'wicked' problems around equity, governance and participation in themes such as Healthy Urban Planning. It appears that strong local leadership for policy change is driven by international collaboration and the stewardship of the World Health Organization. The processes enacted by WHO, structuring membership of the Healthy City Network (designation) and the guidance on particular themes, are identified as being important for the success of local policy development. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Systems Biology Approaches for Discovering Biomarkers for Traumatic Brain Injury
Feala, Jacob D.; AbdulHameed, Mohamed Diwan M.; Yu, Chenggang; Dutta, Bhaskar; Yu, Xueping; Schmid, Kara; Dave, Jitendra; Tortella, Frank
2013-01-01
The rate of traumatic brain injury (TBI) in service members with wartime injuries has risen rapidly in recent years, and complex, variable links have emerged between TBI and long-term neurological disorders. The multifactorial nature of TBI secondary cellular response has confounded attempts to find cellular biomarkers for its diagnosis and prognosis or for guiding therapy for brain injury. One possibility is to apply emerging systems biology strategies to holistically probe and analyze the complex interweaving molecular pathways and networks that mediate the secondary cellular response through computational models that integrate these diverse data sets. Here, we review available systems biology strategies, databases, and tools. In addition, we describe opportunities for applying this methodology to existing TBI data sets to identify new biomarker candidates and gain insights about the underlying molecular mechanisms of TBI response. As an exemplar, we apply network and pathway analysis to a manually compiled list of 32 protein biomarker candidates from the literature, recover known TBI-related mechanisms, and generate hypothetical new biomarker candidates. PMID:23510232
NASA Technical Reports Server (NTRS)
Blakeslee, Rich; Bailey, Jeff; Koshak, Bill
1999-01-01
A four station Advanced Lightning Direction Finder (ALDF) network was recently established in the state of Rondonia in western Brazil through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of-arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/Marshall Space Flight Center (MSFC) in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the internet. The network will remain deployed for several years to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite which was launched in November 1997. The measurements will also be used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long-term observations from this network will contribute to establishing a regional lightning climatological database, supplementing other databases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at NASA/Marshall Space Flight Center (MSFC) are now being applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The processing methodology and the initial results from an analysis of the first 6 months of network operations will be presented.
Research synergy and drug development: Bright stars in neighboring constellations.
Keserci, Samet; Livingston, Eric; Wan, Lingtian; Pico, Alexander R; Chacko, George
2017-11-01
Drug discovery and subsequent availability of a new breakthrough therapeutic or 'cure' is a compelling example of societal benefit from research advances. These advances are invariably collaborative, involving the contributions of many scientists to a discovery network in which theory and experiment build upon one another. To document and understand such scientific advances, data mining of public and commercial data sources coupled with network analysis can be used as a digital methodology to assemble and analyze component events in the history of a therapeutic. This methodology is extensible beyond the history of therapeutics, and its use more generally supports (i) efficiency in exploring the scientific history of a research advance, (ii) documenting and understanding collaboration, (iii) portfolio analysis, planning and optimization, and (iv) communication of the societal value of research. Building upon prior art, we have conducted a case study of five anti-cancer therapeutics to identify the collaborations that resulted in the successful development of these therapeutics both within and across their respective networks. We have linked the work of over 235,000 authors in roughly 106,000 scientific publications that capture the research crucial for the development of these five therapeutics. Applying retrospective citation discovery, we have identified a core set of publications cited in the networks of all five therapeutics and additional intersections in combinations of networks. We have enriched the content of these networks by annotating them with information on research awards from the US National Institutes of Health (NIH). Lastly, we have mapped these awards to their cognate peer review panels, identifying another layer of collaborative scientific activity that influenced the research represented in these networks.
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
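The baseline sampler that the parallel replica method accelerates is the standard Gillespie stochastic simulation algorithm. A minimal sketch for the Schlögl model, the first benchmark named above, with a commonly used bistable parameterization that should be treated as illustrative:

```python
# Plain Gillespie SSA for the Schlögl model (bistable birth-death network).
import numpy as np

rng = np.random.default_rng(42)
k1, k2, k3, k4 = 3e-7, 1e-4, 1e-3, 3.5   # illustrative rate constants
n1, n2 = 1e5, 2e5                        # buffered species populations

def propensities(x):
    return np.array([
        k1 * n1 * x * (x - 1) / 2,        # B1 + 2X -> 3X
        k2 * x * (x - 1) * (x - 2) / 6,   # 3X -> B1 + 2X
        k3 * n2,                          # B2 -> X
        k4 * x,                           # X -> B2
    ])

change = np.array([+1, -1, +1, -1])       # net change in X per reaction

x, t, t_end = 250, 0.0, 10.0
while t < t_end:
    a = propensities(x)
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)               # time to next reaction
    x += change[rng.choice(4, p=a / a0)]         # which reaction fires
print(f"X(t_end) = {x}  (trajectories settle near one of two metastable states)")
```

Long runs of this sampler rarely cross between the two metastable basins, which is exactly the rare-transition problem the parallel replica method addresses.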
A Methodology to Develop Entrepreneurial Networks: The Tech Ecosystem of Six African Cities
2014-11-01
The methodology enables us to accurately measure social capital and circumvents the massive effort of mapping an individual's social network before locating the social resources in it. Subject terms: network analysis, economic networks, network topology, network classification.
Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H
2018-07-01
Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.
2009-12-01
standards for assessing the value of intangible assets or intellectual capital. Historically, a number of frameworks have evolved, each with a different focus and a different assessment methodology. In order to assess whether knowledge management initiatives contributed to the fight against terrorism in Canada, a results-based framework was selected, customized and applied to CRTI (a networked science and technology program to counter...
2010-09-01
Multiple-Array Detection, Association and Location of Infrasound and Seismo-Acoustic Events: Utilization of Ground Truth Information. Stephen J. ... and infrasound data from seismo-acoustic arrays and apply the methodology to regional networks for validation with ground truth information. In the initial year of the project, automated techniques for detecting, associating and locating infrasound signals were developed. Recently, the location...
Social Networks, Engagement and Resilience in University Students.
Fernández-Martínez, Elena; Andina-Díaz, Elena; Fernández-Peña, Rosario; García-López, Rosa; Fulgueiras-Carril, Iván; Liébana-Presa, Cristina
2017-12-01
Analysis of social networks may be a useful tool for understanding the relationship between resilience and engagement, and this could be applied to educational methodologies, not only to improve academic performance, but also to create emotionally sustainable networks. This descriptive study was carried out on 134 university students. We collected the network structural variables, degree of resilience (CD-RISC 10), and engagement (UWES-S). The computer programs used were Excel, UCINET for network analysis, and SPSS for statistical analysis. The analysis revealed mean values of 28.61 for resilience, 2.98 for absorption, 4.82 for dedication, and 3.13 for vigour. The students had two preferred places for sharing information: the classroom and WhatsApp. The greater the value for engagement, the greater the degree of centrality in the friendship network among students who are beginning their university studies. This relationship is reversed as the students move to later academic years. In terms of resilience, the highest values correspond to greater centrality in the friendship networks. The variables of engagement and resilience influenced the university students' support networks.
Transportation networks : data, analysis, methodology development and visualization.
DOT National Transportation Integrated Search
2007-12-29
This project provides data compilation, analysis methodology and visualization methodology for the current network data assets of the Alabama Department of Transportation (ALDOT). This study finds that ALDOT is faced with a considerable number of...
Igras, Susan; Diakité, Mariam; Lundgren, Rebecka
2017-07-01
In West Africa, social factors influence whether couples with unmet need for family planning act on birth-spacing desires. Tékponon Jikuagou is testing a social network-based intervention to reduce social barriers by diffusing new ideas. Individuals and groups judged socially influential by their communities provide entrée to networks. A participatory social network mapping methodology was designed to identify these diffusion actors. Analysis of monitoring data, in-depth interviews, and evaluation reports assessed the methodology's acceptability to communities and staff and whether it produced valid, reliable data to identify influential individuals and groups who diffuse new ideas through their networks. Results indicated the methodology's acceptability. Communities were actively and equitably engaged. Staff appreciated its ability to yield timely, actionable information. The mapping methodology also provided valid and reliable information by enabling communities to identify highly connected and influential network actors. Consistent with social network theory, this methodology resulted in the selection of informal groups and individuals in both informal and formal positions. In-depth interview data suggest these actors were diffusing new ideas, further confirming their influence/connectivity. The participatory methodology generated insider knowledge of who has social influence, challenging commonly held assumptions. Collecting and displaying information fostered staff and community learning, laying groundwork for social change.
Using expression genetics to study the neurobiology of ethanol and alcoholism.
Farris, Sean P; Wolen, Aaron R; Miles, Michael F
2010-01-01
Recent simultaneous progress in human and animal model genetics and the advent of microarray whole genome expression profiling have produced prodigious data sets on genetic loci, potential candidate genes, and differential gene expression related to alcoholism and ethanol behaviors. Validated target genes or gene networks functioning in alcoholism remain scarce. Genetical genomics, which combines genetic analysis of both traditional phenotypes and whole genome expression data, offers a potential methodology for characterizing brain gene networks functioning in alcoholism. This chapter will describe concepts, approaches, and recent findings in the field of genetical genomics as it applies to alcohol research. Copyright 2010 Elsevier Inc. All rights reserved.
Haile, Sarah R; Guerra, Beniamino; Soriano, Joan B; Puhan, Milo A
2017-12-21
Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC), which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. Our large-scale external validation indicates that the scores with the best discriminative properties to predict 3-year mortality in patients with COPD are ADO and eBODE.
Detecting switching and intermittent causalities in time series
NASA Astrophysics Data System (ADS)
Zanin, Massimiliano; Papo, David
2017-04-01
During the last decade, complex network representations have emerged as a powerful instrument for describing the cross-talk between different brain regions both at rest and as subjects are carrying out cognitive tasks, in healthy brains and neurological pathologies. The transient nature of such cross-talk has nevertheless by and large been neglected, mainly due to the inherent limitations of some metrics, e.g., causality metrics, which require a long time series in order to yield statistically significant results. Here, we present a methodology to account for intermittent causal coupling in neural activity, based on the identification of non-overlapping windows within the original time series in which the causality is strongest. The result is a less coarse-grained assessment of the time-varying properties of brain interactions, which can be used to create a high temporal resolution time-varying network. We apply the proposed methodology to the analysis of the brain activity of control subjects and alcoholic patients performing an image recognition task. Our results show that short-lived, intermittent, local-scale causality is better at discriminating both groups than global network metrics. These results highlight the importance of the transient nature of brain activity, at least under some pathological conditions.
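A hedged sketch of the windowing idea described above: split two signals into non-overlapping windows and compute a Granger-style F statistic in each, so that windows with strong coupling stand out. The signals, window length, and lag order are invented, and the authors' actual causality metric may differ.

```python
# Windowed Granger-style causality: restricted (own lags) vs full
# (own + x lags) regression in each non-overlapping window.
import numpy as np

rng = np.random.default_rng(3)
n, lag, win = 2000, 2, 200
x = rng.normal(size=n)
y = rng.normal(size=n) * 0.1
y[n // 2:] += 0.8 * x[n // 2 - 1:-1]     # coupling switches on halfway through

def granger_f(xw, yw, p):
    """F statistic comparing the restricted and full lagged regressions."""
    T = len(yw)
    Y = yw[p:]
    own = np.column_stack([yw[p - k:T - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [xw[p - k:T - k] for k in range(1, p + 1)])
    rss_r = np.sum((Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]) ** 2)
    dof = len(Y) - full.shape[1]
    return (rss_r - rss_f) / p / (rss_f / dof)

for start in range(0, n, win):
    f = granger_f(x[start:start + win], y[start:start + win], lag)
    print(f"window {start:4d}-{start + win:4d}: F = {f:7.2f}")
```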
Sand/cement ratio evaluation on mortar using neural networks and ultrasonic transmission inspection.
Molero, M; Segura, I; Izquierdo, M A G; Fuente, J V; Anaya, J J
2009-02-01
The quality and degradation state of building materials can be determined by nondestructive testing (NDT). These materials are composed of a cementitious matrix and particles or fragments of aggregates. The sand/cement (s/c) ratio determines the final material quality; however, the sand content can mask the matrix properties in a nondestructive measurement. Therefore, s/c ratio estimation is needed in nondestructive characterization of cementitious materials. In this study, a methodology to classify the sand content in mortar is presented. The methodology is based on ultrasonic transmission inspection, data reduction and feature extraction by principal component analysis (PCA), and neural network classification. This evaluation is carried out with several mortar samples, which were made taking into account different cement types and s/c ratios. The s/c ratio is estimated from ultrasonic spectral attenuation measured with three different broadband transducers (0.5, 1, and 2 MHz). Statistical PCA was applied to reduce the dimension of the captured traces. Feed-forward neural networks (NNs) are trained using principal components (PCs), and their outputs are used to display the estimated s/c ratios in false color images, showing the s/c ratio distribution of the mortar samples.
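The processing chain described above (attenuation spectra, PCA feature extraction, feed-forward neural network classification) can be sketched with synthetic data; the fake spectra below merely mimic a slope that grows with sand content and are not the paper's measurements.

```python
# Synthetic spectra -> PCA -> feed-forward NN classifying the s/c class.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
freqs = np.linspace(0.2, 2.0, 128)                 # MHz axis of each trace

def synthetic_spectrum(sc_class):
    """Fake attenuation spectrum whose slope grows with sand content."""
    slope = 2.0 + 1.5 * sc_class
    return slope * freqs + rng.normal(scale=0.4, size=freqs.size)

classes = rng.integers(0, 3, size=300)             # three s/c ratio classes
spectra = np.array([synthetic_spectrum(c) for c in classes])

model = make_pipeline(PCA(n_components=5),
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                    random_state=0))
model.fit(spectra[:200], classes[:200])
print("held-out accuracy:", model.score(spectra[200:], classes[200:]))
```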
Toroody, Ahmad Bahoo; Abaei, Mohammad Mahdy; Gholamnia, Reza
2016-12-01
Risk assessment can be classified into two broad categories: traditional and modern. This paper contrasts the functional resonance analysis method (FRAM), a modern approach, with fault tree analysis (FTA), a traditional method, in assessing the risks of a complex system. The methodology by which risk assessment is carried out is presented for each approach. Also, a FRAM network is executed with regard to the nonlinear interaction of human and organizational levels to assess the safety of technological systems. The methodology is implemented for deep-offshore lifting of structures. The main finding of this paper is that the combined application of FTA and FRAM during risk assessment could provide complementary perspectives and may contribute to a more comprehensive understanding of an incident. Finally, it is shown that coupling a FRAM network with a suitable quantitative method results in a plausible outcome for a predefined accident scenario.
Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim
2016-01-01
The average topological overlap of two graphs of two consecutive time steps measures the amount of change in the edge configuration between the two snapshots. This value has to be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, this methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaptation of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaptation uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared with the help of illustrative example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
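A sketch of the adapted coefficient as described: per-node topological overlap between consecutive snapshots, averaged over active nodes (those with at least one edge) rather than over all N nodes. Treating a node as active if it has an edge in either snapshot is an assumption of this sketch, and the toy adjacency matrices are invented.

```python
# Adapted temporal correlation: average per-node overlap over active nodes.
import numpy as np

def temporal_correlation(A_t, A_t1):
    num = (A_t * A_t1).sum(axis=1)
    den = np.sqrt(A_t.sum(axis=1) * A_t1.sum(axis=1))
    overlap = np.divide(num, den, out=np.zeros_like(den, dtype=float),
                        where=den > 0)
    active = (A_t.sum(axis=1) + A_t1.sum(axis=1)) > 0   # edge in either step
    return overlap[active].mean() if active.any() else 0.0

A_t = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
A_t1 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(f"adapted temporal correlation = {temporal_correlation(A_t, A_t1):.3f}")
```

Note that node 3, which has no edges in either snapshot, does not dilute the average, which is the point of the adaptation.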
The use of hierarchical clustering for the design of optimized monitoring networks
NASA Astrophysics Data System (ADS)
Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji
2018-05-01
Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, where R is the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. We have also shown that our approach is capable of identifying different sampling methodologies as well as outliers (station time series that are markedly different from all others in a given dataset).
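A hedged sketch of the clustering step with both dissimilarity metrics used in the study; the synthetic "station" series below stand in for Kolmogorov-Zurbenko-filtered NO2/SO2 records.

```python
# Average-linkage hierarchical clustering of station time series under the
# 1 - R (Pearson) and Euclidean dissimilarity metrics.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(11)
t = np.arange(500)
base_a = np.sin(2 * np.pi * t / 24)                 # one diurnal source region
base_b = np.sin(2 * np.pi * t / 24 + 2.0)           # phase-shifted region
stations = np.array([base_a + 0.3 * rng.normal(size=t.size) for _ in range(5)]
                    + [base_b + 0.3 * rng.normal(size=t.size) for _ in range(5)])

for metric in ("correlation", "euclidean"):         # 'correlation' = 1 - R
    d = pdist(stations, metric=metric)
    labels = fcluster(linkage(d, method="average"), t=2, criterion="maxclust")
    print(f"{metric:11s} clusters: {labels}")
```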
NASA Astrophysics Data System (ADS)
Gotoda, Hiroshi; Kinugawa, Hikaru; Tsujimoto, Ryosuke; Domen, Shohei; Okuno, Yuta
2017-04-01
Complex-network theory has attracted considerable attention for nearly a decade, and it has deepened our understanding of nonlinear dynamics in complex systems across a wide range of fields, including applied physics and mechanical, chemical, and electrical engineering. We conduct an experimental study using a pragmatic online detection methodology based on complex-network theory to prevent a limiting unstable state such as blowout in a confined turbulent combustion system. This study introduces a modified version of the natural visibility algorithm based on the idea of a visibility limit to serve as a pragmatic online detector. The average degree of the modified natural visibility graph allows us to detect the onset of blowout, enabling online prevention.
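A minimal sketch, not the authors' detector, of a natural visibility graph restricted by a visibility limit L, whose mean degree serves as the online indicator; the test signals and the limit value are invented.

```python
# Natural visibility graph with a visibility limit: node i "sees" node j only
# if j is within `limit` samples and the connecting line clears all
# intermediate samples (Lacasa-style visibility criterion).
import numpy as np

def visibility_degree(x, limit):
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, min(i + limit + 1, n)):
            if all(x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg.mean()

rng = np.random.default_rng(5)
stable = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.normal(size=500)
bursty = stable + (rng.random(500) < 0.05) * rng.normal(3, 1, 500)
print("mean degree, stable :", visibility_degree(stable, limit=20))
print("mean degree, bursty :", visibility_degree(bursty, limit=20))
```

In an online setting the same statistic would be recomputed over a sliding buffer, with a drop or jump in mean degree flagging the approach to an unstable state.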
Emerging Concepts and Methodologies in Cancer Biomarker Discovery.
Lu, Meixia; Zhang, Jinxiang; Zhang, Lanjing
2017-01-01
Cancer biomarker discovery is a critical part of cancer prevention and treatment. Despite decades of effort, only a small number of cancer biomarkers have been identified for and validated in clinical settings. Conceptual and methodological breakthroughs may help accelerate the discovery of additional cancer biomarkers, particularly their use for diagnostics. In this review, we have attempted to review the emerging concepts in cancer biomarker discovery, including real-world evidence, open access data, and data paucity in rare or uncommon cancers. We have also summarized the recent methodological progress in cancer biomarker discovery, such as high-throughput sequencing, liquid biopsy, big data, artificial intelligence (AI), and deep learning and neural networks. Much attention has been given to the methodological details and comparison of the methodologies. Notably, these concepts and methodologies interact with each other and will likely lead to synergistic effects when carefully combined. Newer, more innovative concepts and methodologies are emerging as current ones become mainstream and widely applied in the field. Some future challenges are also discussed. This review aims to inform future theoretical frameworks and technologies in cancer biomarker discovery and to support the discovery of more useful cancer biomarkers.
Lamontagne, Marie-Eve
2013-01-01
Integration is a popular strategy to increase the quality of care within systems of care. However, there is no common language, approach or tool allowing for a valid description, comparison and evaluation of integrated care. Social network analysis could be a viable methodology to provide an objective picture of integrated networks. Our objective is to illustrate the use of social network analysis in the context of systems of care for traumatic brain injury. We surveyed members of a network using a validated questionnaire to determine the links between them. We determined the density, centrality, multiplexity, and quality of the links reported. The network was described as moderately dense (0.6), the most prevalent link was knowledge, and four member organisations of a consortium were central to the network. Social network analysis allowed us to create a graphic representation of the network. Social network analysis is a useful methodology to objectively characterise integrated networks.
Hoffman, P; Kline, E; George, L; Price, K; Clark, M; Walasin, R
1995-01-01
The Military Health Service System (MHSS) provides health care for the Department of Defense (DOD). This system operates on an annual budget of $15 billion, supports 127 medical treatment facilities (MTFs) and 500 clinics, and provides support to 8.7 million beneficiaries worldwide. To support these facilities and their patients, the MHSS uses more than 125 different networked automated medical systems. These systems rely on a heterogeneous telecommunications infrastructure for data communications. With the support of the Defense Medical Information Management (DMIM) Program Office, our goal was to identify the network requirements for DMIM migration and target systems and design a communications infrastructure to support all systems with an integrated network. This work used tools from Business Process Reengineering (BPR) and applied them to communications infrastructure design for the first time. The methodology and results are applicable to any health care enterprise, military or civilian.
Topology of Innovation Spaces in the Knowledge Networks Emerging through Questions-And-Answers
Andjelković, Miroslav; Tadić, Bosiljka; Mitrović Dankulov, Marija; Rajković, Milan; Melnik, Roderick
2016-01-01
The communication processes of knowledge creation represent a particular class of human dynamics where the expertise of individuals plays a substantial role, thus offering a unique possibility to study the structure of knowledge networks from online data. Here, we use the empirical evidence from questions-and-answers in mathematics to analyse the emergence of the network of knowledge contents (or tags) as the individual experts use them in the process. After removing extra edges from the network-associated graph, we apply the methods of algebraic topology of graphs to examine the structure of higher-order combinatorial spaces in networks for four consecutive time intervals. We find that the ranking distributions of the suitably scaled topological dimensions of nodes fall into a unique curve for all time intervals and filtering levels, suggesting a robust architecture of knowledge networks. Moreover, these networks preserve the logical structure of knowledge within emergent communities of nodes, labeled according to a standard mathematical classification scheme. Further, we investigate the appearance of new contents over time and their innovative combinations, which expand the knowledge network. In each network, we identify an innovation channel as a subgraph of triangles and larger simplices to which new tags attach. Our results show that the increasing topological complexity of the innovation channels contributes to the network's architecture over different time periods, and is consistent with temporal correlations of the occurrence of new tags. The methodology applies to a wide class of data with suitable temporal resolution and clearly identified knowledge-content units. PMID:27171149
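One way to make the node-level topological dimensions concrete is to treat maximal cliques of the tag network as simplices, as sketched below on an invented tag co-occurrence graph; this clique-complex reading is an illustrative assumption, not the authors' exact filtration.

```python
# For every node, the dimension of the largest maximal clique (simplex)
# containing it, as a simple stand-in for its topological dimension.
import networkx as nx

G = nx.Graph([("limits", "series"), ("series", "convergence"),
              ("limits", "convergence"), ("series", "integral"),
              ("integral", "convergence"), ("groups", "rings")])

dim = {n: 0 for n in G}
for clique in nx.find_cliques(G):        # maximal cliques = maximal simplices
    for n in clique:
        dim[n] = max(dim[n], len(clique) - 1)

for n, d in sorted(dim.items(), key=lambda kv: -kv[1]):
    print(f"{n:12s} max simplex dimension = {d}")
```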
3-D Survey Applied to Industrial Archaeology by Tls Methodology
NASA Astrophysics Data System (ADS)
Monego, M.; Fabris, M.; Menin, A.; Achilli, V.
2017-05-01
This work describes the three-dimensional survey of the "Ex Stazione Frigorifera Specializzata": initially used for agricultural storage, the building was put to different uses over the years until it was completely neglected. Its historical relevance and the architectural heritage it represents prompted a recent renovation and functional restoration project. This required a global 3-D survey based on the application and integration of different geomatic methodologies (mainly terrestrial laser scanning, classical topography, and GNSS). The point clouds were acquired using different laser scanners, with time-of-flight (TOF) and phase-shift technologies for the distance measurements. The topographic reference network, needed to align the scans in the same system, was measured with a total station. For the complete survey of the building, 122 scans were acquired and 346 targets were measured from 79 vertices of the reference network; 3 further vertices were measured with GNSS in order to georeference the network. For the detailed survey of the machine room, 14 scans with 23 targets were acquired. The global 3-D model of the building has an alignment error below one centimetre (for the machine room the alignment error does not exceed 6 mm) and was used to extract products such as longitudinal and transversal sections, plans, architectural perspectives, and virtual scans. The processed data provide complete spatial knowledge of the building, supplying basic information for the restoration project, structural analysis, and the valorization of industrial and architectural heritage.
NASA Astrophysics Data System (ADS)
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, where faults are only partially visible and many are missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show that the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.
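The abstract does not spell out the sampler, but the flavor of a Strauss-process prior is easy to convey. A minimal sketch (an unmarked Strauss process on the unit square, illustrative parameters beta, gamma and interaction radius r, and a standard birth-death Metropolis-Hastings scheme; the paper's marked version would additionally sample fault orientations and sizes):

import numpy as np

rng = np.random.default_rng(0)

def n_close(p, pts, r):
    # number of existing points within interaction distance r of p
    return sum(np.hypot(p[0] - q[0], p[1] - q[1]) < r for q in pts)

def strauss_sample(beta=100.0, gamma=0.3, r=0.05, n_steps=20000):
    # Birth-death Metropolis-Hastings for a Strauss process on [0, 1]^2:
    # density proportional to beta**n * gamma**s(x), s(x) = #pairs closer than r.
    pts = []
    for _ in range(n_steps):
        if rng.random() < 0.5 or not pts:              # propose a birth
            p = tuple(rng.random(2))
            if rng.random() < beta * gamma ** n_close(p, pts, r) / (len(pts) + 1):
                pts.append(p)
        else:                                          # propose a death
            i = rng.integers(len(pts))
            rest = pts[:i] + pts[i + 1:]
            if rng.random() < len(pts) / (beta * gamma ** n_close(pts[i], rest, r)):
                pts = rest
    return np.array(pts)

faults = strauss_sample()  # gamma < 1 penalizes close pairs: regular, fault-like spacing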
Research on energy stock market associated network structure based on financial indicators
NASA Astrophysics Data System (ADS)
Xi, Xian; An, Haizhong
2018-01-01
A financial market is a complex system consisting of many interacting units. In general, due to the various types of information exchange within the industry, there are relationships between stocks that reveal clear structural characteristics. Complex network methods are powerful tools for studying the internal structure and function of the stock market, allowing us to understand it better. Applying complex network methodology, a stock associated network model based on financial indicators is created. Accordingly, we set a threshold value and use modularity to detect communities in the network, and we analyze the network structure and community cluster characteristics for different threshold values. The study finds that a threshold value of 0.7 is the abrupt change point of the network. At the same time, as the threshold value increases, the independence of the communities strengthens. This study provides a method for researching the stock market based on financial indicators, exploring the structural similarity of stocks' financial indicators. It also provides guidance for investment and corporate financial management.
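As a sketch of this kind of pipeline (not the authors' code: the similarity measure, the synthetic data and the greedy modularity algorithm are illustrative assumptions):

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

X = np.random.rand(50, 12)   # financial-indicator matrix, one row per stock (synthetic)
corr = np.corrcoef(X)        # pairwise similarity of indicator profiles

threshold = 0.7              # the abrupt-change point reported in the paper
G = nx.Graph()
G.add_nodes_from(range(len(corr)))
for i in range(len(corr)):
    for j in range(i + 1, len(corr)):
        if corr[i, j] >= threshold:
            G.add_edge(i, j)

communities = list(greedy_modularity_communities(G))  # modularity-based clusters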
NASA Astrophysics Data System (ADS)
Campanelli, Monica; Mascitelli, Alessandra; Sanò, Paolo; Diémoz, Henri; Estellés, Victor; Federico, Stefano; Iannarelli, Anna Maria; Fratarcangeli, Francesca; Mazzoni, Augusto; Realini, Eugenio; Crespi, Mattia; Bock, Olivier; Martínez-Lozano, Jose A.; Dietrich, Stefano
2018-01-01
The estimation of the precipitable water vapour content (W) with high temporal and spatial resolution is of great interest to both meteorological and climatological studies. Several methodologies based on remote sensing techniques have recently been developed in order to obtain accurate and frequent measurements of this atmospheric parameter. Among them, the relatively low cost and easy deployment of sun-sky radiometers, or sun photometers, operating in several international networks, have allowed the development of automatic estimations of W from these instruments with high temporal resolution. The main difficulty of this methodology, however, is the estimation of the sun-photometric calibration parameters. The objective of this paper is to validate a new methodology based on the hypothesis that the calibration parameters characterizing the atmospheric transmittance at 940 nm depend on the vertical profiles of temperature, air pressure and moisture typical of each measurement site. To obtain the calibration parameters, some simultaneous seasonal measurements of W from independent sources, taken over a large range of solar zenith angles and covering a wide range of W, are needed. In this work, yearly GNSS/GPS datasets were used to obtain a table of photometric calibration constants, and the methodology was applied and validated at three European ESR-SKYNET network sites characterized by different atmospheric and climatic conditions: Rome, Valencia and Aosta. Results were validated against GNSS/GPS and AErosol RObotic NETwork (AERONET) W estimations. In both validations the agreement was very high, with a percentage RMSD of about 6, 13 and 8% for the GPS intercomparison at Rome, Aosta and Valencia, respectively, and of 8% for the AERONET comparison at Valencia. Analysing the results by W classes, the present methodology was found to clearly improve W estimation at low W content when compared against AERONET in terms of percentage bias, bringing the agreement with GPS (considered the reference) from a bias of 5.76% to 0.52%.
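The calibration idea can be sketched with the commonly used three-parameter 940 nm transmittance model T = exp(-a(mW)^b); the model form, parameter values and synthetic data below are illustrative assumptions, not the paper's exact setup:

import numpy as np
from scipy.optimize import curve_fit

def ln_signal(X, lnV0, a, b):
    # ln V = ln V0 - a * (m * W)**b : Langley-type water-vapour channel model
    m, W = X
    return lnV0 - a * (m * W) ** b

# hypothetical coincident data: air mass m, GNSS/GPS water vapour W, voltages V
m = np.linspace(1.0, 6.0, 40)
W = np.random.uniform(0.5, 4.0, 40)
V = np.exp(ln_signal((m, W), 7.0, 0.6, 0.55)) * np.random.lognormal(0.0, 0.01, 40)

(lnV0, a, b), _ = curve_fit(ln_signal, (m, W), np.log(V), p0=[7.0, 0.5, 0.5])
W_retrieved = ((lnV0 - np.log(V)) / a) ** (1 / b) / m   # invert the fit to retrieve W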
Learning representations for the early detection of sepsis with deep neural networks.
Kam, Hye Jin; Kim, Ha Young
2017-10-01
Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the areas under the ROC curve (AUC) were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied, and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.
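A minimal sketch of such a sequence classifier (PyTorch is an assumption; the paper's architecture, feature count and hyperparameters are not given in the abstract):

import torch
import torch.nn as nn

class SepsisLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time steps, vital-sign features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # sepsis logit from the last time step

model = SepsisLSTM(n_features=9)      # 9 channels is purely illustrative
loss_fn = nn.BCEWithLogitsLoss()      # binary label: early sepsis vs. not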
Analysis of optimal phenotypic space using elementary modes as applied to Corynebacterium glutamicum
Gayen, Kalyan; Venkatesh, KV
2006-01-01
Background: Quantification of the metabolic network of an organism offers insights into possible ways of developing mutant strains for better productivity of an extracellular metabolite. The first step in this quantification is the enumeration of the stoichiometries of all reactions occurring in the metabolic network. The structural details of the network, in combination with experimentally observed accumulation rates of external metabolites, can yield the flux distribution at steady state. One such methodology for quantification is the use of elementary modes, which are minimal sets of enzymes connecting external metabolites. Here, we have used a linear objective function, subject to the elementary modes as constraints, to determine the fluxes in the metabolic network of Corynebacterium glutamicum. The feasible phenotypic space was evaluated at various combinations of oxygen and ammonia uptake rates. Results: Quantification of the fluxes of the elementary modes in the metabolism of C. glutamicum was formulated as a linear program. The analysis demonstrated that the solution depended on the choice of objective function when fewer than four accumulation rates of the external metabolites were considered. The analysis yielded feasible ranges of fluxes of elementary modes that satisfy the experimental accumulation rates. In C. glutamicum, the elementary modes relating to biomass synthesis through glycolysis and the TCA cycle were predominantly operational in the initial growth phase. At a later time, the elementary modes contributing to lysine synthesis became active. The oxygen and ammonia uptake rates were shown to be bounded in the phenotypic space due to the stoichiometric constraints of the elementary modes. Conclusion: We have demonstrated the use of elementary modes and linear programming to quantify a metabolic network. We have used the methodology to quantify the network of C. glutamicum, evaluating the set of operational elementary modes at different phases of fermentation. The methodology was also used to determine the feasible solution space for a given set of substrate uptake rates under specific optimization criteria. Such an approach can be used to determine the optimality of the accumulation rates of any metabolite in a given network. PMID:17038164
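The quantification step reduces to a linear program over nonnegative elementary-mode weights; a toy sketch with a hypothetical mode matrix and measured rates (not the C. glutamicum network):

import numpy as np
from scipy.optimize import linprog

# E[i, j]: net production of external metabolite i by elementary mode j (hypothetical)
E = np.array([[1.0, 0.0, 2.0],
              [0.5, 1.0, 0.0]])
rates = np.array([3.0, 1.5])      # measured accumulation rates of the two metabolites
c = np.array([0.0, 0.0, 1.0])     # objective: e.g. flux through a lysine-producing mode

res = linprog(-c, A_eq=E, b_eq=rates, bounds=[(0, None)] * 3)  # maximize c @ w
print(res.x)                      # weight (flux) carried by each elementary mode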
Amezquita-Sanchez, Juan P; Adeli, Anahita; Adeli, Hojjat
2016-05-15
Mild cognitive impairment (MCI) is a cognitive disorder characterized by memory impairment greater than expected for age. A new methodology is presented to identify MCI patients during a working memory task using MEG signals. The methodology consists of four steps: In step 1, the complete ensemble empirical mode decomposition (CEEMD) is used to decompose the MEG signal into a set of adaptive sub-bands according to its contained frequency information. In step 2, a nonlinear dynamics measure based on permutation entropy (PE) analysis is employed to analyze the sub-bands and detect features to be used for MCI detection. In step 3, an analysis of variance (ANOVA) is used for feature selection. In step 4, the enhanced probabilistic neural network (EPNN) classifier is applied to the selected features to distinguish between MCI and healthy patients. The usefulness and effectiveness of the proposed methodology are validated using the sensed MEG data obtained experimentally from 18 MCI and 19 control patients. Copyright © 2016 Elsevier B.V. All rights reserved.
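The feature at the heart of step 2 is compact enough to sketch; a standard Bandt-Pompe permutation entropy (illustrative, not necessarily the authors' exact implementation):

import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    # Bandt-Pompe: entropy of the distribution of ordinal patterns of length `order`
    x = np.asarray(x)
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(factorial(order))  # normalized to [0, 1]

pe = permutation_entropy(np.random.randn(1000))  # one value per CEEMD sub-band in step 2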
Martins, Marcelo Ramos; Schleder, Adriana Miralles; Droguett, Enrique López
2014-12-01
This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distribution. This makes the methodology particularly useful in cases where the available data for quantification of hazardous event probabilities are scarce or nonexistent, where there is dependence among events, or where nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping, and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty. © 2014 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Bailly, J. S.; Delenne, C.; Chahinian, N.; Bringay, S.; Commandré, B.; Chaumont, M.; Derras, M.; Deruelle, L.; Roche, M.; Rodriguez, F.; Subsol, G.; Teisseire, M.
2017-12-01
In France, local government institutions must establish a detailed description of their wastewater networks. The information should be available, but it remains fragmented (different formats held by different stakeholders) and incomplete. In the "Cart'Eaux" project, a multidisciplinary team, including an industrial partner, is developing a global methodology that applies machine learning and data mining approaches to various types of large data to recover information, with the aim of mapping urban sewage systems for hydraulic modelling. Deep learning is first applied, using a convolutional neural network, to localize manhole covers on 5 cm resolution aerial RGB images. The detected manhole covers are then automatically connected using a tree-shaped graph constrained by industry rules. Based on a Delaunay triangulation, connections are chosen to minimize a cost function depending on pipe length, slope and possible intersections with roads or buildings. A stochastic version of this algorithm is currently being developed to account for positional uncertainty and detection errors, and to generate sets of probable networks. As more information is required for hydraulic modelling (slopes, diameters, materials, etc.), text data mining is used to extract network characteristics from data posted on the web or available through governmental or specific databases. Using an appropriate list of keywords, the web is scoured for documents, which are saved in text format. The thematic entities are identified and linked to the surrounding spatial and temporal entities. The methodology is developed and tested on two towns in southern France. The primary results are encouraging: 54% of manhole covers are detected with few false detections, enabling the reconstruction of probable networks. The data mining results are still being investigated. It is clear at this stage that getting numerical values on specific pipes will be challenging. Thus, when no information is found, decision rules will be used to assign admissible numerical values to enable the final hydraulic modelling. Consequently, a sensitivity analysis of the hydraulic model will be performed to take into account the uncertainty associated with each piece of information. Project funded by the European Regional Development Fund and the Occitanie Region.
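The connection step can be sketched as follows (a simplification: the cost here is pipe length only, whereas the project also weighs slope and road/building crossings; the coordinates are hypothetical):

import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

covers = np.random.rand(20, 2) * 500          # detected manhole-cover positions, metres
tri = Delaunay(covers)

G = nx.Graph()
for simplex in tri.simplices:                 # Delaunay edges = candidate pipes
    for i, j in ((0, 1), (1, 2), (0, 2)):
        a, b = int(simplex[i]), int(simplex[j])
        G.add_edge(a, b, weight=float(np.linalg.norm(covers[a] - covers[b])))

network = nx.minimum_spanning_tree(G)         # tree-shaped network, minimal total cost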
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, Maria K., E-mail: cameron@math.umd.edu
We develop computational tools for spectral analysis of stochastic networks representing energy landscapes of atomic and molecular clusters. Physical meaning and some properties of eigenvalues, left and right eigenvectors, and eigencurrents are discussed. We propose an approach to compute a collection of eigenpairs and corresponding eigencurrents describing the most important relaxation processes taking place in the system on its way to the equilibrium. It is suitable for large and complex stochastic networks where pairwise transition rates, given by the Arrhenius law, vary by orders of magnitude. The proposed methodology is applied to the network representing the Lennard-Jones-38 cluster created by Wales's group. Its energy landscape has a double funnel structure with a deep and narrow face-centered cubic funnel and a shallower and wider icosahedral funnel. However, the complete spectrum of the generator matrix of the Lennard-Jones-38 network has no appreciable spectral gap separating the eigenvalue corresponding to the escape from the icosahedral funnel. We provide a detailed description of the escape process from the icosahedral funnel using the eigencurrent and demonstrate a superexponential growth of the corresponding eigenvalue. The proposed spectral approach is compared to the methodology of the Transition Path Theory. Finally, we discuss whether the Lennard-Jones-38 cluster is metastable from the points of view of a mathematician and a chemical physicist, and make a connection with experimental works.
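The basic objects are easy to demonstrate on a toy landscape: build the generator from Arrhenius rates and inspect its slowest eigenpairs. The energies, barriers and temperature below are made up (the real network has thousands of minima):

import numpy as np
from scipy.linalg import eig

E = np.array([0.0, 0.5, 0.2])                  # energies of three minima (toy values)
B = np.array([[np.inf, 1.0, 0.8],              # B[i, j]: barrier between minima i and j
              [1.0, np.inf, 0.9],
              [0.8, 0.9, np.inf]])
T = 0.12                                       # temperature, reduced units

L = np.exp(-(B - E[:, None]) / T)              # Arrhenius rates; exp(-inf) = 0 on diagonal
np.fill_diagonal(L, 0.0)
np.fill_diagonal(L, -L.sum(axis=1))            # generator matrix: rows sum to zero

w, vl, vr = eig(L, left=True)                  # eigenvalues, left and right eigenvectors
order = np.argsort(np.abs(w.real))             # order[0] ~ equilibrium (eigenvalue ~ 0);
slowest_relaxation = w[order[1]]               # the next ones are the slowest relaxations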
Applications of artificial neural network in AIDS research and therapy.
Sardari, S; Sardari, D
2002-01-01
In recent years considerable effort has been devoted to applying pattern recognition techniques to the complex task of data analysis in drug research. Artificial neural network (ANN) methodology is a modeling approach with a great ability to adapt to a new situation, or to control an unknown system, using data acquired in previous experiments. In this paper, a brief history of ANNs, the basic concepts behind the computing, the mathematical and algorithmic formulation of each of the techniques, and their developmental background are presented. Based on the abilities of ANNs in pattern recognition and in estimating system outputs from known inputs, the neural network can be considered a tool for molecular data analysis and interpretation. Analysis by neural networks improves classification accuracy and data quantification, and reduces the number of analogues necessary for the correct classification of biologically active compounds. Conformational analysis, quantifying the components in mixtures using NMR spectra, aqueous solubility prediction and structure-activity correlation are among the reported applications of ANNs as a new modeling method. Ranging from drug design and discovery to structure and dosage form design, the potential pharmaceutical applications of the ANN methodology are significant. In the areas of clinical monitoring, utilization of molecular simulation and design of bioactive structures, ANNs would make it possible to study states of health and disease and bring predicted chemotherapeutic responses closer to reality.
On the methodology of Engineering Geodesy
NASA Astrophysics Data System (ADS)
Brunner, Fritz K.
2007-09-01
Textbooks on geodetic surveying usually describe a very small number of principles which should provide the foundation of geodetic surveying. Here, the author argues that an applied field, such as engineering geodesy, has a methodology as foundation rather than a few principles. Ten methodological elements (ME) are identified: (1) Point discretisation of natural surfaces and objects, (2) distinction between coordinate and observation domain, (3) definition of reference systems, (4) specification of unknown parameters and desired precisions, (5) geodetic network and observation design, (6) quality control of equipment, (7) quality control of measurements, (8) establishment of measurement models, (9) establishment of parameter estimation models, (10) quality control of results. Each ME consists of a suite of theoretical developments, geodetic techniques and calculation procedures, which will be discussed. This paper is to be considered a first attempt at identifying the specific elements of the methodology of engineering geodesy. A better understanding of this methodology could lead to an increased objectivity, to a transformation of subjective practical experiences into objective working methods, and consequently to a new structure for teaching this rather diverse subject.
Kamal, Noreen; Fels, Sidney
2013-01-01
Positive health behaviour is critical to preventing illness and managing chronic conditions. A user-centred methodology was employed to design an online social network to motivate health behaviour change. The methodology was augmented by utilizing the Appeal, Belonging, Commitment (ABC) Framework, which is based on theoretical models for health behaviour change and use of online social networks. The user-centred methodology included four phases: 1) initial user inquiry on health behaviour and use of online social networks; 2) interview feedback on paper prototypes; 3) a laboratory study on a medium-fidelity prototype; and 4) a field study on the high-fidelity prototype. The points of inquiry through these phases were based on the ABC Framework. This yielded an online social network system that linked to external third-party databases and was deployed to users via an interactive website.
Tracking the Evolution of Infrastructure Systems and Mass Responses Using Publicly Available Data
Guan, Xiangyang; Chen, Cynthia; Work, Dan
2016-01-01
Networks can evolve even on a short-term basis. This phenomenon is well understood by network scientists, but receives little attention in the empirical literature on real-world networks. On the one hand, this is due to the deceptively fixed topology of some networks, such as many physical infrastructures, whose evolution is often deemed unlikely to occur in the short term; on the other hand, a lack of data prevents scientists from studying subjects such as social networks that seem likely to evolve on a short-term basis. We show that both networks (the infrastructure network and the social network) are able to demonstrate evolutionary dynamics at the system level even in the short term, characterized by shifting between different phases as predicted in network science. We develop a methodology for tracking the evolutionary dynamics of the two networks by incorporating flows and the microstructure of networks, such as motifs. This approach is applied to the human interaction network and two transportation networks (subway and taxi) in the context of Hurricane Sandy, using publicly available Twitter data and transportation data. Our results show that significant changes in the system-level structure of networks can be detected on a continuous basis. This result provides a promising channel for real-time tracking in the future. PMID:27907061
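One way to sketch the motif-based microstructure tracking (the window length, the record format and the choice of a directed triad census are illustrative assumptions):

import networkx as nx

def motif_profiles(edges, window):
    # edges: (u, v, t) interaction records, e.g. Twitter mentions (hypothetical format)
    # returns one directed triad census per time window: a microstructure fingerprint
    times = [t for _, _, t in edges]
    t, t_end = min(times), max(times)
    profiles = []
    while t < t_end:
        G = nx.DiGraph((u, v) for u, v, s in edges if t <= s < t + window)
        profiles.append(nx.triadic_census(G))
        t += window
    return profiles

profiles = motif_profiles([("a", "b", 0), ("b", "c", 1), ("c", "a", 5)], window=2)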
Lamontagne, Marie-Eve
2013-01-01
Introduction: Integration is a popular strategy to increase the quality of care within systems of care. However, there is no common language, approach or tool allowing for a valid description, comparison and evaluation of integrated care. Social network analysis could be a viable methodology to provide an objective picture of integrated networks. Goal of the article: To illustrate the use of social network analysis in the context of systems of care for traumatic brain injury. Method: We surveyed members of a network using a validated questionnaire to determine the links between them. We determined the density, centrality, multiplexity, and quality of the links reported. Results: The network was moderately dense (0.6), the most prevalent link was knowledge, and four organisations, members of a consortium, were central to the network. Social network analysis allowed us to create a graphic representation of the network. Conclusion: Social network analysis is a useful methodology to objectively characterise integrated networks. PMID:24250281
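The reported measures are one-liners in standard tooling; a sketch with hypothetical organisations (networkx is an assumption, not the authors' software):

import networkx as nx

G = nx.Graph()                                   # organisations and survey-reported links
G.add_edges_from([("rehab_A", "hospital_B"), ("rehab_A", "clinic_C"),
                  ("hospital_B", "clinic_C"), ("clinic_C", "agency_D")])

density = nx.density(G)                          # 0..1: how saturated the network is
centrality = nx.degree_centrality(G)             # flags the central organisations
central_orgs = sorted(centrality, key=centrality.get, reverse=True)[:4]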
Cai, Rong-Lin; Shen, Guo-Ming; Wang, Hao; Guan, Yuan-Yuan
2018-01-01
Functional magnetic resonance imaging (fMRI) is a novel method for studying the changes in brain networks due to acupuncture treatment. In recent years, more and more studies have focused on the brain functional connectivity network under acupuncture stimulation. The aim was to offer an overview of the different influences of acupuncture on the brain functional connectivity network from studies using resting-state fMRI. The authors performed a systematic search according to PRISMA guidelines. The PubMed database was searched from January 1, 2006 to December 31, 2016, with restriction to human studies in the English language. Electronic searches were conducted in PubMed using the keywords "acupuncture" and "neuroimaging" or "resting-state fMRI" or "functional connectivity". Selection of included articles, data extraction and methodological quality assessments were each conducted by two review authors. Forty-four resting-state fMRI studies were included in this systematic review according to the inclusion criteria. Thirteen studies applied manual acupuncture vs. sham, four studies applied electro-acupuncture vs. sham, two studies also compared transcutaneous electrical acupoint stimulation vs. sham, and nine applied a sham acupoint as control. Nineteen studies, with a total of 574 healthy subjects selected to undergo fMRI, considered only healthy adult volunteers. The brain functional connectivity of the patients showed varying degrees of change. Compared with sham acupuncture, verum acupuncture could increase default mode network and sensorimotor network connectivity with pain-, affective- and memory-related brain areas. Genuine acupuncture showed significantly greater connectivity between the periaqueductal gray, anterior cingulate cortex, left posterior cingulate cortex, right anterior insula, limbic/paralimbic regions and precuneus compared with sham acupuncture. Some research has also shown that acupuncture can modulate the limbic-paralimbic-neocortical network, brainstem, cerebellum, subcortical areas and hippocampus. It can be presumed that the functional connectivity network is closely related to the mechanism of acupuncture, and that central integration plays a critical role in the acupuncture mechanism. Copyright © 2017 Shanghai Changhai Hospital. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
Plastic injection moulding is widely used in manufacturing a variety of parts. The injection moulding process parameters play an important role in the product's quality and productivity. Many approaches to minimising warpage and shrinkage have been addressed, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin-shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, a part of an electronic night lamp is chosen as the model. Firstly, an experimental design was used to determine the effect of the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.
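Response surface methodology amounts to fitting a low-order polynomial to designed experimental runs; a minimal sketch (two hypothetical factors and synthetic responses, not the paper's actual design):

import numpy as np

# 15 hypothetical runs: x1 = melt temperature, x2 = packing pressure (coded units)
x1, x2 = np.random.rand(2, 15)
y = 1.0 + 0.5 * x1 - 0.8 * x2 + 0.3 * x1 * x2 + np.random.normal(0, 0.05, 15)  # warpage

X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # second-order response-surface coefficients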
Differentiation of tea varieties using UV-Vis spectra and pattern recognition techniques
NASA Astrophysics Data System (ADS)
Palacios-Morillo, Ana; Alcázar, Ángela.; de Pablos, Fernando; Jurado, José Marcos
2013-02-01
Tea, one of the most consumed beverages all over the world, is of great importance to the economies of a number of countries. Several methods have been developed to classify tea varieties or origins based on pattern recognition techniques applied to chemical data, such as metal profiles, amino acids, catechins and volatile compounds. Some of these analytical methods are tedious and expensive to apply in routine work. The use of UV-Vis spectral data as discriminant variables, highly influenced by the chemical composition, can be an alternative to these methods. UV-Vis spectra of methanol-water extracts of tea have been obtained in the interval 250-800 nm. Absorbances have been used as input variables. Principal component analysis was used to reduce the number of variables, and several pattern recognition methods, such as linear discriminant analysis, support vector machines and artificial neural networks, have been applied in order to differentiate the most common tea varieties. A successful classification model was built by combining principal component analysis and multilayer perceptron artificial neural networks, allowing the differentiation between tea varieties. This rapid and simple methodology can be applied to solve classification problems in the food industry, saving economic resources.
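The PCA-plus-MLP combination maps directly onto a standard pipeline; a sketch (scikit-learn is an assumption; the arrays stand in for the absorbance matrix and variety labels):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(60, 551)        # absorbances over 250-800 nm, one row per extract
y = np.repeat([0, 1, 2], 20)       # three hypothetical variety labels

clf = make_pipeline(PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated classification accuracy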
NASA Technical Reports Server (NTRS)
Paul, Arthur S.; Gill, Tepper L.; Maclin, Arlene P.
1989-01-01
A study of NASA's Systems Management Policy (SMP) concluded that the primary methodology being used by the Mission Operations and Data Systems Directorate and its subordinate, the Networks Division, is very effective. Still, some unmet needs were identified. This study involved evaluating methodologies, tools, and techniques with the potential for resolving the previously identified deficiencies. Six preselected methodologies being used by other organizations with similar development problems were studied. The study revealed a wide range of significant differences in structure. Each system had some strengths, but none will satisfy all of the needs of the Networks Division. Areas for improvement of the methodology being used by the Networks Division are listed with recommendations for specific action.
Cho, Yongrae; Kim, Minsung
2014-01-01
The volatility and uncertainty in the process of technological development are growing faster than ever due to rapid technological innovation. Such phenomena result in integration among disparate technology fields. At this point, it is a critical research issue to understand the different roles and the propensity of each element technology for technological convergence. In particular, the network-based approach provides a holistic view of technological linkage structures. Furthermore, the development of new indicators based on network visualization can reveal the dynamic patterns among disparate technologies in the process of technological convergence and provide insights for future technological developments. This research attempts to analyze and discover the patterns of the international patent classification codes of the United States Patent and Trademark Office's patent data in printed electronics, a representative technology in the technological convergence process. To this end, we apply ideas from physics as a new methodological approach to interpreting technological convergence. More specifically, the concepts of entropy and gravity are applied to measure the activities among patent citations and the binding forces among heterogeneous technologies during technological convergence. By applying the entropy and gravity indexes, we could distinguish the characteristic role of each technology in printed electronics. At the technological convergence stage, each technology exhibits idiosyncratic dynamics which tend to decrease technological differences and heterogeneity. Furthermore, through nonlinear regression analysis, we have found decreasing patterns of disparity over the whole period studied in the evolution of technological convergence. This research has discovered the specific role of each element technology field and has consequently identified the co-evolutionary patterns of technological convergence. These new findings on the evolutionary patterns of technological convergence provide implications for engineering and technology foresight research, as well as for corporate strategy and technology policy. PMID:24914959
Portable parallel stochastic optimization for the design of aeropropulsion components
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Rhodes, G. S.
1994-01-01
This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The research recognizes that such design optimization problems are computationally expensive and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to begin the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as of portable, parallel programming environments. The second effort was to implement the MSO methodology for an example problem using the portable parallel programming language Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate that the MSO methodology can be applied well to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
Quantifiable and objective approach to organizational performance enhancement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholand, Andrew Joseph; Tausczik, Yla R.
This report describes a new methodology, social language network analysis (SLNA), that combines tools from social language processing and network analysis to identify socially situated relationships between individuals which, though subtle, are highly influential. Specifically, SLNA aims to identify and characterize the nature of working relationships by processing artifacts generated with computer-mediated communication systems, such as instant message texts or emails. Because social language processing is able to identify psychological, social, and emotional processes that individuals are not able to fully mask, social language network analysis can clarify and highlight complex interdependencies between group members, even when these relationships are latent or unrecognized. This report outlines the philosophical antecedents of SLNA, the mechanics of preprocessing, processing, and post-processing stages, and some example results obtained by applying this approach to a 15-month corporate discussion archive.
Network analysis for a network disorder: The emerging role of graph theory in the study of epilepsy.
Bernhardt, Boris C; Bonilha, Leonardo; Gross, Donald W
2015-09-01
Recent years have witnessed a paradigm shift in the study and conceptualization of epilepsy, which is increasingly understood as a network-level disorder. An emblematic case is temporal lobe epilepsy (TLE), the most common drug-resistant epilepsy that is electroclinically defined as a focal epilepsy and pathologically associated with hippocampal sclerosis. In this review, we will summarize histopathological, electrophysiological, and neuroimaging evidence supporting the concept that the substrate of TLE is not limited to the hippocampus alone, but rather is broadly distributed across multiple brain regions and interconnecting white matter pathways. We will introduce basic concepts of graph theory, a formalism to quantify topological properties of complex systems that has recently been widely applied to study networks derived from brain imaging and electrophysiology. We will discuss converging graph theoretical evidence indicating that networks in TLE show marked shifts in their overall topology, providing insight into the neurobiology of TLE as a network-level disorder. Our review will conclude by discussing methodological challenges and future clinical applications of this powerful analytical approach. Copyright © 2015 Elsevier Inc. All rights reserved.
Randomizing growing networks with a time-respecting null model
NASA Astrophysics Data System (ADS)
Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš
2018-05-01
Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
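The null model is compact to sketch: permuting the targets of edges created at the same time preserves every node's out-degree and in-degree time series exactly (duplicate edges and self-loops are ignored here for brevity):

import random
from collections import defaultdict

def time_respecting_randomization(edges):
    # edges: (citing, cited, creation time) triples
    by_time = defaultdict(list)
    for u, v, t in edges:
        by_time[t].append((u, v))
    randomized = []
    for t, batch in by_time.items():
        targets = [v for _, v in batch]
        random.shuffle(targets)             # permute targets among same-time edges
        randomized += [(u, v, t) for (u, _), v in zip(batch, targets)]
    return randomized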
Networking—a statistical physics perspective
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho; Saad, David
2013-03-01
Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly more complex, the ever increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve for gaining better understanding of the properties of networking systems at the macroscopic level, as well as for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches as they have been developed specifically to deal with nonlinear large-scale systems. This review aims at presenting an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications.
Vicentini, Federico; Pedrocchi, Nicola; Malosio, Matteo; Molinari Tosatti, Lorenzo
2014-09-01
Robot-assisted neurorehabilitation often involves networked systems of sensors ("sensory rooms") and powerful devices in physical interaction with weak users. Safety is unquestionably a primary concern. Some lightweight robot platforms and devices designed on purpose include safety properties through redundant sensors or intrinsically safe design (e.g. compliance and backdrivability, limited exchange of energy). Nonetheless, the entire "sensory room" is required to be fail-safe and safely monitored as a system at large. Yet, the sensor capabilities and control algorithms used in functional therapies require, in general, frequent updates or re-configurations, making a safety-grade release of such devices hardly sustainable in cost-effectiveness and development time. As a result, promising integrated platforms for human-in-the-loop therapies have not found clinical application and manufacturing support because global fail-safe properties could not be maintained. In the general context of cross-machinery safety standards, the paper presents a methodology called SafeNet for helping to extend the safety rating of Human Robot Interaction (HRI) systems using unsafe components, including sensors and controllers. SafeNet considers the robotic system as a device at large and applies the principles of functional safety (as in ISO 13849-1) through a set of architectural procedures and implementation rules. The enabled capability of monitoring a network of unsafe devices through redundant computational nodes allows the use of custom sensors and algorithms, usually planned and assembled at therapy planning-time rather than at platform design-time. A case study is presented with an actual implementation of the proposed methodology: a specific architectural solution is applied to an example of robot-assisted upper-limb rehabilitation with online motion tracking. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Rautureau, S; Dufour, B; Durand, B
2011-04-01
Besides farming, trade of livestock is a major component of the agricultural economy. However, the networks generated by live animal movements are the major support for the propagation of infectious agents between farms, and their structure strongly affects how fast a disease may spread. Structural characteristics may thus be indicators of network vulnerability to the spread of infectious disease. The method proposed here is based upon the analysis of specific subnetworks: the giant strongly connected components (GSCs). Their existence, size and geographic extent are used to assess network vulnerability. Their disappearance when targeted nodes are removed allows studying how network vulnerability may be controlled under emergency conditions. The method was applied to the cattle trade network in France, 2005. Giant strongly connected components were present and widely spread all over the country in yearly, monthly and weekly networks. Among several tested approaches, the most efficient way to make GSCs disappear was based on ranking nodes by decreasing betweenness centrality (the proportion of shortest paths between nodes on which a specific node lies). GSC disappearance was obtained after removal of <1% of network nodes. Under emergency conditions, suspending animal trade activities in a small subset of holdings may thus allow control of the spread of an infectious disease through the animal trade network. Nodes representing markets and dealers were widely affected by these simulated control measures. This confirms their importance as 'hubs' for infectious disease spread. Beyond emergency conditions, specific sensitization and preventive measures should be dedicated to this population. © 2010 Blackwell Verlag GmbH.
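The targeted-removal experiment is straightforward to sketch (networkx is an assumption; G stands for the directed trade network with holdings, markets and dealers as nodes):

import networkx as nx

def gscc_after_removal(G, fraction=0.01):
    # remove the top-betweenness nodes, then measure the giant strongly
    # connected component of what remains
    G = G.copy()
    n_remove = max(1, int(fraction * G.number_of_nodes()))
    ranked = sorted(nx.betweenness_centrality(G).items(),
                    key=lambda kv: kv[1], reverse=True)
    G.remove_nodes_from(n for n, _ in ranked[:n_remove])
    gscc = max(nx.strongly_connected_components(G), key=len, default=set())
    return len(gscc)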
On the sensitivity of geospatial low impact development locations to the centralized sewer network.
Zischg, Jonatan; Zeisl, Peter; Winkler, Daniel; Rauch, Wolfgang; Sitzenfrei, Robert
2018-04-01
In the future, infrastructure systems will have to become smarter, more sustainable, and more resilient requiring new methods of urban infrastructure design. In the field of urban drainage, green infrastructure is a promising design concept with proven benefits to runoff reduction, stormwater retention, pollution removal, and/or the creation of attractive living spaces. Such 'near-nature' concepts are usually distributed over the catchment area in small scale units. In many cases, these above-ground structures interact with the existing underground pipe infrastructure, resulting in hybrid solutions. In this work, we investigate the effect of different placement strategies for low impact development (LID) structures on hydraulic network performance of existing drainage networks. Based on a sensitivity analysis, geo-referenced maps are created which identify the most effective LID positions within the city framework (e.g. to improve network resilience). The methodology is applied to a case study to test the effectiveness of the approach and compare different placement strategies. The results show that with a simple targeted LID placement strategy, the flood performance is improved by an additional 34% as compared to a random placement strategy. The developed map is easy to communicate and can be rapidly applied by decision makers when deciding on stormwater policies.
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-04-01
System identification has been employed in numerous structural health monitoring (SHM) applications. Traditional system identification methods usually rely on centralized processing of structural response data to extract information on structural parameters. However, in wireless SHM systems the centralized processing of structural response data introduces a significant communication bottleneck. Exploiting the merits of decentralization and on-board processing power of wireless SHM systems, many system identification methods have been successfully implemented in wireless sensor networks. While several system identification approaches for wireless SHM systems have been proposed, little attention has been paid to obtaining information on the physical parameters (e.g. stiffness, damping) of the monitored structure. This paper presents a hybrid system identification methodology suitable for wireless sensor networks based on the principles of component mode synthesis (dynamic substructuring). A numerical model of the monitored structure is embedded into the wireless sensor nodes in a distributed manner, i.e. the entire model is segmented into sub-models, each embedded into one sensor node corresponding to the substructure the sensor node is assigned to. The parameters of each sub-model are estimated by extracting local mode shapes and by applying the equations of the Craig-Bampton method on dynamic substructuring. The proposed methodology is validated in a laboratory test conducted on a four-story frame structure to demonstrate the ability of the methodology to yield accurate estimates of stiffness parameters. Finally, the test results are discussed and an outlook on future research directions is provided.
Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry
2013-02-20
This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).
NASA Astrophysics Data System (ADS)
Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri
2016-04-01
The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, Citizen Observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, to improve hydraulic and hydrological models. At this point, the following question arises: how can citizens, who are part of a citizen observatory, be optimally guided so that the data they collect and send are useful to improve modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on information theory, which has been widely used in hydrometric monitoring design [1-4]. In particular, it uses the concept of joint entropy as a measure of the amount of information contained in a set of random variables, which, in our case, correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of simultaneously finding the minimum number of informative and non-redundant sensors needed at a given time, so that the best configuration of monitoring sites is found at every particular moment in time. To this end, existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, the UK and Italy and proves to have great potential to complement existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resources Research, 46(3), W03528. [2] Alfonso, L., A. Lobbrecht, and R. Price (2010b), Optimization of water level monitoring network in polder systems using information theory, Water Resources Research, 46(12), W12553, 10.1029/2009wr008953. [3] Alfonso, L., L. He, A. Lobbrecht, and R. Price (2013), Information theory applied to evaluate the discharge monitoring network of the Magdalena River, Journal of Hydroinformatics, 15(1), 211-228. [4] Alfonso, L., E. Ridolfi, S. Gaytan-Aguilar, F. Napolitano, and F. Russo (2014), Ensemble Entropy for Monitoring Network Design, Entropy, 16(3), 1365-1375.
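The greedy joint-entropy selection at the core of such designs can be sketched as follows (the discretized series and the plain greedy heuristic are illustrative; the project's actual formulation is multiobjective):

import numpy as np

def joint_entropy(series):
    # joint Shannon entropy of a set of discretized time series (one row per sensor)
    _, counts = np.unique(np.asarray(series).T, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def greedy_monitoring_set(candidates, k):
    # pick k locations maximizing joint entropy: informative, implicitly non-redundant
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(candidates)) if i not in chosen),
                   key=lambda i: joint_entropy([candidates[j] for j in chosen + [i]]))
        chosen.append(best)
    return chosen

candidates = [np.random.randint(0, 4, 500) for _ in range(12)]  # synthetic water levels
best_sites = greedy_monitoring_set(candidates, k=3)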
Parsing Social Network Survey Data from Hidden Populations Using Stochastic Context-Free Grammars
Poon, Art F. Y.; Brouwer, Kimberly C.; Strathdee, Steffanie A.; Firestone-Cruz, Michelle; Lozada, Remedios M.; Kosakovsky Pond, Sergei L.; Heckathorn, Douglas D.; Frost, Simon D. W.
2009-01-01
Background: Human populations are structured by social networks, in which individuals tend to form relationships based on shared attributes. Certain attributes that are ambiguous, stigmatized or illegal can create a 'hidden' population, so-called because its members are difficult to identify. Many hidden populations are also at an elevated risk of exposure to infectious diseases. Consequently, public health agencies are presently adopting modern survey techniques that traverse social networks in hidden populations by soliciting individuals to recruit their peers, e.g., respondent-driven sampling (RDS). The concomitant accumulation of network-based epidemiological data, however, is rapidly outpacing the development of computational methods for analysis. Moreover, current analytical models rely on unrealistic assumptions, e.g., that the traversal of social networks can be modeled by a Markov chain rather than a branching process. Methodology/Principal Findings: Here, we develop a new methodology based on stochastic context-free grammars (SCFGs), which are well-suited to modeling the tree-like structure of the RDS recruitment process. We apply this methodology to an RDS case study of injection drug users (IDUs) in Tijuana, México, a hidden population at high risk of blood-borne and sexually-transmitted infections (i.e., HIV, hepatitis C virus, syphilis). Survey data were encoded as text strings that were parsed using our custom implementation of the inside-outside algorithm in a publicly-available software package (HyPhy), which uses either expectation maximization or direct optimization methods and permits constraints on model parameters for hypothesis testing. We identified significant latent variability in the recruitment process that violates assumptions of Markov chain-based methods for RDS analysis: firstly, IDUs tended to emulate the recruitment behavior of their own recruiter; and secondly, the recruitment of like peers (homophily) was dependent on the number of recruits. Conclusions: SCFGs provide a rich probabilistic language that can articulate complex latent structure in survey data derived from the traversal of social networks. Such structure that has no representation in Markov chain-based models can interfere with the estimation of the composition of hidden populations if left unaccounted for, raising critical implications for the prevention and control of infectious disease epidemics. PMID:19738904
Metabolomics analysis: Finding out metabolic building blocks
2017-01-01
In this paper we propose a new methodology for the analysis of metabolic networks. We use the notion of strongly connected components of a graph, called in this context metabolic building blocks. Every strongly connected component is contracted to a single node in such a way that the resulting graph is a directed acyclic graph, called a metabolic DAG, with a considerably reduced number of nodes. The property of being a directed acyclic graph brings out a background graph topology that reveals the connectivity of the metabolic network, as well as bridges, isolated nodes and cut nodes. Altogether, this becomes key information for discovering functional metabolic relations. Our methodology has been applied to the glycolysis and purine metabolic pathways for all organisms in the KEGG database, although it is general enough to work on any database. As expected, using the metabolic DAGs formalism, a considerable reduction in the size of the metabolic networks has been obtained, especially in the case of the purine pathway due to its relatively larger size. As a proof of concept, from the information captured by a metabolic DAG and its corresponding metabolic building blocks, we obtain the core of the glycolysis pathway and the core of the purine metabolism pathway and detect some essential metabolic building blocks that reveal the key reactions in both pathways. Finally, the application of our methodology to the glycolysis pathway and the purine metabolism pathway reproduces the tree of life for the whole set of organisms represented in the KEGG database, which supports the utility of this research. PMID:28493998
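The contraction step corresponds to graph condensation; a minimal sketch on a toy reaction graph (networkx is an assumption, not the authors' implementation):

import networkx as nx

G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"),   # a cycle = one building block
                ("C", "D"), ("D", "E")])

dag = nx.condensation(G)                   # each SCC contracted to a single node
blocks = dag.graph["mapping"]              # original node -> building-block id
assert nx.is_directed_acyclic_graph(dag)   # the metabolic DAG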
Modular representation of layered neural networks.
Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio
2018-01-01
Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
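A rough flavor of this idea can be given with off-the-shelf community detection: treat trained weights as a weighted graph over units and group units with similar connection patterns, scoring the partition with a modularity index. The sketch below (networkx and numpy assumed; the random weight matrix is a stand-in for a trained layer) illustrates the spirit of the approach, not the authors' exact extraction algorithm.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
# Hypothetical trained weights between two 6-unit layers; in a real
# use case these come from the trained model.
W = rng.normal(size=(6, 6))

# Build a weighted graph over units: edge strength = |weight|.
G = nx.Graph()
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        G.add_edge(f"in{i}", f"out{j}", weight=abs(W[i, j]))

# Detect communities of units and score the partition by modularity.
parts = community.greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in parts])
print(community.modularity(G, parts, weight="weight"))
```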
Eilam, David; Portugali, Juval; Blumenfeld-Lieberthal, Efrat
2012-01-01
Background We set out to solve two inherent problems in the study of animal spatial cognition (i) What is a “place”?; and (ii) whether behaviors that are not revealed as differing by one methodology could be revealed as different when analyzed using a different approach. Methodology We applied network analysis to scrutinize spatial behavior of rats tested in either a symmetrical or asymmetrical layout of 4, 8, or 12 objects placed along the perimeter of a round arena. We considered locations as the units of the network (nodes), and passes between locations as the links within the network. Principal Findings While there were only minor activity differences between rats tested in the symmetrical or asymmetrical object layouts, network analysis revealed substantial differences. Viewing ‘location’ as a cluster of stopping coordinates, the key locations (large clusters of stopping coordinates) were at the objects in both layouts with 4 objects. However, in the asymmetrical layout with 4 objects, additional key locations were spaced by the rats between the objects, forming symmetry among the key locations. It was as if the rats had behaviorally imposed symmetry on the physically asymmetrical environment. Based on a previous finding that wayfinding is easier in symmetrical environments, we suggest that when the physical attributes of the environment were not symmetrical, the rats established a symmetric layout of key locations, thereby acquiring a more legible environment despite its complex physical structure. Conclusions and Significance The present study adds a behavioral definition for “location”, a term that so far has been mostly discussed according to its physical attributes or neurobiological correlates (e.g. - place and grid neurons). Moreover, network analysis enabled the assessment of the importance of a location, even when that location did not display any distinctive physical properties. PMID:22815808
NASA Astrophysics Data System (ADS)
Jemberie, A.; Dugda, M. T.; Reusch, D.; Nyblade, A.
2006-12-01
Neural networks are decision-making mathematical/engineering tools which, if trained properly, can automatically (and objectively) do jobs that normally require particular expertise and/or tedious repetition. Here we explore two techniques from the field of artificial neural networks (ANNs) that seek to reduce the time requirements and increase the objectivity of quality control (QC) and event identification (EI) on seismic datasets. We apply multilayer feed-forward (FF) artificial neural networks and self-organizing maps (SOMs) in combination with Hk stacking of receiver functions in an attempt to test the extent of the usefulness of automatic classification of receiver functions for crustal parameter determination. Feed-forward ANNs (FFNNs) are a supervised classification tool, while self-organizing maps provide unsupervised classification of large, complex geophysical data sets into a fixed number of distinct generalized patterns or modes. Hk stacking is a methodology used to stack receiver functions based on the relative arrival times of the P-to-S converted phase and the next two reverberations to determine crustal thickness H and the Vp-to-Vs ratio (k). We use receiver functions from teleseismic events recorded by the 2000-2002 Ethiopia Broadband Seismic Experiment. Preliminary results of applying FFNNs and Hk stacking of receiver functions for automatic receiver function classification, as a step towards automatic crustal parameter determination, look encouraging. After training, a FFNN could separate the best receiver functions from bad ones with a success rate of about 75 to 95%. Applying Hk stacking to the receiver functions classified by this FFNN as the best receiver functions, we obtained a crustal thickness and Vp/Vs ratio of 31±4 km and 1.75±0.05, respectively, for the crust beneath station ARBA in the Main Ethiopian Rift. For comparison, we applied Hk stacking to the receiver functions that we ourselves classified as the best set and found a crustal thickness and Vp/Vs ratio of 31±2 km and 1.75±0.02, respectively.
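For readers unfamiliar with Hk stacking, the sketch below implements a grid search in the spirit of the standard Zhu and Kanamori formulation: the stack sums receiver-function amplitudes at the predicted Ps, PpPs and PpSs+PsPs times for each candidate (H, k). The weights, grid ranges, and input arrays are placeholder assumptions, not the values used in this study.

```python
import numpy as np

def hk_stack(rfs, dt, p, vp=6.5, weights=(0.7, 0.2, 0.1),
             H_grid=np.arange(20.0, 50.0, 0.1),
             k_grid=np.arange(1.6, 1.9, 0.005)):
    """Grid search over crustal thickness H (km) and Vp/Vs ratio k.

    rfs : 2D array, one radial receiver function per row
    dt  : sample interval (s); p : ray parameters (s/km), one per trace
    """
    w1, w2, w3 = weights
    best = (-np.inf, None, None)
    for H in H_grid:
        for k in k_grid:
            s = 0.0
            for rf, pp in zip(rfs, p):
                eta_p = np.sqrt(1.0 / vp**2 - pp**2)
                eta_s = np.sqrt(k**2 / vp**2 - pp**2)
                tPs   = H * (eta_s - eta_p)       # P-to-S conversion
                tPpPs = H * (eta_s + eta_p)       # first reverberation
                tPpSs = 2.0 * H * eta_s           # second reverberation
                amp = lambda t: rf[min(int(round(t / dt)), len(rf) - 1)]
                s += w1 * amp(tPs) + w2 * amp(tPpPs) - w3 * amp(tPpSs)
            if s > best[0]:
                best = (s, H, k)
    return best  # (stack amplitude, H in km, Vp/Vs)
```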
2016-12-22
This research proposes an information theoretic methodology to discover such complex network structures and dynamics while overcoming the difficulties historically associated with their study. Indeed, this was the first application of an information theoretic methodology as a tool for discovering such structures.
Smart Bandwidth Assignation in an Underlay Cellular Network for Internet of Vehicles.
de la Iglesia, Idoia; Hernandez-Jayo, Unai; Osaba, Eneko; Carballedo, Roberto
2017-09-27
The evolution of the IoT (Internet of Things) paradigm applied to new scenarios such as VANETs (Vehicular Ad Hoc Networks) has gained momentum in recent years. Both academia and industry have triggered advanced studies in the IoV (Internet of Vehicles), which is understood as an ecosystem where different types of users (vehicles, elements of the infrastructure, pedestrians) are connected. How to efficiently share the available radio resources among the different types of eligible users is one of the important issues to be addressed. This paper briefly analyzes various concepts presented hitherto in the literature and proposes an enhanced algorithm for ensuring a robust co-existence of the aforementioned system users. Accordingly, this paper introduces an underlay RRM (Radio Resource Management) methodology which is capable of (1) improving cellular spectral efficiency while making a minimal impact on cellular communications and (2) ensuring the different QoS (Quality of Service) requirements of ITS (Intelligent Transportation Systems) applications. Simulation results, in which we compare the proposed algorithm to two other RRM schemes, show the promising spectral efficiency performance of the proposed RRM methodology.
Protocol Architecture Model Report
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: multicast communications (or "multicasting"), 1 spacecraft to N ground receivers, N ground transmitters to 1 ground receiver via a spacecraft.
NASA Astrophysics Data System (ADS)
Morse, Llewellyn; Sharif Khodaei, Zahra; Aliabadi, M. H.
2018-01-01
In this work, a reliability-based impact detection strategy for a sensorized composite structure is proposed. Impacts are localized using Artificial Neural Networks (ANNs), with guided waves recorded from the impacts used as inputs. To account for variability in the recorded data under operational conditions, Bayesian updating and Kalman filter techniques are applied to improve the reliability of the detection algorithm. The possibility of having one or more faulty sensors is considered, and a decision fusion algorithm based on sub-networks of sensors is proposed to improve the application of the methodology to real structures. A strategy for reliably categorizing impacts into high-energy impacts, which are likely to cause damage in the structure (true impacts), and low-energy non-damaging impacts (false impacts), has also been proposed to reduce the false alarm rate. The proposed strategy involves employing classification ANNs with different features extracted from captured signals used as inputs. The proposed methodologies are validated by experimental results on a quasi-isotropic composite coupon impacted with a range of impact energies.
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. CNS previously developed a report which applied the methodology to three space Internet-based communications scenarios for future missions. CNS conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. GRC selected for further analysis the scenario that involved unicast communications between a Low-Earth-Orbit (LEO) International Space Station (ISS) and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer. This report contains a tradeoff analysis on the selected scenario. The analysis examines the performance characteristics of the various protocols and architectures. The tradeoff analysis incorporates the results of a CNS-developed analytical model that examined performance parameters.
A New Screening Methodology for Improved Oil Recovery Processes Using Soft-Computing Techniques
NASA Astrophysics Data System (ADS)
Parada, Claudia; Ertekin, Turgay
2010-05-01
The first stage of production of any oil reservoir involves oil displacement by natural drive mechanisms such as solution gas drive, gas cap drive and gravity drainage. Typically, improved oil recovery (IOR) methods are applied to oil reservoirs that have been depleted naturally. In more recent years, IOR techniques have been applied to reservoirs even before their natural energy drive is exhausted by primary depletion. Descriptive screening criteria for IOR methods are used to select the appropriate recovery technique according to the fluid and rock properties. This methodology helps in assessing the most suitable recovery process for field deployment of a candidate reservoir. However, the already published screening guidelines neither provide information about the expected reservoir performance nor suggest a set of project design parameters, which can be used towards the optimization of the process. In this study, artificial neural networks (ANN) are used to build a high-performance neuro-simulation tool for screening different improved oil recovery techniques: miscible injection (CO2 and N2), waterflooding and steam injection processes. The simulation tool consists of proxy models that implement a multilayer cascade feedforward back propagation network algorithm. The tool is intended to narrow the ranges of possible scenarios to be modeled using conventional simulation, reducing the extensive time and energy spent in dynamic reservoir modeling. A commercial reservoir simulator is used to generate the data to train and validate the artificial neural networks. The proxy models are built considering four different well patterns with different well operating conditions as the field design parameters. Different expert systems are developed for each well pattern. The screening networks predict oil production rate and cumulative oil production profiles for a given set of rock and fluid properties, and design parameters. The results of this study show that the networks are able to recognize the strong correlation between the displacement mechanism and the reservoir characteristics, as they effectively forecast hydrocarbon production for different types of reservoirs undergoing diverse recovery processes. The artificial neural networks are able to capture the similarities between different displacement mechanisms, as the same network architecture is successfully applied in both CO2 and N2 injection. The neuro-simulation application tool is built within a graphical user interface to facilitate the display of the results. The developed soft-computing tool offers an innovative approach to design a variety of efficient and feasible IOR processes by using artificial intelligence. The tool provides appropriate guidelines to the reservoir engineer, facilitates the appraisal of diverse field development strategies for oil reservoirs, and helps to reduce the number of scenarios evaluated with conventional reservoir simulation.
Topic segmentation via community detection in complex networks
NASA Astrophysics Data System (ADS)
de Arruda, Henrique F.; Costa, Luciano da F.; Amancio, Diego R.
2016-06-01
Many real systems have been modeled in terms of network concepts, and written texts are a particular example of information networks. In recent years, the use of network methods to analyze language has allowed the discovery of several interesting effects, including the proposition of novel models to explain the emergence of fundamental universal patterns. While syntactical networks, one of the most prevalent networked models of written texts, display both scale-free and small-world properties, such a representation fails in capturing other textual features, such as the organization in topics or subjects. We propose a novel network representation whose main purpose is to capture the semantical relationships of words in a simple way. To do so, we link all words co-occurring in the same semantic context, which is defined in a threefold way. We show that the proposed representations favor the emergence of communities of semantically related words, and this feature may be used to identify relevant topics. The proposed methodology to detect topics was applied to segment selected Wikipedia articles. We found that, in general, our methods outperform traditional bag-of-words representations, which suggests that a high-level textual representation may be useful to study the semantical features of texts.
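One simple stand-in for the construction described above is to link words that co-occur within a sliding window and treat the resulting communities as candidate topics. The window size, toy tokens, and the use of greedy modularity optimization below are illustrative assumptions, not the paper's threefold definition of semantic context.

```python
import networkx as nx
from itertools import combinations
from networkx.algorithms import community

def cooccurrence_network(tokens, window=5):
    """Link words that co-occur within a sliding window (a crude
    stand-in for a 'semantic context')."""
    G = nx.Graph()
    for i in range(len(tokens) - window + 1):
        for u, v in combinations(set(tokens[i:i + window]), 2):
            w = G.get_edge_data(u, v, {"weight": 0})["weight"]
            G.add_edge(u, v, weight=w + 1)
    return G

tokens = ("the cat sat on the mat while the dog chased the cat "
          "stocks rose as markets rallied and traders cheered").split()
G = cooccurrence_network(tokens)
topics = community.greedy_modularity_communities(G, weight="weight")
print([sorted(t) for t in topics])   # communities ~ candidate topics
```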
Dynamic changes in neural circuit topology following mild mechanical injury in vitro.
Patel, Tapan P; Ventre, Scott C; Meaney, David F
2012-01-01
Despite its enormous incidence, mild traumatic brain injury is not well understood. One aspect that needs more definition is how the mechanical energy during injury affects neural circuit function. Recent developments in cellular imaging probes provide an opportunity to assess the dynamic state of neural networks with single-cell resolution. In this article, we developed imaging methods to assess the state of dissociated cortical networks exposed to mild injury. We estimated the imaging conditions needed to achieve accurate measures of network properties, and applied these methodologies to evaluate if mild mechanical injury to cortical neurons produces graded changes to either spontaneous network activity or altered network topology. We found that modest injury produced a transient increase in calcium activity that dissipated within 1 h after injury. Alternatively, moderate mechanical injury produced immediate disruption in network synchrony, loss in excitatory tone, and increased modular topology. A calcium-activated neutral protease (calpain) was a key intermediary in these changes; blocking calpain activation restored the network nearly completely to its pre-injury state. Together, these findings show a more complex change in neural circuit behavior than previously reported for mild mechanical injury, and highlight at least one important early mechanism responsible for these changes.
Graph analysis of functional brain networks: practical issues in translational neuroscience
De Vico Fallani, Fabrizio; Richiardi, Jonas; Chavez, Mario; Achard, Sophie
2014-01-01
The brain can be regarded as a network: a connected system where nodes, or units, represent different specialized regions and links, or connections, represent communication pathways. From a functional perspective, communication is coded by temporal dependence between the activities of different brain areas. In the last decade, the abstract representation of the brain as a graph has made it possible to visualize functional brain networks and to describe their non-trivial topological properties in a compact and objective way. Nowadays, the use of graph analysis in translational neuroscience has become essential to quantify brain dysfunctions in terms of aberrant reconfiguration of functional brain networks. Despite its evident impact, graph analysis of functional brain networks is not a simple toolbox that can be blindly applied to brain signals. On the one hand, it requires the know-how of all the methodological steps of the pipeline that manipulate the input brain signals and extract the functional network properties. On the other hand, knowledge of the neural phenomenon under study is required to perform physiologically relevant analysis. The aim of this review is to provide practical indications to make sense of brain network analysis and contrast counterproductive attitudes. PMID:25180301
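A minimal end-to-end pipeline of the kind discussed in this review might look as follows. The correlation-based connectivity estimator, the 20% proportional threshold, and the synthetic signals are all arbitrary choices, exactly the sort of methodological decision the authors caution must be made explicit.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
signals = rng.normal(size=(8, 1000))      # 8 hypothetical sensors/regions

# 1) Functional connectivity: absolute Pearson correlation.
C = np.abs(np.corrcoef(signals))
np.fill_diagonal(C, 0.0)

# 2) Proportional threshold: keep the strongest 20% of links.
thresh = np.quantile(C[np.triu_indices_from(C, k=1)], 0.8)
A = (C >= thresh).astype(int)

# 3) Graph metrics on the binarized network.
G = nx.from_numpy_array(A)
print("density:", nx.density(G))
print("clustering:", nx.average_clustering(G))
if nx.is_connected(G):
    print("char. path length:", nx.average_shortest_path_length(G))
```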
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Lüttjohann, Annika; Makarov, Vladimir V.; Goremyko, Mikhail V.; Koronovskii, Alexey A.; Nedaivozov, Vladimir; Runnova, Anastasia E.; van Luijtelaar, Gilles; Hramov, Alexander E.; Boccaletti, Stefano
2017-07-01
We introduce a practical and computationally undemanding technique for inferring interactions at various microscopic levels between the units of a network from the measurements and the processing of macroscopic signals. Starting from a network model of Kuramoto phase oscillators, which evolve adaptively according to homophilic and homeostatic adaptive principles, we give evidence that the increase of synchronization within groups of nodes (and the corresponding formation of synchronous clusters) also causes the defragmentation of the wavelet energy spectrum of the macroscopic signal. Our methodology is then applied to gain insight into the microscopic interactions occurring in a neurophysiological system, namely, in the thalamocortical neural network of an epileptic brain of a rat, where the group electrical activity is registered by means of multichannel EEG. We demonstrate that it is possible to infer the degree of interaction between the interconnected regions of the brain during different types of brain activities and to estimate the regions' participation in the generation of the different levels of consciousness.
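For orientation, the sketch below integrates a plain Kuramoto network (static, uniform coupling rather than the paper's homophilic/homeostatic adaptive rules) and extracts a macroscopic signal together with the usual order parameter measuring synchronization.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, dt, steps = 50, 1.5, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)            # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)       # initial phases

macro = []                                 # macroscopic signal
for _ in range(steps):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + coupling)
    macro.append(np.cos(theta).mean())     # a crude 'EEG-like' average

# Kuramoto order parameter: degree of synchronization in [0, 1].
r = abs(np.exp(1j * theta).mean())
print("order parameter:", r)
```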
Kim, Jinkyu; Kim, Gunn; An, Sungbae; Kwon, Young-Kyun; Yoon, Sungroh
2013-01-01
The assessment of information transfer in the global economic network helps to understand the current environment and the outlook of an economy. Most approaches on global networks extract information transfer based mainly on a single variable. This paper establishes an entirely new bioinformatics-inspired approach to integrating information transfer derived from multiple variables and develops an international economic network accordingly. In the proposed methodology, we first construct the transfer entropies (TEs) between various intra- and inter-country pairs of economic time series variables, test their significances, and then use a weighted sum approach to aggregate information captured in each TE. Through a simulation study, the new method is shown to deliver better information integration compared to existing integration methods in that it can be applied even when intra-country variables are correlated. Empirical investigation with the real world data reveals that Western countries are more influential in the global economic network and that Japan has become less influential following the Asian currency crisis. PMID:23300959
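A minimal plug-in estimator of the transfer entropy between two series, the building block of the method above, might look like this. The quantile discretization, lag of one, and toy data are assumptions; the paper's significance testing and weighted-sum aggregation are omitted.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in histogram estimate of TE(x -> y), lag 1, in bits."""
    edges = lambda v: np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
    xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    n = len(yd) - 1
    c_y1y0x0 = Counter(zip(yd[1:], yd[:-1], xd[:-1]))
    c_y0x0   = Counter(zip(yd[:-1], xd[:-1]))
    c_y1y0   = Counter(zip(yd[1:], yd[:-1]))
    c_y0     = Counter(yd[:-1])
    te = 0.0
    for (y1, y0, x0), c in c_y1y0x0.items():
        p = c / n                               # p(y1, y0, x0)
        # p(y1 | y0, x0) / p(y1 | y0)
        ratio = (c / c_y0x0[(y0, x0)]) / (c_y1y0[(y1, y0)] / c_y0[y0])
        te += p * np.log2(ratio)
    return te

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)  # y is driven by x
print(transfer_entropy(x, y), transfer_entropy(y, x))  # first >> second
```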
NASA Astrophysics Data System (ADS)
Štolc, Svorad; Bajla, Ivan
2010-01-01
In the paper we describe basic functions of the Hierarchical Temporal Memory (HTM) network based on a novel biologically inspired model of the large-scale structure of the mammalian neocortex. The focus of this paper is in a systematic exploration of possibilities how to optimize important controlling parameters of the HTM model applied to the classification of hand-written digits from the USPS database. The statistical properties of this database are analyzed using the permutation test which employs a randomization distribution of the training and testing data. Based on a notion of the homogeneous usage of input image pixels, a methodology of the HTM parameter optimization is proposed. In order to study effects of two substantial parameters of the architecture: the
Data based identification and prediction of nonlinear and complex dynamical systems
NASA Astrophysics Data System (ADS)
Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso
2016-07-01
The problem of reconstructing nonlinear and complex dynamical systems from measured data or time series is central to many scientific disciplines including physical, biological, computer, and social sciences, as well as engineering and economics. The classic approach to phase-space reconstruction through the methodology of delay-coordinate embedding has been practiced for more than three decades, but the paradigm is effective mostly for low-dimensional dynamical systems. Often, the methodology yields only a topological correspondence of the original system. There are situations in various fields of science and engineering where the systems of interest are complex and high dimensional with many interacting components. A complex system typically exhibits a rich variety of collective dynamics, and it is of great interest to be able to detect, classify, understand, predict, and control the dynamics using data that are becoming increasingly accessible due to the advances of modern information technology. To accomplish these goals, especially prediction and control, an accurate reconstruction of the original system is required. Nonlinear and complex systems identification aims at inferring, from data, the mathematical equations that govern the dynamical evolution and the complex interaction patterns, or topology, among the various components of the system. With successful reconstruction of the system equations and the connecting topology, it may be possible to address challenging and significant problems such as identification of causal relations among the interacting components and detection of hidden nodes. The "inverse" problem thus presents a grand challenge, requiring new paradigms beyond the traditional delay-coordinate embedding methodology. The past fifteen years have witnessed rapid development of contemporary complex graph theory with broad applications in interdisciplinary science and engineering. The combination of graph, information, and nonlinear dynamical systems theories with tools from statistical physics, optimization, engineering control, applied mathematics, and scientific computing enables the development of a number of paradigms to address the problem of nonlinear and complex systems reconstruction. In this Review, we describe the recent advances in this forefront and rapidly evolving field, with a focus on compressive sensing based methods. In particular, compressive sensing is a paradigm developed in recent years in applied mathematics, electrical engineering, and nonlinear physics to reconstruct sparse signals using only limited data. It has broad applications ranging from image compression/reconstruction to the analysis of large-scale sensor networks, and it has become a powerful technique to obtain high-fidelity signals for applications where sufficient observations are not available. We will describe in detail how compressive sensing can be exploited to address a diverse array of problems in data based reconstruction of nonlinear and complex networked systems. The problems include identification of chaotic systems and prediction of catastrophic bifurcations, forecasting future attractors of time-varying nonlinear systems, reconstruction of complex networks with oscillatory and evolutionary game dynamics, detection of hidden nodes, identification of chaotic elements in neuronal networks, reconstruction of complex geospatial networks and nodal positioning, and reconstruction of complex spreading networks with binary data.
A number of alternative methods, such as those based on system response to external driving, synchronization, and noise-induced dynamical correlation, will also be discussed. Due to the high relevance of network reconstruction to biological sciences, a special section is devoted to a brief survey of the current methods to infer biological networks. Finally, a number of open problems including control and controllability of complex nonlinear dynamical networks are discussed. The methods outlined in this Review are principled on various concepts in complexity science and engineering such as phase transitions, bifurcations, stabilities, and robustness. The methodologies have the potential to significantly improve our ability to understand a variety of complex dynamical systems ranging from gene regulatory systems to social networks toward the ultimate goal of controlling such systems.
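To make the sparse-recovery idea concrete, here is a toy reconstruction of a sparse coupling matrix from short time series using LASSO regression (scikit-learn assumed). It illustrates the spirit of compressive-sensing-based network reconstruction rather than any specific method surveyed in the Review.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
N, T = 20, 60                        # 20 nodes, only 60 observations
A = np.zeros((N, N))                 # sparse ground-truth coupling
idx = rng.choice(N * N, size=30, replace=False)
A.ravel()[idx] = rng.normal(size=30)
A *= 0.8 / max(abs(np.linalg.eigvals(A)))   # keep the dynamics stable

X = np.zeros((T, N))
X[0] = rng.normal(size=N)
for t in range(T - 1):               # linear networked dynamics + noise
    X[t + 1] = X[t] @ A.T + 0.01 * rng.normal(size=N)

# Recover each row of A from limited data by sparse regression.
A_hat = np.zeros_like(A)
for i in range(N):
    lasso = Lasso(alpha=1e-3, max_iter=50000).fit(X[:-1], X[1:, i])
    A_hat[i] = lasso.coef_

print("support recovered:",
      np.mean((np.abs(A_hat) > 1e-2) == (np.abs(A) > 1e-2)))
```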
NASA Technical Reports Server (NTRS)
Broderick, Ron
1997-01-01
The ultimate goal of this report was to integrate the powerful tools of artificial intelligence into the traditional process of software development. To maintain the US aerospace competitive advantage, traditional aerospace and software engineers need to more easily incorporate the technology of artificial intelligence into the advanced aerospace systems being designed today. The future goal was to transition artificial intelligence from an emerging technology to a standard technology that is considered early in the life cycle process to develop state-of-the-art aircraft automation systems. This report addressed the future goal in two ways. First, it provided a matrix that identified typical aircraft automation applications conducive to various artificial intelligence methods. The purpose of this matrix was to provide top-level guidance to managers contemplating the possible use of artificial intelligence in the development of aircraft automation. Second, the report provided a methodology to formally evaluate neural networks as part of the traditional process of software development. The matrix was developed by organizing the discipline of artificial intelligence into the following six methods: logical, object representation-based, distributed, uncertainty management, temporal and neurocomputing. Next, a study of existing aircraft automation applications that have been conducive to artificial intelligence implementation resulted in the following five categories: pilot-vehicle interface, system status and diagnosis, situation assessment, automatic flight planning, and aircraft flight control. The resulting matrix provided management guidance to understand artificial intelligence as it applied to aircraft automation. The approach taken to develop a methodology to formally evaluate neural networks as part of the software engineering life cycle was to start with the existing software quality assurance standards and to change these standards to include neural network development. The changes were to include evaluation tools that can be applied to neural networks at each phase of the software engineering life cycle. The result was a formal evaluation approach to increase the product quality of systems that use neural networks for their implementation.
van Diessen, E; Numan, T; van Dellen, E; van der Kooi, A W; Boersma, M; Hofman, D; van Lutterveld, R; van Dijk, B W; van Straaten, E C W; Hillebrand, A; Stam, C J
2015-08-01
Electroencephalogram (EEG) and magnetoencephalogram (MEG) recordings during resting state are increasingly used to study functional connectivity and network topology. Moreover, the number of different analysis approaches is expanding along with the rising interest in this research area. The comparison between studies can therefore be challenging and discussion is needed to underscore methodological opportunities and pitfalls in functional connectivity and network studies. In this overview we discuss methodological considerations throughout the analysis pipeline of recording and analyzing resting state EEG and MEG data, with a focus on functional connectivity and network analysis. We summarize current common practices with their advantages and disadvantages; provide practical tips, and suggestions for future research. Finally, we discuss how methodological choices in resting state research can affect the construction of functional networks. When taking advantage of current best practices and avoid the most obvious pitfalls, functional connectivity and network studies can be improved and enable a more accurate interpretation and comparison between studies. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Biederman, J; Hammerness, P; Sadeh, B; Peremen, Z; Amit, A; Or-Ly, H; Stern, Y; Reches, A; Geva, A; Faraone, S V
2017-05-01
A previous small study suggested that Brain Network Activation (BNA), a novel ERP-based brain network analysis, may have diagnostic utility in attention deficit hyperactivity disorder (ADHD). In this study we examined the diagnostic capability of a new advanced version of the BNA methodology on a larger population of adults with and without ADHD. Subjects were unmedicated right-handed 18- to 55-year-old adults of both sexes with and without a DSM-IV diagnosis of ADHD. We collected EEG while the subjects were performing a response inhibition task (Go/NoGo) and then applied a spatio-temporal Brain Network Activation (BNA) analysis of the EEG data. This analysis produced a display of qualitative measures of brain states (BNA scores) providing information on cortical connectivity. This complex set of scores was then fed into a machine learning algorithm. The BNA analysis of the EEG data recorded during the Go/NoGo task demonstrated a high discriminative capacity between ADHD patients and controls (AUC = 0.92, specificity = 0.95, sensitivity = 0.86 for the Go condition; AUC = 0.84, specificity = 0.91, sensitivity = 0.76 for the NoGo condition). BNA methodology can help differentiate between ADHD and healthy controls based on functional brain connectivity. The data support the utility of the tool to augment clinical examinations by objective evaluation of electrophysiological changes associated with ADHD. Results also support a network-based approach to the study of ADHD.
Applying policy network theory to policy-making in China: the case of urban health insurance reform.
Zheng, Haitao; de Jong, Martin; Koppenjan, Joop
2010-01-01
In this article, we explore whether policy network theory can be applied in the People's Republic of China (PRC). We carried out a literature review of how this approach has already been dealt with in the Chinese policy sciences thus far. We then present the key concepts and research approach of policy network theory in the Western literature and try these on a Chinese case to see the fit. We follow this with a description and analysis of the policy-making process regarding the health insurance reform in China from 1998 until the present. Based on this case study, we argue that this body of theory is useful to describe and explain policy-making processes in the Chinese context. However, limitations of the generic model appear in capturing the fundamentally different political and administrative systems and crucially different cultural values, which affect the applicability of some research methods common in Western countries. Finally, we address which political and cultural aspects turn out to be different in the PRC and how they affect methodological and practical problems that PRC researchers will encounter when studying decision-making processes.
A review of machine learning in obesity.
DeGregory, K W; Kuiper, P; DeSilvio, T; Pleuss, J D; Miller, R; Roginski, J W; Fisher, C B; Harness, D; Viswanath, S; Heymsfield, S B; Dungan, I; Thomas, D M
2018-05-01
Rich sources of obesity-related data arising from sensors, smartphone apps, electronic medical health records and insurance data can bring new insights for understanding, preventing and treating obesity. For such large datasets, machine learning provides sophisticated and elegant tools to describe, classify and predict obesity-related risks and outcomes. Here, we review machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning and decision tree analysis. We also review methods that describe and characterize data, such as cluster analysis, principal component analysis, network science and topological data analysis. We introduce each method with a high-level overview followed by examples of successful applications. The algorithms were then applied to the National Health and Nutrition Examination Survey to demonstrate methodology, utility and outcomes. The strengths and limitations of each method were also evaluated. This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity. © 2018 World Obesity Federation.
Western Pyrenees geodetic deformation study using the Guipuzcoa GNSS network
NASA Astrophysics Data System (ADS)
Martín, Adriana; Sevilla, Miguel; Zurutuza, Joaquín
2018-07-01
The Basque Country in the north of Spain is located inside the Basque-Cantabrian basin of the western Pyrenees, whose remarkable seismo-tectonic implications justify the need for geodetic control in the area. In order to perform a crustal deformation study, we analysed all daily observations from the GNSS permanent network of Guipuzcoa and external IGS stations, from January 2007 to November 2011. We carried out the data processing applying the double-differences methodology in the automatic processing module BPE (Bernese Processing Engine) of the Bernese GNSS software, version 5.0. The solution was aligned to the geodetic reference framework ITRF2008 by using the IGS08 solution and updated satellite and terrestrial antenna calibrations. The results of this five-year network study (coordinate time series, velocities and baseline-length variations) show internal stability among the inner stations and between them and the outer IGS stations, leading to the conclusion that no deformations have been observed.
Development blocks in innovation networks: The Swedish manufacturing industry, 1970-2007.
Taalbi, Josef
2017-01-01
The notion of development blocks (Dahmén, 1950, 1991) suggests the co-evolution of technologies and industries through complementarities and the overcoming of imbalances. This study proposes and applies a methodology to analyse development blocks empirically. To assess the extent and character of innovational interdependencies between industries the study combines analysis of innovation biographies and statistical network analysis. This is made possible by using data from a newly constructed innovation output database for Sweden. The study finds ten communities of closely related industries in which innovation activity has been prompted by the emergence of technological imbalances or by the exploitation of new technological opportunities. The communities found in the Swedish network of innovation are shown to be stable over time and often characterized by strong user-supplier interdependencies. These findings serve to stress how historical imbalances and opportunities are key to understanding the dynamics of the long-run development of industries and new technologies.
CellNet: Network Biology Applied to Stem Cell Engineering
Cahan, Patrick; Li, Hu; Morris, Samantha A.; da Rocha, Edroaldo Lummertz; Daley, George Q.; Collins, James J.
2014-01-01
SUMMARY Somatic cell reprogramming, directed differentiation of pluripotent stem cells, and direct conversions between differentiated cell lineages represent powerful approaches to engineer cells for research and regenerative medicine. We have developed CellNet, a network biology platform that more accurately assesses the fidelity of cellular engineering than existing methodologies and generates hypotheses for improving cell derivations. Analyzing expression data from 56 published reports, we found that cells derived via directed differentiation more closely resemble their in vivo counterparts than products of direct conversion, as reflected by the establishment of target cell-type gene regulatory networks (GRNs). Furthermore, we discovered that directly converted cells fail to adequately silence expression programs of the starting population, and that the establishment of unintended GRNs is common to virtually every cellular engineering paradigm. CellNet provides a platform for quantifying how closely engineered cell populations resemble their target cell type and a rational strategy to guide enhanced cellular engineering. PMID:25126793
NASA Astrophysics Data System (ADS)
Pal, Krishnendu; Das, Biswajit; Banerjee, Kinshuk; Gangopadhyay, Gautam
2015-09-01
We have introduced an approach to the nonequilibrium thermodynamics of an open chemical reaction network in terms of the propensities of the individual elementary reactions and the corresponding reverse reactions. The method is a microscopic formulation of the dissipation function in terms of the relative entropy or Kullback-Leibler distance, based on the analogy of a phase space trajectory with the path of elementary reactions in a network of chemical processes. We introduce here a fluctuation theorem, valid for each opposite pair of elementary reactions, which is useful in determining the contribution of each sub-reaction to the nonequilibrium thermodynamics of the overall reaction. The methodology is applied to oligomeric enzyme kinetics under a chemiostatic condition that leads the reaction to a nonequilibrium steady state, for which we have estimated how each step of the reaction is energy driven or entropy driven in its contribution to the overall reaction.
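The steady-state bookkeeping implied above can be illustrated with the standard Schnakenberg-style expression for the entropy production rate from opposing fluxes; the numbers below are illustrative, not the oligomeric enzyme model of the paper.

```python
import numpy as np

# Forward/backward fluxes for each opposing pair of elementary
# reactions at steady state; illustrative values only.
J_plus  = np.array([2.0, 0.8, 1.5])
J_minus = np.array([0.5, 0.6, 1.4])

# Per-reaction contribution to the total entropy production rate
# (k_B = 1): sigma_r = (J+ - J-) * ln(J+/J-), which is always >= 0,
# so each sub-reaction's share of the dissipation can be read off.
sigma_r = (J_plus - J_minus) * np.log(J_plus / J_minus)
print("per-reaction:", sigma_r)
print("total entropy production rate:", sigma_r.sum())
```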
Schubert, M; Fey, A; Ihssen, J; Civardi, C; Schwarze, F W M R; Mourad, S
2015-01-10
An artificial neural network (ANN) and genetic algorithm (GA) were applied to improve the laccase-mediated oxidation of iodide (I(-)) to elemental iodine (I2). Biosynthesis of iodine (I2) was studied with a 5-level-4-factor central composite design (CCD). The generated ANN network was mathematically evaluated by several statistical indices and revealed better results than a classical quadratic response surface (RS) model. Determination of the relative significance of model input parameters, ranking the process parameters in order of importance (pH>laccase>mediator>iodide), was performed by sensitivity analysis. ANN-GA methodology was used to optimize the input space of the neural network model to find optimal settings for the laccase-mediated synthesis of iodine. ANN-GA optimized parameters resulted in a 9.9% increase in the conversion rate. Copyright © 2014 Elsevier B.V. All rights reserved.
Tracking cohesive subgroups over time in inferred social networks
NASA Astrophysics Data System (ADS)
Chin, Alvin; Chignell, Mark; Wang, Hao
2010-04-01
As a first step in the development of community trackers for large-scale online interaction, this paper shows how cohesive subgroup analysis using the Social Cohesion Analysis of Networks (SCAN; Chin and Chignell 2008) and Data-Intensive Socially Similar Evolving Community Tracker (DISSECT; Chin and Chignell 2010) methods can be applied to the problem of identifying cohesive subgroups and tracking them over time. Three case studies are reported, and the findings are used to evaluate how well the SCAN and DISSECT methods work for different types of data. In the largest of the case studies, variations in temporal cohesiveness are identified across a set of subgroups extracted from the inferred social network. Further modifications to the DISSECT methodology are suggested based on the results obtained. The paper concludes with recommendations concerning further research that would be beneficial in addressing the community tracking problem for online data.
Machine learning in sentiment reconstruction of the simulated stock market
NASA Astrophysics Data System (ADS)
Goykhman, Mikhail; Teimouri, Ali
2018-02-01
In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
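A minimal version of the transition-matrix recovery can be sketched with the hmmlearn package (an assumed dependency; the transition matrix and per-state return statistics below are invented, not the paper's market parameters).

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed dependency: hmmlearn

rng = np.random.default_rng(5)
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # true sentiment transitions
means, stds = [0.05, -0.04], [0.10, 0.12]    # per-state return stats

states, s = [], 0
for _ in range(5000):                        # simulate the Markov chain
    states.append(s)
    s = rng.choice(2, p=P[s])
states = np.array(states)
returns = rng.normal(np.take(means, states), np.take(stds, states))

# Fit a 2-state HMM to the observed returns and recover the hidden
# sentiment states and their transition probabilities.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
model.fit(returns.reshape(-1, 1))
print("estimated transition matrix:\n", model.transmat_)
recovered = model.predict(returns.reshape(-1, 1))
```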
Yang, Qinghua
2017-03-01
The increasing popularity of social networking sites (SNSs) has drawn scholarly attention in recent years, and a large amount of efforts have been made in applying SNSs to health behavior change interventions. However, these interventions showed mixed results, with a large variance of effect sizes in Cohen's d ranging from -1.17 to 1.28. To provide a better understanding of SNS-based interventions' effectiveness, a meta-analysis of 21 studies examining the effects of health interventions using SNS was conducted. Results indicated that health behavior change interventions using SNS are effective in general, but the effects were moderated by health topic, methodological features, and participant features. Theoretical and practical implications of findings are discussed.
Sanz-García, Ancor; Vega-Zelaya, Lorena; Pastor, Jesús; Torres, Cristina V.; Sola, Rafael G.; Ortega, Guillermo J.
2016-01-01
Approximately 30% of epilepsy patients are refractory to antiepileptic drugs. In these cases, surgery is the only alternative to eliminate/control seizures. However, a significant minority of patients continues to exhibit post-operative seizures, even in those cases in which the suspected source of seizures has been correctly localized and resected. The protocol presented here combines a clinical procedure routinely employed during the pre-operative evaluation of temporal lobe epilepsy (TLE) patients with a novel technique for network analysis. The method allows for the evaluation of the temporal evolution of mesial network parameters. The bilateral insertion of foramen ovale electrodes (FOE) into the ambient cistern simultaneously records electrocortical activity at several mesial areas in the temporal lobe. Furthermore, network methodology applied to the recorded time series tracks the temporal evolution of the mesial networks both interictally and during the seizures. In this way, the presented protocol offers a unique way to visualize and quantify measures that considers the relationships between several mesial areas instead of a single area. PMID:28060326
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Mafalda T., E-mail: mafaldatcosta@gmail.com; Carolino, Elisabete, E-mail: lizcarolino@gmail.com; Oliveira, Teresa A., E-mail: teresa.oliveira@uab.pt
In water supply systems with a distribution network, the most critical aspects of the control and monitoring of water quality, which generate system crises, are the effects of cross-contamination originating from the network topology. Classical quality control systems based on the application of Shewhart charts are generally difficult to manage in real time due to the high number of charts that must be completed and evaluated. As an alternative to traditional control systems with Shewhart charts, this study aimed to apply a simplified methodology for monitoring quality parameters in a drinking water distribution system, by applying Hotelling's T² charts supplemented with Shewhart charts with Bonferroni limits whenever process instabilities were detected.
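A bare-bones version of the multivariate monitoring step might look as follows. The data are synthetic, and the F-based control limit shown is one common choice for future individual observations; the exact constants depend on the monitoring phase.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
m, p = 100, 3                         # m reference samples, p parameters
ref = rng.normal(size=(m, p))         # in-control water-quality data
xbar, S = ref.mean(axis=0), np.cov(ref, rowvar=False)
S_inv = np.linalg.inv(S)

def t2(x):
    """Hotelling's T^2 statistic for one observation vector."""
    d = x - xbar
    return float(d @ S_inv @ d)

alpha = 0.0027                        # ~3-sigma false-alarm rate
ucl = (p * (m + 1) * (m - 1)) / (m * (m - p)) * stats.f.ppf(1 - alpha, p, m - p)

new = rng.normal(size=p) + np.array([0, 0, 2.0])   # a disturbed sample
print(t2(new), ">", ucl, "->", t2(new) > ucl)
# When T^2 signals, univariate Shewhart charts with Bonferroni-adjusted
# limits (alpha/p per variable) can localize the offending parameter.
```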
Behavioral networks as a model for intelligent agents
NASA Technical Reports Server (NTRS)
Sliwa, Nancy E.
1990-01-01
On-going work at NASA Langley Research Center in the development and demonstration of a paradigm called behavioral networks as an architecture for intelligent agents is described. This work focuses on the need to identify a methodology for smoothly integrating the characteristics of low-level robotic behavior, including actuation and sensing, with intelligent activities such as planning, scheduling, and learning. This work assumes that all these needs can be met within a single methodology, and attempts to formalize this methodology in a connectionist architecture called behavioral networks. Behavioral networks are networks of task processes arranged in a task decomposition hierarchy. These processes are connected by both command/feedback data flow, and by the forward and reverse propagation of weights which measure the dynamic utility of actions and beliefs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bereketli Zafeirakopoulos, Ilke, E-mail: ibereketli@gsu.edu.tr; Erol Genevois, Mujde, E-mail: merol@gsu.edu.tr
Life Cycle Assessment is a tool to assess, in a systematic way, the environmental aspects and the potential environmental impacts and resources used throughout a product's life cycle. It is widely accepted and considered one of the most powerful tools to support decision-making processes used in ecodesign and sustainable production, in order to learn about the most problematic parts and life cycle phases of a product and to have a projection for future improvements. However, since Life Cycle Assessment is a cost- and time-intensive method, companies do not intend to carry out a full version of it, except for large corporate ones. Especially for small and medium-sized enterprises, which do not have enough budget for and knowledge of sustainable production and ecodesign approaches, focusing only on the most important possible environmental aspect is unavoidable. In this direction, finding the right environmental aspect to work on is crucial for the companies. In this study, a multi-criteria decision-making methodology, the Analytic Network Process, is proposed to select the most relevant environmental aspect. The proposed methodology aims at providing a simplified environmental assessment to producers. It is applied to a hand blender, which is a member of the Electrical and Electronic Equipment family. The decision criteria for the environmental aspects and relations of dependence are defined. The evaluation is made with the Analytic Network Process in order to create a realistic approach to the inter-dependencies among the criteria. The results are computed via the Super Decisions software. Finally, it is observed that the procedure is completed in less time, with less data, at lower cost and in a less subjective way than conventional approaches. - Highlights: • We present a simplified environmental assessment methodology to support LCA. • ANP is proposed to select the most relevant environmental aspect. • ANP deals well with the interdependencies between aspects and impacts. • The methodology is less subjective, less complicated, and less time- and money-consuming. • The proposed methodology is suitable for use by SMEs.
Bianconi, Fortunato; Baldelli, Elisa; Ludovini, Vienna; Petricoin, Emanuel F; Crinò, Lucio; Valigi, Paolo
2015-10-19
The study of cancer therapy is a key issue in the field of oncology research and the development of target therapies is one of the main problems currently under investigation. This is particularly relevant in different types of tumor where traditional chemotherapy approaches often fail, such as lung cancer. We started from the general definition of robustness introduced by Kitano and applied it to the analysis of dynamical biochemical networks, proposing a new algorithm based on moment independent analysis of input/output uncertainty. The framework utilizes novel computational methods which enable evaluating the model fragility with respect to quantitative performance measures and parameters such as reaction rate constants and initial conditions. The algorithm generates a small subset of parameters that can be used to act on complex networks and to obtain the desired behaviors. We have applied the proposed framework to the EGFR-IGF1R signal transduction network, a crucial pathway in lung cancer, as an example of Cancer Systems Biology application in drug discovery. Furthermore, we have tested our framework on a pulse generator network as an example of Synthetic Biology application, thus proving the suitability of our methodology to the characterization of the input/output synthetic circuits. The achieved results are of immediate practical application in computational biology, and while we demonstrate their use in two specific examples, they can in fact be used to study a wider class of biological systems.
Evaluating IPv6 Adoption in the Internet
NASA Astrophysics Data System (ADS)
Colitti, Lorenzo; Gunderson, Steinar H.; Kline, Erik; Refice, Tiziana
As IPv4 address space approaches exhaustion, large networks are deploying IPv6 or preparing for deployment. However, there is little data available about the quantity and quality of IPv6 connectivity. We describe a methodology to measure IPv6 adoption from the perspective of a Web site operator and to evaluate the impact that adding IPv6 to a Web site will have on its users. We apply our methodology to the Google Web site and present results collected over the last year. Our data show that IPv6 adoption, while growing significantly, is still low, varies considerably by country, and is heavily influenced by a small number of large deployments. We find that native IPv6 latency is comparable to IPv4 and provide statistics on IPv6 transition mechanisms used.
NASA Astrophysics Data System (ADS)
Liberal, Iñigo; Engheta, Nader
2018-02-01
Quantum emitters interacting through a waveguide setup have been proposed as a promising platform for basic research on light-matter interactions and quantum information processing. We propose to augment waveguide setups with the use of multiport devices. Specifically, we demonstrate theoretically the possibility of exciting N -qubit subradiant, maximally entangled, states with the use of suitably designed N -port devices. Our general methodology is then applied based on two different devices: an epsilon-and-mu-near-zero waveguide hub and a nonreciprocal circulator. A sensitivity analysis is carried out to assess the robustness of the system against a number of nonidealities. These findings link and merge the designs of devices for quantum state engineering with classical communication network methodologies.
Team knowledge representation: a network perspective.
Espinosa, J Alberto; Clark, Mark A
2014-03-01
We propose a network perspective of team knowledge that offers both conceptual and methodological advantages, expanding explanatory value through representation and measurement of component structure and content. Team knowledge has typically been conceptualized and measured with relatively simple aggregates, without fully accounting for differing knowledge configurations among team members. Teams with similar aggregate values of team knowledge may have very different team dynamics depending on how knowledge isolates, cliques, and densities are distributed across the team; which members are the most knowledgeable; who shares knowledge with whom; and how knowledge clusters are distributed. We illustrate our proposed network approach through a sample of 57 teams, including how to compute, analyze, and visually represent team knowledge. Team knowledge network structures (isolation, centrality) are associated with outcomes of, respectively, task coordination, strategy coordination, and the proportion of team knowledge cliques, all after controlling for shared team knowledge. Network analysis helps to represent, measure, and understand the relationship of team knowledge to outcomes of interest to team researchers, members, and managers. Our approach complements existing team knowledge measures. Researchers and managers can apply network concepts and measures to help understand where team knowledge is held within a team and how this relational structure may influence team coordination, cohesion, and performance.
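A short sketch of the kind of network computations involved, using networkx on a hypothetical five-member team whose edges denote shared knowledge; isolates, density, degree centrality and knowledge cliques are the constructs named in the abstract:

```python
import networkx as nx

# Hypothetical team: an edge means two members share a knowledge area.
G = nx.Graph()
G.add_nodes_from(["ana", "ben", "carl", "dee", "eli"])
G.add_edges_from([("ana", "ben"), ("ana", "carl"), ("ben", "carl"),
                  ("carl", "dee")])  # "eli" is a knowledge isolate

isolates = list(nx.isolates(G))           # members sharing nothing
density = nx.density(G)                   # overall knowledge sharing
centrality = nx.degree_centrality(G)      # who holds the shared knowledge
cliques = [c for c in nx.find_cliques(G) if len(c) >= 3]  # knowledge cliques

print(isolates, round(density, 2), centrality, cliques)
```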
[Social support network and health of elderly individuals with chronic pneumopathies].
Mesquita, Rafael Barreto de; Morano, Maria Tereza Aguiar Pessoa; Landim, Fátima Luna Pinheiro; Collares, Patrícia Moreira Costa; Pinto, Juliana Maria de Sousa
2012-05-01
This study sought to analyze characteristics of the social support network of the elderly with chronic pneumopathies, establishing links with health maintenance/rehabilitation. The assumptions of Social Network Analysis (SNA) methodology were used, addressing the social support concept. A questionnaire and semi-structured interviews, both applied to 16 elderly people receiving care at a public hospital in Fortaleza-CE, were used for data collection. Quantitative data were processed using the UCINET 6.123, NetDraw 2.38 and Microsoft Excel software programs. In the qualitative analysis, the body of material was subjected to interpretations based on relevant and current theoretical references. Each informant brought an average of 10.37 individuals into the network. Among the 3 types of social support, there was a predominance of informational support given by health professionals. The importance of reciprocity in providing/receiving social support was also noted, as well as the participation of health professionals and the family functioning as social support. The conclusion reached was that the network of the elderly with pneumopathies is not cohesive, being restricted to the personal network of each individual, and that even so, the informants recognize and are satisfied with the social support it provides.
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
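For orientation, a compact sketch of the underlying batch LS-SVM regression formulation, which reduces training to a single linear solve; the paper's incremental, multiclass and fusion machinery is beyond this sketch, and the RBF kernel and hyperparameters are assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Batch LS-SVM regression: solve one (n+1)x(n+1) linear system
    instead of a QP. Only the static formulation the paper adapts."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b

X = np.random.rand(40, 2)
y = np.sin(3 * X[:, 0]) + 0.1 * np.random.randn(40)
predict = lssvm_fit(X, y)
print(predict(X[:3]))
```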
Koparde, Vishal N.; Scarsdale, J. Neel; Kellogg, Glen E.
2011-01-01
Background: The quality of X-ray crystallographic models for biomacromolecules refined from data obtained at high resolution is assured by the data itself. However, at low resolution, >3.0 Å, additional information is supplied by a forcefield coupled with an associated refinement protocol. These resulting structures are often of lower quality and thus unsuitable for downstream activities like structure-based drug discovery. Methodology: An X-ray crystallography refinement protocol that enhances standard methodology by incorporating energy terms from the HINT (Hydropathic INTeractions) empirical forcefield is described. This protocol was tested by refining synthetic low-resolution structural data derived from 25 diverse high-resolution structures, and referencing the resulting models to these structures. The models were also evaluated with global structural quality metrics, e.g., Ramachandran score and MolProbity clashscore. Three additional structures, for which only low-resolution data are available, were also re-refined with this methodology. Results: The enhanced refinement protocol is most beneficial for reflection data at resolutions of 3.0 Å or worse. At the low-resolution limit, ≥4.0 Å, the new protocol generated models with Cα positions that have RMSDs that are 0.18 Å more similar to the reference high-resolution structure, Ramachandran scores improved by 13%, and clashscores improved by 51%, all in comparison to models generated with the standard refinement protocol. The hydropathic forcefield terms are at least as effective as Coulombic electrostatic terms in maintaining polar interaction networks, and significantly more effective in maintaining hydrophobic networks, as synthetic resolution is decremented. Even at resolutions ≥4.0 Å, these latter networks are generally native-like, as measured with a hydropathic interactions scoring tool. PMID:21246043
Identification of functional modules using network topology and high-throughput data.
Ulitsky, Igor; Shamir, Ron
2007-01-26
With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information together can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in the analysis of high-throughput data.
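A toy seed-and-expand heuristic in the same spirit: given a graph and edge similarity values, grow a connected module while the average internal similarity improves. It is a simplified stand-in for the paper's algorithms, with made-up similarities:

```python
import networkx as nx

def module_score(G, sim, nodes):
    inner = list(G.subgraph(nodes).edges())
    return sum(sim[frozenset(e)] for e in inner) / max(len(inner), 1)

def greedy_module(G, sim, seed):
    """Grow a connected sub-network from `seed`, repeatedly adding the
    neighbour that most increases average internal edge similarity."""
    module = {seed}
    while True:
        frontier = {u for v in module for u in G[v]} - module
        best, best_s = None, module_score(G, sim, module)
        for u in frontier:
            s = module_score(G, sim, module | {u})
            if s > best_s:
                best, best_s = u, s
        if best is None:
            return module
        module.add(best)

G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d"), ("d", "e")])
sim = {frozenset(e): s for e, s in [(("a", "b"), 0.2), (("b", "c"), 0.9),
                                    (("c", "d"), 0.8), (("b", "d"), 0.85),
                                    (("d", "e"), 0.1)]}
print(greedy_module(G, sim, "c"))
```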
To trade or not to trade: Link prediction in the virtual water network
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca
2017-12-01
In the international trade network, links express the (temporary) presence of a commercial exchange of goods between any two countries. Given the dynamical behaviour of the trade network, where links are created and dismissed every year, predicting link activation/deactivation is an open research question. Through the international trade network of agricultural goods, water resources are 'virtually' transferred from the country of production to the country of consumption. We propose a novel methodology for link prediction applied to the network of virtual water trade. Starting from the assumption of having links between any two countries, we estimate the associated virtual water flows by means of a gravity-law model using country and link characteristics as drivers. We consider links with estimated flows higher than 1000 m3/year as active links and the others as non-active links. Flows traded along estimated active links are then re-estimated using a similar but differently-calibrated gravity-law model. We were able to correctly model 84% of the existing links and 93% of the non-existing links in year 2011. It is worth noting that the predicted active links carry 99% of the global virtual water flow; hence, missed links are mainly those where a minimum volume of virtual water is exchanged. Results indicate that, over the period from 1986 to 2011, population, geographical distances between countries, and agricultural efficiency (through fertilizer use) were the major factors driving link activation and deactivation. As opposed to other (network-based) models for link prediction, the proposed method is able to reconstruct the network architecture without any prior knowledge of the network topology, using only node and link attributes; it thus represents a general method that can be applied to other networks such as food or value trade networks.
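A schematic of the two-step logic on synthetic data: estimate flows for all country pairs with a gravity-law form, then activate links above the 1000 m3/year threshold. Populations, distances and the gravity coefficients are illustrative, not the calibrated drivers of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6  # toy set of countries
pop = rng.uniform(1e6, 1e8, n)           # populations (a driver)
dist = rng.uniform(500, 15000, (n, n))   # km between countries
dist = (dist + dist.T) / 2
np.fill_diagonal(dist, np.inf)           # no self-trade

# Gravity-law estimate of virtual water flow from i to j
# (coefficients are made up, not the paper's calibrated values).
k, a, b, c = 1.0, 0.8, 0.7, 2.0
flow = k * np.outer(pop ** a, pop ** b) / dist ** c

active = flow > 1000.0  # the paper's activation threshold: 1000 m3/year
print(f"{active.sum()} of {n * n - n} directed links predicted active")
```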
Making Supply Chains Resilient to Floods Using a Bayesian Network
NASA Astrophysics Data System (ADS)
Haraguchi, M.
2015-12-01
Natural hazards distress the global economy by disrupting the interconnected supply chain networks. Manufacturing companies have created cost-efficient supply chains by reducing inventories, streamlining logistics and limiting the number of suppliers. As a result, today's supply chains are profoundly susceptible to systemic risks. In Thailand, for example, the GDP growth rate declined by 76 % in 2011 due to prolonged flooding. Thailand incurred economic damage including the loss of USD 46.5 billion, approximately 70% of which was caused by major supply chain disruptions in the manufacturing sector. Similar problems occurred after the Great East Japan Earthquake and Tsunami in 2011, the Mississippi River floods and droughts during 2011 - 2013, and Hurricane Sandy in 2012. This study proposes a methodology for modeling supply chain disruptions using a Bayesian network analysis (BNA) to estimate expected values of countermeasures of floods, such as inventory management, supplier management and hard infrastructure management. We first performed a spatio-temporal correlation analysis between floods and extreme precipitation data for the last 100 years at a global scale. Then we used a BNA to create synthetic networks that include variables associated with the magnitude and duration of floods, major components of supply chains and market demands. We also included decision variables of countermeasures that would mitigate potential losses caused by supply chain disruptions. Finally, we conducted a cost-benefit analysis by estimating the expected values of these potential countermeasures while conducting a sensitivity analysis. The methodology was applied to supply chain disruptions caused by the 2011 Thailand floods. Our study demonstrates desirable typical data requirements for the analysis, such as anonymized supplier network data (i.e. critical dependencies, vulnerability information of suppliers) and sourcing data(i.e. locations of suppliers, and production rates and volume), and data from previous experiences (i.e. companies' risk mitigation strategy decisions).
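The cost-benefit step can be illustrated with a tiny discrete model: compare the expected cost of supply chain countermeasures given a flood probability. All probabilities, losses and policy costs below are invented for illustration:

```python
# Toy decision model: P(flood), P(disruption | flood state, inventory
# policy), disruption loss and policy cost, all in illustrative units.
p_flood = 0.2
p_disrupt = {("flood", "lean"): 0.8, ("flood", "buffer"): 0.3,
             ("dry", "lean"): 0.05, ("dry", "buffer"): 0.02}
loss_disrupt = 50.0                      # M$ if the chain is disrupted
cost_policy = {"lean": 0.0, "buffer": 2.0}  # M$ cost of the countermeasure

for policy in ("lean", "buffer"):
    expected = cost_policy[policy]
    for state, p in (("flood", p_flood), ("dry", 1 - p_flood)):
        expected += p * p_disrupt[(state, policy)] * loss_disrupt
    print(policy, f"expected cost {expected:.2f} M$")
```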
Development of Methodologies for IV and V of Neural Networks
NASA Technical Reports Server (NTRS)
Taylor, Brian; Darrah, Marjorie
2003-01-01
Non-deterministic systems often rely upon neural network (NN) technology to "learn" to manage flight systems under controlled conditions using carefully chosen training sets. How can these adaptive systems be certified to ensure that they will become increasingly efficient and behave appropriately in real-time situations? The bulk of Independent Verification and Validation (IV&V) research on non-deterministic software control systems such as Adaptive Flight Controllers (AFCs) addresses NNs in well-behaved and constrained environments such as simulations and strict process control. However, neither substantive research nor effective IV&V techniques have been found to address AFCs learning in real time and adapting to live flight conditions. Adaptive flight control systems offer good extensibility into commercial aviation as well as military aviation and transportation. Consequently, this area of IV&V represents an area of growing interest and urgency. ISR proposes to further the current body of knowledge to meet two objectives: research the current IV&V methods and assess where these methods may be applied toward a methodology for the V&V of neural networks; and identify effective methods for IV&V of NNs that learn in real time, including developing a prototype test bed for IV&V of AFCs. Currently, no practical method exists. ISR will meet these objectives through the tasks identified and described below. First, ISR will conduct a literature review of current IV&V technology. To do this, ISR will collect the existing body of research on IV&V of non-deterministic systems and neural networks. ISR will also develop the framework for disseminating this information through specialized training. This effort will focus on developing NASA's capability to conduct IV&V of neural network systems and to provide training to meet the increasing need for IV&V expertise in such systems.
Gis-Based Accessibility Analysis of Urban Emergency Shelters: the Case of Adana City
NASA Astrophysics Data System (ADS)
Unal, M.; Uslu, C.
2016-10-01
Accessibility analysis of urban emergency shelters can help support urban disaster prevention planning. Pre-disaster emergency evacuation zoning has become a significant topic on disaster prevention and mitigation research. In this study, we assessed the level of serviceability of urban emergency shelters within maximum capacity, usability, sufficiency and a certain walking time limit by employing spatial analysis techniques of GIS-Network Analyst. The methodology included the following aspects: the distribution analysis of emergency evacuation demands, the calculation of shelter space accessibility and the optimization of evacuation destinations. This methodology was applied to Adana, a city in Turkey, which is located within the Alpine-Himalayan orogenic system, the second major earthquake belt after the Pacific-Belt. It was found that the proposed methodology could be useful in aiding to understand the spatial distribution of urban emergency shelters more accurately and establish effective future urban disaster prevention planning. Additionally, this research provided a feasible way for supporting emergency management in terms of shelter construction, pre-disaster evacuation drills and rescue operations.
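A minimal network-analysis sketch of the accessibility computation: shortest walking times from residential nodes to shelter nodes under a time limit, via Dijkstra on a weighted graph. The toy network and the 15-minute limit are assumptions:

```python
import networkx as nx

# Toy pedestrian network: nodes are homes (h*), junctions (j*) and
# shelters (s*); edge weights are walking times in minutes.
G = nx.Graph()
G.add_weighted_edges_from([("h1", "j1", 4), ("j1", "j2", 6),
                           ("j2", "s1", 3), ("j1", "s2", 9),
                           ("h2", "j2", 5)])
shelters = {"s1", "s2"}
limit = 15  # maximum acceptable walking time in minutes (assumed)

for home in ("h1", "h2"):
    lengths = nx.single_source_dijkstra_path_length(G, home, cutoff=limit)
    reach = {s: t for s, t in lengths.items() if s in shelters}
    print(home, "->", reach or "no shelter within limit")
```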
Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan
2015-05-15
The design of sewer system control is a complex task given the large size of sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller aims at keeping the system close to optimal performance thanks to an optimal selection of controlled variables. The definition of optimal performance was carried out by a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event and the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and expected to contribute to advances in the operation and control of sewer systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Plagianakos, V P; Magoulas, G D; Vrahatis, M N
2006-03-01
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
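The core idea, partitioning the training set and evaluating the error gradient in parallel, can be sketched without PVM by averaging per-partition gradients of a linear model; the learning rate, model and data are placeholders:

```python
import numpy as np

def local_grad(w, X, y):
    """Gradient of the squared error of a linear model on one partition."""
    return 2 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(2)
X, y = rng.random((300, 5)), rng.random(300)
partitions = np.array_split(np.arange(300), 4)  # four "workers"

w = np.zeros(5)
for _ in range(200):
    # Each worker evaluates its partial gradient; a master averages
    # them: the same split-the-training-set idea, minus the messaging.
    grads = [local_grad(w, X[p], y[p]) for p in partitions]
    w -= 0.1 * np.mean(grads, axis=0)
print(w)
```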
Megacity analysis: a clustering approach to classification
2017-06-01
kinetic or non -kinetic urban operations. We develop and implement a methodology to classify megacities into groups. Using 33 variables, we construct a...is interested in these megacity networks and their implications for potential urban operations. We develop a methodology to group like megacities...is interested in these megacity networks and their implications for potential urban operations. We develop a methodology to group like megacities
Construction and comparison of gene co-expression networks shows complex plant immune responses
López, Camilo; López-Kleine, Liliana
2014-01-01
Gene co-expression networks (GCNs) are graphic representations that depict the coordinated transcription of genes in response to certain stimuli. GCNs provide functional annotations of genes whose function is unknown and are further used in studies of translational functional genomics among species. In this work, a methodology for the reconstruction and comparison of GCNs is presented. This approach was applied using gene expression data obtained from immunity experiments in Arabidopsis thaliana, rice, soybean, tomato and cassava. After evaluating diverse similarity metrics for GCN reconstruction, we recommend the mutual information coefficient measurement and a clustering coefficient-based method for similarity threshold selection. To compare GCNs, we propose a multivariate approach based on Principal Component Analysis (PCA). Branches of plant immunity exemplified by each experiment were analyzed in conjunction with the PCA results, suggesting both the robustness and the dynamic nature of the cellular responses. The dynamics of molecular plant responses produced networks with different characteristics that are differentiable using our methodology. The comparison of GCNs from plant pathosystems showed that, in response to similar pathogens, plants can activate conserved signaling pathways. The results confirmed that the closeness of GCNs projected on the principal component space is indicative of similarity among GCNs. This can also be used to understand global patterns of events triggered during plant immune responses. PMID:25320678
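A compressed sketch of the pipeline: build thresholded co-expression networks, extract topological features, and compare the networks by projecting the feature matrix onto principal components. Absolute correlation stands in for the recommended mutual information coefficient to keep the sketch short:

```python
import numpy as np

def gcn_adjacency(expr, threshold):
    """Co-expression network from a genes x samples matrix, using
    absolute correlation as a stand-in similarity metric."""
    sim = np.abs(np.corrcoef(expr))
    np.fill_diagonal(sim, 0.0)
    return (sim >= threshold).astype(int)

def topo_features(A):
    deg = A.sum(1)
    return np.array([deg.mean(), deg.std(), A.sum() / 2])  # + edge count

rng = np.random.default_rng(3)
# Five synthetic "experiments" of 50 genes x 20 samples each.
nets = [gcn_adjacency(rng.random((50, 20)), 0.5) for _ in range(5)]
F = np.array([topo_features(A) for A in nets])

# PCA via SVD: networks close in PC space are topologically similar.
Fc = F - F.mean(0)
U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
print(Fc @ Vt[:2].T)  # coordinates of each network on PC1/PC2
```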
NASA Astrophysics Data System (ADS)
Marques, Haroldo; Monico, João; Aquino, Marcio; Melo, Weyller
2014-05-01
The real time PPP method requires the availability of real time precise orbits and satellite clock corrections. Currently, it is possible to apply the real time orbit and clock solutions provided by the Federal Agency for Cartography and Geodesy (BKG) within the context of the IGS Pilot project, or to use the operational predicted IGU ephemeris. The accuracy of the satellite positions available in the IGU ephemeris is sufficient for several applications requiring good quality. The satellite clock corrections, however, are not accurate enough (3 ns ~ 0.9 m) to accomplish real time PPP at the same level of accuracy. For real time PPP applications it is therefore necessary to research and develop appropriate methodologies for estimating the satellite clock corrections in real time with better accuracy. The BKG corrections are disseminated in a newly proposed RTCM 3.x format and can be applied to the broadcast orbits and clocks. Some investigations have proposed estimating the satellite clock corrections using GNSS code and phase observables at the double difference level between satellites and epochs (MERVAT, DOUSA, 2007). Another possibility consists of applying a Kalman filter in the network PPP mode (HAUSCHILD, 2010), and it is also possible to integrate both methods, using network PPP together with observables at the double difference level over specific time intervals (ZHANG; LI; GUO, 2010). In this work, the adopted methodology consists of estimating the satellite clock corrections through a data adjustment in the PPP mode, but for a network of GNSS stations. The clock solution can be obtained from two types of observables: code smoothed by carrier phase, or undifferenced code together with carrier phase. In the former, we estimate the receiver clock error, the satellite clock correction and the troposphere, the phase ambiguities being eliminated by differencing between consecutive epochs. When using undifferenced code and phase, the ambiguities are estimated together with the receiver clock errors, satellite clock corrections and troposphere parameters. In both strategies it is also possible to correct the troposphere delay with a Numerical Weather Forecast Model instead of estimating it. The prediction of the satellite clock correction can be performed by fitting a straight line or a second degree polynomial to the time series of the estimated satellite clocks. To estimate the satellite clock corrections and to accomplish real time PPP, two pieces of software have been developed, respectively "RT_SAT_CLOCK" and "RT_PPP". RT_PPP is able to process GNSS code and phase data using precise ephemeris and precise satellite clock corrections, together with the several corrections required for PPP. In RT_SAT_CLOCK we apply a Kalman filter algorithm to estimate the satellite clock corrections in the network PPP mode; in this case, all PPP corrections must be applied at each station. The experiments were carried out in real time and in post-processed mode (simulating real time), considering data from the Brazilian continuous GPS network and also from the IGS network in a global satellite clock solution. We used the IGU ephemeris for the satellite positions and estimated the satellite clock corrections, performing updates as soon as new ephemeris files became available.
Further experiments assessed the accuracy of the estimated clocks when using the Brazilian Numerical Weather Forecast Model (BNWFM) from CPTEC/INPE, when using the ZTD from the European Centre for Medium-Range Weather Forecasts (ECMWF) together with the Vienna Mapping Function (VMF), and when estimating the troposphere along with the clocks and ambiguities in the Kalman filter. The daily precision of the estimated satellite clock corrections reached the order of 0.15 nanoseconds. The clocks were applied in real time PPP for Brazilian network stations and also in flight tests of Brazilian airplanes, and the results show that it is possible to accomplish real time PPP in static and kinematic modes with accuracies of the order of 10 and 20 cm, respectively.
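The prediction step described above is straightforward to illustrate: fit a straight line or a second-degree polynomial to a satellite's clock-correction time series and extrapolate. The series below is synthetic, not RT_SAT_CLOCK output:

```python
import numpy as np

# Hypothetical estimated clock corrections (ns) for one satellite,
# sampled every 30 s; in the described system these would come from
# the network-PPP Kalman filter.
t = np.arange(0, 600, 30.0)
clk = 5.0 + 0.002 * t + 0.05 * np.random.randn(len(t))

# Predict ahead with a straight line or a 2nd-degree polynomial,
# the two options mentioned in the abstract.
for deg in (1, 2):
    coef = np.polyfit(t, clk, deg)
    pred = np.polyval(coef, 900.0)  # 5 minutes past the last estimate
    print(f"degree {deg}: predicted correction {pred:.3f} ns")
```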
Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.
Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A
2016-06-01
An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. Copyright © 2016 Elsevier Inc. All rights reserved.
Automating Risk Analysis of Software Design Models
Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.
2014-01-01
The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
A QoS-guaranteed coverage precedence routing algorithm for wireless sensor networks.
Jiang, Joe-Air; Lin, Tzu-Shiang; Chuang, Cheng-Long; Chen, Chia-Pang; Sun, Chin-Hong; Juang, Jehn-Yih; Lin, Jiun-Chuan; Liang, Wei-Wen
2011-01-01
For mission-critical applications of wireless sensor networks (WSNs) involving extensive battlefield surveillance, medical healthcare, etc., it is crucial to have low-power, new protocols, methodologies and structures for transferring data and information in a network with full sensing coverage capability for an extended working period. The foremost mission is to ensure that the network is fully functional, providing reliable transmission of the sensed data without the risk of data loss. WSNs have been applied to various types of mission-critical applications. Coverage preservation is one of the most essential functions to guarantee quality of service (QoS) in WSNs. However, a tradeoff exists between sensing coverage and network lifetime due to the limited energy supplies of sensor nodes. In this study, we propose a routing protocol to accommodate both energy balance and coverage preservation for sensor nodes in WSNs. The energy consumption for radio transmissions and the residual energy over the network are taken into account when the proposed protocol determines an energy-efficient route for a packet. The simulation results demonstrate that the proposed protocol is able to increase the duration of the on-duty network and provide up to 98.3% and 85.7% extra service time with a 100% sensing coverage ratio, compared with the LEACH and LEACH-Coverage-U protocols, respectively.
NASA Astrophysics Data System (ADS)
Rahmati, Mehdi
2017-08-01
Developing accurate and reliable pedotransfer functions (PTFs) to predict soil characteristics that are not readily available is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also yields more accurate and reliable estimates than other commonly applied methodologies. The current research therefore applied GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict point values of soil cumulative infiltration at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si) and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Applying the GMDH, MLR and ANN methodologies, several PTFs were then developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at the different time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) than GMDH and MLR. The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, Ks exclusion appears to yield more practical PTFs, especially in the case of the GMDH network, since the remaining input variables are less time consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimum set of input variables (2-4 input variables), and can be a promising strategy for modelling soil infiltration, combining the advantages of the ANN and MLR methodologies.
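A minimal sketch of one GMDH layer, the mechanism that makes input selection explicit: fit a quadratic unit to every input pair, rank the units on held-out data (the external criterion) and keep the best few. Data and variable names are synthetic stand-ins for the measured RACs:

```python
import numpy as np
from itertools import combinations

def quad_features(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    """One GMDH layer: a quadratic unit per input pair, ranked by
    validation error; the survivors would feed the next layer."""
    units = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        F = quad_features(X_tr[:, i], X_tr[:, j])
        w, *_ = np.linalg.lstsq(F, y_tr, rcond=None)
        err = np.mean((quad_features(X_va[:, i], X_va[:, j]) @ w - y_va) ** 2)
        units.append((err, i, j, w))
    units.sort(key=lambda u: u[0])
    return units[:keep]

rng = np.random.default_rng(4)
X = rng.random((120, 5))   # e.g. clay, sand, Db, OC, theta_i (assumed)
y = X[:, 0] * X[:, 2] + 0.3 * X[:, 3] + 0.05 * rng.standard_normal(120)
best = gmdh_layer(X[:80], y[:80], X[80:], y[80:])
print([(round(e, 4), i, j) for e, i, j, _ in best])
```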
Evaluating the Quality of Evidence from a Network Meta-Analysis
Salanti, Georgia; Del Giovane, Cinzia; Chaimani, Anna; Caldwell, Deborah M.; Higgins, Julian P. T.
2014-01-01
Systematic reviews that collate data about the relative effects of multiple interventions via network meta-analysis are highly informative for decision-making purposes. A network meta-analysis provides two types of findings for a specific outcome: the relative treatment effect for all pairwise comparisons, and a ranking of the treatments. It is important to consider the confidence with which these two types of results can enable clinicians, policy makers and patients to make informed decisions. We propose an approach to determining confidence in the output of a network meta-analysis. Our proposed approach is based on methodology developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group for pairwise meta-analyses. The suggested framework for evaluating a network meta-analysis acknowledges (i) the key role of indirect comparisons (ii) the contributions of each piece of direct evidence to the network meta-analysis estimates of effect size; (iii) the importance of the transitivity assumption to the validity of network meta-analysis; and (iv) the possibility of disagreement between direct evidence and indirect evidence. We apply our proposed strategy to a systematic review comparing topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations. The proposed framework can be used to determine confidence in the results from a network meta-analysis. Judgements about evidence from a network meta-analysis can be different from those made about evidence from pairwise meta-analyses. PMID:24992266
Ma, Chuang; Xin, Mingming; Feldmann, Kenneth A.; Wang, Xiangfeng
2014-01-01
Machine learning (ML) is an intelligent data mining technique that builds a prediction model based on the learning of prior knowledge to recognize patterns in large-scale data sets. We present an ML-based methodology for transcriptome analysis via comparison of gene coexpression networks, implemented as an R package called machine learning–based differential network analysis (mlDNA) and apply this method to reanalyze a set of abiotic stress expression data in Arabidopsis thaliana. The mlDNA first used a ML-based filtering process to remove nonexpressed, constitutively expressed, or non-stress-responsive “noninformative” genes prior to network construction, through learning the patterns of 32 expression characteristics of known stress-related genes. The retained “informative” genes were subsequently analyzed by ML-based network comparison to predict candidate stress-related genes showing expression and network differences between control and stress networks, based on 33 network topological characteristics. Comparative evaluation of the network-centric and gene-centric analytic methods showed that mlDNA substantially outperformed traditional statistical testing–based differential expression analysis at identifying stress-related genes, with markedly improved prediction accuracy. To experimentally validate the mlDNA predictions, we selected 89 candidates out of the 1784 predicted salt stress–related genes with available SALK T-DNA mutagenesis lines for phenotypic screening and identified two previously unreported genes, mutants of which showed salt-sensitive phenotypes. PMID:24520154
A linguistic geometry for 3D strategic planning
NASA Technical Reports Server (NTRS)
Stilman, Boris
1995-01-01
This paper is a new step in the development and application of Linguistic Geometry. This formal theory is intended to discover the inner properties of human expert heuristics, which have been successful in a certain class of complex control systems, and to apply them to different systems. In this paper we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing Linguistic Geometry tools, the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are illustrated on a new pilot example: the solution of an extremely complex 3D optimization problem of strategic planning for space combat between autonomous vehicles. This example demonstrates deep and highly selective search in comparison with conventional search algorithms.
Reverse Engineering and Security Evaluation of Commercial Tags for RFID-Based IoT Applications.
Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Suárez-Albela, Manuel; Castedo, Luis
2016-12-24
The Internet of Things (IoT) is a distributed system of physical objects that requires the seamless integration of hardware (e.g., sensors, actuators, electronics) and network communications in order to collect and exchange data. IoT smart objects need to be somehow identified to determine the origin of the data and to automatically detect the elements around us. One of the best positioned technologies to perform identification is RFID (Radio Frequency Identification), which in the last years has gained a lot of popularity in applications like access control, payment cards or logistics. Despite its popularity, RFID security has not been properly handled in numerous applications. To foster security in such applications, this article includes three main contributions. First, in order to establish the basics, a detailed review of the most common flaws found in RFID-based IoT systems is provided, including the latest attacks described in the literature. Second, a novel methodology that eases the detection and mitigation of such flaws is presented. Third, the latest RFID security tools are analyzed and the methodology proposed is applied through one of them (Proxmark 3) to validate it. Thus, the methodology is tested in different scenarios where tags are commonly used for identification. In such systems it was possible to clone transponders, extract information, and even emulate both tags and readers. Therefore, it is shown that the methodology proposed is useful for auditing security and reverse engineering RFID communications in IoT applications. It must be noted that, although this paper is aimed at fostering RFID communications security in IoT applications, the methodology can be applied to any RFID communications protocol.
Simulation of Attacks for Security in Wireless Sensor Network.
Diaz, Alvaro; Sanchez, Pablo
2016-11-18
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
Note: Methodology for the analysis of Bluetooth gateways in an implemented scatternet.
Etxaniz, J; Monje, P M; Aranguren, G
2014-03-01
This Note introduces a novel methodology to analyze the time performance of Bluetooth gateways in multi-hop networks, known as scatternets. The methodology is focused on distinguishing between the processing time and the time that each communication between nodes takes along an implemented scatternet. This technique is not only valid for Bluetooth networks but also for other wireless networks that offer access to their middleware in order to include beacons in the operation of the nodes. We show in this Note the results of the tests carried out on a Bluetooth scatternet in order to highlight the reliability and effectiveness of the methodology. The results also validate this technique showing convergence in the results when subtracting the time for the beacons from the delay measurements.
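A toy illustration of the decomposition the methodology performs: with middleware beacons timestamping message entry and exit at each node, consecutive events at the same node give processing time, and events spanning two nodes give per-link communication time. The log values below are invented:

```python
# Hypothetical beacon log: (node, event, timestamp_ms) records emitted
# by middleware beacons at message reception (rx) and transmission (tx).
log = [("A", "rx", 0.0), ("A", "tx", 4.2),    # 4.2 ms processing at A
       ("B", "rx", 11.8), ("B", "tx", 15.1),  # A->B link, then B processing
       ("C", "rx", 22.9)]

for (n1, _, t1), (n2, _, t2) in zip(log, log[1:]):
    kind = f"processing at {n1}" if n1 == n2 else f"link {n1}->{n2}"
    print(f"{kind}: {t2 - t1:.1f} ms")
print("end-to-end:", log[-1][2] - log[0][2], "ms")
```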
The CTD2 Center at Emory University has developed a computational methodology to combine high-throughput knockdown data with known protein network topologies to infer the importance of protein-protein interactions (PPIs) for the survival of cancer cells. Applying these data to the Achilles shRNA results, the CCLE cell line characterizations, and known and newly identified PPIs provides novel insights for potential new drug targets for cancer therapies and identifies important PPI hubs.
Investigating accident causation through information network modelling.
Griffin, T G C; Young, M S; Stanton, N A
2010-02-01
Management of risk in complex domains such as aviation relies heavily on post-event investigations, requiring complex approaches to fully understand the integration of multi-causal, multi-agent and multi-linear accident sequences. The Event Analysis of Systemic Teamwork methodology (EAST; Stanton et al. 2008) offers such an approach based on network models. In this paper, we apply EAST to a well-known aviation accident case study, highlighting communication between agents as a central theme and investigating the potential for identifying agents who were key to the accident. Ultimately, this work aims to develop a new model based on distributed situation awareness (DSA) to demonstrate that the risk inherent in a complex system depends on the information flowing within it. By identifying key agents and information elements, we can propose proactive design strategies to optimize the flow of information and help work towards avoiding aviation accidents. Statement of Relevance: This paper introduces a novel application of a holistic methodology for understanding aviation accidents. Furthermore, it introduces an ongoing project developing a nonlinear and prospective method that centralises distributed situation awareness and communication as themes. The relevance of the findings is discussed in the context of current ergonomic and aviation issues of design, training and human-system interaction.
Zischg, Jonatan; Goncalves, Mariana L R; Bacchin, Taneha Kuzniecow; Leonhardt, Günther; Viklander, Maria; van Timmeren, Arjan; Rauch, Wolfgang; Sitzenfrei, Robert
2017-09-01
In the urban water cycle, there are different ways of handling stormwater runoff. Traditional systems mainly rely on underground piped, sometimes named 'gray', infrastructure. New and so-called 'green/blue' ambitions aim for treating and conveying the runoff at the surface. Such concepts are mainly based on ground infiltration and temporal storage. In this work, a methodology is presented to create and compare different planning alternatives for stormwater handling on their pathways to a desired system state. Investigations are made to assess the system performance and robustness when facing the deeply uncertain spatial and temporal developments of the future urban fabric, including impacts caused by climate change, urbanization and other disruptive events, such as shifts in the network layout and interactions of 'gray' and 'green/blue' structures. With the Info-Gap robustness pathway method, three planning alternatives are evaluated to identify critical performance levels at different stages over time. This novel methodology is applied to a real case study problem where a city relocation process takes place during the upcoming decades. In this case study it is shown that hybrid systems including green infrastructures are more robust with respect to future uncertainties, compared to traditional network design.
A multi-objective model for sustainable recycling of municipal solid waste.
Mirdar Harijani, Ali; Mansour, Saeed; Karimi, Behrooz
2017-04-01
The efficient management of municipal solid waste is a major problem for large and populated cities. In many countries, the majority of municipal solid waste is landfilled or dumped owing to an inefficient waste management system. Therefore, an optimal and sustainable waste management strategy is needed. This study introduces a recycling and disposal network for sustainable utilisation of municipal solid waste. In order to optimise the network, we develop a multi-objective mixed integer linear programming model in which the economic, environmental and social dimensions of sustainability are concurrently balanced. The model is able to: select the best combination of waste treatment facilities; specify the type, location and capacity of waste treatment facilities; determine the allocation of waste to facilities; consider the transportation of waste and distribution of processed products; maximise the profit of the system; minimise the environmental footprint; maximise the social impacts of the system; and eventually generate an optimal and sustainable configuration for municipal solid waste management. The proposed methodology could be applied to any region around the world. Here, the city of Tehran, Iran, is presented as a real case study to show the applicability of the methodology.
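The allocation core of such a model can be sketched as a weighted-sum scalarisation of the three sustainability objectives solved as a linear program; the facility data, objective weights and scaling factors below are invented, and the full model would additionally need integer siting variables (a MILP), which this sketch omits:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 3 candidate treatment facilities; x_j = tonnes/day allocated.
profit = np.array([40.0, 25.0, 10.0])   # $/tonne processed
emis   = np.array([0.8, 0.3, 0.1])      # tCO2e/tonne (environmental)
social = np.array([0.2, 0.5, 0.9])      # social score/tonne
cap    = np.array([300.0, 200.0, 500.0])
waste  = 600.0                          # tonnes/day to allocate

# Weighted sum of the three objectives (weights are a policy choice);
# linprog minimises, so we negate the composite score.
w = (0.5, 0.3, 0.2)
c = -(w[0] * profit - w[1] * emis * 100 + w[2] * social * 50)
res = linprog(c, A_eq=[[1, 1, 1]], b_eq=[waste],
              bounds=list(zip([0, 0, 0], cap)))
print(res.x)  # optimal allocation to the three facilities
```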
Social Network Analysis: A New Methodology for Counseling Research.
ERIC Educational Resources Information Center
Koehly, Laura M.; Shivy, Victoria A.
1998-01-01
Social network analysis (SNA) uses indices of relatedness among individuals to produce representations of social structures and positions inherent in dyads or groups. SNA methods provide quantitative representations of ongoing transactional patterns in a given social environment. Methodological issues, applications and resources are discussed…
NASA Technical Reports Server (NTRS)
Madrid, G. A.; Westmoreland, P. T.
1983-01-01
A progress report is presented on a program to upgrade the existing NASA Deep Space Network with a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single-subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system- and network-level synthesis and testing, and system verification and validation. The software has so far been implemented to a 65 percent completion level, and the methodology being used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has been found to feature effective techniques.
D'Archivio, Angelo Antonio; Maggi, Maria Anna; Ruggieri, Fabrizio
2014-08-01
In this paper, a multilayer artificial neural network is used to model simultaneously the effect of solute structure and eluent concentration profile on the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient elution. The retention data of 24 triazines, including common herbicides and their metabolites, are collected under 13 different elution modes, covering the following experimental domain: starting acetonitrile volume fraction ranging between 40 and 60% and gradient slope ranging between 0 and 1% acetonitrile/min. The gradient parameters together with five selected molecular descriptors, identified by quantitative structure-retention relationship modelling applied to individual separation conditions, are the network inputs. Predictive performance of this model is evaluated on six external triazines and four unseen separation conditions. For comparison, retention of triazines is modelled by both quantitative structure-retention relationships and response surface methodology, which describe separately the effect of molecular structure and gradient parameters on the retention. Although applied to a wider variable domain, the network provides a performance comparable to that of the above "local" models and retention times of triazines are modelled with accuracy generally better than 7%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
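A sketch of the modelling setup using scikit-learn in place of the paper's network: molecular descriptors plus the two gradient parameters as inputs, retention time as output. The synthetic data, network size and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Inputs: 5 molecular descriptors + starting %ACN (40-60) + gradient
# slope (0-1 %ACN/min); values are synthetic stand-ins for the data set.
X = np.column_stack([rng.random((150, 5)),
                     rng.uniform(40, 60, 150),
                     rng.uniform(0, 1, 150)])
t_r = 5 + 10 * X[:, 0] - 0.08 * X[:, 5] + rng.normal(0, 0.3, 150)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,),
                                   max_iter=5000, random_state=0))
model.fit(X[:120], t_r[:120])
print("held-out R^2:", model.score(X[120:], t_r[120:]))
```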
Operation of remote mobile sensors for security of drinking water distribution systems.
Perelman, Lina; Ostfeld, Avi
2013-09-01
The deployment of fixed online water quality sensors in water distribution systems has been recognized as one of the key components of contamination warning systems for securing public health. This study proposes to explore how the inclusion of mobile sensors for inline monitoring of various water quality parameters (e.g., residual chlorine, pH) can enhance water distribution system security. Mobile sensors equipped with sampling, sensing, data acquisition, wireless transmission and power generation systems are being designed, fabricated, and tested, and prototypes are expected to be released in the very near future. This study initiates the development of a theoretical framework for modeling mobile sensor movement in water distribution systems and integrating the sensory data collected from stationary and non-stationary sensor nodes to increase system security. The methodology is applied and demonstrated on two benchmark networks. The performance of different sensor network designs is compared for fixed and for combined fixed and mobile sensor networks. Results indicate that complementing online sensor networks with inline monitoring can increase detection likelihood and decrease mean time to detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
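A toy simulation of the framework's central question, how mobile sensors change detection likelihood and time to detection: mobile sensors are modelled as random walkers on a small network graph with a fixed contaminated region. Everything here (grid topology, plume, walk model) is an illustrative assumption:

```python
import random
import networkx as nx

random.seed(7)
G = nx.grid_2d_graph(6, 6)          # toy distribution network
fixed = {(0, 0), (5, 5)}            # fixed online sensor locations
source = (3, 2)                     # contaminant injection node
plume = set(nx.single_source_shortest_path_length(G, source, cutoff=2))

def time_to_detect(n_mobile, steps=200):
    """Mobile sensors as random walkers; detection when any sensor,
    fixed or mobile, sits inside the contaminated node set."""
    if fixed & plume:
        return 0
    pos = [random.choice(list(G)) for _ in range(n_mobile)]
    for t in range(1, steps):
        pos = [random.choice(list(G[p])) for p in pos]
        if any(p in plume for p in pos):
            return t
    return None

trials = [time_to_detect(3) for _ in range(100)]
hits = [t for t in trials if t is not None]
print(f"detected in {len(hits)}/100 runs; "
      f"mean time {sum(hits) / max(len(hits), 1):.1f} steps")
```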
Intelligent-based Structural Damage Detection Model
NASA Astrophysics Data System (ADS)
Lee, Eric Wai Ming; Yu, Kin Fung
2010-05-01
This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of the network structure) but also facilitates the removal of noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure, thereby changing the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage case are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data are employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study demonstrate the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples are noise contaminated.
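For context, the GRNN half of the hybrid reduces to kernel-weighted regression over stored patterns; a minimal sketch is below, with the Fuzzy-ART clustering and noise-removal stages of the GRNNFA omitted and the damage patterns invented:

```python
import numpy as np

def grnn_predict(X_tr, y_tr, X_q, sigma=0.3):
    """General Regression Neural Network (Nadaraya-Watson form): each
    training sample is a kernel node; the output is the similarity-
    weighted average of the training targets."""
    d2 = ((X_q[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_tr) / w.sum(axis=1)

# Toy damage patterns: inputs = relative changes in the first three
# natural frequencies; target = damage location index (made up).
X = np.array([[0.02, 0.01, 0.00], [0.00, 0.03, 0.01], [0.01, 0.00, 0.04]])
y = np.array([1.0, 2.0, 3.0])
print(grnn_predict(X, y, np.array([[0.015, 0.012, 0.003]])))
```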
NASA Astrophysics Data System (ADS)
Peng, Xiang; Zhang, Peng; Cai, Lilong
In this paper, we present a virtual-optical based information security system model with the aid of public-key-infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on virtual-optics imaging methodology (VOIM) can be used to encipher and decipher data while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key and vice versa. The whole information security model is run under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach offers additional features such as confidentiality, authentication, and integrity for data encryption in a networked environment.
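A minimal sketch of the hybrid architecture described above. The VOIM optical cipher is specific to the paper, so a standard symmetric cipher (Fernet/AES from the `cryptography` package) stands in for it; RSA-OAEP protects the session key, as in the model.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

session_key = Fernet.generate_key()                       # symmetric session key
ciphertext = Fernet(session_key).encrypt(b"secret data")  # data enciphered symmetrically (VOIM's role)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)       # session key enciphered asymmetrically

# Receiver side: unwrap the session key, then decipher the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"secret data"
```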
Prioritizing sewer rehabilitation projects using AHP-PROMETHEE II ranking method.
Kessili, Abdelhak; Benmamar, Saadia
2016-01-01
The aim of this paper is to develop a methodology for the prioritization of sewer rehabilitation projects for Algiers (Algeria) sewer networks to support the National Sanitation Office in its challenge to make decisions on prioritization of sewer rehabilitation projects. The methodology applies multiple-criteria decision making. The study includes 47 projects (collectors) and 12 criteria to evaluate them. These criteria represent the different issues considered in the prioritization of the projects, which are structural, hydraulic, environmental, financial, social and technical. The analytic hierarchy process (AHP) is used to determine weights of the criteria and the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE II) method is used to obtain the final ranking of the projects. The model was verified using the sewer data of Algiers. The results have shown that the method can be used for prioritizing sewer rehabilitation projects.
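A minimal sketch of the PROMETHEE II ranking step with externally supplied (AHP-derived) weights; the project scores, weights and linear preference function below are illustrative, not the study's data.

```python
import numpy as np

scores = np.array([[0.8, 0.3, 0.5],   # project A on 3 criteria (higher = better)
                   [0.4, 0.9, 0.6],   # project B
                   [0.6, 0.5, 0.9]])  # project C
weights = np.array([0.5, 0.3, 0.2])   # e.g. from AHP pairwise comparisons, sum to 1
p_threshold = 0.5                     # linear preference threshold

n = len(scores)
pi = np.zeros((n, n))                 # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        d = scores[a] - scores[b]
        pref = np.clip(d / p_threshold, 0, 1)  # linear preference function per criterion
        pi[a, b] = weights @ pref

phi_plus = pi.sum(axis=1) / (n - 1)   # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative outranking flow
print(np.argsort(phi_minus - phi_plus))  # final ranking by net flow, best first
```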
Hyperbolic reformulation of a 1D viscoelastic blood flow model and ADER finite volume schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montecinos, Gino I.; Müller, Lucas O.; Toro, Eleuterio F.
2014-06-01
The applicability of ADER finite volume methods to solve hyperbolic balance laws with stiff source terms in the context of well-balanced and non-conservative schemes is extended to solve a one-dimensional blood flow model for viscoelastic vessels, reformulated as a hyperbolic system, via a relaxation time. A criterion for selecting relaxation times is found and an empirical convergence rate assessment is carried out to support this result. The proposed methodology is validated by applying it to a network of viscoelastic vessels for which experimental and numerical results are available. The agreement between the results obtained in the present paper and those available in the literature is satisfactory. Key features of the present formulation and numerical methodologies, such as accuracy, efficiency and robustness, are fully discussed in the paper.
Collagen morphology and texture analysis: from statistics to classification
Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.
2013-01-01
In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerotic arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and musculoskeletal diseases affecting ligaments and cartilage. PMID:23846580
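A minimal sketch of the texture-feature step using scikit-image (function names per skimage >= 0.19; older versions spell them greycomatrix/greycoprops). The random image below stands in for an SHG collagen image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)

# First-order statistics (FOS) from the grey-level histogram.
fos = {"mean": img.mean(), "std": img.std()}

# Second-order statistics from the grey-level co-occurrence matrix (GLCM).
glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(fos, features)  # feature vector for a downstream multi-group classifier
```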
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocity algorithms and reduced-order input-output models for nonlinear systems by utilizing wavelet approximations.
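A minimal sketch of the wavelet-compression idea with PyWavelets: decompose a simulation field, zero the small coefficients, and reconstruct. The field and threshold below are illustrative.

```python
import numpy as np
import pywt

field = np.sin(np.linspace(0, 8 * np.pi, 1024)) \
        + 0.01 * np.random.default_rng(0).standard_normal(1024)

coeffs = pywt.wavedec(field, "db4", level=5)
compressed = [pywt.threshold(c, 0.05, mode="hard") for c in coeffs]  # drop small details
kept = sum(int(np.count_nonzero(c)) for c in compressed)             # coefficients to transmit
field_rec = pywt.waverec(compressed, "db4")

print(f"kept {kept}/{field.size} coefficients, "
      f"max error {np.abs(field - field_rec[:field.size]).max():.3f}")
```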
Global dynamic optimization approach to predict activation in metabolic pathways.
de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R
2014-01-06
During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
Assessment of Mixed-Layer Height Estimation from Single-wavelength Ceilometer Profiles.
Knepp, Travis N; Szykman, James J; Long, Russell; Duvall, Rachelle M; Krug, Jonathan; Beaver, Melinda; Cavender, Kevin; Kronmiller, Keith; Wheeler, Michael; Delgado, Ruben; Hoff, Raymond; Berkoff, Timothy; Olson, Erik; Clark, Richard; Wolfe, Daniel; Van Gilst, David; Neil, Doreen
2017-01-01
Differing boundary/mixed-layer height measurement methods were assessed in moderately-polluted and clean environments, with a focus on the Vaisala CL51 ceilometer. This intercomparison was performed as part of ongoing measurements at the Chemistry And Physics of the Atmospheric Boundary Layer Experiment (CAPABLE) site in Hampton, Virginia and during the 2014 Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) field campaign that took place in and around Denver, Colorado. We analyzed CL51 data that were collected via two different methods (BLView software, which applied correction factors, and simple terminal emulation logging) to determine the impact of data collection methodology. Further, we evaluated the STRucture of the ATmosphere (STRAT) algorithm as an open-source alternative to BLView (note that the current work presents an evaluation of the BLView and STRAT algorithms and does not intend to act as a validation of either). Filtering criteria were defined according to the change in mixed-layer height (MLH) distributions for each instrument and algorithm and were applied throughout the analysis to remove high-frequency fluctuations from the MLH retrievals. Of primary interest was determining how the different data-collection methodologies and algorithms compare to each other and to radiosonde-derived boundary-layer heights when deployed as part of a larger instrument network. We determined that data-collection methodology is not as important as the processing algorithm and that much of the algorithm differences might be driven by impacts of local meteorology and precipitation events that pose algorithm difficulties. The results of this study show that a common processing algorithm is necessary for LIght Detection And Ranging (LIDAR)-based MLH intercomparisons and ceilometer-network operation, and that sonde-derived boundary-layer heights are higher (10-15% at midday) than LIDAR-derived mixed-layer heights. We show that averaging the retrieved MLH to 1-hour resolution (an appropriate time scale for a priori data model initialization) significantly improved correlation between differing instruments and differing algorithms.
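A minimal sketch of one common ceilometer MLH retrieval: the mixed-layer top is taken where the vertical gradient of attenuated backscatter is most negative. The profile below is synthetic; BLView and STRAT use considerably more elaborate logic.

```python
import numpy as np

z = np.arange(10, 4000, 10.0)                    # height grid (m)
profile = 1.0 / (1 + np.exp((z - 1200) / 80.0))  # idealized backscatter, layer top near 1200 m
profile += 0.01 * np.random.default_rng(0).standard_normal(z.size)

smoothed = np.convolve(profile, np.ones(15) / 15, mode="same")  # suppress high-frequency noise
gradient = np.gradient(smoothed, z)
mlh = z[np.argmin(gradient)]                     # most negative gradient marks the layer top
print(f"retrieved MLH: {mlh:.0f} m")
```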
DOT National Transportation Integrated Search
1995-01-01
Prepared ca. 1995. This paper illustrates the use of the simulation-optimization technique of response surface methodology (RSM) in traffic signal optimization of urban networks. It also quantifies the gains of using the common random number (CRN) va...
Finding strong lenses in CFHTLS using convolutional neural networks
NASA Astrophysics Data System (ADS)
Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.
2017-10-01
We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.
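A minimal sketch of a lens-finder CNN in Keras; the architecture, postage-stamp size and single-output labelling are illustrative stand-ins for the ensemble of four networks used in the paper.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(44, 44, 3)),              # assumed postage-stamp image size
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(lens)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(simulated_stamps, labels, epochs=10)  # trained on simulated lenses and non-lenses
```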
Social networks of patients with psychosis: a systematic review.
Palumbo, Claudia; Volpe, Umberto; Matanov, Aleksandra; Priebe, Stefan; Giacco, Domenico
2015-10-12
Social networks are important for mental health outcomes as they can mobilise resources and help individuals to cope with social stressors. Individuals with psychosis may have specific difficulties in establishing and maintaining social relationships which impacts on their well-being and quality of life. There has been a growing interest in developing social network interventions for patients with psychotic disorders. A systematic literature review was conducted to investigate the size of social networks of patients with psychotic disorders, as well as their friendship networks. A systematic electronic search was carried out in MEDLINE, EMBASE and PsychINFO databases using a combination of search terms relating to 'social network', 'friendship' and 'psychotic disorder'. The search identified 23 relevant papers. Out of them, 20 reported patient social network size. Four papers reported the mean number of friends in addition to whole network size, while three further papers focused exclusively on the number of friends. Findings varied substantially across the studies, with a weighted mean size of 11.7 individuals for whole social networks and 3.4 individuals for friendship networks. On average, 43.1 % of the whole social network was composed of family members, while friends accounted for 26.5 %. Studies assessing whole social network size and friendship networks of people with psychosis are difficult to compare as different concepts and methods of assessment were applied. The extent of the overlap between different social roles assessed in the networks was not always clear. Greater conceptual and methodological clarity is needed in order to help the development of effective strategies to increase social resources of patients with psychosis.
Chemical combination effects predict connectivity in biological systems
Lehár, Joseph; Zimmermann, Grant R; Krueger, Andrew S; Molnar, Raymond A; Ledell, Jebediah T; Heilbut, Adrian M; Short, Glenn F; Giusti, Leanne C; Nolan, Garry P; Magid, Omar A; Lee, Margaret S; Borisy, Alexis A; Stockwell, Brent R; Keith, Curtis T
2007-01-01
Efforts to construct therapeutically useful models of biological systems require large and diverse sets of data on functional connections between their components. Here we show that cellular responses to combinations of chemicals reveal how their biological targets are connected. Simulations of pathways with pairs of inhibitors at varying doses predict distinct response surface shapes that are reproduced in a yeast experiment, with further support from a larger screen using human tumour cells. The response morphology yields detailed connectivity constraints between nearby targets, and synergy profiles across many combinations show relatedness between targets in the whole network. Constraints from chemical combinations complement genetic studies, because they probe different cellular components and can be applied to disease models that are not amenable to mutagenesis. Chemical probes also offer increased flexibility, as they can be continuously dosed, temporally controlled, and readily combined. After extending this initial study to cover a wider range of combination effects and pathway topologies, chemical combinations may be used to refine network models or to identify novel targets. This response surface methodology may even apply to non-biological systems where responses to targeted perturbations can be measured. PMID:17332758
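A minimal sketch of the response-surface idea: compare a measured combination effect against the Bliss-independence expectation built from the single-agent dose responses. Dose-response parameters and the "measured" surface are invented for illustration.

```python
import numpy as np

doses = np.logspace(-2, 1, 8)

def hill(d, ec50, n=1.0):
    """Fractional inhibition of a single agent (Hill curve)."""
    return d ** n / (ec50 ** n + d ** n)

fa = hill(doses, ec50=1.0)[:, None]   # inhibitor A alone (column)
fb = hill(doses, ec50=0.5)[None, :]   # inhibitor B alone (row)
bliss = fa + fb - fa * fb             # independent (multiplicative) expectation

measured = np.clip(bliss + 0.1, 0, 1) # pretend the pair is mildly synergistic
excess = measured - bliss             # >0: synergy; <0: antagonism; shape constrains connectivity
print(excess.round(2))
```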
NASA Astrophysics Data System (ADS)
González, D. L., II; Angus, M. P.; Tetteh, I. K.; Bello, G. A.; Padmanabhan, K.; Pendse, S. V.; Srinivas, S.; Yu, J.; Semazzi, F.; Kumar, V.; Samatova, N. F.
2014-04-01
Decades of hypothesis-driven and/or first-principles research have been applied towards the discovery and explanation of the mechanisms that drive climate phenomena, such as western African Sahel summer rainfall variability. Although connections between various climate factors have been theorized, not all of the key relationships are fully understood. We propose a data-driven approach to identify candidate players in this climate system, which can help explain underlying mechanisms and/or even suggest new relationships, to facilitate building a more comprehensive and predictive model of the modulatory relationships influencing a climate phenomenon of interest. We applied coupled heterogeneous association rule mining (CHARM), Lasso multivariate regression, and Dynamic Bayesian networks to find relationships within a complex system, and explored means with which to obtain a consensus result from the application of such varied methodologies. Using this fusion of approaches, we identified relationships among climate factors that modulate Sahel rainfall, including well-known associations from prior climate knowledge, as well as promising discoveries that invite further research by the climate science community.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
Dynamical Networks Characterization of Space Weather Events
NASA Astrophysics Data System (ADS)
Orr, L.; Chapman, S. C.; Dods, J.; Gjerloev, J. W.
2017-12-01
Space weather can cause disturbances to satellite systems, impacting navigation technology and telecommunications; it can cause power loss and aviation disruption. A central aspect of the earth's magnetospheric response to space weather events are large scale and rapid changes in ionospheric current patterns. Space weather is highly dynamic and there are still many controversies about how the current system evolves in time. The recent SuperMAG initiative, collates ground-based vector magnetic field time series from over 200 magnetometers with 1-minute temporal resolution. In principle this combined dataset is an ideal candidate for quantification using dynamical networks. Network properties and parameters allow us to characterize the time dynamics of the full spatiotemporal pattern of the ionospheric current system. However, applying network methodologies to physical data presents new challenges. We establish whether a given pair of magnetometers are connected in the network by calculating their canonical cross correlation. The magnetometers are connected if their cross correlation exceeds a threshold. In our physical time series this threshold needs to be both station specific, as it varies with (non-linear) individual station sensitivity and location, and able to vary with season, which affects ground conductivity. Additionally, the earth rotates and therefore the ground stations move significantly on the timescales of geomagnetic disturbances. The magnetometers are non-uniformly spatially distributed. We will present new methodology which addresses these problems and in particular achieves dynamic normalization of the physical time series in order to form the network. Correlated disturbances across the magnetometers capture transient currents. Once the dynamical network has been obtained [1][2] from the full magnetometer data set it can be used to directly identify detailed inferred transient ionospheric current patterns and track their dynamics. We will show our first results that use network properties such as cliques and clustering coefficients to map these highly dynamic changes in ionospheric current patterns. [1] Dods et al., J. Geophys. Res. 120, doi:10.1002/2015JA02 (2015). [2] Dods et al., J. Geophys. Res. 122, doi:10.1002/2016JA02 (2017).
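A minimal sketch of forming such a network: stations become connected when their windowed cross-correlation exceeds a station-pair-specific threshold. The data and thresholds below are synthetic; the paper's thresholds would vary with station sensitivity and season.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_stations, n_samples = 6, 240
common = rng.standard_normal(n_samples)  # shared geomagnetic disturbance signal
data = 0.7 * common + 0.7 * rng.standard_normal((n_stations, n_samples))

# Station-specific thresholds (in practice derived from each station's quiet-time statistics).
thresholds = rng.uniform(0.4, 0.6, size=(n_stations, n_stations))

corr = np.corrcoef(data)
G = nx.Graph()
G.add_nodes_from(range(n_stations))
for i in range(n_stations):
    for j in range(i + 1, n_stations):
        if abs(corr[i, j]) > max(thresholds[i, j], thresholds[j, i]):
            G.add_edge(i, j)

print(nx.average_clustering(G), sorted(map(len, nx.find_cliques(G))))
```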
Evaluation of architectures for an ASP MPEG-4 decoder using a system-level design methodology
NASA Astrophysics Data System (ADS)
Garcia, Luz; Reyes, Victor; Barreto, Dacil; Marrero, Gustavo; Bautista, Tomas; Nunez, Antonio
2005-06-01
Trends in multimedia consumer electronics, digital video and audio, aim to reach users through low-cost mobile devices connected to data broadcasting networks with limited bandwidth. An emergent broadcasting network is the digital audio broadcasting network (DAB) which provides CD quality audio transmission together with robustness and efficiency techniques to allow good quality reception in motion conditions. This paper focuses on the system-level evaluation of different architectural options to allow low bandwidth digital video reception over DAB, based on video compression techniques. Profiling and design space exploration techniques are applied over the ASP MPEG-4 decoder in order to find out the best HW/SW partition given the application and platform constraints. An innovative SystemC-based system-level design tool, called CASSE, is being used for modelling, exploration and evaluation of different ASP MPEG-4 decoder HW/SW partitions. System-level trade-offs and quantitative data derived from this analysis are also presented in this work.
Grand canonical validation of the bipartite international trade network.
Straka, Mika J; Caldarelli, Guido; Saracco, Fabio
2017-08-01
Devising strategies for economic development in a globally competitive landscape requires a solid and unbiased understanding of countries' technological advancements and similarities among export products. Both can be addressed through the bipartite representation of the International Trade Network. In this paper, we apply the recently proposed grand canonical projection algorithm to uncover country and product communities. Contrary to past endeavors, our methodology, based on information theory, creates monopartite projections in an unbiased and analytically tractable way. Single links between countries or products represent statistically significant signals, which are not accounted for by null models such as the bipartite configuration model. We find stable country communities reflecting the socioeconomic distinction in developed, newly industrialized, and developing countries. Furthermore, we observe product clusters based on the aforementioned country groups. Our analysis reveals the existence of a complicated structure in the bipartite International Trade Network: apart from the diversification of export baskets from the most basic to the most exclusive products, we observe a statistically significant signal of an export specialization mechanism towards more sophisticated products.
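A minimal sketch of the validated-projection idea. The paper's null model is the bipartite configuration model (BiCM); as a simplified stand-in, this uses a hypergeometric test on the number of products two countries co-export, keeping only statistically significant country-country links.

```python
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(0)
M = rng.random((10, 50)) < 0.3   # binary country x product export matrix (synthetic)
n_products = M.shape[1]

alpha = 0.01
edges = []
for i in range(M.shape[0]):
    for j in range(i + 1, M.shape[0]):
        co = int(np.logical_and(M[i], M[j]).sum())
        ki, kj = int(M[i].sum()), int(M[j].sum())
        # P(co-occurrences >= observed) under random overlap of the two export baskets
        p = hypergeom.sf(co - 1, n_products, ki, kj)
        if p < alpha:
            edges.append((i, j))  # statistically significant country-country link
print(edges)
```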
CellNet: network biology applied to stem cell engineering.
Cahan, Patrick; Li, Hu; Morris, Samantha A; Lummertz da Rocha, Edroaldo; Daley, George Q; Collins, James J
2014-08-14
Somatic cell reprogramming, directed differentiation of pluripotent stem cells, and direct conversions between differentiated cell lineages represent powerful approaches to engineer cells for research and regenerative medicine. We have developed CellNet, a network biology platform that more accurately assesses the fidelity of cellular engineering than existing methodologies and generates hypotheses for improving cell derivations. Analyzing expression data from 56 published reports, we found that cells derived via directed differentiation more closely resemble their in vivo counterparts than products of direct conversion, as reflected by the establishment of target cell-type gene regulatory networks (GRNs). Furthermore, we discovered that directly converted cells fail to adequately silence expression programs of the starting population and that the establishment of unintended GRNs is common to virtually every cellular engineering paradigm. CellNet provides a platform for quantifying how closely engineered cell populations resemble their target cell type and a rational strategy to guide enhanced cellular engineering. Copyright © 2014 Elsevier Inc. All rights reserved.
Model-Based Anomaly Detection for a Transparent Optical Transmission System
NASA Astrophysics Data System (ADS)
Bengtsson, Thomas; Salamon, Todd; Ho, Tin Kam; White, Christopher A.
In this chapter, we present an approach for anomaly detection at the physical layer of networks where detailed knowledge about the devices and their operations is available. The approach combines physics-based process models with observational data models to characterize the uncertainties and derive the alarm decision rules. We formulate and apply three different methods based on this approach for a well-defined problem in optical network monitoring that features many typical challenges for this methodology. Specifically, we address the problem of monitoring optically transparent transmission systems that use dynamically controlled Raman amplification systems. We use models of amplifier physics together with statistical estimation to derive alarm decision rules and use these rules to automatically discriminate between measurement errors, anomalous losses, and pump failures. Our approach has led to an efficient tool for systematically detecting anomalies in the system behavior of a deployed network, where pro-active measures to address such anomalies are key to preventing unnecessary disturbances to the system's continuous operation.
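A minimal sketch of the model-based detection idea: a physics model predicts the expected channel power, and the alarm rule tests the residual against the modeled measurement noise. The numbers below are illustrative, not amplifier physics.

```python
import numpy as np

def expected_power(pump_w):
    """Stand-in physics model: predicted output power vs. pump setting."""
    return 2.0 * pump_w + 0.5

sigma = 0.05                          # modeled measurement noise (uncertainty model)
pump = np.full(100, 1.2)
measured = expected_power(pump) + np.random.default_rng(0).normal(0, sigma, 100)
measured[60:] -= 0.4                  # inject an anomalous loss

residual = measured - expected_power(pump)
alarm = np.abs(residual) > 4 * sigma  # alarm decision rule derived from the noise model
print("first alarm at sample", int(np.argmax(alarm)))
```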
Cluster Analysis of Weighted Bipartite Networks: A New Copula-Based Approach
Chessa, Alessandro; Crimaldi, Irene; Riccaboni, Massimo; Trapin, Luca
2014-01-01
In this work we are interested in identifying clusters of “positionally equivalent” actors, i.e. actors who play a similar role in a system. In particular, we analyze weighted bipartite networks that describe the relationships between actors on one side and features or traits on the other, together with the intensity level to which actors show their features. We develop a methodological approach that takes into account the underlying multivariate dependence among groups of actors. The idea is that positions in a network could be defined on the basis of the similar intensity levels that the actors exhibit in expressing some features, instead of just considering the relationships that actors hold with each other. Moreover, we propose a new clustering procedure that exploits the potential of copula functions, a mathematical instrument for modelling the stochastic dependence structure. Our clustering algorithm can be applied both to binary and real-valued matrices. We validate it with simulations and applications to real-world data. PMID:25303095
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to its integrating nature of the system reduction.
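A minimal sketch of the community step for a point-vortex system: vortices become nodes, edge weights follow an induced-velocity interaction strength, and modularity-based community detection groups elements with similar dynamical behavior. Positions, circulations and the weight formula below are illustrative.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
pos = np.vstack([rng.normal(0, 0.3, (10, 2)),
                 rng.normal(3, 0.3, (10, 2))])      # two spatial clusters of vortices
gamma = rng.uniform(0.5, 1.5, len(pos))             # circulations

G = nx.Graph()
for i in range(len(pos)):
    for j in range(i + 1, len(pos)):
        r = np.linalg.norm(pos[i] - pos[j])
        w = (abs(gamma[i]) + abs(gamma[j])) / (2 * np.pi * r)  # mutual induced-velocity strength
        G.add_edge(i, j, weight=w)

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])  # community centroids would seed the reduced-order model
```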
Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G
2012-09-01
This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide (NO), nitrogen dioxide (NO2), and O3 (recorded on the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
Network Analysis in Comparative Social Sciences
ERIC Educational Resources Information Center
Vera, Eugenia Roldan; Schupp, Thomas
2006-01-01
This essay describes the pertinence of Social Network Analysis (SNA) for the social sciences in general, and discusses its methodological and conceptual implications for comparative research in particular. The authors first present a basic summary of the theoretical and methodological assumptions of SNA, followed by a succinct overview of its…
Toddi A. Steelman; Branda Nowell; Deena Bayoumi; Sarah McCaffrey
2014-01-01
We leverage economic theory, network theory, and social network analytical techniques to bring greater conceptual and methodological rigor to understand how information is exchanged during disasters. We ask, "How can information relationships be evaluated more systematically during a disaster response?" "Infocentric analysis", a term and...
NASA Astrophysics Data System (ADS)
Cisty, Milan; Bajtek, Zbynek; Celar, Lubomir; Soldanova, Veronika
2017-04-01
Finding effective ways to build irrigation systems which meet irrigation demands and also achieve positive environmental and economic outcomes requires, among other activities, the development of new modelling tools. Due to the high costs associated with the necessary material and the installation of an irrigation water distribution system (WDS), it is essential to optimize the design of the WDS while satisfying the hydraulic requirements of the network (e.g., the required pressure on irrigation machines). In this work an optimal design of a water distribution network is proposed for large irrigation networks. A multi-step approach is proposed in which the optimization is accomplished in two phases. In the first phase suboptimal solutions are searched for; in the second phase, the optimization problem is solved with a reduced search space based on these solutions, which significantly supports the finding of an optimal solution. The first phase of the optimization consists of several runs of the NSGA-II, varying its parameters for every run, i.e., changing the population size, the number of generations, and the crossover and mutation parameters. This is done with the aim of obtaining different sub-optimal solutions which have a relatively low cost. These sub-optimal solutions are subsequently used in the second phase of the proposed methodology, in which the final optimization run is built on the sub-optimal solutions from the previous phase. The purpose of the second phase is to improve the results of the first phase by searching through the reduced search space. The reduction is based on the minimum and maximum diameters for each pipe across all the networks from the first stage; in this phase, NSGA-II does not consider diameters outside of this range. After the second-phase NSGA-II computations, the best result published so far for the Balerma benchmark network, which was used to test the methodology, was achieved in the presented work. The knowledge gained from these computational experiments lies not in offering a new advanced heuristic or hybrid optimization method for water distribution networks, but in the fact that very good results can be obtained with simple, known methods if they are used in a methodologically sound way. ACKNOWLEDGEMENT This work was supported by the Slovak Research and Development Agency under Contract No. APVV-15-0489 and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences, Grant No. 1/0665/15.
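A minimal sketch of the two-phase reduction step: the per-pipe diameter bounds for the final run are taken as the minimum and maximum diameters observed across the sub-optimal phase-one networks. The diameter values below are illustrative.

```python
import numpy as np

# Rows: sub-optimal solutions from phase-one NSGA-II runs; columns: pipes.
phase1_solutions = np.array([[110, 90, 200, 63],
                             [125, 90, 180, 75],
                             [110, 75, 200, 63]])

lower = phase1_solutions.min(axis=0)  # per-pipe lower bound for phase two
upper = phase1_solutions.max(axis=0)  # per-pipe upper bound for phase two
print(list(zip(lower, upper)))        # reduced search space handed to the final NSGA-II run
```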
Decentralized Energy Management System for Networked Microgrids in Grid-connected and Islanded Modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
This paper proposes a decentralized energy management system (EMS) for the coordinated operation of networked Microgrids (MGs) in a distribution system. In the grid-connected mode, the distribution network operator (DNO) and each MG are considered as distinct entities with individual objectives to minimize their own operation costs. It is assumed that both dispatchable and renewable energy source (RES)-based distributed generators (DGs) exist in the distribution network and the networked MGs. In order to coordinate the operation of all entities, we apply a decentralized bi-level algorithm to solve the problem, with the first level conducting negotiations among all entities and the second level updating the non-converging penalties. In the islanded mode, the objective of each MG is to maintain a reliable power supply to its customers. In order to take into account the uncertainties of DG outputs and load consumption, we formulate the problems as two-stage stochastic programs. The first stage is to determine base generation setpoints based on the forecasts and the second stage is to adjust the generation outputs based on the realized scenarios. Case studies of a distribution system with networked MGs demonstrate the effectiveness of the proposed methodology in both grid-connected and islanded modes.
Challenges to inferring causality from viral information dispersion in dynamic social networks
NASA Astrophysics Data System (ADS)
Ternovski, John
2014-06-01
Understanding the mechanism behind large-scale information dispersion through complex networks has important implications for a variety of industries ranging from cyber-security to public health. With the unprecedented availability of public data from online social networks (OSNs) and the low cost nature of most OSN outreach, randomized controlled experiments, the "gold standard" of causal inference methodologies, have been used with increasing regularity to study viral information dispersion. And while these studies have dramatically furthered our understanding of how information disseminates through social networks by isolating causal mechanisms, there are still major methodological concerns that need to be addressed in future research. This paper delineates why modern OSNs are markedly different from traditional sociological social networks and why these differences present unique challenges to experimentalists and data scientists. The dynamic nature of OSNs is particularly troublesome for researchers implementing experimental designs, so this paper identifies major sources of bias arising from network mutability and suggests strategies to circumvent and adjust for these biases. This paper also discusses the practical considerations of data quality and collection, which may adversely impact the efficiency of the estimator. The major experimental methodologies used in the current literature on virality are assessed at length, and their strengths and limits identified. Other, as-yet-unsolved threats to the efficiency and unbiasedness of causal estimators, such as missing data, are also discussed. This paper integrates methodologies and learnings from a variety of fields under an experimental and data science framework in order to systematically consolidate and identify current methodological limitations of randomized controlled experiments conducted in OSNs.
Simulation of Attacks for Security in Wireless Sensor Network
Diaz, Alvaro; Sanchez, Pablo
2016-01-01
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710
Models and simulation of 3D neuronal dendritic trees using Bayesian networks.
López-Cruz, Pedro L; Bielza, Concha; Larrañaga, Pedro; Benavides-Piccione, Ruth; DeFelipe, Javier
2011-12-01
Neuron morphology is crucial for neuronal connectivity and brain information processing. Computational models are important tools for studying dendritic morphology and its role in brain function. We applied a class of probabilistic graphical models called Bayesian networks to generate virtual dendrites from layer III pyramidal neurons from three different regions of the neocortex of the mouse. A set of 41 morphological variables was measured from the 3D reconstructions of real dendrites, and their probability distributions were used in a machine learning algorithm to induce the model from the data. A simulation algorithm is also proposed to obtain new dendrites by sampling values from Bayesian networks. The main advantage of this approach is that it takes into account and automatically locates the relationships between variables in the data instead of using predefined dependencies. Therefore, the methodology can be applied to any neuronal class while at the same time exploiting class-specific properties. Also, a Bayesian network was defined for each part of the dendrite, allowing the relationships to change in the different sections and to model heterogeneous developmental factors or spatial influences. Several univariate statistical tests and a novel multivariate test based on Kullback-Leibler divergence estimation confirmed that virtual dendrites were similar to real ones. The analyses of the models showed relationships that conform to current neuroanatomical knowledge and support model correctness. At the same time, studying the relationships in the models can help to identify new interactions between variables related to dendritic morphology.
A Complex Systems Approach to Causal Discovery in Psychiatry.
Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin
2016-01-01
Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach, the Complex Systems-Causal Network (CS-CN) method, designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.
Theofilatos, Konstantinos; Pavlopoulou, Niki; Papasavvas, Christoforos; Likothanassis, Spiros; Dimitrakopoulos, Christos; Georgopoulos, Efstratios; Moschopoulos, Charalampos; Mavroudi, Seferina
2015-03-01
Proteins are considered to be the most important individual components of biological systems and they combine to form physical protein complexes which are responsible for certain molecular functions. Despite the large availability of protein-protein interaction (PPI) information, not much information is available about protein complexes. Experimental methods are limited in terms of time, efficiency, cost and performance constraints. Existing computational methods have provided encouraging preliminary results, but they face certain disadvantages as they require parameter tuning, some of them cannot handle weighted PPI data and others do not allow a protein to participate in more than one protein complex. In the present paper, we propose a new fully unsupervised methodology for predicting protein complexes from weighted PPI graphs. The proposed methodology is called evolutionary enhanced Markov clustering (EE-MC) and it is a hybrid combination of an adaptive evolutionary algorithm and a state-of-the-art clustering algorithm named enhanced Markov clustering. EE-MC was compared with state-of-the-art methodologies when applied to datasets from the human and the yeast Saccharomyces cerevisiae organisms. Using publicly available datasets, EE-MC outperformed existing methodologies (in some datasets the separation metric was increased by 10-20%). Moreover, when applied to new human datasets its performance was encouraging in the prediction of protein complexes which consist of proteins with high functional similarity. Specifically, 5737 protein complexes were predicted and 72.58% of them are enriched for at least one gene ontology (GO) function term. EE-MC is by design able to overcome intrinsic limitations of existing methodologies such as their inability to handle weighted PPI networks, their constraint to assign every protein to exactly one cluster and the difficulties they face concerning parameter tuning. This was experimentally validated and, moreover, new potentially true human protein complexes were suggested as candidates for further validation using experimental techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
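A minimal sketch of plain Markov clustering (MCL) on a weighted PPI adjacency matrix; EE-MC additionally tunes the clustering parameters with an adaptive evolutionary algorithm, which is omitted here.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    m = adj + np.eye(len(adj))          # add self-loops
    m = m / m.sum(axis=0)               # column-stochastic transition matrix
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)  # expansion: flow spreads through the graph
        m = m ** inflation                        # inflation: strong flows are strengthened
        m = m / m.sum(axis=0)
    # attractor rows with nonzero mass define the clusters (deduplicated)
    clusters = {tuple(np.nonzero(row > 1e-6)[0]) for row in m if row.max() > 1e-6}
    return sorted(clusters)

adj = np.array([[0, 1, 1, 0.0, 0],
                [1, 0, 1, 0.0, 0],
                [1, 1, 0, 0.1, 0],     # weak link bridging the two modules
                [0, 0, 0.1, 0, 1],
                [0, 0, 0.0, 1, 0]], dtype=float)
print(mcl(adj))  # expected: two clusters, (0, 1, 2) and (3, 4)
```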
A Vulnerability Index and Analysis for the Road Network of Rural Chile
NASA Astrophysics Data System (ADS)
Braun, Andreas; Stötzer, Johanna; Kubisch, Susanne; Dittrich, Andre; Keller, Sina
2017-04-01
Natural hazards impose considerable threats to the physical and socio-economic wellbeing of people, a fact which is well understood and investigated for many regions. However, not only people are vulnerable. During the last decades, a considerable amount of literature has focused on the particular vulnerability of critical infrastructure, for example road networks. For critical infrastructure, far less reliable information exists for many regions worldwide, particularly regions outside the so-called developed world. Critical infrastructure is destroyed in many disasters, causing cascade and follow-up effects, for instance impediments during evacuation, rescue and the resilience phase. These circumstances, which are general enough to apply to most regions, are aggravated in regions characterized by high disparities between the urban and the rural sphere. Peripheral rural areas are especially prone to becoming isolated due to failures of the few roads which connect them to larger urban centres (where, frequently, disaster and emergency actors are situated). The rural area of Central Chile is an appropriate example of these circumstances. It is prone to destruction by several geo-hazards and, furthermore, characterized by the aforementioned disparities. Past disasters, e.g. the 1991 Cerro Hudson eruption and the 2010 Maule earthquake, have led to follow-up effects (farmers being unable to evacuate their animals due to road failures in the first case, and difficulties evacuating people from places such as Caleta Tumbes or Dichato, which are connected by just a single road, in the second). This contribution develops a methodology to investigate the critical infrastructure of such places. It develops a remoteness index for Chile, which identifies remote, peripheral rural areas prone to becoming isolated due to road network failures during disasters. The approach is graph-based. It offers particular advantages for regions like rural Chile since 1. it does not require traffic flow data, which do not exist; 2. it identifies peripheral areas particularly well; 3. it identifies both nodes (places) prone to isolation and edges (roads) critical for the connectivity of rural areas; and 4. being based on a mathematical structure, it implies several possible planning solutions to reduce the vulnerability of the critical infrastructure and of the people dependent on it. The methodology is presented and elaborated theoretically, and afterwards demonstrated on an actual dataset from central Chile, showing how it can be applied to derive planning solutions for peripheral rural areas.
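A minimal sketch of the graph-based idea: find roads whose single failure disconnects settlements (bridge edges), rank roads by betweenness, and compute a hop-based remoteness index. The toy graph below stands in for the rural Chilean road network.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("city", "townA"), ("city", "townB"), ("townA", "townB"),
                  ("townB", "villageC"),       # single road to a peripheral village
                  ("villageC", "caleta")])     # chain: even more remote

bridges = list(nx.bridges(G))                  # roads whose failure isolates settlements
betweenness = nx.edge_betweenness_centrality(G)
remoteness = nx.shortest_path_length(G, source="city")  # hop-based remoteness per place

print("critical roads:", bridges)
print("most loaded road:", max(betweenness, key=betweenness.get))
print("remoteness:", remoteness)
```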
Connectivity Measures in EEG Microstructural Sleep Elements.
Sakellariou, Dimitris; Koupparis, Andreas M; Kokkinos, Vasileios; Koutroumanidis, Michalis; Kostopoulos, George K
2016-01-01
During Non-Rapid Eye Movement sleep (NREM) the brain is relatively disconnected from the environment, while connectedness between brain areas is also decreased. Evidence indicates that these dynamic connectivity changes are delivered by microstructural elements of sleep: short periods of environmental stimuli evaluation followed by sleep-promoting procedures. The connectivity patterns of the latter, among other aspects of sleep microstructure, are still to be fully elucidated. We suggest here a methodology for the assessment and investigation of the connectivity patterns of EEG microstructural elements, such as sleep spindles. The methodology combines techniques at the preprocessing, estimation, error-assessment and result-visualization levels in order to allow the detailed examination of the connectivity aspects (levels and directionality of information flow) over frequency and time with notable resolution, while dealing with volume conduction and EEG reference assessment. The high temporal and frequency resolution of the methodology will allow the association between the microelements and the dynamically forming networks that characterize them, and consequently possibly reveal aspects of the EEG microstructure. The proposed methodology is initially tested on artificially generated signals for proof of concept and subsequently applied to real EEG recordings via a custom-built MATLAB-based tool developed for such studies. Preliminary results from 843 fast sleep spindles recorded in whole-night sleep of 5 healthy volunteers indicate a prevailing pattern of interactions between centroparietal and frontal regions. We hereby present a first, to our knowledge, attempt to estimate the scalp EEG connectivity that characterizes fast sleep spindles, via the "EEG-element connectivity" methodology we propose. Its application, via a computational tool we developed, suggests that it is able to investigate the connectivity patterns related to the occurrence of EEG microstructural elements. Network characterization of specified physiological or pathological EEG microstructural elements can potentially be of great importance in the understanding, identification, and prediction of health and disease.
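A minimal sketch of one connectivity measure usable on detected spindle epochs: magnitude-squared coherence between two EEG channels, inspected in the spindle (sigma, roughly 12-16 Hz) band. The signals below are synthetic; the paper's tool additionally estimates directionality and assesses errors.

```python
import numpy as np
from scipy.signal import coherence

fs = 256
t = np.arange(0, 2, 1 / fs)                              # a 2-s epoch around a fast spindle
spindle = np.sin(2 * np.pi * 14 * t)
rng = np.random.default_rng(0)
cz = spindle + 0.5 * rng.standard_normal(t.size)         # centroparietal channel
fz = 0.8 * spindle + 0.5 * rng.standard_normal(t.size)   # frontal channel

f, cxy = coherence(cz, fz, fs=fs, nperseg=128)
sigma_band = (f >= 12) & (f <= 16)
print(f"sigma-band coherence: {cxy[sigma_band].mean():.2f}")
```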
NASA Astrophysics Data System (ADS)
Riasi, S.; Huang, G.; Montemagno, C.; Yeghiazarian, L.
2013-12-01
Micro-scale modeling of multiphase flow in porous media is critical to characterize porous materials. Several modeling techniques have been implemented to date, but none can be used as a general strategy for all porous media applications due to challenges presented by non-smooth high-curvature solid surfaces, and by a wide range of pore sizes and porosities. Finite approaches like the finite volume method require a high quality, problem-dependent mesh, while particle-based approaches like the lattice Boltzmann require too many particles to achieve a stable meaningful solution. Both come at a large computational cost. Other methods such as pore network modeling (PNM) have been developed to accelerate the solution process by simplifying the solution domain, but so far a unique and straightforward methodology to implement PNM is lacking. We have developed a general, stable and fast methodology to model multi-phase fluid flow in porous materials, irrespective of their porosity and solid phase topology. We have applied this methodology to highly porous fibrous materials in which void spaces are not distinctly separated, and where simplifying the geometry into a network of pore bodies and throats, as in PNM, does not result in a topology-consistent network. To this end, we have reduced the complexity of the 3-D void space geometry by working with its medial surface. We have used a non-iterative fast medial surface finder algorithm to determine a voxel-wide medial surface of the void space, and then solved the quasi-static drainage and imbibition on the resulting domain. The medial surface accurately represents the topology of the porous structure including corners, irregular cross sections, etc. This methodology is capable of capturing corner menisci and the snap-off mechanism numerically. It also allows for calculation of pore size distribution, permeability and capillary pressure-saturation-specific interfacial area surface of the porous structure. To show the capability of this method to numerically estimate the capillary pressure in irregular cross sections, we compared our results with analytical solutions available for capillary tubes with non-circular cross sections. We also validated this approach by implementing it on well-known benchmark problems such as a bundle of cylinders and packed spheres.
NASA Astrophysics Data System (ADS)
Barton, Alan J.; Haqqani, Arsalan S.
2011-11-01
Three public biological network data sets (KEGG, GeneRIF and Reactome) are collected and described. Two problems are investigated (inter- and intra-cellular interactions) via augmentation of the collected networks with the problem-specific data. Results include an estimate of the importance of proteins for the interaction of inflammatory cells with the blood-brain barrier via the computation of Betweenness Centrality. Subsequently, the interactions may be validated from a number of differing perspectives, including comparison with (i) existing biological results, (ii) the literature, and (iii) new hypothesis-driven biological experiments. Novel therapeutic and diagnostic targets for inhibiting inflammation at the blood-brain barrier in a number of brain diseases, including Alzheimer's disease, stroke and multiple sclerosis, are possible. In addition, this methodology may also be applicable to investigating the breast cancer tumour microenvironment.
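The ranking step described above reduces, in its simplest form, to computing Betweenness Centrality on the augmented network. A minimal sketch with networkx follows; the protein names and edges are invented for illustration.

```python
# Hedged illustration of the ranking step: betweenness centrality on a
# small protein-interaction toy graph (proteins and edges are made up).
import networkx as nx

g = nx.Graph()
g.add_edges_from([("ICAM1", "ITGAL"), ("ITGAL", "ITGB2"),
                  ("ICAM1", "EZR"), ("EZR", "ACTB"), ("ITGB2", "ACTB")])

bc = nx.betweenness_centrality(g, normalized=True)
for protein, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{protein}: {score:.2f}")   # high scores flag candidate mediators
```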
NASA Astrophysics Data System (ADS)
Kim, Jungja; Ceong, Heetaek; Won, Yonggwan
In market-basket analysis, weighted association rule (WAR) discovery can mine rules that include more beneficial information by reflecting item importance for special products. In a point-of-sale database, each transaction is composed of items with similar properties, and item weights are pre-defined and fixed by a factor such as profit. However, when items are divided into more than one group and item importance must be measured independently for each group, traditional weighted association rule discovery cannot be used. To solve this problem, we propose a new weighted association rule mining methodology. The items are first divided into subgroups according to their properties, and the item importance, i.e. the item weight, is defined or calculated only with the items included in the subgroup. Then, the transaction weight is measured by appropriately summing the item weights from each subgroup, and the weighted support is computed as the weight of the transactions containing the candidate items relative to the total weight of all transactions. As an example, our proposed methodology is applied to assess the vulnerability to threats of computer systems that provide networked services. Our algorithm provides both quantitative risk-level values and qualitative risk rules for the security assessment of networked computer systems using WAR discovery. It can also be widely used for new applications with many data sets in which the data items are distinctly separated.
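A minimal sketch of the subgroup-weighted support described above: item weights are defined per subgroup, the transaction weight sums them, and the weighted support is the weight of candidate-containing transactions over the total weight. All items and weight values are illustrative, loosely echoing the networked-services example.

```python
# Minimal sketch of subgroup-weighted support. Items and weights are
# invented; weights would come from per-subgroup risk assessments.
weights = {"ftp": 0.9, "ssh": 0.4,        # subgroup: network services
           "os_old": 0.7, "os_new": 0.2}  # subgroup: operating systems

transactions = [{"ftp", "os_old"}, {"ssh", "os_new"}, {"ftp", "os_new"}]

def t_weight(t):
    return sum(weights[i] for i in t)

def weighted_support(candidate):
    total = sum(t_weight(t) for t in transactions)
    hit = sum(t_weight(t) for t in transactions if candidate <= t)
    return hit / total

print(weighted_support({"ftp"}))           # risk-weighted prevalence of ftp
print(weighted_support({"ftp", "os_old"})) # joint rule body
```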
Automatic Road Sign Inventory Using Mobile Mapping Systems
NASA Astrophysics Data System (ADS)
Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P.
2016-06-01
The periodic inspection of certain infrastructure features plays a key role in road network safety and preservation, and in developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information of the road network. Furthermore, time-stamped RGB imagery that is synchronized with the MMS trajectory is also available. In this paper a methodology for the automatic detection and classification of road signs from point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometrical and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient for this task, RGB imagery is used: the 3D points are projected onto the corresponding images and the RGB data within the bounding box defined by the projected points are analysed. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95% and F-scores greater than 90%. In this way, inventory data are obtained in a fast, reliable manner that can be applied to improve the maintenance planning of the road network or to feed a Spatial Information System (SIS), so that road sign information is available for use in a Smart City context.
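The projection step can be illustrated with a plain pinhole camera model: map the detected 3D sign points into the synchronized image, then crop the bounding box of the projected points for RGB classification. The intrinsics and pose below are placeholders, not the LYNX calibration.

```python
# Sketch of the projection step: pinhole mapping of 3D sign points into a
# synchronized image, then the bounding box of the projections. Camera
# intrinsics/extrinsics are placeholders.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],     # focal lengths and principal point
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])   # camera pose at this timestamp

pts = np.array([[2.0, -1.0, 10.0], [2.3, -1.0, 10.0], [2.3, -0.6, 10.0]])
uvw = (K @ (R @ pts.T + t[:, None])).T
uv = uvw[:, :2] / uvw[:, 2:3]                 # pixel coordinates

u_min, v_min = uv.min(axis=0)
u_max, v_max = uv.max(axis=0)
print(f"RGB crop for classification: ({u_min:.0f},{v_min:.0f})-({u_max:.0f},{v_max:.0f})")
```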
Reverse Engineering Validation using a Benchmark Synthetic Gene Circuit in Human Cells
Kang, Taek; White, Jacob T.; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas
2013-01-01
Multi-component biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network. PMID:23654266
Western and Eastern Views on Social Networks
ERIC Educational Resources Information Center
Ordonez de Pablos, Patricia
2005-01-01
Purpose: The aim of this paper is to examine social networks from a Western and Eastern view. Design/methodology/approach: The paper uses case study methodology to gather evidence of how world pioneering firms from Asia and Europe measure and report their social connections from a Western perspective. Findings: It examined the basic indicators…
A methodology aimed at fostering and sustaining the development processes of an IE-based industry
NASA Astrophysics Data System (ADS)
Corallo, Angelo; Errico, Fabrizio; de Maggio, Marco; Giangreco, Enza
In the current competitive scenario, where business relationships are fundamental in building successful business models and inter/intra organizational business processes are progressively digitalized, an end-to-end methodology is required that is capable of guiding business networks through the Internetworked Enterprise (IE) paradigm: a new and innovative organizational model able to leverage Internet technologies to perform real-time coordination of intra and inter-firm activities, to create value by offering innovative and personalized products/services and reduce transaction costs. This chapter presents the TEKNE project Methodology of change that guides business networks, by means of a modular and flexible approach, towards the IE techno-organizational paradigm, taking into account the competitive environment of the network and how this environment influences its strategic, organizational and technological levels. Contingency, the business model, enterprise architecture and performance metrics are the key concepts that form the cornerstone of this methodological framework.
Lee, Insuk; Li, Zhihua; Marcotte, Edward M.
2007-01-01
Background Probabilistic functional gene networks are powerful theoretical frameworks for integrating heterogeneous functional genomics and proteomics data into objective models of cellular systems. Such networks provide syntheses of millions of discrete experimental observations, spanning DNA microarray experiments, physical protein interactions, genetic interactions, and comparative genomics; the resulting networks can then be easily applied to generate testable hypotheses regarding specific gene functions and associations. Methodology/Principal Findings We report a significantly improved version (v. 2) of a probabilistic functional gene network [1] of the baker's yeast, Saccharomyces cerevisiae. We describe our optimization methods and illustrate their effects in three major areas: the reduction of functional bias in network training reference sets, the application of a probabilistic model for calculating confidences in pair-wise protein physical or genetic interactions, and the introduction of simple thresholds that eliminate many false positive mRNA co-expression relationships. Using the network, we predict and experimentally verify the function of the yeast RNA binding protein Puf6 in 60S ribosomal subunit biogenesis. Conclusions/Significance YeastNet v. 2, constructed using these optimizations together with additional data, shows significant reduction in bias and improvements in precision and recall, in total covering 102,803 linkages among 5,483 yeast proteins (95% of the validated proteome). YeastNet is available from http://www.yeastnet.org. PMID:17912365
A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Hezarkhani, Ardeshir
2012-05-01
Grade estimation is an important and money/time-consuming stage in a mine project, and is considered a challenge for geologists and mining engineers due to the structural complexities of mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. For example, one major problem in FL is the difficulty of constructing the membership functions (MFs). Other problems, such as architecture selection and local minima, also arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, based on ANN and FL, is called "Coactive Neuro-Fuzzy Inference System" (CANFIS) and combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA) - a well-known technique for solving complex optimization problems - is also employed to optimize the network parameters, including the learning rate, the momentum of the network and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with the new method (CANFIS-GA) is also carried out through a case study of the Sungun copper deposit, located in East Azerbaijan, Iran. The results show that CANFIS-GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation, and it is therefore suggested for grade estimation in similar problems.
Dos Santos Vasconcelos, Crhisllane Rafaele; de Lima Campos, Túlio; Rezende, Antonio Mauro
2018-03-06
Systematic analysis of a parasite interactome is a key approach to understanding different biological processes. It makes it possible to elucidate disease mechanisms, to predict protein functions and to select promising targets for drug development. Currently, several approaches for protein interaction prediction in non-model species incorporate only small fractions of the entire proteomes and their interactions. Based on this perspective, this study presents an integration of computational methodologies, protein network predictions and comparative analyses of the protozoan species Leishmania braziliensis and Leishmania infantum. These parasites cause Leishmaniasis, a neglected disease distributed worldwide, with limited treatment options using currently available drugs. The predicted interactions were obtained from a meta-approach, applying rigid-body docking tests and template-based docking on protein structures predicted by different comparative modeling techniques. In addition, we trained a machine-learning algorithm (Gradient Boosting) using docking information performed on a curated set of positive and negative protein interaction data. Our final model obtained an AUC = 0.88, with recall = 0.69, specificity = 0.88 and precision = 0.83. Using this approach, it was possible to confidently predict 681 protein structures and 6198 protein interactions for L. braziliensis, and 708 protein structures and 7391 protein interactions for L. infantum. The predicted networks were integrated with protein interaction data already available, analyzed using several topological features and used to classify proteins as essential for network stability. The present study demonstrates the importance of integrating different interaction-prediction methodologies to increase the coverage of the predicted protein interactomes of the studied species, and it makes available protein structures and interactions not previously reported.
NASA Astrophysics Data System (ADS)
Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios
2010-01-01
Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology is based on the nature and the quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens, Greece, are used. The Nearest Neighbor and Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between the estimated and measured values are assessed for every station and pollutant using the correlation coefficient, scatter diagrams and statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.
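Of the three schemes, IDW is the easiest to reproduce; a minimal sketch follows, with invented station coordinates and monthly mean concentrations. The power parameter of 2 is a common default, not necessarily the one used in the study.

```python
# Minimal inverse-distance-weighting sketch for a pollutant field, as used
# for the monthly means above. Station coordinates/values are invented.
import numpy as np

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
no2 = np.array([40.0, 55.0, 35.0, 60.0])      # monthly mean concentrations

def idw(target, power=2.0):
    d = np.linalg.norm(stations - target, axis=1)
    if np.any(d == 0):                         # exactly on a station
        return float(no2[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * no2) / np.sum(w))

print(idw(np.array([0.4, 0.4])))               # estimate at an unmonitored point
```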
Determination of the Territorial Sea Baseline - Measurement Aspect
NASA Astrophysics Data System (ADS)
Specht, Cezary; Weintrit, Adam; Specht, Mariusz; Dabrowski, Pawel
2017-12-01
Determining the course of the territorial sea baseline (TSB) of a coastal state is the basis for establishing its maritime boundaries, and thus indirectly forms part of the maritime policy of the state. Besides the legal and methodological aspects described in conventions, acts, standards and regulations, the measurement methodology for the boundaries of the territorial sea is equally important. The publication discusses the accuracy requirements of TSB measurement, the relationship between sea level and the choice of the method of its determination, and the implementation of such a measurement on a selected example. A 400-metre stretch of the public beach in Gdynia was used as the test area. The measurements were performed with a GNSS geodetic receiver operating in real time within the VRSnet.pl geodetic network. Additionally, the applied method was compared with analogous TSB measurements performed in 1999.
Modeling-Enabled Systems Nutritional Immunology
Verma, Meghna; Hontecillas, Raquel; Abedi, Vida; Leber, Andrew; Tubau-Juni, Nuria; Philipson, Casandra; Carbo, Adria; Bassaganya-Riera, Josep
2016-01-01
This review highlights the fundamental role of nutrition in the maintenance of health, the immune response, and disease prevention. Emerging global mechanistic insights in the field of nutritional immunology cannot be gained through reductionist methods alone or by analyzing a single nutrient at a time. We propose to investigate nutritional immunology as a massively interacting system of interconnected multistage and multiscale networks that encompass hidden mechanisms by which nutrition, microbiome, metabolism, genetic predisposition, and the immune system interact to delineate health and disease. The review sets an unconventional path to apply complex science methodologies to nutritional immunology research, discovery, and development through “use cases” centered around the impact of nutrition on the gut microbiome and immune responses. Our systems nutritional immunology analyses, which include modeling and informatics methodologies in combination with pre-clinical and clinical studies, have the potential to discover emerging systems-wide properties at the interface of the immune system, nutrition, microbiome, and metabolism. PMID:26909350
Lewis, George K; Lewis, George K; Olbricht, William
2008-01-01
This paper explains the circuitry and signal processing used to perform electrical impedance spectroscopy on piezoelectric materials and ultrasound transducers. Here, we measure and compare the impedance spectra of 2-5 MHz piezoelectrics, but the methodology applies to 700 kHz-20 MHz ultrasonic devices as well. Using a 12 ns wide, 5 volt pulsing circuit as an impulse, we determine the electrical impedance curves experimentally using Ohm's law and the fast Fourier transform (FFT), and compare the results with mathematical models. The method allows for rapid impedance measurement over a range of frequencies using a narrow input pulse, a digital oscilloscope and FFT techniques. The technique compares well to current methodologies such as network and impedance analyzers while providing additional versatility in the electrical impedance measurement. The technique is theoretically simple, easy to implement and completed with ordinary laboratory instrumentation for minimal cost. PMID:19081773
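The core of the method is a single spectral division, Z(f) = V(f)/I(f), applied to the recorded pulse and response. The sketch below illustrates this with synthetic signals; the sampling rate, pulse shape and toy current response are assumptions, not the paper's measured traces.

```python
# Sketch of the pulse-FFT impedance idea: Z(f) = V(f) / I(f) per Ohm's law.
# All signals are synthetic stand-ins for oscilloscope traces.
import numpy as np

fs = 200e6                                  # scope sampling rate, assumed
n = 4096
t = np.arange(n) / fs
v = np.exp(-((t - 1e-6) / 12e-9) ** 2)      # narrow voltage impulse
i = np.convolve(v, np.exp(-t[:200] * 2e6), mode="same") * 1e-3  # toy current

f = np.fft.rfftfreq(n, d=1 / fs)
Z = np.fft.rfft(v) / np.fft.rfft(i)         # complex impedance spectrum
band = (f > 2e6) & (f < 5e6)
print(f"min |Z| in 2-5 MHz band: {np.abs(Z[band]).min():.1f} ohm (toy value)")
```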
NASA Astrophysics Data System (ADS)
Kouloumentas, Christos
2011-09-01
The concept of the all-fiberized multi-wavelength regenerator is analyzed, and the design methodology for operation at 40 Gb/s is presented. The specific methodology has been applied in the past for the experimental proof-of-principle of the technique, but it has never been reported in detail. The regenerator is based on a strong dispersion map that is implemented using alternating dispersion compensating fibers (DCF) and single-mode fibers (SMF), and minimizes the nonlinear interaction between the wavelength-division multiplexing (WDM) channels. The optimized regenerator design, with +0.86 ps/nm/km average dispersion of the nonlinear fiber section, is further investigated. The specific design is capable of simultaneously processing five WDM channels with 800 GHz channel spacing and providing a Q-factor improvement higher than 1 dB for each channel. The cascadability of the regenerator is also indicated using a 6-node metropolitan network simulation model.
Identifying the starting point of a spreading process in complex networks.
Comin, Cesar Henrique; Costa, Luciano da Fontoura
2011-11-01
When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
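A minimal sketch of the identification rule: compute the four centralities on the sampled (infected) subgraph and flag the top-ranked node as the candidate source. The network model and the way the infected subgraph is obtained here are toy assumptions.

```python
# Illustration of the source-identification rule: centralities on the
# sampled subgraph, highest value flags the candidate source. Toy data.
import networkx as nx

g = nx.barabasi_albert_graph(200, 2, seed=1)
infected = nx.ego_graph(g, n=0, radius=2)      # pretend node 0 started it

for name, cent in [("degree", nx.degree_centrality(infected)),
                   ("betweenness", nx.betweenness_centrality(infected)),
                   ("closeness", nx.closeness_centrality(infected)),
                   ("eigenvector", nx.eigenvector_centrality(infected, max_iter=500))]:
    guess = max(cent, key=cent.get)
    print(f"{name:11s} -> candidate source: node {guess}")
```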
Exploring the structure and function of temporal networks with dynamic graphlets
Hulovatyy, Y.; Chen, H.; Milenković, T.
2015-01-01
Motivation: With the increasing availability of temporal real-world networks, how can these data be studied efficiently? One can model a temporal network as a single aggregate static network, or as a series of time-specific snapshots, each being an aggregate static network over the corresponding time window. Then, one can use established methods for static analysis on the resulting aggregate network(s), but in the process valuable temporal information is lost, either completely or at the interfaces between different snapshots, respectively. Here, we develop a novel approach for studying a temporal network more explicitly, by capturing inter-snapshot relationships. Results: We base our methodology on well-established graphlets (subgraphs), which have been proven in numerous contexts in static network research. We develop new theory to allow for graphlet-based analyses of temporal networks. Our new notion of dynamic graphlets is different from existing dynamic network approaches that are based on temporal motifs (statistically significant subgraphs). The latter have limitations: their results depend on the choice of a null network model that is required to evaluate the significance of a subgraph, and choosing a good null model is non-trivial. Our dynamic graphlets overcome the limitations of the temporal motifs. Also, when we aim to characterize the structure and function of an entire temporal network or of individual nodes, our dynamic graphlets outperform the static graphlets. Clearly, accounting for temporal information helps. We apply dynamic graphlets to temporal age-specific molecular network data to deepen our limited knowledge about human aging. Availability and implementation: http://www.nd.edu/∼cone/DG. Contact: tmilenko@nd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072480
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
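The first stage alone, spectral clustering of pixels with k-means, can be sketched in a few lines; the fuzzy road-cluster selection, Angular Texture Signature filtering and Radon-transform centerline extraction are not reproduced. The 4-band tile below is random stand-in data.

```python
# Sketch of the first stage only: k-means clustering of multi-spectral
# pixels; the road cluster would be picked downstream by the fuzzy
# classifier. The image is a random stand-in.
import numpy as np
from sklearn.cluster import KMeans

h, w, bands = 64, 64, 4
img = np.random.rand(h, w, bands)              # stand-in multi-spectral tile

pixels = img.reshape(-1, bands)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)
segmented = labels.reshape(h, w)
print("cluster sizes:", np.bincount(labels))   # road cluster chosen downstream
```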
An effective fractal-tree closure model for simulating blood flow in large arterial networks.
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em
2015-06-01
The aim of the present work is to address the closure problem for hemodynamic simulations by developing a flexible and effective model that accurately distributes flow in the downstream vasculature and can stably provide a physiological pressure outflow boundary condition. To achieve this goal, we model blood flow in the sub-pixel vasculature by using a non-linear 1D model in self-similar networks of compliant arteries that mimic the structure and hierarchy of vessels in the meso-vascular regime (radii 500 μm to 10 μm). We introduce a variable vessel length-to-radius ratio for small arteries and arterioles, while also addressing non-Newtonian blood rheology and arterial wall viscoelasticity effects in small arteries and arterioles. This methodology aims to overcome substantial cut-off radius sensitivities, typically arising in structured tree and linearized impedance models. The proposed model is not sensitive to outflow boundary conditions applied at the end points of the fractal network, and thus does not require calibration of the resistance/capacitance parameters typically required for outflow conditions. The proposed model converges to a periodic state in two cardiac cycles, even when started from zero-flow initial conditions. The resulting fractal trees typically consist of thousands to millions of arteries, posing the need for efficient parallel algorithms. To this end, we have scaled up a Discontinuous Galerkin solver that utilizes the MPI/OpenMP hybrid programming paradigm to thousands of computer cores, and can simulate blood flow in networks of millions of arterial segments at the rate of one cycle per 5 min. The proposed model has been extensively tested on a large and complex cranial network with 50 parent, patient-specific arteries and 21 outlets to which fractal trees were attached, resulting in a network of up to 4,392,484 vessels in total, and on a detailed network of the arm with 276 parent arteries and 103 outlets (a total of 702,188 vessels after attaching the fractal trees), returning physiological flow and pressure wave predictions without requiring any parameter estimation or calibration procedures. We thus present a novel methodology to overcome substantial cut-off radius sensitivities.
Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport
NASA Astrophysics Data System (ADS)
Kul'ka, Jozef; Mantič, Martin; Kopas, Melichar; Faltinová, Eva; Kachman, Daniel
2017-02-01
The article presents a heuristic optimization approach for selecting a suitable transport connection within a city public transport system. The methodology was applied to part of the public transport network in Košice, the second largest city in the Slovak Republic, whose public transport network forms a complex transport system consisting of three different transport modes: bus, tram and trolley-bus. The solution focuses on examining the individual transport services and their interconnection at the relevant interchange points.
Anthropic Risk Assessment on Biodiversity
NASA Astrophysics Data System (ADS)
Piragnolo, M.; Pirotti, F.; Vettore, A.; Salogni, G.
2013-01-01
This paper presents a methodology for the risk assessment of anthropic activities on habitats and species. The method has been developed for the Veneto Region in order to simplify and improve the quality of the EIA procedure (VINCA). Habitats and species, both animals and plants, are protected by European Directives 92/43/EEC and 2009/147/EC, but they are exposed to hazards from the pollution produced by human activities. Biodiversity risks may lead to deterioration and disturbance of ecological niches, with consequent loss of biodiversity. Ecological risk assessment applied to the Natura 2000 network is needed for best-practice management and monitoring of the environment and natural resources. Threats, pressures and activities, stresses and indicators can be managed in a geodatabase and analysed using GIS technology. The method used is classic risk assessment in an ecological context: it defines the natural hazard as influence and the elements of risk as interference and vulnerability, and it introduces a new parameter called pressure. It uses a risk matrix for risk analysis on spatial and temporal scales. The methodology is qualitative and applies the precautionary principle in environmental assessment. The final product is a matrix that allows risks to be excluded, and it could find application in the development of a territorial information system.
What Does Global Migration Network Say about Recent Changes in the World System Structure?
ERIC Educational Resources Information Center
Zinkina, Julia; Korotayev, Andrey
2014-01-01
Purpose: The aim of this paper is to investigate whether the structure of the international migration system has remained stable through the recent turbulent changes in the world system. Design/methodology/approach: The methodology draws on the social network analysis framework--but with some noteworthy limitations stipulated by the specifics of…
ERIC Educational Resources Information Center
Sheffield, Jenna Pack; Kimme Hea, Amy C.
2016-01-01
While composition studies researchers have examined the ways social media are impacting our lives inside and outside of the classroom, less attention has been given to the ways in which social media--specifically Social Network Sites (SNSs)--may enhance our own research methods and methodologies by helping to combat research participant attrition…
NASA Astrophysics Data System (ADS)
Guijarro, José A.; López, José A.; Aguilar, Enric; Domonkos, Peter; Venema, Victor; Sigró, Javier; Brunet, Manola
2017-04-01
After the successful inter-comparison of homogenization methods carried out in the COST Action ES0601 (HOME), many methods kept improving their algorithms, suggesting the need for new inter-comparison exercises. However, manual application of the methodologies to a large number of testing networks cannot be afforded without involving the work of many researchers over an extended time. The alternative is to make the comparisons as automatic as possible, as in the MULTITEST project, which, funded by the Spanish Ministry of Economy and Competitiveness, tests homogenization methods by applying them to a large number of synthetic networks of monthly temperature and precipitation. One hundred networks of 10 series were sampled from different master networks containing 100 series of 720 values (60 years times 12 months). Three master temperature networks were built with different degrees of cross-correlation between the series in order to simulate conditions of different station densities or climatic heterogeneity. Three master synthetic networks were also developed for precipitation, this time mimicking the characteristics of three different climates: Atlantic temperate, Mediterranean and monsoonal. Inhomogeneities were introduced in every network sampled from the master networks, and all publicly available homogenization methods that we could run in an automatic way were applied to them: ACMANT 3.0, Climatol 3.0, MASH 3.03, RHTestV4, USHCN v52d and HOMER 2.6. Most of them were tested with different settings, and their comparative results can be inspected in box-plot graphics of Root Mean Squared Errors and trend biases computed between the homogenized data and their original homogeneous series. In a first stage, inhomogeneities were applied to the synthetic homogeneous series with five different settings of increasing difficulty and realism: i) big shifts in half of the series; ii) the same with a strong seasonality; iii) short-term platforms and local trends; iv) a random number of shifts with random size and location in all series; and v) the same plus seasonality of random amplitude. The shifts were additive for temperature and multiplicative for precipitation. The second stage is dedicated to studying the impact of the number of series in the networks, seasonalities other than sinusoidal, and the occurrence of simultaneous shifts in a high number of series. Finally, tests will be performed on a longer and more realistic benchmark, with a varying number of missing data over time, similar to that used in the COST Action ES0601. These inter-comparisons will be valuable both to the users and to the developers of the tested packages, who can see how their algorithms behave under varied climate conditions.
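The two scores used to compare packages are straightforward to compute; the sketch below evaluates one synthetic monthly series against its homogenized counterpart, both invented here for illustration.

```python
# Sketch of the two comparison scores: RMSE between the homogenized series
# and its true homogeneous original, and the trend bias. Synthetic data.
import numpy as np

months = np.arange(720)                        # 60 years x 12 months
truth = 0.001 * months + np.random.randn(720) * 0.5
homog = truth + np.random.randn(720) * 0.1     # a package's output

rmse = float(np.sqrt(np.mean((homog - truth) ** 2)))

def trend(y):                                   # OLS slope per time step
    return np.polyfit(months, y, 1)[0]

trend_bias = trend(homog) - trend(truth)
print(f"RMSE={rmse:.3f}, trend bias={trend_bias * 120:.4f} per decade")
```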
Prior knowledge driven Granger causality analysis on gene regulatory network discovery
Yao, Shun; Yoo, Shinjae; Yu, Dantong
2015-08-28
Our study focuses on discovering gene regulatory networks from time series gene expression data using the Granger causality (GC) model. However, the number of available time points (T) usually is much smaller than the number of target genes (n) in biological datasets. The widely applied pairwise GC model (PGC) and other regularization strategies can lead to a significant number of false identifications when n>>T. In this study, we proposed a new method, viz., CGC-2SPR (CGC using two-step prior Ridge regularization), to resolve the problem by incorporating prior biological knowledge about a target gene data set. In our simulation experiments, the proposed new methodology CGC-2SPR showed significant performance improvement in terms of accuracy over other widely used GC modeling (PGC, Ridge and Lasso) and MI-based (MRNET and ARACNE) methods. In addition, we applied CGC-2SPR to a real biological dataset, i.e., the yeast metabolic cycle, and discovered more true positive edges with CGC-2SPR than with the other existing methods. In our research, we noticed a "1+1>2" effect when we combined prior knowledge and gene expression data to discover regulatory networks. Based on the causality networks, we made a functional prediction that the Abm1 gene (whose functions were previously unknown) might be related to the yeast's responses to different levels of glucose. In conclusion, our research improves causality modeling by combining heterogeneous knowledge, which is well aligned with the future direction in systems biology. Furthermore, we proposed a method of Monte Carlo significance estimation (MCSE) to calculate the edge significances, which provide statistical meaning to the discovered causality networks. All of our data and source codes will be available under the link https://bitbucket.org/dtyu/granger-causality/wiki/Home.
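A stripped-down sketch of the Granger idea with Ridge regularization follows: test whether gene x's past improves the prediction of gene y beyond y's own past. The two-step prior weighting that distinguishes CGC-2SPR is omitted, and the data are synthetic.

```python
# Minimal Granger-style scoring with Ridge regularization; the prior
# weighting of CGC-2SPR is omitted. Synthetic two-gene data with x -> y.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, lag = 30, 1                                  # few time points, as in biology
x = rng.standard_normal(T)
y = np.roll(x, 1) * 0.8 + rng.standard_normal(T) * 0.3

def sse(features, target):
    m = Ridge(alpha=1.0).fit(features, target)
    return float(np.sum((m.predict(features) - target) ** 2))

own = y[lag - 1:-1].reshape(-1, 1)              # y's own past
full = np.column_stack([own, x[lag - 1:-1]])    # plus x's past
score = np.log(sse(own, y[lag:]) / sse(full, y[lag:]))   # >0 suggests x -> y
print(f"GC score x->y: {score:.2f}")
```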
Fluxes through plant metabolic networks: measurements, predictions, insights and challenges.
Kruger, Nicholas J; Ratcliffe, R George
2015-01-01
Although the flows of material through metabolic networks are central to cell function, they are not easy to measure other than at the level of inputs and outputs. This is particularly true in plant cells, where the network spans multiple subcellular compartments and where the network may function either heterotrophically or photoautotrophically. For many years, kinetic modelling of pathways provided the only method for describing the operation of fragments of the network. However, more recently, it has become possible to map the fluxes in central carbon metabolism using the stable isotope labelling techniques of metabolic flux analysis (MFA), and to predict intracellular fluxes using constraints-based modelling procedures such as flux balance analysis (FBA). These approaches were originally developed for the analysis of microbial metabolism, but over the last decade, they have been adapted for the more demanding analysis of plant metabolic networks. Here, the principal features of MFA and FBA as applied to plants are outlined, followed by a discussion of the insights that have been gained into plant metabolic networks through the application of these time-consuming and non-trivial methods. The discussion focuses on how a system-wide view of plant metabolism has increased our understanding of network structure, metabolic perturbations and the provision of reducing power and energy for cell function. Current methodological challenges that limit the scope of plant MFA are discussed and particular emphasis is placed on the importance of developing methods for cell-specific MFA.
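FBA amounts to a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. A toy three-reaction sketch with scipy follows; the stoichiometry and bounds are invented, not a plant network.

```python
# Toy flux balance analysis: maximize a "biomass" flux subject to
# steady-state mass balance S v = 0 and flux bounds, as a linear program.
import numpy as np
from scipy.optimize import linprog

# metabolite A: produced by uptake (v1), consumed by overflow (v2) and
# biomass (v3); all reactions and bounds are illustrative
S = np.array([[1.0, -1.0, -1.0]])
bounds = [(0, 10), (0, 5), (0, None)]           # uptake, overflow, biomass

c = np.array([0.0, 0.0, -1.0])                  # linprog minimizes, so -biomass
res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
print("fluxes:", res.x, "biomass:", -res.fun)
```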
Temporal motifs reveal collaboration patterns in online task-oriented networks
NASA Astrophysics Data System (ADS)
Xuan, Qi; Fang, Huiting; Fu, Chenbo; Filkov, Vladimir
2015-05-01
Real networks feature layers of interactions and complexity. In them, different types of nodes can interact with each other via a variety of events. Examples of this complexity are task-oriented social networks (TOSNs), where teams of people share tasks towards creating a quality artifact, such as academic research papers or software development in commercial or open source environments. Accomplishing those tasks involves both work, e.g., writing the papers or code, and communication, to discuss and coordinate. Taking into account the different types of activities and how they alternate over time can result in much more precise understanding of the TOSNs behaviors and outcomes. That calls for modeling techniques that can accommodate both node and link heterogeneity as well as temporal change. In this paper, we report on methodology for finding temporal motifs in TOSNs, limited to a system of two people and an artifact. We apply the methods to publicly available data of TOSNs from 31 Open Source Software projects. We find that these temporal motifs are enriched in the observed data. When applied to software development outcome, temporal motifs reveal a distinct dependency between collaboration and communication in the code writing process. Moreover, we show that models based on temporal motifs can be used to more precisely relate both individual developer centrality and team cohesion to programmer productivity than models based on aggregated TOSNs.
Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai
2016-01-01
This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and the dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed that incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows is used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and to produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA, to validate the robustness and practicality of the methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID:26761209
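The GLS stage on its own can be sketched as a non-negative least-squares fit of the OD vector q to observed link counts x ≈ P q, with the link-choice proportion matrix P held fixed; the SUE loop that updates P is omitted and all numbers are invented.

```python
# Sketch of the GLS stage alone: estimate a non-negative OD vector q from
# link counts x ~= P q, with the choice-proportion matrix P fixed.
import numpy as np
from scipy.optimize import nnls

P = np.array([[0.7, 0.1, 0.0],     # link x OD-pair choice proportions
              [0.3, 0.6, 0.2],
              [0.0, 0.3, 0.8]])
q_true = np.array([100.0, 50.0, 80.0])
x = P @ q_true + np.random.randn(3) * 2.0       # noisy link counts

q_hat, _ = nnls(P, x)                           # least-squares OD estimate
print("estimated OD demand:", np.round(q_hat, 1))
```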
Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network
NASA Astrophysics Data System (ADS)
MolaAbasi, H.; Shooshpasha, I.
2016-04-01
The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies have been developed for improved soils based on rational criteria, as exist in concrete technology. Numerous earlier studies have shown the possibility of relating Unconfined Compressive Strength (UCS) and cemented sand (CS) parameters (voids/cement ratio) through power-function fits. Given that the existing equations are incapable of estimating UCS well for zeolite-cemented sand mixtures (ZCS), artificial intelligence methods are used for forecasting it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compression tests were carried out. A comparison is made between the experimentally measured UCS and the predictions in order to evaluate the performance of the method. The results demonstrate that the generalized polynomial-type neural network has a great ability to predict the UCS. Finally, a sensitivity analysis of the polynomial model is performed to study the influence of the input parameters on the model output. The sensitivity analysis reveals that cement and zeolite content have a significant influence on the predicted UCS.
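As a hedged stand-in for the polynomial network, the sketch below fits an ordinary second-degree polynomial regression of UCS on the four index properties named above. The GMDH-style layer construction of a true polynomial neural network is not reproduced, and the data are random placeholders sized to match the 216 tests.

```python
# Stand-in for the polynomial network: plain polynomial regression of UCS
# on zeolite %, cement %, porosity and curing time. Data are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform([0, 2, 30, 7], [30, 10, 50, 90], size=(216, 4))
ucs = 0.5 * X[:, 1] ** 2 - 0.05 * X[:, 2] + 0.02 * X[:, 3] + rng.normal(0, 1, 216)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, ucs)
print("R^2 on training data:", round(model.score(X, ucs), 3))
```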
Defining and characterizing the critical transition state prior to the type 2 diabetes disease
Zhu, Chunqing; Zhou, Xin; Chen, Pei; Fu, Tianyun; Hu, Zhongkai; Wu, Qian; Liu, Wei; Liu, Daowei; Yu, Yunxian; Zhang, Yan; McElhinney, Doff B.; Li, Yu-Ming; Culver, Devore S; Alfreds, Shaun T.; Stearns, Frank; Sylvester, Karl G.; Widen, Eric
2017-01-01
Background Type 2 diabetes mellitus (T2DM), with increased risk of serious long-term complications, currently affects 8.3% of the adult population. We hypothesized that a critical transition state prior to new-onset T2DM can be revealed through longitudinal electronic medical record (EMR) analysis. Method We applied the transition-based network entropy methodology, which previously identified a dynamic driver network (DDN) underlying the critical T2DM transition at the tissue molecular-biological level. To profile pre-disease phenotypical changes that indicate a critical transition state, a cohort of 7,334 patients was assembled from the Maine State Health Information Exchange (HIE). These patients all had their first confirmative diagnosis of T2DM between January 1, 2013 and June 30, 2013. The cohort's EMRs from the 24 months preceding the date of first T2DM diagnosis were extracted. Results Analysis of these patients' pre-disease clinical history identified a dynamic driver network (DDN) and an associated critical transition state six months prior to the first confirmative T2DM state. Conclusions This 6-month window before the disease state provides an early warning of impending T2DM, offering an opportunity to apply proactive interventions to prevent or delay the new onset of T2DM. PMID:28686739
Kreula, Sanna M; Kaewphan, Suwisa; Ginter, Filip; Jones, Patrik R
2018-01-01
The increasing move towards open-access full-text scientific literature enhances our ability to utilize advanced text-mining methods to construct information-rich networks that no human will be able to grasp simply from 'reading the literature'. The utility of text-mining for well-studied species is obvious, though the utility for less-studied species, or those with no prior track record at all, is not clear. Here we present a concept for how advanced text-mining can be used to create information-rich networks even for less well studied species and apply it to generate an open-access gene-gene association network resource for Synechocystis sp. PCC 6803, a representative model organism for cyanobacteria and the first case study for the methodology. By merging the text-mining network with networks generated from species-specific experimental data, network integration was used to enhance the accuracy of predicting novel interactions that are biologically relevant. A rule-based algorithm (filter) was constructed in order to automate the search for novel candidate genes with a high degree of likely association to known target genes by (1) ignoring established relationships from the existing literature, as they are already 'known', and (2) demanding multiple independent evidences for every novel and potentially relevant relationship. Using selected case studies, we demonstrate the utility of the network resource and filter to (i) discover novel candidate associations between different genes or proteins in the network, and (ii) rapidly evaluate the potential role of any one particular gene or protein. The full network is provided as an open-source resource.
An Approach to V&V of Embedded Adaptive Systems
NASA Technical Reports Server (NTRS)
Liu, Yan; Yerramalla, Sampath; Fuller, Edgar; Cukic, Bojan; Gururajan, Srikaruth
2004-01-01
Rigorous Verification and Validation (V&V) techniques are essential for high assurance systems. Lately, the performance of some of these systems has been enhanced by embedded adaptive components in order to cope with environmental changes. Although the ability to adapt is appealing, it poses a problem in terms of V&V. Since uncertainties induced by environmental changes have a significant impact on system behavior, the applicability of conventional V&V techniques is limited. In safety-critical applications such as flight control systems, the mechanisms of change must be observed, diagnosed, accommodated and well understood prior to deployment. In this paper, we propose a non-conventional V&V approach suitable for online adaptive systems. We apply our approach to an intelligent flight control system that employs a particular type of Neural Network (NN) as the adaptive learning paradigm. The presented methodology consists of a novelty detection technique and online stability monitoring tools. The novelty detection technique is based on Support Vector Data Description, which detects novel (abnormal) data patterns. The online stability monitoring tools, based on Lyapunov stability theory, detect unstable learning behavior in neural networks. Case studies based on a high-fidelity simulator of NASA's Intelligent Flight Control System demonstrate a successful application of the presented V&V methodology.
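The novelty-detection idea can be approximated with a one-class SVM, scikit-learn's closest analogue to Support Vector Data Description: learn the envelope of nominal flight data and flag departures from it. The feature dimensionality and data below are invented.

```python
# Sketch of the novelty-detection idea with a one-class SVM boundary, an
# analogue of Support Vector Data Description. Flight data are invented.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, size=(500, 3))      # nominal sensor patterns
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(nominal)

new = np.array([[0.1, -0.2, 0.3],                  # looks nominal
                [6.0, 6.0, 6.0]])                  # abnormal pattern
print(detector.predict(new))                       # +1 = inlier, -1 = novelty
```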
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Robertson, Franklin R.; Clayson, Carol Anne
2012-01-01
Improved estimates of near-surface air temperature and air humidity are critical to the development of more accurate turbulent surface heat fluxes over the ocean. Recent progress in retrieving these parameters has been made through the application of artificial neural networks (ANN) and the use of multi-sensor passive microwave observations. Details are provided on the development of an improved retrieval algorithm that applies the nonlinear statistical ANN methodology to a set of observations from the Advanced Microwave Scanning Radiometer (AMSR-E) and the Advanced Microwave Sounding Unit (AMSU-A) that are currently available from the NASA AQUA satellite platform. Statistical inversion techniques require an adequate training dataset to properly capture embedded physical relationships. The development of multiple training datasets containing only in-situ observations, only synthetic observations produced using the Community Radiative Transfer Model (CRTM), or a mixture of each is discussed. An intercomparison of results using each training dataset is provided to highlight the relative advantages and disadvantages of each methodology. Particular emphasis will be placed on the development of retrievals in cloudy versus clear-sky conditions. Near-surface air temperature and humidity retrievals using the multi-sensor ANN algorithms are compared to previous linear and non-linear retrieval schemes.
NASA Astrophysics Data System (ADS)
Luque, Pablo; Mántaras, Daniel A.; Fidalgo, Eloy; Álvarez, Javier; Riva, Paolo; Girón, Pablo; Compadre, Diego; Ferran, Jordi
2013-12-01
The main objective of this work is to determine the limit of safe driving conditions by identifying the maximal friction coefficient in a real vehicle. The study focuses on finding a method to determine this limit before the skid occurs, which is valuable information in the context of traffic safety. Since it is not possible to measure the friction coefficient directly, it is estimated using appropriate tools in order to obtain the most accurate information. A real vehicle is instrumented to collect information on general kinematics and steering tie-rod forces. A real-time algorithm is developed to estimate the forces and aligning torque in the tyres using an extended Kalman filter and neural network techniques. The methodology is based on determining the aligning torque; this variable allows the behaviour of the tyre to be evaluated. It conveys useful information from the tyre-road contact and can be used to predict the maximal tyre grip and the safety margin. The maximal grip coefficient is estimated using a knowledge base extracted from computer simulations of a highly detailed three-dimensional model built in Adams® software. The proposed methodology is validated and applied to real driving conditions, in which the maximal grip and safety margin are properly estimated.
NASA Astrophysics Data System (ADS)
Hupe, Patrick; Ceranna, Lars; Pilger, Christoph
2018-04-01
The International Monitoring System (IMS) has been established to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty and comprises four technologies, one of which is infrasound. When fully established, the IMS infrasound network will consist of 60 sites uniformly distributed around the globe. Besides its primary purpose of detecting explosions in the atmosphere, the recorded data reveal information on other anthropogenic and natural infrasound sources. Furthermore, the almost continuous multi-year recordings of differential and absolute air pressure allow for analysing atmospheric conditions. In this paper, spectral analysis tools are applied to derive atmospheric dynamics from barometric time series. Based on the solar atmospheric tides, a methodology for performing geographic and temporal variability analyses is presented, which is intended to serve upcoming studies of atmospheric dynamics. The added value of using IMS infrasound network data for such purposes is demonstrated by comparing the findings on the thermal tides with previous studies and with the Modern-Era Retrospective analysis for Research and Applications Version 2 (MERRA-2), which represents the solar tides well in its surface pressure fields. Absolute air pressure recordings reveal geographical characteristics of atmospheric tides related to the solar day and even to the lunar day. We therefore conclude that the chosen methodology of using the IMS infrasound network is applicable to global and temporal studies of specific atmospheric dynamics. Given the accuracy and high temporal resolution of the barometric data from the IMS infrasound network, interactions with gravity waves and planetary waves can be examined in the future to refine knowledge of atmospheric dynamics, e.g. the origin of tidal harmonics up to 9 cycles per day as found in the barometric data sets. Data assimilation into empirical models of solar tides would be a valuable application of the IMS infrasound data.
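As a sketch of the spectral-analysis step, assuming hourly absolute-pressure samples in a NumPy array: the amplitudes of the solar tidal harmonics S1 (24 h period) and S2 (12 h period) can be read off the one-sided FFT of the barometric series. The data below are synthetic.

```python
# Estimate solar tidal harmonic amplitudes from an hourly pressure series.
import numpy as np

hours = np.arange(24 * 365)                            # one year, hourly
pressure = (100.0 * np.sin(2 * np.pi * hours / 24)     # synthetic S1 (Pa)
            + 60.0 * np.sin(2 * np.pi * hours / 12)    # synthetic S2 (Pa)
            + np.random.default_rng(1).normal(0, 20, hours.size))

spec = np.fft.rfft(pressure - pressure.mean())
freq = np.fft.rfftfreq(pressure.size, d=1.0)           # cycles per hour
amp = 2 * np.abs(spec) / pressure.size                 # sinusoid amplitude

for name, f in [("S1", 1 / 24), ("S2", 1 / 12)]:
    k = np.argmin(np.abs(freq - f))
    print(f"{name}: ~{amp[k]:.0f} Pa at {freq[k] * 24:.2f} cycles/day")
```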
Health impact assessment in a network of European cities.
Ison, Erica
2013-10-01
The methodology of health impact assessment (HIA) was introduced as one of four core themes for Phase IV (2003-2008) of the World Health Organization European Healthy Cities Network (WHO-EHCN). Four objectives for HIA were set at the beginning of the phase. We report on the results of the evaluation of introducing and implementing this methodology in cities from countries across Europe with widely differing economies and sociopolitical contexts. Two main sources of data were used: a general questionnaire designed for the Phase IV evaluation and the annual reporting template for 2007-2008. Sources of bias included the proportion of non-responders and the requirement to communicate in English. Main barriers to the introduction and implementation of HIA were a lack of skill, knowledge and experience of HIA, the newness of the concept, the lack of a legal basis for implementation and a lack of political support. Main facilitating factors were political support, training in HIA, collaboration with an academic/public health institution or local health agency, a pre-existing culture of intersectoral working, a supportive national policy context, access to WHO materials about or expertise in HIA and membership of the WHO-EHCN, HIA Sub-Network or a National Network. The majority of respondents did not feel that they had had the resources, knowledge or experience to achieve all of the objectives set for HIA in Phase IV. The cities that appear to have been most successful at introducing and implementing HIA had pre-existing experience of HIA, came from a country with a history of applying HIA, were HIA Sub-Network members or had made a commitment to implementing HIA during successive years of Phase IV. Although HIA was recognised as an important component of Healthy Cities' work, the experience in the WHO-EHCN underscores the need for political buy-in, capacity building and adequate resourcing for the introduction and implementation of HIA to be successful.
Understanding and managing disaster evacuation on a transportation network.
Lambert, James H; Parlak, Ayse I; Zhou, Qian; Miller, John S; Fontaine, Michael D; Guterbock, Thomas M; Clements, Janet L; Thekdi, Shital A
2013-01-01
Uncertain population behaviors in a regional emergency could potentially harm the performance of the region's transportation system and the subsequent evacuation effort. The integration of behavioral survey data with travel demand modeling enables an assessment of transportation system performance and the identification of operational and public health countermeasures. This paper analyzes transportation system demand and system performance for emergency management in three disaster scenarios. A two-step methodology first estimates the number of trips evacuating the region, thereby capturing behavioral aspects in a scientifically defensible manner based on survey results, and second, assigns these trips to a regional highway network using geographic information systems software, thereby making the methodology transferable to other locations. Performance measures are generated for each scenario, including maps of volume-to-capacity ratios, geographic contours of evacuation time from the center of the region, and link-specific metrics such as weighted average speed and traffic volume. The methods are demonstrated on a 600-segment transportation network in Washington, DC (USA) and are applied to three scenarios involving attacks from radiological dispersion devices (e.g., dirty bombs). The results suggest that: (1) a single detonation would degrade transportation system performance two to three times more than that which occurs during a typical weekday afternoon peak hour, (2) volume on several critical arterials within the network would exceed capacity in the represented scenarios, and (3) resulting travel times to reach intended destinations imply that un-aided evacuation is impractical. These results assist decisions made by two categories of emergency responders: (1) transportation managers who provide traveler information and who make operational adjustments to improve the network (e.g., signal retiming) and (2) public health officials who maintain shelters, food and water stations, or first aid centers along evacuation routes. This approach may also interest decision makers who are in a position to influence the allocation of emergency resources, including healthcare providers, infrastructure owners, transit providers, and regional or local planning staff. Copyright © 2012 Elsevier Ltd. All rights reserved.
Complexity Measures in Magnetoencephalography: Measuring "Disorder" in Schizophrenia
Brookes, Matthew J.; Hall, Emma L.; Robson, Siân E.; Price, Darren; Palaniyappan, Lena; Liddle, Elizabeth B.; Liddle, Peter F.; Robinson, Stephen E.; Morris, Peter G.
2015-01-01
This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of ‘disorder’ in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well characterised distributed network comprising bilateral insula and cingulate cortices). PMID:25886553
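A minimal sketch of the core measure, assuming a 1-D neural time series: Shannon entropy of the amplitude distribution in a sliding window yields an entropy time-course. The window length and bin count are illustrative choices, not the paper's settings.

```python
# Sliding-window Shannon entropy of a signal, in bits.
import numpy as np

def entropy_timecourse(signal, win=256, step=64, bins=32):
    out = []
    for start in range(0, signal.size - win + 1, step):
        seg = signal[start:start + win]
        counts, _ = np.histogram(seg, bins=bins)
        p = counts[counts > 0] / win              # empirical probabilities
        out.append(-(p * np.log2(p)).sum())       # Shannon entropy
    return np.asarray(out)

sig = np.random.default_rng(2).normal(size=5000)  # stand-in for one MEG channel
print(entropy_timecourse(sig)[:5])
```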
NASA Astrophysics Data System (ADS)
Guruprasad, R.; Behera, B. K.
2015-10-01
Quantitative prediction of fabric mechanical properties is an essential requirement for the design engineering of textile and apparel products. In this work, the possibility of predicting the bending rigidity of cotton woven fabrics has been explored with the application of an Artificial Neural Network (ANN) and two hybrid methodologies, namely neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used to optimize the number of neurons and the connection weights of the neural network. The genetic-algorithm-optimized network structure was then further trained using the back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the predictions of the neuro-genetic and ANFIS models were better than those of the back propagation neural network model.
New methodologies for multi-scale time-variant reliability analysis of complex lifeline networks
NASA Astrophysics Data System (ADS)
Kurtz, Nolan Scot
The cost of maintaining existing civil infrastructure is enormous. Since the livelihood of the public depends on such infrastructure, its state must be managed appropriately using quantitative approaches. Practitioners must consider not only which components are most fragile to hazards, e.g. seismicity, storm surge, hurricane winds, etc., but also how they participate at the network level, using network analysis. Focusing on particularly damaged components does not necessarily increase network functionality, which is what matters most to the people who depend on such infrastructure. Several network analyses, e.g. S-RDA, LP-bounds, and crude-MCS, and performance metrics, e.g. disconnection bounds and component importance, are available for such purposes. Since these networks already exist, their state over time is also important. If networks are close to chloride sources, deterioration may be a major issue. Information from field inspections may also have large impacts on quantitative models. To address such issues, analytical hazard risk analysis methodologies for deteriorating networks subjected to seismicity, i.e. earthquakes, have been created. A bridge component model has been constructed for these methodologies. The bridge fragilities, which were constructed from data, required a deeper level of analysis as these were relevant for specific structures. Furthermore, network-level effects of chloride-induced deterioration were investigated. Depending on how mathematical models incorporate new information, many approaches are available, such as Bayesian model updating. To make such procedures more flexible, an adaptive importance sampling scheme was created for structural reliability problems. Such a method handles many kinds of system and component problems with single or multiple important regions of the limit state function. These and previously developed analysis methodologies were found to be strongly sensitive to network size. Special network topologies may be more or less computationally difficult, and the resolution of the network also has large effects. To take advantage of some types of topologies, network hierarchical structures with super-link representation have been used in the literature to increase computational efficiency by analyzing smaller, densely connected networks; however, such structures were based on user input and were at times subjective. To address this, algorithms must be automated and reliable. These hierarchical structures may also reveal the structure of the network itself. This risk analysis methodology has been expanded to larger networks using such automated hierarchical structures. Component importance is the most important output of such network analysis; however, it may only indicate which bridges to inspect or repair earliest, and little else. High correlations negatively influence such component importance measures. Additionally, a regional approach is not appropriately modelled by component importance alone. To provide a more regional view, group importance measures based on hierarchical structures have been created. Such structures may also be used to create regional inspection and repair approaches. Using these analytical, quantitative risk approaches, the next generation of decision makers may make both component-based and regional-based optimal decisions using information on both network function and the further effects of infrastructure deterioration.
Omony, Jimmy; de Jong, Anne; Krawczyk, Antonina O.; Eijlander, Robyn T.; Kuipers, Oscar P.
2018-01-01
Sporulation is a survival strategy, adapted by bacterial cells in response to harsh environmental adversities. The adaptation potential differs between strains and the variations may arise from differences in gene regulation. Gene networks are a valuable way of studying such regulation processes and establishing associations between genes. We reconstructed and compared sporulation gene co-expression networks (GCNs) of the model laboratory strain Bacillus subtilis 168 and the food-borne industrial isolate Bacillus amyloliquefaciens. Transcriptome data obtained from samples of six stages during the sporulation process were used for network inference. Subsequently, a gene set enrichment analysis was performed to compare the reconstructed GCNs of B. subtilis 168 and B. amyloliquefaciens with respect to biological functions, which showed the enriched modules with coherent functional groups associated with sporulation. On basis of the GCNs and time-evolution of differentially expressed genes, we could identify novel candidate genes strongly associated with sporulation in B. subtilis 168 and B. amyloliquefaciens. The GCNs offer a framework for exploring transcription factors, their targets, and co-expressed genes during sporulation. Furthermore, the methodology described here can conveniently be applied to other species or biological processes. PMID:29424683
A frequency-domain approach to improve ANNs generalization quality via proper initialization.
Chaari, Majdi; Fekih, Afef; Seibi, Abdennour C; Hmida, Jalel Ben
2018-08-01
The ability to train a network without memorizing the input/output data, thereby allowing a good predictive performance when applied to unseen data, is paramount in ANN applications. In this paper, we propose a frequency-domain approach to evaluate the network initialization in terms of quality of training, i.e., generalization capabilities. As an alternative to the conventional time-domain methods, the proposed approach eliminates the approximate nature of network validation using an excess of unseen data. The benefits of the proposed approach are demonstrated using two numerical examples, where two trained networks performed similarly on the training and the validation data sets, yet they revealed a significant difference in prediction accuracy when tested using a different data set. This observation is of utmost importance in modeling applications requiring a high degree of accuracy. The efficiency of the proposed approach is further demonstrated on a real-world problem, where unlike other initialization methods, a more conclusive assessment of generalization is achieved. On the practical front, subtle methodological and implementational facets are addressed to ensure reproducibility and pinpoint the limitations of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin
2010-05-01
This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models, and they were then compared by means of their validations. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears to be more accurate than those of the other models, and the results also showed that artificial neural networks are a useful tool in the preparation of collapse susceptibility maps and are highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.
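A minimal sketch of the validation step, assuming each model outputs a susceptibility score per grid cell and that observed collapses provide binary labels: AUC values compare the CP, LR and ANN maps on a common validation set. The scores below are random placeholders.

```python
# Compare susceptibility models by AUC on a shared validation set.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
labels = rng.integers(0, 2, size=1000)     # 1 = collapse observed in the cell
scores = {"CP": rng.random(1000),          # placeholder model outputs
          "LR": rng.random(1000),
          "ANN": rng.random(1000)}

for name, s in scores.items():
    print(f"{name}: AUC = {roc_auc_score(labels, s):.3f}")
```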
Stability of ecological industry chain: an entropy model approach.
Wang, Qingsong; Qiu, Shishou; Yuan, Xueliang; Zuo, Jian; Cao, Dayong; Hong, Jinglan; Zhang, Jian; Dong, Yong; Zheng, Ying
2016-07-01
A novel methodology is proposed in this study to examine the stability of an ecological industry chain network based on entropy theory. This methodology is developed according to the associated dissipative structure characteristics, i.e., complexity, openness, and nonlinearity. As defined in the methodology, the network organization is the object, while the main focus is the identification of core enterprises and core industry chains. It is proposed that the chain network should be established around the core enterprise, while supplementation of the core industry chain helps to improve system stability, which is verified quantitatively. The relational entropy model can be used to identify the core enterprise and the core eco-industry chain. It determines the core of the network organization and the core eco-industry chain through the link form and direction of node enterprises. Similarly, the conductive mechanism of different node enterprises can be examined quantitatively despite the absence of key data. The structural entropy model can be employed to solve the problem of the order degree of the network organization. Results showed that the stability of the entire system could be enhanced by the supplemented chain around the core enterprise in the eco-industry chain network organization. As a result, the sustainability of the entire system could be further improved.
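As an illustration of the structural-entropy idea, a minimal sketch using one common formulation (Shannon entropy of the normalized degree distribution, with an order degree measured against the fully disordered reference); the paper's exact entropy models may differ.

```python
# Degree-distribution entropy and order degree of a small network.
import numpy as np
import networkx as nx

G = nx.Graph([("core", "a"), ("core", "b"), ("core", "c"), ("a", "b")])
deg = np.array([d for _, d in G.degree()], dtype=float)
p = deg / deg.sum()                             # node weights from connectivity
H = -(p * np.log(p)).sum()                      # structural entropy
H_max = np.log(len(p))                          # entropy of a disordered system
print(f"order degree R = {1 - H / H_max:.3f}")  # 0 = disordered, 1 = ordered
```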
Analysis and Reduction of Complex Networks Under Uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghanem, Roger G
2014-07-31
This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations, 2) methodology and algorithms to characterize probability measures on graph structures with random flows. This is an important problem in characterizing random demand (encountered in smart grid) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov Chains (with ubiquitous relevance!). 3) methodology and algorithms for treating inequalities in uncertain systems. This is an important problem in the context of models for material failure and network flows under uncertainty where conditions of failure or flow are described in the form of inequalities between the state variables.
NASA Astrophysics Data System (ADS)
Dimond, David A.; Burgess, Robert; Barrios, Nolan; Johnson, Neil D.
2000-05-01
Traditionally, to guarantee the network performance of medical image data transmission, imaging traffic was isolated on a separate network. Organizations are depending on a new generation of multi-purpose networks to transport both normal information and image traffic as they expand access to images throughout the enterprise. These organizations want to leverage their existing infrastructure for imaging traffic, but are not willing to accept degradations in overall network performance. To guarantee 'on demand' network performance for image transmissions anywhere at any time, networks need to be designed with the ability to 'carve out' bandwidth for specific applications and to minimize the chances of network failures. This paper presents the methodology Cincinnati Children's Hospital Medical Center (CHMC) used to enhance the physical and logical network design of the existing hospital network to guarantee a class of service for imaging traffic. PACS network designs should utilize the existing enterprise local area network (LAN) infrastructure where appropriate. Logical separation or segmentation provides the application independence from other clinical and administrative applications as required, ensuring bandwidth and service availability.
A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM)
2017-10-01
TECHNICAL REPORT 3079, October 2017, Networks Division. [Front-matter and table-of-contents fragments; the recoverable executive summary states that this report summarizes the methodology developed to improve radar threshold modeling for phased array radar configurations using the Advanced Propagation Model (APM).]
Modeling and clustering water demand patterns from real-world smart meter data
NASA Astrophysics Data System (ADS)
Cheifetz, Nicolas; Noumir, Zineb; Samé, Allou; Sandraz, Anne-Claire; Féliers, Cédric; Heim, Véronique
2017-08-01
Nowadays, drinking water utilities need an acute comprehension of the water demand on their distribution network in order to efficiently operate the optimization of resources, manage billing and propose new customer services. With the emergence of smart grids based on automated meter reading (AMR), a finer-grained understanding of consumption modes is now accessible for smart cities. In this context, this paper evaluates a novel methodology for identifying relevant usage profiles from the water consumption data produced by smart meters. The methodology is fully data-driven, using the consumption time series, which are seen as functions or curves observed with an hourly time step. First, a Fourier-based additive time series decomposition model is introduced to extract seasonal patterns from the time series. These patterns are intended to represent the customer habits in terms of water consumption. Two functional clustering approaches are then used to classify the extracted seasonal patterns: the functional version of K-means, and the Fourier REgression Mixture (FReMix) model. The K-means approach produces a hard segmentation and K representative prototypes. On the other hand, FReMix is a generative model and also produces K profiles as well as a soft segmentation based on the posterior probabilities. The proposed approach is applied to a smart grid deployed on the largest water distribution network (WDN) in France. The two clustering strategies are evaluated and compared. Finally, a realistic interpretation of the consumption habits is given for each cluster. The extensive experiments and the qualitative interpretation of the resulting clusters highlight the effectiveness of the proposed methodology.
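A minimal sketch of the two-stage pipeline, assuming one consumption curve per meter sampled hourly over a week: a low-order Fourier (harmonic) regression extracts a smooth seasonal pattern, and K-means then clusters the patterns. The harmonic order and number of clusters are illustrative, not the paper's settings.

```python
# Fourier seasonal-pattern extraction followed by K-means clustering.
import numpy as np
from sklearn.cluster import KMeans

def fourier_pattern(y, order=3):
    t = np.arange(y.size) / y.size
    X = np.column_stack([np.ones_like(t)] +
                        [f(2 * np.pi * k * t) for k in range(1, order + 1)
                         for f in (np.sin, np.cos)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef                              # smooth seasonal pattern

rng = np.random.default_rng(4)
curves = rng.random((100, 7 * 24))               # 100 meters, one week hourly
patterns = np.array([fourier_pattern(c) for c in curves])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(patterns)
print(np.bincount(labels))                       # cluster sizes
```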
Enhanced three-dimensional stochastic adjustment for combined volcano geodetic networks
NASA Astrophysics Data System (ADS)
Del Potro, R.; Muller, C.
2009-12-01
Volcano geodesy is unquestionably a necessary technique in studies of physical volcanology and for eruption early warning systems. However, as every volcano geodesist knows, obtaining measurements of the required resolution using traditional campaigns and techniques is time consuming and requires considerable manpower. Moreover, most volcano geodetic networks worldwide use a combination of data from traditional techniques (levelling, electronic distance measurements (EDM), triangulation and Global Navigation Satellite Systems (GNSS)) but, in most cases, these data are surveyed, analysed and adjusted independently. It is then left to the authors' criteria to decide which technique renders the most realistic results in each case. Herein we present a way of solving the problem of inter-methodology data integration in a cost-effective manner, following a methodology whereby all the geodetic data of a redundant, combined network (e.g. surveyed by GNSS, levelling, distance and angular measurements, InSAR, extensometers, etc.) are adjusted stochastically within a single three-dimensional reference frame. The adjustment methodology is based on the least mean square method and links the data with its geometrical component, providing combined, precise, three-dimensional displacement vectors relative to external reference points, as well as stochastically quantified, benchmark-specific uncertainty ellipsoids. Three steps in the adjustment allow identifying, and hence dismissing, flagrant measurement errors (antenna height, atmospheric effects, etc.), checking the consistency of external reference points, and performing a final adjustment of the data. Moreover, since the statistical indicators can be obtained from expected uncertainties in the measurements of the different geodetic techniques used (i.e. independently of the measured data), it is possible to run a priori simulations of a geodetic network in order to constrain its resolution, and reduce logistics, before the network is even built. In this work we present a first effort to apply this technique to a new volcano geodetic network on Arenal volcano in Costa Rica, using triangulation, EDM and GNSS data from four campaigns. An a priori simulation, later confirmed by field measurements, of the movement detection capacity of different benchmarks within the network shows how the network design is optimised to detect smaller displacements at the points where these are expected. Data from the four campaigns also prove the repeatability and consistency of the statistical indicators. A preliminary interpretation of the geodetic data relative to Arenal's volcanic activity could indicate a correlation of displacement velocity and direction with the location and thickness of the recent lava flow field. This suggests that deflation caused by the weight of the lava field could be obscuring the effects of possible deep magmatic sources. Although this study is specific to Arenal volcano and its regional tectonic setting, we suggest that the cost-effective, high-quality results we present prove the methodology's potential to be incorporated into the design and analysis of volcano geodetic networks worldwide.
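A minimal sketch of the stochastic adjustment, reduced to a 1-D levelling case for brevity: benchmark heights are estimated from weighted height-difference observations by least squares, and the a posteriori covariance yields benchmark-specific uncertainties. A full 3-D combined adjustment assembles the same normal equations from all observation types; the observation values below are invented.

```python
# Weighted least-squares adjustment of a tiny levelling network.
import numpy as np

# Observations: h[j] - h[i] = dh, with standard deviation s (metres).
obs = [(0, 1, 1.002, 0.002), (1, 2, 0.498, 0.002), (0, 2, 1.503, 0.003)]
n = 3
A = np.zeros((len(obs), n)); l = np.zeros(len(obs)); w = np.zeros(len(obs))
for k, (i, j, dh, s) in enumerate(obs):
    A[k, i], A[k, j], l[k], w[k] = -1.0, 1.0, dh, 1.0 / s**2

A = A[:, 1:]                       # fix h[0] = 0 as the datum
N = A.T @ (w[:, None] * A)         # normal matrix
x = np.linalg.solve(N, A.T @ (w * l))
cov = np.linalg.inv(N)             # a posteriori covariance of the heights
print("heights:", x, "std:", np.sqrt(np.diag(cov)))
```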
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With the growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and to generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to the single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves the forecasting accuracy by up to 30%.
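A minimal sketch of the two-layer idea, assuming scikit-learn: several first-layer learners produce individual forecasts, and a second-layer blender combines them. The features, models and sizes are illustrative placeholders, not the paper's configuration.

```python
# Two-layer (stacked) ensemble for a regression forecast.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.random((500, 6))                     # stand-in lagged wind-speed features
y = 3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 500)

model = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                     random_state=0))],
    final_estimator=Ridge())                 # the blending (second) layer
model.fit(X[:400], y[:400])
print("holdout R^2:", model.score(X[400:], y[400:]))
```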
Costello, Tracy J; Falk, Catherine T; Ye, Kenny Q
2003-01-01
The Framingham Heart Study data, as well as a related simulated data set, were generously provided to the participants of the Genetic Analysis Workshop 13 in order that newly developed and emerging statistical methodologies could be tested on that well-characterized data set. The impetus driving the development of novel methods is to elucidate the contributions of genes, environment, and interactions between and among them, as well as to allow comparison between and validation of methods. The seven papers that comprise this group used data-mining methodologies (tree-based methods, neural networks, discriminant analysis, and Bayesian variable selection) in an attempt to identify the underlying genetics of cardiovascular disease and related traits in the presence of environmental and genetic covariates. Data-mining strategies are gaining popularity because they are extremely flexible and may have greater efficiency and potential in identifying the factors involved in complex disorders. While the methods grouped together here constitute a diverse collection, some papers asked similar questions with very different methods, while others used the same underlying methodology to ask very different questions. This paper briefly describes the data-mining methodologies applied to the Genetic Analysis Workshop 13 data sets and the results of those investigations. Copyright 2003 Wiley-Liss, Inc.
Challenges to the Learning Organization in the Context of Generational Diversity and Social Networks
ERIC Educational Resources Information Center
Kaminska, Renata; Borzillo, Stefano
2018-01-01
Purpose: The purpose of this paper is to gain a better understanding of the challenges to the emergence of a learning organization (LO) posed by a context of generational diversity and an enterprise social networking system (ESNS). Design/methodology/approach: This study uses a qualitative methodology based on an analysis of 20 semi-structured…
2010-04-01
DRDC Atlantic CR 2010-058. [Table-of-contents and acknowledgement fragments; recoverable content: Section 4.1.3.1, "Clock Synchronization, Network & Temporal Resolution", under "Methodological Results / Details", notes that drift in computer clock times, especially on laptops, affects network and temporal resolution. Acknowledgements thank Carl Helmick, Patti Devlin, Mike Taber, and the Dalhousie lab.]
A climate stress-test of the financial system
NASA Astrophysics Data System (ADS)
Battiston, Stefano; Mandel, Antoine; Monasterolo, Irene; Schütze, Franziska; Visentin, Gabriele
2017-03-01
The urgency of estimating the impact of climate risks on the financial system is increasingly recognized among scholars and practitioners. By adopting a network approach to financial dependencies, we look at how climate policy risk might propagate through the financial system. We develop a network-based climate stress-test methodology and apply it to large Euro Area banks in a `green' and a `brown' scenario. We find that direct and indirect exposures to climate-policy-relevant sectors represent a large portion of investors' equity portfolios, especially for investment and pension funds. Additionally, the portion of banks' loan portfolios exposed to these sectors is comparable to banks' capital. Our results suggest that climate policy timing matters. An early and stable policy framework would allow for smooth asset value adjustments and lead to potential net winners and losers. In contrast, a late and abrupt policy framework could have adverse systemic consequences.
NASA Astrophysics Data System (ADS)
Sanai, L.; Chenini, I.; Ben Mammou, A.; Mercier, E.
2015-01-01
The spatial distribution of fracturing in hard rocks is closely related to the structural setting and reflects the kinematic evolution. Quantitative and qualitative analysis of fracturing, combined with GIS techniques, is essential and efficient for the geometric characterization of the lineament network and for reconstructing the relative timing and interaction of the folding and fracturing histories. A detailed study of the area's geology, lithology and tectonics is likewise essential for any hydrogeological study. For that purpose we used a structural approach that consists in comparing fracture sets before and after unfolding, complemented by aerospace data and a DEM generated from a topographic map. The above methodology, applied in this study carried out on J. Rebia in northwestern Tunisia, demonstrated the heterogeneity of the fracturing network, its relation to fold growth through time, and its importance for groundwater flow.
De retibus socialibus et legibus momenti
NASA Astrophysics Data System (ADS)
Gayo-Avello, D.; Brenes, D. J.; Fernández-Fernández, D.; Fernández-Menéndez, M. E.; García-Suárez, R.
2011-05-01
Online Social Networks (OSNs) are a cutting-edge topic. Almost everybody —users, marketers, brands, companies, and researchers— is approaching OSNs to better understand them and take advantage of their benefits. Perhaps one of the key concepts underlying OSNs is that of influence, which is highly related, although not entirely identical, to those of popularity and centrality. Influence is, according to Merriam-Webster, "the capacity of causing an effect in indirect or intangible ways". Hence, in the context of OSNs, it has been proposed to analyze the clicks received by promoted URLs in order to check for any positive correlation between the number of visits and different "influence" scores. That evaluation methodology is used in this letter to compare a number of those techniques with a new method first described here. The new method is a simple and rather elegant solution which tackles influence in OSNs by applying a physical metaphor. On social networks and the laws of influence.
Analysis of dynamic brain oscillations: methodological advances.
Le Van Quyen, Michel; Bragin, Anatol
2007-07-01
In recent years, new recording technologies have advanced such that, at high temporal and spatial resolutions, oscillations of neuronal networks can be identified from simultaneous, multisite recordings. However, because of the deluge of multichannel data generated by these experiments, achieving the full potential of parallel neuronal recordings also depends on the development of new mathematical methods that can extract meaningful information relating to time, frequency and space. Here, we aim to bridge this gap by focusing on up-to-date recording techniques for measurement of network oscillations and new analysis tools for their quantitative assessment. In particular, we emphasize how these methods can be applied, what property might be inferred from neuronal signals and potentially productive future directions. This review is part of the INMED and TINS special issue, Physiogenic and pathogenic oscillations: the beauty and the beast, derived from presentations at the annual INMED and TINS symposium (http://inmednet.com).
Godino-Llorente, J I; Gómez-Vilda, P
2004-02-01
It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool to diagnose voice diseases, being a complementary technique to other methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network-based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed using short-term vectors calculated according to the well-known Mel Frequency Cepstral Coefficient parameterization. The paper shows that these architectures allow the detection of voice disorders--including glottic cancer--under highly reliable conditions. Within this context, the learning vector quantization methodology proved to be more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.
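A minimal sketch of the detection pipeline, assuming librosa for the MFCC front end and a multilayer perceptron as the frame classifier; the file path, stand-in features and labels are illustrative placeholders, not the paper's corpus.

```python
# MFCC front end plus MLP frame classifier for voice-disorder detection.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def frame_features(path):
    y, sr = librosa.load(path, sr=None)                     # path is a placeholder
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T    # one vector per frame

# Stand-in for stacked frame vectors and normal/pathological labels.
X = np.random.default_rng(6).normal(size=(2000, 13))
y = (X[:, 0] > 0).astype(int)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print("frame accuracy:", clf.score(X, y))
```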
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2017-10-01
In order to obtain double encryption via elliptic curve cryptography (ECC) and chaotic synchronisation, this study presents a design methodology for neural-network (NN)-based secure communications in multiple time-delay chaotic systems. ECC is an asymmetric encryption scheme whose strength is based on the difficulty of solving the elliptic curve discrete logarithm problem, a much harder problem than factoring integers. Because it is much harder, fewer bits suffice to provide the same level of security. To enhance the strength of the cryptosystem, we conduct double encryption that combines chaotic synchronisation with ECC. Using an improved genetic algorithm, a fuzzy controller is synthesised to realise exponential synchronisation and achieve optimal H∞ performance by minimising the disturbance attenuation level. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the proposed approach.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
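As a sketch of the forward model inside such an inference scheme: a Gillespie simulation of the stochastic Lotka-Volterra kinetics (prey birth, predation, predator death). The rate constants are illustrative; particle MCMC would repeatedly simulate this process to weight parameter proposals.

```python
# Gillespie simulation of stochastic Lotka-Volterra dynamics.
import numpy as np

def gillespie_lv(x, y, c=(0.5, 0.0025, 0.3), t_end=30.0, seed=7):
    rng = np.random.default_rng(seed)
    t, path = 0.0, [(0.0, x, y)]
    while t < t_end:
        h = np.array([c[0] * x, c[1] * x * y, c[2] * y])   # reaction hazards
        h0 = h.sum()
        if h0 == 0:                       # no reaction can fire
            break
        t += rng.exponential(1.0 / h0)    # time to next reaction
        r = rng.choice(3, p=h / h0)       # which reaction fires
        if r == 0:   x += 1               # prey birth
        elif r == 1: x -= 1; y += 1       # predation
        else:        y -= 1               # predator death
        path.append((t, x, y))
    return path

print(gillespie_lv(100, 100)[-1])         # final (time, prey, predators)
```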
NASA Astrophysics Data System (ADS)
Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji
2017-12-01
Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
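A minimal sketch of the first step, assuming a matrix of simulated tsunami waveform amplitudes (source scenarios by candidate locations): EOF spatial modes come from an SVD of the anomalies, and extrema of the leading modes point to informative gauge locations. The data are synthetic placeholders.

```python
# EOF analysis via SVD and extrema-based site selection.
import numpy as np

rng = np.random.default_rng(8)
data = rng.normal(size=(50, 200))            # 50 source scenarios, 200 sites
anom = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(anom, full_matrices=False)
modes = vt[:3]                               # leading EOF spatial modes

candidates = {int(np.argmax(np.abs(m))) for m in modes}   # one extremum per mode
print("candidate observation sites:", sorted(candidates))
```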
Motamedi, Shervin; Roy, Chandrabhushan; Shamshirband, Shahaboddin; Hashim, Roslan; Petković, Dalibor; Song, Ki-Il
2015-08-01
Ultrasonic pulse velocity is affected by defects in material structure. This study applied soft computing techniques to predict the ultrasonic pulse velocity for various peat and cement content mixtures over several curing periods. First, this investigation constructed a process to simulate the ultrasonic pulse velocity with an adaptive neuro-fuzzy inference system (ANFIS). An ANFIS network was then developed; its input and output layers consisted of four and one neurons, respectively. The four inputs were cement, peat and sand content (%) and curing period (days). The simulation results showed efficient performance of the proposed system. The ANFIS and experimental results were compared through the coefficient of determination and the root-mean-square error. In conclusion, the use of an ANFIS network enhances the prediction of strength. The simulation results confirmed the effectiveness of the suggested strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
Small-World Brain Networks Revisited
Bassett, Danielle S.; Bullmore, Edward T.
2016-01-01
It is nearly 20 years since the concept of a small-world network was first quantitatively defined, by a combination of high clustering and short path length; and about 10 years since this metric of complex network topology began to be widely applied to analysis of neuroimaging and other neuroscience data as part of the rapid growth of the new field of connectomics. Here, we review briefly the foundational concepts of graph theoretical estimation and generation of small-world networks. We take stock of some of the key developments in the field in the past decade and we consider in some detail the implications of recent studies using high-resolution tract-tracing methods to map the anatomical networks of the macaque and the mouse. In doing so, we draw attention to the important methodological distinction between topological analysis of binary or unweighted graphs, which have provided a popular but simple approach to brain network analysis in the past, and the topology of weighted graphs, which retain more biologically relevant information and are more appropriate to the increasingly sophisticated data on brain connectivity emerging from contemporary tract-tracing and other imaging studies. We conclude by highlighting some possible future trends in the further development of weighted small-worldness as part of a deeper and broader understanding of the topology and the functional value of the strong and weak links between areas of mammalian cortex. PMID:27655008
Local Higher-Order Graph Clustering
Yin, Hao; Benson, Austin R.; Leskovec, Jure; Gleich, David F.
2018-01-01
Local graph clustering methods aim to find a cluster of nodes by exploring a small region of the graph. These methods are attractive because they enable targeted clustering around a given seed node and are faster than traditional global graph clustering methods because their runtime does not depend on the size of the input graph. However, current local graph partitioning methods are not designed to account for the higher-order structures crucial to the network, nor can they effectively handle directed networks. Here we introduce a new class of local graph clustering methods that address these issues by incorporating higher-order network information captured by small subgraphs, also called network motifs. We develop the Motif-based Approximate Personalized PageRank (MAPPR) algorithm that finds clusters containing a seed node with minimal motif conductance, a generalization of the conductance metric for network motifs. We generalize existing theory to prove the fast running time (independent of the size of the graph) and obtain theoretical guarantees on the cluster quality (in terms of motif conductance). We also develop a theory of node neighborhoods for finding sets that have small motif conductance, and apply these results to the case of finding good seed nodes to use as input to the MAPPR algorithm. Experimental validation on community detection tasks in both synthetic and real-world networks shows that our new framework MAPPR outperforms the current edge-based personalized PageRank methodology. PMID:29770258
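A minimal sketch of motif conductance for the triangle motif on an undirected graph; the normalization below (cut motifs over the smaller motif volume) follows the usual conductance pattern and is an assumption here, not necessarily the paper's exact definition.

```python
# Triangle-motif conductance of a node set S.
import itertools
import networkx as nx

def triangles(G):
    seen = set()
    for u in G:
        for v, w in itertools.combinations(G[u], 2):
            if G.has_edge(v, w):
                tri = frozenset((u, v, w))
                if tri not in seen:
                    seen.add(tri)
                    yield tri

def motif_conductance(G, S):
    S = set(S)
    cut = vol_s = vol_c = 0
    for tri in triangles(G):
        k = sum(1 for n in tri if n in S)    # triangle end-points inside S
        vol_s += k
        vol_c += 3 - k
        cut += (0 < k < 3)                   # triangle straddles the cut
    return cut / min(vol_s, vol_c)

G = nx.karate_club_graph()
print(motif_conductance(G, {0, 1, 2, 3, 7, 13}))
```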
Petri Nets with Fuzzy Logic (PNFL): Reverse Engineering and Parametrization
Küffner, Robert; Petri, Tobias; Windhager, Lukas; Zimmer, Ralf
2010-01-01
Background The recent DREAM4 blind assessment provided a particularly realistic and challenging setting for network reverse engineering methods. The in silico part of DREAM4 solicited the inference of cycle-rich gene regulatory networks from heterogeneous, noisy expression data including time courses as well as knockout, knockdown and multifactorial perturbations. Methodology and Principal Findings We inferred and parametrized simulation models based on Petri Nets with Fuzzy Logic (PNFL). This completely automated approach correctly reconstructed networks with cycles as well as oscillating network motifs. PNFL was evaluated as the best performer on DREAM4 in silico networks of size 10 with an area under the precision-recall curve (AUPR) of 81%. Besides topology, we inferred a range of additional mechanistic details with good reliability, e.g. distinguishing activation from inhibition as well as dependent from independent regulation. Our models also performed well on new experimental conditions such as double knockout mutations that were not included in the provided datasets. Conclusions The inference of biological networks substantially benefits from methods that are expressive enough to deal with diverse datasets in a unified way. At the same time, overly complex approaches could generate multiple different models that explain the data equally well. PNFL appears to strike the balance between expressive power and complexity. This also applies to the intuitive representation of PNFL models combining a straightforward graphical notation with colloquial fuzzy parameters. PMID:20862218
Calibration Testing of Network Tap Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popovsky, Barbara; Chee, Brian; Frincke, Deborah A.
2007-11-14
Abstract: Understanding the behavior of network forensic devices is important to support prosecutions of malicious conduct on computer networks as well as legal remedies for false accusations of network management negligence. Individuals who seek to establish the credibility of network forensic data must speak competently about how the data was gathered and the potential for data loss. Unfortunately, manufacturers rarely provide information about the performance of low-layer network devices at a level that will survive legal challenges. This paper proposes a first step toward an independent calibration standard by establishing a validation testing methodology for evaluating forensic taps against manufacturer specifications. The methodology and the theoretical analysis that led to its development are offered as a conceptual framework for developing a standard and to "operationalize" network forensic readiness. This paper also provides details of an exemplar test, testing environment, procedures and results.
Bhattacharyya, Moitrayee; Vishveshwara, Saraswathi
2011-07-01
In this article, we present a novel application of a quantum clustering (QC) technique to objectively cluster the conformations, sampled by molecular dynamics simulations performed on different ligand bound structures of the protein. We further portray each conformational population in terms of dynamically stable network parameters which beautifully capture the ligand induced variations in the ensemble in atomistic detail. The conformational populations thus identified by the QC method and verified by network parameters are evaluated for different ligand bound states of the protein pyrrolysyl-tRNA synthetase (DhPylRS) from D. hafniense. The ligand/environment induced re-distribution of protein conformational ensembles forms the basis for understanding several important biological phenomena such as allostery and enzyme catalysis. The atomistic level characterization of each population in the conformational ensemble in terms of the re-orchestrated networks of amino acids is a challenging problem, especially when the changes are minimal at the backbone level. Here we demonstrate that the QC method is sensitive to such subtle changes and is able to cluster MD snapshots which are similar at the side-chain interaction level. Although we have applied these methods on simulation trajectories of a modest time scale (20 ns each), we emphasize that our methodology provides a general approach towards an objective clustering of large-scale MD simulation data and may be applied to probe multistate equilibria at higher time scales, and to problems related to protein folding for any protein or protein-protein/RNA/DNA complex of interest with a known structure.
Calamante, Fernando; Masterton, Richard A J; Tournier, Jacques-Donald; Smith, Robert E; Willats, Lisa; Raffelt, David; Connelly, Alan
2013-04-15
MRI provides a powerful tool for studying the functional and structural connections in the brain non-invasively. The technique of functional connectivity (FC) exploits the intrinsic temporal correlations of slow spontaneous signal fluctuations to characterise brain functional networks. In addition, diffusion MRI fibre-tracking can be used to study the white matter structural connections. In recent years, there has been considerable interest in combining these two techniques to provide an overall structural-functional description of the brain. In this work we applied the recently proposed super-resolution track-weighted imaging (TWI) methodology to demonstrate how whole-brain fibre-tracking data can be combined with FC data to generate a track-weighted (TW) FC map of FC networks. The method was applied to data from 8 healthy volunteers, and illustrated with (i) FC networks obtained using a seeded connectivity-based analysis (seeding in the precuneus/posterior cingulate cortex, PCC, known to be part of the default mode network), and (ii) with FC networks generated using independent component analysis (in particular, the default mode, attention, visual, and sensory-motor networks). TW-FC maps showed high intensity in white matter structures connecting the nodes of the FC networks. For example, the cingulum bundles show the strongest TW-FC values in the PCC seeded-based analysis, due to their major role in the connection between medial frontal cortex and precuneus/posterior cingulate cortex; similarly the superior longitudinal fasciculus was well represented in the attention network, the optic radiations in the visual network, and the corticospinal tract and corpus callosum in the sensory-motor network. The TW-FC maps highlight the white matter connections associated with a given FC network, and their intensity in a given voxel reflects the functional connectivity of the part of the nodes of the network linked by the structural connections traversing that voxel. They therefore contain a different (and novel) image contrast from that of the images used to generate them. The results shown in this study illustrate the potential of the TW-FC approach for the fusion of structural and functional data into a single quantitative image. This technique could therefore have important applications in neuroscience and neurology, such as for voxel-based comparison studies. Copyright © 2012 Elsevier Inc. All rights reserved.
Structural methodologies for auditing SNOMED.
Wang, Yue; Halper, Michael; Min, Hua; Perl, Yehoshua; Chen, Yan; Spackman, Kent A
2007-10-01
SNOMED is one of the leading health care terminologies being used worldwide. As such, quality assurance is an important part of its maintenance cycle. Methodologies for auditing SNOMED based on structural aspects of its organization are presented. In particular, automated techniques for partitioning SNOMED into smaller groups of concepts based primarily on relationships patterns are defined. Two abstraction networks, the area taxonomy and p-area taxonomy, are derived from the partitions. The high-level views afforded by these abstraction networks form the basis for systematic auditing. The networks tend to highlight errors that manifest themselves as irregularities at the abstract level. They also support group-based auditing, where sets of purportedly similar concepts are focused on for review. The auditing methodologies are demonstrated on one of SNOMED's top-level hierarchies. Errors discovered during the auditing process are reported.
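A minimal sketch of the partitioning idea: concepts sharing the same set of relationship types fall into the same group, and each group can then be reviewed together during auditing. The toy concepts and relationship types are illustrative, not SNOMED content.

```python
# Group concepts by their relationship-type signature.
from collections import defaultdict

relationships = {                      # concept -> set of relationship types
    "Appendicitis":  {"finding-site", "associated-morphology"},
    "Cholecystitis": {"finding-site", "associated-morphology"},
    "Fracture":      {"finding-site"},
}

areas = defaultdict(list)
for concept, rel_types in relationships.items():
    areas[frozenset(rel_types)].append(concept)   # one area per signature

for signature, concepts in areas.items():
    print(sorted(signature), "->", concepts)      # groups to review together
```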
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Patnaik, Surya N.
2000-01-01
A preliminary aircraft engine design methodology is being developed that utilizes a cascade optimization strategy together with neural network and regression approximation methods. The cascade strategy employs different optimization algorithms in a specified sequence. The neural network and regression methods are used to approximate solutions obtained from the NASA Engine Performance Program (NEPP), which implements engine thermodynamic cycle and performance analysis models. The new methodology is proving to be more robust and computationally efficient than the conventional optimization approach of using a single optimization algorithm with direct reanalysis. The methodology has been demonstrated on a preliminary design problem for a novel subsonic turbofan engine concept that incorporates a wave rotor as a cycle-topping device. Computations of maximum thrust were obtained for a specific design point in the engine mission profile. The results (depicted in the figure) show a significant improvement in the maximum thrust obtained using the new methodology in comparison to benchmark solutions obtained using NEPP in a manual design mode.
NASA Astrophysics Data System (ADS)
Caetano, Marco Antonio Leonel; Yoneyama, Takashi
2015-07-01
This work concerns the study of the effects felt by a network as a whole when a specific node is perturbed. Many real world systems can be described by network models in which the interactions of the various agents can be represented as an edge of a graph. With a graph model in hand, it is possible to evaluate the effect of deleting some of its edges on the architecture and values of nodes of the network. Eventually a node may end up isolated from the rest of the network and an interesting problem is to have a quantitative measure of the impact of such an event. For instance, in the field of finance, the network models are very popular and the proposed methodology allows to carry out "what if" tests in terms of weakening the links between the economic agents, represented as nodes. The two main concepts employed in the proposed methodology are (i) the vibrational IC-Information Centrality, which can provide a measure of the relative importance of a particular node in a network and (ii) autocatalytic networks that can indicate the evolutionary trends of the network. Although these concepts were originally proposed in the context of other fields of knowledge, they were also found to be useful in analyzing financial networks. In order to illustrate the applicability of the proposed methodology, a case of study using the actual data comprising stock market indices of 12 countries is presented.
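A minimal sketch of such a "what if" test, assuming NetworkX: information centrality (exposed there as current-flow closeness centrality) is computed before and after deleting an edge, quantifying the impact of weakening one link. The toy graph stands in for a network of market indices.

```python
# Information centrality before and after an edge deletion.
import networkx as nx

G = nx.karate_club_graph()                          # stand-in financial network
before = nx.current_flow_closeness_centrality(G)    # a.k.a. information centrality

H = G.copy()
H.remove_edge(0, 1)                                 # weaken one link
after = nx.current_flow_closeness_centrality(H)

node = 0
print(f"IC of node {node}: {before[node]:.3f} -> {after[node]:.3f}")
```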
NASA Astrophysics Data System (ADS)
González, D. L., II; Angus, M. P.; Tetteh, I. K.; Bello, G. A.; Padmanabhan, K.; Pendse, S. V.; Srinivas, S.; Yu, J.; Semazzi, F.; Kumar, V.; Samatova, N. F.
2015-01-01
Decades of hypothesis-driven and/or first-principles research have been applied towards the discovery and explanation of the mechanisms that drive climate phenomena, such as western African Sahel summer rainfall variability. Although connections between various climate factors have been theorized, not all of the key relationships are fully understood. We propose a data-driven approach to identify candidate players in this climate system, which can help explain underlying mechanisms and/or even suggest new relationships, to facilitate building a more comprehensive and predictive model of the modulatory relationships influencing a climate phenomenon of interest. We applied coupled heterogeneous association rule mining (CHARM), Lasso multivariate regression, and dynamic Bayesian networks to find relationships within a complex system, and explored means with which to obtain a consensus result from the application of such varied methodologies. Using this fusion of approaches, we identified relationships among climate factors that modulate Sahel rainfall. These relationships fall into two categories: well-known associations from prior climate knowledge, such as the relationship with the El Niño-Southern Oscillation (ENSO), and putative links, such as the North Atlantic Oscillation, that invite further research.
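The Lasso screening step can be illustrated with scikit-learn on synthetic data; the index names and coefficients below are placeholders, not the study's actual predictor set or results.

```python
# Minimal sketch of the Lasso step: screen candidate climate indices
# for association with a rainfall series. Data are synthetic.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, names = 60, ["ENSO", "NAO", "AMO", "IOD", "noise1", "noise2"]
X = rng.standard_normal((n, len(names)))
rainfall = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n)

model = LassoCV(cv=5).fit(X, rainfall)
for name, w in zip(names, model.coef_):
    if abs(w) > 1e-6:  # Lasso zeroes out non-informative predictors
        print(f"{name}: candidate driver, weight {w:+.2f}")
```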
NASA Astrophysics Data System (ADS)
Franch, B.; Skakun, S.; Vermote, E.; Roger, J. C.
2017-12-01
Surface albedo is an essential parameter not only for developing climate models, but also for most energy balance studies. While climate models are usually applied at coarse resolution, energy balance studies, which are mainly focused on agricultural applications, require a high spatial resolution. The albedo, estimated through the angular integration of the BRDF, requires an appropriate angular sampling of the surface. However, Sentinel-2A sampling characteristics, with nearly constant observation geometry and low illumination variation, prevent the derivation of a surface albedo product from its data alone. In this work, we apply an algorithm developed to derive Landsat surface albedo to Sentinel-2A. It is based on the BRDF parameters estimated from the MODerate Resolution Imaging Spectroradiometer (MODIS) CMG surface reflectance product (M{O,Y}D09) using the VJB method (Vermote et al., 2009). Sentinel-2A unsupervised classification images are used to disaggregate the BRDF parameters to the Sentinel-2 spatial resolution. We test the results over five different sites of the US SURFRAD network and compare them with albedo field measurements. Additionally, we also test this methodology using Landsat-8 images.
The Accounting Network: How Financial Institutions React to Systemic Crisis
Puliga, Michelangelo; Flori, Andrea; Pappalardo, Giuseppe; Chessa, Alessandro; Pammolli, Fabio
2016-01-01
The role of network theory in the study of the financial crisis has been widely noted in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarity techniques from companies' financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001-2013, covering the onset of the global financial crisis of mid-2007. After careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) against the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably, sensible regional aggregations show up, with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks towards similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence the communities' heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how the Accounting Network can be improved to reflect best practices in financial statement analysis.
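A minimal sketch of the construction follows, assuming unit-normalized statement vectors, an illustrative similarity threshold of 0.9, and a networkx version that ships Louvain community detection (networkx >= 2.8; earlier versions need the python-louvain package).

```python
# Hedged sketch of an Accounting Network: cosine similarity between
# banks' financial-statement vectors, thresholded into a graph, then
# Louvain community detection. All data are synthetic.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
banks = [f"bank{i}" for i in range(30)]
# Rows: banks; columns: normalized statement items (assets, loans, ...).
S = rng.random((30, 8))
S /= np.linalg.norm(S, axis=1, keepdims=True)

G = nx.Graph()
G.add_nodes_from(banks)
sim = S @ S.T  # cosine similarity of unit-normalized rows
for i in range(len(banks)):
    for j in range(i + 1, len(banks)):
        if sim[i, j] > 0.9:  # similarity threshold (tunable)
            G.add_edge(banks[i], banks[j], weight=sim[i, j])

communities = nx.community.louvain_communities(G, seed=0)
print("community sizes:", [len(c) for c in communities])
```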
A network-based analysis of CMIP5 "historical" experiments
NASA Astrophysics Data System (ADS)
Bracco, A.; Foudalis, I.; Dovrolis, C.
2012-12-01
In computer science, "complex network analysis" refers to a set of metrics, modeling tools and algorithms commonly used in the study of complex nonlinear dynamical systems. Its main premise is that the underlying topology or network structure of a system has a strong impact on its dynamics and evolution. By allowing investigation of local and non-local statistical interactions, network analysis provides a powerful, but only marginally explored, framework to validate climate models and investigate teleconnections, assessing their strength, range, and impacts on the climate system. In this work we propose a new, fast, robust and scalable methodology to examine, quantify, and visualize climate sensitivity, while constraining general circulation models (GCMs) outputs with observations. The goal of our novel approach is to uncover relations in the climate system that are not (or not fully) captured by more traditional methodologies used in climate science and often adopted from nonlinear dynamical systems analysis, and to explain known climate phenomena in terms of the network structure or its metrics. Our methodology is based on a solid theoretical framework and employs mathematical and statistical tools, exploited only tentatively in climate research so far. Suitably adapted to the climate problem, these tools can assist in visualizing the trade-offs in representing global links and teleconnections among different data sets. Here we present the methodology, and compare network properties for different reanalysis data sets and a suite of CMIP5 coupled GCM outputs. With an extensive model intercomparison in terms of the climate network that each model leads to, we quantify how each model reproduces major teleconnections, rank model performances, and identify common or specific errors in comparing model outputs and observations.
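A minimal sketch of such a climate network on synthetic data: grid-point anomaly series are correlated, strong correlations become edges, and degree fields highlight teleconnection hubs. The threshold and the data are illustrative, not the study's.

```python
# Minimal sketch of a climate network: nodes are grid points, edges link
# pairs whose anomaly time series correlate strongly.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_nodes, n_months = 50, 240
common = rng.standard_normal(n_months)        # shared "teleconnection"
data = rng.standard_normal((n_nodes, n_months))
data[:10] += 0.8 * common                     # ten strongly linked nodes

corr = np.corrcoef(data)
G = nx.Graph()
G.add_nodes_from(range(n_nodes))
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        if abs(corr[i, j]) > 0.5:             # illustrative threshold
            G.add_edge(i, j, weight=corr[i, j])

# Degree fields are one of the network metrics used for intercomparison.
degrees = dict(G.degree())
print("high-degree (teleconnection) nodes:",
      sorted(degrees, key=degrees.get, reverse=True)[:5])
```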
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
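The surrogate-plus-modeling-error idea can be sketched as follows. The forward function is a toy stand-in for full-waveform traveltime modeling, the surrogate is a small scikit-learn MLP, and the sampler is a plain Metropolis loop; none of this reproduces the paper's actual setup.

```python
# Hedged sketch: replace an expensive forward model with a trained
# neural network and fold the surrogate's modeling error into the
# likelihood of a Metropolis sampler.
import numpy as np
from sklearn.neural_network import MLPRegressor

def forward_expensive(m):
    # Toy stand-in for 2-D full-waveform modeling + traveltime picking.
    return np.array([m[0] + 0.5 * m[1]**2, m[0] * m[1]])

rng = np.random.default_rng(4)
M = rng.uniform(-1, 1, size=(500, 2))                # prior samples
D = np.array([forward_expensive(m) for m in M])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(M, D)

# Quantify the surrogate's modeling error probabilistically.
resid = D - net.predict(M)
sigma_model = resid.std(axis=0)

d_obs = forward_expensive(np.array([0.3, -0.4]))
sigma_data = 0.01
sigma = np.sqrt(sigma_data**2 + sigma_model**2)      # combined error

def log_like(m):
    r = (net.predict(m[None, :])[0] - d_obs) / sigma
    return -0.5 * float(r @ r)

m, ll, chain = np.zeros(2), log_like(np.zeros(2)), []
for _ in range(5000):                                # Metropolis sampler
    prop = m + 0.1 * rng.standard_normal(2)
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:
        m, ll = prop, ll_prop
    chain.append(m.copy())
print("posterior mean:", np.mean(chain[1000:], axis=0))
```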
NASA Astrophysics Data System (ADS)
Zou, Yong; Donner, Reik V.; Kurths, Jürgen
2015-02-01
Long-range correlated processes are ubiquitous, ranging from climate variables to financial time series. One paradigmatic example for such processes is fractional Brownian motion (fBm). In this work, we highlight the potential, as well as the conceptual and practical limitations, of applying the recently proposed recurrence network (RN) approach to fBm and related stochastic processes. In particular, we demonstrate that the results of a previous application of RN analysis to fBm [Liu et al. Phys. Rev. E 89, 032814 (2014), 10.1103/PhysRevE.89.032814] are mainly due to an inappropriate treatment disregarding the intrinsic nonstationarity of such processes. Complementarily, we analyze some RN properties of the closely related stationary fractional Gaussian noise (fGn) processes and find that the resulting network properties are well-defined and behave as one would expect from basic conceptual considerations. Our results demonstrate that RN analysis can indeed provide meaningful results for stationary stochastic processes, given a proper selection of its intrinsic methodological parameters, whereas it is prone to fail to uniquely retrieve RN properties for nonstationary stochastic processes like fBm.
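A minimal epsilon-recurrence-network sketch on a stationary series follows; white noise stands in for fGn (a proper fGn generator, e.g. spectral synthesis, would replace it), and the embedding and density parameters are illustrative.

```python
# Minimal sketch of a recurrence network (RN): link two time points if
# their embedded states are closer than a threshold epsilon.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
x = rng.standard_normal(500)                # stationary stand-in series

# Time-delay embedding (dimension 3, delay 2).
dim, tau = 3, 2
emb = np.column_stack([x[i * tau: len(x) - (dim - 1 - i) * tau]
                       for i in range(dim)])

dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = np.quantile(dist[np.triu_indices(len(emb), 1)], 0.05)  # 5% density
A = (dist < eps) & ~np.eye(len(emb), dtype=bool)

G = nx.from_numpy_array(A.astype(int))
print("RN transitivity:", nx.transitivity(G))
```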
A neural network approach to lung nodule segmentation
NASA Astrophysics Data System (ADS)
Hu, Yaoxiu; Menon, Prahlad G.
2016-03-01
Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has shown promise for the detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed methods begin with thorax segmentation, lung extraction and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale hessian-based vesselness filter is applied to extract the vasculature within the lungs. The lung vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules using shape and intensity features, which together are used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for detection of lung nodules in our testing dataset, with an overall accuracy of 97.62%+/-0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation studies. Receiver operating characteristic analysis for identifying nodules revealed an area under the curve of 0.9476.
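The final classification stage can be sketched with scikit-learn; the 11 features below are synthetic placeholders for the shape and intensity features, and the 4-fold protocol mirrors the cross-validation described above.

```python
# Hedged sketch of the classification stage: candidate-region features
# feed a small neural network, evaluated with 4-fold cross-validation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 400
X = rng.standard_normal((n, 11))          # 11 shape/intensity features
y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.standard_normal(n) > 0).astype(int)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(4),
                         scoring="accuracy")
print(f"4-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```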
Optimization of an electromagnetic linear actuator using a network and a finite element model
NASA Astrophysics Data System (ADS)
Neubert, Holger; Kamusella, Alfred; Lienig, Jens
2011-03-01
Model based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from nominal values are considered. We describe an optimization methodology that involves these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiment (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces in place of the combined system model in all stochastic analysis steps. Thus, Monte Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.
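A minimal sketch of the response-surface Monte Carlo step follows, with a hypothetical stand-in for the combined network/FE model and illustrative tolerance densities.

```python
# Minimal sketch: fit a response surface to a few runs of the (slow)
# combined model, then run Monte Carlo on the cheap surface with design
# variables drawn from their densities.
import numpy as np

def combined_model(x):
    # Hypothetical stand-in for the network/FE actuator model,
    # returning e.g. a stroke time for design variables x.
    return 1.0 + 0.3 * x[0] - 0.2 * x[1] + 0.1 * x[0] * x[1]

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(25, 2))            # DoE sample
y = np.array([combined_model(x) for x in X])
A = np.column_stack([np.ones(25), X, X**2, X[:, 0] * X[:, 1]])
c, *_ = np.linalg.lstsq(A, y, rcond=None)       # quadratic surface

# Monte Carlo on the response surface: tolerances as normal densities.
n = 100_000
xs = rng.normal(loc=[0.2, -0.1], scale=[0.05, 0.08], size=(n, 2))
As = np.column_stack([np.ones(n), xs, xs**2, xs[:, 0] * xs[:, 1]])
t = As @ c
print(f"P(stroke time > 1.1) = {(t > 1.1).mean():.4f}")
```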
Karpf, Christian; Krebs, Peter
2011-05-01
The management of sewer systems requires information about discharge and variability of typical wastewater sources in urban catchments. Especially the infiltration of groundwater and the inflow of surface water (I/I) are important for making decisions about the rehabilitation and operation of sewer networks. This paper presents a methodology to identify I/I and estimate its quantity. For each flow fraction in sewer networks, an individual model approach is formulated whose parameters are optimised by the method of least squares. This method was applied to estimate the contributions to the wastewater flow in the sewer system of the City of Dresden (Germany), where data availability is good. Absolute flows of I/I and their temporal variations are estimated. Further information on the characteristics of infiltration is gained by clustering and grouping sewer pipes according to the attributes construction year and groundwater influence and relating these resulting classes to infiltration behaviour. Further, it is shown that condition classes based on CCTV-data can be used to estimate the infiltration potential of sewer pipes.
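A minimal sketch of the least-squares separation of flow fractions follows; the three-component model (foul sewage with a diurnal pattern, groundwater infiltration, rain-induced inflow) and all signals are illustrative simplifications of the paper's per-fraction approaches.

```python
# Hedged sketch: fit a flow-fraction model to an observed flow series
# by least squares. Signals and parameters are synthetic.
import numpy as np
from scipy.optimize import least_squares

t = np.arange(0, 14 * 24)                        # hourly, two weeks
diurnal = 1.0 + 0.4 * np.sin(2 * np.pi * (t % 24) / 24 - 1.5)
rain = (np.random.default_rng(8).random(len(t)) > 0.97).astype(float)

def model(p):
    q_foul, q_inf, k_inflow = p
    return q_foul * diurnal + q_inf + k_inflow * rain

q_obs = model([120.0, 45.0, 200.0]) + \
        np.random.default_rng(9).normal(0, 5, len(t))

fit = least_squares(lambda p: model(p) - q_obs, x0=[100.0, 10.0, 50.0])
q_foul, q_inf, k_inflow = fit.x
print(f"estimated infiltration approx. {q_inf:.0f} (units of q_obs)")
```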
The use of the bicycle compatibility index in identifying gaps and deficiencies in bicycle networks
NASA Astrophysics Data System (ADS)
Ilie, A.; Oprea, C.; Costescu, D.; Roşca, E.; Dinu, O.; Ghionea, F.
2016-11-01
Currently, no methodology is widely accepted by engineers, planners, or bicycle coordinators that allows them to determine how compatible a roadway is in providing efficient operation of both bicycles and motor vehicles. Previous studies reported a number of approaches to obtaining an appropriate level of service; some authors developed the bicycle level of service (BLOS) and other authors developed bicycle compatibility indexes (BCI). The level of service (BLOS) for a bicycle route represents an evaluation of the safety and comfort perceived by a bicyclist relative to motorized traffic while riding on the road surface. The bicycle compatibility index (BCI) is used by bicycle coordinators, transportation planners, and traffic engineers to evaluate the capability of specific roadways to accommodate both motorists and bicyclists and to plan for and design roadways that are bicycle compatible. After applying the BCI and BLOS models to the designed bicycle infrastructure network in the city of Dej, one can see that only a few streets are moderately low in compatibility compared to the others, whose high degree of compatibility recommends including them in the bicycle infrastructure network.
Milz, Patricia; Pascual-Marqui, Roberto D; Lehmann, Dietrich; Faber, Pascal L
2016-05-01
Functional states of the brain are constituted by the temporally attuned activity of spatially distributed neural networks. Such networks can be identified by independent component analysis (ICA) applied to frequency-dependent source-localized EEG data. This methodology allows the identification of networks at high temporal resolution in frequency bands of established location-specific physiological functions. EEG measurements are sensitive to neural activity changes in cortical areas of modality-specific processing. We tested effects of modality-specific processing on functional brain networks. Phasic modality-specific processing was induced via tasks (state effects) and tonic processing was assessed via modality-specific person parameters (trait effects). Modality-specific person parameters and 64-channel EEG were obtained from 70 male, right-handed students. Person parameters were obtained using cognitive style questionnaires, cognitive tests, and thinking modality self-reports. EEG was recorded during four conditions: spatial visualization, object visualization, verbalization, and resting. Twelve cross-frequency networks were extracted from source-localized EEG across six frequency bands using ICA. RMANOVAs, Pearson correlations, and path modelling examined effects of tasks and person parameters on networks. Results identified distinct state- and trait-dependent functional networks. State-dependent networks were characterized by decreased, trait-dependent networks by increased alpha activity in sub-regions of modality-specific pathways. Pathways of competing modalities showed opposing alpha changes. State- and trait-dependent alpha were associated with inhibitory and automated processing, respectively. Antagonistic alpha modulations in areas of competing modalities likely prevent intruding effects of modality-irrelevant processing. Considerable research has suggested alpha modulations related to modality-specific states and traits. This study identified the distinct electrophysiological cortical frequency-dependent networks within which they operate.
A scoping review of indirect comparison methods and applications using individual patient data.
Veroniki, Areti Angeliki; Straus, Sharon E; Soobiah, Charlene; Elliott, Meghan J; Tricco, Andrea C
2016-04-27
Several indirect comparison methods, including network meta-analyses (NMAs), using individual patient data (IPD), have been developed to synthesize evidence from a network of trials. Although IPD indirect comparisons are published with increasing frequency in health care literature, there is no guidance on selecting the appropriate methodology and on reporting the methods and results. In this paper we examine the methods and reporting of indirect comparison methods using IPD. We searched MEDLINE, Embase, the Cochrane Library, and CINAHL from inception until October 2014. We included published and unpublished studies reporting a method, application, or review of indirect comparisons using IPD and at least three interventions. We identified 37 papers, including a total of 33 empirical networks. Of these, only 9 (27 %) IPD-NMAs reported the existence of a study protocol, whereas 3 (9 %) studies mentioned that protocols existed without providing a reference. The 33 empirical networks included 24 (73 %) IPD-NMAs and 9 (27 %) matching adjusted indirect comparisons (MAICs). Of the 21 (64 %) networks with at least one closed loop, 19 (90 %) were IPD-NMAs, 13 (68 %) of which evaluated the prerequisite consistency assumption, and only 5 (38 %) of the 13 IPD-NMAs used statistical approaches. The median number of trials included per network was 10 (IQR 4-19) (IPD-NMA: 15 [IQR 8-20]; MAIC: 2 [IQR 3-5]), and the median number of IPD trials included in a network was 3 (IQR 1-9) (IPD-NMA: 6 [IQR 2-11]; MAIC: 2 [IQR 1-2]). Half of the networks (17; 52 %) applied Bayesian hierarchical models (14 one-stage, 1 two-stage, 1 used IPD as an informative prior, 1 unclear-stage), including either IPD alone or with aggregated data (AD). Models for dichotomous and continuous outcomes were available (IPD alone or combined with AD), as were models for time-to-event data (IPD combined with AD). One in three indirect comparison methods modeling IPD adjusted results from different trials to estimate effects as if they had come from the same randomized population. Key methodological and reporting elements (e.g., evaluation of consistency, existence of study protocol) were often missing from an indirect comparison paper.
van Dam, Jesse C J; Schaap, Peter J; Martins dos Santos, Vitor A P; Suárez-Diez, María
2014-09-26
Different methods have been developed to infer regulatory networks from heterogeneous omics datasets and to construct co-expression networks. Each algorithm produces different networks and efforts have been devoted to automatically integrate them into consensus sets. However each separate set has an intrinsic value that is diluted and partly lost when building a consensus network. Here we present a methodology to generate co-expression networks and, instead of a consensus network, we propose an integration framework where the different networks are kept and analysed with additional tools to efficiently combine the information extracted from each network. We developed a workflow to efficiently analyse information generated by different inference and prediction methods. Our methodology relies on providing the user the means to simultaneously visualise and analyse the coexisting networks generated by different algorithms, heterogeneous datasets, and a suite of analysis tools. As a showcase, we have analysed the gene co-expression networks of Mycobacterium tuberculosis generated using over 600 expression experiments. Regarding DNA damage repair, we identified SigC as a key control element, 12 new targets for LexA, an updated LexA binding motif, and a potential mismatch repair system. We expanded the DevR regulon with 27 genes while identifying 9 targets wrongly assigned to this regulon. We discovered 10 new genes linked to zinc uptake and a new regulatory mechanism for ZuR. The use of co-expression networks to perform system level analysis allows the development of custom made methodologies. As showcases, we implemented a pipeline to integrate ChIP-seq data and another method to uncover multiple regulatory layers. Our workflow is based on representing the multiple types of information as network representations and presenting these networks in a synchronous framework that allows their simultaneous visualization while keeping specific associations from the different networks. By simultaneously exploring these networks and metadata, we gained insights into regulatory mechanisms in M. tuberculosis that could not be obtained through the separate analysis of each data type.
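The keep-the-networks-side-by-side idea can be sketched as follows; two correlation measures stand in for the different inference algorithms, and the expression data are synthetic.

```python
# Minimal sketch: build one co-expression network per method and query
# them side by side instead of collapsing them into a consensus.
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

rng = np.random.default_rng(12)
genes = [f"g{i}" for i in range(20)]
expr = rng.standard_normal((20, 100))      # genes x experiments
expr[1] = expr[0] + 0.1 * rng.standard_normal(100)  # co-expressed pair

def network(score, thresh):
    G = nx.Graph()
    G.add_nodes_from(genes)
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            if abs(score[i, j]) > thresh:
                G.add_edge(genes[i], genes[j])
    return G

rho, _ = spearmanr(expr, axis=1)           # rank correlation matrix
nets = {"pearson": network(np.corrcoef(expr), 0.8),
        "spearman": network(rho, 0.8)}

# Cross-network queries replace consensus building: which edges does
# each method support, and which are method-specific?
shared = set(nets["pearson"].edges()) & set(nets["spearman"].edges())
print("edges supported by both methods:", sorted(shared))
```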
Actor-Network Theory and methodology: Just what does it mean to say that nonhumans have agency?
Sayes, Edwin
2014-02-01
Actor-Network Theory is a controversial social theory. In no respect is this more so than the role it 'gives' to nonhumans: nonhumans have agency, as Latour provocatively puts it. This article aims to interrogate the multiple layers of this declaration to understand what it means to assert with Actor-Network Theory that nonhumans exercise agency. The article surveys a wide corpus of statements by the position's leading figures and emphasizes the wider methodological framework in which these statements are embedded. With this work done, readers will then be better placed to reject or accept the Actor-Network position - understanding more precisely what exactly is at stake in this decision.
Short-term forecasting of turbidity in trunk main networks.
Meyers, Gregory; Kapelan, Zoran; Keedwell, Edward
2017-11-01
Water discolouration is an increasingly important and expensive issue due to rising customer expectations, tighter regulatory demands and ageing Water Distribution Systems (WDSs) in the UK and abroad. This paper presents a new turbidity forecasting methodology capable of aiding operational staff and enabling proactive management strategies. The turbidity forecasting methodology developed here is completely data-driven and does not require a hydraulic or water quality network model, which is expensive to build and maintain. The methodology is tested and verified on a real trunk main network with observed turbidity measurement data. Results obtained show that the methodology can detect if discolouration material is mobilised, estimate if sufficient turbidity will be generated to exceed a preselected threshold and approximate how long the material will take to reach the downstream meter. Classification based forecasts of turbidity can be reliably made up to 5 h ahead, although at the expense of increased false alarm rates. The methodology presented here could be used as an early warning system that can enable a multitude of cost-beneficial proactive management strategies to be implemented as an alternative to expensive trunk mains cleaning programs.
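A minimal sketch of the data-driven classification forecast follows, with synthetic features standing in for the hydraulic and turbidity signals actually used and a random forest standing in for whichever classifier the authors selected.

```python
# Hedged sketch: classify whether turbidity will exceed a threshold
# hours ahead from recent features, without a network model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(10)
n = 2000
flow_change = rng.standard_normal(n)      # proxy for mobilizing shear
recent_turb = rng.random(n)
X = np.column_stack([flow_change, recent_turb])
# Synthetic exceedance label 5 h ahead, tied to flow disturbances:
y = ((flow_change > 1.0) & (recent_turb > 0.3)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print(classification_report(yte, clf.predict(Xte)))
```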
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
30-Day Comment Request; Data Collection To Understand How NIH Programs Apply Methodologies To Improve... The notice concerns data collection to understand how NIH programs apply methodologies to improve their research programs' organizational effectiveness.
Inference of gene regulatory networks from time series by Tsallis entropy
2011-01-01
Background The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is here proposed. Results In this paper we introduce the use of generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach and the conditional entropy is applied as criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks and its gene transference function is obtained from random drawing on the set of possible Boolean functions, thus creating its dynamics. On the other hand, DREAM time series data presents variation of network size and its topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions A remarkable improvement of accuracy was observed in the experimental results by reducing the number of false connections in the inferred topology by the non-Shannon entropy. The obtained best free parameter of the Tsallis entropy was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
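The criterion itself is compact: the Tsallis entropy is S_q(p) = (1 - sum_i p_i^q)/(q - 1), and the feature-selection step scores candidate predictor sets by a conditional version of it. A minimal sketch on synthetic binary gene signals follows; the estimator and data are illustrative, not the DimReduction implementation.

```python
# Minimal sketch of Tsallis entropy and a conditional version used to
# score candidate regulators in GRN inference (lower is better).
import numpy as np
from collections import Counter

def tsallis(p, q=2.5):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (1.0 - np.sum(p**q)) / (q - 1.0)

def conditional_tsallis(target, predictors, q=2.5):
    # H_q(target | predictors), estimated from joint samples.
    groups = Counter(map(tuple, predictors))
    n, h = len(target), 0.0
    for state, count in groups.items():
        mask = np.array([tuple(row) == state for row in predictors])
        p = np.bincount(target[mask]) / count
        h += (count / n) * tsallis(p, q)
    return h

rng = np.random.default_rng(11)
g1 = rng.integers(0, 2, 200)                        # candidate regulator
g2 = (g1 ^ (rng.random(200) < 0.1)).astype(int)     # noisy copy: target
print(conditional_tsallis(g2, g1[:, None]))         # low: good predictor
print(conditional_tsallis(g2, rng.integers(0, 2, (200, 1))))  # higher
```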
NASA Astrophysics Data System (ADS)
Muñoz, Randy; Paredes, Javier; Huggel, Christian; Drenkhan, Fabian; García, Javier
2017-04-01
The availability and consistency of data is a determining factor for the reliability of any hydrological model and its simulated results. Unfortunately, there are many regions worldwide where data are not available in the desired quantity and quality. The Santa River basin (SRB), located within a complex topographic and climatic setting in the tropical Andes of Peru, is a clear example of this challenging situation. A monitoring network of in-situ stations in the SRB recorded series of hydro-meteorological variables but ceased to operate in 1999. In the following years, several researchers evaluated and completed many of these series. This database was used by multiple research and policy-oriented projects in the SRB. However, hydroclimatic information remains limited, making it difficult to perform research, especially when dealing with the assessment of current and future water resources. In this context, we present an evaluation of different methodologies to interpolate temperature and precipitation data at a monthly time step, as well as ice volume data, in glacierized basins with limited data. The methodologies were evaluated for the Quillcay River, a tributary of the SRB, where hydro-meteorological data have been available from nearby monitoring stations since 1983. The study period was 1983-1999, with a validation period of 1993-1999. For the temperature series, the aim was to extend the observed data and interpolate it. Data from the NCEP reanalysis were used to extend the observed series: 1) using a simple correlation with multiple field stations, or 2) applying the altitudinal correction proposed in previous studies. The interpolation was then applied as a function of altitude. Both methodologies provide very similar results; by parsimony, simple correlation is the more viable choice. For the precipitation series, the aim was to interpolate observed data. Two methodologies were evaluated: 1) inverse distance weighting, whose results underestimate the amount of precipitation in high-altitude zones, and 2) ordinary kriging (OK), whose variograms were calculated from the multi-annual monthly mean precipitation and applied to the whole study period. OK leads to better results in both low- and high-altitude zones. For ice volume, the aim was to estimate values from historical data: 1) with the GlabTop algorithm, which requires digital elevation models that are only available at an appropriate scale since 2009, and 2) with a widely applied but controversially discussed glacier area-volume relation whose parameters were calibrated against results from the GlabTop model. Both methodologies provide reasonable results, but for historical data the area-volume scaling requires only the glacier area, which is easy to calculate from satellite images available since 1986. In conclusion, the simple correlation, OK, and the calibrated area-volume relation proved the best ways to interpolate glacio-climatic information. However, these methods must be carefully applied and revisited for specific situations of high complexity. This is a first step towards identifying the most appropriate methods to interpolate and extend observed data in glacierized basins with limited information. Further research should evaluate other methodologies and meteorological datasets in order to improve hydrological models and water management policies.
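The IDW candidate can be written in a few lines; ordinary kriging would replace the inverse-distance weights with variogram-derived ones. Station coordinates and precipitation values below are illustrative, not the Quillcay data.

```python
# Minimal sketch of inverse distance weighting (IDW) interpolation.
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    d = np.linalg.norm(xy_obs[None, :, :] - xy_new[:, None, :], axis=-1)
    d = np.maximum(d, 1e-9)                 # avoid division by zero
    w = 1.0 / d**power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

# Stations (x, y in km) and monthly precipitation (mm), illustrative:
xy_obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z_obs = np.array([80.0, 95.0, 120.0, 140.0])
print(idw(xy_obs, z_obs, np.array([[5.0, 5.0]])))   # one target point
```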
Assessment of Mixed-Layer Height Estimation from Single-wavelength Ceilometer Profiles
Knepp, Travis N.; Szykman, James J.; Long, Russell; Duvall, Rachelle M.; Krug, Jonathan; Beaver, Melinda; Cavender, Kevin; Kronmiller, Keith; Wheeler, Michael; Delgado, Ruben; Hoff, Raymond; Berkoff, Timothy; Olson, Erik; Clark, Richard; Wolfe, Daniel; Van Gilst, David; Neil, Doreen
2018-01-01
Differing boundary/mixed-layer height measurement methods were assessed in moderately-polluted and clean environments, with a focus on the Vaisala CL51 ceilometer. This intercomparison was performed as part of ongoing measurements at the Chemistry And Physics of the Atmospheric Boundary Layer Experiment (CAPABLE) site in Hampton, Virginia and during the 2014 Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) field campaign that took place in and around Denver, Colorado. We analyzed CL51 data that were collected via two different methods (BLView software, which applied correction factors, and simple terminal emulation logging) to determine the impact of data collection methodology. Further, we evaluated the STRucture of the ATmosphere (STRAT) algorithm as an open-source alternative to BLView (note that the current work presents an evaluation of the BLView and STRAT algorithms and does not intend to act as a validation of either). Filtering criteria were defined according to the change in mixed-layer height (MLH) distributions for each instrument and algorithm and were applied throughout the analysis to remove high-frequency fluctuations from the MLH retrievals. Of primary interest was determining how the different data-collection methodologies and algorithms compare to each other and to radiosonde-derived boundary-layer heights when deployed as part of a larger instrument network. We determined that data-collection methodology is not as important as the processing algorithm and that much of the algorithm differences might be driven by impacts of local meteorology and precipitation events that pose algorithm difficulties. The results of this study show that a common processing algorithm is necessary for LIght Detection And Ranging (LIDAR)-based MLH intercomparisons, and ceilometer-network operation and that sonde-derived boundary layer heights are higher (10–15% at mid-day) than LIDAR-derived mixed-layer heights. We show that averaging the retrieved MLH to 1-hour resolution (an appropriate time scale for a priori data model initialization) significantly improved correlation between differing instruments and differing algorithms.
[Consensus paper on the terminological differentiation of various aspects of body experience].
Röhricht, Frank; Seidler, Klaus-Peter; Joraschky, Peter; Borkenhagen, Ada; Lausberg, Hedda; Lemche, Erwin; Loew, Thomas; Porsch, Udo; Schreiber-Willnow, Karin; Tritt, Karin
2005-01-01
In the past, phenomenological research on subjective body experience was characterised by vaguely defined terminology and methodological shortcomings. The term "body image" has been applied heterogeneously in the literature in order to describe a variety of bodily phenomena. In this paper, the German terminology applied to the phenomenology of body experiences is described systematically. In developing a systematic terminology the authors refer to scientific evidence as well as recent reviews, and closely adhere to definitions commonly used in the English literature. Different perspectives are utilised, particularly anthropological concepts and theories from developmental and self-psychology. Distinct aspects of body experience are described within the context of a network of external determinants and along a continuum between somatic and mental anchor points. Applying the term "body experience" as an umbrella term, different aspects are defined: perceptive (body schema/body percept), affective (body cathexis), cognitive-evaluative (body image, body ego) and body consciousness. It is emphasized that the distinct description of functional levels has to be taken as an approximation of the reality of integrated body experience.
Analysis of the interaction between experimental and applied behavior analysis.
Virues-Ortega, Javier; Hurtado-Parrado, Camilo; Cox, Alison D; Pear, Joseph J
2014-01-01
To study the influences between basic and applied research in behavior analysis, we analyzed the coauthorship interactions of authors who published in JABA and JEAB from 1980 to 2010. We paid particular attention to authors who published in both JABA and JEAB (dual authors) as potential agents of cross-field interactions. We present a comprehensive analysis of dual authors' coauthorship interactions using social networks methodology and key word analysis. The number of dual authors more than doubled (26 to 67) and their productivity tripled (7% to 26% of JABA and JEAB articles) between 1980 and 2010. Dual authors stood out in terms of number of collaborators, number of publications, and ability to interact with multiple groups within the field. The steady increase in JEAB and JABA interactions through coauthors and the increasing range of topics covered by dual authors provide a basis for optimism regarding the progressive integration of basic and applied behavior analysis.
NASA Astrophysics Data System (ADS)
Gjaja, Marin N.
1997-11-01
Neural networks for supervised and unsupervised learning are developed and applied to problems in remote sensing, continuous map learning, and speech perception. Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART networks synthesize fuzzy logic and neural networks, and supervised ARTMAP networks incorporate ART modules for prediction and classification. New ART and ARTMAP methods resulting from analyses of data structure, parameter specification, and category selection are developed. Architectural modifications providing flexibility for a variety of applications are also introduced and explored. A new methodology for automatic mapping from Landsat Thematic Mapper (TM) and terrain data, based on fuzzy ARTMAP, is developed. System capabilities are tested on a challenging remote sensing problem, prediction of vegetation classes in the Cleveland National Forest from spectral and terrain features. After training at the pixel level, performance is tested at the stand level, using sites not seen during training. Results are compared to those of maximum likelihood classifiers, backpropagation neural networks, and K-nearest neighbor algorithms. Best performance is obtained using a hybrid system based on a convex combination of fuzzy ARTMAP and maximum likelihood predictions. This work forms the foundation for additional studies exploring fuzzy ARTMAP's capability to estimate class mixture composition for non-homogeneous sites. Exploratory simulations apply ARTMAP to the problem of learning continuous multidimensional mappings. A novel system architecture retains basic ARTMAP properties of incremental and fast learning in an on-line setting while adding components to solve this class of problems. The perceptual magnet effect is a language-specific phenomenon arising early in infant speech development that is characterized by a warping of speech sound perception. An unsupervised neural network model is proposed that embodies two principal hypotheses supported by experimental data: that sensory experience guides language-specific development of an auditory neural map and that a population vector can predict psychological phenomena based on map cell activities. Model simulations show how a nonuniform distribution of map cell firing preferences can develop from language-specific input and give rise to the magnet effect.
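The fuzzy ART dynamics referenced above follow a standard choice/vigilance/learning cycle, sketched below with commonly used parameter roles (alpha, rho, beta are tunable; values here are illustrative).

```python
# Hedged sketch of fuzzy ART: choice function T_j = |I ^ w_j| /
# (alpha + |w_j|), vigilance test |I ^ w_j| / |I| >= rho, and
# learning of the winner's weights (beta = 1 is fast learning).
import numpy as np

def fuzzy_art_step(I, W, alpha=0.001, rho=0.75, beta=1.0):
    """One presentation of (complement-coded) input I to categories W."""
    if len(W) == 0:
        return [I.copy()], 0                  # first input founds a node
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
    for j in np.argsort(T)[::-1]:             # search by choice value
        w = W[j]
        if np.minimum(I, w).sum() / I.sum() >= rho:   # vigilance passed
            W[j] = beta * np.minimum(I, w) + (1 - beta) * w
            return W, j
    W.append(I.copy())                        # no resonance: new category
    return W, len(W) - 1

# Complement coding keeps |I| constant and avoids category proliferation.
x = np.array([0.2, 0.9])
I = np.concatenate([x, 1 - x])
W, j = fuzzy_art_step(I, [])
W, j = fuzzy_art_step(np.concatenate([[0.25, 0.85], [0.75, 0.15]]), W)
print(len(W), "categories; last winner:", j)
```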
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
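One of the surveyed techniques, kriging, can be sketched with scikit-learn's Gaussian process regressor as the metamodel of a hypothetical deterministic code; the function and design points below are illustrative.

```python
# Minimal sketch of a kriging (Gaussian process) metamodel replacing
# an expensive deterministic analysis code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_code(x):
    # Hypothetical deterministic simulation output.
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0, 2, 8)[:, None]        # small design of experiments
y = expensive_code(X[:, 0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
x_new = np.array([[1.37]])
mean, std = gp.predict(x_new, return_std=True)
print(f"metamodel: {mean[0]:.3f} +/- {std[0]:.3f}; "
      f"truth: {expensive_code(x_new[0, 0]):.3f}")
```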
The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic
NASA Technical Reports Server (NTRS)
Armstrong, Curtis D.; Humphreys, William M.
2003-01-01
We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
Performance evaluation of a distance learning program.
Dailey, D J; Eno, K R; Brinkley, J F
1994-01-01
This paper presents a performance metric which uses a single number to characterize the response time for a non-deterministic client-server application operating over the Internet. When applied to a Macintosh-based distance learning application called the Digital Anatomist Browser, the metric allowed us to observe that "A typical student doing a typical mix of Browser commands on a typical data set will experience the same delay if they use a slow Macintosh on a local network or a fast Macintosh on the other side of the country accessing the data over the Internet." The methodology presented is applicable to other client-server applications that are rapidly appearing on the Internet.
Whole cell entrapment techniques.
Trelles, Jorge A; Rivero, Cintia W
2013-01-01
Microbial whole cells are efficient, ecological, and low-cost catalysts that have been successfully applied in the pharmaceutical, environmental, and alimentary industries, among others. Microorganism immobilization is a good way to carry out the bioprocess under preparative conditions. The main advantages of this methodology lie in its high operational stability, easy upstream separation and bioprocess scale-up feasibility. Cell entrapment is the most widely used technique for whole cell immobilization. In this technique, the cells are included within a rigid network that is porous enough to allow the diffusion of substrates and products; it protects the selected microorganism from the reaction medium and offers high immobilization efficiency (100% in most cases).
HOLA: Human-like Orthogonal Network Layout.
Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael
2016-01-01
Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.
Graphical tools for network meta-analysis in STATA.
Chaimani, Anna; Higgins, Julian P T; Mavridis, Dimitris; Spyridonos, Panagiota; Salanti, Georgia
2013-01-01
Network meta-analysis synthesizes direct and indirect evidence in a network of trials that compare multiple interventions and has the potential to rank the competing treatments according to the studied outcome. Despite its usefulness, network meta-analysis is often criticized for its complexity and for being accessible only to researchers with strong statistical and computational skills. The evaluation of the underlying model assumptions, the statistical technicalities and presentation of the results in a concise and understandable way are all challenging aspects in the network meta-analysis methodology. In this paper we aim to make the methodology accessible to non-statisticians by presenting and explaining a series of graphical tools via worked examples. To this end, we provide a set of STATA routines that can be easily employed to present the evidence base, evaluate the assumptions, fit the network meta-analysis model and interpret its results.
Bolia, Robert S; Nelson, W Todd
2007-05-01
The recently promulgated doctrine of network-centric warfare suggests that increases in shared situation awareness and self-synchronization will be emergent properties of densely connected military networks. What it fails to say is how these enhancements are to be measured. The present article frames the discussion as a question of how to characterize team performance, and considers such performance in the context of its hypothetical components: situation awareness, workload, and error. This examination concludes that reliable measures of these constructs are lacking for teams, even when they exist for individual operators, and that this is due to philosophical and/or methodological flaws in their conceptual development. Additional research is recommended to overcome these deficiencies, as well as consideration of novel multidisciplinary approaches that draw on methodologies employed in the social, physical, and biological sciences.
Willis, Cameron; Kernoghan, Alison; Riley, Barbara; Popp, Janice; Best, Allan; Milward, H Brinton
2015-11-19
We conducted a mixed methods study from June 2014 to March 2015 to assess the perspectives of stakeholders in networks that adopt a population approach for chronic disease prevention (CDP). The purpose of the study was to identify important and feasible outcome measures for monitoring network performance. Participants from CDP networks in Canada completed an online concept mapping exercise, which was followed by interviews with network stakeholders to further understand the findings. Nine concepts were considered important outcomes of CDP networks: enhanced learning, improved use of resources, enhanced or increased relationships, improved collaborative action, network cohesion, improved system outcomes, improved population health outcomes, improved practice and policy planning, and improved intersectoral engagement. Three themes emerged from participant interviews related to measurement of the identified concepts: the methodological difficulties in measuring network outcomes, the dynamic nature of network evolution and function and implications for outcome assessment, and the challenge of measuring multisectoral engagement in CDP networks. Results from this study provide initial insights into concepts that can be used to describe the outcomes of networks for CDP and may offer foundations for strengthening network outcome-monitoring strategies and methodologies.
Sanz-Cabanillas, Juan Luis; Ruano, Juan; Gomez-Garcia, Francisco; Alcalde-Mellado, Patricia; Gay-Mimbrera, Jesus; Aguilar-Luque, Macarena; Maestre-Lopez, Beatriz; Gonzalez-Padilla, Marcelino; Carmona-Fernandez, Pedro J; Velez Garcia-Nieto, Antonio; Isla-Tejera, Beatriz
2017-01-01
Moderate-to-severe psoriasis is associated with significant comorbidity, an impaired quality of life, and increased medical costs, including those associated with treatments. Systematic reviews (SRs) and meta-analyses (MAs) of randomized clinical trials are considered two of the best approaches to the summarization of high-quality evidence. However, methodological bias can reduce the validity of conclusions from these types of studies and subsequently impair the quality of decision making. As co-authorship is among the most well-documented forms of research collaboration, the present study aimed to explore whether authors' collaboration methods might influence the methodological quality of SRs and MAs of psoriasis. Methodological quality was assessed by two raters who extracted information from full articles. After calculating total and per-item Assessment of Multiple Systematic Reviews (AMSTAR) scores, reviews were classified as low (0-4), medium (5-8), or high (9-11) quality. Article metadata and journal-related bibliometric indices were also obtained. A total of 741 authors from 520 different institutions and 32 countries published 220 reviews that were classified as high (17.2%), moderate (55%), or low (27.7%) methodological quality. The high methodological quality subnetwork was larger but had a lower connection density than the low and moderate methodological quality subnetworks; specifically, the former contained relatively fewer nodes (authors and reviews), reviews by authors, and collaborators per author. Furthermore, the high methodological quality subnetwork was highly compartmentalized, with several modules representing few poorly interconnected communities. In conclusion, structural differences in author-paper affiliation network may influence the methodological quality of SRs and MAs on psoriasis. As the author-paper affiliation network structure affects study quality in this research field, authors who maintain an appropriate balance between scientific quality and productivity are more likely to develop higher quality reviews.
SNMP-SI: A Network Management Tool Based on Slow Intelligence System Approach
NASA Astrophysics Data System (ADS)
Colace, Francesco; de Santo, Massimo; Ferrandino, Salvatore
The last decade has witnessed an intense spread of computer networks, further accelerated by the introduction of wireless networks. This growth has been accompanied by a significant increase in network management problems. Especially in small companies, where no personnel are assigned to these tasks, the management of such networks is often complex and malfunctions can have significant impacts on their businesses. A possible solution is the adoption of the Simple Network Management Protocol (SNMP), a standard protocol used to exchange network management information that is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite. SNMP provides a tool for network administrators to manage network performance, find and solve network problems, and plan for network growth. SNMP has a big disadvantage, however: its simple design means that the information it deals with is neither detailed nor well organized enough to meet expanding modern networking requirements. Over the past years, much effort has been devoted to overcoming the limitations of SNMP, and new frameworks have been developed; a promising approach involves the use of ontologies. This is the starting point of this paper, where a novel approach to network management based on Slow Intelligence System methodologies and ontology-based techniques is proposed. A Slow Intelligence System is a general-purpose system characterized by its ability to improve performance over time through a process involving enumeration, propagation, adaptation, elimination and concentration. The proposed approach therefore aims to develop a system able to acquire, according to the SNMP standard, information from the various hosts in the managed networks and apply solutions in order to solve problems. To check the feasibility of this model, first experimental results in a real scenario are shown.
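The acquisition step can be sketched with the pysnmp library, assuming its 4.x "hlapi" interface; the host address and community string below are placeholders.

```python
# Hedged sketch: query a managed host's MIB over SNMP with pysnmp 4.x.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),            # SNMPv2c
    UdpTransportTarget(('192.0.2.1', 161)),        # placeholder host
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication or error_status:
    print('SNMP query failed:', error_indication or error_status)
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))
```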
Kreula, Sanna M.; Kaewphan, Suwisa; Ginter, Filip
2018-01-01
The increasing move towards open access full-text scientific literature enhances our ability to utilize advanced text-mining methods to construct information-rich networks that no human will be able to grasp simply from 'reading the literature'. The utility of text-mining for well-studied species is obvious, though the utility for less studied species, or those with no prior track record at all, is not clear. Here we present a concept for how advanced text-mining can be used to create information-rich networks even for less well studied species and apply it to generate an open-access gene-gene association network resource for Synechocystis sp. PCC 6803, a representative model organism for cyanobacteria and a first case study for the methodology. By merging the text-mining network with networks generated from species-specific experimental data, network integration was used to enhance the accuracy of predicting novel interactions that are biologically relevant. A rule-based algorithm (filter) was constructed in order to automate the search for novel candidate genes with a high degree of likely association to known target genes by (1) ignoring established relationships from the existing literature, as they are already 'known', and (2) demanding multiple independent pieces of evidence for every novel and potentially relevant relationship. Using selected case studies, we demonstrate the utility of the network resource and filter to (i) discover novel candidate associations between different genes or proteins in the network, and (ii) rapidly evaluate the potential role of any one particular gene or protein. The full network is provided as an open-source resource. PMID:29844966
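To make the two filter rules concrete, here is a minimal sketch on a toy gene-gene association graph; the gene identifiers, evidence sources and threshold are invented for illustration and are not taken from the resource itself.

```python
import networkx as nx

# Toy gene-gene association network; names and sources are illustrative.
g = nx.Graph()
g.add_edge('slr1834', 'sll0199', sources={'text_mining', 'coexpression'})
g.add_edge('slr1834', 'ssl3093', sources={'text_mining'})
g.add_edge('slr1834', 'sll1867', sources={'text_mining', 'phylogeny'})

known = {frozenset({'slr1834', 'sll1867'})}  # already established in the literature

def novel_candidates(graph, target, min_sources=2):
    for neighbour in graph.neighbors(target):
        if frozenset({target, neighbour}) in known:
            continue                                       # rule 1: skip known links
        if len(graph[target][neighbour]['sources']) >= min_sources:
            yield neighbour                                # rule 2: multiple evidence

print(list(novel_candidates(g, 'slr1834')))  # -> ['sll0199']
```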
Attainable region analysis for continuous production of second generation bioethanol.
Scott, Felipe; Conejeros, Raúl; Aroca, Germán
2013-11-29
Despite its semi-commercial status, ethanol production from lignocellulosics presents many complexities not yet fully solved. Since the pretreatment stage has been recognized as a complex and yield-determining step, it has been extensively studied. However, economic success of the production process also requires optimization of the biochemical conversion stage. This work addresses the search for bioreactor configurations with improved residence times for continuous enzymatic saccharification and fermentation operations. Instead of analyzing each possible configuration through simulation, we apply graphical methods to optimize the residence time of reactor networks composed of steady-state reactors. Although this can easily be done for processes described by a single kinetic expression, the reactions under analysis do not exhibit this feature. Hence, the attainable region method, able to handle multiple species and their reactions, was applied for continuous reactors. Additionally, the effects of the sugars contained in the pretreatment liquor on the enzymatic hydrolysis and on simultaneous saccharification and fermentation (SSF) were assessed. We obtained candidate attainable regions for separate enzymatic hydrolysis and fermentation (SHF) and SSF operations, both fed with pretreated corn stover. Results show that, despite the complexity of the reaction networks and underlying kinetics, the reactor networks that minimize the residence time can be constructed using plug flow reactors and continuous stirred tank reactors. Regarding the effect of soluble solids in the feed stream to the reactor network, for SHF higher glucose concentration and yield are achieved for enzymatic hydrolysis with washed solids. Similarly, for SSF, higher yields and bioethanol titers are obtained using this substrate. In this work, we demonstrated the capabilities of attainable region analysis as a tool to assess the optimal reactor network with minimum residence time, applied to the SHF and SSF operations for lignocellulosic ethanol production. The methodology can be readily modified to evaluate other kinetic models of different substrates, enzymes and microorganisms when available. From the obtained results, the most suitable reactor configuration considering residence time and rheological aspects is a continuous stirred tank reactor followed by a plug flow reactor (both in SSF mode) using washed solids as substrate.
Auditing complex concepts of SNOMED using a refined hierarchical abstraction network.
Wang, Yue; Halper, Michael; Wei, Duo; Gu, Huanying; Perl, Yehoshua; Xu, Junchuan; Elhanan, Gai; Chen, Yan; Spackman, Kent A; Case, James T; Hripcsak, George
2012-02-01
Auditors of a large terminology, such as SNOMED CT, face a daunting challenge. To aid them in their efforts, it is essential to devise techniques that can automatically identify concepts warranting special attention. "Complex" concepts, which by their very nature are more difficult to model, fall neatly into this category. A special kind of grouping, called a partial-area, is utilized in the characterization of complex concepts. In particular, the complex concepts that are the focus of this work are those appearing in intersections of multiple partial-areas and are thus referred to as overlapping concepts. In a companion paper, an automatic methodology for identifying and partitioning the entire collection of overlapping concepts into disjoint, singly-rooted groups, which are more manageable to work with and comprehend, was presented. The partitioning methodology formed the foundation for the development of an abstraction network for the overlapping concepts called a disjoint partial-area taxonomy. This new disjoint partial-area taxonomy offers a collection of semantically uniform partial-areas and is exploited herein as the basis for a novel auditing methodology. The review of the overlapping concepts is done in a top-down order within semantically uniform groups. These groups are themselves reviewed in a top-down order, which proceeds from the less complex to the more complex overlapping concepts. The results of applying the methodology to SNOMED's Specimen hierarchy are presented. Hypotheses regarding error ratios for overlapping concepts and between different kinds of overlapping concepts are formulated. Two phases of auditing the Specimen hierarchy for two releases of SNOMED are reported on. With the use of the double bootstrap and Fisher's exact test (two-tailed), the auditing of concepts, and especially of roots of overlapping partial-areas, is shown to yield a statistically significantly higher proportion of errors. Copyright © 2011 Elsevier Inc. All rights reserved.
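For reference, the statistical comparison named above (a two-tailed Fisher's exact test on erroneous vs. correct concepts in two groups) can be run as in the sketch below; the counts are invented placeholders, not the study's data.

```python
from scipy.stats import fisher_exact

#                    erroneous  correct   (hypothetical counts)
overlapping_roots = [12, 28]
other_concepts    = [15, 145]

odds_ratio, p_value = fisher_exact([overlapping_roots, other_concepts],
                                   alternative='two-sided')
print(f'odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}')
```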
Hybrid analysis for indicating patients with breast cancer using temperature time series.
Silva, Lincoln F; Santos, Alair Augusto S M D; Bravo, Renato S; Silva, Aristófanes C; Muchaluat-Saade, Débora C; Conci, Aura
2016-07-01
Breast cancer is the most common cancer among women worldwide. Diagnosis and treatment in early stages increase cure chances. The temperature of cancerous tissue is generally higher than that of healthy surrounding tissues, making thermography an option to be considered in screening strategies for this cancer type. This paper proposes a hybrid methodology for analyzing dynamic infrared thermography in order to indicate patients with risk of breast cancer, using unsupervised and supervised machine learning techniques, which characterizes the methodology as hybrid. Dynamic infrared thermography monitors or quantitatively measures temperature changes on the examined surface after a thermal stress. During the dynamic infrared thermography examination, a sequence of breast thermograms is generated. In the proposed methodology, this sequence is processed and analyzed by several techniques. First, the region of the breasts is segmented and the thermograms of the sequence are registered. Then, temperature time series are built and the k-means algorithm is applied to these series for various values of k. The clustering formed by the k-means algorithm for each value of k is evaluated using clustering validation indices, and the resulting values are treated as features in the classification model construction step. A data mining tool was used to solve the combined algorithm selection and hyperparameter optimization (CASH) problem in classification tasks. Besides the classification algorithm recommended by the data mining tool, classifiers based on Bayesian networks, neural networks, decision rules and decision trees were executed on the data set used for evaluation. Test results support that the proposed analysis methodology is able to indicate patients with breast cancer. Among 39 tested classification algorithms, K-Star and Bayes Net presented 100% classification accuracy. Furthermore, among the Bayes Net, multi-layer perceptron, decision table and random forest classification algorithms, an average accuracy of 95.38% was obtained. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
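A minimal sketch of the clustering-derived features described above, assuming scikit-learn and using the silhouette index as one example validation index (the study uses several); the time series here are synthetic stand-ins for real per-pixel temperature series.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
series = rng.normal(size=(500, 20))   # 500 pixels x 20 thermogram frames (synthetic)

features = []
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(series)
    features.append(silhouette_score(series, labels))  # one feature per value of k

print(np.round(features, 3))   # feature vector fed to the downstream classifiers
```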
NASA Astrophysics Data System (ADS)
Moglia, Magnus; Sharma, Ashok K.; Maheepala, Shiroma
2012-07-01
Planning of regional and urban water resources, and in particular with Integrated Urban Water Management approaches, often considers inter-relationships between human uses of water, the health of the natural environment as well as the cost of various management strategies. Decision makers hence typically need to consider a combination of social, environmental and economic goals. The types of strategies employed can include water efficiency measures, water sensitive urban design, stormwater management, or catchment management. Therefore, decision makers need to choose between different scenarios and to evaluate them against a number of criteria. This type of problem has a discipline devoted to it, i.e. Multi-Criteria Decision Analysis, which has often been applied in water management contexts. This paper describes the application of Subjective Logic in a basic Bayesian Network to a Multi-Criteria Decision Analysis problem. By doing this, it outlines a novel methodology that explicitly incorporates uncertainty and information reliability. The application of the methodology to a known case study context allows for exploration. By making uncertainty and reliability of assessments explicit, it allows for assessing risks of various options, and this may help in alleviating cognitive biases and move towards a well formulated risk management policy.
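For orientation, a subjective-logic opinion attaches an explicit uncertainty mass to each assessment; the sketch below shows the standard projected-probability formula E = b + a·u with illustrative numbers. This is a generic illustration of Subjective Logic, not the paper's specific model.

```python
# A subjective-logic opinion: belief b, disbelief d, uncertainty u
# (b + d + u = 1) and base rate a. Values below are illustrative.
def projected_probability(b, d, u, a=0.5):
    assert abs(b + d + u - 1.0) < 1e-9, "opinion components must sum to 1"
    return b + a * u

# A criterion assessed from a reliable source carries little uncertainty
# mass; an unreliable assessment carries more, pulling the projection
# toward the base rate:
print(projected_probability(b=0.7, d=0.1, u=0.2))   # 0.80
print(projected_probability(b=0.4, d=0.1, u=0.5))   # 0.65
```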
Rizo-Decelis, L D; Pardo-Igúzquiza, E; Andreo, B
2017-12-15
In order to treat and evaluate the available water quality data and fully exploit monitoring results (e.g. characterize regional patterns, optimize monitoring networks, infer conditions at unmonitored locations, etc.), it is crucial to develop improved and efficient methodologies. Accordingly, estimation of water quality along fluvial ecosystems is a frequent task in environmental studies. In this work, a particular case of this problem is examined, namely, the estimation of water quality along a main stem of a large basin (where most anthropic activity takes place), from observational data measured along this river channel. We adapted topological kriging to this case, where each watershed contains all the watersheds of the upstream observed data ("nested support effect"). Data analysis was additionally extended by taking into account the upstream distance to the closest contamination hotspot as an external drift. We propose choosing the best estimation method by cross-validation. The methodological approach in spatial variability modeling may be used for optimizing the water quality monitoring of a given watercourse. The methodology presented is applied to 28 water quality variables measured along the Santiago River in Western Mexico. Copyright © 2017 Elsevier B.V. All rights reserved.
Srinivasan, Srikant; Broderick, Scott R; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B; Saxena, Surendra K; LeBeau, James M; Rajan, Krishna
2015-12-18
A data-driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table, however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.
Assessment of SRS ambient air monitoring network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, K.; Jannik, T.
Three methodologies have been used to assess the effectiveness of the existing ambient air monitoring system in place at the Savannah River Site in Aiken, SC. Effectiveness was measured using two metrics that have been utilized in previous quantification of air-monitoring network performance: frequency of detection (a measurement of how frequently a minimum number of samplers within the network detect an event) and network intensity (a measurement of how consistent each sampler within the network is at detecting events). In addition to determining the effectiveness of the current system, the objective of performing this assessment was to determine what, if any, changes could make the system more effective. The methodologies included 1) the Waite method of determining sampler distribution, 2) the CAP88-PC annual dose model, and 3) a puff/plume transport model used to predict air concentrations at sampler locations. Comparison of data collected from air samplers at SRS in 2015 with the predictions of these methodologies determined that the frequency of detection for the current system is 79.2%, with sampler efficiencies ranging from 5% to 45% and a mean network intensity of 21.5%. One of the air monitoring stations had an efficiency of less than 10% and detected releases during just one sampling period of the entire year, adding little to the overall network intensity. By moving or removing this sampler, the mean network intensity increased to about 23%. Further work on increasing the network intensity and simulating accident scenarios to further test the ambient air system at SRS is planned.
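Both metrics can be computed directly from a detection matrix. The sketch below uses assumed definitions (one event per sampling period, detection recorded as a boolean per sampler) and synthetic data; the assessment's exact definitions may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
detected = rng.random((26, 19)) < 0.2   # 26 periods x 19 samplers (synthetic)

min_samplers = 1
frequency_of_detection = (detected.sum(axis=1) >= min_samplers).mean()
sampler_efficiency = detected.mean(axis=0)       # per-sampler detection rate
network_intensity = sampler_efficiency.mean()    # mean efficiency over samplers

print(f'frequency of detection: {frequency_of_detection:.1%}')
print(f'network intensity:      {network_intensity:.1%}')
```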
Exploring Educational and Cultural Adaptation through Social Networking Sites
ERIC Educational Resources Information Center
Ryan, Sherry D.; Magro, Michael J.; Sharp, Jason H.
2011-01-01
Social networking sites have seen tremendous growth and are widely used around the world. Nevertheless, the use of social networking sites in educational contexts is an under explored area. This paper uses a qualitative methodology, autoethnography, to investigate how social networking sites, specifically Facebook[TM], can help first semester…
Mapping the Field of Educational Administration Research: A Journal Citation Network Analysis
ERIC Educational Resources Information Center
Wang, Yinying; Bowers, Alex J.
2016-01-01
Purpose: The purpose of this paper is to uncover how knowledge is exchanged and disseminated in the educational administration research literature through the journal citation network. Design/ Methodology/Approach: Drawing upon social network theory and citation network studies in other disciplines, the authors constructed an educational…
Network planning under uncertainties
NASA Astrophysics Data System (ADS)
Ho, Kwok Shing; Cheung, Kwok Wai
2008-11-01
One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. Failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with traffic requirements varying over time had been studied. Up to now, this kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under the uncertainties brought by fluctuations in topology, to meet the requirement that the network remains intact for up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called the Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework for more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related, and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a generic framework for solving the network planning problem under uncertainties. In addition to reviewing the various network planning problems involving uncertainties, we also propose that a unified framework based on robust optimization can be used to solve a rather large segment of network planning problems under uncertainties. Robust optimization was first introduced in the operations research literature and is a framework that incorporates information about the uncertainty sets for the parameters in the optimization model. Even though robust optimization originated from tackling uncertainty in the optimization process, it can serve as a comprehensive and suitable framework for tackling generic network planning problems under uncertainties. In this paper, we begin by explaining the main ideas behind the robust optimization approach. Then we demonstrate the capabilities of the proposed framework by giving some examples of how the robust optimization framework can be applied to current common network planning problems under uncertain environments. Next, we list some practical considerations for solving the network planning problem under uncertainties with the proposed framework. Finally, we conclude this article with some thoughts on future directions for applying this framework to solve other network planning problems.
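The core move in robust optimization is to enforce each constraint for every realization in the uncertainty set, which for interval (box) uncertainty reduces to the worst case. A toy capacity-sizing sketch with scipy follows; the numbers are illustrative, not from the paper.

```python
from scipy.optimize import linprog

cost_per_unit = 1.0
demand_interval = (80.0, 120.0)   # uncertainty set for the traffic demand

# Robust counterpart: x >= d for all d in [80, 120]  <=>  x >= 120.
# linprog minimizes c^T x subject to A_ub x <= b_ub.
res = linprog(c=[cost_per_unit],
              A_ub=[[-1.0]], b_ub=[-demand_interval[1]],
              bounds=[(0, None)])
print(res.x)   # [120.] -- the capacity feasible for every demand in the set
```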
Controlling allosteric networks in proteins
NASA Astrophysics Data System (ADS)
Dokholyan, Nikolay
2013-03-01
We present a novel methodology based on graph theory and discrete molecular dynamics simulations for delineating allosteric pathways in proteins. We use this methodology to uncover the structural mechanisms responsible for the coupling of distal sites on proteins and utilize it for allosteric modulation of proteins. We will present examples where inference of allosteric networks and their rewiring allows us to "rescue" the cystic fibrosis transmembrane conductance regulator (CFTR), a protein associated with the fatal genetic disease cystic fibrosis. We also use our methodology to control protein function allosterically. We design a novel protein domain that can be inserted into an identified allosteric site of a target protein. Using a drug that binds to our domain, we alter the function of the target protein. We successfully tested this methodology in vitro, in living cells and in zebrafish. We further demonstrate the transferability of our allosteric modulation methodology to other systems and extend it to become light-activatable.
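A hedged sketch of the graph-theoretic ingredient: residues as nodes, contacts as edges, and candidate allosteric pathways read off as shortest paths between distal sites. The contact list below is invented; the actual methodology derives couplings from discrete molecular dynamics ensembles.

```python
import networkx as nx

# Toy residue-contact graph (residue indices and contacts are invented).
contacts = [(1, 2), (2, 5), (5, 9), (9, 14), (2, 7), (7, 14), (14, 21)]
g = nx.Graph(contacts)

# One candidate coupling route between two distal sites:
pathway = nx.shortest_path(g, source=1, target=21)
print(pathway)   # [1, 2, 7, 14, 21]
```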
De Brún, Aoife; McAuliffe, Eilish
2018-03-13
Health systems research recognizes the complexity of healthcare, and the interacting and interdependent nature of components of a health system. To better understand such systems, innovative methods are required to depict and analyze their structures. This paper describes social network analysis as a methodology to depict, diagnose, and evaluate health systems and networks therein. Social network analysis is a set of techniques to map, measure, and analyze social relationships between people, teams, and organizations. Through use of a case study exploring support relationships among senior managers in a newly established hospital group, this paper illustrates some of the commonly used network- and node-level metrics in social network analysis, and demonstrates the value of these maps and metrics to understand systems. Network analysis offers a valuable approach to health systems and services researchers as it offers a means to depict activity relevant to network questions of interest, to identify opinion leaders, influencers, clusters in the network, and those individuals serving as bridgers across clusters. The strengths and limitations inherent in the method are discussed, and the applications of social network analysis in health services research are explored.
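For illustration, the commonly used node-level metrics mentioned above can be computed with networkx on a toy support network; the names and ties are fictitious.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [('A', 'B'), ('B', 'C'), ('A', 'C'),   # cluster 1
         ('D', 'E'), ('E', 'F'), ('D', 'F'),   # cluster 2
         ('C', 'D')]                           # C and D bridge the clusters
g = nx.Graph(edges)

# Betweenness centrality flags likely bridgers; community detection
# reveals the clusters they connect.
print(nx.betweenness_centrality(g))            # C and D score highest
print([sorted(c) for c in greedy_modularity_communities(g)])
```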
Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Kurtz, Nolan Scot
2014-09-01
The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
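To fix ideas on the reliability quantity being estimated, here is a plain Monte Carlo sketch of two-terminal network reliability under independent component failures. The study itself uses adaptive importance sampling, which concentrates samples on failure-relevant regions; plain sampling is shown only as the baseline.

```python
import random
import networkx as nx

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # toy 4-node lifeline network
p_fail = 0.1                               # independent edge failure probability
random.seed(0)

def two_terminal_reliability(n_trials=20000, s=0, t=3):
    connected = 0
    for _ in range(n_trials):
        g = nx.Graph([e for e in edges if random.random() > p_fail])
        g.add_nodes_from([s, t])           # keep terminals even when isolated
        connected += nx.has_path(g, s, t)
    return connected / n_trials

print(two_terminal_reliability())   # roughly 0.88 for this toy network
```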
A novel integrated framework and improved methodology of computer-aided drug design.
Chen, Calvin Yu-Chian
2013-01-01
Computer-aided drug design (CADD) is a critical initiating step of drug development, but a single model capable of covering all design aspects remains to be elucidated. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine learning based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian network, pharmacophore modeling, and a structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. The proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM) as an example for validating our new protocol. Eight potent leads were identified from six TCM sources. A joint validation system comprised of comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand pathway analysis was also performed to predict ligand entry into and exit from the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.
Zhang, Qin
2015-07-01
Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology.
Pey, Jon; Rubio, Angel; Theodoropoulos, Constantinos; Cascante, Marta; Planes, Francisco J
2012-07-01
Constraint-based modeling is an emergent area in Systems Biology that includes an increasing set of methods for the analysis of metabolic networks. In order to refine its predictions, the development of novel methods integrating high-throughput experimental data is currently a key challenge in the field. In this paper, we present a novel set of constraints that integrate tracer-based metabolomics data from Isotope Labeling Experiments and metabolic fluxes in a linear fashion. These constraints are based on Elementary Carbon Modes (ECMs), a recently developed concept that generalizes Elementary Flux Modes at the carbon level. To illustrate the effect of our ECM-based constraints, a Flux Variability Analysis approach was applied to a previously published metabolic network involving the main pathways in the metabolism of glucose. The addition of our ECM-based constraints substantially reduced the under-determination resulting from a standard application of Flux Variability Analysis, which shows clear progress over the state of the art. In addition, our approach is adjusted to deal with the combinatorial explosion of ECMs in genome-scale metabolic networks. This extension was applied to infer the maximum biosynthetic capacity of non-essential amino acids in human metabolism. Finally, as linearity is the hallmark of our approach, its importance is discussed at a methodological, computational and theoretical level and illustrated with a practical application in the field of Isotope Labeling Experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
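As background for the claim about reduced under-determination: Flux Variability Analysis computes, for each reaction, the interval of flux values compatible with the constraints. A toy sketch with scipy on a three-reaction network follows; adding further linear constraints of any origin (such as the ECM-based ones) simply appends rows and can only shrink these intervals.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometry: one internal metabolite, three fluxes, with
# v0 (uptake) = v1 + v2 (two branches) at steady state S v = 0.
S = np.array([[1.0, -1.0, -1.0]])
bounds = [(0.0, 10.0)] * 3

for j in range(S.shape[1]):
    c = np.zeros(S.shape[1]); c[j] = 1.0
    lo = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds).fun       # minimize v_j
    hi = -linprog(-c, A_eq=S, b_eq=[0.0], bounds=bounds).fun     # maximize v_j
    print(f'v{j}: [{lo:.1f}, {hi:.1f}]')
```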
Process mapping as a tool for home health network analysis.
Pluto, Delores M; Hirshorn, Barbara A
2003-01-01
Process mapping is a qualitative tool that allows service providers, policy makers, researchers, and other concerned stakeholders to get a "bird's eye view" of a home health care organizational network or a very focused, in-depth view of a component of such a network. It can be used to share knowledge about community resources directed at the older population, identify gaps in resource availability and access, and promote on-going collaborative interactions that encourage systemic policy reassessment and programmatic refinement. This article is a methodological description of process mapping, which explores its utility as a practice and research tool, illustrates its use in describing service-providing networks, and discusses some of the issues that are key to successfully using this methodology.
Interference Alignment With Partial CSI Feedback in MIMO Cellular Networks
NASA Astrophysics Data System (ADS)
Rao, Xiongbin; Lau, Vincent K. N.
2014-04-01
Interference alignment (IA) is a linear precoding strategy that can achieve optimal capacity scaling at high SNR in interference networks. However, most existing IA designs require full channel state information (CSI) at the transmitters, which would lead to significant CSI signaling overhead. There are two techniques, namely CSI quantization and CSI feedback filtering, to reduce the CSI feedback overhead. In this paper, we consider IA processing with CSI feedback filtering in MIMO cellular networks. We introduce a novel metric, namely the feedback dimension, to quantify the first order CSI feedback cost associated with the CSI feedback filtering. The CSI feedback filtering poses several important challenges in IA processing. First, there is a hidden partial CSI knowledge constraint in IA precoder design which cannot be handled using conventional IA design methodology. Furthermore, existing results on the feasibility conditions of IA cannot be applied due to the partial CSI knowledge. Finally, it is very challenging to find out how much CSI feedback is actually needed to support IA processing. We shall address the above challenges and propose a new IA feasibility condition under partial CSIT knowledge in MIMO cellular networks. Based on this, we consider the CSI feedback profile design subject to the degrees of freedom requirements, and we derive closed-form trade-off results between the CSI feedback cost and IA performance in MIMO cellular networks.
NASA Astrophysics Data System (ADS)
Mukherjee, A. D.; Brown, S. G.; McCarthy, M. C.
2017-12-01
A new generation of low-cost air quality sensors has the potential to provide valuable information on the spatial-temporal variability of air pollution - if the measurements have sufficient quality. This study examined the performance of a particulate matter sensor model, the AirBeam (HabitatMap Inc., Brooklyn, NY), over a three-month period in the urban environment of Sacramento, California. Nineteen AirBeam sensors were deployed at a regulatory air monitoring site collocated with meteorology measurements and as a local network over an 80 km2 domain in Sacramento, CA. This study presents the methodology to evaluate the precision, accuracy, and reliability of the sensors over a range of meteorological and aerosol conditions. The sensors demonstrated a robust degree of precision during collocated measurement periods (R2 = 0.98-0.99) and a moderate degree of correlation against a Beta Attenuation Monitor PM2.5 monitor (R2 ≈ 0.6). A normalization correction is applied during the study period so that each AirBeam sensor in the network reports a comparable value. The role of the meteorological environment on the accuracy of the sensor measurements is investigated, along with the possibility of improving the measurements through a meteorology-weighted correction. The data quality of the network of sensors is examined, and the spatial variability of particulate matter through the study domain derived from the sensor network is presented.
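One plausible form of the normalization correction is a per-sensor linear fit against the collocated reference monitor; a sketch with synthetic numbers follows (the study's actual correction may differ in form).

```python
import numpy as np

rng = np.random.default_rng(2)
reference = rng.uniform(5, 35, size=200)                        # BAM PM2.5, ug/m3
sensor = 1.3 * reference + 2.0 + rng.normal(0, 1.5, size=200)   # biased sensor

# Fit the sensor against the reference and map raw readings onto a
# common scale.
slope, intercept = np.polyfit(sensor, reference, deg=1)
corrected = slope * sensor + intercept

print(f'gain = {slope:.2f}, offset = {intercept:.2f}')
print(f'residual SD after correction: {np.std(corrected - reference):.2f}')
```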
DNA-nanoparticle assemblies go organic: macroscopic polymeric materials with nanosized features.
Mentovich, Elad D; Livanov, Konstantin; Prusty, Deepak K; Sowwan, Mukules; Richter, Shachar
2012-05-30
One of the goals in the field of structural DNA nanotechnology is the use of DNA to build up 2- and 3-D nanostructures. The research in this field is motivated by the remarkable structural features of DNA as well as by its unique and reversible recognition properties. Nucleic acids can be used alone as the skeleton of a broad range of periodic nanopatterns and nanoobjects; in addition, DNA can serve as a linker or template to form DNA-hybrid structures with other materials. This approach can be used for the development of new detection strategies as well as nanoelectronic structures and devices. Here we present a new method for the generation of unprecedented all-organic conjugated-polymer nanoparticle networks guided by DNA, based on a hierarchical self-assembly process. First, microphase separation of amphiphilic block copolymers induced the formation of spherical nanoobjects. As a second ordering concept, DNA base pairing has been employed for the controlled spatial definition of the conjugated-polymer particles within the bulk material. These networks offer the flexibility and the diversity of soft polymeric materials. Thus, simple chemical methodologies could be applied in order to tune the network's electrical, optical and mechanical properties. One-, two- and three-dimensional networks have been successfully formed. Common to all morphologies is the integrity of the micelles consisting of DNA block copolymer (DBC), which creates an all-organic engineered network.
Distinctive fingerprints of erosional regimes in terrestrial channel networks
NASA Astrophysics Data System (ADS)
Grau Galofre, A.; Jellinek, M.
2017-12-01
Satellite imagery and digital elevation maps capture the large scale morphology of channel networks attributed to long term erosional processes, such as fluvial, glacial, groundwater sapping and subglacial erosion. Characteristic morphologies associated with each of these styles of erosion have been studied in detail, but there exists a knowledge gap related to their parameterization and quantification. This knowledge gap prevents a rigorous analysis of the dominant processes that shaped a particular landscape, and a comparison across styles of erosion. To address this gap, we use previous morphological descriptions of glaciers, rivers, sapping valleys and tunnel valleys to identify and measure quantitative metrics diagnostic of these distinctive styles of erosion. From digital elevation models, we identify four geometric metrics: The minimum channel width, channel aspect ratio (longest length to channel width at the outlet), presence of undulating longitudinal profiles, and tributary junction angle. We also parameterize channel network complexity in terms of its stream order and fractal dimension. We then perform a statistical classification of the channel networks using a Principal Component Analysis on measurements of these six metrics on a dataset of 70 channelized systems. We show that rivers, glaciers, groundwater seepage and subglacial meltwater erode the landscape in rigorously distinguishable ways. Our methodology can more generally be applied to identify the contributions of different processes involved in carving a channel network. In particular, we are able to identify transitions from fluvial to glaciated landscapes or vice-versa.
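The classification step reduces the six metrics per channel network to principal-component scores; a minimal sketch with scikit-learn follows, using a random stand-in data matrix rather than the 70-system measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(70, 6))   # 70 networks x 6 metrics (width, aspect ratio, ...)

# Standardize the metrics, then project onto the leading components used
# for the statistical classification.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)

print(pca.explained_variance_ratio_)   # variance captured by PC1, PC2
print(scores[:3])                      # coordinates used to separate erosion styles
```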
Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using the Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that, in spite of the model-based nature of GPC, it has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
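For reference, the receding-horizon cost that GPC minimizes is conventionally written as below, where ŷ(t+j|t) is the j-step-ahead prediction, w the setpoint sequence, N1 and N2 the prediction horizons, Nu the control horizon and λ the control weighting. This is the standard textbook form rather than anything specific to this thesis.

```latex
J = \sum_{j=N_1}^{N_2} \bigl[\hat{y}(t+j \mid t) - w(t+j)\bigr]^2
  + \lambda \sum_{j=1}^{N_u} \bigl[\Delta u(t+j-1)\bigr]^2
```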
An applied methodology for assessment of the sustainability of biomass district heating systems
NASA Astrophysics Data System (ADS)
Vallios, Ioannis; Tsoutsos, Theocharis; Papadakis, George
2016-03-01
In order to maximise the share of biomass in the energy supply system, designers should adopt the appropriate changes to traditional systems and become more familiar with the design details of biomass heating systems. The aim of this study is to present the development of a methodology, and its implementation in software, useful for the design of biomass thermal conversion systems linked with district heating (DH) systems, taking into consideration the types of building structures and the urban settlement layout around the plant. The methodology is based on a completely parametric logic, providing an impact assessment of variations in one or more technical and/or economic parameters and thus facilitating a quick conclusion on the viability of this particular energy system. The essential energy parameters are presented and discussed for the design of a biomass power and heat production system connected to a DH network, as well as for its environmental and economic evaluation (i.e. selectivity and viability of the relevant investment). Emphasis has been placed upon the technical parameters of biomass logistics, the energy system's design, the economic details of the selected technology (integrated cogeneration combined cycle or direct combustion boiler), the DH network and peripheral equipment (thermal substations), and the greenhouse gas emissions. The purpose of this implementation is the assessment of the financial viability of the pertinent investment, taking into account the available biomass feedstock, the economic and market conditions, and the capital/operating costs. As long as biomass resources (forest wood and cultivation products) are available close to the settlement, biomass disposal and transportation costs remain low, assuring the sustainability of such energy systems.
PAI-OFF: A new proposal for online flood forecasting in flash flood prone catchments
NASA Astrophysics Data System (ADS)
Schmitz, G. H.; Cullmann, J.
2008-10-01
The Process Modelling and Artificial Intelligence for Online Flood Forecasting (PAI-OFF) methodology combines the reliability of physically based, hydrologic/hydraulic modelling with the operational advantages of artificial intelligence. These operational advantages are extremely low computation times and straightforward operation. The basic principle of the methodology is to portray process models by means of ANN. We propose to train ANN flood forecasting models with synthetic data that reflects the possible range of storm events. To this end, establishing PAI-OFF requires first setting up a physically based hydrologic model of the considered catchment and - optionally, if backwater effects have a significant impact on the flow regime - a hydrodynamic flood routing model of the river reach in question. Both models are subsequently used for simulating all meaningful and flood relevant storm scenarios which are obtained from a catchment specific meteorological data analysis. This provides a database of corresponding input/output vectors which is then completed by generally available hydrological and meteorological data for characterizing the catchment state prior to each storm event. This database subsequently serves for training both a polynomial neural network (PoNN) - portraying the rainfall-runoff process - and a multilayer neural network (MLFN), which mirrors the hydrodynamic flood wave propagation in the river. These two ANN models replace the hydrological and hydrodynamic model in the operational mode. After presenting the theory, we apply PAI-OFF - essentially consisting of the coupled "hydrologic" PoNN and "hydrodynamic" MLFN - to the Freiberger Mulde catchment in the Erzgebirge (Ore Mountains) in East Germany (3000 km2). Both the demonstrated computational efficiency and the prediction reliability underline the potential of the new PAI-OFF methodology for online flood forecasting.
Two-stage damage diagnosis based on the distance between ARMA models and pre-whitening filters
NASA Astrophysics Data System (ADS)
Zheng, H.; Mita, A.
2007-10-01
This paper presents a two-stage damage diagnosis strategy for damage detection and localization. Auto-regressive moving-average (ARMA) models are fitted to time series of vibration signals recorded by sensors. In the first stage, a novel damage indicator, which is defined as the distance between ARMA models, is applied to damage detection. This stage can determine the existence of damage in the structure. Such an algorithm uses output only and does not require operator intervention. Therefore it can be embedded in the sensor board of a monitoring network. In the second stage, a pre-whitening filter is used to minimize the cross-correlation of multiple excitations. With this technique, the damage indicator can further identify the damage location and severity when the damage has been detected in the first stage. The proposed methodology is tested using simulation and experimental data. The analysis results clearly illustrate the feasibility of the proposed two-stage damage diagnosis methodology.
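A hedged sketch of the first stage: fit ARMA models to a signal from a reference state and from the current state, and take a distance between the fitted models as the damage indicator. The Euclidean distance between coefficient vectors below is a stand-in; the paper defines its own model distance, and statsmodels is assumed for the fitting.

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(0)
# Synthetic "reference" and "current" vibration signals from slightly
# different ARMA processes (lag polynomials include the leading 1).
reference = arma_generate_sample(ar=[1.0, -0.5, 0.2], ma=[1.0, 0.3], nsample=500)
current = arma_generate_sample(ar=[1.0, -0.8, 0.3], ma=[1.0, 0.3], nsample=500)

def arma_coefficients(y, order=(2, 0, 1)):
    result = ARIMA(y, order=order).fit()
    return np.r_[result.arparams, result.maparams]

indicator = np.linalg.norm(arma_coefficients(reference) - arma_coefficients(current))
print(f'damage indicator: {indicator:.3f}')   # larger value suggests damage
```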
Cairelli, Michael J.; Miller, Christopher M.; Fiszman, Marcelo; Workman, T. Elizabeth; Rindflesch, Thomas C.
2013-01-01
Applying the principles of literature-based discovery (LBD), we elucidate the paradox that obesity is beneficial in critical care despite contributing to disease generally. Our approach enhances a previous extension to LBD, called “discovery browsing,” and is implemented using Semantic MEDLINE, which summarizes the results of a PubMed search into an interactive graph of semantic predications. The methodology allows a user to construct argumentation underpinning an answer to a biomedical question by engaging the user in an iterative process between system output and user knowledge. Components of the Semantic MEDLINE output graph identified as “interesting” by the user both contribute to subsequent searches and are constructed into a logical chain of relationships constituting an explanatory network in answer to the initial question. Based on this methodology we suggest that phthalates leached from plastic in critical care interventions activate PPAR gamma, which is anti-inflammatory and abundant in obese patients. PMID:24551329
Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations
NASA Technical Reports Server (NTRS)
Chanchio, Kasidit; Sun, Xian-He
1996-01-01
This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of migration point as well as migration point analysis and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.
Experienced quality factors: qualitative evaluation approach to audiovisual quality
NASA Astrophysics Data System (ADS)
Jumisko-Pyykkö, Satu; Häkkinen, Jukka; Nyman, Göte
2007-02-01
Subjective evaluation is used to identify impairment factors of multimedia quality. The final quality is often formulated via quantitative experiments, but this approach has its constraints, as subjects' quality interpretations, experiences and quality evaluation criteria are disregarded. To identify these quality evaluation factors, this study examined qualitatively the criteria participants used to evaluate audiovisual video quality. A semi-structured interview was conducted with 60 participants after a subjective audiovisual quality evaluation experiment. The assessment compared several relatively low audio-video bitrate ratios with five different television contents on a mobile device. In the analysis, methodological triangulation (grounded theory, Bayesian networks and correspondence analysis) was applied to approach the qualitative quality. The results showed that the most important evaluation criteria were the factors of visual quality, contents, factors of audio quality, usefulness - followability and audiovisual interaction. Several relations between the quality factors and the similarities between the contents were identified. As a research methodological recommendation, content and usage related factors need to be further examined to improve quality evaluation experiments.
Regional Research Networking: A Stimulus to Research Collaboration and Research Productivity.
ERIC Educational Resources Information Center
McElmurry, Beverly J.; Minckley, Barbara B.
1986-01-01
Models for collegial networking as a means of increasing the participants' scholarly productivity are presented. A Midwestern historical methodology research interest group is described as an example of the long-term benefits of forming networks of scholars. (MSE)
Feature extraction for ultrasonic sensor based defect detection in ceramic components
NASA Astrophysics Data System (ADS)
Kesharaju, Manasa; Nagarajah, Romesh
2014-02-01
High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts used in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, with X-rays multiple defects are often misinterpreted as single defects. Therefore, to address these problems the ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable for on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for purposes of signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (un-supervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance compared. This investigation establishes experimentally that Principal Component Analysis (PCA) can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection, in comparison with the X-ray technique.
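A compact sketch of this pipeline under assumed libraries (PyWavelets and scikit-learn), with synthetic A-scans and dummy labels standing in for real ultrasonic signals.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
signals = rng.normal(size=(120, 256))    # 120 A-scans (synthetic)
labels = rng.integers(0, 2, size=120)    # defect / no-defect (dummy)

def subband_energies(x, wavelet='db4', level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # one energy per band

X = np.vstack([subband_energies(s) for s in signals])
X = PCA(n_components=3).fit_transform(X)                # feature selection

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(f'training accuracy: {clf.score(X, labels):.2f}')
```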
Proceedings of the Conference on Moments and Signal
NASA Astrophysics Data System (ADS)
Purdue, P.; Solomon, H.
1992-09-01
The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. It is shown that all blind equalization algorithms belong to one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is at the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is at the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than Bussgang algorithms. However, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited for channels that have nonlinear distortions.
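As a concrete member of the Bussgang family, the constant modulus algorithm (Godard/CMA) applies a memoryless nonlinearity to the equalizer output; a minimal real-valued baseband sketch follows (channel, lengths and step size are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)              # BPSK source
received = np.convolve(symbols, [1.0, 0.4, 0.2])[:5000]   # dispersive channel

taps = np.zeros(11); taps[5] = 1.0   # centre-spike initialization
mu, R2 = 1e-3, 1.0                   # step size and constant-modulus radius

for n in range(len(taps), len(received)):
    x = received[n - len(taps):n][::-1]   # equalizer regressor
    y = taps @ x                          # equalizer output
    taps -= mu * y * (y * y - R2) * x     # memoryless nonlinearity on the output

print(np.round(taps, 3))   # taps approach an inverse of the channel
```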
Social interaction in management group meetings: a case study of Finnish hospital.
Laapotti, Tomi; Mikkola, Leena
2016-06-20
Purpose - The purpose of this paper is to understand the role of management group meetings (MGMs) in hospital organization by examining the social interaction in these meetings. Design/methodology/approach - This case study approaches social interaction from a structuration point of view. Social network analysis and qualitative content analysis are applied. Findings - The findings show that MGMs are mainly forums for information sharing. Meetings are not held for problem solving or decision making, and operational coordinating is limited. Meeting interaction is very much focused on the chair, and most of the discussion takes place between the chair and one other member, not between members. The organizational structures are maintained and reproduced in the meeting interaction, and they appear to limit discussion. Meetings appear to fulfil their goals as a part of the organization's information structure and to some extent as an instrument for management. The significance of the relational side of MGMs was recognized. Research limitations/implications - The results of this study provide a basis for future research on hospital MGMs with wider datasets and other methodologies. Especially the relational role of MGMs needs more attention. Practical implications - The goals of MGMs should be reviewed and MG members should be made aware of meeting interaction structures. Originality/value - The paper provides new knowledge about interaction networks in hospital MGMs, and describes the complexity of the importance of MGMs for hospitals.
NASA Astrophysics Data System (ADS)
Rezrazi, Ahmed; Hanini, Salah; Laidi, Maamar
2016-02-01
The right design and high efficiency of solar energy systems require accurate information on the availability of solar radiation. Owing to the cost of purchasing and maintaining radiometers, these data are not readily available, so there is a need to develop alternative ways of generating them. Artificial neural networks (ANNs) are excellent and effective tools for learning, pinpointing or generalising data regularities, as they have the ability to model nonlinear functions; they can also cope with complex 'noisy' data. The main objective of this paper is to show how to reach an optimal ANN model for the prediction of solar radiation. The measured data of the year 2007 in Ghardaïa city (Algeria) are used to demonstrate the optimisation methodology. The performance evaluation and the comparison of the results of the ANN models with measured data are made on the basis of the mean absolute percentage error (MAPE). It is found that the MAPE of the optimal ANN model reaches 1.17 %. This model also yields a root mean square error (RMSE) of 14.06 % and an MBE of 0.12. The accuracy of the outputs exceeded 97 % and reached up to 99.29 %. The results obtained indicate that the optimisation strategy satisfies practical requirements. It can successfully be generalised to any location in the world and be used in fields other than solar radiation estimation.
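A small sketch of the kind of optimisation loop the paper describes: candidate MLPs are trained and the one with the lowest MAPE is retained. The synthetic data stand in for the Ghardaïa 2007 measurements, and scikit-learn is an assumed substitute for the authors' tooling.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(365, 4))          # e.g. day number, sunshine, temp., humidity
y = 200 + 600 * X[:, 0] + 30 * rng.normal(size=365)   # stand-in irradiance (W/m^2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def mape(y_true, y_pred):
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

candidates = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000,
                           random_state=0).fit(X_tr, y_tr) for h in (2, 4, 8, 16)]
best = min(candidates, key=lambda m: mape(y_te, m.predict(X_te)))
print("selected model MAPE: %.2f%%" % mape(y_te, best.predict(X_te)))
```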
Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J
2009-01-01
Background The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of a qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays, however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments. Results In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows the transformation of logic operations into a system of ordinary differential equations (ODEs). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case, a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time courses for multiple ligand concentrations and binding affinities of different ligands. This shows that from the continuous model we may obtain biological insights not evident from the discrete one. Conclusion The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
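The canonical transform can be illustrated in a few lines: a Boolean rule is interpolated multilinearly between the vertices of the unit hypercube and plugged into an ODE of the form x' = (f(x) - x)/tau. The two-gene OR example below is invented; it sketches the general principle, not the paper's T-cell model.

```python
import itertools
import numpy as np
from scipy.integrate import solve_ivp

def continuous_homologue(rule, n_inputs):
    # Multilinear interpolation of a Boolean rule between the vertices
    # of the unit hypercube (the canonical transform).
    def f(x):
        total = 0.0
        for v in itertools.product((0, 1), repeat=n_inputs):
            w = np.prod([xi if vi else 1.0 - xi for xi, vi in zip(x, v)])
            total += rule(*v) * w
        return total
    return f

OR = continuous_homologue(lambda a, b: int(a or b), 2)

def rhs(t, x, tau=1.0):
    # Toy circuit: gene 0 is activated by gene 0 OR gene 1; gene 1 is switched on.
    return [(OR(x) - x[0]) / tau, (1.0 - x[1]) / tau]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0])
print(sol.y[:, -1])   # both components approach 1 as the OR gate switches on
```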
Self organising hypothesis networks: a new approach for representing and structuring SAR knowledge
2014-01-01
Background Combining different sources of knowledge to build improved structure-activity relationship models is not easy owing to the variety of knowledge formats and the absence of a common framework for interoperating between learning techniques. Most of the current approaches address this problem by using consensus models that operate at the prediction level. We explore the possibility of directly combining these sources at the knowledge level, with the aim of harvesting potentially increased synergy at an earlier stage. Our goal is to design a general methodology to facilitate knowledge discovery and produce accurate and interpretable models. Results To combine models at the knowledge level, we propose to decouple the learning phase from the knowledge application phase using a pivot representation (lingua franca) based on the concept of a hypothesis. A hypothesis is a simple and interpretable knowledge unit. Regardless of its origin, knowledge is broken down into a collection of hypotheses. These hypotheses are subsequently organised into a hierarchical network. This unification makes it possible to combine different sources of knowledge within a common formalised framework. The approach allows us to create a synergistic system between different forms of knowledge, and new algorithms can be applied to leverage this unified model. This first article focuses on the general principle of the Self Organising Hypothesis Network (SOHN) approach in the context of binary classification problems, along with an illustrative application to the prediction of mutagenicity. Conclusion It is possible to represent knowledge in the unified form of a hypothesis network, allowing interpretable predictions with performance comparable to mainstream machine learning techniques. This new approach offers the potential to combine knowledge from different sources into a common framework in which high-level reasoning and meta-learning can be applied; these latter perspectives will be explored in future work. PMID:24959206
Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.
Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang
2016-11-01
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types; both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and by parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
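A hedged sketch of the discrete search component only: a genetic algorithm, with an archive of past samples, over candidate sensor placements. The data-worth function here is a stand-in (best sensor per prediction target, so redundant sensors add little); in the paper it comes from linear uncertainty quantification of the groundwater model.

```python
import random

random.seed(0)
N_LOC, K, POP, GENS = 40, 5, 30, 60
influence = [[random.random() for _ in range(N_LOC)] for _ in range(8)]

def data_worth(design):
    # Best sensor per prediction target: redundant sensors add little,
    # mimicking data-worth interdependencies between locations.
    return sum(max(row[i] for i in design) for row in influence)

def crossover(a, b):
    return tuple(sorted(random.sample(list(set(a) | set(b)), K)))

def mutate(design):
    design = list(design)
    design[random.randrange(K)] = random.choice(
        [i for i in range(N_LOC) if i not in design])
    return tuple(sorted(design))

population = [tuple(sorted(random.sample(range(N_LOC), K))) for _ in range(POP)]
archive = {}                                      # re-use past evaluations
for _ in range(GENS):
    for d in population:
        if d not in archive:
            archive[d] = data_worth(d)
    elite = sorted(population, key=archive.get, reverse=True)[:POP // 2]
    population = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

best = max(archive, key=archive.get)
print("best design:", best, "worth:", round(archive[best], 3))
```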
NASA Astrophysics Data System (ADS)
Wang, Jiang; Yang, Chen; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing
2016-10-01
In this paper, EEG series are used to construct functional connections from the correlations between different regions, in order to investigate the nonlinear characteristics and the cognitive function of the brain with Alzheimer's disease (AD). First, the limited penetrable visibility graph (LPVG) and a phase space method map single EEG series into networks, and the underlying chaotic dynamics of the AD brain are investigated. Topological properties of the networks are extracted, such as average path length and clustering coefficient. It is found that the network topology of AD differs from that of the control group in several local brain regions, although no statistically significant difference exists across the brain as a whole. Furthermore, in order to detect the abnormality of the AD brain as a whole, functional connections among different brain regions are reconstructed based on the similarity of the clustering coefficient sequence (CCSS) of the EEG series in the four frequency bands (delta, theta, alpha, and beta); these exhibit obvious small-world properties. Graph analysis demonstrates that for both methodologies the functional connections between regions of the AD brain decrease, particularly in the alpha frequency band. AD decreases the graph index complexity of the functional network, weakens its small-world properties, and increases its vulnerability. The obtained results show that the brain functional networks constructed by LPVG and the phase space method might be more effective in distinguishing AD from normal controls than the analysis of single series, which is helpful for revealing the underlying pathological mechanism of the disease.
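A compact sketch of the LPVG construction under its usual definition (an edge is kept when at most L intermediate samples block the natural visibility line); the synthetic series, the networkx usage and the choice L = 1 are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def lpvg(series, L=1):
    # Limited penetrable visibility graph: nodes are samples; an edge is
    # added when at most L intermediate samples block the visibility line.
    n = len(series)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            blocked = sum(
                series[k] >= series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j))
            if blocked <= L:
                g.add_edge(i, j)
    return g

eeg = (np.sin(np.linspace(0, 8 * np.pi, 200))
       + 0.3 * np.random.default_rng(2).standard_normal(200))   # stand-in EEG
net = lpvg(eeg, L=1)
print(nx.average_clustering(net), nx.average_shortest_path_length(net))
```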
Wen, Dingqiao; Yu, Yun; Hahn, Matthew W.; Nakhleh, Luay
2016-01-01
The role of hybridization and subsequent introgression has been demonstrated in an increasing number of species. Recently, Fontaine et al. (Science, 347, 2015, 1258524) conducted a phylogenomic analysis of six members of the Anopheles gambiae species complex. Their analysis revealed a reticulate evolutionary history and pointed to extensive introgression on all four autosomal arms. The study further highlighted the complex evolutionary signals that the co-occurrence of incomplete lineage sorting (ILS) and introgression can give rise to in phylogenomic analyses. While tree-based methodologies were used in the study, phylogenetic networks provide a more natural model to capture reticulate evolutionary histories. In this work, we reanalyse the Anopheles data using a recently devised framework that combines the multispecies coalescent with phylogenetic networks. This framework allows us to capture ILS and introgression simultaneously, and forms the basis for statistical methods for inferring reticulate evolutionary histories. The new analysis reveals a phylogenetic network with multiple hybridization events, some of which differ from those reported in the original study. To elucidate the extent and patterns of introgression across the genome, we devise a new method that quantifies the use of reticulation branches in the phylogenetic network by each genomic region. Applying the method to the mosquito data set reveals the evolutionary history of all the chromosomes. This study highlights the utility of ‘network thinking’ and the new insights it can uncover, in particular in phylogenomic analyses of large data sets with extensive gene tree incongruence. PMID:26808290
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz-Padillo, Alejandro, E-mail: aruizp@correo.ugr.es; Civil Engineering Department, University of Granada, Av. Fuentenueva s/n, 18071 Granada; Ruiz, Diego P., E-mail: druiz@ugr.es
Road traffic noise is one of the most significant environmental impacts generated by transport systems. In this regard, the recent implementation of the European Environmental Noise Directive by Public Administrations of the European Union member countries has led to various noise action plans (NAPs) for reducing the noise exposure of EU inhabitants. Every country or administration is responsible for applying criteria based on its own experience or expert knowledge, but there is no regulated process for the prioritization of technical measures within these plans. This paper proposes a multi-criteria decision methodology for the selection of suitable alternatives against traffic noise in each of the road stretches included in the NAPs. The methodology first defines the main criteria and alternatives to be considered. Secondly, it determines the relative weights for the criteria and sub-criteria using the fuzzy extended analytical hierarchy process as applied to the results from an expert panel, thereby allowing expert knowledge to be captured in an automated way. A final step comprises the use of discrete multi-criteria analysis methods such as weighted sum, ELECTRE and TOPSIS, to rank the alternatives by suitability. To illustrate an application of the proposed methodology, this paper describes its implementation in a complex real case study: the selection of optimal technical solutions against traffic noise in the top priority road stretch included in the revision of the NAP of the regional road network in the province of Almeria (Spain).
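Of the three ranking methods named, TOPSIS is the simplest to sketch; the decision matrix, the weights (standing in for the fuzzy-AHP output) and the benefit/cost labels below are placeholders, not values from the Almeria case study.

```python
import numpy as np

scores = np.array([[7.0, 3.0, 5.0],      # e.g. noise barrier
                   [5.0, 6.0, 4.0],      # low-noise pavement
                   [6.0, 4.0, 7.0]])     # speed reduction
weights = np.array([0.5, 0.2, 0.3])      # stand-in expert-panel weights
benefit = np.array([True, True, False])  # False marks a cost criterion

V = weights * scores / np.linalg.norm(scores, axis=0)   # weighted, normalised
ideal = np.where(benefit, V.max(0), V.min(0))
worst = np.where(benefit, V.min(0), V.max(0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - worst, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("ranking (best first):", np.argsort(-closeness))
```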
MaxEnt analysis of a water distribution network in Canberra, ACT, Australia
NASA Astrophysics Data System (ADS)
Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael; Noack, Bernd R.
2015-01-01
A maximum entropy (MaxEnt) method is developed to infer the state of a pipe flow network, for situations in which there is insufficient information to form a closed equation set. This approach substantially extends existing deterministic methods for the analysis of engineered flow networks (e.g. Newton's method or the Hardy Cross scheme). The network is represented as an undirected graph structure, in which the uncertainty is represented by a continuous relative entropy on the space of internal and external flow rates. The head losses (potential differences) on the network are treated as dependent variables, using specified pipe-flow resistance functions. The entropy is maximised subject to "observable" constraints on the mean values of certain flow rates and/or potential differences, and also "physical" constraints arising from the frictional properties of each pipe and from Kirchhoff's nodal and loop laws. A numerical method is developed in Matlab for solution of the integral equation system, based on multidimensional quadrature. Several nonlinear resistance functions (e.g. power-law and Colebrook) are investigated, necessitating numerical solution of the implicit Lagrangian by a double iteration scheme. The method is applied to a 1123-node, 1140-pipe water distribution network for the suburb of Torrens in the Australian Capital Territory, Australia, using network data supplied by water authority ACTEW Corporation Limited. A number of different assumptions are explored, including various network geometric representations, prior probabilities and constraint settings, yielding useful predictions of network demand and performance. We also propose this methodology be used in conjunction with in-flow monitoring systems, to obtain better inferences of user consumption without large investments in monitoring equipment and maintenance.
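As a stripped-down illustration of the idea (not the authors' Matlab implementation, which maximises a continuous relative entropy by multidimensional quadrature), one can condition a Gaussian prior over pipe flow rates on hard linear constraints such as a Kirchhoff node balance and a metered in-flow; the three-pipe network and all numbers are invented.

```python
import numpy as np

mu = np.array([5.0, 5.0, 5.0])        # prior mean flow rates (L/s)
Sigma = 4.0 * np.eye(3)               # prior covariance

# Constraints A q = b: node balance q0 - q1 - q2 = 0, metered in-flow q0 = 8.
A = np.array([[1.0, -1.0, -1.0],
              [1.0,  0.0,  0.0]])
b = np.array([0.0, 8.0])

S = A @ Sigma @ A.T
post_mean = mu + Sigma @ A.T @ np.linalg.solve(S, b - A @ mu)
post_cov = Sigma - Sigma @ A.T @ np.linalg.solve(S, A @ Sigma)
print(post_mean)                      # inferred flows satisfying both constraints
```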
Architecture and biological applications of artificial neural networks: a tuberculosis perspective.
Darsey, Jerry A; Griffin, William O; Joginipelli, Sravanthi; Melapu, Venkata Kiran
2015-01-01
Advancement of science and technology has prompted researchers to develop new intelligent systems that can solve a variety of problems such as pattern recognition, prediction, and optimization. The ability of the human brain to learn in a fashion that tolerates noise and error has attracted many researchers and provided the starting point for the development of artificial neural networks: the intelligent systems. Intelligent systems can acclimatize to the environment or data and can maximize the chances of success or improve the efficiency of a search. Owing to massive parallelism, with large numbers of interconnected processors, and their ability to learn from data, neural networks can solve a variety of challenging computational problems. Neural networks have the ability to derive meaning from complicated and imprecise data; they are used to detect patterns and trends that are too complex for humans or other computer systems to discern. Solutions to the toughest problems will not be found through one narrow specialization; therefore, we need to combine interdisciplinary approaches to discover the solutions to a variety of problems. Many researchers in different disciplines such as medicine, bioinformatics, molecular biology, and pharmacology have successfully applied artificial neural networks. This chapter helps the reader to understand the basics of artificial neural networks, their applications, and methodology; it also outlines the network learning process and architecture. We present a brief outline of the application of neural networks to medical diagnosis, drug discovery, gene identification, and protein structure prediction. We conclude with a summary of the results from our study on tuberculosis data using neural networks, in diagnosing active tuberculosis and predicting chronic vs. infiltrative forms of tuberculosis.
Application of Network and Decision Theory to Routing Problems.
1982-03-01
[Fragmented excerpts from the report: the acknowledgements give special thanks to Major Hal Carter, faculty member, for his help in getting the authors to understand one of the underlying algorithms in the methodology; the list of figures includes a General Methodology Flowchart, a Least Cost/Time Path Algorithm Flowchart, and a Possible Redundant Arc diagram; and the body notes that computing the minimum time to travel was necessary because (1) the DTN designers did not have a procedure to do so, and (2) the various network algorithms to...]
NASA Astrophysics Data System (ADS)
Franke, Jasper G.; Werner, Johannes P.; Donner, Reik V.
2017-11-01
Obtaining reliable reconstructions of long-term atmospheric circulation changes in the North Atlantic region presents a persistent challenge to contemporary paleoclimate research, which has been addressed by a multitude of recent studies. In order to contribute a novel methodological aspect to this active field, we apply here evolving functional network analysis, a recently developed tool for studying temporal changes of the spatial co-variability structure of the Earth's climate system, to a set of Late Holocene paleoclimate proxy records covering the last two millennia. The emerging patterns obtained by our analysis are related to long-term changes in the dominant mode of atmospheric circulation in the region, the North Atlantic Oscillation (NAO). By comparing the time-dependent inter-regional linkage structures of the obtained functional paleoclimate network representations to a recent multi-centennial NAO reconstruction, we identify co-variability between southern Greenland, Svalbard, and Fennoscandia as being indicative of a positive NAO phase, while connections from Greenland and Fennoscandia to central Europe are more pronounced during negative NAO phases. By drawing upon this correspondence, we use some key parameters of the evolving network structure to obtain a qualitative reconstruction of the NAO long-term variability over the entire Common Era (last 2000 years) using a linear regression model trained upon the existing shorter reconstruction.
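A schematic of the evolving-network step under common conventions: within sliding windows, records whose correlation magnitude passes a threshold are linked, and network measures are tracked through time. The random records, window length and threshold are placeholders for the proxy compilation.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
records = rng.standard_normal((6, 2000))      # 6 proxy records over 2000 "years"
n_rec = records.shape[0]
window, step, thresh = 200, 50, 0.4

link_density = []
for start in range(0, records.shape[1] - window + 1, step):
    corr = np.corrcoef(records[:, start:start + window])
    edges = [(i, j) for i in range(n_rec) for j in range(i + 1, n_rec)
             if abs(corr[i, j]) >= thresh]
    g = nx.Graph(edges)
    g.add_nodes_from(range(n_rec))
    link_density.append(nx.density(g))        # one network measure per window
```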
Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J; Inzé, Dirk; Van de Peer, Yves
2013-03-01
Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general and for network biology in particular using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein-protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, hereby stimulating the application of text mining data in future plant biology studies.
NASA Astrophysics Data System (ADS)
Ullah, H.; Ahmed, E.; Ikram, M.
2013-08-01
We report a pilot method, i.e., speckle variance (SV) and structured optical coherence tomography, to visualize normal and malignant blood microvasculature in three and two dimensions and to monitor blood glucose levels by analyzing the Brownian motion of the red blood cells. The technique was applied to the skin of a live nude mouse, and the obtained images depict the enhanced intravasculature network formed up to a depth of ~2 mm with an axial resolution of ~8 μm. Microscopic images have also been obtained for both types of blood vessels to observe the tumor spatially. Our SV-OCT methodology and results constitute a satisfactory technique for real-time imaging and can potentially be applied during therapeutic procedures such as photodynamic therapy, as well as to quantify elevated glucose levels injected intravenously into an animal by determining the translational diffusion coefficient.
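Speckle variance itself reduces to a per-pixel variance over repeated B-scans at the same position, which highlights moving scatterers (blood) against static tissue; the array shape and threshold below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
bscans = rng.random((8, 512, 256))            # (repeats, depth, lateral) stand-in
sv = np.var(bscans, axis=0)                   # SV image: high where scatterers move
vessel_mask = sv > np.percentile(sv, 95)      # crude microvasculature map
```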
Discontinuity Detection in the Shield Metal Arc Welding Process
Cocota, José Alberto Naves; Garcia, Gabriel Carvalho; da Costa, Adilson Rodrigues; de Lima, Milton Sérgio Fernandes; Rocha, Filipe Augusto Santos; Freitas, Gustavo Medeiros
2017-01-01
This work proposes a new methodology for the detection of discontinuities in the weld bead applied in Shielded Metal Arc Welding (SMAW) processes. The detection system is based on two sensors—a microphone and piezoelectric—that acquire acoustic emissions generated during the welding. The feature vectors extracted from the sensor dataset are used to construct classifier models. The approaches based on Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers are able to identify with a high accuracy the three proposed weld bead classes: desirable weld bead, shrinkage cavity and burn through discontinuities. Experimental results illustrate the system’s high accuracy, greater than 90% for each class. A novel Hierarchical Support Vector Machine (HSVM) structure is proposed to make feasible the use of this system in industrial environments. This approach presented 96.6% overall accuracy. Given the simplicity of the equipment involved, this system can be applied in the metal transformation industries. PMID:28489045
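A plausible reading of the hierarchical structure, sketched with scikit-learn: a first SVM separates desirable beads from defective ones, and a second separates the two discontinuity classes. The simulated feature vectors stand in for the microphone and piezoelectric features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 10))       # simulated acoustic-emission features
y = rng.integers(0, 3, 300)              # 0 desirable, 1 shrinkage cavity, 2 burn through
X[y == 1, 0] += 2.0                      # inject some class structure
X[y == 2, 1] += 2.0

stage1 = SVC().fit(X, y != 0)            # defective vs desirable bead
defect = y != 0
stage2 = SVC().fit(X[defect], y[defect]) # shrinkage cavity vs burn through

def hsvm_predict(sample):
    sample = sample.reshape(1, -1)
    if not stage1.predict(sample)[0]:
        return 0                          # desirable weld bead
    return int(stage2.predict(sample)[0])

print(hsvm_predict(X[0]))
```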
Control Theoretic Modeling for Uncertain Cultural Attitudes and Unknown Adversarial Intent
2009-02-01
[Fragmented excerpts from the report documentation page and body: subject terms are social learning, social networks, multiagent systems, and game theory; the body fragments mention constructive computational tools, over-reactionary behaviors, the analysis of rational social learning in networks (belief propagation in social networks in various settings), and a general methodology as a predictive device for social network formation and for communication network formation with constraints on the lengths of...]
Facilitating the Development of School-Based Learning Networks
ERIC Educational Resources Information Center
Kubiak, Chris; Bertram, Joan
2010-01-01
Purpose: This paper aims to contribute to the knowledge base on leading and facilitating the growth of school improvement networks by describing the activities and challenges faced by network leaders. Design/methodology/approach: A total of 19 co-leaders from 12 networks were interviewed using a semi-structured schedule about the growth of their…
Networked Improvement Communities: The Discipline of Improvement Science Meets the Power of Networks
ERIC Educational Resources Information Center
LeMahieu, Paul G.; Grunow, Alicia; Baker, Laura; Nordstrum, Lee E.; Gomez, Louis M.
2017-01-01
Purpose: The purpose of this paper is to delineate an approach to quality assurance in education called networked improvement communities (NICs), which focuses on integrating the methodologies of improvement science with the power of networks. Quality improvement, the science and practice of continuously improving programs, practices, processes,…
Creating, generating and comparing random network models with NetworkRandomizer.
Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni
2016-01-01
Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest, and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app for the Cytoscape platform which aims at creating randomised networks and at randomising existing, real networks. Since there is a lack of tools that allow such operations to be performed, our app aims to enable researchers to exploit different, well-known random network models that can be used as benchmarks for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at establishing a standardised methodology for the validation of results in the context of the Cytoscape platform.
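The multiplication algorithm is specific to the paper, but the general benchmarking idea can be sketched with a standard null model, degree-preserving randomisation by double edge swaps, here via networkx rather than Cytoscape; the Barabási-Albert "real" network and the chosen metric are placeholders.

```python
import networkx as nx

real = nx.barabasi_albert_graph(200, 3, seed=0)     # stand-in "real" network
observed = nx.average_clustering(real)

null = []
for seed in range(20):
    g = real.copy()
    nx.double_edge_swap(g, nswap=10 * g.number_of_edges(),
                        max_tries=10**6, seed=seed)  # degrees are preserved
    null.append(nx.average_clustering(g))

tail = sum(v >= observed for v in null) / len(null)  # empirical tail fraction
print(f"observed {observed:.3f}, null tail fraction {tail:.2f}")
```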
Interfacing Network Simulations and Empirical Data
2009-05-01
contraceptive innovations in Cameroon. He found that real-world adoption rates did not follow simulation models when the network relationships were... [cited references include] 'Analysis of the Coevolution of Adolescents' Friendship Networks, Taste in Music, and Alcohol Consumption', Methodology, 2: 48-56; Tichy, N.M., Tushman...
Reliability Modeling of Microelectromechanical Systems Using Neural Networks
NASA Technical Reports Server (NTRS)
Perera, J. Sebastian
2000-01-01
Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attributes and reliability); the attributes become the system inputs and the reliability data (cycles to failure) the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a newly proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.
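A conceptual sketch of the approach, with invented attributes and failure data: device attributes are the network inputs and (log) cycles-to-failure the output, with held-out validation data checking the fit; scikit-learn is an assumed substitute for the original tooling.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
attrs = rng.uniform(size=(150, 6))    # encoded process/geometry/environment attributes
cycles = 1e6 * (0.5 + attrs[:, 0] - 0.3 * attrs[:, 1]) + 1e4 * rng.normal(size=150)

X_tr, X_va, y_tr, y_va = train_test_split(
    attrs, np.log10(np.clip(cycles, 1.0, None)), random_state=0)
model = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000,
                     random_state=0).fit(X_tr, y_tr)
print("validation R^2:", model.score(X_va, y_va))   # check against held-out data
```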
Konchak, Chad; Prasad, Kislaya
2012-01-01
Objectives To develop a methodology for integrating social networks into traditional cost-effectiveness analysis (CEA) studies. This will facilitate the economic evaluation of treatment policies in settings where health outcomes are subject to social influence. Design This is a simulation study based on a Markov model. The lifetime health histories of a cohort are simulated, and health outcomes compared, under alternative treatment policies. Transition probabilities depend on the health of others with whom there are shared social ties. Setting The methodology developed is shown to be applicable in any healthcare setting where social ties affect health outcomes. The example of obesity prevention is used for illustration under the assumption that weight changes are subject to social influence. Main outcome measures Incremental cost-effectiveness ratio (ICER). Results When social influence increases, treatment policies become more cost effective (have lower ICERs). The policy of only treating individuals who span multiple networks can be more cost effective than the policy of treating everyone. This occurs when the network is more fragmented. Conclusions (1) When network effects are accounted for, they result in very different values of incremental cost-effectiveness ratios (ICERs). (2) Treatment policies can be devised to take network structure into account. The integration makes it feasible to conduct a cost-benefit evaluation of such policies. PMID:23117559
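A toy version of the simulation design, with all parameters invented: a cohort on a social network transitions between health states with neighbour-dependent risk, and an ICER compares treating everyone with treating no one.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
g = nx.watts_strogatz_graph(200, 4, 0.1, seed=1)

def simulate(treated, years=20, base=0.05, influence=0.04, recovery=0.10):
    state = {n: rng.random() < 0.2 for n in g}            # True = unhealthy
    cost = qaly = 0.0
    for _ in range(years):
        nxt = {}
        for n in g:
            frac = np.mean([state[m] for m in g[n]]) if len(g[n]) else 0.0
            if state[n]:                                   # may recover
                p_recover = recovery * (2.0 if n in treated else 1.0)
                nxt[n] = rng.random() > p_recover
            else:                                          # neighbours raise risk
                nxt[n] = rng.random() < base + influence * frac
            cost += (500.0 if n in treated else 0.0) + (2000.0 if nxt[n] else 0.0)
            qaly += 0.8 if nxt[n] else 1.0
        state = nxt
    return cost, qaly

c1, q1 = simulate(treated=set(g))      # treat everyone
c0, q0 = simulate(treated=set())       # treat no one
print("ICER (cost per QALY gained):", (c1 - c0) / (q1 - q0))
```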
Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.
Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo
2017-11-05
Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of the applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliability. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows the power consumption of WSN applications and the network stack to be accurately estimated in an automated way.
Methodology for Designing Operational Banking Risks Monitoring System
NASA Astrophysics Data System (ADS)
Kostjunina, T. N.
2018-05-01
The research looks at the principles of designing an information system for monitoring operational banking risks. The proposed design methodology enables one to automate the collection of data on information security incidents in the banking network, serving as the basis for an integrated approach to the creation of an operational risk management system. The system can operate remotely, ensuring the tracking and forecasting of various operational events in the bank network. A structure for a content management system is described.
Criticism of generally accepted fundamentals and methodologies of traffic and transportation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerner, Boris S.
It is explained why the set of fundamental empirical features of traffic breakdown (a transition from free flow to congested traffic) should be the empirical basis for any traffic and transportation theory that can be reliably used for control and optimization in traffic networks. It is shown that the generally accepted fundamentals and methodologies of traffic and transportation theory are not consistent with the set of fundamental empirical features of traffic breakdown at a highway bottleneck. To these fundamentals and methodologies of traffic and transportation theory belong (i) the Lighthill-Whitham-Richards (LWR) theory, (ii) the General Motors (GM) model class (for example, the Herman, Gazis et al. GM model, Gipps's model, Payne's model, Newell's optimal velocity (OV) model, Wiedemann's model, the Bando et al. OV model, Treiber's IDM, and Krauß's model), (iii) the understanding of highway capacity as a particular stochastic value, and (iv) principles for traffic and transportation network optimization and control (for example, Wardrop's user equilibrium (UE) and system optimum (SO) principles). As an alternative to these generally accepted fundamentals and methodologies of traffic and transportation theory, we discuss three-phase traffic theory as the basis for traffic flow modeling, and briefly consider the network breakdown minimization (BM) principle for the optimization of traffic and transportation networks with road bottlenecks.
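For concreteness, the Bando et al. optimal velocity model named in (ii), one of the criticised GM-class models, simulated on a ring road; the parameter values follow a commonly cited calibration of the OV function, and the setup is illustrative.

```python
import numpy as np

N, L, a, dt = 22, 230.0, 1.0, 0.05        # cars, ring length (m), sensitivity, step
rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, L, N))         # positions on the ring
v = np.full(N, 5.0)                       # initial speeds (m/s)

def V(h):
    # Bando's optimal velocity function (a common calibration).
    return 16.8 * (np.tanh(0.086 * (h - 25.0)) + 0.913)

for _ in range(20000):
    headway = (np.roll(x, -1) - x) % L    # distance to the car ahead
    v += a * (V(headway) - v) * dt        # dv_i/dt = a * (V(h_i) - v_i)
    x = (x + np.clip(v, 0.0, None) * dt) % L   # no backward motion
print("speed spread after relaxation:", v.max() - v.min())
```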
NASA Astrophysics Data System (ADS)
Tutschku, Kurt; Nakao, Akihiro
This paper introduces a methodology for engineering best-effort P2P algorithms into dependable P2P-based network control mechanisms. The proposed method is built upon an iterative approach consisting of improving the original P2P algorithm with appropriate mechanisms and of thorough performance assessment with respect to dependability measures. The potential of the methodology is outlined by the example of timely routing control for vertical handover in B3G wireless networks. In detail, the well-known Pastry and CAN algorithms are enhanced to include locality. By showing how to combine algorithmic enhancements with performance indicators, this case study paves the way for the future engineering of dependable network control mechanisms through P2P algorithms.
Networks as Policy Instruments for Innovation
ERIC Educational Resources Information Center
Beers, Pieter J.; Geerling-Eiff, Florentien
2014-01-01
Purpose: The purpose of this article is to compare the effectiveness of facilitated networks to other policy instruments for agricultural innovation. Design/ methodology/ approach: In an exploratory study of the Dutch agricultural policy context, we conducted semi-structured interviews with ten experts on networks and innovation. Policy…
Westbury, D B; Park, J R; Mauchline, A L; Crane, R T; Mortimer, S R
2011-03-01
Agri-environment schemes (AESs) have been implemented across EU member states in an attempt to reconcile agricultural production methods with protection of the environment and maintenance of the countryside. To determine the extent to which such policy objectives are being fulfilled, participating countries are obliged to monitor and evaluate the environmental, agricultural and socio-economic impacts of their AESs. However, few evaluations measure precise environmental outcomes and critically, there are no agreed methodologies to evaluate the benefits of particular agri-environmental measures, or to track the environmental consequences of changing agricultural practices. In response to these issues, the Agri-Environmental Footprint project developed a common methodology for assessing the environmental impact of European AES. The Agri-Environmental Footprint Index (AFI) is a farm-level, adaptable methodology that aggregates measurements of agri-environmental indicators based on Multi-Criteria Analysis (MCA) techniques. The method was developed specifically to allow assessment of differences in the environmental performance of farms according to participation in agri-environment schemes. The AFI methodology is constructed so that high values represent good environmental performance. This paper explores the use of the AFI methodology in combination with Farm Business Survey data collected in England for the Farm Accountancy Data Network (FADN), to test whether its use could be extended for the routine surveillance of environmental performance of farming systems using established data sources. Overall, the aim was to measure the environmental impact of three different types of agriculture (arable, lowland livestock and upland livestock) in England and to identify differences in AFI due to participation in agri-environment schemes. However, because farm size, farmer age, level of education and region are also likely to influence the environmental performance of a holding, these factors were also considered. Application of the methodology revealed that only arable holdings participating in agri-environment schemes had a greater environmental performance, although responses differed between regions. Of the other explanatory variables explored, the key factors determining the environmental performance for lowland livestock holdings were farm size, farmer age and level of education. In contrast, the AFI value of upland livestock holdings differed only between regions. The paper demonstrates that the AFI methodology can be used readily with English FADN data and therefore has the potential to be applied more widely to similar data sources routinely collected across the EU-27 in a standardised manner. Copyright © 2010 Elsevier Ltd. All rights reserved.
Dil, Ebrahim Alipanahpour; Ghaedi, Mehrorang; Asfaram, Arash; Hajati, Shaaker; Mehrabi, Fatemeh; Goudarzi, Alireza
2017-01-01
Copper oxide nanoparticle-loaded activated carbon (CuO-NP-AC) was synthesized and characterized using different techniques such as FE-SEM, XRD and FT-IR. It was successfully applied for the ultrasound-assisted simultaneous removal of Pb²⁺ ions and malachite green (MG) dye in a binary system from aqueous solution. The effects of the important parameters were modeled and optimized by an artificial neural network (ANN) and response surface methodology (RSM). Maximum simultaneous removal percentages (>99.0%) were found at 25 mg L⁻¹, 20 mg L⁻¹, 0.02 g, 5 min and 6.0, corresponding to the initial Pb²⁺ concentration, initial MG concentration, CuO-NP-AC amount, ultrasonication time and pH, respectively. The precision of the equation obtained by RSM was confirmed by analysis of variance and by the correlation coefficient relating the predicted and experimental values of the ultrasound-assisted simultaneous removal of the analytes; good agreement between experimental and predicted values was observed. A feed-forward neural network with a topology optimized by response surface methodology was successfully applied for the prediction of the ultrasound-assisted simultaneous removal of Pb²⁺ ions and MG dye in the binary system by CuO-NP-AC. The number of hidden neurons, MSE, R², number of epochs and error histogram were chosen for the ANN modeling. The Langmuir, Freundlich, Temkin and D-R isotherm models were then fitted to the experimental data. It was found that the Langmuir model describes the isotherm data well, with maximum adsorption capacities of 98.328 and 87.719 mg g⁻¹ for Pb²⁺ and MG, respectively. Kinetic studies at the optimum condition showed that maximum Pb²⁺ and MG adsorption is achieved within 5 min of the start of most experiments. The combination of the pseudo-second-order rate equation and the intraparticle diffusion model explained the experimental data of the ultrasound-assisted simultaneous removal of Pb²⁺ and MG at the optimum condition obtained from RSM. Copyright © 2016 Elsevier B.V. All rights reserved.
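The RSM step in miniature, as a hedged sketch: fit a full second-order response surface of removal percentage in coded factors and locate the optimum on a grid. The synthetic runs and the choice of three factors stand in for the actual central-composite design.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(30, 3))        # coded factors, e.g. dose, time, pH
y = 95 - 5 * (X**2).sum(axis=1) + X[:, 0] + rng.normal(0, 0.5, 30)   # removal %

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3), axis=-1).reshape(-1, 3)
best = grid[rsm.predict(grid).argmax()]
print("predicted optimum (coded units):", best)
```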
Gross, Alexander; Murthy, Dhiraj
2014-10-01
This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately make the argument that natural language processing is a critical interdisciplinary methodology to make better sense of social 'Big Data' and we were able to successfully model nested discussion topics from forums and blog posts using LDA. Importantly, we found that LDA can move us beyond the state-of-the-art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
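A compact LDA pipeline of the kind described, using scikit-learn rather than the authors' specific toolchain; the four tiny documents stand in for forum threads and blog posts from a scientific virtual organisation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "telescope survey data release imaging pipeline",
    "grant proposal deadline budget travel",
    "imaging pipeline calibration survey fields",
    "budget review travel reimbursement deadline",
]
vec = CountVectorizer()
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)          # per-document topic mixtures

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```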
Light, John M; Jason, Leonard A; Stevens, Edward B; Callahan, Sarah; Stone, Ariel
2016-03-01
The complex system conception of group social dynamics often involves not only changing individual characteristics, but also changing within-group relationships. Recent advances in stochastic dynamic network modeling allow these interdependencies to be modeled from data. This methodology is discussed within a context of other mathematical and statistical approaches that have been or could be applied to study the temporal evolution of relationships and behaviors within small- to medium-sized groups. An example model is presented, based on a pilot study of five Oxford House recovery homes, sober living environments for individuals following release from acute substance abuse treatment. This model demonstrates how dynamic network modeling can be applied to such systems, examines and discusses several options for pooling, and shows how results are interpreted in line with complex system concepts. Results suggest that this approach (a) is a credible modeling framework for studying group dynamics even with limited data, (b) improves upon the most common alternatives, and (c) is especially well-suited to complex system conceptions. Continuing improvements in stochastic models and associated software may finally lead to mainstream use of these techniques for the study of group dynamics, a shift already occurring in related fields of behavioral science.
Network-based Modeling of Mesoscale Catchments - The Hydrology Perspective of Glowa-danube
NASA Astrophysics Data System (ADS)
Ludwig, R.; Escher-Vetter, H.; Hennicker, R.; Mauser, W.; Niemeyer, S.; Reichstein, M.; Tenhunen, J.
Within the GLOWA initiative of the German Ministry for Research and Education (BMBF), the project GLOWA-Danube is funded to establish a transdisciplinary network-based decision support tool for water related issues in the Upper Danube watershed. It aims to develop and validate integration techniques, integrated models and integrated monitoring procedures and to implement them in the network-based Decision Support System DANUBIA. An accurate description of processes involved in energy, water and matter fluxes and turnovers requires an intense collaboration and exchange of water related expertise of different scientific disciplines. DANUBIA is conceived as a distributed expert network and is developed on the basis of re-useable, refineable, and documented sub-models. In order to synthesize a common understanding between the project partners, a standardized notation of parameters and functions and a platform-independent structure of computational methods and interfaces has been established using the Unified Modeling Language UML. DANUBIA is object-oriented, spatially distributed and raster-based at its core. It applies the concept of "proxels" (Process Pixel) as its basic object, which has different dimensions depending on the viewing scale and connects to its environment through fluxes. The presented study excerpts the hydrological view point of GLOWA-Danube, its approach of model coupling and network based communication (using the Remote Method Invocation RMI), the object-oriented technology to simulate physical processes and interactions at the land surface and the methodology to treat the issue of spatial and temporal scaling in large, heterogeneous catchments. The mechanisms applied to communicate data and model parameters across the typical discipline borders will be demonstrated from the perspective of a land-surface object, which comprises the capabilities of interdependent expert models for snowmelt, soil water movement, runoff formation, plant growth and radiation balance in a distributed JAVA-based modeling environment. The coupling to the adjacent physical objects of atmosphere, groundwater and river network will also be addressed.
Stein, Mart L.; van Steenbergen, Jim E.; Chanyasanha, Charnchudhi; Tipayamongkholgul, Mathuros; Buskens, Vincent; van der Heijden, Peter G. M.; Sabaiwan, Wasamon; Bengtsson, Linus; Lu, Xin; Thorson, Anna E.; Kretzschmar, Mirjam E. E.
2014-01-01
Background Information on social interactions is needed to understand the spread of airborne infections through a population. Previous studies mostly collected egocentric information of independent respondents with self-reported information about contacts. Respondent-driven sampling (RDS) is a sampling technique allowing respondents to recruit contacts from their social network. We explored the feasibility of webRDS for studying contact patterns relevant for the spread of respiratory pathogens. Materials and Methods We developed a webRDS system for facilitating and tracking recruitment by Facebook and email. One-day diary surveys were conducted by applying webRDS among a convenience sample of Thai students. Students were asked to record numbers of contacts at different settings and self-reported influenza-like-illness symptoms, and to recruit four contacts whom they had met in the previous week. Contacts were asked to do the same to create a network tree of socially connected individuals. Correlations between linked individuals were analysed to investigate assortativity within networks. Results We reached up to 6 waves of contacts of initial respondents, using only non-material incentives. Forty-four (23.0%) of the initially approached students recruited one or more contacts. In total 257 persons participated, of which 168 (65.4%) were recruited by others. Facebook was the most popular recruitment option (45.1%). Strong assortative mixing was seen by age, gender and education, indicating a tendency of respondents to connect to contacts with similar characteristics. Random mixing was seen by reported number of daily contacts. Conclusions Despite methodological challenges (e.g. clustering among respondents and their contacts), applying RDS provides new insights in mixing patterns relevant for close-contact infections in real-world networks. Such information increases our knowledge of the transmission of respiratory infections within populations and can be used to improve existing modelling approaches. It is worthwhile to further develop and explore webRDS for the detection of clusters of respiratory symptoms in social networks. PMID:24416371
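Assortativity checks like those above can be reproduced on a recruitment tree with networkx; the tree and node attributes below are invented.

```python
import networkx as nx

tree = nx.Graph([(0, 1), (0, 2), (1, 3), (2, 4), (4, 5)])       # recruitment edges
nx.set_node_attributes(tree, {0: "F", 1: "F", 2: "M", 3: "F", 4: "M", 5: "M"}, "gender")
nx.set_node_attributes(tree, {0: 21, 1: 22, 2: 20, 3: 23, 4: 20, 5: 19}, "age")

print(nx.attribute_assortativity_coefficient(tree, "gender"))   # categorical mixing
print(nx.numeric_assortativity_coefficient(tree, "age"))        # numeric mixing
```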
NASA Astrophysics Data System (ADS)
Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel
2018-07-01
Accurate reservoir characterization is needed throughout the development of an oil and gas field study. It helps build 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behaviors. At a later stage of field development, reservoir characterization can also help decide which recovery techniques should be used for fluid extraction. In complex media, such as faulted reservoirs, predicting flow behavior within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults and which potential barriers to fluid flow exist. Solving these issues rests on accurate fault characterization, yet in most cases faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic data is not kept during seismic inversion and the further interpretation of its result. The goal of our study is, first, to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated by physical attributes extrapolated from well log data following the deposition mode; usually, however, a priori model building methods respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method for each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been directly used to model synthetic seismic in our case study. Comparisons of synthetic seismic show that our 3D fault network model gives much lower residuals than a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedance results obtained in the two cases, we see a better delineation of the Brent-reservoir compartments when using the 3D faulted a priori model built with our method.
River rehabilitation for the delivery of multiple ecosystem services at the river network scale.
Gilvear, David J; Spray, Chris J; Casas-Mulet, Roser
2013-09-15
This paper presents a conceptual framework and methodology to assist with optimising the outcomes of river rehabilitation in terms of the delivery of multiple ecosystem services, and the benefits they represent for humans, at the river network scale. The approach is applicable globally, but was initially devised in the context of a project critically examining opportunities and constraints on the delivery of river rehabilitation in Scotland. The highlighted spatial-temporal approach is specific to the river rehabilitation measure, the rehabilitation scale, the location on the stream network, the ecosystem service and the timescale, and could be used as initial scoping in the process of planning rehabilitation at the river network scale. The levels of service delivered are based on an expert-derived scoring system grounded in an understanding of how the rehabilitation measure assists in reinstating important geomorphological, hydrological and ecological processes, and hence intermediate or primary ecosystem function. The framework permits a "total long-term (>25 years) ecosystem service score" to be calculated, which is the cumulative result of the combined effect of the number and level of ecosystem services delivered over time. Trajectories over time for attaining the long-term ecosystem service score for each river rehabilitation measure are also given. Scores could also be weighted according to societal values and economic valuation. These scores could assist decision making in relation to river rehabilitation at the catchment scale by directing resources towards alternative scenarios. A case study of applying the methodology to the Eddleston Water in Scotland, using proposed river rehabilitation options for the catchment, is presented to demonstrate the value of the approach. Our overall assertion is that unless sound conceptual frameworks are developed that permit the river-network-scale ecosystem services of river rehabilitation to be evaluated as part of the process of river basin planning and management, the total benefit of river rehabilitation may well be reduced. River rehabilitation, together with a 'vision' and framework within which it can be developed, is fundamental to future success in river basin management. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mi, Ye
1998-12-01
This thesis focuses on theoretical and experimental investigations of identifying and characterizing vertical and horizontal flow regimes in two-phase flows. A methodology of flow regime identification with impedance-based neural network systems and a comprehensive model of vertical slug flow have been developed. Vertical slug flow has been extensively investigated and characterized with geometric, kinematic and hydrodynamic parameters. A multi-sensor impedance void-meter and a multi-sensor magnetic flowmeter were developed. The impedance void-meter was cross-calibrated with other reliable techniques for void fraction measurements. The performance of the impedance void-meter in measuring the void propagation velocity was evaluated against the drift flux model, and it was proved that the magnetic flowmeter is applicable to vertical slug flow measurements. Separable signals from these instruments allow us to unearth most characteristics of vertical slug flow. A methodology of vertical flow regime identification was developed. Supervised neural network and self-organizing neural network systems were employed. First, they were trained with results from an idealized simulation of impedance in a two-phase mixture. The simulation was mainly based on Mishima and Ishii's flow regime map, the drift flux model, and the newly developed model of slug flow. Then, these trained systems were tested with impedance signals. The results showed that the neural network systems were appropriate classifiers of vertical flow regimes, and that the theoretical models and experimental databases used in the simulation were reliable. Furthermore, this approach was applied successfully to horizontal flow regime identification. A comprehensive model was developed to predict important characteristics of vertical slug flow. It was realized that the void fraction of the liquid slug is determined by the relative liquid motion between the Taylor bubble tail and the Taylor bubble wake. Relying on this understanding and experimental results, a special relationship was built for the void fraction of the liquid slug, considerably improving its prediction. Experimental characterization of vertical slug flows was performed extensively with the impedance void-meter and the magnetic flowmeter. The theoretical predictions were compared with the experimental results, and the agreement between them is very satisfactory.
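The identification scheme in outline, as a hedged sketch: a supervised network is trained on features of simulated impedance (void fraction) signals and could then be applied to measured ones. The feature choices, regime labels and idealised simulation are schematic, not the thesis's actual models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
REGIMES = {"bubbly": (0.10, 0.02), "slug": (0.40, 0.25), "churn": (0.60, 0.12)}

def simulate(regime, n=512):
    # Idealised impedance-derived void fraction signal for one regime.
    base, wobble = REGIMES[regime]
    t = np.arange(n)
    return base + wobble * np.sin(2 * np.pi * t / 64) + 0.02 * rng.standard_normal(n)

def features(alpha):
    return [alpha.mean(), alpha.std(), np.abs(np.diff(alpha)).mean()]

X = [features(simulate(r)) for r in REGIMES for _ in range(100)]
y = [r for r in REGIMES for _ in range(100)]
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)
print(clf.predict([features(simulate("slug"))]))   # expected: ['slug']
```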
Women-Only (Homophilous) Networks Supporting Women Leaders in Education
ERIC Educational Resources Information Center
Coleman, Marianne
2010-01-01
Purpose: This paper aims to consider what all-women networks have, and might offer, in terms of support and development of women in educational leadership. Design/methodology/approach: The study draws on two case studies of such networks in education in England, the first, a regional network for women secondary school principals, and the other…