Science.gov

Sample records for analytic network process

  1. Analytically solvable processes on networks.

    PubMed

    Smilkov, Daniel; Kocarev, Ljupco

    2011-07-01

    We introduce a broad class of analytically solvable processes on networks. In special cases they reduce to the random walk and the consensus process, the two most basic processes on networks. Our class differs from previous models of interactions (such as the stochastic Ising model, cellular automata, infinite particle systems, and the voter model) in several ways, the two most important being (i) the model is analytically solvable even when the dynamical equation for each node may be different and the network may have an arbitrary finite graph and influence structure, and (ii) when local dynamics is described by the same evolution equation, the model is decomposable, with the equilibrium behavior of the system expressed as an explicit function of network topology and node dynamics. PMID:21867254
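
    The consensus process mentioned above can be sketched in a few lines. This is a generic illustration (not the authors' model) showing how, for a simple symmetric update rule, the equilibrium is an explicit function of network topology:

```python
import numpy as np

# Minimal sketch: discrete-time consensus on a small undirected graph.
# Each node repeatedly adopts the mean of its neighbours' states; the
# limit is the degree-weighted average of the initial states.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # toy adjacency matrix
deg = A.sum(axis=1)
P = A / deg[:, None]                        # row-stochastic update matrix

x0 = np.array([1.0, 0.0, 0.0, 0.0])         # initial node states
x = x0.copy()
for _ in range(200):
    x = P @ x                               # neighbour-averaging update

# Explicit function of topology: the degree-weighted average of x0.
predicted = deg @ x0 / deg.sum()
print(np.allclose(x, predicted))            # True
```

    The same closed form (stationary distribution of the random-walk matrix) is what makes such processes decomposable into topology and node dynamics.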

  2. Choosing a municipal landfill site by analytic network process

    NASA Astrophysics Data System (ADS)

    Banar, Mufide; Kose, Barbaros Murat; Ozkan, Aysun; Poyraz Acar, Ilgin

    2007-04-01

    In this study, the analytic network process (ANP), one of the multi-criteria decision-making (MCDM) tools, has been used to choose one of four alternative landfill sites for the city of Eskisehir, Turkey. For this purpose, the Super Decisions software has been used, and a benefits, opportunities, costs and risks (BOCR) analysis has been carried out to apply ANP. In the BOCR analysis, each alternative site has been evaluated in terms of its benefits, costs and risks; the opportunity cluster has been examined under the benefit cluster. In this context, technical, economic and social assessments have been made for the sanitary landfill site selection. The results have also been compared with those of the analytic hierarchy process (AHP), another MCDM technique, used in a previous study. In both methods, the current site was determined to be the most appropriate. These methods have not been commonly used in environmental engineering, but they are believed to be an important contribution for decision makers.
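
    As a hedged illustration of two building blocks used in ANP/BOCR studies like this one, the sketch below derives priorities from a hypothetical pairwise-comparison matrix via its principal eigenvector, then combines made-up benefit, opportunity, cost and risk scores with Saaty's multiplicative formula b*o/(c*r):

```python
import numpy as np

# Hypothetical Saaty-scale judgments comparing 3 criteria pairwise.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priorities = normalised principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(M)
w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
w /= w.sum()

# Hypothetical merit scores of one landfill site under each BOCR merit.
b, o, c, r = 0.40, 0.25, 0.30, 0.20
score = b * o / (c * r)     # multiplicative BOCR synthesis

print(w)                    # criterion weights, largest first
print(score)
```

    Sites can then be ranked by their BOCR scores; the numbers above are illustrative only, not values from the study.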

  3. Selecting public relations personnel of hospitals by analytic network process.

    PubMed

    Liao, Sen-Kuei; Chang, Kuei-Lun

    2009-01-01

    This study describes the use of the analytic network process (ANP) in the selection of hospital public relations personnel in Taiwan. Starting with interviews of 48 practitioners and executives in northern Taiwan, we collected selection criteria. We then retained the 12 critical criteria that were mentioned more than 40 times by these respondents: interpersonal skill, experience, negotiation, language, ability to follow orders, cognitive ability, adaptation to environment, adaptation to company, emotion, loyalty, attitude, and response. Finally, in discussion with 20 executives, we grouped these criteria into three perspectives to structure the hierarchy for hospital public relations personnel selection. After discussing with practitioners and executives, we find that the selection criteria are interrelated. The ANP, which incorporates interdependence relationships, is a new approach for multi-criteria decision-making. We therefore apply ANP to select the optimal public relations personnel for hospitals. An empirical study of public relations personnel selection problems in Taiwan hospitals is conducted to illustrate how the selection procedure works. PMID:19197656

  4. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper highlights a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluation of text entry methods in the literature mainly considers speed and accuracy. This study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision model is flexible enough to reflect the interdependencies of decision elements that are necessary to describe real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbation, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods. PMID:26360215

  5. INTEGRATED ENVIRONMENTAL ASSESSMENT OF THE MID-ATLANTIC REGION WITH ANALYTICAL NETWORK PROCESS

    EPA Science Inventory

    A decision analysis method for integrating environmental indicators was developed. This was a combination of Principal Component Analysis (PCA) and the Analytic Network Process (ANP). Being able to take into account interdependency among variables, the method was capable of ran...

  6. An analytic network process model for municipal solid waste disposal options

    SciTech Connect

    Khan, Sheeba; Faisal, Mohd Nishat

    2008-07-01

    The aim of this paper is to present an evaluation method that can aid decision makers in a local civic body to prioritize and select appropriate municipal solid waste disposal methods. We introduce a hierarchical network (hiernet) decision structure and apply the analytic network process (ANP) super-matrix approach to measure the relative desirability of disposal alternatives, using the value judgments of the various stakeholders as input. ANP is a flexible analytical approach that enables decision makers to find the best possible solution to complex problems by breaking a problem down into a systematic network of inter-relationships among the various levels and attributes. This method may therefore not only aid in selecting the best alternative but also help decision makers understand why an alternative is preferred over the other options.
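
    The super-matrix approach named above can be illustrated with a toy example: a column-stochastic supermatrix (hypothetical values, not the paper's) is raised to powers until every column converges to the same limit priority vector:

```python
import numpy as np

# Hypothetical weighted supermatrix over 3 decision elements.
# Each column sums to 1 (column-stochastic), as ANP requires.
W = np.array([[0.0, 0.5, 0.3],
              [0.6, 0.0, 0.7],
              [0.4, 0.5, 0.0]])

# Limit supermatrix: powers of W until the columns stabilise.
L = np.linalg.matrix_power(W, 100)
print(L[:, 0])   # limit priorities; identical in every column
```

    The shared column of the limit matrix is the global priority vector used to rank the disposal alternatives.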

  7. Assessment of wastewater treatment alternatives for small communities: An analytic network process approach.

    PubMed

    Molinos-Senante, María; Gómez, Trinidad; Caballero, Rafael; Hernández-Sancho, Francesc; Sala-Garrido, Ramón

    2015-11-01

    The selection of the most appropriate wastewater treatment (WWT) technology is a complex problem since many alternatives are available and many criteria are involved in the decision-making process. To deal with this challenge, the analytic network process (ANP) is applied for the first time to rank a set of seven WWT technology set-ups for secondary treatment in small communities. A major advantage of ANP is that it incorporates interdependent relationships between elements. Results illustrated that extensive technologies, constructed wetlands and pond systems are the most preferred alternatives by WWT experts. The sensitivity analysis performed verified that the ranking of WWT alternatives is very stable since constructed wetlands are almost always placed in the first position. This paper showed that ANP analysis is suitable to deal with complex decision-making problems, such as the selection of the most appropriate WWT system contributing to better understand the multiple interdependences among elements involved in the assessment. PMID:26119382

  8. Analytic network process model for sustainable lean and green manufacturing performance indicator

    NASA Astrophysics Data System (ADS)

    Aminuddin, Adam Shariff Adli; Nawawi, Mohd Kamal Mohd; Mohamed, Nik Mohd Zuki Nik

    2014-09-01

    Sustainable manufacturing is regarded as the most complex manufacturing paradigm to date as it holds the widest scope of requirements. In addition, its three major pillars of economic, environment and society though distinct, have some overlapping among each of its elements. Even though the concept of sustainability is not new, the development of the performance indicator still needs a lot of improvement due to its multifaceted nature, which requires integrated approach to solve the problem. This paper proposed the best combination of criteria en route a robust sustainable manufacturing performance indicator formation via Analytic Network Process (ANP). The integrated lean, green and sustainable ANP model can be used to comprehend the complex decision system of the sustainability assessment. The finding shows that green manufacturing is more sustainable than lean manufacturing. It also illustrates that procurement practice is the most important criteria in the sustainable manufacturing performance indicator.

  9. An environmental pressure index proposal for urban development planning based on the analytic network process

    SciTech Connect

    Gomez-Navarro, Tomas; Diaz-Martin, Diego

    2009-09-15

    This paper introduces a new approach to prioritizing urban planning projects according to their environmental pressure in an efficient and reliable way. It is based on the combination of three procedures: (i) the use of environmental pressure indicators, (ii) the aggregation of the indicators into an Environmental Pressure Index by means of the Analytic Network Process (ANP) method and (iii) the interpretation of the information obtained from the experts during the decision-making process. The method has been applied to a proposal for urban development of La Carlota airport in Caracas (Venezuela). Three options are currently under evaluation: a Health Club, a Residential Area and a Theme Park. After a selection process, the experts chose the following environmental pressure indicators as ANP criteria for the project life cycle: used land area, population density, energy consumption, water consumption and waste generation. Using goal-oriented questionnaires designed by the authors, the experts determined the importance of the criteria, the relationships among criteria, and the relationships between the criteria and the urban development alternatives. The resulting data showed that water consumption is the most important environmental pressure factor, and the Theme Park project is by far the urban development alternative that exerts the least environmental pressure on the area. The participating experts agreed that the technique proposed in this paper is useful and, for rank-ordering these alternatives, an improvement over traditional techniques such as environmental impact studies, life-cycle analysis, etc.

  10. Evaluating water management strategies in watersheds by new hybrid Fuzzy Analytical Network Process (FANP) methods

    NASA Astrophysics Data System (ADS)

    RazaviToosi, S. L.; Samani, J. M. V.

    2016-03-01

    Watersheds are considered hydrological units. Their other important aspects, such as economic, social and environmental functions, play crucial roles in sustainable development. The objective of this work is to develop methodologies to prioritize watersheds by considering different development strategies in the environmental, social and economic sectors. This ranking could play a significant role in management by identifying the most critical watersheds, where employing water management strategies is expected to bring about the greatest improvement. Due to the complex relations among different criteria, two new hybrid fuzzy ANP (Analytical Network Process) algorithms, based on fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and fuzzy max-min set methods, are used to provide a more flexible and accurate decision model. Five watersheds in Iran, named Oroomeyeh, Atrak, Sefidrood, Namak and Zayandehrood, are considered as alternatives. Based on long-term development goals, 38 water management strategies are defined as subcriteria in 10 clusters. The main advantage of the proposed methods is their ability to overcome uncertainty, accomplished by using fuzzy numbers in all steps of the algorithms. To validate the proposed method, the final results were compared with those obtained from the ANP algorithm, and the Spearman rank correlation coefficient was applied to find the similarity between the different ranking methods. Finally, a sensitivity analysis was conducted to investigate the influence of cluster weights on the final ranking.
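
    The Spearman rank correlation coefficient used above to compare rankings is straightforward to compute. The rankings below are hypothetical, not the study's results:

```python
import numpy as np

def spearman_rho(rank_a, rank_b):
    """Spearman's coefficient for two rankings with no ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    d = np.asarray(rank_a) - np.asarray(rank_b)
    n = len(d)
    return 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))

fanp_topsis = [1, 2, 3, 4, 5]   # hypothetical ranking of 5 watersheds
plain_anp   = [1, 3, 2, 4, 5]   # hypothetical ranking from plain ANP
print(spearman_rho(fanp_topsis, plain_anp))   # 0.9
```

    A coefficient near 1 indicates that the fuzzy hybrid methods and the plain ANP produce nearly identical orderings.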

  11. Process Analytical Chemistry.

    ERIC Educational Resources Information Center

    Callis, James B.; And Others

    1987-01-01

    Discusses process analytical chemistry as a discipline designed to supply quantitative and qualitative information about a chemical process. Encourages academic institutions to examine this field for employment opportunities for students. Describes the five areas of process analytical chemistry, including off-line, at-line, on-line, in-line, and…

  12. Analytic Networks in Music Task Definition.

    ERIC Educational Resources Information Center

    Piper, Richard M.

    For a student to acquire the conceptual systems of a discipline, the designer must reflect that structure or analytic network in his curriculum. The four networks identified for music and used in the development of the Southwest Regional Laboratory (SWRL) Music Program are the variable-value, the whole-part, the process-stage, and the class-member…

  13. Visual Analytics of Brain Networks

    PubMed Central

    Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming

    2014-01-01

    Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging. PMID:22414991

  14. Visual analytics of brain networks.

    PubMed

    Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming

    2012-05-15

    Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging. PMID:22414991

  15. An Analytic Network Process approach for the environmental aspect selection problem — A case study for a hand blender

    SciTech Connect

    Bereketli Zafeirakopoulos, Ilke; Erol Genevois, Mujde

    2015-09-15

    Life Cycle Assessment is a tool to assess, in a systematic way, the environmental aspects of a product, their potential environmental impacts and the resources used throughout the product's life cycle. It is widely accepted and considered one of the most powerful tools to support decision-making processes in ecodesign and sustainable production, used to identify the most problematic parts and life cycle phases of a product and to project future improvements. However, since Life Cycle Assessment is a cost- and time-intensive method, most companies, except large corporations, do not intend to carry out a full version of it. Especially for small and medium-sized enterprises, which do not have sufficient budget for, or knowledge of, sustainable production and ecodesign approaches, focusing only on the most important environmental aspect is unavoidable. In this respect, finding the right environmental aspect to work on is crucial for these companies. In this study, a multi-criteria decision-making methodology, the Analytic Network Process, is proposed to select the most relevant environmental aspect. The proposed methodology aims at providing a simplified environmental assessment to producers. It is applied to a hand blender, a member of the Electrical and Electronic Equipment family. The decision criteria for the environmental aspects and the relations of dependence among them are defined. The evaluation is made by the Analytic Network Process in order to create a realistic approach to the inter-dependencies among the criteria. The results are computed via the Super Decisions software. Finally, it is observed that the procedure is completed in less time, with less data, at lower cost and in a less subjective way than conventional approaches. - Highlights: • We present a simplified environmental assessment methodology to support LCA. • ANP is proposed to select the most relevant environmental aspect. • ANP deals well with the interdependencies between aspects and

  16. Disaster risk management in prospect mining area Blitar district, East Java, using microtremor analysis and ANP (analytical network processing) approach

    NASA Astrophysics Data System (ADS)

    Parwatiningtyas, Diyan; Ambarsari, Erlin Windia; Marlina, Dwi; Wiratomo, Yogi

    2014-03-01

    Indonesia has a wealth of natural assets, so large that it must be managed and utilized both by local government and by local communities, especially in the mining sector. However, mining activities can change the state of the surface layer of the earth, carrying a high disaster risk. This could threaten safety, disrupt human life, damage the environment, cause loss of property and have psychological impacts, as addressed by Law No. 24 of 2007. We therefore strive to manage and minimize the risk of mining disasters in the region by calculating the amplification factor (AF) from microtremor analysis based on the Kanai and Nakamura methods, with the decision system tested by ANP analysis. Based on the amplification factor and analytical network processing (ANP) results obtained, some points showed instability in the surface layer of the mining area, including sites TP-7, TP-8, TP-9 and TP-10 (Birowo2). In terms of structure, these locations are indicated as unstable because of a sloping surface layer, so the risk of landslides and earthquake damage is high. The other areas of the mine site can be considered stable.

  17. Disaster risk management in prospect mining area Blitar district, East Java, using microtremor analysis and ANP (analytical network processing) approach

    SciTech Connect

    Parwatiningtyas, Diyan; Ambarsari, Erlin Windia; Marlina, Dwi; Wiratomo, Yogi

    2014-03-24

    Indonesia has a wealth of natural assets, so large that it must be managed and utilized both by local government and by local communities, especially in the mining sector. However, mining activities can change the state of the surface layer of the earth, carrying a high disaster risk. This could threaten safety, disrupt human life, damage the environment, cause loss of property and have psychological impacts, as addressed by Law No. 24 of 2007. We therefore strive to manage and minimize the risk of mining disasters in the region by calculating the amplification factor (AF) from microtremor analysis based on the Kanai and Nakamura methods, with the decision system tested by ANP analysis. Based on the amplification factor and analytical network processing (ANP) results obtained, some points showed instability in the surface layer of the mining area, including sites TP-7, TP-8, TP-9 and TP-10 (Birowo2). In terms of structure, these locations are indicated as unstable because of a sloping surface layer, so the risk of landslides and earthquake damage is high. The other areas of the mine site can be considered stable.

  18. Networked analytical sample management system

    SciTech Connect

    Kerrigan, W.J.; Spencer, W.A.

    1986-01-01

    Since 1982, the Savannah River Laboratory (SRL) has operated a computer-controlled analytical sample management system. The system, programmed in COBOL, runs on the site IBM 3081 mainframe computer. The system provides for the following subtasks: sample logging, analytical method assignment, worklist generation, cost accounting, and results reporting. Within these subtasks the system functions in a time-sharing mode. Communications between subtasks are done overnight in a batch mode. The system currently supports management of up to 3000 samples a month. Each sample requires, on average, three independent methods. Approximately 100 different analytical techniques are available for customized input of data. The laboratory has implemented extensive computer networking using Ethernet. Electronic mail, RS/1, and online literature searches are in place. Based on our experience with the existing sample management system, we have begun a project to develop a second-generation system. The new system will utilize the panel designs developed for the present LIMS, incorporate more real-time features, and take advantage of the many commercial LIMS systems.

  19. An Introduction to Social Network Data Analytics

    NASA Astrophysics Data System (ADS)

    Aggarwal, Charu C.

    The advent of online social networks has been one of the most exciting events of this decade. Online social networks such as Twitter, LinkedIn, and Facebook have become increasingly popular, and a number of multimedia networks such as Flickr have also seen rising popularity in recent years. Many such social networks are extremely rich in content, and they typically contain a tremendous amount of content and linkage data which can be leveraged for analysis. The linkage data is essentially the graph structure of the social network and the communications between entities, whereas the content data contains the text, images and other multimedia data in the network. The richness of this network provides unprecedented opportunities for data analytics in the context of social networks. This book provides a data-centric view of online social networks, a topic which has been missing from much of the literature. This chapter provides an overview of the key topics in this field and their coverage in this book.

  20. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    PubMed

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers, so relevant studies on optimizing schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were taken as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for natural ecology planning of urban rivers can be made by the ANP method, which can provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimizing schemes for urban green space planning and design. PMID:26387349

  1. Analysis of land suitability for urban development in Ahwaz County in southwestern Iran using fuzzy logic and analytic network process (ANP).

    PubMed

    Malmir, Maryam; Zarkesh, Mir Masoud Kheirkhah; Monavari, Seyed Masoud; Jozi, Seyed Ali; Sharifi, Esmail

    2016-08-01

    The ever-increasing development of cities due to population growth and migration has led to unplanned constructions and great changes in urban spatial structure, especially the physical development of cities in unsuitable places, which requires conscious guidance and fundamental organization. It is therefore necessary to identify suitable sites for future development of cities and prevent urban sprawl as one of the main concerns of urban managers and planners. In this study, to determine the suitable sites for urban development in the county of Ahwaz, the effective biophysical and socioeconomic criteria (including 27 sub-criteria) were initially determined based on literature review and interviews with certified experts. In the next step, a database of criteria and sub-criteria was prepared. Standardization of values and unification of scales in map layers were done using fuzzy logic. The criteria and sub-criteria were weighted by analytic network process (ANP) in the Super Decision software. Next, the map layers were overlaid using weighted linear combination (WLC) in the GIS software. According to the research findings, the final land suitability map was prepared with five suitability classes of very high (5.86 %), high (31.93 %), medium (38.61 %), low (17.65 %), and very low (5.95 %). Also, in terms of spatial distribution, suitable lands for urban development are mainly located in the central and southern parts of the Ahwaz County. It is expected that integration of fuzzy logic and ANP model will provide a better decision support tool compared with other models. The developed model can also be used in the land suitability analysis of other cities. PMID:27376847
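
    The weighted linear combination (WLC) step described above reduces to a weighted sum of standardised criterion layers. This sketch uses synthetic rasters and hypothetical ANP weights, not the study's data:

```python
import numpy as np

# Synthetic stand-ins for fuzzy-standardised criterion layers in [0, 1].
rng = np.random.default_rng(0)
layers = rng.random((3, 4, 4))          # 3 criterion rasters, 4x4 cells
weights = np.array([0.5, 0.3, 0.2])     # hypothetical ANP weights, sum to 1

# WLC: per-cell weighted sum over the criterion axis.
suitability = np.tensordot(weights, layers, axes=1)

# Class the surface into five suitability levels (very low .. very high).
classes = np.digitize(suitability, [0.2, 0.4, 0.6, 0.8]) + 1
print(classes)
```

    In the study this overlay is done in GIS software; the thresholds and weights here are placeholders for the fuzzy memberships and ANP priorities the authors derived.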

  2. Analytical Computation of the Epidemic Threshold on Temporal Networks

    NASA Astrophysics Data System (ADS)

    Valdano, Eugenio; Ferreri, Luca; Poletto, Chiara; Colizza, Vittoria

    2015-04-01

    The time variation of contacts in a networked system may fundamentally alter the properties of spreading processes and affect the condition for large-scale propagation, as encoded in the epidemic threshold. Despite the great interest in the problem for the physics, applied mathematics, computer science, and epidemiology communities, a full theoretical understanding is still missing and currently limited to the cases where the time-scale separation holds between spreading and network dynamics or to specific temporal network models. We consider a Markov chain description of the susceptible-infectious-susceptible process on an arbitrary temporal network. By adopting a multilayer perspective, we develop a general analytical derivation of the epidemic threshold in terms of the spectral radius of a matrix that encodes both network structure and disease dynamics. The accuracy of the approach is confirmed on a set of temporal models and empirical networks and against numerical results. In addition, we explore how the threshold changes when varying the overall time of observation of the temporal network, so as to provide insights on the optimal time window for data collection of empirical temporal networked systems. Our framework is of both fundamental and practical interest, as it offers novel understanding of the interplay between temporal networks and spreading dynamics.
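
    The threshold quantity described above can be sketched for a toy temporal network: build each snapshot's propagator (1 - mu)I + lam*A_t for an SIS process, multiply over the observation window, and compare the spectral radius of the product with 1 (the snapshots and rates below are illustrative only, not the paper's data):

```python
import numpy as np

def spectral_radius_propagator(snapshots, lam, mu):
    """Spectral radius of the product of per-snapshot SIS propagators
    (1 - mu) I + lam * A_t, multiplied in time order."""
    n = snapshots[0].shape[0]
    P = np.eye(n)
    for A in snapshots:
        P = ((1 - mu) * np.eye(n) + lam * A) @ P
    return max(abs(np.linalg.eigvals(P)))

A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy snapshot 1
A2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], float)  # toy snapshot 2

rho = spectral_radius_propagator([A1, A2], lam=0.3, mu=0.4)
print(rho, "-> epidemic spreads" if rho > 1 else "-> dies out")
```

    The epidemic threshold is the condition rho = 1: above it the infection invades the temporal network, below it the infection dies out.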

  3. Analytical reasoning task reveals limits of social learning in networks.

    PubMed

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-04-01

    Social learning-by observing and copying others-is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an 'unreflective copying bias', which limits their social learning to the output, rather than the process, of their peers' reasoning-even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning. PMID:24501275

  4. Analytical reasoning task reveals limits of social learning in networks

    PubMed Central

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-01-01

    Social learning—by observing and copying others—is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an ‘unreflective copying bias’, which limits their social learning to the output, rather than the process, of their peers’ reasoning—even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning. PMID:24501275

  5. Extremal dynamics on complex networks: Analytic solutions

    NASA Astrophysics Data System (ADS)

    Masuda, N.; Goh, K.-I.; Kahng, B.

    2005-12-01

    The Bak-Sneppen model displaying punctuated equilibria in biological evolution is studied on random complex networks. By using the rate equation and the random walk approaches, we obtain the analytic solution of the fitness threshold x_c to be 1/(⟨k⟩_f + 1), where ⟨k⟩_f = ⟨k²⟩/⟨k⟩ (= ⟨k⟩) in the quenched (annealed) updating case, where ⟨kⁿ⟩ is the nth moment of the degree distribution. Thus, the threshold is zero (finite) for the degree exponent γ < 3 (γ > 3) for the quenched case in the thermodynamic limit. The theoretical value x_c fits well to the numerical simulation data in the annealed case only. Avalanche size, defined as the duration of successive mutations below the threshold, exhibits a critical behavior as its distribution follows a power law, P_a(s) ~ s^(-3/2).
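
The quoted threshold depends only on the first two moments of the degree distribution, so it can be evaluated directly for any degree sequence. A minimal sketch (not from the paper; the degree sequences are illustrative):

```python
# Sketch: evaluating the threshold x_c = 1 / (<k>_f + 1), with
# <k>_f = <k^2>/<k> in the quenched case and <k>_f = <k> in the
# annealed case, for an arbitrary degree sequence.

def moments(degrees):
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1, k2

def fitness_threshold(degrees, quenched=True):
    k1, k2 = moments(degrees)
    kf = k2 / k1 if quenched else k1
    return 1.0 / (kf + 1.0)

# Regular graph: <k^2>/<k> = <k>, so both cases coincide.
regular = [4] * 1000
assert fitness_threshold(regular, quenched=True) == fitness_threshold(regular, quenched=False) == 0.2

# A heterogeneous degree sequence lowers the quenched threshold
# relative to the annealed one, as the abstract's gamma < 3 case suggests.
hetero = [1] * 900 + [100] * 100
print(fitness_threshold(hetero, quenched=True), fitness_threshold(hetero, quenched=False))
```

With a broad enough degree distribution (⟨k²⟩ diverging in the thermodynamic limit), the quenched threshold tends to zero, matching the γ < 3 statement above.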

  6. Microsystem process networks

    DOEpatents

    Wegeng, Robert S [Richland, WA; TeGrotenhuis, Ward E [Kennewick, WA; Whyatt, Greg A [West Richland, WA

    2010-01-26

    Various aspects and applications of microsystem process networks are described. The design of many types of microsystems can be improved by ortho-cascading mass, heat, or other unit process operations. Microsystems having energetically efficient microchannel heat exchangers are also described. Detailed descriptions of numerous design features in microcomponent systems are also provided.

  7. Microsystem process networks

    DOEpatents

    Wegeng, Robert S.; TeGrotenhuis, Ward E.; Whyatt, Greg A.

    2007-09-18

    Various aspects and applications of microsystem process networks are described. The design of many types of microsystems can be improved by ortho-cascading mass, heat, or other unit process operations. Microsystems having energetically efficient microchannel heat exchangers are also described. Detailed descriptions of numerous design features in microcomponent systems are also provided.

  8. Microsystem process networks

    DOEpatents

    Wegeng, Robert S.; TeGrotenhuis, Ward E.; Whyatt, Greg A.

    2006-10-24

    Various aspects and applications of microsystem process networks are described. The design of many types of microsystems can be improved by ortho-cascading mass, heat, or other unit process operations. Microsystems having exergetically efficient microchannel heat exchangers are also described. Detailed descriptions of numerous design features in microcomponent systems are also provided.

  9. Epidemic processes in complex networks

    NASA Astrophysics Data System (ADS)

    Pastor-Satorras, Romualdo; Castellano, Claudio; Van Mieghem, Piet; Vespignani, Alessandro

    2015-07-01

    In recent years the research community has accumulated overwhelming evidence for the emergence of complex and heterogeneous connectivity patterns in a wide range of biological and sociotechnical systems. The complex properties of real-world networks have a profound impact on the behavior of equilibrium and nonequilibrium phenomena occurring in various systems, and the study of epidemic spreading is central to our understanding of the unfolding of dynamical processes in complex networks. The theoretical analysis of epidemic spreading in heterogeneous networks requires the development of novel analytical frameworks, and it has produced results of conceptual and practical relevance. A coherent and comprehensive review of the vast research activity concerning epidemic processes is presented, detailing the successful theoretical approaches as well as making their limits and assumptions clear. Physicists, mathematicians, epidemiologists, computer, and social scientists share a common interest in studying epidemic spreading and rely on similar models for the description of the diffusion of pathogens, knowledge, and innovation. For this reason, while focusing on the main results and the paradigmatic models in infectious disease modeling, the major results concerning generalized social contagion processes are also presented. Finally, the research activity at the forefront in the study of epidemic spreading in coevolving, coupled, and time-varying networks is reported.

  10. Analytical investigation of self-organized criticality in neural networks

    PubMed Central

    Droste, Felix; Do, Anne-Ly; Gross, Thilo

    2013-01-01

    Dynamical criticality has been shown to enhance information processing in dynamical systems, and there is evidence for self-organized criticality in neural networks. A plausible mechanism for such self-organization is activity-dependent synaptic plasticity. Here, we model neurons as discrete-state nodes on an adaptive network following stochastic dynamics. At a threshold connectivity, this system undergoes a dynamical phase transition at which persistent activity sets in. In a low-dimensional representation of the macroscopic dynamics, this corresponds to a transcritical bifurcation. We show analytically that adding activity-dependent rewiring rules, inspired by homeostatic plasticity, leads to the emergence of an attractive steady state at criticality and present numerical evidence for the system's evolution to such a state. PMID:22977096

  11. Using Analytic Hierarchy Process in Textbook Evaluation

    ERIC Educational Resources Information Center

    Kato, Shigeo

    2014-01-01

    This study demonstrates the application of the analytic hierarchy process (AHP) in English language teaching materials evaluation, focusing in particular on its potential for systematically integrating different components of evaluation criteria in a variety of teaching contexts. AHP is a measurement procedure wherein pairwise comparisons are made…

  12. An analytical framework for local feedforward networks.

    PubMed

    Weaver, S; Baird, L; Polycarpou, M

    1998-01-01

    Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To obtain a better understanding of these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed. These measures incorporate not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal, multilayer perceptron (MLP) networks that employ the backpropagation learning algorithm on the quadratic cost function, we address a familiar misconception that single-hidden-layer sigmoidal networks are inherently nonlocal by demonstrating that given a sufficiently large number of adjustable weights, single-hidden-layer sigmoidal MLP's exist that are arbitrarily local and retain the ability to approximate any continuous function on a compact domain. PMID:18252471
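
The locality notion above is easy to demonstrate with deliberately local basis functions. The sketch below is an illustration of (absence of) interference using Gaussian RBF features, not the paper's sigmoidal-MLP analysis: a gradient step at one input barely changes the output at a distant input.

```python
import math

def rbf_features(x, centers, width=0.5):
    """Localized Gaussian features, one per center."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def predict(w, feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

def sgd_step(w, feats, target, lr=0.5):
    """One gradient step on the quadratic cost for a single example."""
    err = predict(w, feats) - target
    return [wi - lr * err * fi for wi, fi in zip(w, feats)]

centers = [-2, -1, 0, 1, 2]
w = [0.0] * len(centers)
before = predict(w, rbf_features(2.0, centers))
w = sgd_step(w, rbf_features(-2.0, centers), target=1.0)  # learn at x = -2
after = predict(w, rbf_features(2.0, centers))             # probe at x = +2
print(abs(after - before))  # near zero: the update stayed local
```

A global feature set (e.g. non-saturating sigmoids) would give a large change at the probe point; the framework in the abstract quantifies exactly this kind of interference.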

  13. Controlling Contagion Processes in Activity Driven Networks

    NASA Astrophysics Data System (ADS)

    Liu, Suyu; Perra, Nicola; Karsai, Márton; Vespignani, Alessandro

    2014-03-01

    The vast majority of strategies aimed at controlling contagion processes on networks consider the connectivity pattern of the system either quenched or annealed. However, in the real world, many networks are highly dynamical and evolve, in time, concurrently with the contagion process. Here, we derive an analytical framework for the study of control strategies specifically devised for a class of time-varying networks, namely activity-driven networks. We develop a block variable mean-field approach that allows the derivation of the equations describing the coevolution of the contagion process and the network dynamic. We derive the critical immunization threshold and assess the effectiveness of three different control strategies. Finally, we validate the theoretical picture by simulating numerically the spreading process and control strategies in both synthetic networks and a large-scale, real-world, mobile telephone call data set.
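
A minimal sketch of the activity-driven setting analyzed above: at each timestep every node fires m links with probability given by its activity, and targeted control removes the most active nodes. All parameter values are illustrative, not the paper's.

```python
import random

def activity_driven_snapshot(activities, m, rng):
    """One timestep of the activity-driven model: each node activates
    with probability equal to its activity and fires m links to
    uniformly chosen targets (simplified sketch)."""
    n = len(activities)
    edges = set()
    for i, a in enumerate(activities):
        if rng.random() < a:
            for _ in range(m):
                j = rng.randrange(n)
                if j != i:
                    edges.add((min(i, j), max(i, j)))
    return edges

def immunize_top_activity(activities, fraction):
    """Targeted control: remove the most active fraction of nodes."""
    n = len(activities)
    ranked = sorted(range(n), key=lambda i: activities[i], reverse=True)
    return set(ranked[:int(fraction * n)])

rng = random.Random(7)
acts = [rng.uniform(0.01, 0.3) for _ in range(200)]
removed = immunize_top_activity(acts, 0.05)
snapshot = activity_driven_snapshot(acts, m=2, rng=rng)
live = [e for e in snapshot if e[0] not in removed and e[1] not in removed]
print(len(snapshot), len(live))
```

Because highly active nodes generate a disproportionate share of contacts, removing even a small fraction of them prunes many more edges per removed node than random immunization would.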

  14. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    SciTech Connect

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
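
The core neuro-analytic idea, a known analytic model plus a learned residual term, can be sketched in a few lines. Here an assumed quadratic residual and a one-parameter least-squares fit stand in for the patent's neural network and scaled-equation-error training:

```python
# Sketch of the neuro-analytic decomposition: an analytic model captures
# the known process physics, and a learned term absorbs the residual.
# The process and residual forms below are invented for illustration.

def analytic_model(x):
    return 2.0 * x                  # known process characteristics (assumed)

def true_process(x):
    return 2.0 * x + 0.5 * x * x   # includes unknown extra behaviour

xs = [0.1 * i for i in range(20)]
residuals = [true_process(x) - analytic_model(x) for x in xs]

# Fit residual ~ a * x^2 by least squares (closed form, one parameter);
# a neural network would play this role in the patented method.
a = sum(r * x * x for r, x in zip(residuals, xs)) / sum(x ** 4 for x in xs)

def neuro_analytic(x):
    return analytic_model(x) + a * x * x

print(round(a, 6))  # -> 0.5
```

The stochastic-adaptation step would then characterize the remaining model uncertainty, e.g. via a likelihood over the fit residuals, before on-line monitoring.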

  15. Analytic sequential methods for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2014-05-01

    In this paper, we propose an analytic sequential method for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. We have developed explicit formulae for quick determination of the parameters of the new detection algorithm.

  16. Exploring the Analytical Processes of Intelligence Analysts

    SciTech Connect

    Chin, George; Kuchar, Olga A.; Wolf, Katherine E.

    2009-04-04

    We present an observational case study in which we investigate and analyze the analytical processes of intelligence analysts. Participating analysts in the study carry out two scenarios where they organize and triage information, conduct intelligence analysis, report results, and collaborate with one another. Through a combination of artifact analyses, group interviews, and participant observations, we explore the space and boundaries in which intelligence analysts work and operate. We also assess the implications of our findings on the use and application of relevant information technologies.

  17. Network Analytical Tool for Monitoring Global Food Safety Highlights China

    PubMed Central

    Nepusz, Tamás; Petróczi, Andrea; Naughton, Declan P.

    2009-01-01

    Background The Beijing Declaration on food safety and security was signed by over fifty countries with the aim of developing comprehensive programs for monitoring food safety and security on behalf of their citizens. Currently, comprehensive systems for food safety and security are absent in many countries, and the systems that are in place have been developed on different principles allowing poor opportunities for integration. Methodology/Principal Findings We have developed a user-friendly analytical tool based on network approaches for instant customized analysis of food alert patterns in the European dataset from the Rapid Alert System for Food and Feed. Data taken from alert logs between January 2003 – August 2008 were processed using network analysis to i) capture complexity, ii) analyze trends, and iii) predict possible effects of interventions by identifying patterns of reporting activities between countries. The detector and transgressor relationships are readily identifiable between countries which are ranked using i) Google's PageRank algorithm and ii) the HITS algorithm of Kleinberg. The program identifies Iran, China and Turkey as the transgressors with the largest number of alerts. However, when characterized by impact, counting the transgressor index and the number of countries involved, China predominates as a transgressor country. Conclusions/Significance This study reports the first development of a network analysis approach to inform countries on their transgressor and detector profiles as a user-friendly aid for the adoption of the Beijing Declaration. The ability to instantly access the country-specific components of the several thousand annual reports will enable each country to identify the major transgressors and detectors within its trading network. Moreover, the tool can be used to monitor trading countries for improved detector/transgressor ratios. PMID:19688088
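
The ranking step can be reproduced in miniature: represent alerts as a directed detector-to-transgressor graph and rank nodes with PageRank by power iteration. The graph below is a hypothetical toy, not RASFF data:

```python
def pagerank(links, d=0.85, iters=100):
    """Minimal PageRank by power iteration on a dict {source: [targets]}.
    A generic sketch, not the paper's tool (which also applies HITS)."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for src, targets in links.items():
            if targets:
                share = d * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        # Dangling nodes (no outgoing alerts) spread their rank uniformly.
        dangling = d * sum(rank[v] for v in nodes if not links.get(v))
        for v in nodes:
            new[v] += dangling / n
        rank = new
    return rank

# Toy alert graph: detector country -> transgressor country (hypothetical).
alerts = {"A": ["X"], "B": ["X", "Y"], "C": ["X"]}
r = pagerank(alerts)
top = max(r, key=r.get)
print(top)  # the most-reported transgressor
```

Weighting edges by alert counts, as a real dataset would require, only changes the `share` computation.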

  18. Process-in-Network: A Comprehensive Network Processing Approach

    PubMed Central

    Urzaiz, Gabriel; Villa, David; Villanueva, Felix; Lopez, Juan Carlos

    2012-01-01

    A solid and versatile communications platform is very important in modern Ambient Intelligence (AmI) applications, which usually require the transmission of large amounts of multimedia information over a highly heterogeneous network. This article focuses on the concept of Process-in-Network (PIN), which is defined as the possibility that the network processes information as it is being transmitted, and introduces a more comprehensive approach than current network processing technologies. PIN can take advantage of waiting times in queues of routers, idle processing capacity in intermediate nodes, and the information that passes through the network. PMID:22969390

  19. Risk prioritisation using the analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Sum, Rabihah Md.

    2015-12-01

    This study demonstrated how to use the Analytic Hierarchy Process (AHP) to prioritise risks of an insurance company. AHP is a technique to structure complex problems by arranging elements of the problems in a hierarchy, assigning numerical values to subjective judgements on the relative importance of the elements and synthesizing the judgements to determine which elements have the highest priority. The study is motivated by wide application of AHP as a prioritisation technique in complex problems. It aims to show AHP is able to minimise some limitations of risk assessment technique using likelihood and impact. The study shows AHP is able to provide consistency check on subjective judgements, organise a large number of risks into a structured framework, assist risk managers to make explicit risk trade-offs, and provide an easy to understand and systematic risk assessment process.
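
The AHP computations the study relies on, deriving a priority vector from a pairwise-comparison matrix and checking judgement consistency, can be sketched as follows. The comparison values are hypothetical, and power iteration stands in for an exact eigensolver:

```python
def ahp_priorities(M, iters=50):
    """Principal eigenvector of a pairwise-comparison matrix by power
    iteration, plus Saaty's consistency ratio (sketch)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # lambda_max estimated from the Rayleigh-like ratios (Mw)_i / w_i.
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices
    ci = (lam - n) / (n - 1)
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

# Hypothetical judgements: risk A is 3x as important as B, 5x as C; B is 2x C.
M = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
weights, cr = ahp_priorities(M)
print([round(x, 3) for x in weights], round(cr, 3))
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgements; inconsistent matrices are sent back to the decision maker, which is the consistency check the abstract highlights.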

  20. Parallel processing in immune networks

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Bartolucci, Silvia; Galluzzi, Andrea; Guerra, Francesco; Moauro, Francesco

    2013-04-01

    In this work, we adopt a statistical-mechanics approach to investigate basic, systemic features exhibited by adaptive immune systems. The lymphocyte network made by B cells and T cells is modeled by a bipartite spin glass, where, following biological prescriptions, links connecting B cells and T cells are sparse. Interestingly, the dilution performed on links is shown to make the system able to orchestrate parallel strategies to fight several pathogens at the same time; this multitasking capability constitutes a remarkable, key property of immune systems as multiple antigens are always present within the host. We also define the stochastic process ruling the temporal evolution of lymphocyte activity and show its relaxation toward an equilibrium measure allowing statistical-mechanics investigations. Analytical results are compared with Monte Carlo simulations and signal-to-noise outcomes showing overall excellent agreement. Finally, within our model, a rationale for the experimentally well-evidenced correlation between lymphocytosis and autoimmunity is achieved; this sheds further light on the systemic features exhibited by immune networks.

  1. Is Analytic Information Processing a Feature of Expertise in Medicine?

    ERIC Educational Resources Information Center

    McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.

    2008-01-01

    Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…

  2. Controlling Contagion Processes in Time Varying Networks

    NASA Astrophysics Data System (ADS)

    Liu, Suyu; Perra, Nicola; Karsai, Marton; Vespignani, Alessandro

    2013-03-01

    The vast majority of strategies aimed at controlling contagion and spreading processes on networks consider the connectivity pattern of the system as quenched. In this paper, we consider the class of activity driven networks to analytically evaluate how different control strategies perform in time-varying networks. We consider the limit in which the evolution of the structure of the network and the spreading process are simultaneous yet independent. We analyze three control strategies based on nodes' activity patterns to decide the removal/immunization of nodes. We find that targeted strategies aimed at the removal of active nodes outperform by orders of magnitude the widely used random strategies. In time-varying networks, however, any finite-time observation of the network dynamics provides only incomplete information on the nodes' activity and does not allow the precise ranking of the most active nodes as needed to implement targeted strategies. Here we develop a control strategy that focuses on targeting the egocentric time-aggregated network of a small control group of nodes. The presented strategy allows the control of spreading processes by removing a fraction of nodes much smaller than the random strategy while at the same time limiting the observation time on the system.

  3. Discovery of Information Diffusion Process in Social Networks

    NASA Astrophysics Data System (ADS)

    Kim, Kwanho; Jung, Jae-Yoon; Park, Jonghun

    Information diffusion analysis in social networks is of significance since it enables us to deeply understand dynamic social interactions among users. In this paper, we introduce approaches to discovering the information diffusion process in social networks based on process mining. Process mining techniques are applied from three perspectives: social network analysis, process discovery and community recognition. We then present experimental results using real-life social network data. The proposed techniques are expected to be employed as new analytical tools in online social networks such as blogs and wikis for company marketers, politicians, news reporters and online writers.

  4. Parallel processing neural networks

    SciTech Connect

    Zargham, M.

    1988-09-01

    A model for Neural Networks based on a particular kind of Petri Net has been introduced. The model has been implemented in C and runs on the Sequent Balance 8000 multiprocessor; however, it can be directly ported to different multiprocessor environments. The potential advantages of using Petri Nets include: (1) the overall system is often easier to understand due to the graphical and precise nature of the representation scheme, and (2) the behavior of the system can be analyzed using Petri Net theory. Though the Petri Net is an obvious choice as a basis for the model, the basic Petri Net definition is not adequate to represent the neuronal system. To eliminate certain inadequacies, more information has been added to the Petri Net model. In the model, a token represents either a processor or a post-synaptic potential. Progress through a particular Neural Network is thus graphically depicted in the movement of the processor tokens through the Petri Net.

  5. Analytical framework for recurrence network analysis of time series

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Donner, Reik V.; Kurths, Jürgen

    2012-04-01

    Recurrence networks are a powerful nonlinear tool for time series analysis of complex dynamical systems. While there are already many successful applications ranging from medicine to paleoclimatology, a solid theoretical foundation of the method has still been missing so far. Here, we interpret an ɛ-recurrence network as a discrete subnetwork of a “continuous” graph with uncountably many vertices and edges corresponding to the system's attractor. This step allows us to show that various statistical measures commonly used in complex network analysis can be seen as discrete estimators of newly defined continuous measures of certain complex geometric properties of the attractor on the scale given by ɛ. In particular, we introduce local measures such as the ɛ-clustering coefficient, mesoscopic measures such as ɛ-motif density, path-based measures such as ɛ-betweennesses, and global measures such as ɛ-efficiency. This new analytical basis for the so far heuristically motivated network measures also provides an objective criterion for the choice of ɛ via a percolation threshold, and it shows that estimation can be improved by so-called node splitting invariant versions of the measures. We finally illustrate the framework for a number of archetypical chaotic attractors such as those of the Bernoulli and logistic maps, periodic and two-dimensional quasiperiodic motions, and for hyperballs and hypercubes by deriving analytical expressions for the novel measures and comparing them with data from numerical experiments. More generally, the theoretical framework put forward in this work describes random geometric graphs and other networks with spatial constraints, which appear frequently in disciplines ranging from biology to climate science.

  6. Accelerating Network Traffic Analytics Using Query-Driven Visualization

    SciTech Connect

    Bethel, E. Wes; Campbell, Scott; Dart, Eli; Stockinger, Kurt; Wu,Kesheng

    2006-07-29

    Realizing operational analytics solutions where large and complex data must be analyzed in a time-critical fashion entails integrating many different types of technology. This paper focuses on an interdisciplinary combination of scientific data management and visualization/analysis technologies targeted at reducing the time required for data filtering, querying, hypothesis testing and knowledge discovery in the domain of network connection data analysis. We show that use of compressed bitmap indexing can quickly answer queries in an interactive visual data analysis application, and compare its performance with two alternatives for serial and parallel filtering/querying on 2.5 billion records worth of network connection data collected over a period of 42 weeks. Our approach to visual network connection data exploration centers on two primary factors: interactive ad-hoc and multiresolution query formulation and execution over n dimensions, and visual display of the n-dimensional histogram results. This combination is applied in a case study to detect a distributed network scan and to then identify the set of remote hosts participating in the attack. Our approach is sufficiently general to be applied to a diverse set of data understanding problems as well as used in conjunction with a diverse set of analysis and visualization tools.
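
The core bitmap-index idea is compact enough to sketch: one bitmap per attribute value, with conjunctive queries answered by a bitwise AND. This illustrates only the query-answering step, not the compression used by the paper's indexes:

```python
# Sketch of bitmap-index filtering using Python ints as bitsets.

def build_bitmap_index(records, attr):
    """One bitmap per distinct value: bit i is set iff record i matches."""
    index = {}
    for i, rec in enumerate(records):
        index.setdefault(rec[attr], 0)
        index[rec[attr]] |= 1 << i
    return index

# Toy connection records (hypothetical, not the paper's dataset).
records = [
    {"port": 22, "proto": "tcp"},
    {"port": 80, "proto": "tcp"},
    {"port": 22, "proto": "udp"},
    {"port": 443, "proto": "tcp"},
]
by_port = build_bitmap_index(records, "port")
by_proto = build_bitmap_index(records, "proto")

# Conjunctive query "port == 22 AND proto == 'tcp'" is one bitwise AND.
hits = by_port[22] & by_proto["tcp"]
matches = [i for i in range(len(records)) if hits >> i & 1]
print(matches)  # -> [0]
```

At billions of records, the same AND over compressed bitmaps is what makes interactive, ad-hoc multidimensional filtering feasible.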

  7. Entropy-based heavy tailed distribution transformation and visual analytics for monitoring massive network traffic

    NASA Astrophysics Data System (ADS)

    Han, Keesook J.; Hodge, Matthew; Ross, Virginia W.

    2011-06-01

    For monitoring network traffic, there is an enormous cost in collecting, storing, and analyzing network traffic datasets. Data mining based network traffic analysis is of growing interest in the cyber security community, but is computationally expensive for finding correlations between attributes in massive network traffic datasets. To lower the cost and reduce computational complexity, it is desirable to perform feasible statistical processing on effective reduced datasets instead of on the original full datasets. Because of the dynamic behavior of network traffic, traffic traces exhibit mixtures of heavy tailed statistical distributions or overdispersion. Heavy tailed network traffic characterization and visualization are important and essential tasks in measuring network performance for the Quality of Service. However, heavy tailed distributions are limited in their ability to characterize real-time network traffic due to the difficulty of parameter estimation. The Entropy-Based Heavy Tailed Distribution Transformation (EHTDT) was developed to convert the heavy tailed distribution into a transformed distribution to find the linear approximation. The EHTDT linearization has the advantage of being amenable to characterizing and aggregating overdispersion of network traffic in real time. Results of applying the EHTDT for innovative visual analytics to real network traffic data are presented.

  8. Evaluation methodology for comparing memory and communication of analytic processes in visual analytics

    SciTech Connect

    Ragan, Eric D; Goodall, John R

    2014-01-01

    Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.

  9. Diffusive capture process on complex networks

    NASA Astrophysics Data System (ADS)

    Lee, Sungmin; Yook, Soon-Hyung; Kim, Yup

    2006-10-01

    We study the dynamical properties of a diffusing lamb captured by a diffusing lion on complex networks of various sizes N. We find that the lifetime ⟨T⟩ of a lamb scales as ⟨T⟩ ~ N and the survival probability S(N→∞, t) becomes finite on scale-free networks with degree exponent γ > 3. However, S(N, t) for γ < 3 has a long-living tail on tree-structured scale-free networks and decays exponentially on looped scale-free networks. This suggests that the second moment of the degree distribution ⟨k²⟩ is the relevant factor for the dynamical properties in the diffusive capture process. We numerically find that the normalized number of capture events at a node with degree k, n(k), decreases as n(k) ~ k^(-σ). When γ < 3, n(k) still increases anomalously for k ≈ k_max, where k_max is the maximum value of k in a given network of size N. We analytically show that n(k) satisfies the relation n(k) ~ k²P(k) for any degree distribution P(k) and that the total number of capture events N_tot is proportional to ⟨k²⟩, which causes the γ-dependent behavior of S(N, t) and ⟨T⟩.

  10. The influence of retrieval practice on metacognition: The contribution of analytic and non-analytic processes.

    PubMed

    Miller, Tyler M; Geraci, Lisa

    2016-05-01

    People may change their memory predictions after retrieval practice using naïve theories of memory and/or by using subjective experience - analytic and non-analytic processes, respectively. The current studies disentangled the contributions of each process. In one condition, learners studied paired-associates, made a memory prediction, completed a short run of retrieval practice and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggested non-analytic processes play a key role in participants' reducing their predictions after retrieval practice. PMID:26985881

  11. Controlling Contagion Processes in Time-Varying Networks

    NASA Astrophysics Data System (ADS)

    Perra, Nicola; Liu, Suyu; Karsai, Marton; Vespignani, Alessandro

    2014-03-01

    The vast majority of strategies aimed at controlling contagion processes on networks considers the connectivity pattern of the system as either quenched or annealed. However, in the real world many networks are highly dynamical and evolve in time concurrently to the contagion process. Here, we derive an analytical framework for the study of control strategies specifically devised for time-varying networks. We consider the removal/immunization of individual nodes according to their activity in the network and develop a block variable mean-field approach that allows the derivation of the equations describing the evolution of the contagion process concurrently to the network dynamic. We derive the critical immunization threshold and assess the effectiveness of the control strategies. Finally, we validate the theoretical picture by simulating numerically the information spreading process and control strategies in both synthetic networks and a large-scale, real-world mobile telephone call dataset.

  12. Visual analytics for multimodal social network analysis: a design study with social scientists.

    PubMed

    Ghani, Sohaib; Kwon, Bum Chul; Lee, Seungyoon; Yi, Ji Soo; Elmqvist, Niklas

    2013-12-01

    Social network analysis (SNA) is becoming increasingly concerned not only with actors and their relations, but also with distinguishing between different types of such entities. For example, social scientists may want to investigate asymmetric relations in organizations with strict chains of command, or incorporate non-actors such as conferences and projects when analyzing coauthorship patterns. Multimodal social networks are those where actors and relations belong to different types, or modes, and multimodal social network analysis (mSNA) is accordingly SNA for such networks. In this paper, we present a design study that we conducted with several social scientist collaborators on how to support mSNA using visual analytics tools. Based on an open-ended, formative design process, we devised a visual representation called parallel node-link bands (PNLBs) that splits modes into separate bands and renders connections between adjacent ones, similar to the list view in Jigsaw. We then used the tool in a qualitative evaluation involving five social scientists whose feedback informed a second design phase that incorporated additional network metrics. Finally, we conducted a second qualitative evaluation with our social scientist collaborators that provided further insights on the utility of the PNLBs representation and the potential of visual analytics for mSNA. PMID:24051769

  13. Information Network Model Query Processing

    NASA Astrophysics Data System (ADS)

    Song, Xiaopu

    Information Networking Model (INM) [31] is a novel database model for the management of real-world objects and relationships. It naturally and directly supports various kinds of static and dynamic relationships between objects. In INM, objects are networked through various natural and complex relationships. INM Query Language (INM-QL) [30] is designed to explore such information networks, retrieve information about schemas, instances, their attributes, relationships, and context-dependent information, and process query results in a user-specified form. An INM database management system has been implemented using Berkeley DB, and it supports INM-QL. This thesis is mainly focused on the implementation of the subsystem that is able to effectively and efficiently process INM-QL. The subsystem provides a lexical and syntactical analyzer of INM-QL, and it is able to choose appropriate evaluation strategies and index mechanisms to process queries in INM-QL without the user's intervention. It also uses an intermediate result structure to hold intermediate query results and other helper structures to reduce the complexity of query processing.

  14. Network command processing system overview

    NASA Technical Reports Server (NTRS)

    Nam, Yon-Woo; Murphy, Lisa D.

    1993-01-01

    The Network Command Processing System (NCPS) developed for the National Aeronautics and Space Administration (NASA) Ground Network (GN) stations is a spacecraft command system utilizing a MULTIBUS I/68030 microprocessor. This system was developed and implemented at ground stations worldwide to provide a Project Operations Control Center (POCC) with command capability for support of spacecraft operations such as the LANDSAT, Shuttle, Tracking and Data Relay Satellite, and Nimbus-7. The NCPS consolidates multiple modulation schemes for supporting various manned/unmanned orbital platforms. The NCPS interacts with the POCC and a local operator to process configuration requests, generate modulated uplink sequences, and inform users of the ground command link status. This paper presents the system functional description, hardware description, and the software design.

  15. Intersubjectivity and the creation of meaning in the analytic process.

    PubMed

    Maier, Christian

    2014-11-01

    By means of a clinical illustration, the author describes how the intersubjective exchanges involved in an analytic process facilitate the representation of affects and memories which have been buried in the unconscious or indeed have never been available to consciousness. As a result of projective identificatory processes in the analytic relationship, in this example the analyst falls into a situation of helplessness which connects with his own traumatic experiences. Then he gets into a formal regression of the ego and responds with a so-to-speak hallucinatory reaction, an internal image which enables him to keep the analytic process on track and, later on, to construct an early traumatic experience of the analysand. PMID:25331503

  16. An analytical study of various telecommunication networks using Markov models

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, M.; Jayamani, E.; Ezhumalai, P.

    2015-04-01

    The main aim of this paper is to examine performance issues in various telecommunication networks, applying queuing theory for better design and improved efficiency. The first part gives an analytical study of queues, quantifying the phenomenon of waiting lines through representative measures of performance such as average queue length (the average number of customers in the queue), average waiting time in the queue (the average time a customer waits), and average facility utilization (the proportion of time the service facility is in use). The second part, using a Matlab simulator, summarizes the findings of the investigations and describes a methodology to (a) compare the waiting time and average number of messages in the queue for M/M/1 and M/M/2 queues, (b) compare the performance of M/M/1 and M/D/1 queues, and (c) study the effect of increasing the number of servers on the blocking probability in the M/M/k/k queue model.
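The performance measures named above have standard closed forms for the simplest models. As a sketch (not code from the paper), the M/M/1 averages and the M/M/k/k blocking probability (the Erlang B formula) can be computed directly:

```python
import math

def mm1_metrics(lam, mu):
    """Mean number in system and mean time in system for an M/M/1 queue (requires lam < mu)."""
    rho = lam / mu                 # server utilization
    L = rho / (1.0 - rho)          # average number of customers in the system
    W = 1.0 / (mu - lam)           # average time in the system (Little's law: L = lam * W)
    return L, W

def erlang_b(k, a):
    """Blocking probability of an M/M/k/k loss system (Erlang B),
    computed with the standard recursion that avoids large factorials."""
    b = 1.0
    for n in range(1, k + 1):
        b = a * b / (n + a * b)
    return b
```

For example, with arrival rate 1 and service rate 2, utilization is 0.5 and both the mean number in system and mean time in system equal 1, consistent with Little's law.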

  17. Coupling entropy of co-processing model on social networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhanli

    2015-08-01

    The coupling entropy of a co-processing model on social networks is investigated in this paper. As one crucial factor determining the processing ability of nodes, the information flow with potential time lag is modeled by co-processing diffusion, which couples continuous-time processing with discrete diffusing dynamics. Exact results for the master equation and the stationary state are obtained to disclose how the stationary state forms. In order to understand the evolution of the co-processing and to design the optimal routing strategy according to the maximal entropic diffusion on networks, we propose the coupling entropy, which captures both the structural characteristics and the information propagation on the social network. Based on the analysis of the co-processing model, we analyze the impact of the structural factor and the information-propagating factor on the coupling entropy, where the analytical results fit well with the numerical ones on scale-free social networks.

  18. Analytical advantages of multivariate data processing. One, two, three, infinity?

    PubMed

    Olivieri, Alejandro C

    2008-08-01

    Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may make it possible, among other interesting issues, to assess whether this "1, 2, 3, infinity" situation of multivariate calibration is really true. PMID:18613646

  19. On the propagation of diel signals in river networks using analytic solutions of flow equations

    NASA Astrophysics Data System (ADS)

    Fonley, Morgan; Mantilla, Ricardo; Small, Scott J.; Curtu, Rodica

    2016-07-01

    Several authors have reported diel oscillations in streamflow records and have hypothesized that these oscillations are linked to evapotranspiration cycles in the watershed. The timing of oscillations in rivers, however, lags behind those of temperature and evapotranspiration in hillslopes. Two hypotheses have been put forth to explain the magnitude and timing of diel streamflow oscillations during low-flow conditions. The first suggests that delays between the peaks and troughs of streamflow and daily evapotranspiration are due to processes occurring in the soil as water moves toward the channels in the river network. The second posits that they are due to the propagation of the signal through the channels as water makes its way to the outlet of the basin. In this paper, we design and implement a theoretical model to test these hypotheses. We impose a baseflow signal entering the river network and use a linear transport equation to represent flow along the network. We develop analytic streamflow solutions for the case of uniform velocities in space over all river links. We then use our analytic solution to simulate streamflows along a self-similar river network for different flow velocities. Our results show that the amplitude and time delay of the streamflow solution are heavily influenced by transport in the river network. Moreover, our equations show that the geomorphology and topology of the river network play important roles in determining how amplitude and signal delay are reflected in streamflow signals. Finally, we have tested our theoretical formulation in the Dry Creek Experimental Watershed, where oscillations are clearly observed in streamflow records. We find that our solution produces streamflow values and fluctuations that are similar to those observed in the summer of 2011.
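As a toy illustration of why transport attenuates and delays a diel signal (a single linear store, much simpler than the authors' network-wide analytic solution), a sinusoidal input to dQ/dt = k (I(t) - Q) emerges with amplitude gain k / sqrt(k^2 + w^2) and time lag arctan(w/k) / w, where w is the diel angular frequency; k here is a hypothetical rate constant, not a calibrated parameter:

```python
import math

def diel_response(k_per_hr, period_hr=24.0):
    """Amplitude gain and time lag (hours) of a linear store dQ/dt = k * (I(t) - Q)
    driven by a sinusoidal input of the given period."""
    omega = 2.0 * math.pi / period_hr
    gain = k_per_hr / math.hypot(k_per_hr, omega)   # output amplitude / input amplitude
    lag_hr = math.atan2(omega, k_per_hr) / omega    # phase lag converted to hours
    return gain, lag_hr
```

When k equals the diel frequency, the output is attenuated to 1/sqrt(2) of the input amplitude and delayed by exactly 3 hours; slower stores (smaller k) attenuate and delay more, which is the qualitative behavior the paper attributes to transport.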

  20. Quantitative high throughput analytics to support polysaccharide production process development.

    PubMed

    Noyes, Aaron; Godavarti, Ranga; Titchener-Hooker, Nigel; Coffman, Jonathan; Mukhopadhyay, Tarit

    2014-05-19

    The rapid development of purification processes for polysaccharide vaccines is constrained by a lack of analytical tools: current technologies for the measurement of polysaccharide recovery and process-related impurity clearance are complex, time-consuming, and generally not amenable to high throughput process development (HTPD). HTPD is envisioned to be central to the improvement of existing polysaccharide manufacturing processes through the identification of critical process parameters that potentially impact the quality attributes of the vaccine and to the development of de novo processes for clinical candidates, across the spectrum of downstream processing. The availability of a fast and automated analytics platform will expand the scope, robustness, and evolution of Design of Experiment (DOE) studies. This paper details recent advances in improving the speed, throughput, and success of in-process analytics at the micro-scale. Two methods, based on modifications of existing procedures, are described for the rapid measurement of polysaccharide titre in microplates without the need for heating steps. A simplification of a commercial endotoxin assay is also described that features a single measurement at room temperature. These assays, along with existing assays for protein and nucleic acids, are qualified for deployment in the high throughput screening of polysaccharide feedstreams. Assay accuracy, precision, robustness, interference, and ease of use are assessed and described. In combination, these assays are capable of measuring the product concentration and impurity profile of a microplate of 96 samples in less than one day. This body of work relies on the evaluation of a combination of commercially available and clinically relevant polysaccharides to ensure maximum versatility and reactivity of the final assay suite. Together, these advancements reduce overall process time by up to 30-fold and significantly reduce sample volume over current practices.

  1. Functional Analytic Psychotherapy for Interpersonal Process Groups: A Behavioral Application

    ERIC Educational Resources Information Center

    Hoekstra, Renee

    2008-01-01

    This paper is an adaptation of Kohlenberg and Tsai's work, Functional Analytical Psychotherapy (1991), or FAP, to group psychotherapy. This author applied a behavioral rationale for interpersonal process groups by illustrating key points with a hypothetical client. Suggestions are also provided for starting groups, identifying goals, educating…

  2. Interpersonal Processes in Psychoanalytic, Cognitive Analytical and Cognitive Behavioural Therapy.

    ERIC Educational Resources Information Center

    Habicht, Manuela H.

    The aim of the review was to compare interpersonal processes in psychoanalytic therapy, cognitive analytical therapy, and cognitive-behavioral therapy. Since the emphasis is on psychodynamic therapy, Freud's conceptualization of the phenomenon of transference is discussed. Countertransference as an unconscious and defensive reaction to the…

  3. Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process

    ERIC Educational Resources Information Center

    Tang, Hui-Wen Vivian

    2011-01-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative…

  4. Using the Analytic Hierarchy Process to Analyze Multiattribute Decisions.

    ERIC Educational Resources Information Center

    Spires, Eric E.

    1991-01-01

    The use of the Analytic Hierarchy Process (AHP) in assisting researchers to analyze decisions is discussed. The AHP is compared with other decision-analysis techniques, including multiattribute utility measurement, conjoint analysis, and general linear models. Insights that AHP can provide are illustrated with data gathered in an auditing context.…

  5. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    cortex by the application of lateral interactions during the learning phase. The organization of the mature network is compared to that found in the macaque monkey by several analytical tests. The capacity of the network to process images is investigated. By a method of reconstructing the input images in terms of V1 activities, the simulations show that images can be faithfully represented in V1 by the proposed network. The signal-to-noise ratio of the image is improved by the representation, and compression ratios of well over two-hundred are possible. Lateral interactions between V1 neurons sharpen their orientational tuning. We further study the dynamics of the processing, showing that the rate of decrease of the error of the reconstruction is maximized for the receptive fields used. Lastly, we employ a Fokker-Planck equation for a more detailed prediction of the error value vs. time. The Fokker-Planck equation for an underdamped system with a driving force is derived, yielding an energy-dependent diffusion coefficient which is the integral of the spectral densities of the force and the velocity of the system. The theory is applied to correlated noise activation and resonant activation. Simulation results for the error of the network vs time are compared to the solution of the Fokker-Planck equation.

  6. An analytical approach to photonic reservoir computing - a network of SOA's - for noisy speech recognition

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Abiri, Ebrahim; Dehyadegari, Louiza

    2013-10-01

    This paper investigates a photonic reservoir computing approach to optical speech recognition on an isolated digit recognition task. An analytical approach to photonic reservoir computing is drawn on to decrease computation time compared to numerical methods, which is very important when processing large signals such as speech. It is also observed that adjusting the reservoir parameters, along with a good nonlinear mapping of the input signal into the reservoir, boosts recognition accuracy. Perfect recognition accuracy (i.e. 100%) can be achieved for noiseless speech signals. For noisy signals with signal-to-noise ratios of 0-10 dB, however, the observed accuracy ranged between 92% and 98%. In fact, the photonic reservoir demonstrated a 9-18% improvement compared to classical reservoir networks with hyperbolic tangent nodes.
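The classical baseline mentioned in the abstract, a reservoir with hyperbolic tangent nodes, can be sketched as a minimal echo state network. Everything below is illustrative: the sizes, spectral radius, and the toy task (recalling the input one step back) are assumptions, not the paper's SOA network or its digit-recognition task:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Random recurrent weights rescaled to a given spectral radius, plus input weights."""
    W = rng.standard_normal((n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    return W_in, W

def run_reservoir(W_in, W, u):
    """Drive tanh nodes with the input sequence u (T x n_in); return states (T x n_res)."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Linear readout weights by ridge regression (the only trained part of an ESN)."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# toy task: reconstruct the input delayed by one step from the reservoir state
u = rng.uniform(-1.0, 1.0, (500, 1))
states = run_reservoir(*make_reservoir(1, 50), u)
W_out = train_readout(states[1:], u[:-1])
mse = float(np.mean((states[1:] @ W_out - u[:-1]) ** 2))
```

Only the readout is trained; the random recurrent layer is fixed, which is what makes reservoir computing attractive for hardware (photonic or otherwise) substrates.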

  7. Automated refinement and inference of analytical models for metabolic networks

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael D.; Vallabhajosyula, Ravishankar R.; Jenkins, Jerry W.; Hood, Jonathan E.; Soni, Abhishek S.; Wikswo, John P.; Lipson, Hod

    2011-10-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model--suggesting nonlinear terms and structural modifications--or even constructing a new model that agrees with the system's time series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real time.
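The selection step, designing experiments so that candidate models' predictions disagree, can be illustrated with a deliberately simple sketch. The two one-dimensional candidate models below are hypothetical stand-ins, not the yeast glycolysis system:

```python
import numpy as np

def simulate(f, x0, dt=0.01, steps=200):
    """Forward-Euler integration of dx/dt = f(x) from x0; returns the final state."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

# two hypothetical candidate models proposed for the same observed decay
model_a = lambda x: -x            # linear decay
model_b = lambda x: -x ** 3       # cubic decay

# "design the experiment": pick the initial condition where the candidates'
# final-state predictions disagree the most, then run only that experiment
grid = np.linspace(0.1, 2.0, 50)
disagreement = [abs(simulate(model_a, x0) - simulate(model_b, x0)) for x0 in grid]
best_x0 = float(grid[int(np.argmax(disagreement))])
```

Measuring the real system at `best_x0` then discriminates between the candidates with the fewest experiments, which is the intuition behind the algorithm's active selection loop.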

  8. Automated refinement and inference of analytical models for metabolic networks

    PubMed Central

    Schmidt, Michael D; Vallabhajosyula, Ravishankar R; Jenkins, Jerry W; Hood, Jonathan E; Soni, Abhishek S; Wikswo, John P; Lipson, Hod

    2013-01-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model – suggesting nonlinear terms and structural modifications – or even constructing a new model that agrees with the system’s time-series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real-time. PMID:21832805

  9. Cooperative spreading processes in multiplex networks

    NASA Astrophysics Data System (ADS)

    Wei, Xiang; Chen, Shihua; Wu, Xiaoqun; Ning, Di; Lu, Jun-an

    2016-06-01

    This study is concerned with the dynamic behaviors of epidemic spreading in multiplex networks. A model composed of two interacting complex networks is proposed to describe cooperative spreading processes, wherein the virus spreading in one layer can penetrate into the other to promote the spreading process. The global epidemic threshold of the model is smaller than the epidemic thresholds of the corresponding isolated networks. Thus, global epidemic onset arises in the interacting networks even though an epidemic onset does not arise in each isolated network. Simulations verify the analysis results and indicate that cooperative spreading processes in multiplex networks enhance the final infection fraction.
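The threshold claim can be checked against the standard quenched mean-field result that the SIS epidemic threshold is the inverse of the largest adjacency eigenvalue; for a multiplex, the supra-adjacency matrix (layers on the diagonal, replica couplings off it) always has a larger leading eigenvalue than either layer alone. The two small layers below are hypothetical examples, not from the paper:

```python
import numpy as np

def sis_threshold(adj):
    """Quenched mean-field SIS epidemic threshold: 1 / largest adjacency eigenvalue."""
    return 1.0 / np.linalg.eigvalsh(adj).max()

def supra_adjacency(A, B):
    """Two-layer multiplex: each node is also coupled to its replica in the other layer."""
    n = A.shape[0]
    I = np.eye(n)
    return np.block([[A, I], [I, B]])

# two small hypothetical layers on the same 5 nodes: a ring and a star
ring = np.roll(np.eye(5), 1, axis=1)
ring = ring + ring.T
star = np.zeros((5, 5))
star[0, 1:] = 1.0
star[1:, 0] = 1.0

t_ring = sis_threshold(ring)
t_star = sis_threshold(star)
t_multiplex = sis_threshold(supra_adjacency(ring, star))
```

The coupled threshold comes out strictly below either isolated layer's threshold, matching the abstract's observation that a global onset can occur even when neither isolated layer sustains one.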

  10. Heuristic and analytic processing in online sports betting.

    PubMed

    d'Astous, Alain; Di Gaspero, Marc

    2015-06-01

    This article presents the results of two studies that examine the occurrence of heuristic (i.e., intuitive and fast) and analytic (i.e., deliberate and slow) processes among people who engage in online sports betting on a regular basis. The first study was qualitative and was conducted with a convenience sample of 12 regular online sports gamblers who described the processes by which they arrive at a sports betting decision. The results of this study showed that betting online on sports events involves a mix of heuristic and analytic processes. The second study consisted of a survey of 161 online sports gamblers in which performance in terms of monetary gains, experience in online sports betting, propensity to collect and analyze relevant information prior to betting, and use of bookmaker odds were measured. This study showed that heuristic and analytic processes act as mediators of the relationship between experience and performance. The findings stemming from these two studies give some insights into gamblers' modes of thinking and behaviors in an online sports betting context and show the value of the dual mediation process model for research that looks at gambling activities from a judgment and decision making perspective. PMID:24390714

  11. Inferring network topology via the propagation process

    NASA Astrophysics Data System (ADS)

    Zeng, An

    2013-11-01

    Inferring the network topology from the dynamics is a fundamental problem, with wide applications in geology, biology, and even counter-terrorism. Based on the propagation process, we present a simple method to uncover the network topology. A numerical simulation on artificial networks shows that our method enjoys a high accuracy in inferring the network topology. We find that the infection rate in the propagation process significantly influences the accuracy, and that each network corresponds to an optimal infection rate. Moreover, the method generally works better in large networks. These findings are confirmed in both real social and nonsocial networks. Finally, the method is extended to directed networks, and a similarity measure specific for directed networks is designed.
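A minimal sketch of topology inference from spreading data, simpler than the paper's method, is to run many cascades and score node pairs by how often their infection times fall one step apart. The independent-cascade-style dynamics and the 4-node path used as ground truth are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def si_cascade(adj, beta, source):
    """One cascade: each newly infected node gets a single chance, with probability
    beta, to infect each susceptible neighbor on the next step. Returns infection times."""
    n = adj.shape[0]
    t = np.full(n, np.inf)
    t[source] = 0.0
    frontier = [source]
    step = 0
    while frontier:
        step += 1
        new = []
        for i in frontier:
            for j in np.flatnonzero(adj[i]):
                if np.isinf(t[j]) and rng.random() < beta:
                    t[j] = step
                    new.append(j)
        frontier = new
    return t

def infer_scores(adj, beta=0.5, n_runs=400):
    """Score node pairs (i < j) by how often their infection times are one step apart."""
    n = adj.shape[0]
    score = np.zeros((n, n))
    for _ in range(n_runs):
        t = si_cascade(adj, beta, int(rng.integers(n)))
        for i in range(n):
            for j in range(i + 1, n):
                if np.isfinite(t[i]) and np.isfinite(t[j]) and abs(t[i] - t[j]) == 1.0:
                    score[i, j] += 1
    return score

# hypothetical ground truth: a 4-node path 0-1-2-3
path = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (2, 3)]:
    path[a, b] = path[b, a] = 1.0
scores = infer_scores(path)
```

On this toy graph the true edges accumulate far higher scores than the non-adjacent pairs, so thresholding the score matrix recovers the topology; as the abstract notes, the infection rate strongly affects how well such scores separate.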

  12. Analytic hierarchy process (AHP) as a tool in asset allocation

    NASA Astrophysics Data System (ADS)

    Zainol Abidin, Siti Nazifah; Mohd Jaffar, Maheran

    2013-04-01

    Allocating capital investment across different assets is the best way to balance risk and reward, and can prevent the loss of large amounts of money. Thus, the aim of this paper is to help investors make wise investment decisions in asset allocation. This paper proposes modifying and adapting the Analytic Hierarchy Process (AHP) model. The AHP model is widely used in various fields of study related to decision making. The results of the case studies show that the proposed model can categorize stocks and determine the portion of capital investment. Hence, it can assist investors in the decision-making process and reduce the risk of loss in stock market investment.
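The core AHP computation, priority weights from the principal eigenvector of a reciprocal pairwise-comparison matrix plus Saaty's consistency check, can be sketched as follows. The judgment matrix comparing three assets is a hypothetical example, not data from the paper:

```python
import numpy as np

RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random consistency indices

def ahp_weights(pairwise):
    """Priority weights from the principal eigenvector of a reciprocal
    pairwise-comparison matrix (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def consistency_ratio(lam_max, n):
    """CR = CI / RI; judgments are conventionally acceptable when CR < 0.1."""
    ci = (lam_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

# hypothetical judgments comparing three assets A, B, C
# (A is moderately preferred to B, strongly preferred to C, etc.)
P = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])
w, lam_max = ahp_weights(P)
cr = consistency_ratio(lam_max, 3)
```

The normalized weights would then set the portion of capital allocated to each asset, with the consistency ratio guarding against self-contradictory judgments.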

  13. Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process

    SciTech Connect

    Sanfilippo, Antonio P.; Nibbs, Faith G.

    2007-08-24

    While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on the knowledge of the subcultures, Anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood of their constituents to engage in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.

  14. Analytical and experimental study on complex compressed air pipe network

    NASA Astrophysics Data System (ADS)

    Gai, Yushou; Cai, Maolin; Shi, Yan

    2015-09-01

    To analyze the working characteristics of complex compressed air networks, numerical methods based on finite element technology or intelligent algorithms are widely used; however, their effectiveness is limited. In this paper, to provide a new method for optimizing the design and the air supply strategy of a complex compressed air pipe network, a novel method to analyze the topology structure of the compressed air flow in the pipe network is first proposed, using a matrix to describe the topology structure of the flow. Moreover, based on an analysis of the pressure loss in the pipe network, the relationship between the pressure and the flow of the compressed air is derived, and a method for predicting pressure fluctuation and air flow in a segment of a complex pipe network is proposed. Finally, to inspect the effectiveness of the method, an experiment with a complex network is designed, and the pressure and flow of air in the network are measured and studied. The results show that the predictions of the proposed method agree well with the experimental results, which verifies the air flow prediction method for complex pipe networks. This research offers a new way to analyze compressed air networks and to predict the fluctuation of the pressure in a segment from the flow of compressed air, and vice versa.
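The pressure-flow relationship for a single segment is commonly modeled with the Darcy-Weisbach equation. The sketch below uses that standard formula with Haaland's explicit friction-factor fit and default properties for air; it is a generic textbook relation, not the paper's network model:

```python
import math

def pressure_drop(q_m3s, d_m, l_m, rho=1.2, mu=1.8e-5, eps=1.5e-6):
    """Darcy-Weisbach pressure drop (Pa) over one pipe segment, for air by default.
    Laminar flow uses f = 64/Re; turbulent flow uses Haaland's explicit fit."""
    area = math.pi * d_m ** 2 / 4.0
    v = q_m3s / area                          # mean velocity
    re = rho * v * d_m / mu                   # Reynolds number
    if re < 2300.0:
        f = 64.0 / re                         # laminar friction factor
    else:
        f = (-1.8 * math.log10((eps / d_m / 3.7) ** 1.11 + 6.9 / re)) ** -2
    return f * (l_m / d_m) * rho * v ** 2 / 2.0
```

Chaining such segment relations along the topology matrix described in the abstract is the kind of pressure-flow bookkeeping a network-level predictor needs; in the laminar limit the formula reduces exactly to the Hagen-Poiseuille law.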

  15. Analytic prediction of sidelobe statistics for matched-field processing

    NASA Astrophysics Data System (ADS)

    Tracey, Brian; Lee, Nigel; Zurk, Lisa

    2002-05-01

    Underwater source localization using matched-field processing (MFP) is complicated by the relatively high sidelobe levels characteristic of MFP ambiguity surfaces. An understanding of sidelobe statistics is expected to aid in designing robust detection and localization algorithms. MFP sidelobe levels are influenced by the underwater channel, array design, and mismatch between assumed and actual environmental parameters. In earlier work [J. Acoust. Soc. Am. 108, 2645 (2000)], a statistical approach was used to derive analytic expressions for the probability distribution function of the Bartlett ambiguity surface. The distribution was shown to depend on the orthogonality of the mode shapes as sampled by the array. Extensions to a wider class of array geometries and to broadband processing will be shown. Numerical results demonstrating the accuracy of the analytic results and exploring their range of validity will be presented. Finally, analytic predictions will be compared to data from the Santa Barbara Channel experiment. [Work sponsored by DARPA under Air Force Contract F19628-00-C0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the Department of Defense.]
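The Bartlett processor behind these ambiguity surfaces is simple to state: normalized power |w^H d|^2 / (|w|^2 |d|^2) evaluated over candidate replica vectors w. The sketch below uses random unit-norm complex vectors as stand-in replicas rather than modeled acoustic mode fields, so only the mechanics (peak at the matched replica, sidelobes elsewhere) carry over:

```python
import numpy as np

rng = np.random.default_rng(7)

def bartlett(replicas, d):
    """Normalized Bartlett power B = |w^H d|^2 / (|w|^2 |d|^2) for each
    candidate replica vector w (rows of `replicas`); B = 1 at a perfect match."""
    num = np.abs(replicas.conj() @ d) ** 2
    den = np.sum(np.abs(replicas) ** 2, axis=1) * np.vdot(d, d).real
    return num / den

# 40 random unit-norm complex vectors stand in for modeled replica fields
n_cand, n_sensors = 40, 16
R = rng.standard_normal((n_cand, n_sensors)) + 1j * rng.standard_normal((n_cand, n_sensors))
R /= np.linalg.norm(R, axis=1, keepdims=True)

true = 17                                     # hypothetical true source index
noise = 0.05 * (rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors))
d = R[true] + noise
surface = bartlett(R, d)
```

By Cauchy-Schwarz the surface never exceeds 1; the statistics of the off-peak values (the sidelobes) are what the abstract's analytic expressions characterize.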

  16. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.

  17. H CANYON PROCESSING IN CORRELATION WITH FH ANALYTICAL LABS

    SciTech Connect

    Weinheimer, E.

    2012-08-06

    Management of radioactive chemical waste can be a complicated business. H Canyon and F/H Analytical Labs are two facilities at the Savannah River Site in Aiken, SC that are at the forefront. In fact, H Canyon is the only large-scale radiochemical processing facility in the United States, and this processing is enhanced by the aid provided by F/H Analytical Labs. As H Canyon processes incoming materials, F/H Labs provide support through a variety of chemical analyses. Necessary checks of the chemical makeup, processing, and accountability of the samples taken from H Canyon process tanks are performed at the labs, along with further checks on waste leaving the canyon after processing. Used nuclear material taken in by the canyon is actually not waste: only a small portion of the radioactive material itself is consumed in nuclear reactors. As a result, various radioactive elements such as Uranium, Plutonium and Neptunium are commonly found in waste and may be useful to recover. Specific processing is needed to allow for separation of these products from the waste. This is H Canyon's specialty. Furthermore, H Canyon has the capacity to initiate the process for weapons-grade nuclear material to be converted into nuclear fuel. This is one of the main campaigns being set up for the fall of 2012. Once usable material is separated and purified of impurities such as fission products, it can be converted to an oxide and ultimately turned into commercial fuel. The processing of weapons-grade material for commercial fuel is important in the necessary disposition of plutonium. Another processing campaign to start in the fall in H Canyon involves the reprocessing of used nuclear fuel for disposal in improved containment units. The importance of this campaign involves the proper disposal of nuclear waste in order to ensure the safety and well-being of future generations and the environment. As processing proceeds in the fall, H Canyon will have a substantial

  18. Analyte species and concentration identification using differentially functionalized microcantilever arrays and artificial neural networks

    SciTech Connect

    Senesac, Larry R; Datskos, Panos G; Sepaniak, Michael J

    2006-01-01

    In the present work, we have performed analyte species and concentration identification using an array of ten differentially functionalized microcantilevers coupled with a back-propagation artificial neural network pattern recognition algorithm. The array consists of ten nanostructured silicon microcantilevers functionalized by polymeric and gas chromatography phases and macrocyclic receptors as spatially dense, differentially responding sensing layers for identification and quantitation of individual analyte(s) and their binary mixtures. The array response (i.e. cantilever bending) to analyte vapor was measured by an optical readout scheme and the responses were recorded for a selection of individual analytes as well as several binary mixtures. An artificial neural network (ANN) was designed and trained to recognize not only the individual analytes and binary mixtures, but also to determine the concentration of individual components in a mixture. To the best of our knowledge, ANNs have not been applied to microcantilever array responses previously to determine concentrations of individual analytes. The trained ANN correctly identified the eleven test analyte(s) as individual components, most with probabilities greater than 97%, whereas it did not misidentify an unknown (untrained) analyte. Demonstrated unique aspects of this work include an ability to measure binary mixtures and provide both qualitative (identification) and quantitative (concentration) information with array-ANN-based sensor methodologies.
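A minimal stand-in for the array-ANN pipeline can be sketched with synthetic ten-sensor response patterns and a one-hidden-layer network trained by plain back-propagation. All data, sizes, and hyperparameters below are illustrative assumptions, not the authors' cantilever responses or network architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in data: 10-cantilever response patterns for 3 analytes
n_sensors, n_classes, n_per = 10, 3, 60
prototypes = rng.uniform(0.0, 1.0, (n_classes, n_sensors))
X = np.vstack([p + 0.05 * rng.standard_normal((n_per, n_sensors)) for p in prototypes])
y = np.repeat(np.arange(n_classes), n_per)
T = np.eye(n_classes)[y]                      # one-hot targets

# one hidden layer of tanh units, trained by back-propagation
W1 = 0.5 * rng.standard_normal((n_sensors, 20))
b1 = np.zeros(20)
W2 = 0.5 * rng.standard_normal((20, n_classes))
b2 = np.zeros(n_classes)
lr = 0.3
for _ in range(800):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)         # softmax class probabilities
    G = (P - T) / len(X)                      # cross-entropy gradient at the output
    GH = (G @ W2.T) * (1.0 - H ** 2)          # back-propagated through tanh
    W2 -= lr * (H.T @ G)
    b2 -= lr * G.sum(axis=0)
    W1 -= lr * (X.T @ GH)
    b1 -= lr * GH.sum(axis=0)

H = np.tanh(X @ W1 + b1)
pred = (H @ W2 + b2).argmax(axis=1)
accuracy = float(np.mean(pred == y))
```

Because each analyte produces a distinct response pattern across the differentially functionalized sensors, even this small network separates the classes cleanly; extending the targets to include concentration values, as the paper does, turns the same architecture into a quantitative estimator.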

  19. The developmental approach to 'working through' in the analytic process.

    PubMed

    Shane, M

    1979-01-01

    The developmental orientation and approach has been utilized in this paper as a paradigm to understand some of the phenomena of working through in the analytic process. A case is presented of a patient who was arrested along several developmental lines and had suffered from a wool fetish. Many changes in the working through process could be attributed not only to meliorative effects of interpretation but to developmental progression as well. Furthermore, this developmental progression occurred within the analysis not only in relation to the analyst's interpretations but to the developmental impact on the patient of experience with the analyst and with significant others. The patient attained increasing capacities to utilize insight in actions that themselves led to new experience of developmental import, and in a spiral process, further structural developmental change was achieved which consolidated its dominance through further capacity for new insights. PMID:533738

  20. Laser induced breakdown spectroscopy inside liquids: Processes and analytical aspects

    NASA Astrophysics Data System (ADS)

    Lazic, V.; Jovićević, S.

    2014-11-01

    This paper provides an overview of laser induced breakdown spectroscopy (LIBS) inside liquids, applied for detection of the elements present in the medium itself or in submerged samples. The processes inherent to laser induced plasma formation and evolution inside liquids are discussed, including shockwave generation, vapor cavitation, and ablation of solids. The types of laser excitation considered here are single pulse, dual pulse and multi-pulse. The literature on LIBS measurements and applications inside liquids is reviewed and the most relevant results are summarized. Finally, we discuss the analytical aspects and offer some suggestions for improving LIBS sensitivity and accuracy in liquid environments.

  1. SensePath: Understanding the Sensemaking Process Through Analytic Provenance.

    PubMed

    Nguyen, Phong H; Xu, Kai; Wheat, Ashley; Wong, B L William; Attfield, Simon; Fields, Bob

    2016-01-01

    Sensemaking is described as the process of comprehension, finding meaning and gaining insight from information, producing new knowledge and informing further action. Understanding the sensemaking process allows building effective visual analytics tools to make sense of large and complex datasets. Currently, gaining this understanding is often a manual and time-consuming undertaking: researchers collect observation data, transcribe screen capture videos and think-aloud recordings, identify recurring patterns, and eventually abstract the sensemaking process into a general model. In this paper, we propose a general approach to facilitate such a qualitative analysis process, and introduce a prototype, SensePath, to demonstrate the application of this approach with a focus on browser-based online sensemaking. The approach is based on a study of a number of qualitative research sessions, including observations of users performing sensemaking tasks and post hoc analyses to uncover their sensemaking processes. Based on the study results and a follow-up participatory design session with HCI researchers, we decided to focus on the transcription and coding stages of thematic analysis. SensePath automatically captures users' sensemaking actions, i.e., analytic provenance, and provides multi-linked views to support their further analysis. A number of other requirements elicited from the design session are also implemented in SensePath, such as easy integration with existing qualitative analysis workflows and non-intrusiveness for participants. The tool was used by an experienced HCI researcher to analyze two sensemaking sessions. The researcher found the tool intuitive and reported that it considerably reduced analysis time and allowed a better understanding of the sensemaking process. PMID:26357398

  2. Meta-Analytically Informed Network Analysis of Resting State fMRI Reveals Hyperconnectivity in an Introspective Socio-Affective Network in Depression

    PubMed Central

    Schilbach, Leonhard; Müller, Veronika I.; Hoffstaedter, Felix; Clos, Mareike; Goya-Maldonado, Roberto

    2014-01-01

    Alterations of social cognition and dysfunctional interpersonal expectations are thought to play an important role in the etiology of depression and have, thus, become a key target of psychotherapeutic interventions. The underlying neurobiology, however, remains elusive. Based upon the idea of a close link between affective and introspective processes relevant for social interactions and alterations thereof in states of depression, we used a meta-analytically informed network analysis to investigate resting-state functional connectivity in an introspective socio-affective (ISA) network in individuals with and without depression. Results of our analysis demonstrate significant differences between the groups with depressed individuals showing hyperconnectivity of the ISA network. These findings demonstrate that neurofunctional alterations exist in individuals with depression in a neural network relevant for introspection and socio-affective processing, which may contribute to the interpersonal difficulties that are linked to depressive symptomatology. PMID:24759619

  3. How multiple social networks affect user awareness: The information diffusion process in multiplex networks

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Tang, Shaoting; Fang, Wenyi; Guo, Quantong; Zhang, Xiao; Zheng, Zhiming

    2015-10-01

    The information diffusion process in single complex networks has been extensively studied, especially for modeling spreading activities in online social networks. However, individuals usually use multiple social networks at the same time, and can share the information they have learned from one social network to another. This phenomenon gives rise to a new diffusion process on multiplex networks with more than one network layer. In this paper we account for this multiplex network spreading by proposing a model of information diffusion in two-layer multiplex networks. We develop a theoretical framework using bond percolation and cascading failure to describe the intralayer and interlayer diffusion. This allows us to obtain analytical solutions for the fraction of informed individuals as a function of the transmissibility T and the interlayer transmission rate θ. Simulation results show that interaction between layers can greatly enhance the information diffusion process, and that explosive diffusion can occur even if the transmissibility of the focal layer is below the critical threshold, owing to interlayer transmission.
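
    A toy Monte Carlo version of this two-layer spreading process can be sketched as follows. The layer sizes, mean degree, and rates below are arbitrary illustrative choices, and the model (SIR-style intralayer transmission plus probabilistic interlayer copying) is a deliberate simplification of the paper's framework, not a reimplementation of it.

```python
import random
random.seed(1)

def er_layer(n, k_avg):
    """Adjacency lists for an Erdos-Renyi layer with mean degree k_avg."""
    p = k_avg / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def spread(layers, T, theta, seed_node=0):
    """Informed nodes transmit to intralayer neighbours with probability T
    and copy themselves to the other layer with probability theta."""
    n = len(layers[0])
    informed = [[False] * n for _ in layers]
    informed[0][seed_node] = True
    frontier = [(0, seed_node)]
    while frontier:
        layer, node = frontier.pop()
        for nb in layers[layer][node]:
            if not informed[layer][nb] and random.random() < T:
                informed[layer][nb] = True
                frontier.append((layer, nb))
        other = 1 - layer
        if not informed[other][node] and random.random() < theta:
            informed[other][node] = True
            frontier.append((other, node))
    return sum(informed[0]) / n          # informed fraction, focal layer

n = 1000
layers = [er_layer(n, 6.0), er_layer(n, 6.0)]
runs = 50
solo = sum(spread(layers, T=0.3, theta=0.0) for _ in range(runs)) / runs
coupled = sum(spread(layers, T=0.3, theta=0.8) for _ in range(runs)) / runs
print(f"mean informed fraction: single layer {solo:.3f}, coupled {coupled:.3f}")
```

    Averaged over many runs, the coupled case reaches a larger informed fraction in the focal layer, in line with the enhancement reported above.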

  4. Analytical solution of average path length for Apollonian networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongzhi; Chen, Lichao; Zhou, Shuigeng; Fang, Lujun; Guan, Jihong; Zou, Tao

    2008-01-01

    With the help of recursion relations derived from the self-similar structure, we obtain the solution of the average path length, d̄_t, for Apollonian networks. In contrast to the well-known numerical result d̄_t ∝ (ln N_t)^(3/4) [J. S. Andrade, Jr., et al., Phys. Rev. Lett. 94, 018702 (2005)], our rigorous solution shows that the average path length grows logarithmically as d̄_t ∝ ln N_t in the infinite limit of network size N_t. The extensive numerical calculations completely agree with our closed-form solution.
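
    The logarithmic growth can be checked numerically on small Apollonian networks. The sketch below builds the network by the usual recursive triangle subdivision and measures the mean shortest-path distance by breadth-first search; it is a generic illustration, not the authors' recursion-relation derivation.

```python
import math
from collections import deque

def apollonian(generations):
    """2D Apollonian network: start from a triangle and, at each generation,
    insert a new node into every triangle and link it to the three corners."""
    edges = {(0, 1), (0, 2), (1, 2)}
    triangles = [(0, 1, 2)]
    next_node = 3
    for _ in range(generations):
        new_triangles = []
        for a, b, c in triangles:
            d = next_node
            next_node += 1
            edges |= {(a, d), (b, d), (c, d)}
            new_triangles += [(a, b, d), (a, c, d), (b, c, d)]
        triangles = new_triangles
    adj = [[] for _ in range(next_node)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS."""
    n = len(adj)
    total = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist)
    return total / (n * (n - 1))

for t_gen in range(2, 6):
    adj = apollonian(t_gen)
    n = len(adj)
    print(f"t={t_gen}  N={n}  avg path={avg_path_length(adj):.3f}  ln N={math.log(n):.3f}")
```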

  5. In-Database Raster Analytics: Map Algebra and Parallel Processing in Oracle Spatial Georaster

    NASA Astrophysics Data System (ADS)

    Xie, Q. J.; Zhang, Z. Z.; Ravada, S.

    2012-07-01

    Over the past decade several products have been using enterprise database technology to store and manage geospatial imagery and raster data inside RDBMS, which in turn provides the best manageability and security. With the data volume growing exponentially, real-time or near real-time processing and analysis of such big data becomes more challenging. Oracle Spatial GeoRaster, different from most other products, takes the enterprise database-centric approach for both data management and data processing. This paper describes one of the central components of this database-centric approach: the processing engine built completely inside the database. Part of this processing engine is raster algebra, which we call the In-database Raster Analytics. This paper discusses the three key characteristics of this in-database analytics engine and the benefits. First, it moves the data processing closer to the data instead of moving the data to the processing, which helps achieve greater performance by overcoming the bottleneck of computer networks. Second, we designed and implemented a new raster algebra expression language. This language is based on PL/SQL and is currently focused on the "local" function type of map algebra. This language includes general arithmetic, logical and relational operators and any combination of them, which dramatically improves the analytical capability of the GeoRaster database. The third feature is the implementation of parallel processing of such operations to further improve performance. This paper also presents some sample use cases. The testing results demonstrate that this in-database approach for raster analytics can effectively help solve the biggest performance challenges we are facing today with big raster and image data.
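
    As a generic illustration of the "local" function type of map algebra discussed above (written in Python/numpy, deliberately independent of Oracle's PL/SQL raster algebra language), each output cell is computed solely from the corresponding input cells:

```python
import numpy as np

# Two co-registered toy raster bands; NaN marks nodata cells.
red = np.array([[50.0, 60.0], [70.0, np.nan]])
nir = np.array([[150.0, 60.0], [210.0, np.nan]])

# A "local" operation combines rasters cell by cell, e.g. the classic
# NDVI expression (nir - red) / (nir + red).
with np.errstate(invalid="ignore", divide="ignore"):
    ndvi = (nir - red) / (nir + red)

# Local relational/logical operators: a vegetation mask where NDVI > 0.3.
with np.errstate(invalid="ignore"):
    mask = ndvi > 0.3

print(ndvi)
print(mask)
```

    Because each cell is independent, local operations of this kind parallelize naturally, which is the property the in-database engine exploits.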

  6. Model and Analytic Processes for Export License Assessments

    SciTech Connect

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.; Wood, Thomas W.; Daly, Don S.; Brothers, Alan J.; Sanfilippo, Antonio P.; Cook, Diane; Holder, Larry

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent, a complex, multi-step, multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision-framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessment. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level.

  7. SIRS Dynamics on Random Networks: Simulations and Analytical Models

    NASA Astrophysics Data System (ADS)

    Rozhnova, Ganna; Nunes, Ana

    The standard pair approximation equations (PA) for the Susceptible-Infective-Recovered-Susceptible (SIRS) model of infection spread on a network of homogeneous degree k predict a thin phase of sustained oscillations for parameter values that correspond to diseases that confer long lasting immunity. Here we present a study of the dependence of this oscillatory phase on the parameter k and of its relevance to understand the behaviour of simulations on networks. For k = 4, we compare the phase diagram of the PA model with the results of simulations on regular random graphs (RRG) of the same degree. We show that for parameter values in the oscillatory phase, and even for large system sizes, the simulations either die out or exhibit damped oscillations, depending on the initial conditions. This failure of the standard PA model to capture the qualitative behaviour of the simulations on large RRGs is currently being investigated.
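
    The qualitative behaviour described above can be explored with a minimal discrete-time SIRS simulation on an approximate random regular graph of degree 4. All rates below are arbitrary illustrative choices (small immunity-loss rate for long-lasting immunity), and the configuration-model construction simply drops the rare self-loops and duplicate edges rather than resampling; this is a sketch, not the authors' pair-approximation analysis.

```python
import random
random.seed(7)

def config_regular(n, k):
    """Approximate random regular graph via the configuration model;
    rare self-loops and duplicate edges are dropped."""
    stubs = [v for v in range(n) for _ in range(k)]
    random.shuffle(stubs)
    adj = [set() for _ in range(n)]
    for i in range(0, len(stubs), 2):
        u, v = stubs[i], stubs[i + 1]
        if u != v:
            adj[u].add(v); adj[v].add(u)
    return [list(s) for s in adj]

S, I, R = 0, 1, 2

def sirs_step(adj, state, beta, gamma, delta):
    """One synchronous update: S->I via infected neighbours, I->R, R->S."""
    new = state[:]
    for v, s in enumerate(state):
        if s == S:
            if any(state[nb] == I and random.random() < beta for nb in adj[v]):
                new[v] = I
        elif s == I:
            if random.random() < gamma:
                new[v] = R
        elif random.random() < delta:   # slow loss of immunity
            new[v] = S
    return new

n = 2000
adj = config_regular(n, 4)
state = [S] * n
for v in random.sample(range(n), 20):
    state[v] = I
series = [sum(1 for s in state if s == I) / n]
for _ in range(300):
    state = sirs_step(adj, state, beta=0.3, gamma=0.2, delta=0.01)
    series.append(sum(1 for s in state if s == I) / n)
print("prevalence t=0/100/300:",
      series[0], round(series[100], 4), round(series[300], 4))
```

    Depending on the initial conditions and random seed, runs of this kind either die out or settle through damped oscillations, which is the simulation behaviour the abstract contrasts with the sustained oscillations predicted by the PA equations.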

  8. Analytical controllability of deterministic scale-free networks and Cayley trees

    NASA Astrophysics Data System (ADS)

    Xu, Ming; Xu, Chuan-Yun; Wang, Huan; Deng, Cong-Zheng; Cao, Ke-Fei

    2015-07-01

    According to the exact controllability theory, the controllability is investigated analytically for two typical types of self-similar bipartite networks, i.e., the classic deterministic scale-free networks and Cayley trees. Due to their self-similarity, the analytical results of the exact controllability are obtained, and the minimum sets of driver nodes (drivers) are also identified by elementary transformations on adjacency matrices. For these two types of undirected networks, whether their links are unweighted or (nonzero) weighted, the controllability of the networks and the configuration of drivers remain the same, showing robustness to the link weights. These results have implications for the control of real networked systems with self-similarity.
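
    Under the exact controllability theory cited above, the minimum number of driver nodes equals the largest geometric multiplicity over the eigenvalues of the adjacency matrix, N_D = max_λ (N − rank(λI − A)). A small numerical sketch, using a star graph as a stand-in example rather than the paper's deterministic scale-free networks or Cayley trees:

```python
import numpy as np

def min_drivers(A, tol=1e-8):
    """Minimum driver nodes per exact controllability:
    N_D = max over eigenvalues lam of (N - rank(lam*I - A))."""
    n = A.shape[0]
    best = 1
    for lam in np.linalg.eigvalsh(A):       # undirected -> symmetric A
        mult = n - np.linalg.matrix_rank(lam * np.eye(n) - A, tol=tol)
        best = max(best, mult)
    return best

# Star graph: hub node 0 plus 5 leaves. Eigenvalue 0 has multiplicity 4,
# so four driver nodes are required.
A = np.zeros((6, 6))
A[0, 1:] = A[1:, 0] = 1.0
print(min_drivers(A))  # -> 4
```

    Rescaling all link weights by a common nonzero factor rescales the eigenvalues without changing their multiplicities, which is one way to see the robustness to link weights noted in the abstract.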

  9. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  10. TRUEX processing of plutonium analytical solutions at Argonne National Laboratory

    SciTech Connect

    Chamberlain, D.B.; Conner, C.; Hutter, J.C.; Leonard, R.A.; Wygmans, D.G.; Vandegrift, G.F.

    1995-12-31

    The TRUEX (TRansUranic EXtraction) solvent extraction process was developed at Argonne National Laboratory (ANL) for the Department of Energy. A TRUEX demonstration completed at ANL involved the processing of analytical and experimental waste generated there and at the New Brunswick Laboratory. A 20-stage centrifugal contactor was used to recover plutonium, americium, and uranium from the waste. Approximately 84 g of plutonium, 18 g of uranium, and 0.2 g of americium were recovered from about 118 liters of solution during four process runs. Alpha decontamination factors as high as 65,000 were attained, which was especially important because it allowed the disposal of the process raffinate as a low-level waste. The recovered plutonium and uranium were converted to oxide; the recovered americium solution was concentrated by evaporation to approximately 100 ml. The flowsheet and operational procedures were modified to overcome process difficulties. These difficulties included the presence of complexants in the feed, solvent degradation, plutonium precipitation, and inadequate decontamination factors during startup. This paper will discuss details of the experimental effort.

  11. Statistical signal processing in sensor networks

    NASA Astrophysics Data System (ADS)

    Guerriero, Marco

    In this dissertation we focus on decentralized signal processing in Sensor Networks (SN). Four topics are studied: (i) Direction of Arrival (DOA) estimation using a Wireless Sensor network (WSN), (ii) multiple target tracking in large SN, (iii) decentralized target detection in SN and (iv) decentralized sequential detection in SN with communication constraints. The first topic of this thesis addresses the problem of estimating the DOA of an acoustic wavefront using a WSN made of isotropic (hence individually useless) sensors. The WSN was designed according to the SENMA (SEnsor Network with Mobile Agents) architecture with a mobile agent (MA) that successively queries the sensors lying inside its field of view. We propose both fast/simple and optimal DOA-estimation schemes, and an optimization of the MA's observation management is also carried out, with the surprising finding that the MA ought to orient itself at an oblique angle to the expected DOA, rather than directly toward it. We also consider the extension to multiple sources; intriguingly, per-source DOA accuracy is higher when there is more than one source. In all cases, performance is investigated by simulation and compared, when appropriate, with asymptotic bounds; these latter are usually met after a moderate number of MA dwells. In the second topic, we study the problem of tracking multiple targets in large SN. While these networks hold significant potential for surveillance, it is of interest to address fundamental limitations in large-scale implementations. We first introduce a simple analytical tracker performance model. Analysis of this model suggests that scan-based tracking performance improves with increasing numbers of sensors, but only to a certain point beyond which degradation is observed. Correspondingly, we address model-based optimization of the local sensor detection threshold and the number of sensors. Next, we propose a two-stage tracking approach (fuse-before-track) as a possible

  12. Evaluation of Analytical Modeling Functions for the Phonation Onset Process.

    PubMed

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW. PMID:27066108
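
    The envelope-fitting idea can be illustrated on synthetic data: fit a fourth-order polynomial to a glottal area waveform (GAW) envelope and read off the interval between the 32.2% and 67.8% saturation-amplitude crossings. The exponential onset shape below is an assumed stand-in, not measured GAW data, and the crossing percentages are taken from the abstract.

```python
import numpy as np

fs = 8000.0                              # frames per second, as in the study
t = np.arange(0.0, 0.25, 1.0 / fs)
saturation = 1.0
# Assumed onset envelope: exponential rise toward saturation (synthetic).
envelope = saturation * (1.0 - np.exp(-t / 0.04))

# Fourth-order polynomial approximation of the (filtered) GAW envelope.
coeffs = np.polyfit(t, envelope, 4)
fit = np.polyval(coeffs, t)

# VOT: time between the 32.2% and 67.8% saturation-amplitude crossings.
lo_idx = int(np.argmax(fit >= 0.322 * saturation))
hi_idx = int(np.argmax(fit >= 0.678 * saturation))
vot = t[hi_idx] - t[lo_idx]
print(f"estimated VOT: {vot * 1000:.1f} ms")
```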

  13. Analytical model of reactive transport processes with spatially variable coefficients.

    PubMed

    Simpson, Matthew J; Morrow, Liam C

    2015-05-01

    Analytical solutions of partial differential equation (PDE) models describing reactive transport phenomena in saturated porous media are often used as screening tools to provide insight into contaminant fate and transport processes. While many practical modelling scenarios involve spatially variable coefficients, such as spatially variable flow velocity, v(x), or spatially variable decay rate, k(x), most analytical models deal with constant coefficients. Here we present a framework for constructing exact solutions of PDE models of reactive transport. Our approach is relevant for advection-dominant problems, and is based on a regular perturbation technique. We present a description of the solution technique for a range of one-dimensional scenarios involving constant and variable coefficients, and we show that the solutions compare well with numerical approximations. Our general approach applies to a range of initial conditions and various forms of v(x) and k(x). Instead of simply documenting specific solutions for particular cases, we present a symbolic worksheet, as supplementary material, which enables the solution to be evaluated for different choices of the initial condition, v(x) and k(x). We also discuss how the technique generalizes to apply to models of coupled multispecies reactive transport as well as higher dimensional problems. PMID:26064648
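
    For a flavor of such solutions, consider the simplest advection-dominant case: steady advection with a spatially variable decay rate, v c'(x) = -k(x) c(x), whose exact solution is c(x) = c0 exp(-(1/v) ∫₀ˣ k(s) ds). The sinusoidal k(x) below is an assumed example rather than one of the paper's cases; the sketch verifies the closed form against a simple explicit Euler march.

```python
import numpy as np

# Steady advection-decay v c'(x) = -k(x) c(x) on [0, L], with an assumed
# spatially variable decay rate k(x) = k0 * (1 + a * sin(pi x / L)).
v, k0, a, L, c0 = 1.0, 0.5, 0.4, 10.0, 1.0

def k(x):
    return k0 * (1.0 + a * np.sin(np.pi * x / L))

def c_exact(x):
    # closed form: c0 * exp(-(1/v) * integral of k from 0 to x)
    K = k0 * (x + a * (L / np.pi) * (1.0 - np.cos(np.pi * x / L)))
    return c0 * np.exp(-K / v)

# Cross-check against a first-order Euler march in x.
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]
kx = k(x)
c = np.empty_like(x)
c[0] = c0
for i in range(len(x) - 1):
    c[i + 1] = c[i] - dx * kx[i] * c[i] / v

err = np.max(np.abs(c - c_exact(x)))
print(f"max |numeric - exact| = {err:.2e}")
```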

  14. Evaluation of Analytical Modeling Functions for the Phonation Onset Process

    PubMed Central

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW. PMID:27066108

  15. Analytical model of reactive transport processes with spatially variable coefficients

    PubMed Central

    Simpson, Matthew J.; Morrow, Liam C.

    2015-01-01

    Analytical solutions of partial differential equation (PDE) models describing reactive transport phenomena in saturated porous media are often used as screening tools to provide insight into contaminant fate and transport processes. While many practical modelling scenarios involve spatially variable coefficients, such as spatially variable flow velocity, v(x), or spatially variable decay rate, k(x), most analytical models deal with constant coefficients. Here we present a framework for constructing exact solutions of PDE models of reactive transport. Our approach is relevant for advection-dominant problems, and is based on a regular perturbation technique. We present a description of the solution technique for a range of one-dimensional scenarios involving constant and variable coefficients, and we show that the solutions compare well with numerical approximations. Our general approach applies to a range of initial conditions and various forms of v(x) and k(x). Instead of simply documenting specific solutions for particular cases, we present a symbolic worksheet, as supplementary material, which enables the solution to be evaluated for different choices of the initial condition, v(x) and k(x). We also discuss how the technique generalizes to apply to models of coupled multispecies reactive transport as well as higher dimensional problems. PMID:26064648

  16. Electrogenerated thin films of microporous polymer networks with remarkably increased electrochemical response to nitroaromatic analytes.

    PubMed

    Palma-Cando, Alex; Scherf, Ullrich

    2015-06-01

    Thin films of microporous polymer networks (MPNs) have been generated by electrochemical polymerization of a series of multifunctional carbazole-based monomers. The microporous films show high Brunauer-Emmett-Teller (BET) surface areas up to 1300 m² g⁻¹ as directly measured by krypton sorption experiments. A correlation between the number of polymerizable carbazole units of the monomer and the resulting surface area is observed. Electrochemical sensing experiments with 1,3,5-trinitrobenzene as prototypical nitroaromatic analyte demonstrate an up to 180 times increased current response of MPN-modified glassy carbon electrodes in relation to the nonmodified electrode. The phenomenon probably involves intermolecular interactions between the electron-poor nitroaromatic analytes and the electron-rich, high surface area microporous deposits, with the electrochemical reduction at the MPN-modified electrodes being an adsorption-controlled process for low scan rates. We expect a high application potential of such MPN-modified electrodes for boosting the sensitivity of electrochemical sensor devices. PMID:25946727

  17. A Task Analytic Process to Define Future Concepts in Aviation

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Wolter, Cynthia A.

    2014-01-01

    A necessary step when developing next generation systems is to understand the tasks that operators will perform. One NextGen concept under evaluation termed Single Pilot Operations (SPO) is designed to improve the efficiency of airline operations. One SPO concept includes a Pilot on Board (PoB), a Ground Station Operator (GSO), and automation. A number of procedural changes are likely to result when such changes in roles and responsibilities are undertaken. Automation is expected to relieve the PoB and GSO of some tasks (e.g. radio frequency changes, loading expected arrival information). A major difference in the SPO environment is the shift to communication-cued crosschecks (verbal / automated) rather than movement-cued crosschecks that occur in a shared cockpit. The current article highlights a task analytic process of the roles and responsibilities between a PoB, an approach-phase GSO, and automation.

  18. Capital budgeting in hospital management using the analytic hierarchy process.

    PubMed

    Tarimcilar, M M; Khaksari, S Z

    1991-01-01

    In recent years, the health care industry has been experiencing change to a degree unprecedented since the inception of the Medicare program. With traditional in-hospital care on the decline, hospitals are being forced to compete for business. They must identify within their own systems feasible alternatives for dealing with these changes and then determine which ones will best accomplish the goals of the organization. This paper offers a procedure that utilizes the analytic hierarchy process--a multicriteria decision-making tool that helps arrange the possible alternatives in hierarchical order given the priorities of relevant decision makers. An application of the method to a mid-sized hospital is presented. Although the procedure is structured, it is flexible enough to be updated for the realities of any health care institution. PMID:10111675
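
    The core AHP computation, deriving priority weights from a pairwise comparison matrix via its principal eigenvector and checking Saaty's consistency ratio, can be sketched as follows; the comparison values are hypothetical, not from the hospital application.

```python
import numpy as np

# Pairwise comparison matrix (Saaty 1-9 scale) for three hypothetical
# capital projects judged on a single criterion.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priorities = normalized principal right eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, i].real)
w = w / w.sum()                      # priority weights

n = A.shape[0]
lam_max = eigvals[i].real
ci = (lam_max - n) / (n - 1)         # consistency index
cr = ci / 0.58                       # Saaty random index RI = 0.58 for n = 3
print("priorities:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```

    A consistency ratio below 0.1 is conventionally taken to mean the decision maker's judgments are acceptably consistent; otherwise the comparisons are revisited.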

  19. Special concrete shield selection using the analytic hierarchy process

    SciTech Connect

    Abulfaraj, W.H. (Nuclear Engineering Dept.)

    1994-08-01

    Special types of concrete radiation shields that depend on locally available materials and have improved properties for both neutron and gamma-ray attenuation were developed by using plastic materials and heavy ores. The analytic hierarchy process (AHP) is implemented to evaluate these types for selecting the best biological radiation shield for nuclear reactors. Factors affecting the selection decision are degree of protection against neutrons, degree of protection against gamma rays, suitability of the concrete as building material, and economic considerations. The seven concrete alternatives are barite-polyethylene concrete, barite-polyvinyl chloride (PVC) concrete, barite-portland cement concrete, pyrite-polyethylene concrete, pyrite-PVC concrete, pyrite-portland cement concrete, and ordinary concrete. The AHP analysis shows the superiority of pyrite-polyethylene concrete over the others.


  20. Evaluating supplier quality performance using fuzzy analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Ahmad, Nazihah; Kasim, Maznah Mat; Rajoo, Shanmugam Sundram Kalimuthu

    2014-12-01

    Evaluating supplier quality performance is vital in ensuring continuous supply chain improvement, reducing operational costs and risks, and meeting customer expectations. This paper aims to illustrate an application of the Fuzzy Analytical Hierarchy Process to prioritize the evaluation criteria in the context of automotive manufacturing in Malaysia. Five main criteria were identified: quality, cost, delivery, customer service and technology support. These criteria were arranged into a hierarchical structure and evaluated by an expert. The relative importance of each criterion was determined by using linguistic variables, which were represented as triangular fuzzy numbers. The Center of Gravity defuzzification method was used to convert the fuzzy evaluations into their corresponding crisp values. Such fuzzy evaluation can be used as a systematic tool to overcome the uncertainty in evaluating suppliers' performance, which is usually associated with subjective human judgments.
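
    A minimal sketch of the defuzzification step: the center of gravity of a triangular fuzzy number (l, m, u) reduces to (l + m + u)/3, and the resulting crisp values can be normalized into criterion weights. The linguistic scale and judgments below are hypothetical placeholders, not the paper's data.

```python
# Center-of-gravity defuzzification of triangular fuzzy numbers (l, m, u),
# as used to turn linguistic importance ratings into crisp values.
def cog(tfn):
    l, m, u = tfn
    return (l + m + u) / 3.0   # centroid of a triangular membership function

# Hypothetical linguistic scale and single-expert judgments for the criteria.
SCALE = {
    "moderate":    (1.0, 3.0, 5.0),
    "strong":      (3.0, 5.0, 7.0),
    "very strong": (5.0, 7.0, 9.0),
}
judgments = {
    "quality":            "very strong",
    "cost":               "strong",
    "delivery":           "strong",
    "customer service":   "moderate",
    "technology support": "moderate",
}

crisp = {c: cog(SCALE[j]) for c, j in judgments.items()}
total = sum(crisp.values())
weights = {c: v / total for c, v in crisp.items()}
for c, wt in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{c:18s} {wt:.3f}")
```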

  1. Analytical chemistry of the citrate process for flue gas desulfurization

    SciTech Connect

    Marchant, W.N.; May, S.L.; Simpson, W.W.; Winter, J.K.; Beard, H.R.

    1980-01-01

    The citrate process for flue gas desulfurization (FGD) is a product of continuing research by the US Bureau of Mines to meet the goal of minimizing the objectionable effects of minerals industry operations upon the environment. The reduction of SO₂ in solution by H₂S to produce elemental sulfur by the citrate process is extremely complex and results in solutions that contain at least nine different sulfur species. Process solution analysis is essential to a clear understanding of process chemistry and its safe, efficient operation. The various chemical species, the approximate ranges of their concentrations in citrate process solutions, and the analytical methods developed to determine them are hydrogen sulfide (approx. 0M to 0.06M) by specific ion electrode, polysulfides (unknown) by ultraviolet (uv) spectrophotometry, elemental sulfur (approx. 0M to approx. 0.001M dissolved, approx. 0M to approx. 0.1M suspended) by uv spectrophotometry, thiosulfate (approx. 0M to approx. 0.25M) by iodometry or high performance liquid chromatography (HPLC), polythionates (approx. 0M to approx. 0.01M) by thin layer chromatography (TLC), dithionite (searched for but not detected in process solutions) by polarography or TLC, bisulfite (approx. 0M to 0.2M) by iodometry, sulfate (approx. 0M to 1M) by a Bureau-developed gravimetric procedure, citric acid (approx. 0M to 0.5M) by titration or visible colorimetry, glycolic acid (approx. 0M to 1M) by HPLC, sodium (approx. 1.5M) by flame photometry, and chloride by argentometric titration.

  2. The European Network of Analytical and Experimental Laboratories for Geosciences

    NASA Astrophysics Data System (ADS)

    Freda, Carmela; Funiciello, Francesca; Meredith, Phil; Sagnotti, Leonardo; Scarlato, Piergiorgio; Troll, Valentin R.; Willingshofer, Ernst

    2013-04-01

    Integrating Earth Sciences infrastructures in Europe is the mission of the European Plate Observing System (EPOS). The integration of European analytical, experimental, and analogue laboratories plays a key role in this context and is the task of the EPOS Working Group 6 (WG6). Despite the presence in Europe of high performance infrastructures dedicated to geosciences, there is still limited collaboration in sharing facilities and best practices. The EPOS WG6 aims to overcome this limitation by pushing towards national and trans-national coordination, efficient use of current laboratory infrastructures, and future aggregation of facilities not yet included. This will be attained through the creation of common access and interoperability policies to foster and simplify personnel mobility. The EPOS ambition is to orchestrate European laboratory infrastructures with diverse, complementary tasks and competences into a single, but geographically distributed, infrastructure for rock physics, palaeomagnetism, analytical and experimental petrology and volcanology, and tectonic modeling. The WG6 is presently organizing its thematic core services within the EPOS distributed research infrastructure with the goal of joining the other EPOS communities (geologists, seismologists, volcanologists, etc...) 
and stakeholders (engineers, risk managers and other geosciences investigators) to: 1) develop tools and services to enhance visitor programs that will mutually benefit visitors and hosts (transnational access); 2) improve support and training activities to make facilities equally accessible to students, young researchers, and experienced users (training and dissemination); 3) collaborate in sharing technological and scientific know-how (transfer of knowledge); 4) optimize interoperability of distributed instrumentation by standardizing data collection, archive, and quality control standards (data preservation and interoperability); 5) implement a unified e-Infrastructure for data

  3. Are EU networks anticipatory systems? An empirical and analytical approach

    NASA Astrophysics Data System (ADS)

    Leydesdorff, Loet

    2000-05-01

    A social system can be considered as distributed by its very nature. Social communication among humans can be expected to be reflexive. Thus, this system contains uncertainty and the uncertainty is provided with meaning. This dual-layeredness enables the network to organize itself ("autopoietically") into an anticipatory mode. The extent to which anticipatory functions have been developed can be observed, notably in the case of intentional constructions of reflexive layers of organization. In this study, the "Self-Organization of the European Information Society" is analyzed from this angle. Using empirical data, I argue that the increasing unification in representations at the European level allows for another differentiation in terms of the substantive communications that are represented. Insofar as the reflexive layers are differently codified, the anticipatory functions of the system can be strengthened.

  4. Optimization of analytical laboratory work using computer networking and databasing

    SciTech Connect

    Upp, D.L.; Metcalf, R.A.

    1996-06-01

    The Health Physics Analysis Laboratory (HPAL) performs around 600,000 analyses for radioactive nuclides each year at Los Alamos National Laboratory (LANL). Analysis matrices vary from nasal swipes, air filters, work area swipes, liquids, to the bottoms of shoes and cat litter. HPAL uses 8 liquid scintillation counters, 8 gas proportional counters, and 9 high purity germanium detectors in 5 laboratories to perform these analyses. HPAL has developed a computer network between the labs and software to produce analysis results. The software and hardware package includes barcode sample tracking, log-in, chain of custody, analysis calculations, analysis result printing, and utility programs. All data are written to a database, mirrored on a central server, and eventually written to CD-ROM to provide for online historical results. This system has greatly reduced the work required to provide for analysis results as well as improving the quality of the work performed.

  5. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  6. Controlled English to facilitate human/machine analytical processing

    NASA Astrophysics Data System (ADS)

    Braines, Dave; Mott, David; Laws, Simon; de Mel, Geeth; Pham, Tien

    2013-06-01

    Controlled English (CE) is a human-readable information representation format that is implemented using a restricted subset of the English language, but which is unambiguous and directly accessible by simple machine processes. We have been researching the capabilities of CE in a number of contexts, and exploring the degree to which a flexible and more human-friendly information representation format could aid the intelligence analyst in a multi-agent collaborative operational environment; especially in cases where the agents are a mixture of other human users and machine processes aimed at assisting the human users. CE itself is built upon a formal logic basis, but allows users to easily specify models for a domain of interest in a human-friendly language. In our research we have been developing an experimental component known as the "CE Store" in which CE information can be quickly and flexibly processed and shared between human and machine agents. The CE Store environment contains a number of specialized machine agents for common processing tasks and also supports execution of logical inference rules that can be defined in the same CE language. This paper outlines the basic architecture of this approach, discusses some of the example machine agents that have been developed, and provides some typical examples of the CE language and the way in which it has been used to support complex analytical tasks on synthetic data sources. We highlight the fusion of human and machine processing supported through the use of the CE language and CE Store environment, and show this environment with examples of highly dynamic extensions to the model(s) and integration between different user-defined models in a collaborative setting.

  7. Wavelet networks for face processing

    NASA Astrophysics Data System (ADS)

    Krüger, V.; Sommer, G.

    2002-06-01

    Wavelet networks (WNs) were introduced in 1992 as a combination of artificial neural radial basis function (RBF) networks and wavelet decomposition. Since then, however, WNs have received only a little attention. We believe that the potential of WNs has been generally underestimated. WNs have the advantage that the wavelet coefficients are directly related to the image data through the wavelet transform. In addition, the parameters of the wavelets in the WNs are subject to optimization, which results in a direct relation between the represented function and the optimized wavelets, leading to considerable data reduction (thus making subsequent algorithms much more efficient) as well as to wavelets that can be used as an optimized filter bank. In our study we analyze some WN properties and highlight their advantages for object representation purposes. We then present a series of results of experiments in which we used WNs for face tracking. We exploit the efficiency that is due to data reduction for face recognition and face-pose estimation by applying the optimized-filter-bank principle of the WNs.
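
As a rough, self-contained illustration of the wavelet-network idea (representing a signal as a weighted sum of dilated and translated wavelets that act as a filter bank), the sketch below fits a toy 1-D signal with a fixed bank of Mexican-hat wavelets and solves only for the weights by linear least squares. The signal, the grids of centers and scales, and the choice of mother wavelet are all illustrative assumptions; unlike the WNs described above, the wavelet parameters themselves are not optimized here.

```python
import numpy as np

def mexican_hat(u):
    # "Mexican hat" mother wavelet (second derivative of a Gaussian)
    return (1 - u ** 2) * np.exp(-u ** 2 / 2)

# Toy 1-D signal standing in for a row of image data (illustrative only)
x = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * x) * np.exp(-4 * x)

# Fixed bank of dilated/translated wavelets; a full WN would also optimize
# these centers and scales, which is what yields the data reduction above
centers = np.linspace(0, 1, 15)
scales = [0.05, 0.1, 0.2]
Phi = np.column_stack([mexican_hat((x - c) / s)
                       for s in scales for c in centers])

# Linear least squares for the wavelet coefficients (weights)
weights, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
recon = Phi @ weights
rel_err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```

With the wavelet positions and scales also subject to optimization, far fewer wavelets would suffice for the same reconstruction error, giving the compact representations exploited for tracking and recognition.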

  8. A Geovisual Analytic Approach to Understanding Geo-Social Relationships in the International Trade Network

    PubMed Central

    Luo, Wei; Yin, Peifeng; Di, Qian; Hardisty, Frank; MacEachren, Alan M.

    2014-01-01

    The world has become a complex set of geo-social systems interconnected by networks, including transportation networks, telecommunications, and the internet. Understanding the interactions between spatial and social relationships within such geo-social systems is a challenge. This research aims to address this challenge through the framework of geovisual analytics. We present the GeoSocialApp which implements traditional network analysis methods in the context of explicitly spatial and social representations. We then apply it to an exploration of international trade networks in terms of the complex interactions between spatial and social relationships. This exploration using the GeoSocialApp helps us develop a two-part hypothesis: international trade network clusters with structural equivalence are strongly ‘balkanized’ (fragmented) according to the geography of trading partners, and the geographical distance weighted by population within each network cluster has a positive relationship with the development level of countries. In addition to demonstrating the potential of visual analytics to provide insight concerning complex geo-social relationships at a global scale, the research also addresses the challenge of validating insights derived through interactive geovisual analytics. We develop two indicators to quantify the observed patterns, and then use a Monte-Carlo approach to support the hypothesis developed above. PMID:24558409

  10. Explicit solutions to analytical models of cross-layer protocol optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    The work is based on the interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. A multivariate point process (MVPP) model of discrete random events in WSNs establishes stochastic characteristics of optimal cross-layer protocols. In previous work by the author, discrete-event, cross-layer interactions in the MANET protocol are modeled in very general analytical terms with a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Characterization of the "best" cross-layer designs for the MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Assumptions on WSN characteristics simplify the dynamic programming conditions to yield mathematically tractable descriptions for the optimal routing protocols. Modeling limitations on the determination of closed-form solutions versus iterative explicit solutions for ad hoc WSN controls are presented.

  11. Using neural networks for process planning

    NASA Astrophysics Data System (ADS)

    Huang, Samuel H.; Zhang, HongChao

    1995-08-01

    Process planning has been recognized as an interface between computer-aided design and computer-aided manufacturing. Since the late 1960s, computer techniques have been used to automate process planning activities. AI-based techniques are designed for capturing, representing, organizing, and utilizing knowledge by computers, and are extremely useful for automated process planning. To date, most of the AI-based approaches used in automated process planning are some variations of knowledge-based expert systems. Due to their knowledge acquisition bottleneck, expert systems are not sufficient in solving process planning problems. Fortunately, AI has developed other techniques that are useful for knowledge acquisition, e.g., neural networks. Neural networks have several advantages over expert systems that are desired in today's manufacturing practice. However, very few neural network applications in process planning have been reported. We present this paper in order to stimulate the research on using neural networks for process planning. This paper also identifies the problems with neural networks and suggests some possible solutions, which will provide some guidelines for research and implementation.

  12. Assessment of Learning in Digital Interactive Social Networks: A Learning Analytics Approach

    ERIC Educational Resources Information Center

    Wilson, Mark; Gochyyev, Perman; Scalise, Kathleen

    2016-01-01

    This paper summarizes initial field-test results from data analytics used in the work of the Assessment and Teaching of 21st Century Skills (ATC21S) project, on the "ICT Literacy--Learning in digital networks" learning progression. This project, sponsored by Cisco, Intel and Microsoft, aims to help educators around the world enable…

  13. Analytical solution for a class of network dynamics with mechanical and financial applications

    NASA Astrophysics Data System (ADS)

    Krejčí, P.; Lamba, H.; Melnik, S.; Rachinskii, D.

    2014-09-01

    We show that for a certain class of dynamics at the nodes the response of a network of any topology to arbitrary inputs is defined in a simple way by its response to a monotone input. The nodes may have either a discrete or continuous set of states and there is no limit on the complexity of the network. The results provide both an efficient numerical method and the potential for accurate analytic approximation of the dynamics on such networks. As illustrative applications, we introduce a quasistatic mechanical model with objects interacting via frictional forces and a financial market model with avalanches and critical behavior that are generated by momentum trading strategies.

  14. Analytic Concepts and the Relation Between Content and Process in Science Curricula.

    ERIC Educational Resources Information Center

    Smith, Edward L.

    The interrelation of science content and process is discussed in terms of analytic and systemic concepts. Analytic concepts identify the type or form of systemic concepts found in particular disciplines. In terms of analytic concepts, science processes such as observation, deduction, and prediction can be identified and defined as operations…

  15. Analytical approach to cross-layer protocol optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes, subject to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in

  16. Evaluating supplier quality performance using analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Kalimuthu Rajoo, Shanmugam Sundram; Kasim, Maznah Mat; Ahmad, Nazihah

    2013-09-01

    This paper elaborates the importance of evaluating supplier quality performance to an organization. Supplier quality performance evaluation reflects the actual performance of the supplier exhibited at the customer's end. It is critical in enabling the organization to determine areas of improvement and thereafter work with the supplier to close the gaps. The customer's success partly depends on the supplier's quality performance. Key criteria such as quality, cost, delivery, technology support and customer service are categorized as main factors contributing to supplier quality performance. Eighteen suppliers manufacturing automotive application parts were evaluated in 2010 using a weighted-point system. Several suppliers received identical ratings, which led to tied rankings. The Analytical Hierarchy Process (AHP), a user-friendly decision-making tool for complex, multi-criteria problems, was then used to evaluate the same 18 suppliers' quality performance as an alternative to the weighted-point system. The consistency ratio was checked for criteria and sub-criteria. The final AHP results contained no overlapping ratings and therefore yielded a better decision-making methodology than the weighted-point rating system.
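
As a minimal sketch of the AHP machinery referred to above (priority weights from a pairwise comparison matrix, plus the consistency-ratio check), the following uses an invented 4x4 Saaty-scale judgment matrix for the criteria quality, cost, delivery and service; the judgments are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four
# criteria, ordered: quality, cost, delivery, service. A[i, j] states how
# strongly criterion i is preferred over criterion j; A[j, i] = 1/A[i, j].
A = np.array([
    [1,     3,     5,     7],
    [1 / 3, 1,     3,     5],
    [1 / 5, 1 / 3, 1,     3],
    [1 / 7, 1 / 5, 1 / 3, 1],
], dtype=float)

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))            # principal (Perron) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority vector

lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)             # consistency index
ri = 0.90                                   # Saaty's random index for n = 4
cr = ci / ri                                # consistency ratio; < 0.1 is acceptable
```

The priority vector ranks the criteria, and the consistency ratio is the check "for criteria and sub-criteria" mentioned in the abstract; judgments with cr above 0.1 would normally be revisited.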

  17. Consistent analytic approach to the efficiency of collisional Penrose process

    NASA Astrophysics Data System (ADS)

    Harada, Tomohiro; Ogasawara, Kota; Miyamoto, Umpei

    2016-07-01

    We propose a consistent analytic approach to the efficiency of collisional Penrose process in the vicinity of a maximally rotating Kerr black hole. We focus on a collision with arbitrarily high center-of-mass energy, which occurs if either of the colliding particles has its angular momentum fine-tuned to the critical value to enter the horizon. We show that if the fine-tuned particle is ingoing on the collision, the upper limit of the efficiency is (2 +√{3 })(2 -√{2 })≃2.186 , while if the fine-tuned particle is bounced back before the collision, the upper limit is (2 +√{3 })2≃13.93 . Despite earlier claims, the former can be attained for inverse Compton scattering if the fine-tuned particle is massive and starts at rest at infinity, while the latter can be attained for various particle reactions, such as inverse Compton scattering and pair annihilation, if the fine-tuned particle is either massless or highly relativistic at infinity. We discuss the difference between the present and earlier analyses.
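
The two quoted upper limits are simple closed-form numbers; a quick arithmetic check of the expressions stated in the abstract:

```python
import math

s2, s3 = math.sqrt(2), math.sqrt(3)
# Upper limit when the fine-tuned particle is ingoing at the collision
eff_ingoing = (2 + s3) * (2 - s2)
# Upper limit when the fine-tuned particle is bounced back before colliding
eff_bounced = (2 + s3) ** 2
# eff_ingoing ≈ 2.186 and eff_bounced ≈ 13.93, matching the quoted values
```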

  18. The best motivator priorities parents choose via analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Farah, R. N.; Latha, P.

    2015-05-01

    Motivation is probably the most important factor that educators can target in order to improve learning. Numerous cross-disciplinary theories have been postulated to explain motivation. While each of these theories has some truth, no single theory seems to adequately explain all human motivation. The fact is that human beings in general and pupils in particular are complex creatures with complex needs and desires. In this paper, Analytic Hierarchy Process (AHP) has been proposed as an emerging solution to move towards too large, dynamic and complex real world multi-criteria decision making problems in selecting the most suitable motivator when choosing school for their children. Data were analyzed using SPSS 17.0 ("Statistical Package for Social Science") software. Statistic testing used are descriptive and inferential statistic. Descriptive statistic used to identify respondent pupils and parents demographic factors. The statistical testing used to determine the pupils and parents highest motivator priorities and parents' best priorities using AHP to determine the criteria chosen by parents such as school principals, teachers, pupils and parents. The moderating factors are selected schools based on "Standard Kualiti Pendidikan Malaysia" (SKPM) in Ampang. Inferential statistics such as One-way ANOVA used to get the significant and data used to calculate the weightage of AHP. School principals is found to be the best motivator for parents in choosing school for their pupils followed by teachers, parents and pupils.

  19. Model choice considerations and information integration using analytical hierarchy process

    SciTech Connect

    Langenbrunner, James R; Hemez, Francois M; Booker, Jane M; Ross, Timothy J.

    2010-10-15

    Using the theory of information-gap for decision-making under severe uncertainty, it has been shown that model output compared to experimental data contains irrevocable trade-offs between fidelity-to-data, robustness-to-uncertainty and confidence-in-prediction. We illustrate a strategy for information integration by gathering and aggregating all available data, knowledge, theory, experience, and similar applications. Such integration of information becomes important when the physics is difficult to model, when observational data are sparse or difficult to measure, or both. To aggregate the available information, we take an inference perspective. Models are not rejected, nor wasted, but can be integrated into a final result. We show an example of information integration using Saaty's Analytic Hierarchy Process (AHP), integrating theory, simulation output and experimental data. We used expert elicitation to determine weights for two models and two experimental data sets, by forming pair-wise comparisons between model output and experimental data. In this way we transform epistemic and/or statistical strength from one field of study into another branch of physical application. The price to pay for utilizing all available knowledge is that inferences drawn from the integrated information must be accounted for and the costs can be considerable. Focusing on inferences and inference uncertainty (IU) is one way to understand complex information.

  20. Reducing Snapshots to Points: A Visual Analytics Approach to Dynamic Network Exploration.

    PubMed

    van den Elzen, Stef; Holten, Danny; Blaas, Jorik; van Wijk, Jarke J

    2016-01-01

    We propose a visual analytics approach for the exploration and analysis of dynamic networks. We consider snapshots of the network as points in high-dimensional space and project these to two dimensions for visualization and interaction using two juxtaposed views: one for showing a snapshot and one for showing the evolution of the network. With this approach users are enabled to detect stable states, recurring states, outlier topologies, and gain knowledge about the transitions between states and the network evolution in general. The components of our approach are discretization, vectorization and normalization, dimensionality reduction, and visualization and interaction, which are discussed in detail. The effectiveness of the approach is shown by applying it to artificial and real-world dynamic networks. PMID:26529683
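
A minimal sketch of the snapshots-to-points pipeline described above (vectorize each snapshot, normalize, reduce to two dimensions), using a toy dynamic network that alternates between two states; the graph size, edge densities, and the use of PCA via SVD are illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic network: 30 snapshots of a 10-node graph that alternates
# between a sparse state and a dense state (plus random noise)
def snapshot(dense):
    a = (rng.random((10, 10)) < (0.6 if dense else 0.1)).astype(float)
    return np.triu(a, 1) + np.triu(a, 1).T      # symmetric, no self-loops

snaps = [snapshot(t % 2) for t in range(30)]

# Vectorization: flatten each adjacency matrix (upper triangle) to a vector
X = np.array([s[np.triu_indices(10, 1)] for s in snaps])

# Normalization and dimensionality reduction (PCA via SVD) to 2-D points,
# one point per snapshot
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
points = Xc @ Vt[:2].T
```

In the resulting scatter, snapshots of the same underlying state fall close together, which is how stable states, recurring states, and outlier topologies become visible in the projected view.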

  1. Dynamic Graph Analytic Framework (DYGRAF): greater situation awareness through layered multi-modal network analysis

    NASA Astrophysics Data System (ADS)

    Margitus, Michael R.; Tagliaferri, William A., Jr.; Sudit, Moises; LaMonica, Peter M.

    2012-06-01

    Understanding the structure and dynamics of networks are of vital importance to winning the global war on terror. To fully comprehend the network environment, analysts must be able to investigate interconnected relationships of many diverse network types simultaneously as they evolve both spatially and temporally. To remove the burden from the analyst of making mental correlations of observations and conclusions from multiple domains, we introduce the Dynamic Graph Analytic Framework (DYGRAF). DYGRAF provides the infrastructure which facilitates a layered multi-modal network analysis (LMMNA) approach that enables analysts to assemble previously disconnected, yet related, networks in a common battle space picture. In doing so, DYGRAF provides the analyst with timely situation awareness, understanding and anticipation of threats, and support for effective decision-making in diverse environments.

  2. Testing the accuracy of analytical estimates of spare capacity in protected-mesh networks

    NASA Astrophysics Data System (ADS)

    Forst, Brian; Grover, Wayne D.

    2006-10-01

    Recently, two different investigators published analytical models to predict the spare capacity requirements of shared-mesh survivable networks. If accurate, such estimators could be used in network planning and technology-selection applications in network-operating companies, displacing or reducing the need for detailed design studies. However, relatively few test-case results involving irregular topology and demands were provided, and some possibly significant idealizations were involved. We have therefore conducted a further series of tests of the equations to more widely assess the general accuracy of the results and to be aware of the possible limitations to their use. We review and implement the equations in question and compare their predictions, along with two well-known simple estimators, to the properties of integer linear programming (ILP)-based network design solutions for three families of protected-mesh networks. In all, 1464 detailed network designs are used as 'truth' tests for the equations over a systematically varying range of network topologies and demand patterns. On this set of trials the new mathematical models were rarely within 10% accuracy and typically had up to 30% error. By dissecting some specific cases we gain insights as to why average-case mathematical models of such a network-dependent phenomenon are unlikely to be reliable. Insights into the effects of network nodal degree, demand variance, hop and distance topologies, and topology dependence are also given.

  3. An analytic explanation of the stellar initial mass function from the theory of spatial networks

    NASA Astrophysics Data System (ADS)

    Klishin, Andrei; Chilingarian, Igor

    2015-08-01

    The distribution of stars by mass or the stellar initial mass function (IMF) that has been intensively studied in the Milky Way and other galaxies is the key property regulating star formation and galaxy evolution. The mass function of prestellar dense cores (DCMF) is an IMF precursor that has a similar shape, a broken power law with a sharp decline at low masses, but offset to higher masses. Results from numerical simulations of star formation qualitatively resemble an observed IMF/DCMF; however, most analytic IMF theories critically depend on the empirically chosen input spectrum of mass fluctuations which evolve into dense cores and, subsequently, stars. Here we propose an analytic approach by representing a system of dense cores accreting gas from the surrounding diffuse interstellar medium (ISM) as a spatial network growing by preferential attachment and assuming that the ISM density has a self-similar fractal distribution following the Kolmogorov turbulence theory. We obtain a scale free power law with the exponent that is not related to the input fluctuation mass spectrum but depends only on the fractal distribution dimensionalities of infalling gas (Dp) and turbulent ISM (Dm=2.35). It can be as steep as -3.24 (uniform volume density Dp=3) and becomes Salpeter (α=-2.35) for Dp=2.5 that corresponds to a variety of Brownian processes in physics. Our theory reproduces the observed DCMF shape over three orders of magnitude in mass, and it rules out a low mass star dominated "bottom-heavy" IMF shape unless the same steep slope holds at the higher masses.

  4. Nonlinear signal processing using neural networks: Prediction and system modelling

    SciTech Connect

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
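
A minimal sketch of the prediction setup described above: a small one-hidden-layer network trained by backpropagation to predict the next value of a chaotic time series, here the logistic map x(t+1) = 4 x(t) (1 - x(t)). The network size, learning rate, and training length are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Chaotic logistic-map time series x(t+1) = 4 x(t) (1 - x(t))
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 4 * x[t] * (1 - x[t])
X, y = x[:-1, None], x[1:, None]            # predict next value from current

# One-hidden-layer tanh network trained by full-batch backpropagation
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for step in range(5000):
    h = np.tanh(X @ W1 + b1)                # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # backward pass: mean-squared-error gradients
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

After training, the one-step prediction error is far below the variance of the series itself, which is the sense in which the network has "learned" the underlying nonlinear map rather than a linear approximation to it.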

  5. Weighted networks as randomly reinforced urn processes

    NASA Astrophysics Data System (ADS)

    Caldarelli, Guido; Chessa, Alessandro; Crimaldi, Irene; Pammolli, Fabio

    2013-02-01

    We analyze weighted networks as randomly reinforced urn processes, in which the edge-total weights are determined by a reinforcement mechanism. We develop a statistical test and a procedure based on it to study the evolution of networks over time, detecting the “dominance” of some edges with respect to the others and then assessing if a given instance of the network is taken at its steady state or not. Distance from the steady state can be considered as a measure of the relevance of the observed properties of the network. Our results are quite general, in the sense that they are not based on a particular probability distribution or functional form of the random weights. Moreover, the proposed tool can be applied also to dense networks, which have received little attention by the network community so far, since they are often problematic. We apply our procedure in the context of the International Trade Network, determining a core of “dominant edges.”
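
A minimal simulation of the randomly reinforced urn view of edge weights described above: at each step one edge is drawn with probability proportional to its current total weight and is reinforced by a random amount. The edge set and the uniform reinforcement distribution are illustrative assumptions (the authors' results do not depend on a particular distribution).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy network with four (hypothetical) edges; w holds edge-total weights
edges = ["AB", "AC", "BC", "BD"]
w = np.ones(len(edges))

history = [w / w.sum()]
for _ in range(5000):
    # draw an edge with probability proportional to its current weight,
    # then reinforce it by a random amount (here Uniform(0, 1))
    i = rng.choice(len(edges), p=w / w.sum())
    w[i] += rng.uniform(0, 1)
    history.append(w / w.sum())

shares = history[-1]
dominant = edges[int(np.argmax(shares))]    # the "dominant" edge in this run
```

The weight shares stabilize as the process approaches its steady state, while the identity of the dominant edge varies from run to run; the paper's statistical test asks, in effect, whether an observed network (such as the International Trade Network) is already near such a steady state.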

  6. Resilient networked sensor-processing implementation

    NASA Astrophysics Data System (ADS)

    Wada, Glen; Hansen, J. S.

    1996-05-01

    The spatial infrared imaging telescope (SPIRIT) III sensor data processing requirement for the calibrated conversion of data to engineering units at a rate of 8 gigabytes of input data per day necessitated a distributed processing solution. As the sensor's five-band scanning radiometer and six-channel Fourier-transform spectrometer characteristics became fully understood, the processing requirements were enhanced. Hardware and schedule constraints compounded the need for a simple and resilient distributed implementation. Sensor data processing was implemented as a loosely coupled, fiber distributed data interface network of Silicon Graphics computers under the IRIX operating system. The software was written in ANSI C and incorporated exception processing. Interprocessor communications and control were done both by the native capabilities of the network and Parallel Virtual Machine (PVM) software. The implementation was limited to four software components. The data reformatter component reduced the data coupling among sensor data processing components by providing self-contained data sets. The distributed processing control and graphical user interface components encased the PVM aspect of the implementation and lessened the concern of the sensor data processing component developers for the distributed model. A loosely coupled solution that dissociated the sensor data processing from the distributed processing environment, a simplified error processing scheme using exception processing, and a limited software configuration have proven resilient and compatible with the dynamics of sensor data processing.

  7. Focused analyte spray emission apparatus and process for mass spectrometric analysis

    DOEpatents

    Roach, Patrick J.; Laskin, Julia; Laskin, Alexander

    2012-01-17

    An apparatus and process are disclosed that deliver an analyte deposited on a substrate to a mass spectrometer that provides for trace analysis of complex organic analytes. Analytes are probed using a small droplet of solvent that is formed at the junction between two capillaries. A supply capillary maintains the droplet of solvent on the substrate; a collection capillary collects analyte desorbed from the surface and emits analyte ions as a focused spray to the inlet of a mass spectrometer for analysis. The invention enables efficient separation of desorption and ionization events, providing enhanced control over transport and ionization of the analyte.

  8. Analytical modelling of no-vent fill process

    NASA Technical Reports Server (NTRS)

    Vaughan, David A.; Schmidt, George R.

    1990-01-01

    An analytical model called FILL is presented which represents the first step in attaining the capability for no-vent fill of cryogens in space. The model's analytical structure is described, including the equations used to calculate transient thermodynamic behavior in different regions of the tank. The code predictions are compared with data from recent no-vent fill ground tests using Freon-114. The results are used to validate the FILL model to evaluate the viability of universal submerged jet theory in predicting system-level condensation effects.

  9. Inferring sparse networks for noisy transient processes.

    PubMed

    Tran, Hoang M; Bukkapatnam, Satish T S

    2016-01-01

    Inferring causal structures of real world complex networks from measured time series signals remains an open issue. The current approaches are inadequate to discern between direct versus indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, as well as nonlinear and transient dynamics of real world processes. We report a sparse regression (referred to as the l1-min) approach with theoretical bounds on the constraints on the allowable perturbation to recover the network structure that guarantees sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors), and the numerical stability of l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory network and Michaelis-Menten dynamics, as well as real world data sets from DREAM5 challenge. These investigations suggest that our approach can significantly improve, oftentimes by 5 orders of magnitude over the methods reported previously for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing and modular response analysis methods based on optimizing for sparsity, transients, noise and high dimensionality issues. PMID:26916813
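
A minimal sketch of the l1-regularised sparse-regression step that an l1-min style approach builds on, using plain iterative soft-thresholding (ISTA) rather than the authors' specific algorithm; the recovery problem and all names below are illustrative:

```python
import numpy as np

def ista_l1(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding (ISTA) for the l1-regularised least-squares
    problem min_x 0.5*||Ax - y||^2 + lam*||x||_1, the generic sparse-regression
    step behind l1-min style network inference."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the smooth term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# hypothetical recovery problem: one row of a sparse influence matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))              # 50 observations, 20 candidate parents
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]          # only three true influences
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista_l1(A, y)                          # support of x_hat ~ {2, 7, 11}
```

Despite noise, the l1 penalty zeroes out the spurious coefficients and keeps the three true influences, which is the sparsity-plus-robustness behavior the abstract describes.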

  10. Inferring sparse networks for noisy transient processes

    NASA Astrophysics Data System (ADS)

    Tran, Hoang M.; Bukkapatnam, Satish T. S.

    2016-02-01

    Inferring causal structures of real world complex networks from measured time series signals remains an open issue. The current approaches are inadequate to discern between direct versus indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, as well as nonlinear and transient dynamics of real world processes. We report a sparse regression (referred to as the l1-min) approach with theoretical bounds on the constraints on the allowable perturbation to recover the network structure that guarantees sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors), and the numerical stability of the l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory network and Michaelis-Menten dynamics, as well as real world data sets from DREAM5 challenge. These investigations suggest that our approach can significantly improve, oftentimes by 5 orders of magnitude over the methods reported previously for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing and modular response analysis methods based on optimizing for sparsity, transients, noise and high dimensionality issues.

  11. Inferring sparse networks for noisy transient processes

    PubMed Central

    Tran, Hoang M.; Bukkapatnam, Satish T.S.

    2016-01-01

    Inferring causal structures of real world complex networks from measured time series signals remains an open issue. The current approaches are inadequate to discern between direct versus indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, as well as nonlinear and transient dynamics of real world processes. We report a sparse regression (referred to as the l1-min) approach with theoretical bounds on the constraints on the allowable perturbation to recover the network structure that guarantees sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors), and the numerical stability of the l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory network and Michaelis-Menten dynamics, as well as real world data sets from DREAM5 challenge. These investigations suggest that our approach can significantly improve, oftentimes by 5 orders of magnitude over the methods reported previously for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing and modular response analysis methods based on optimizing for sparsity, transients, noise and high dimensionality issues. PMID:26916813

  12. Stationary and integrated autoregressive neural network processes.

    PubMed

    Trapletti, A; Leisch, F; Hornik, K

    2000-10-01

    We consider autoregressive neural network (AR-NN) processes driven by additive noise and demonstrate that the characteristic roots of the shortcuts (the standard conditions from linear time-series analysis) determine the stochastic behavior of the overall AR-NN process. If all the characteristic roots are outside the unit circle, then the process is ergodic and stationary. If at least one characteristic root lies inside the unit circle, then the process is transient. AR-NN processes with characteristic roots lying on the unit circle exhibit either ergodic, random walk, or transient behavior. We also analyze the class of integrated AR-NN (ARI-NN) processes and show that a standardized ARI-NN process "converges" to a Wiener process. Finally, least-squares estimation (training) of the stationary models and testing for nonstationarity is discussed. The estimators are shown to be consistent, and expressions for the limiting distributions are given. PMID:11032041
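
The stationarity condition reduces to locating the roots of the characteristic polynomial of the linear shortcut weights, which is easy to sketch. This is a generic AR root test, not the paper's code; the helper names are invented:

```python
import numpy as np

def ar_characteristic_roots(phi):
    """Roots of the AR characteristic polynomial 1 - phi_1*z - ... - phi_p*z^p.
    For an AR-NN process the same check applies to the linear shortcut weights."""
    # numpy.roots expects coefficients highest degree first: -phi_p, ..., -phi_1, 1
    coeffs = np.concatenate(([-c for c in phi[::-1]], [1.0]))
    return np.roots(coeffs)

def is_stationary(phi):
    """All characteristic roots outside the unit circle -> ergodic and stationary."""
    return bool(np.all(np.abs(ar_characteristic_roots(phi)) > 1.0))
```

For example, `is_stationary([0.5])` is true (single root at z = 2), while phi = 1 places the root on the unit circle, the random-walk boundary case the abstract discusses.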

  13. Multipass optical device and process for gas and analyte determination

    DOEpatents

    Bernacki, Bruce E.

    2011-01-25

    A torus multipass optical device and method are described that provide for trace level determination of gases and gas-phase analytes. The torus device includes an optical cavity defined by at least one ring mirror. The mirror delivers optical power in at least a radial and axial direction and propagates light in a multipass optical path of a predefined path length.

  14. Neural networks in the process industries

    SciTech Connect

    Ben, L.R.; Heavner, L.

    1996-12-01

    Neural networks, or more precisely, artificial neural networks (ANNs), are rapidly gaining in popularity. They first began to appear on the process-control scene in the early 1990s, but have been a research focus for more than 30 years. Neural networks are really empirical models that approximate the way man thinks neurons in the human brain work. Neural-net technology is not trying to produce computerized clones, but to model nature in an effort to mimic some of the brain's capabilities. Modeling, for the purposes of this article, means developing a mathematical description of physical phenomena. The physics and chemistry of industrial processes are usually quite complex and sometimes poorly understood. Our process understanding, and our imperfect ability to describe complexity in mathematical terms, limit fidelity of first-principle models. Computational requirements for executing these complex models are a further limitation. It is often not possible to execute first-principle model algorithms at the high rate required for online control. Nevertheless, rigorous first principle models are commonplace design tools. Process control is another matter. Important model inputs are often not available as process measurements, making real-time application difficult. In fact, engineers often use models to infer unavailable measurements. 5 figs.

  15. Experimental and Analytical Research on Fracture Processes in Rock

    SciTech Connect

    Herbert H. Einstein; Jay Miller; Bruno Silva

    2009-02-27

    Experimental studies on fracture propagation and coalescence were conducted which, together with previous tests by this group on gypsum and marble, provide information on fracturing. Specifically, different fracture geometries were tested, which together with the different material properties will provide the basis for analytical/numerical modeling. Initial steps on the models were made, as were initial investigations on the effect of pressurized water on fracture coalescence.

  16. Analytically tractable studies of traveling waves of activity in integrate-and-fire neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Osan, Remus

    2016-05-01

    In contrast to other large-scale network models for propagation of electrical activity in neural tissue that have no analytical solutions for their dynamics, we show that for a specific class of integrate and fire neural networks the acceleration depends quadratically on the instantaneous speed of the activity propagation. We use this property to analytically compute the network spike dynamics and to highlight the emergence of a natural time scale for the evolution of the traveling waves. These results allow us to examine other applications of this model such as the effect that a nonconductive gap of tissue has on further activity propagation. Furthermore we show that activity propagation also depends on local conditions for other more general connectivity functions, by converting the evolution equations for network dynamics into a low-dimensional system of ordinary differential equations. This approach greatly enhances our intuition into the mechanisms of the traveling waves evolution and significantly reduces the simulation time for this class of models.

  17. Analytically tractable studies of traveling waves of activity in integrate-and-fire neural networks.

    PubMed

    Zhang, Jie; Osan, Remus

    2016-05-01

    In contrast to other large-scale network models for propagation of electrical activity in neural tissue that have no analytical solutions for their dynamics, we show that for a specific class of integrate and fire neural networks the acceleration depends quadratically on the instantaneous speed of the activity propagation. We use this property to analytically compute the network spike dynamics and to highlight the emergence of a natural time scale for the evolution of the traveling waves. These results allow us to examine other applications of this model such as the effect that a nonconductive gap of tissue has on further activity propagation. Furthermore we show that activity propagation also depends on local conditions for other more general connectivity functions, by converting the evolution equations for network dynamics into a low-dimensional system of ordinary differential equations. This approach greatly enhances our intuition into the mechanisms of the traveling waves evolution and significantly reduces the simulation time for this class of models. PMID:27300901

  18. Bosonic reaction-diffusion processes on scale-free networks

    NASA Astrophysics Data System (ADS)

    Baronchelli, Andrea; Catanzaro, Michele; Pastor-Satorras, Romualdo

    2008-07-01

    Reaction-diffusion processes can be adopted to model a large number of dynamics on complex networks, such as transport processes or epidemic outbreaks. In most cases, however, they have been studied from a fermionic perspective, in which each vertex can be occupied by at most one particle. While still useful, this approach suffers from some drawbacks, the most important probably being the difficulty to implement reactions involving more than two particles simultaneously. Here we develop a general framework for the study of bosonic reaction-diffusion processes on complex networks, in which there is no restriction on the number of interacting particles that a vertex can host. We describe these processes theoretically by means of continuous-time heterogeneous mean-field theory and divide them into two main classes: steady-state and monotonously decaying processes. We analyze specific examples of both behaviors within the class of one-species processes, comparing the results (whenever possible) with the corresponding fermionic counterparts. We find that the time evolution and critical properties of the particle density are independent of the fermionic or bosonic nature of the process, while differences exist in the functional form of the density of occupied vertices in a given degree class k. We implement a continuous-time Monte Carlo algorithm, well suited for general bosonic simulations, which allows us to confirm the analytical predictions formulated within mean-field theory. Our results, at both the theoretical and numerical levels, can be easily generalized to tackle more complex, multispecies, reaction-diffusion processes and open a promising path for a general study and classification of this kind of dynamical systems on complex networks.
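
The bosonic setting (no cap on particles per vertex) can be illustrated with a toy simulation of pure diffusion, where the standard result that long-run vertex occupation is proportional to degree emerges. This is an illustrative sketch, not the paper's continuous-time Monte Carlo algorithm; the graph and names are invented:

```python
import random

def bosonic_diffusion(adj, n_particles=2000, steps=200, seed=1):
    """Toy bosonic diffusion: any number of walkers may share a vertex (no
    fermionic occupancy cap).  Each step every walker hops to a uniformly
    chosen neighbour; the long-run occupation of a vertex is proportional
    to its degree."""
    rng = random.Random(seed)
    nodes = list(adj)
    pos = [rng.choice(nodes) for _ in range(n_particles)]
    for _ in range(steps):
        pos = [rng.choice(adj[v]) for v in pos]     # every walker hops once
    counts = {v: 0 for v in nodes}
    for v in pos:
        counts[v] += 1
    return counts

# small non-bipartite graph: triangle 0-1-2 plus a pendant node 3 on vertex 0
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
counts = bosonic_diffusion(adj)    # counts[v] ~ n_particles * deg(v) / 8
```

Vertex 0 (degree 3) ends up with roughly three times the occupation of the pendant vertex 3, with no occupancy constraint ever enforced.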

  19. GraphPrints: Towards a Graph Analytic Method for Network Anomaly Detection

    SciTech Connect

    Harshaw, Chris R; Bridges, Robert A; Iannacone, Michael D; Reed, Joel W; Goodall, John R

    2016-01-01

    This paper introduces a novel graph-analytic approach for detecting anomalies in network flow data called GraphPrints. Building on foundational network-mining techniques, our method represents time slices of traffic as a graph, then counts graphlets, small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified, and furthermore, individual IPs experiencing abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level and 0.05% at the IP level, with 100% true positive rates at both.
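
The two core steps, counting a small graphlet per time slice and then flagging outlying counts, can be sketched as follows. Triangle counting and a robust z-score stand in for the paper's richer graphlet vectors and detector; all names and numbers are illustrative:

```python
import numpy as np

def triangle_count(A):
    """Count triangles in an undirected graph from its adjacency matrix
    (the simplest graphlet; GraphPrints uses richer graphlet vectors)."""
    return int(np.trace(A @ A @ A) // 6)

def flag_outliers(counts, z_thresh=3.0):
    """Flag time slices whose graphlet count sits more than z_thresh robust
    standard deviations from the median (a stand-in for the paper's detector)."""
    c = np.asarray(counts, dtype=float)
    med = np.median(c)
    mad = np.median(np.abs(c - med)) or 1.0    # median absolute deviation
    return [i for i, x in enumerate(c) if abs(x - med) / (1.4826 * mad) > z_thresh]

counts = [3, 4, 3, 5, 4, 3, 40, 4]   # made-up triangle counts per time slice
anomalies = flag_outliers(counts)    # the burst in slice 6 is flagged
```

In the real method each slice yields a whole vector of graphlet counts and the outlier detection runs on that sequence, but the count-then-flag pipeline is the same.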

  20. Analytical solution for a class of network dynamics with mechanical and financial applications.

    PubMed

    Krejčí, P; Lamba, H; Melnik, S; Rachinskii, D

    2014-09-01

    We show that for a certain class of dynamics at the nodes the response of a network of any topology to arbitrary inputs is defined in a simple way by its response to a monotone input. The nodes may have either a discrete or continuous set of states and there is no limit on the complexity of the network. The results provide both an efficient numerical method and the potential for accurate analytic approximation of the dynamics on such networks. As illustrative applications, we introduce a quasistatic mechanical model with objects interacting via frictional forces and a financial market model with avalanches and critical behavior that are generated by momentum trading strategies. PMID:25314496

  1. Speed of synchronization in complex networks of neural oscillators: Analytic results based on Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Timme, Marc; Geisel, Theo; Wolf, Fred

    2006-03-01

    We analyze the dynamics of networks of spiking neural oscillators. First, we present an exact linear stability theory of the synchronous state for networks of arbitrary connectivity. For general neuron rise functions, stability is determined by multiple operators, for which standard analysis is not suitable. We describe a general nonstandard solution to the multioperator problem. Subsequently, we derive a class of neuronal rise functions for which all stability operators become degenerate and standard eigenvalue analysis becomes a suitable tool. Interestingly, this class is found to consist of networks of leaky integrate-and-fire neurons. For random networks of inhibitory integrate-and-fire neurons, we then develop an analytical approach, based on the theory of random matrices, to precisely determine the eigenvalue distributions of the stability operators. This yields the asymptotic relaxation time for perturbations to the synchronous state which provides the characteristic time scale on which neurons can coordinate their activity in such networks. For networks with finite in-degree, i.e., finite number of presynaptic inputs per neuron, we find a speed limit to coordinating spiking activity. Even with arbitrarily strong interaction strengths neurons cannot synchronize faster than at a certain maximal speed determined by the typical in-degree.

  2. Bias and precision of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1984

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1987-01-01

    The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to calculate estimates of analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate analyte masses that apparently were added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride, and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measures performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)

  3. The application of the analytic hierarchy process when choosing layout schemes for a geokhod pumping station

    NASA Astrophysics Data System (ADS)

    Chernukhin, R. V.; Dronov, A. A.; Blashchuk, M. Y.

    2015-09-01

    The article describes one possibility for choosing layout schemes for geokhod systems: the analytic hierarchy process. The essence of the method is summarized, and its use is considered for the analysis and choice of layout schemes for a geokhod pumping station. Keywords: geokhod, analytic hierarchy process, pumping station, layout scheme.

  4. Tensegrity II. How structural networks influence cellular information processing networks

    NASA Technical Reports Server (NTRS)

    Ingber, Donald E.

    2003-01-01

    The major challenge in biology today is biocomplexity: the need to explain how cell and tissue behaviors emerge from collective interactions within complex molecular networks. Part I of this two-part article described a mechanical model of cell structure based on tensegrity architecture that explains how the mechanical behavior of the cell emerges from physical interactions among the different molecular filament systems that form the cytoskeleton. Recent work shows that the cytoskeleton also orients much of the cell's metabolic and signal transduction machinery and that mechanical distortion of cells and the cytoskeleton through cell surface integrin receptors can profoundly affect cell behavior. In particular, gradual variations in this single physical control parameter (cell shape distortion) can switch cells between distinct gene programs (e.g. growth, differentiation and apoptosis), and this process can be viewed as a biological phase transition. Part II of this article covers how combined use of tensegrity and solid-state mechanochemistry by cells may mediate mechanotransduction and facilitate integration of chemical and physical signals that are responsible for control of cell behavior. In addition, it examines how cell structural networks affect gene and protein signaling networks to produce characteristic phenotypes and cell fate transitions during tissue development.

  5. Diagnosing process faults using neural network models

    SciTech Connect

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

  6. Evaluation of feeds for melt and dilute process using an analytical hierarchy process

    SciTech Connect

    Krupa, J.F.

    2000-03-22

    Westinghouse Savannah River Company was requested to evaluate whether nuclear materials other than aluminum-clad spent nuclear fuel should be considered for treatment to prepare them for disposal in the melt and dilute facility as part of the Treatment and Storage Facility currently projected for construction in the L-Reactor process area. The decision analysis process used to develop this analysis considered many variables and uncertainties, including repository requirements that are not yet finalized. The Analytical Hierarchy Process using a ratings methodology was used to rank potential feed candidates for disposition through the Melt and Dilute facility proposed for disposition of Savannah River Site aluminum-clad spent nuclear fuel. Because of the scoping nature of this analysis, the expert team convened for this purpose concentrated on technical feasibility and potential cost impacts associated with using melt and dilute versus the current disposition option. This report documents results of the decision analysis.
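
The core AHP computation behind such a ranking is the principal eigenvector of a pairwise-comparison matrix, normalised into priority weights. This is a generic sketch; the comparison matrix below is hypothetical, not taken from the cited analysis:

```python
import numpy as np

def ahp_priorities(M):
    """AHP priority weights: the principal right eigenvector of a reciprocal
    pairwise-comparison matrix, normalised to sum to one."""
    vals, vecs = np.linalg.eig(np.asarray(M, dtype=float))
    k = int(np.argmax(vals.real))          # Perron (largest real) eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# hypothetical 3-criterion comparison matrix (reciprocal, 1s on the diagonal):
# criterion 1 moderately preferred to 2 and strongly preferred to 3
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_priorities(M)                      # roughly [0.65, 0.23, 0.12]
```

In a ratings-based application like the one described, each candidate feed would then be scored against these criterion weights rather than compared pairwise against every other candidate.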

  7. The Rondonia Lightning Detection Network: Network Description, Science Objectives, Data Processing Archival/Methodology, and Results

    NASA Technical Reports Server (NTRS)

    Blakeslee, R. J.; Bailey, J. C.; Pinto, O.; Athayde, A.; Renno, N.; Weidman, C. D.

    2003-01-01

    A four station Advanced Lightning Direction Finder (ALDF) network was established in the state of Rondonia in western Brazil in 1999 through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of-arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/Marshall Space Flight Center in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the Internet. The network, which is still operational, was deployed to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite that was launched in November 1997. The measurements are also being used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long-time series observations produced by this network will help establish a regional lightning climatological database, supplementing other databases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at the NASA/Marshall Space Flight Center have been applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The data will also be corrected for the network detection efficiency. The processing methodology and the results from the analysis of four years of network operations will be presented.

  8. Materials, Processes, and Environmental Engineering Network

    NASA Technical Reports Server (NTRS)

    White, Margo M.

    1993-01-01

    Attention is given to the Materials, Processes, and Environmental Engineering Network (MPEEN), which was developed as a central holding facility for materials testing information generated by the Materials and Processes Laboratory of NASA-Marshall. It contains information from other NASA centers and outside agencies, and also includes the NASA Environmental Information System (NEIS) and Failure Analysis Information System (FAIS) data. The data base is NEIS, which is accessible through MPEEN. Environmental concerns are addressed regarding materials identified by the NASA Operational Environment Team (NOET) to be hazardous to the environment. The data base also contains the usage and performance characteristics of these materials.

  9. Time-to-event analysis with artificial neural networks: an integrated analytical and rule-based study for breast cancer.

    PubMed

    Lisboa, Paulo J G; Etchells, Terence A; Jarman, Ian H; Hane Aung, M S; Chabaud, Sylvie; Bachelot, Thomas; Perol, David; Gargi, Thérèse; Bourdès, Valérie; Bonnevay, Stéphane; Négrier, Sylvie

    2008-01-01

    This paper presents an analysis of censored survival data for breast cancer specific mortality and disease-free survival. There are three stages to the process, namely time-to-event modelling, risk stratification by predicted outcome and model interpretation using rule extraction. Model selection was carried out using the benchmark linear model, Cox regression, but risk staging was derived with Cox regression and with Partial Logistic Regression Artificial Neural Networks regularised with Automatic Relevance Determination (PLANN-ARD). This analysis compares the two approaches, showing the benefit of using the neural network framework especially for patients at high risk. The neural network model also results in a smooth model of the hazard without the need for limiting assumptions of proportionality. The model predictions were verified using out-of-sample testing, with the mortality model also compared with two other prognostic models called TNG and the NPI rule model. Further verification was carried out by comparing marginal estimates of the predicted and actual cumulative hazards. It was also observed that doctors seem to treat mortality and disease-free models as equivalent, so a further analysis was performed to observe if this was the case. The analysis was extended with automatic rule generation using Orthogonal Search Rule Extraction (OSRE). This methodology translates analytical risk scores into the language of the clinical domain, enabling direct validation of the operation of the Cox or neural network model. This paper extends the existing OSRE methodology to data sets that include continuous-valued variables. PMID:18304780

  10. Resting-brain functional connectivity predicted by analytic measures of network communication.

    PubMed

    Goñi, Joaquín; van den Heuvel, Martijn P; Avena-Koenigsberger, Andrea; Velez de Mendizabal, Nieves; Betzel, Richard F; Griffa, Alessandra; Hagmann, Patric; Corominas-Murtra, Bernat; Thiran, Jean-Philippe; Sporns, Olaf

    2014-01-14

    The complex relationship between structural and functional connectivity, as measured by noninvasive imaging of the human brain, poses many unresolved challenges and open questions. Here, we apply analytic measures of network communication to the structural connectivity of the human brain and explore the capacity of these measures to predict resting-state functional connectivity across three independently acquired datasets. We focus on the layout of shortest paths across the network and on two communication measures, search information and path transitivity, which account for how these paths are embedded in the rest of the network. Search information is an existing measure of information needed to access or trace shortest paths; we introduce path transitivity to measure the density of local detours along the shortest path. We find that both search information and path transitivity predict the strength of functional connectivity among both connected and unconnected node pairs. They do so at levels that match or significantly exceed path length measures, Euclidean distance, as well as computational models of neural dynamics. This capacity suggests that dynamic couplings due to interactions among neural elements in brain networks are substantially influenced by the broader network context adjacent to the shortest communication pathways. PMID:24379387
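
Search information can be sketched for a small unweighted graph as the bits needed to follow the shortest path. This uses one common formulation (first hop chosen among deg(s) links, later hops among the deg(i)-1 non-returning links) and is illustrative rather than the authors' exact implementation, which also averages over all shortest paths:

```python
import math
from collections import deque

def shortest_path(adj, s, t):
    """Breadth-first shortest path in an unweighted graph (adjacency dict)."""
    prev = {s: None}
    q = deque([s])
    while q:
        v = q.popleft()
        if v == t:
            break
        for u in adj[v]:
            if u not in prev:
                prev[u] = v
                q.append(u)
    path, v = [], t
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1]

def search_information(adj, s, t):
    """Bits needed to 'follow' the shortest path from s to t: -log2 of the
    probability that an unbiased random walker traces it, choosing the first
    hop among deg(s) links and each later hop among deg(i)-1 non-returning
    links (one common formulation)."""
    path = shortest_path(adj, s, t)
    p = 1.0 / len(adj[path[0]])
    for v in path[1:-1]:                  # intermediate nodes only
        p *= 1.0 / (len(adj[v]) - 1)
    return -math.log2(p)
```

On a simple chain the path is forced, so the search information is 0 bits; between two leaves of a 3-leaf star the walker faces one binary choice at the hub, giving 1 bit.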

  11. Exact performance analytical model for spectrum allocation in flexible grid optical networks

    NASA Astrophysics Data System (ADS)

    Yu, Yiming; Zhang, Jie; Zhao, Yongli; Li, Hui; Ji, Yuefeng; Gu, Wanyi

    2014-03-01

    Dynamic flexible grid optical networks have gained much attention because of their high spectrum efficiency and flexibility, but their performance analysis is more complex than for fixed grid optical networks. An analytical Markov model is first presented in this paper, which exactly describes the stochastic characteristics of spectrum allocation in flexible grid optical networks under both random-fit and first-fit resource assignment policies. We focus on the effect of the spectrum contiguity constraint, which has not been systematically studied from the standpoint of mathematical modeling, and three major properties of the model are presented and analyzed. The model can expose key performance features and act as the foundation for modeling the Routing and Spectrum Assignment (RSA) problem with diverse topologies. Two heuristic algorithms are also proposed to make it more tractable. Finally, several key parameters, such as blocking probability, resource utilization rate and fragmentation rate, are presented and computed, and the corresponding Monte Carlo simulation results match closely with the analytical results, confirming the correctness of the mathematical model.
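    The first-fit policy under the spectrum contiguity constraint analysed here can be mimicked with a small Monte Carlo sketch of the kind the paper validates against. The slot count, traffic parameters and demand sizes below are illustrative assumptions, not values from the paper.

```python
import random

def first_fit(spectrum, size):
    """Start index of the first contiguous run of `size` free slots, else None."""
    run = 0
    for i, occupied in enumerate(spectrum):
        run = 0 if occupied else run + 1
        if run == size:
            return i - size + 1
    return None

def simulate(num_slots=16, arrival_rate=0.5, mean_hold=10.0,
             demand_sizes=(1, 2, 4), events=20000, seed=1):
    """Toy dynamic-traffic simulation: Poisson arrivals, exponential holding times,
    first-fit allocation with the contiguity constraint; returns blocking probability."""
    rng = random.Random(seed)
    spectrum = [False] * num_slots
    active = []  # (departure_time, start_slot, size)
    t, blocked = 0.0, 0
    for _ in range(events):
        t += rng.expovariate(arrival_rate)
        # release connections that have departed by time t
        for conn in [c for c in active if c[0] <= t]:
            for j in range(conn[1], conn[1] + conn[2]):
                spectrum[j] = False
            active.remove(conn)
        size = rng.choice(demand_sizes)
        start = first_fit(spectrum, size)
        if start is None:
            blocked += 1  # no contiguous block of `size` slots: request blocked
        else:
            for j in range(start, start + size):
                spectrum[j] = True
            active.append((t + rng.expovariate(1.0 / mean_hold), start, size))
    return blocked / events
```

    Raising arrival_rate or shrinking num_slots drives the blocking probability up; because of the contiguity constraint, large demands are blocked first even when enough total free slots exist, which is exactly the fragmentation effect the analytical model captures.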

  12. Analytical Model of Large Data Transactions in CoAP Networks

    PubMed Central

    Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.

    2014-01-01

    We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and the 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge, this is the first study that evaluates and compares analytically the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, the blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143

  13. Multi-analyte assay for triazines using cross-reactive antibodies and neural networks.

    PubMed

    Reder, Sabine; Dieterle, Frank; Jansen, Hendrikus; Alcock, Susan; Gauglitz, Günter

    2003-12-30

    A biosensor system based on total internal reflectance fluorescence (TIRF) was used to discriminate a mixture of the triazines atrazine and simazine. Only cross-reactive antibodies were available for these two analytes. The biosensor is fully automated and can be regenerated, allowing several hundred measurements without any user input. Even remote control for online monitoring in the field is possible. The multivariate calibration of the sensor signal was performed using artificial neural networks, as the relationship between the sensor signals and the concentrations of the analytes is highly non-linear. For the development of a multi-analyte immunoassay, consisting of two polyclonal antibodies with cross-reactivity to atrazine and simazine and different derivatives immobilised on the transducer surface, binding characteristics such as binding capacity and cross-reactivity were characterised. The examination of three different measurement procedures showed that a two-step measurement using only one antibody per step allows quantification of both analytes in a mixture, with limits of detection of 0.2 microg/l for atrazine and 0.3 microg/l for simazine. PMID:14623469

  14. Leveraging Big-Data for Business Process Analytics

    ERIC Educational Resources Information Center

    Vera-Baquero, Alejandro; Colomo Palacios, Ricardo; Stantchev, Vladimir; Molloy, Owen

    2015-01-01

    Purpose: This paper aims to present a solution that enables organizations to monitor and analyse the performance of their business processes by means of Big Data technology. Business process improvement can drastically influence the profit of corporations and help them remain viable. However, the use of traditional Business Intelligence…

  15. Process models: analytical tools for managing industrial energy systems

    SciTech Connect

    Howe, S O; Pilati, D A; Balzer, C; Sparrow, F T

    1980-01-01

    How the process models developed at BNL are used to analyze industrial energy systems is described and illustrated. Following a brief overview of the industry modeling program, the general methodology of process modeling is discussed. The discussion highlights the important concepts, contents, inputs, and outputs of a typical process model. A model of the US pulp and paper industry is then discussed as a specific application of process modeling methodology. Applications addressed with the case study results include projections of energy demand, conservation technology assessment, energy-related tax policies, and sensitivity analysis. A subsequent discussion of these results supports the conclusion that industry process models are versatile and powerful tools for managing industrial energy systems.

  16. Transient stability assessment for network topology changes: Application of energy margin analytical sensitivity

    SciTech Connect

    Chadalavada, V.; Vittal, V. . Dept. of Electrical Engineering and Computer Engineering)

    1994-08-01

    Recent developments in direct transient stability assessment using the Transient Energy Function (TEF) method have included the exit point technique to determine the controlling unstable equilibrium point (uep). In this paper, analytical sensitivity of the energy margin is coupled with the exit point based TEF method to assess system stability when there is a change in system parameters: plant generation or network configuration. The principal features of this paper include: introduction of a very fast sensitivity technique to account for network configuration changes, elimination of the assumption that the mode of disturbance of the controlling uep does not change, correlation of the sensitivity results with time simulation through swing curves. The technique is tested on the 50-generator IEEE test system and the 161-generator Northern States Power (NSP) system.

  17. Direct, physically motivated derivation of the contagion condition for spreading processes on generalized random networks

    NASA Astrophysics Data System (ADS)

    Dodds, Peter Sheridan; Harris, Kameron Decker; Payne, Joshua L.

    2011-05-01

    For a broad range of single-seed contagion processes acting on generalized random networks, we derive a unifying analytic expression for the possibility of global spreading events in a straightforward, physically intuitive fashion. Our reasoning lays bare a direct mechanical understanding of an archetypal spreading phenomenon that is not evident in the circuitous extant mathematical approaches.
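    For the special case of simple (single-transmission) contagion on a configuration-model network, the global-spreading condition reduces to the familiar requirement that each infected edge produce more than one newly infected edge on average. A minimal sketch of that check follows; the Poisson degree distribution and transmission probabilities are illustrative assumptions rather than the paper's general gain-ratio expression.

```python
from math import exp, factorial

def excess_degree_factor(pk):
    """E[k(k-1)] / E[k] for a degree distribution pk (dict: degree -> probability)."""
    mean_k = sum(k * p for k, p in pk.items())
    second = sum(k * (k - 1) * p for k, p in pk.items())
    return second / mean_k

def spreading_possible(pk, beta):
    """Simple-contagion condition: with transmission probability beta per edge,
    global spreading is possible when beta * E[k(k-1)]/E[k] > 1."""
    return beta * excess_degree_factor(pk) > 1

def poisson_pk(z, kmax=60):
    """Poisson degree distribution with mean z, truncated at kmax for the sketch."""
    return {k: exp(-z) * z**k / factorial(k) for k in range(kmax + 1)}
```

    For a Poisson network the excess-degree factor equals the mean degree z, so the condition collapses to beta * z > 1.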

  18. A comprehensive Network Security Risk Model for process control networks.

    PubMed

    Henry, Matthew H; Haimes, Yacov Y

    2009-02-01

    The risk of cyber attacks on process control networks (PCN) is receiving significant attention due to the potentially catastrophic extent to which PCN failures can damage the infrastructures and commodity flows that they support. Risk management addresses the coupled problems of (1) reducing the likelihood that cyber attacks would succeed in disrupting PCN operation and (2) reducing the severity of consequences in the event of PCN failure or manipulation. The Network Security Risk Model (NSRM) developed in this article provides a means of evaluating the efficacy of candidate risk management policies by modeling the baseline risk and assessing expectations of risk after the implementation of candidate measures. Where existing risk models fall short of providing adequate insight into the efficacy of candidate risk management policies due to shortcomings in their structure or formulation, the NSRM provides model structure and an associated modeling methodology that captures the relevant dynamics of cyber attacks on PCN for risk analysis. This article develops the NSRM in detail in the context of an illustrative example. PMID:19000078

  19. An analytical approach to customer requirement information processing

    NASA Astrophysics Data System (ADS)

    Zhou, Zude; Xiao, Zheng; Liu, Quan; Ai, Qingsong

    2013-11-01

    'Customer requirements' (CRs) management is a key component of customer relationship management (CRM). By processing customer-focused information, CRs management plays an important role in enterprise systems (ESs). Although two main CRs analysis methods, quality function deployment (QFD) and Kano model, have been applied to many fields by many enterprises in the past several decades, the limitations such as complex processes and operations make them unsuitable for online businesses among small- and medium-sized enterprises (SMEs). Currently, most SMEs do not have the resources to implement QFD or Kano model. In this article, we propose a method named customer requirement information (CRI), which provides a simpler and easier way for SMEs to run CRs analysis. The proposed method analyses CRs from the perspective of information and applies mathematical methods to the analysis process. A detailed description of CRI's acquisition, classification and processing is provided.

  20. An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application

    ERIC Educational Resources Information Center

    Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth

    2016-01-01

    Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
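    The pairwise-comparison core of AHP that this framework builds on can be sketched in a few lines: derive priority weights as the principal eigenvector of the comparison matrix (via power iteration) and check Saaty's consistency ratio. The 3-criterion matrix below is a made-up, perfectly consistent example, not data from the paper.

```python
def ahp_weights(M, iters=100):
    """Priority weights of a pairwise comparison matrix via power iteration,
    plus Saaty's consistency ratio CR = CI / RI."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # lambda_max estimate; CI = (lambda_max - n) / (n - 1)
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index (small-n subset)
    return w, ci / ri

# Hypothetical example: criterion A is twice as important as B and four times C; B twice C.
M = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w, cr = ahp_weights(M)
```

    A CR below 0.1 is conventionally taken to mean the judgements are consistent enough to use; this matrix is exactly consistent, so the weights come out proportional to (4, 2, 1) and CR is essentially zero.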

  1. Materials, processes, and environmental engineering network

    NASA Technical Reports Server (NTRS)

    White, Margo M.

    1993-01-01

    The Materials, Processes, and Environmental Engineering Network (MPEEN) was developed as a central holding facility for materials testing information generated by the Materials and Processes Laboratory. It contains information from other NASA centers and outside agencies, and also includes the NASA Environmental Information System (NEIS) and Failure Analysis Information System (FAIS) data. Environmental replacement materials information is a newly developed focus of MPEEN. This database is the NASA Environmental Information System, NEIS, which is accessible through MPEEN. Environmental concerns are addressed regarding materials identified by the NASA Operational Environment Team, NOET, to be hazardous to the environment. An environmental replacement technology database is contained within NEIS. Environmental concerns about materials are identified by NOET, and control or replacement strategies are formed. This database also contains the usage and performance characteristics of these hazardous materials. In addition to addressing environmental concerns, MPEEN contains one of the largest materials databases in the world. Over 600 users access this network on a daily basis. There is information available on failure analysis, metals and nonmetals testing, materials properties, standard and commercial parts, foreign alloy cross-reference, Long Duration Exposure Facility (LDEF) data, and Materials and Processes Selection List data.

  2. Analytical theory of polymer-network-mediated interaction between colloidal particles

    PubMed Central

    Di Michele, Lorenzo; Zaccone, Alessio; Eiser, Erika

    2012-01-01

    Nanostructured materials based on colloidal particles embedded in a polymer network are used in a variety of applications ranging from nanocomposite rubbers to organic-inorganic hybrid solar cells. Further, polymer-network-mediated colloidal interactions are highly relevant to biological studies whereby polymer hydrogels are commonly employed to probe the mechanical response of living cells, which can determine their biological function in physiological environments. The performance of nanomaterials crucially relies upon the spatial organization of the colloidal particles within the polymer network that depends, in turn, on the effective interactions between the particles in the medium. Existing models based on nonlocal equilibrium thermodynamics fail to clarify the nature of these interactions, precluding the way toward the rational design of polymer-composite materials. In this article, we present a predictive analytical theory of these interactions based on a coarse-grained model for polymer networks. We apply the theory to the case of colloids partially embedded in cross-linked polymer substrates and clarify the origin of attractive interactions recently observed experimentally. Monte Carlo simulation results that quantitatively confirm the theoretical predictions are also presented. PMID:22679289

  3. Neural network training as a dissipative process.

    PubMed

    Gori, Marco; Maggini, Marco; Rossi, Alessandro

    2016-09-01

    This paper analyzes the practical issues and reports some results on a theory in which learning is modeled as a continuous temporal process driven by laws describing the interactions of intelligent agents with their own environment. The classic regularization framework is paired with the idea of temporal manifolds by introducing the principle of least cognitive action, which is inspired by the related principle of mechanics. The introduction of the counterparts of the kinetic and potential energy leads to an interpretation of learning as a dissipative process. As an example, we apply the theory to supervised learning in neural networks and show that the corresponding Euler-Lagrange differential equations can be connected to the classic gradient descent algorithm on the supervised pairs. We give preliminary experiments to confirm the soundness of the theory. PMID:27389569

  4. Rhodobase, a meta-analytical tool for reconstructing gene regulatory networks in a model photosynthetic bacterium.

    PubMed

    Moskvin, Oleg V; Bolotin, Dmitry; Wang, Andrew; Ivanov, Pavel S; Gomelsky, Mark

    2011-02-01

    We present Rhodobase, a web-based meta-analytical tool for the analysis of transcriptional regulation in a model anoxygenic photosynthetic bacterium, Rhodobacter sphaeroides. The gene association meta-analysis is based on pooled data from 100 R. sphaeroides whole-genome DNA microarrays. Gene-centric regulatory networks were visualized using the StarNet approach (Jupiter, D.C., VanBuren, V., 2008. A visual data mining tool that facilitates reconstruction of transcription regulatory networks. PLoS ONE 3, e1717) with several modifications. We developed a means to identify and visualize operons and superoperons. We designed a framework for the cross-genome search for transcription factor binding sites that takes into account the high GC-content and oligonucleotide usage profile characteristic of the R. sphaeroides genome. To facilitate reconstruction of directional relationships between co-regulated genes, we screened the upstream sequences (-400 to +20 bp from start codons) of all genes for putative binding sites of bacterial transcription factors using a self-optimizing search method developed here. To test the performance of the meta-analysis tools and transcription factor site predictions, we reconstructed selected nodes of the R. sphaeroides transcription factor-centric regulatory matrix. The test revealed regulatory relationships that correlate well with the experimentally derived data. The database of transcriptional profile correlations, the network visualization engine and the optimized search engine for transcription factor binding site analysis are available at http://rhodobase.org. PMID:21070832

  5. Analytical and experimental studies for thermal plasma processing of materials

    NASA Astrophysics Data System (ADS)

    Work continued on thermal plasma processing of materials. This quarter, ceramic powders of carbides, aluminum nitride, oxides, solid solutions, magnetic and non-magnetic spinels, superconductors, and composites were successfully synthesized in a Triple DC Torch Plasma Jet Reactor (TTPR) and in a single DC plasma jet reactor. All the ceramic powders, with the exception of AlN, were synthesized using a novel injection method developed to overcome the problems associated with solid injection, in particular for the single DC plasma jet reactor, and to realize the benefits of gas-phase reactions. Also, initial experiments have been performed for the deposition of diamond coatings on Si wafers using the TTPR with methane as the carbon source. Well-faceted diamond crystallites were deposited on the surface of the wafers, forming a continuous, one-particle-thick coating. For measuring temperature and velocity fields in plasma systems, enthalpy probes have been developed and tested. Their validity has been checked by performing energy and mass flux balances in an argon plasma jet operated in an argon atmosphere. Total Gibbs free energy minimization calculations using a quasi-equilibrium modification have been applied to simulate several chemical reactions. Plasma reactor modelling has been performed for the counter-flow liquid injection plasma synthesis experiment. Plasma diagnostics have been initiated to determine the pressure gradient in the coalesced part of the plasma jet. The pressure gradient drives the diffusion of chemical species, which ultimately controls the chemical reactions.

  6. Testing and Analytical Modeling for Purging Process of a Cryogenic Line

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.

    2013-01-01

    The purging operations for cryogenic main propulsion systems of upper stage are usually carried out for the following cases: 1) Purging of the Fill/Drain line after completion of propellant loading. This operation allows the removal of residual propellant mass; and 2) Purging of the Feed/Drain line if the mission is scrubbed. The lines would be purged by connections to a ground high-pressure gas storage source. The flowrate of purge gas should be regulated such that the pressure in the line will not exceed the required maximum allowable value. Exceeding the maximum allowable pressure may lead to structural damage in the line. To gain confidence in analytical models of the purge process, a test series was conducted. The test article, a 20-cm incline line, was filled with liquid hydrogen and then purged with gaseous helium (GHe). The influences of GHe flowrates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program, an in-house general-purpose computer program for flow network analysis, was utilized to model and simulate the testing. The test procedures, modeling descriptions, and the results will be presented in the final paper.

  8. Default network activation during episodic and semantic memory retrieval: A selective meta-analytic comparison.

    PubMed

    Kim, Hongkeun

    2016-01-01

    It remains unclear whether and to what extent the default network subregions involved in episodic memory (EM) and semantic memory (SM) processes overlap or are separated from one another. This study addresses this issue through a controlled meta-analysis of functional neuroimaging studies involving healthy participants. Various EM and SM task paradigms differ widely in the extent of default network involvement. Therefore, the issue at hand cannot be properly addressed without some control for this factor. In this regard, this study employs a two-stage analysis: a preliminary meta-analysis to select EM and SM task paradigms that recruit relatively extensive default network regions and a main analysis to compare the selected task paradigms. Based on a within-EM comparison, the default network contributed more to recollection/familiarity effects than to old/new effects, and based on a within-SM comparison, it contributed more to word/pseudoword effects than to semantic/phonological effects. According to a direct comparison of recollection/familiarity and word/pseudoword effects, each involving a range of default network regions, there were more overlaps than separations in default network subregions involved in these two effects. More specifically, overlaps included the bilateral posterior cingulate/retrosplenial cortex, left inferior parietal lobule, and left anteromedial prefrontal regions, whereas separations included only the hippocampal formation and the parahippocampal cortex region, which was unique to recollection/familiarity effects. These results indicate that EM and SM retrieval processes involving strong memory signals recruit extensive and largely overlapping default network regions and differ mainly in distinct contributions of hippocampus and parahippocampal regions to EM retrieval. PMID:26562053

  9. Investigating the functional heterogeneity of the default mode network using coordinate-based meta-analytic modeling

    PubMed Central

    Laird, Angela R.; Eickhoff, Simon B.; Li, Karl; Robin, Donald A.; Glahn, David C.; Fox, Peter T.

    2010-01-01

    The default mode network (DMN) comprises a set of regions that exhibit ongoing, intrinsic activity in the resting state and task-related decreases in activity across a range of paradigms. However, DMN regions have also been reported to show task-related increases in activity, either independently or coactivated with other regions in the network. Cognitive subtractions and the use of low-level baseline conditions have generally masked the functional nature of these regions. Using a combination of activation likelihood estimation, which assesses statistically significant convergence of neuroimaging results, and tools distributed with the BrainMap database, we identified core regions in the DMN and examined their functional heterogeneity. Meta-analytic coactivation maps of task-related increases were independently generated for each region, which included both within-DMN and non-DMN connections. Their functional properties were assessed using behavioral domain metadata in BrainMap. These results were integrated to determine a DMN connectivity model that represents the patterns of interactions observed in task-related increases in activity across diverse tasks. Sub-network components of this model were identified, and behavioral domain analysis of these cliques yielded discrete functional properties, demonstrating that components of the DMN are differentially specialized. Affective and perceptual cliques of the DMN were identified, as well as the cliques associated with a reduced preference for motor processing. In summary, we used advanced coordinate-based meta-analysis techniques to explicate behavior and connectivity in the default mode network; future work will involve applying this analysis strategy to other modes of brain function, such as executive function or sensorimotor systems. PMID:19923283

  10. Near-infrared spectroscopic measurements of blood analytes using multi-layer perceptron neural networks.

    PubMed

    Kalamatianos, Dimitrios; Liatsis, Panos; Wellstead, Peter E

    2006-01-01

    Near-infrared (NIR) spectroscopy is being applied to the solution of problems in many areas of biomedical and pharmaceutical research. In this paper we investigate the use of NIR spectroscopy as an analytical tool to quantify concentrations of urea, creatinine, glucose and oxyhemoglobin (HbO2). Measurements have been made in vitro with a portable spectrometer developed in our labs that consists of a two-beam interferometer operating in the range of 800-2300 nm. For the data analysis, a pattern recognition philosophy was used, with a preprocessing stage and a multi-layer perceptron (MLP) neural network for the measurement stage. Results show that the interferogram signatures of the above compounds are sufficiently strong in that spectral range. Measurements of three different concentrations were possible with a mean squared error (MSE) of the order of 10^-6. PMID:17947035

  11. Technical and analytical support to the ARPA Artificial Neural Network Technology Program

    SciTech Connect

    1995-09-16

    Strategic Analysis (SA) has provided ongoing work for the Advanced Research Projects Agency (ARPA) Artificial Neural Network (ANN) technology program. This effort provides technical and analytical support to the ARPA ANN technology program in the following areas of interest: (1) alternative approaches for the application of ANN technology, hardware approaches that utilize the inherent massive parallelism of ANN technology, and novel ANN theory and modeling analyses; (2) promising military applications for ANN technology; (3) measures to use in judging the success of ANN technology research and development; and (4) alternative strategies for ARPA involvement in ANN technology R&D. These objectives were accomplished through the development of novel information management tools, a strong SA knowledge base, and effective communication with contractors, agents, and other program participants. Through enhanced tracking and coordination of research, the ANN program is healthy and recharged for future technological breakthroughs.

  12. Analytical approach to the dynamics of facilitated spin models on random networks

    NASA Astrophysics Data System (ADS)

    Fennell, Peter G.; Gleeson, James P.; Cellai, Davide

    2014-09-01

    Facilitated spin models were introduced some decades ago to mimic systems characterized by a glass transition. Recent developments have shown that a class of facilitated spin models is also able to reproduce characteristic signatures of the structural relaxation properties of glass-forming liquids. While the equilibrium phase diagram of these models can be calculated analytically, the dynamics are usually investigated numerically. Here we propose a network-based approach, called the approximate master equation (AME), to the dynamics of the Fredrickson-Andersen model. The approach correctly predicts the critical temperature at which the glass transition occurs. We also find excellent agreement between the theory and the numerical simulations in the transient regime, except in close proximity to the liquid-glass transition. Finally, we analytically characterize the critical clusters of the model and show that the departures between our AME approach and Monte Carlo simulations can be related to the large interface between blocked and unblocked spins at temperatures close to the glass transition.
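    The Fredrickson-Andersen dynamics that the AME approximates can be simulated directly. Below is a minimal one-dimensional, one-spin-facilitated Monte Carlo sketch (ring topology, heat-bath flips); the temperature, system size and sweep counts are illustrative assumptions, not the paper's setup.

```python
import math
import random

def fa_sweep(spins, T, rng):
    """One Monte Carlo sweep of the one-spin-facilitated FA model on a ring.
    A spin may be updated only if at least one neighbour is excited (1);
    accepted updates are heat-bath draws at the equilibrium concentration c."""
    n = len(spins)
    c = 1.0 / (1.0 + math.exp(1.0 / T))  # equilibrium concentration of excited spins
    for _ in range(n):
        i = rng.randrange(n)
        if spins[(i - 1) % n] or spins[(i + 1) % n]:  # kinetic constraint
            spins[i] = 1 if rng.random() < c else 0

def mean_concentration(n=400, T=1.0, burn=200, sample=200, seed=7):
    """Average excitation concentration after equilibration."""
    rng = random.Random(seed)
    spins = [1] * n  # start fully excited so the constraint is initially satisfied everywhere
    for _ in range(burn):
        fa_sweep(spins, T, rng)
    total = 0
    for _ in range(sample):
        fa_sweep(spins, T, rng)
        total += sum(spins)
    return total / (sample * n)
```

    At T = 1 the measured concentration settles near the equilibrium value 1/(1+e) of roughly 0.27; lowering T makes excitations sparse and relaxation dramatically slower, which is the glassy regime the AME targets.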

  13. Downstream processing and chromatography based analytical methods for production of vaccines, gene therapy vectors, and bacteriophages

    PubMed Central

    Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš

    2015-01-01

    Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production. PMID:25751122

  14. An analytically resolved model of a potato's thermal processing using Heun functions

    NASA Astrophysics Data System (ADS)

    Vargas Toro, Agustín.

    2014-05-01

    A potato's thermal processing model is solved analytically. The model is formulated using the equation of heat diffusion for a spherical potato processed in a furnace, assuming that the potato's thermal conductivity is radially modulated. The model is solved using the Laplace transform method, applying the Bromwich integral and the residue theorem. The temperature profile in the potato is presented as an infinite series of Heun functions. All computations are performed with computer algebra software, specifically Maple. Using the numerical values of the thermal parameters of the potato and the geometric and thermal parameters of the processing furnace, the time evolution of the temperature in different regions inside the potato is presented analytically and graphically. The duration of thermal processing required to achieve a specified effect on the potato is computed. It is expected that the obtained analytical results will be important in food engineering and cooking engineering.
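    The governing setup can be made explicit. Assuming constant density ρ and specific heat c and a radially modulated conductivity k(r), the radially symmetric heat equation and its Laplace transform (which turns the PDE into the ODE whose solutions involve Heun functions for suitable choices of k(r)) read as follows; this is a sketch of the general form, not the paper's exact modulation:

```latex
\rho c\,\frac{\partial T}{\partial t}
  = \frac{1}{r^{2}}\,\frac{\partial}{\partial r}\!\left(r^{2}\,k(r)\,\frac{\partial T}{\partial r}\right),
\qquad
\rho c\left(s\,\bar{T}(r,s) - T(r,0)\right)
  = \frac{1}{r^{2}}\,\frac{d}{dr}\!\left(r^{2}\,k(r)\,\frac{d\bar{T}}{dr}\right).
```

    Inverting the transformed solution via the Bromwich integral, with the residues taken at its poles, produces the infinite-series temperature profile described in the abstract.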

  15. Possibilities of Utilizing the Method of Analytical Hierarchy Process Within the Strategy of Corporate Social Business

    NASA Astrophysics Data System (ADS)

    Drieniková, Katarína; Hrdinová, Gabriela; Naňo, Tomáš; Sakál, Peter

    2010-01-01

    The paper deals with the analysis of the theory of corporate social responsibility, risk management and the exact method of the analytic hierarchy process, which is used in decision-making processes. Chapters 2 and 3 focus on presenting experience with the application of the method in formulating the stakeholders' strategic goals within Corporate Social Responsibility (CSR), and simultaneously its utilization in minimizing environmental risks. The major benefit of this paper is the application of the Analytic Hierarchy Process (AHP).

  16. Analytical investigation of torque and flux ripple in induction motor control scheme using wavelet network

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Zhang, Hong; Qin, Aili

    2008-10-01

    By combining wavelet analysis and neural networks, a new approach to condition monitoring of rotating machine faults is presented. Wavelet analysis can accurately localize the features of a transient signal in the time-frequency domain. The wavelet transform is appropriate for processing fault signals consisting of short-lived, high-frequency components closely located in time as well as long-duration components closely spaced in frequency. In view of the interrelationships in wavelet decomposition theory, the crucial components are input as features into a radial basis function network for fault pattern recognition. To acquire the network parameters, the improved Levenberg-Marquardt optimization technique is used for the training process. By choosing enough samples to train the wavelet network, the fault pattern can be determined from the output results. The robustness of the wavelet network for fault diagnosis is also discussed. Application results show that the proposed method can improve the performance of real-time monitoring of vibration faults.

  17. A uniform method for analytically modeling multi-target acquisition with independent networked imaging sensors

    NASA Astrophysics Data System (ADS)

    Friedman, Melvin

    2014-05-01

    The problem solved in this paper is easily stated: for a scenario with 𝑛 networked and moving imaging sensors, 𝑚 moving targets and 𝑘 independent observers searching imagery produced by the 𝑛 moving sensors, analytically model system target acquisition probability for each target as a function of time. Information input into the model is the time dependence of 𝘗∞ and 𝜏, two parameters that describe observer-sensor-atmosphere-range-target properties of the target acquisition system for the case where neither the sensor nor target is moving. The parameter 𝘗∞ can be calculated by the NV-IPM model and 𝜏 is estimated empirically from 𝘗∞. In this model 𝑛, 𝑚 and 𝑘 are integers and 𝑘 can be less than, equal to or greater than 𝑛. Increasing 𝑛 and 𝑘 results in a substantial increase in target acquisition probabilities. Because the sensors are networked, a target is said to be detected the moment the first of the 𝑘 observers declares the target. The model applies to time-limited or time-unlimited search, and applies to any imaging sensors operating in any wavelength band provided each sensor can be described by 𝘗∞ and 𝜏 parameters.
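The first-declaration rule for networked observers lends itself to a short probability sketch. Assuming the classical exponential search form 𝘗(t) = 𝘗∞(1 − e^(−t/τ)) for a single stationary observer-sensor pair (the paper's actual time dependence of 𝘗∞ and τ may differ), the system acquisition probability is one minus the product of the individual miss probabilities, since the target is declared the moment any one of the k independent observers finds it:

```python
import math

def p_single(t, p_inf, tau):
    """Classical time-limited search form: P(t) = P_inf * (1 - exp(-t/tau))."""
    return p_inf * (1.0 - math.exp(-t / tau))

def p_system(t, observers):
    """Probability that at least one of k independent observers has
    declared the target by time t (first-declaration rule)."""
    miss = 1.0
    for p_inf, tau in observers:
        miss *= 1.0 - p_single(t, p_inf, tau)
    return 1.0 - miss

# Two observers with identical (hypothetical) sensor/target parameters.
obs = [(0.9, 10.0), (0.9, 10.0)]
```

With these illustrative numbers the system probability approaches 1 − (1 − 0.9)² = 0.99 for long search times, showing why increasing k substantially raises acquisition probability.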

  18. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503

  19. Competition and cooperation between active intra-network and passive extra-network transport processes

    PubMed Central

    Maruyama, Dan; Zochowski, Michal

    2014-01-01

    Many networks are embedded in physical space and often interact with it. This interaction can be exemplified through constraints exerted on network topology, or through interactions of processes defined on a network with those that are linked to the space that the network is embedded within, leading to complex dynamics. Here we discuss an example of such an interaction in which a signaling agent is actively transported through the network edges and, at the same time, spreads passively through space due to diffusion. We show that these two processes cooperate or compete depending on the network topology leading to complex dynamics. PMID:24920178

  20. Quality Measures in Pre-Analytical Phase of Tissue Processing: Understanding Its Value in Histopathology

    PubMed Central

    Masilamani, Suresh; Sundaram, Sandhya; Duvuru, Prathiba; Swaminathan, Rajendiran

    2016-01-01

    Introduction Quality monitoring in a histopathology unit is categorized into three phases, pre-analytical, analytical and post-analytical, to cover the various steps of the entire test cycle. A review of the literature on quality evaluation studies in histopathology revealed that earlier reports focused mainly on analytical aspects, with limited studies assessing the pre-analytical phase. The pre-analytical phase encompasses several processing steps and handling of the specimen/sample by multiple individuals, leaving ample scope for errors. Given its critical nature and the limited studies assessing it, it deserves more attention. Aim This study was undertaken to analyse and assess the quality parameters of the pre-analytical phase in a histopathology laboratory. Materials and Methods This was a retrospective study of pre-analytical parameters in the histopathology laboratory of a tertiary care centre, covering 18,626 tissue specimens received over 34 months. Registers and records were checked for efficiency and errors on the pre-analytical quality variables: specimen identification, specimens in appropriate fixatives, lost specimens, daily internal quality control performance on staining, performance in an inter-laboratory quality assessment program (External quality assurance program (EQAS)) and evaluation of internal non-conformities (NC) for other errors. Results The study revealed incorrect specimen labelling in 0.04%, 0.01% and 0.01% of specimens in 2007, 2008 and 2009 respectively. About 0.04%, 0.07% and 0.18% of specimens were not sent in fixatives in 2007, 2008 and 2009 respectively. There was no incidence of lost specimens. A total of 113 non-conformities were identified, of which 92.9% belonged to the pre-analytical phase. The predominant NC (any deviation from the normal standard which may generate an error and compromise quality standards) was wrong labelling of slides. 
Performance in EQAS for pre-analytical phase was

  1. Comparison of the Analytic Hierarchy Process and Incomplete Analytic Hierarchy Process for identifying customer preferences in the Texas retail energy provider market

    NASA Astrophysics Data System (ADS)

    Davis, Christopher

    The competitive market for retail energy providers in Texas has been in existence for 10 years. When the market opened in 2002, 5 energy providers existed, offering, on average, 20 residential product plans in total. As of January 2012, there are now 115 energy providers in Texas offering over 300 residential product plans for customers. With the increase in providers and product plans, customers can be bombarded with information and suffer from the "too much choice" effect. The goal of this praxis is to aid customers in the decision making process of identifying an energy provider and product plan. Using the Analytic Hierarchy Process (AHP), a hierarchical decomposition decision making tool, and the Incomplete Analytic Hierarchy Process (IAHP), a modified version of AHP, customers can prioritize criteria such as price, rate type, customer service, and green energy products to identify the provider and plan that best meets their needs. To gather customer data, a survey tool has been developed for customers to complete the pairwise comparison process. Results are compared for the Incomplete AHP and AHP methods to determine whether the Incomplete AHP method is just as accurate as, but more efficient than, the traditional AHP method.
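The pairwise-comparison step at the heart of AHP can be sketched in a few lines. The criteria and the Saaty-style 1-9 judgments below are hypothetical illustrations, not the survey data of the praxis; the priority weights are the normalized principal eigenvector of the comparison matrix, approximated here by power iteration:

```python
def ahp_priorities(matrix, iters=100):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix by power iteration; the normalized result gives the AHP
    priority weights."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical criteria: price, rate type, customer service, green energy.
# Entry [i][j] expresses how much more important criterion i is than j.
A = [
    [1.0,   3.0,   5.0, 7.0],
    [1 / 3, 1.0,   3.0, 5.0],
    [1 / 5, 1 / 3, 1.0, 3.0],
    [1 / 7, 1 / 5, 1 / 3, 1.0],
]
weights = ahp_priorities(A)
```

For these illustrative judgments, price receives the largest weight and green energy the smallest, mirroring how a customer's pairwise answers would be turned into a ranking of providers and plans.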

  2. Fast radiative transfer of dust reprocessing in semi-analytic models with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Silva, Laura; Fontanot, Fabio; Granato, Gian Luigi

    2012-06-01

    A serious concern for semi-analytical galaxy formation models, aiming to simulate multiwavelength surveys and to thoroughly explore the model parameter space, is the extremely time-consuming numerical solution of the radiative transfer of stellar radiation through dusty media. To overcome this problem, we have implemented an artificial neural network (ANN) algorithm in the radiative transfer code GRASIL, in order to significantly speed up the computation of the infrared (IR) spectral energy distribution (SED). The ANN we have implemented is of general use, in that its input neurons are defined as those quantities effectively determining the shape of the IR SED. Therefore, the training of the ANN can be performed with any model and then applied to other models. We made a blind test to check the algorithm, by applying a net trained with a standard chemical evolution model (i.e. CHE_EVO) to a mock catalogue extracted from the semi-analytic model MORGANA, and compared galaxy counts and evolution of the luminosity functions in several near-IR to sub-millimetre (sub-mm) bands, as well as the spectral differences for a large subset of randomly extracted models. The ANN approximates the full computation excellently, with a gain in CPU time of ~2 orders of magnitude. The only caveat is that the training should cover reasonably well the range of values of the input neurons encountered in the application: in the sub-mm at high redshift, a tiny fraction of models whose input neurons fall outside the range of the trained net cause wrong answers from the ANN. These are extreme starbursting models with high optical depths, favourably selected by sub-mm observations, and are difficult to predict a priori.

  3. Recovery of Magnesium from Seawaters and Development of Analytical Techniques for Eco-Friendly Materials Processing.

    NASA Astrophysics Data System (ADS)

    Yoon, H.; Yoon, C.; Chung, K.

    2008-12-01

    The depletion of resources such as fossil fuels, oils and mineral ores drives continued interest in developing fundamental techniques for recovering valuable metals from sources such as seawater. A process for the recovery of magnesium from brine and bittern is described, together with analytical techniques that achieve low detection limits with high reliability. The analytical techniques chosen to meet the most stringent needs of the field are ICP-OES and XRF, applied for commercial purposes to high-solids waters such as bittern. This study contains the results of an investigation of seawater reverse-osmosis processes with enhanced precipitation yields of products such as NaCl, Mg(OH)2, and Br2. The original bittern, supplied by Hanjoo Co. Ltd., was pretreated for microbial matter and treated with additional NaOH, NH4OH, or Na2CO3. Adding NaOH at pH 9.0 to 9.9 yields precipitation of Na2CO3.

  4. A semi-analytical model for the flow behavior of naturally fractured formations with multi-scale fracture networks

    NASA Astrophysics Data System (ADS)

    Jia, Pin; Cheng, Linsong; Huang, Shijun; Wu, Yonghui

    2016-06-01

    This paper presents a semi-analytical model for the flow behavior of naturally fractured formations with multi-scale fracture networks. The model dynamically couples an analytical dual-porosity model with a numerical discrete fracture model. The small-scale fractures with the matrix are idealized as a dual-porosity continuum and an analytical flow solution is derived based on source functions in the Laplace domain. The large-scale fractures are represented explicitly as the major fluid conduits and the flow is numerically modeled, also in the Laplace domain. This approach allows us to include finer details of the fracture network characteristics while keeping the computational work manageable. For example, the large-scale fracture network may have complex geometry and varying conductivity, and the computations can be done at predetermined, discrete times, without any grids in the dual-porosity continuum. The semi-analytical model is validated against the solution of the ECLIPSE reservoir simulator. The simulation is fast, gridless and enables rapid model setup. On the basis of the model, we provide a detailed analysis of the flow behavior of a horizontal production well in a fractured reservoir with multi-scale fracture networks. The study has shown that the system may exhibit six flow regimes: large-scale fracture network linear flow, bilinear flow, small-scale fracture network linear flow, pseudosteady-state flow, interporosity flow and pseudoradial flow. During the first four flow periods, the large-scale fracture network behaves as if it only drains in the small-scale fracture network; that is, the effect of the matrix is negligibly small. The characteristics of the bilinear flow and the small-scale fracture network linear flow are predominantly determined by the dimensionless large-scale fracture conductivity, and low dimensionless fracture conductivity will generate large pressure drops in the large-scale fractures surrounding the wellbore. 
With

  5. Optical processing for future computer networks

    NASA Technical Reports Server (NTRS)

    Husain, A.; Haugen, P. R.; Hutcheson, L. D.; Warrior, J.; Murray, N.; Beatty, M.

    1986-01-01

    In the development of future data management systems, such as the NASA Space Station, a major problem is the design and implementation of a high performance communication network which is self-correcting and repairing, flexible, and evolvable. To attain the goal of designing such a network, it will be essential to incorporate distributed adaptive network control techniques. The present paper provides an outline of the functional and communication network requirements for the Space Station data management system. Attention is given to the mathematical representation of the operations carried out to provide the required functionality at each layer of the communication protocol model. The possible implementation of specific communication functions in optics is also considered.

  6. Evaluating the Effectiveness of the Chemistry Education by Using the Analytic Hierarchy Process

    ERIC Educational Resources Information Center

    Yüksel, Mehmet

    2012-01-01

    In this study, an attempt was made to develop a method of measurement and evaluation aimed at overcoming the difficulties encountered in the determination of the effectiveness of chemistry education based on the goals of chemistry education. An Analytic Hierarchy Process (AHP), which is a multi-criteria decision technique, is used in the present…

  7. 75 FR 13766 - Food and Drug Administration and Process Analytical Technology for Pharma Manufacturing: Food and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-23

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration Food and Drug Administration and Process Analytical Technology for Pharma Manufacturing: Food and Drug Administration--Partnering With Industry; Public Conference AGENCY: Food and Drug Administration,...

  8. Network Traffic Analysis With Query Driven VisualizationSC 2005HPC Analytics Results

    SciTech Connect

    Stockinger, Kurt; Wu, Kesheng; Campbell, Scott; Lau, Stephen; Fisk, Mike; Gavrilov, Eugene; Kent, Alex; Davis, Christopher E.; Olinger,Rick; Young, Rob; Prewett, Jim; Weber, Paul; Caudell, Thomas P.; Bethel,E. Wes; Smith, Steve

    2005-09-01

    Our analytics challenge is to identify, characterize, and visualize anomalous subsets of large collections of network connection data. We use a combination of HPC resources, advanced algorithms, and visualization techniques. To effectively and efficiently identify the salient portions of the data, we rely on a multi-stage workflow that includes data acquisition, summarization (feature extraction), novelty detection, and classification. Once these subsets of interest have been identified and automatically characterized, we use a state-of-the-art-high-dimensional query system to extract data subsets for interactive visualization. Our approach is equally useful for other large-data analysis problems where it is more practical to identify interesting subsets of the data for visualization than to render all data elements. By reducing the size of the rendering workload, we enable highly interactive and useful visualizations. As a result of this work we were able to analyze six months worth of data interactively with response times two orders of magnitude shorter than with conventional methods.

  9. Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network

    NASA Astrophysics Data System (ADS)

    Fallahpour, R.; Chakouvari, S.; Askari, H.

    2015-03-01

    In this paper, the Laplace Adomian decomposition method (LADM) is used to evaluate a rumor-spreading model. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. Next, a rumor-spreading model incorporating a forgetting mechanism is formulated, and LADM is applied to solve it. The method yields a general solution that can be readily employed to assess the rumor model without any computer program. The results are discussed for different cases and parameters, and the method is shown to be straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for obtaining solutions of this model. It is concluded that the method is well suited to this problem and can provide researchers a very powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.
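LADM produces a symbolic series solution, but the closing comparison against numerical methods can be illustrated with a plain forward-Euler integration. The ignorant/spreader/stifler equations and the forgetting rate `delta` below are generic stand-ins for this class of model, not necessarily the exact equations solved in the paper:

```python
def rumor_euler(lam, sig, delta, i0, s0, dt=0.001, t_end=10.0):
    """Forward-Euler integration of a generic rumor model with
    ignorants i, spreaders s, stiflers r, spreading rate lam,
    stifling rate sig, and a forgetting term with rate delta."""
    i, s = i0, s0
    r = 1.0 - i0 - s0
    t = 0.0
    while t < t_end:
        di = -lam * i * s                          # ignorants hear the rumor
        ds = lam * i * s - sig * s * (s + r) - delta * s  # stifling + forgetting
        i += dt * di
        s += dt * ds
        r = 1.0 - i - s                            # population is conserved
        t += dt
    return i, s, r
```

Such a step-by-step integration is the kind of numerical baseline against which a truncated LADM series solution can be checked for accuracy.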

  10. Discontinuous phase transition in a core contact process on complex networks

    NASA Astrophysics Data System (ADS)

    Chae, Huiseung; Yook, Soon-Hyung; Kim, Yup

    2015-02-01

    To understand the effect of generalized infection processes, we suggest and study the core contact process (CCP) on complex networks. In CCP an uninfected node is infected when at least k different infected neighbors of the node select it for infection. The healing process is the same as that of the normal contact process (CP). It is shown analytically and numerically that discontinuous transitions occur in CCP on random networks and scale-free networks, depending on the infection rate and the initial density of infected nodes. The discontinuous transitions include hybrid transitions with β = 1/2 and β = 1. The asymptotic behavior of the phase boundary related to the initial density is found analytically and numerically. A mapping between CCP with threshold k and static (k+1)-core percolation is suggested by the (k+1)-core structure in the active phase and the hybrid transition with β = 1/2. These properties show that CCP is one of the dynamical processes generating k-core structure on real networks.
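The CCP infection rule can be sketched as a crude discrete-time Monte Carlo caricature (the model is defined in continuous time; the update probabilities below are an illustrative discretization, not the authors' scheme). Each infected node selects one random neighbor per step, an uninfected node turns infected once at least k distinct infected neighbors select it, and infected nodes heal with the complementary probability:

```python
import random
from collections import Counter

def ccp_step(adj, infected, lam, k, rng):
    """One synchronous step of a discrete-time core-contact-process sketch.
    adj: node -> list of neighbors; infected: set of infected nodes."""
    selections = Counter()
    for u in infected:
        # With probability lam/(lam+1) the node attempts an infection
        # by selecting a single random neighbor.
        if adj[u] and rng.random() < lam / (lam + 1.0):
            selections[rng.choice(adj[u])] += 1
    # A node is newly infected only if >= k distinct infected neighbors
    # selected it (each infected node contributes at most one selection).
    newly = {v for v, c in selections.items() if c >= k and v not in infected}
    # Healing with the complementary probability 1/(lam+1).
    survivors = {u for u in infected if rng.random() >= 1.0 / (lam + 1.0)}
    return survivors | newly

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # a toy triangle graph
```

Even this toy version exhibits the key feature of the rule: a lone infected node can never infect anyone when k ≥ 2, which is what makes the transition depend on the initial density of infected nodes.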

  11. Gaussian process regression for sensor networks under localization uncertainty

    USGS Publications Warehouse

    Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming

    2013-01-01

    In this paper, we formulate Gaussian process regression with observations under the localization uncertainty due to the resource-constrained sensor networks. In our formulation, effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are proposed to be approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. Such approximation techniques have been carefully tailored to our problems and their approximation error and complexity are analyzed. Simulation study demonstrates that the proposed approaches perform much better than approaches without considering the localization uncertainty properly. Finally, we have applied the proposed approaches on the experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof of concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
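The Monte Carlo approximation proposed by the authors can be illustrated with the smallest possible GP: a single observation whose sensor location is uncertain. The squared-exponential kernel, length scale and noise level below are generic illustrative choices, not the paper's settings:

```python
import math
import random

def gp_post_mean(xq, xt, y, ell=1.0, noise=0.1):
    """Posterior mean at query point xq of a zero-mean GP with a
    squared-exponential kernel, conditioned on one observation y at xt."""
    k = lambda a, b: math.exp(-0.5 * (a - b) ** 2 / ell ** 2)
    return k(xq, xt) / (k(xt, xt) + noise) * y

def mc_post_mean(xq, xt_mean, xt_std, y, n=5000, seed=0):
    """Monte Carlo average of the posterior mean over the uncertain
    sensor location xt ~ N(xt_mean, xt_std^2)."""
    rng = random.Random(seed)
    return sum(gp_post_mean(xq, rng.gauss(xt_mean, xt_std), y)
               for _ in range(n)) / n
```

With zero localization uncertainty the Monte Carlo estimate reduces to the plug-in posterior mean; with positive uncertainty the prediction at the nominal sensor location is pulled toward zero, which is the qualitative effect of incorporating localization error rather than ignoring it.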

  12. What makes us think? A three-stage dual-process model of analytic engagement.

    PubMed

    Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J

    2015-08-01

    The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. PMID:26091582

  13. Coupling centrality and authority of co-processing model on complex networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhanli; Li, Huibin

    2016-04-01

    Coupling centrality and authority of a co-processing model on complex networks are investigated in this paper. As a crucial factor determining the processing ability of nodes, information flow with potential time lag is modeled by co-processing diffusion, which couples continuous-time processing with discrete diffusion dynamics. Exact results on the master equation and the stationary state are obtained to disclose the formation of the process. Considering the influence of a node on the global dynamical behavior, coupling centrality and authority are introduced for each node; these determine the relative importance and authority of nodes in the diffusion process. Furthermore, experimental results on large-scale complex networks confirm our analytical prediction.

  14. Regulatory gene networks and the properties of the developmental process

    NASA Technical Reports Server (NTRS)

    Davidson, Eric H.; McClay, David R.; Hood, Leroy

    2003-01-01

    Genomic instructions for development are encoded in arrays of regulatory DNA. These specify large networks of interactions among genes producing transcription factors and signaling components. The architecture of such networks both explains and predicts developmental phenomenology. Although network analysis is yet in its early stages, some fundamental commonalities are already emerging. Two such are the use of multigenic feedback loops to ensure the progressivity of developmental regulatory states and the prevalence of repressive regulatory interactions in spatial control processes. Gene regulatory networks make it possible to explain the process of development in causal terms and eventually will enable the redesign of developmental regulatory circuitry to achieve different outcomes.

  15. Analogue implementation of analytic signal processing for pulse-echo systems

    NASA Technical Reports Server (NTRS)

    Gammell, P. M.

    1981-01-01

    An alternative to rectification is proposed for detection of an ultrasonic signal. This method is especially useful in medical and non-destructive evaluation (nde) applications. With this method, the magnitude of the complex analytic signal is used to define the envelope of the ultrasonic waveform. The square of this quantity has been shown elsewhere to be equal to the true rate-of-arrival of energy. An earlier study, using digital data processing, has already demonstrated the superior resolvability of closely spaced interfaces obtained with the analytic signal magnitude, as compared to conventional rectification. Here, an analogue implementation is presented which utilizes single-sideband techniques to obtain both quadrature components of the analytic signal and its magnitude. A conventional transducer, pulser, and receiver are used.

  16. Network cloning unfolds the effect of clustering on dynamical processes

    NASA Astrophysics Data System (ADS)

    Faqeeh, Ali; Melnik, Sergey; Gleeson, James P.

    2015-05-01

    We introduce network L -cloning, a technique for creating ensembles of random networks from any given real-world or artificial network. Each member of the ensemble is an L -cloned network constructed from L copies of the original network. The degree distribution of an L -cloned network and, more importantly, the degree-degree correlation between and beyond nearest neighbors are identical to those of the original network. The density of triangles in an L -cloned network, and hence its clustering coefficient, is reduced by a factor of L compared to those of the original network. Furthermore, the density of loops of any fixed length approaches zero for sufficiently large values of L . Other variants of L -cloning allow us to keep intact the short loops of certain lengths. As an application, we employ these network cloning methods to investigate the effect of short loops on dynamical processes running on networks and to inspect the accuracy of corresponding tree-based theories. We demonstrate that dynamics on L -cloned networks (with sufficiently large L ) are accurately described by the so-called adjacency tree-based theories, examples of which include the message passing technique, some pair approximation methods, and the belief propagation algorithm used respectively to study bond percolation, SI epidemics, and the Ising model.
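The L-cloning construction itself is short enough to sketch. The per-edge random-permutation wiring below (each original edge is replaced by L edges matching the copies of its endpoints under an independent random permutation) is one way to realize the construction described above; it preserves degrees exactly while diluting triangles:

```python
import random

def l_clone(edges, L, seed=0):
    """Build an L-cloned network from an edge list: take L copies of
    every node and, for each original edge (u, v), connect copy (u, i)
    to (v, perm[i]) using an independent random permutation of the L
    copy indices. Degrees are preserved; triangle density drops by ~L."""
    rng = random.Random(seed)
    cloned = []
    for u, v in edges:
        perm = list(range(L))
        rng.shuffle(perm)
        for i in range(L):
            cloned.append(((u, i), (v, perm[i])))
    return cloned

# A triangle on nodes 0, 1, 2 becomes a 9-node, 9-edge cloned network.
triangle = [(0, 1), (1, 2), (2, 0)]
big = l_clone(triangle, L=3)
```

Because every copy of a node inherits exactly one edge per original incident edge, the degree sequence (and degree-degree correlations) carry over, while a triangle survives the cloning only if the three permutations happen to align, which occurs with probability of order 1/L².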

  17. Reasoning about anomalies: a study of the analytical process of detecting and identifying anomalous behavior in maritime traffic data

    NASA Astrophysics Data System (ADS)

    Riveiro, Maria; Falkman, Göran; Ziemke, Tom; Kronhamn, Thomas

    2009-05-01

    The goal of visual analytical tools is to support the analytical reasoning process, maximizing human perceptual, understanding and reasoning capabilities in complex and dynamic situations. Visual analytics software must be built upon an understanding of the reasoning process, since it must provide appropriate interactions that allow a true discourse with the information. In order to deepen our understanding of the human analytical process and guide developers in the creation of more efficient anomaly detection systems, this paper investigates the human analytical process of detecting and identifying anomalous behavior in maritime traffic data. The main focus of this work is to capture the entire analysis process that an analyst goes through, from the raw data to the detection and identification of anomalous behavior. Three different sources are used in this study: a literature survey of the science of analytical reasoning, requirements specified by experts from organizations with an interest in port security, and user field studies conducted in different marine surveillance control centers. Furthermore, this study elaborates on how to support the human analytical process using data mining, visualization and interaction methods. The contribution of this paper is twofold: (1) within visual analytics, to contribute to the science of analytical reasoning with a practical understanding of users' tasks in order to develop a taxonomy of interactions that support the analytical reasoning process and (2) within anomaly detection, to facilitate the design of future anomaly detector systems when fully automatic approaches are not viable and human participation is needed.

  18. Curvature-processing network in macaque visual cortex

    PubMed Central

    Yue, Xiaomin; Pourladian, Irene S.; Tootell, Roger B. H.; Ungerleider, Leslie G.

    2014-01-01

    Our visual environment abounds with curved features. Thus, the goal of understanding visual processing should include the processing of curved features. Using functional magnetic resonance imaging in behaving monkeys, we demonstrated a network of cortical areas selective for the processing of curved features. This network includes three distinct hierarchically organized regions within the ventral visual pathway: a posterior curvature-biased patch (PCP) located in the near-foveal representation of dorsal V4, a middle curvature-biased patch (MCP) located on the ventral lip of the posterior superior temporal sulcus (STS) in area TEO, and an anterior curvature-biased patch (ACP) located just below the STS in anterior area TE. Our results further indicate that the processing of curvature becomes increasingly complex from PCP to ACP. The proximity of the curvature-processing network to the well-known face-processing network suggests a possible functional link between them. PMID:25092328

  19. Reliability theory for diffusion processes on interconnected networks

    NASA Astrophysics Data System (ADS)

    Khorramzadeh, Yasamin; Youssef, Mina; Eubank, Stephen

    2014-03-01

    We present the concept of network reliability as a framework to study diffusion dynamics in interdependent networks. We illustrate how different outcomes of diffusion processes, such as cascading failure, can be studied by estimating the reliability polynomial under different reliability rules. As an example, we investigate the effect of structural properties on diffusion dynamics for a few different topologies of two coupled networks. We evaluate the effect of varying the probability of failure propagating along the edges, both within a single network as well as between the networks. We exhibit the sensitivity of interdependent network reliability and connectivity to edge failures in each topology.
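A reliability rule of the kind evaluated above can be estimated by straightforward Monte Carlo sampling of edge failures. The two-terminal rule below (can s still reach t when each edge operates independently with probability p?) is one common choice, used here purely as an illustration of how a point of the reliability polynomial is estimated:

```python
import random

def two_terminal_reliability(nodes, edges, s, t, p, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that s can still reach t
    when each edge operates independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [e for e in edges if rng.random() < p]  # surviving edges
        # Depth-first search from s over the operating edges.
        adj = {v: [] for v in nodes}
        for u, v in up:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / trials
```

For a two-edge path s - 1 - t the exact two-terminal reliability is p², so the estimator can be sanity-checked against the polynomial directly; for coupled networks the same sampling applies with intra- and inter-network edges given different operating probabilities.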

  20. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer to determine the number of nodes on the hidden layer of the smaller neural networks, to choose the initial training weights, and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  1. Detecting link failures in complex network processes using remote monitoring

    NASA Astrophysics Data System (ADS)

    Dhal, R.; Abad Torres, J.; Roy, S.

    2015-11-01

    We study whether local structural changes in a complex network can be distinguished from passive remote time-course measurements of the network's dynamics. Specifically, we consider the detection of link failures in a network synchronization process from noisy measurements at a single network component. By phrasing the detection task as a Maximum A Posteriori Probability hypothesis testing problem, we obtain conditions, in terms of the network spectrum and graph, under which detection is (1) improved over the a priori decision and (2) asymptotically perfect. We find that, in the case where the detector has knowledge of the network's state, perfect detection is possible under general connectivity conditions regardless of the measurement location. When the detector does not have state knowledge, a remote signature permits improved but not perfect detection, under the same connectivity conditions. In essence, detectability is achieved because of the close connection between a network's topology, its eigenvalues, and its local response characteristics.
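The MAP hypothesis-testing step can be illustrated generically. The sketch below uses a hypothetical scalar measurement model (made-up means, variance and prior, not the paper's network dynamics): it decides between a "nominal" and a "failed-link" distribution by comparing posterior scores.

```python
import math
import random

# Minimal MAP binary hypothesis test: H0 (nominal) vs H1 (link failed).
# All parameters below are illustrative assumptions.
mu0, mu1, sigma = 0.0, 1.0, 0.7     # hypothetical remote-signature means
prior1 = 0.3                        # assumed prior probability of a failure

def map_decide(ys):
    """Return 1 if the posterior favors H1 given i.i.d. samples ys."""
    def loglik(mu):
        return sum(-0.5 * ((y - mu) / sigma) ** 2 for y in ys)
    score1 = math.log(prior1) + loglik(mu1)
    score0 = math.log(1 - prior1) + loglik(mu0)
    return int(score1 > score0)

rng = random.Random(0)
samples_h1 = [rng.gauss(mu1, sigma) for _ in range(50)]
print("decision on H1 data:", map_decide(samples_h1))
```

As the sample size grows the error probability vanishes, which is the scalar analogue of the "asymptotically perfect" regime discussed above.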

  2. Contagion processes on the static and activity-driven coupling networks

    NASA Astrophysics Data System (ADS)

    Lei, Yanjun; Jiang, Xin; Guo, Quantong; Ma, Yifang; Li, Meng; Zheng, Zhiming

    2016-03-01

    The evolution of network structure and the spreading of epidemics are common coexistent dynamical processes. In most cases, network structure is treated as either static or time-varying, assuming the whole network is observed in the same time window. In this paper, we consider an epidemic spreading on a network that has both static and time-varying structures, where the time-varying part and the epidemic spreading are assumed to evolve on the same time scale. We introduce a static and activity-driven coupling (SADC) network model to characterize the coupling between the static ("strong") structure and the dynamic ("weak") structure. Epidemic thresholds of the SIS and SIR models are studied in the SADC model both analytically and numerically under various coupling strategies, where the strong structure has a homogeneous or heterogeneous degree distribution. Theoretical thresholds obtained from the SADC model both recover and generalize the classical results for static and time-varying networks. It is demonstrated that a weak structure can lower the epidemic threshold in homogeneous networks but raise it in heterogeneous cases. Furthermore, we show that the weak structure has a substantive effect on the outbreak of epidemics. This result may be useful in designing efficient control strategies for epidemics spreading in networks.
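For context, the classical static-network result that such thresholds recover can be computed directly: in heterogeneous mean-field theory the SIS threshold is beta_c/gamma = ⟨k⟩/⟨k²⟩, which is small for heavy-tailed degree distributions. A small sketch with synthetic degree sequences and assumed parameters:

```python
import numpy as np

def hmf_sis_threshold(degrees):
    """Heterogeneous mean-field SIS threshold: beta_c/gamma = <k> / <k^2>."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

rng = np.random.default_rng(1)

# Homogeneous case: Poisson degrees with mean 6.
k_hom = rng.poisson(6, size=100_000)

# Heterogeneous case: truncated power law p(k) ~ k^-2.5 on [3, 1000],
# sampled by inverse-transform.
u = rng.random(100_000)
kmin, kmax, g = 3.0, 1000.0, 2.5
k_het = (kmin ** (1 - g) + u * (kmax ** (1 - g) - kmin ** (1 - g))) ** (1 / (1 - g))

print("Poisson threshold:   ", hmf_sis_threshold(k_hom))
print("power-law threshold: ", hmf_sis_threshold(k_het))
```

The power-law threshold comes out roughly an order of magnitude below the Poisson one, the usual signature of heterogeneous networks.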

  3. A Sensemaking Approach to Visual Analytics of Attribute-Rich Social Networks

    ERIC Educational Resources Information Center

    Gou, Liang

    2012-01-01

    Social networks have become more complex, in particular because elements in social networks are not only abstract topological nodes and links, but also carry rich social attributes and reflect diverse social relationships. For example, in a co-authorship social network in a scientific community, nodes in the social network, which…

  4. Advanced information processing system: Input/output network management software

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

    The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services for the Advanced Information Processing System. This introduction and overview section briefly outlines the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is followed by a more detailed description of the network architecture.

  5. Identifying and tracking dynamic processes in social networks

    NASA Astrophysics Data System (ADS)

    Chung, Wayne; Savell, Robert; Schütt, Jan-Peter; Cybenko, George

    2006-05-01

    The detection and tracking of embedded malicious subnets in an active social network can be computationally daunting due to the quantity of transactional data generated in the natural interaction of large numbers of actors comprising a network. In addition, detection of illicit behavior may be further complicated by evasive strategies designed to camouflage the activities of the covert subnet. In this work, we move beyond traditional static methods of social network analysis to develop a set of dynamic process models which encode various modes of behavior in active social networks. These models will serve as the basis for a new application of the Process Query System (PQS) to the identification and tracking of covert dynamic processes in social networks. We present a preliminary result from application of our technique to a real-world data stream: the Enron email corpus.

  6. Can neural networks compete with process calculations

    SciTech Connect

    Blaesi, J.; Jensen, B.

    1992-12-01

    Neural networks have been called a real alternative to rigorous theoretical models. A theoretical model for the calculation of refinery coker naphtha end point and coker furnace oil 90% point was already in place on the combination tower of a coking unit, and considerable data had been collected on it during the commissioning phase and benefit analysis of the project. A neural net developed for the coker fractionator has equaled the accuracy of the theoretical models and shown the capability to handle normal operating conditions. One disadvantage of a neural network is the amount of data needed to create a good model: anywhere from 100 to thousands of cases. Overall, the correlation between the theoretical and neural net models for both the coker naphtha end point and the coker furnace oil 90% point was about 0.80, and the average deviation was about 4 degrees. This indicates that the neural net model was at least as capable as the theoretical model in calculating inferred properties. 3 figs.

  7. Load Shedding Scheme in Large Pulp Mill by Using Analytic Hierarchy Process

    NASA Astrophysics Data System (ADS)

    Goh, H. H.; Kok, B. C.; Lee, S. W.; Zin, A. A. Mohd.

    2011-06-01

    A pulp mill is one of the heavy industries that consume a large amount of electricity in production. In particular, the breakdown of one generator would cause the other generators to be overloaded; a load shedding scheme is the best way to handle such a condition. Selected loads are shed under this scheme in order to protect the generators from being damaged, and subsequent loads are shed until the remaining generators are sufficient to supply the other loads. To determine the sequence of the load shedding scheme, the analytic hierarchy process (AHP), one of the multi-criteria decision making methods, is introduced. By using this method, the priority of each load can be determined. This paper presents the theory of the alternative methods to choose the load priority in a load shedding scheme for a large pulp mill.
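The AHP step itself is easy to sketch. Given a pairwise comparison matrix on Saaty's 1-9 scale (the matrix below is hypothetical, not from the paper), the load priorities are the normalized principal eigenvector, together with Saaty's consistency check:

```python
import numpy as np

# Hypothetical pairwise comparison of three loads:
# A[i, j] = how much more important load i is than load j (Saaty 1-9 scale).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

def ahp_priorities(A):
    """Priority vector = normalized principal eigenvector of the matrix."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def consistency_ratio(A):
    """Saaty's CR; comparisons are usually accepted when CR < 0.10."""
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random consistency index
    return ci / ri

w = ahp_priorities(A)
print("load priorities:", np.round(w, 3))
print("CR =", round(consistency_ratio(A), 3))
```

The priority vector then fixes the shedding order: the lowest-weight load is shed first.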

  8. Prioritizing factors influencing nurses' satisfaction with hospital information systems: a fuzzy analytic hierarchy process approach.

    PubMed

    Kimiafar, Khalil; Sadoughi, Farahnaz; Sheikhtaheri, Abbas; Sarbaz, Masoumeh

    2014-04-01

    Our aim was to use the fuzzy analytic hierarchy process approach to prioritize the factors that influence nurses' satisfaction with a hospital information system. First, we reviewed the related literature to identify and select possible factors. Second, we developed an analytic hierarchy process framework with three main factors (quality of services, of systems, and of information) and 22 subfactors. Third, we developed a questionnaire based on pairwise comparisons and invited 10 experienced nurses who were identified through snowball sampling to rate these factors. Finally, we used Chang's fuzzy extent analysis method to compute the weights of these factors and prioritize them. We found that information quality was the most important factor (58%), followed by service quality (22%) and then system quality (19%). In conclusion, although their weights were not similar, all factors were important and should be considered in evaluating nurses' satisfaction. PMID:24469556

  9. Large-scale analytical Fourier transform of photomask layouts using graphics processing units

    NASA Astrophysics Data System (ADS)

    Sakamoto, Julia A.

    2015-10-01

    Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factors of improvement of the analytical over the numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU were 1.6×10^4, 4.9×10^3, and 3.8×10^3, respectively. Various ideas for algorithm enhancements will be discussed.
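The core trick, closed-form FTs for decomposable shapes, can be checked on the simplest case. For a centered w-by-h rectangle the FT is a product of sinc functions; the sketch below (grid sizes chosen arbitrarily, CPU only, no GPU) compares the analytical expression against a pixelated FFT:

```python
import numpy as np

# Continuous FT of a centered w-by-h rectangle with unit transmission:
#   F(fx, fy) = w*h * sinc(w*fx) * sinc(h*fy),
# where numpy's sinc(x) = sin(pi*x) / (pi*x).
def rect_ft(fx, fy, w, h):
    return w * h * np.sinc(w * fx) * np.sinc(h * fy)

# Numerical check: sample the rectangle on a grid and compare with the FFT.
N, dx = 512, 0.01                  # 512 x 512 grid, 0.01-um pixels (arbitrary)
x = (np.arange(N) - N // 2) * dx
X_, Y_ = np.meshgrid(x, x, indexing="ij")
w, h = 1.0, 0.6                    # hypothetical feature size in um
mask = ((np.abs(X_) <= w / 2) & (np.abs(Y_) <= h / 2)).astype(float)

# Scale the FFT so it approximates the continuous integral.
F_num = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(mask))) * dx * dx
f = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
FX, FY = np.meshgrid(f, f, indexing="ij")
F_ana = rect_ft(FX, FY, w, h)

err = np.max(np.abs(F_num - F_ana)) / (w * h)
print(f"max relative error: {err:.3e}")   # pixelation-limited, a few percent
```

The residual error here comes entirely from pixelating the rectangle edges, which is exactly the discretization burden the analytical approach removes.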

  10. Analytical modeling and sensor monitoring for optimal processing of polymeric composite material systems

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Weideman, Mark H.; Kranbuehl, David E.; Long, Edward R., Jr.

    1991-01-01

    Process simulation models and cure monitoring sensors are discussed for use in optimal processing of fiber-reinforced composites. Analytical models relate the specified temperature and pressure cure cycle to the thermal, chemical, and physical processes occurring in the composite during consolidation and cure. Frequency-dependent electromagnetic sensing (FDEMS) is described as an in situ sensor for monitoring the composite curing process and for verification of process simulation models. A model for resin transfer molding of textile composites is used to illustrate the predictive capabilities of a process simulation model. The model is used to calculate the resin infiltration time, fiber volume fraction, resin viscosity, and resin degree of cure. Results of the model are compared with in situ FDEMS measurements.

  11. Hardware and networks for Gaia data processing

    NASA Astrophysics Data System (ADS)

    O'Mullane, W.; Beck, M.; de Angeli, F.; Hoar, J.; Martino, M.; Passot, X.; Portell, J.

    2011-02-01

    A considerable amount of computing power is needed for Gaia data processing during the mission. A pan-European system of six data centres is working together to perform different parts of the processing and combine the results. Estimates suggest around 10^20 FLOP of total processing is required. Data will be transferred daily around Europe, with a final raw data volume approaching 100 TB. With these needs in mind, the centres are already gearing up for Gaia. We present the status and plans of the Gaia Data Processing Centres.

  12. Optical Multiple Access Network (OMAN) for advanced processing satellite applications

    NASA Technical Reports Server (NTRS)

    Mendez, Antonio J.; Gagliardi, Robert M.; Park, Eugene; Ivancic, William D.; Sherman, Bradley D.

    1991-01-01

    An OMAN breadboard for exploring advanced processing satellite circuit switch applications is introduced. Network architecture, hardware trade-offs, and multiple-user interference issues are presented. The breadboard test setup and experimental results are discussed.

  13. Relative frequencies of constrained events in stochastic processes: An analytical approach

    NASA Astrophysics Data System (ADS)

    Rusconi, S.; Akhmatskaya, E.; Sokolovski, D.; Ballard, N.; de la Cal, J. C.

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
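The idea of replacing an MC estimate of event frequencies with an analytical expression can be seen in the smallest possible case: two competing exponential event channels, where the relative frequency of channel 1 is exactly r1/(r1+r2). A sketch with illustrative rates (not the paper's CRP model):

```python
import random

# Two competing event channels with exponential waiting times (rates r1, r2).
# Analytically, the relative frequency of channel 1 is r1 / (r1 + r2);
# the SSA/Monte Carlo estimate below must converge to it.
r1, r2 = 2.0, 3.0
analytic = r1 / (r1 + r2)

def ssa_relative_frequency(n_events, seed=42):
    rng = random.Random(seed)
    count1 = 0
    for _ in range(n_events):
        t1 = rng.expovariate(r1)       # candidate firing times
        t2 = rng.expovariate(r2)
        if t1 < t2:                    # first-reaction-method step
            count1 += 1
    return count1 / n_events

est = ssa_relative_frequency(100_000)
print(f"analytic = {analytic:.4f}, SSA estimate = {est:.4f}")
```

Here the closed form is exact and free, while the MC estimate costs 10^5 samples for two to three significant digits, which is the trade-off the abstract quantifies.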

  14. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications. PMID:26565363

  15. Effects of pre-analytical processes on blood samples used in metabolomics studies.

    PubMed

    Yin, Peiyuan; Lehmann, Rainer; Xu, Guowang

    2015-07-01

    Every day, analytical and bio-analytical chemists make sustained efforts to improve the sensitivity, specificity, robustness, and reproducibility of their methods. Especially in targeted and non-targeted profiling approaches, including metabolomics analysis, these objectives are not easy to achieve; however, robust and reproducible measurements and low coefficients of variation (CV) are crucial for successful metabolomics approaches. Nevertheless, all efforts from the analysts are in vain if the sample quality is poor, i.e. if preanalytical errors are made by the partner during sample collection. Preanalytical risks and errors are more common than expected, even when standard operating procedures (SOP) are used. This risk is particularly high in clinical studies, and poor sample quality may heavily bias the CV of the final analytical results, leading to disappointing outcomes of the study and consequently, although unjustified, to critical questions about the analytical performance of the approach from the partner who provided the samples. This review focuses on the preanalytical phase of liquid chromatography-mass spectrometry-driven metabolomics analysis of body fluids. Several important preanalytical factors that may seriously affect the profile of the investigated metabolome in body fluids, including factors before sample collection, blood drawing, subsequent handling of the whole blood (transportation), processing of plasma and serum, and inadequate conditions for sample storage, will be discussed. In addition, a detailed description of latent effects on the stability of the blood metabolome and a suggestion for a practical procedure to circumvent risks in the preanalytical phase will be given. PMID:25736245

  16. Bipartite memory network architectures for parallel processing

    SciTech Connect

    Smith, W.; Kale, L.V. (Dept. of Computer Science)

    1990-01-01

    Parallel architectures are broadly classified as either shared memory or distributed memory architectures. In this paper, the authors propose a third family of architectures, called bipartite memory network architectures. In this architecture, processors and memory modules constitute a bipartite graph, where each processor is allowed to access a small subset of the memory modules, and each memory module allows access from a small set of processors. The architecture is particularly suitable for computations requiring dynamic load balancing. The authors explore the properties of this architecture by examining a perfect difference set based topology for the graph. Extensions of this topology are also suggested.
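A perfect-difference-set topology is easy to construct for a small example. With the set {0, 1, 3} mod 7 (a standard perfect difference set, used here purely for illustration), any two processors share exactly one memory module:

```python
# Bipartite processor-memory topology from the perfect difference set {0, 1, 3} mod 7:
# processor i is wired to memory modules (i + d) mod 7 for d in the set.
# Because every nonzero residue occurs exactly once as a difference of set
# elements, any two processors share exactly one memory module.
PDS, N = (0, 1, 3), 7

modules_of = {i: {(i + d) % N for d in PDS} for i in range(N)}

for i in range(N):
    print(f"processor {i} -> memory modules {sorted(modules_of[i])}")

# Check the defining property: every pair of processors overlaps in one module.
overlaps = {len(modules_of[i] & modules_of[j])
            for i in range(N) for j in range(N) if i != j}
print("pairwise shared modules:", overlaps)   # {1}
```

The single shared module gives any two processors a one-hop meeting point while keeping each processor's fan-out (and each module's fan-in) small, which is what makes the topology attractive for dynamic load balancing.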

  17. Large-Scale Neural Network for Sentence Processing

    ERIC Educational Resources Information Center

    Cooke, Ayanna; Grossman, Murray; DeVita, Christian; Gonzalez-Atavales, Julio; Moore, Peachie; Chen, Willis; Gee, James; Detre, John

    2006-01-01

    Our model of sentence comprehension includes at least grammatical processes important for structure-building, and executive resources such as working memory that support these grammatical processes. We hypothesized that a core network of brain regions supports grammatical processes, and that additional brain regions are activated depending on the…

  18. On-board processing satellite network architectures for broadband ISDN

    NASA Technical Reports Server (NTRS)

    Inukai, Thomas; Faris, Faris; Shyy, Dong-Jye

    1992-01-01

    Onboard baseband processing architectures for future satellite broadband integrated services digital networks (B-ISDN's) are addressed. To assess the feasibility of implementing satellite B-ISDN services, critical design issues, such as B-ISDN traffic characteristics, transmission link design, and a trade-off between onboard circuit and fast packet switching, are analyzed. Examples of the two types of switching mechanisms and potential onboard network control functions are presented. A sample network architecture is also included to illustrate a potential onboard processing system.

  19. Toward an Analytic Framework of Interdisciplinary Reasoning and Communication (IRC) Processes in Science

    NASA Astrophysics Data System (ADS)

    Shen, Ji; Sung, Shannon; Zhang, Dongmei

    2015-11-01

    Students need to think and work across disciplinary boundaries in the twenty-first century. However, it is unclear what interdisciplinary thinking means and how to analyze interdisciplinary interactions in teamwork. In this paper, drawing on multiple theoretical perspectives and empirical analysis of discourse contents, we formulate a theoretical framework that helps analyze interdisciplinary reasoning and communication (IRC) processes in interdisciplinary collaboration. Specifically, we propose four interrelated IRC processes (integration, translation, transfer, and transformation) and develop a corresponding analytic framework. We apply the framework to analyze two meetings of a project that aims to develop interdisciplinary science assessment items. The results illustrate that the framework can help interpret the interdisciplinary meeting dynamics and patterns. Our coding process and results also suggest that these IRC processes can be further examined in terms of interconnected sub-processes. We also discuss the implications of using the framework in conceptualizing, practicing, and researching interdisciplinary learning and teaching in science education.

  20. Solving a layout design problem by analytic hierarchy process (AHP) and data envelopment analysis (DEA) approach

    NASA Astrophysics Data System (ADS)

    Tuzkaya, Umut R.; Eser, Arzum; Argon, Goner

    2004-02-01

    Today, growing amounts of waste due to the fast consumption rate of products have started an irreversible process of environmental pollution and damage. A considerable part of this waste is packaging material, and various waste policies have taken important steps in response. Here we consider a firm for which waste aluminum constitutes the majority of raw materials. In order to achieve a profitable recycling process, the plant layout should be well designed. In this study, we propose a two-step approach involving the Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) to solve facility layout design problems. A case example is considered to demonstrate the results achieved.

  1. An analytical hierarchy process for decision making of high-level-waste management

    SciTech Connect

    Wang, J.H.C.; Jang, W.

    1995-12-01

    To demonstrate the value of nuclear technology in the post-Cold-War world, demonstration of safe rad-waste disposal is essential, and high-level waste (HLW) certainly is the key issue to be resolved. To assist a rational and persuasive process on the various disposal options, an analytical hierarchy process (AHP) for the decision making of HLW management is presented. The basic theory and rationale are discussed, and applications are shown to illustrate the usefulness of the AHP. The authors hope that the AHP can provide a better direction for the current difficult situation of the Taiwan nuclear industry and serve as a basis for exchanging experiences on HLW management with other countries.

  2. Electro-spun organic nanofibers elaboration process investigations using comparative analytical solutions.

    PubMed

    Colantoni, A; Boubaker, K

    2014-01-30

    In this paper, the Enhanced Variational Iteration Method (EVIM) is proposed, along with the BPES, for solving the Bratu equation, which appears in the framework of the electrospun nanofiber fabrication process. Electrospun organic nanofibers, with diameters less than 1/4 micron, have been used in the non-wovens and filtration industries for a broad range of filtration applications in the last decade. The electrospinning process has been associated with the Bratu equation through thermo-electro-hydrodynamic balance equations. Analytical solutions have been proposed, discussed and compared. PMID:24299778
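For reference, the one-dimensional Bratu boundary-value problem u'' + λ e^u = 0, u(0) = u(1) = 0 can also be solved numerically by shooting, which gives a baseline against which analytical schemes such as EVIM can be compared. The sketch below uses λ = 1 (below the fold point λ ≈ 3.51) and is not the paper's method:

```python
import math

LAM = 1.0      # Bratu parameter; classical solutions exist for LAM < ~3.51

def shoot(s, n=1000):
    """Integrate u'' = -LAM * exp(u), u(0)=0, u'(0)=s with RK4; return u(1)."""
    h = 1.0 / n
    u, v = 0.0, s
    f = lambda u, v: (v, -LAM * math.exp(u))
    for _ in range(n):
        k1 = f(u, v)
        k2 = f(u + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(u + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(u + h * k3[0], v + h * k3[1])
        u += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return u

# Bisection on the initial slope s so that u(1) = 0 (lower solution branch):
# u(1) < 0 at s = 0 and u(1) > 0 at s = 1 since u is concave (u'' < 0).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 0:
        lo = mid
    else:
        hi = mid
s = 0.5 * (lo + hi)
print(f"u'(0) = {s:.6f}, boundary residual u(1) = {shoot(s):.2e}")
```

For λ = 1 the recovered initial slope is close to the known value u'(0) ≈ 0.549 of the lower solution branch.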

  3. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  4. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  5. Understanding wax screen-printing: a novel patterning process for microfluidic cloth-based analytical devices.

    PubMed

    Liu, Min; Zhang, Chunsun; Liu, Feifei

    2015-09-01

    In this work, we first introduce the fabrication of microfluidic cloth-based analytical devices (μCADs) using a wax screen-printing approach that is suitable for simple, inexpensive, rapid, low-energy-consumption and high-throughput preparation of cloth-based analytical devices. We have carried out a detailed study of the wax screen-printing of μCADs and have obtained some interesting results. Firstly, an analytical model is established for the spreading of molten wax in cloth. Secondly, a new wax screen-printing process has been proposed for fabricating μCADs, in which the melting of wax into the cloth is much faster (∼5 s) and the heating temperature is much lower (75 °C). Thirdly, the experimental results show that the patterning effect of the proposed wax screen-printing method depends to a certain extent on the type of screen, the wax melting temperature and the melting time. Under optimized conditions, the minimum printing widths of the hydrophobic wax barrier and the hydrophilic channel are 100 μm and 1.9 mm, respectively. Importantly, the developed analytical model is also well validated by these experiments. Fourthly, the μCADs fabricated by the presented wax screen-printing method are used to perform a proof-of-concept assay of glucose or protein in artificial urine, with rapid high-throughput detection taking place on a 48-chamber cloth-based device and read out visually. Overall, the developed cloth-based wax screen-printing and arrayed μCADs should provide a new research direction in the development of advanced sensor arrays for the detection of a series of analytes relevant to many diverse applications. PMID:26388382

  6. Basic emotion processing and the adolescent brain: Task demands, analytic approaches, and trajectories of changes.

    PubMed

    Del Piero, Larissa B; Saxbe, Darby E; Margolin, Gayla

    2016-06-01

    Early neuroimaging studies suggested that adolescents show initial development in brain regions linked with emotional reactivity, but slower development in brain structures linked with emotion regulation. However, the increased sophistication of adolescent brain research has made this picture more complex. This review examines functional neuroimaging studies that test for differences in basic emotion processing (reactivity and regulation) between adolescents and either children or adults. We delineated the different emotional processing demands across the experimental paradigms in the reviewed studies to synthesize the diverse results. The methods for assessing change (i.e., analytical approach) and cohort characteristics (e.g., age range) were also explored as potential factors influencing study results. Few unifying dimensions were found to successfully distill the results of the reviewed studies. However, this review highlights the potential impact of subtle methodological and analytic differences between studies, the need for standardized and theory-driven experimental paradigms, and the necessity of analytic approaches that can adequately test the trajectories of developmental change that have recently been proposed. Recommendations for future research highlight connectivity analyses and non-linear developmental trajectories, which appear to be promising approaches for measuring change across adolescence. Recommendations are made for evaluating gender and biological markers of development beyond chronological age. PMID:27038840

  7. Global tree network for computing structures enabling global processing operations

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
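The collective operations the patent describes (reduction up the tree, broadcast down) can be mimicked in software. Below is a toy sketch on an array-encoded binary tree, purely to show the data flow, not the router hardware:

```python
# Toy simulation of the reduce/broadcast pattern on a binary tree
# (illustrative only; the patent describes dedicated router hardware).
def reduce_to_root(values, op):
    """Combine leaf-to-root: node i has children 2i+1, 2i+2 (array binary tree)."""
    n = len(values)
    combined = list(values)
    for i in reversed(range(n)):            # children are combined before parents
        for c in (2 * i + 1, 2 * i + 2):
            if c < n:
                combined[i] = op(combined[i], combined[c])
    return combined[0]                      # root holds the global result

def broadcast_from_root(result, n):
    """Root-to-leaves: every node receives the reduced value."""
    return [result] * n

values = [3, 1, 4, 1, 5, 9, 2]              # one contribution per processing node
total = reduce_to_root(values, lambda a, b: a + b)
print("global sum at root:", total)         # 25
print("after broadcast:", broadcast_from_root(total, len(values)))
```

Swapping the operator (sum, max, bitwise AND for barriers) reproduces the different collective operations listed in the abstract.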

  8. Machine learning and predictive data analytics enabling metrology and process control in IC fabrication

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.

    2015-03-01

    Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and they challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict dimensions of EUV resist patterns down to 18 nm half pitch using resist shrinkage patterns; these patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As a wafer goes through various processes its associated cost multiplies, and it may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can therefore be very valuable in enabling timely, actionable decisions such as rework, scrap, or feeding predicted information (or information derived from prediction) forward or backward to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
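    As a minimal caricature of the predictive approach, ordinary least squares can calibrate an indirect (proxy) measurement against a reference dimension. The calibration pairs and the linear model below are hypothetical stand-ins for the machine-learning models used in the paper.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one predictor).
    Here a hypothetical shrinkage-related proxy measurement x is
    calibrated against a reference critical dimension y (in nm)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented calibration pairs: (proxy measurement, reference CD in nm).
xs = [10.0, 12.0, 14.0, 16.0]
ys = [18.0, 20.0, 22.0, 24.0]
a, b = fit_line(xs, ys)
predicted = a * 13.0 + b   # predicted CD for a new proxy reading
```

Real predictive metrology would use richer, nonlinear models and many predictors; the point here is only the workflow of fitting on measurable pairs and predicting where direct measurement fails.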

  9. Laser processes and analytics for high power 3D battery materials

    NASA Astrophysics Data System (ADS)

    Pfleging, W.; Zheng, Y.; Mangang, M.; Bruns, M.; Smyrek, P.

    2016-03-01

    Laser processes for cutting, modification and structuring of energy storage materials such as electrodes, separator materials and current collectors have great potential to minimize fabrication costs and to increase the performance and operational lifetime of high-power lithium-ion batteries for stand-alone electric energy storage devices and electric vehicles. Laser direct patterning of battery materials enables a rather new technical approach for adjusting the 3D surface architecture and porosity of composite electrode materials such as LiCoO2, LiMn2O4, LiFePO4, Li(NiMnCo)O2, and silicon. The architecture design, the active surface area, and the porosity of electrodes or separator layers can be controlled by laser processes, and these have been shown to have a large impact on electrolyte wetting, lithium-ion diffusion kinetics, cell lifetime and cycling stability. In general, ultrafast laser processing can be used for precise surface texturing of battery materials; nevertheless, for cost-efficient production, nanosecond laser material processing can also be applied successfully to selected types of energy storage materials. A new concept for advanced battery manufacturing that includes laser materials processing is presented. To develop an optimized 3D architecture for high-power composite thick-film electrodes, electrochemical analytics and post-mortem analysis using laser-induced breakdown spectroscopy were performed. Based on mapping of lithium in composite electrodes, an analytical approach for studying chemical degradation in structured and unstructured lithium-ion batteries will be presented.

  10. Whole-brain analytic measures of network communication reveal increased structure-function correlation in right temporal lobe epilepsy.

    PubMed

    Wirsich, Jonathan; Perry, Alistair; Ridley, Ben; Proix, Timothée; Golos, Mathieu; Bénar, Christian; Ranjeva, Jean-Philippe; Bartolomei, Fabrice; Breakspear, Michael; Jirsa, Viktor; Guye, Maxime

    2016-01-01

    The in vivo structure-function relationship is key to understanding brain network reorganization due to pathologies. This relationship is likely to be particularly complex in brain network diseases such as temporal lobe epilepsy, in which disturbed large-scale systems are involved in both transient electrical events and long-lasting functional and structural impairments. Herein, we estimated this relationship by analyzing the correlation between structural connectivity and functional connectivity in terms of analytical network communication parameters. As such, we targeted the gradual topological structure-function reorganization caused by the pathology not only at the whole brain scale but also both in core and peripheral regions of the brain. We acquired diffusion (dMRI) and resting-state fMRI (rsfMRI) data in seven right-lateralized TLE (rTLE) patients and fourteen healthy controls and analyzed the structure-function relationship by using analytical network communication metrics derived from the structural connectome. In rTLE patients, we found a widespread hypercorrelated functional network. Network communication analysis revealed greater unspecific branching of the shortest path (search information) in the structural connectome and a higher global correlation between the structural and functional connectivity for the patient group. We also found evidence for a preserved structural rich-club in the patient group. In sum, global augmentation of structure-function correlation might be linked to a smaller functional repertoire in rTLE patients, while sparing the central core of the brain which may represent a pathway that facilitates the spread of seizures. PMID:27330970

  11. Developing an intelligence analysis process through social network analysis

    NASA Astrophysics Data System (ADS)

    Waskiewicz, Todd; LaMonica, Peter

    2008-04-01

    Intelligence analysts are tasked with making sense of enormous amounts of data and gaining an awareness of a situation that can be acted upon. This process can be extremely difficult and time consuming. Trying to differentiate between important pieces of information and extraneous data only complicates the problem. When dealing with data containing entities and relationships, social network analysis (SNA) techniques can be employed to make this job easier. Applying network measures to social network graphs can identify the most significant nodes (entities) and edges (relationships) and help the analyst further focus on key areas of concern. Strange developed a model that identifies high-value targets such as centers of gravity and critical vulnerabilities. SNA lends itself to the discovery of these high-value targets, and the Air Force Research Laboratory (AFRL) has investigated several network measures such as centrality, betweenness, and grouping to identify centers of gravity and critical vulnerabilities. Using these network measures, a process has been developed to aid intelligence analysts in identifying points of tactical emphasis. Organizational Risk Analyzer (ORA) and Terrorist Modus Operandi Discovery System (TMODS) are the two applications used to compute the network measures and identify the points to be acted upon. Leveraging social network analysis techniques and applications will therefore provide analysts and the intelligence community with more focused and concentrated analysis results, allowing them to more easily exploit key attributes of a network and thus saving time, money, and manpower.
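    The network measures mentioned above (centrality, betweenness) can be computed without specialized tooling. The sketch below uses Brandes' algorithm for betweenness on a toy graph; it is a generic illustration, not the ORA or TMODS implementation.

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of other nodes each node is directly linked to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted, undirected graph."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1     # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                                   # BFS from s
            v = queue.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                      # back-propagate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w] / 2                  # undirected: halve double count
    return bc

# A "star" network: node 0 brokers all communication among nodes 1-4,
# so both measures flag it as the key node (a center of gravity).
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
bc = betweenness(star)
hub = max(bc, key=bc.get)   # node 0
```

On real intelligence graphs the same computation ranks entities by how much traffic they broker, which is exactly the "points of tactical emphasis" the process seeks.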

  12. Letter of Intent for RPP Characterization Program Process Engineering and Hanford Analytical Services and Characterization Project

    SciTech Connect

    ADAMS, M.R.

    2000-02-25

    The Characterization Project level of success achieved by the River Protection Project (RPP) is determined by the effectiveness of several organizations across RPP working together. The requirements, expectations, interrelationships, and performance criteria for each of these organizations were examined in order to understand the performances necessary to achieve characterization objectives. This Letter of Intent documents the results of the above examination. It formalizes the details of interfaces, working agreements, and requirements for obtaining and transferring tank waste samples from the Tank Farm System (RPP Process Engineering, Characterization Project Operations, and RPP Quality Assurance) to the characterization laboratory complex (222-S Laboratory, Waste Sampling and Characterization Facility, and the Hanford Analytical Service Program) and for the laboratory complex analysis and reporting of analytical results.

  13. A performance data network for solar process heat systems

    SciTech Connect

    Barker, G.; Hale, M.J.

    1996-03-01

    A solar process heat (SPH) data network has been developed to access remote-site performance data from operational solar heat systems. Each SPH system in the data network is outfitted with monitoring equipment and a datalogger. The datalogger is accessed via modem from the data network computer at the National Renewable Energy Laboratory (NREL). The dataloggers collect both ten-minute and hourly data and download it to the data network every 24 hours for archiving, processing, and plotting. The system data collected includes energy delivered (fluid temperatures and flow rates) and site meteorological conditions, such as solar insolation and ambient temperature. The SPH performance data network was created for collecting performance data from SPH systems that are serving in industrial applications or from systems using technologies that show promise for industrial applications. The network will be used to identify areas of SPH technology needing further development, to correlate computer models with actual performance, and to improve the credibility of SPH technology. The SPH data network also provides a centralized bank of user-friendly performance data that will give prospective SPH users an indication of how actual systems perform. There are currently three systems being monitored and archived under the SPH data network: two are parabolic trough systems and the third is a flat-plate system. The two trough systems both heat water for prisons; the hot water is used for personal hygiene, kitchen operations, and laundry. The flat-plate system heats water for meat processing at a slaughter house. We plan to connect another parabolic trough system to the network during the first months of 1996. We continue to look for good examples of systems using other types of collector technologies and systems serving new applications (such as absorption chilling) to include in the SPH performance data network.

  14. Recovery processes and dynamics in single and interdependent networks

    NASA Astrophysics Data System (ADS)

    Majdandzic, Antonio

    Systems composed of dynamical networks --- such as the human body with its biological networks or the global economic network consisting of regional clusters --- often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread, and recovery. Here we develop a model for such systems and find phase diagrams for single and interacting networks. By investigating networks with a small number of nodes, where finite-size effects are pronounced, we describe the spontaneous recovery phenomenon present in these systems. In the case of interacting networks the phase diagram is very rich and becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions, and two forbidden transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyze an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model.
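    A minimal simulation conveys the flavor of the failure/damage-spread/recovery dynamics. The specific rules and rates below are illustrative assumptions, deliberately much simpler than the phase-diagram model developed in the thesis.

```python
import random

def simulate(adj, p_fail, p_recover, steps, seed=0):
    """Toy failure/damage-spread/recovery dynamics on a network.
    A healthy node fails internally with prob. p_fail, or externally
    if a majority of its neighbors have failed; a failed node recovers
    with prob. p_recover.  Returns the fraction of active nodes over
    time.  (Illustrative rules, not the thesis's exact model.)"""
    rng = random.Random(seed)
    failed = {v: False for v in adj}
    history = []
    for _ in range(steps):
        nxt = {}
        for v, nbrs in adj.items():
            if failed[v]:
                nxt[v] = not (rng.random() < p_recover)   # may recover
            else:
                damaged = sum(failed[w] for w in nbrs)
                external = bool(nbrs) and damaged > len(nbrs) / 2
                nxt[v] = external or rng.random() < p_fail
        failed = nxt
        history.append(1 - sum(failed.values()) / len(failed))
    return history

# A ring of 20 nodes; with recovery much faster than failure, the
# network hovers in a mostly active state.
ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
activity = simulate(ring, p_fail=0.05, p_recover=0.5, steps=200)
```

Sweeping p_fail and p_recover in a model like this is how one begins to map the phases (active vs. failed) and the hysteresis the abstract describes.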

  15. IJA: an efficient algorithm for query processing in sensor networks.

    PubMed

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    One of the main features of sensor networks is the ability to process real-time state information after gathering the needed data from many domains. The component technologies that make up each sensor node, including physical sensors, processors, actuators and power supplies, have advanced significantly over the last decade. Thanks to these advances, sensor networks have been adopted across industry for sensing physical phenomena. However, sensor nodes are considerably constrained: their limited energy and memory resources give them far less processing capability than conventional computer systems, and query processing over the nodes is constrained accordingly. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes and have been studied in that setting. While simple queries, such as select and aggregate queries, have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) for sensor networks that reduces the overhead caused by moving a join pair to the final join node and minimizes the communication cost, which is the main consumer of the battery when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375

  16. Simulation of dynamic processes with adaptive neural networks.

    SciTech Connect

    Tzanos, C. P.

    1998-02-03

    Many industrial processes are highly non-linear and complex. Their simulation with first-principle or conventional input-output correlation models is not satisfactory, either because the process physics is not well understood, or it is so complex that direct simulation is either not adequately accurate, or it requires excessive computation time, especially for on-line applications. Artificial intelligence techniques (neural networks, expert systems, fuzzy logic) or their combination with simple process-physics models can be effectively used for the simulation of such processes. Feedforward (static) neural networks (FNNs) can be used effectively to model steady-state processes. They have also been used to model dynamic (time-varying) processes by adding to the network input layer input nodes that represent values of input variables at previous time steps. The number of previous time steps is problem dependent and, in general, can be determined after extensive testing. This work demonstrates that for dynamic processes that do not vary fast with respect to the retraining time of the neural network, an adaptive feedforward neural network can be an effective simulator that is free of the complexities introduced by the use of input values at previous time steps.
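    The lagged-input construction that the passage contrasts with adaptive retraining can be sketched as follows. The window length and the series are arbitrary illustrations, not the report's data.

```python
def lagged_inputs(series, n_lags):
    """Build feedforward-network training pairs for a dynamic process:
    each input vector holds the current value plus n_lags previous
    values; the target is the next value in the series."""
    X, y = [], []
    for t in range(n_lags, len(series) - 1):
        X.append(series[t - n_lags:t + 1])   # x(t-n_lags) ... x(t)
        y.append(series[t + 1])              # predict x(t+1)
    return X, y

series = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
X, y = lagged_inputs(series, n_lags=2)
# X[0] = [0.0, 0.1, 0.2] with target y[0] = 0.3
```

As the abstract notes, choosing n_lags is problem dependent; the adaptive alternative avoids the lagged inputs entirely by retraining the static network as the process drifts.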

  17. Scalable Networked Information Processing Environment (SNIPE)

    SciTech Connect

    Fagg, G.E.; Moore, K.; Dongarra, J.J.; Geist, A.

    1997-11-01

    SNIPE is a metacomputing system that aims to provide a reliable, secure, fault tolerant environment for long term distributed computing applications and data stores across the global Internet. This system combines global naming and replication of both processing and data to support large scale information processing applications leading to better availability and reliability than currently available with typical cluster computing and/or distributed computer environments.

  18. Competing Contact Processes on Homogeneous Networks with Tunable Clusterization

    NASA Astrophysics Data System (ADS)

    Rybak, Marcin; Kułakowski, Krzysztof

    2013-03-01

    We investigate two homogeneous networks: the Watts-Strogatz network with mean degree ⟨k⟩ = 4 and the Erdös-Rényi network with ⟨k⟩ = 10. In both kinds of networks, the clustering coefficient C is a tunable control parameter. The network is the arena of two competing contact processes, where nodes can be in two states, S or D. A node S becomes D with probability 1 if at least two of its mutually linked neighbors are D. A node D becomes S with a given probability p if at least one of its neighbors is S. The competition between the processes is described by a phase diagram, where the critical probability pc depends on the clustering coefficient C. For p > pc the fraction of nodes in state S increases in time, seemingly coming to dominate the whole system. Below pc, the majority of nodes are in the D-state. The numerical results indicate that for the Watts-Strogatz network the D-process is activated at a finite value of the clustering coefficient C, close to 0.3. In contrast, for the Erdös-Rényi network the transition is observed over the whole investigated range of C.
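    The update rules are concrete enough to sketch in code. Note one assumption: "mutually linked neighbors" is read here as a pair of D-neighbors that are linked to each other, and the toy graph is an illustration, not the networks studied in the paper.

```python
import random

def step(adj, state, p, rng):
    """One synchronous update of the two competing contact processes:
    S -> D with probability 1 if at least two mutually linked neighbors
    are D (read here as: two D-neighbors linked to each other);
    D -> S with probability p if at least one neighbor is S."""
    nxt = {}
    for v, nbrs in adj.items():
        if state[v] == 'S':
            d_nbrs = [w for w in nbrs if state[w] == 'D']
            linked_pair = any(u in adj[w]
                              for i, w in enumerate(d_nbrs)
                              for u in d_nbrs[i + 1:])
            nxt[v] = 'D' if linked_pair else 'S'
        else:
            has_s = any(state[w] == 'S' for w in nbrs)
            nxt[v] = 'S' if has_s and rng.random() < p else 'D'
    return nxt

rng = random.Random(1)
# A triangle 0-1-2 (mutually linked) plus a pendant node 3 on node 0.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
state = {0: 'S', 1: 'D', 2: 'D', 3: 'S'}
state = step(adj, state, p=1.0, rng=rng)
# Node 0 sees two mutually linked D-neighbors (1 and 2) -> becomes D;
# nodes 1 and 2 each see an S neighbor and p = 1 -> become S.
```

Iterating this step while sweeping p over networks of tunable clustering coefficient C is how the phase boundary pc(C) described in the abstract would be traced numerically.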

  19. Quantum Information Processing with Modular Networks

    NASA Astrophysics Data System (ADS)

    Crocker, Clayton; Inlek, Ismail V.; Hucul, David; Sosnova, Ksenia; Vittorini, Grahame; Monroe, Chris

    2015-05-01

    Trapped atomic ions are qubit standards for the production of entangled states in quantum information science and metrology applications. Trapped ions can exhibit very long coherence times, external fields can drive strong local interactions via phonons, and remote qubits can be entangled via photons. Transferring quantum information across spatially separated ion trap modules for a scalable quantum network architecture relies on the juxtaposition of both phononic and photonic buses. We report the successful combination of these protocols within and between two ion trap modules on a unit structure of this architecture where the remote entanglement generation rate exceeds the experimentally measured decoherence rate. Additionally, we report an experimental implementation of a technique to maintain phase coherence between spatially and temporally distributed quantum gate operations, a crucial prerequisite for scalability. Finally, we discuss our progress towards addressing the issue of uncontrolled cross-talk between photonic qubits and memory qubits by implementing a second ion species, Barium, to generate the photonic link. This work is supported by the ARO with funding from the IARPA MQCO program, the DARPA Quiness Program, the ARO MURI on Hybrid Quantum Circuits, the AFOSR MURI on Quantum Transduction, and the NSF Physics Frontier Center at JQI.

  20. The Rondonia Lightning Detection Network: Network Description, Science Objectives, Data Processing/Archival Methodology, and First Results

    NASA Technical Reports Server (NTRS)

    Blakeslee, Richard

    1999-01-01

    A four station Advanced Lightning Direction Finder (ALDF) network was recently established in the state of Rondonia in western Brazil through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of-arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/Marshall Space Flight Center (MSFC) in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the internet. The network will remain deployed for several years to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite which was launched in November 1997. The measurements will also be used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long-term observations from this network will contribute to establishing a regional lightning climatological data base, supplementing other data bases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at NASA/MSFC are now being applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The processing methodology and the initial results from an analysis of the first 6 months of network operations will be presented.

  1. The Rondonia Lightning Detection Network: Network Description, Science Objectives, Data Processing/Archival Methodology, and First Results

    NASA Technical Reports Server (NTRS)

    Blakeslee, Rich; Bailey, Jeff; Koshak, Bill

    1999-01-01

    A four station Advanced Lightning Direction Finder (ALDF) network was recently established in the state of Rondonia in western Brazil through a collaboration of U.S. and Brazilian participants from NASA, INPE, INMET, and various universities. The network utilizes ALDF IMPACT (Improved Accuracy from Combined Technology) sensors to provide cloud-to-ground lightning observations (i.e., stroke/flash locations, signal amplitude, and polarity) using both time-of-arrival and magnetic direction finding techniques. The observations are collected, processed and archived at a central site in Brasilia and at the NASA/ Marshall Space Flight Center (MSFC) in Huntsville, Alabama. Initial, non-quality assured quick-look results are made available in near real-time over the internet. The network will remain deployed for several years to provide ground truth data for the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite which was launched in November 1997. The measurements will also be used to investigate the relationship between the electrical, microphysical and kinematic properties of tropical convection. In addition, the long-term observations from this network will contribute in establishing a regional lightning climatological data base, supplementing other data bases in Brazil that already exist or may soon be implemented. Analytic inversion algorithms developed at NASA/Marshall Space Flight Center (MSFC) are now being applied to the Rondonian ALDF lightning observations to obtain site error corrections and improved location retrievals. The processing methodology and the initial results from an analysis of the first 6 months of network operations will be presented.

  2. Towards the understanding of network information processing in biology

    NASA Astrophysics Data System (ADS)

    Singh, Vijay

    Living organisms perform incredibly well in detecting a signal present in the environment. This information processing is achieved near optimally and quite reliably, even though the sources of signals are highly variable and complex. The work in the last few decades has given us a fair understanding of how individual signal processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, cellular-signaling circuits, etc., involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from the first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can be identified using coarse graining approaches. This could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation and information theory. Such coarse graining approaches identify features that are essential for certain processes performed by underlying biological networks. We find that long-range connections in the brain allow for global scale feature detection in a signal. These also suppress the noise and remove any gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with temporal scales. Our observations indicate that the rules in multivariate signal processing are quite different from traditional single unit signal processing.

  3. Analytical Solution of Steady State Equations for Chemical Reaction Networks with Bilinear Rate Laws

    PubMed Central

    Halász, Ádám M.; Lai, Hong-Jian; McCabe, Meghan M.; Radhakrishnan, Krishnan; Edwards, Jeremy S.

    2014-01-01

    True steady states are a rare occurrence in living organisms, yet their knowledge is essential for quasi-steady state approximations, multistability analysis, and other important tools in the investigation of chemical reaction networks (CRN) used to describe molecular processes on the cellular level. Here we present an approach that can provide closed form steady-state solutions to complex systems, resulting from CRN with binary reactions and mass-action rate laws. We map the nonlinear algebraic problem of finding steady states onto a linear problem in a higher dimensional space. We show that the linearized version of the steady state equations obeys the linear conservation laws of the original CRN. We identify two classes of problems for which complete, minimally parameterized solutions may be obtained using only the machinery of linear systems and a judicious choice of the variables used as free parameters. We exemplify our method, providing explicit formulae, on CRN describing signal initiation of two important types of RTK receptor-ligand systems, VEGF and EGF-ErbB1. PMID:24334389
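    For the simplest bilinear network, A + B <-> C, the same ingredients (a mass-action steady state plus linear conservation laws) already yield a closed form. This single-reaction example is ours, not the paper's VEGF or EGF-ErbB1 systems.

```python
import math

def binding_steady_state(a_tot, b_tot, kon, koff):
    """Closed-form steady state of the bilinear reaction A + B <-> C
    (mass action), using the conservation laws A + C = a_tot and
    B + C = b_tot.  The steady-state condition kon*A*B = koff*C
    reduces to a quadratic in C with a single physical root."""
    kd = koff / kon
    s = a_tot + b_tot + kd
    c = (s - math.sqrt(s * s - 4 * a_tot * b_tot)) / 2
    return a_tot - c, b_tot - c, c   # (A, B, C) at steady state

a, b, c = binding_steady_state(a_tot=10.0, b_tot=5.0, kon=1.0, koff=2.0)
# At the fixed point the net flux kon*A*B - koff*C vanishes.
residual = 1.0 * a * b - 2.0 * c
```

The paper's contribution is making this kind of closed-form solution possible for much larger bilinear networks, by lifting the nonlinear steady-state equations to a linear problem in a higher-dimensional space.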

  4. Experimental and analytical investigation of the seizure process in aluminum-silicon alloy/steel tribocontacts

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou

    1998-12-01

    This research is an experimental and analytical investigation of the scuffing/seizure mechanism in Al-Si alloy/steel tribocontacts. An analytical model is developed, based on analyses and experiments, to predict scuffing/seizure failure in Al-Si alloy/steel tribocontacts; it can be applied to tribo-components in engines, refrigerators and air conditioners. The wear and scuffing/seizure experiments were conducted on a block-on-ring tester for 339 and ESE-M2A137 Al-Si alloys under dry and boundary lubrication conditions. The experimental research consists of: (a) wear debris generation and EDX analysis, (b) wear surface morphological analysis, (c) scuffing/seizure mechanism and process analysis, (d) scuffing/seizure PV curves under dry contact and boundary lubrication, and (e) effects of several main factors on scuffing/seizure. The analytical research includes the following: (a) investigation of the scuffing/seizure mechanisms in Al-Si alloy/steel tribocontacts, (b) 3-D asperity contact pressures for longitudinal, transverse and isotropic surface roughness profiles, (c) 3-D surface asperity contact temperature rise due to friction, (d) failure analyses of the various lubricating films, (e) analyses of the temperature dependence of surface tangential traction and shear strength in a surface layer of Al-Si alloy, and (f) the scuffing/seizure failure analytical model under dry contact and boundary lubrication. The analytical model is based on a new hypothesis of three defense lines against scuffing/seizure failure: the adsorbed oil film, the oxide film and the ratio of the surface tangential traction to the shear strength in a surface layer. These two films, together with the surface layer itself, form three defense lines against scuffing/seizure. The surface tangential traction exceeding the bulk shear strength in a surface layer of the Al-Si alloy is the necessary and sufficient condition for scuffing/seizure to occur. The analytical model has a

  5. A DNA network as an information processing system.

    PubMed

    Santini, Cristina Costa; Bath, Jonathan; Turberfield, Andrew J; Tyrrell, Andy M

    2012-01-01

    Biomolecular systems that can process information are sought for computational applications, because of their potential for parallelism and miniaturization and because their biocompatibility also makes them suitable for future biomedical applications. DNA has been used to design machines, motors, finite automata, logic gates, reaction networks and logic programs, amongst many other structures and dynamic behaviours. Here we design and program a synthetic DNA network to implement computational paradigms abstracted from cellular regulatory networks. These show information processing properties that are desirable in artificial, engineered molecular systems, including robustness of the output in relation to different sources of variation. We show the results of numerical simulations of the dynamic behaviour of the network and preliminary experimental analysis of its main components. PMID:22606034

  6. Cortical network architecture for context processing in primate brain

    PubMed Central

    Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka

    2015-01-01

    Context is information linked to a situation that can guide behavior. In the brain, context is encoded by sensory processing and can later be retrieved from memory. How context is communicated within the cortical network in sensory and mnemonic forms is unknown due to the lack of methods for high-resolution, brain-wide neuronal recording and analysis. Here, we report the comprehensive architecture of a cortical network for context processing. Using hemisphere-wide, high-density electrocorticography, we measured large-scale neuronal activity from monkeys observing videos of agents interacting in situations with different contexts. We extracted five context-related network structures including a bottom-up network during encoding and, seconds later, cue-dependent retrieval of the same network with the opposite top-down connectivity. These findings show that context is represented in the cortical network as distributed communication structures with dynamic information flows. This study provides a general methodology for recording and analyzing cortical network neuronal communication during cognition. DOI: http://dx.doi.org/10.7554/eLife.06121.001 PMID:26416139

  7. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  8. Information processing in neural networks with the complex dynamic thresholds

    NASA Astrophysics Data System (ADS)

    Kirillov, S. Yu.; Nekorkin, V. I.

    2016-06-01

    A control mechanism of the information processing in neural networks is investigated, based on the complex dynamic threshold of the neural excitation. The threshold properties are controlled by the slowly varying synaptic current. The dynamic threshold shows high sensitivity to the rate of the synaptic current variation. It allows both to realize flexible selective tuning of the network elements and to provide nontrivial regimes of neural coding.

  9. A results-based process for evaluation of diverse visual analytics tools

    NASA Astrophysics Data System (ADS)

    Rubin, Gary; Berger, David H.

    2013-05-01

    With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together, are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.

  10. Development of balanced key performance indicators for emergency departments strategic dashboards following analytic hierarchical process.

    PubMed

    Safdari, Reza; Ghazisaeedi, Marjan; Mirzaee, Mahboobeh; Farzi, Jebrail; Goodini, Azadeh

    2014-01-01

    Dynamic reporting tools, such as dashboards, should be developed to measure emergency department (ED) performance. However, choosing an effective balanced set of performance measures and key performance indicators (KPIs) is a main challenge to accomplish this. The aim of this study was to develop a balanced set of KPIs for use in ED strategic dashboards following an analytic hierarchical process. The study was carried out in 2 phases: constructing ED performance measures based on balanced scorecard perspectives and incorporating them into analytic hierarchical process framework to select the final KPIs. The respondents placed most importance on ED internal processes perspective especially on measures related to timeliness and accessibility of care in ED. Some measures from financial, customer, and learning and growth perspectives were also selected as other top KPIs. Measures of care effectiveness and care safety were placed as the next priorities too. The respondents placed least importance on disease-/condition-specific "time to" measures. The methodology can be presented as a reference model for development of KPIs in various performance related areas based on a consistent and fair approach. Dashboards that are designed based on such a balanced set of KPIs will help to establish comprehensive performance measurements and fair benchmarks and comparisons. PMID:25350022

  11. Lateralized goal framing: How health messages are influenced by valence and contextual/analytic processing.

    PubMed

    McCormick, Michael; Seta, John J

    2016-05-01

    The effectiveness of health messages has been shown to vary due to the positive or negative framing of information, often known as goal framing. In two experiments we altered the strength of the goal framing manipulation by selectively activating the processing style of the left or right hemisphere (RH). In Experiment 1, we found support for the contextual/analytic perspective; a significant goal framing effect was observed when the contextual processing style of the RH - but not the analytic processing style of the left hemisphere (LH) - was initially activated. In Experiment 2, support for the valence hypothesis was found when a message with a higher level of personal involvement than that in Experiment 1 was used. When the LH was initially activated, there was an advantage for the gain- vs. loss-framed message; however, an opposite pattern - an advantage for the loss-framed message - was obtained when the RH was activated. These are the first framing results that support the valence hypothesis. We discuss the theoretical and applied implications of these experiments. PMID:26600087

  12. Impact of Recent Hardware and Software Trends on High Performance Transaction Processing and Analytics

    NASA Astrophysics Data System (ADS)

    Mohan, C.

    In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.

  13. Evaluation Methodology for Advance Heat Exchanger Concepts Using Analytical Hierarchy Process

    SciTech Connect

    Piyush Sabharwall; Eung Soo Kim

    2012-07-01

    The primary purpose of this study is to aid in the development and selection of the secondary/process heat exchanger (SHX) for power production and process heat applications for Next Generation Nuclear Reactors (NGNRs). Potential SHX options, such as shell-and-tube and printed circuit heat exchangers, are explored. A shell-and-tube (helically coiled) heat exchanger is recommended for a demonstration reactor because of its reliability while the reactor design is being further developed. The basic setup for the selection of the SHX has been established with evaluation goals, alternatives, and criteria. This study describes how these criteria and alternatives are evaluated using the analytical hierarchy process (AHP).

  14. Testing and Analytical Modeling for Purging Process of a Cryogenic Line

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.

    2015-01-01

    To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of an upper stage, two test series were conducted. The test article, an inclined line 3.35 m long with a diameter of 20 cm, was filled with liquid hydrogen (LH2) or gaseous hydrogen (GH2) and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer, was utilized to model and simulate selected tests.

  15. Near infrared spectroscopy and process analytical technology to master the process of busulfan paediatric capsules in a university hospital.

    PubMed

    Paris, I; Janoly-Dumenil, A; Paci, A; Mercier, L; Bourget, P; Brion, F; Chaminade, P; Rieutord, A

    2006-06-16

    The prescription of unlicensed oral medicines in paediatrics leads hospital pharmacists to compound hard capsules, such as busulfan, an alkylating agent prescribed in preparative regimens for bone marrow transplantation. In this study, we have investigated how the general principle of process analytical technology (PAT) can be implemented at the small scale of our hospital pharmacy manufacturing unit. Near infrared spectroscopy (NIRS) was calibrated for raw material identification, blend uniformity analysis and final content uniformity of busulfan hard capsules of 11 different strengths. Measurements were performed on capsules from 2 to 40 mg (n=440). After optimisation, the accuracy and linearity of the quantitative NIRS method were demonstrated by comparison with a previously validated quantitative high performance thin layer chromatography (HPTLC) method. This comparison yielded attractive NIRS precision: +/-0.7 to +/-1.0 mg for capsules from 2 to 40 mg, respectively. As NIRS is a rapid and non-destructive technique, individual control of a whole batch of busulfan paediatric capsules intended to be administered is possible. Mastering the process of busulfan paediatric capsules with NIRS integrated into the PAT framework provides a powerful analytical tool to assess process quality and to perform content uniformity testing of capsules containing at least 5 mg of busulfan. PMID:16621419

  16. Parallel plan execution with self-processing networks

    NASA Technical Reports Server (NTRS)

    Dautrechy, C. Lynne; Reggia, James A.

    1989-01-01

    A critical issue for space operations is how to develop and apply advanced automation techniques to reduce the cost and complexity of working in space. In this context, it is important to examine how recent advances in self-processing networks can be applied for planning and scheduling tasks. For this reason, the feasibility of applying self-processing network models to a variety of planning and control problems relevant to spacecraft activities is being explored. Goals are to demonstrate that self-processing methods are applicable to these problems, and that MIRRORS/II, a general purpose software environment for implementing self-processing models, is sufficiently robust to support development of a wide range of application prototypes. Using MIRRORS/II and marker passing modelling techniques, a model of the execution of a Spaceworld plan was implemented. This is a simplified model of the Voyager spacecraft which photographed Jupiter, Saturn, and their satellites. It is shown that plan execution, a task usually solved using traditional artificial intelligence (AI) techniques, can be accomplished using a self-processing network. The fact that self-processing networks were applied to other space-related tasks, in addition to the one discussed here, demonstrates the general applicability of this approach to planning and control problems relevant to spacecraft activities. It is also demonstrated that MIRRORS/II is a powerful environment for the development and evaluation of self-processing systems.

  17. Process and analytical studies of enhanced low severity co-processing using selective coal pretreatment

    SciTech Connect

    Baldwin, R.M.; Miller, R.L.

    1991-12-01

    The findings in the first phase were as follows: 1. Both reductive (non-selective) alkylation and selective oxygen alkylation brought about an increase in liquefaction reactivity for both coals. 2. Selective oxygen alkylation is more effective in enhancing the reactivity of low rank coals. In the second phase of studies, the major findings were as follows: 1. Liquefaction reactivity increases with increasing level of alkylation for both hydroliquefaction and co-processing reaction conditions. 2. The increase in reactivity found for O-alkylated Wyodak subbituminous coal is caused by chemical changes at phenolic and carboxylic functional sites. 3. O-methylation of Wyodak subbituminous coal reduced the apparent activation energy for liquefaction of this coal.

  18. Modeling socio-cultural processes in network-centric environments

    NASA Astrophysics Data System (ADS)

    Santos, Eunice E.; Santos, Eugene, Jr.; Korah, John; George, Riya; Gu, Qi; Kim, Keumjoo; Li, Deqing; Russell, Jacob; Subramanian, Suresh

    2012-05-01

    The major focus in the field of modeling & simulation for network centric environments has been on the physical layer while making simplifications for the human-in-the-loop. However, the human element has a big impact on the capabilities of network centric systems. Taking into account the socio-behavioral aspects of processes such as team building, group decision-making, etc. is critical to realistically modeling and analyzing system performance. Modeling socio-cultural processes is a challenge because of the complexity of the networks, dynamism in the physical and social layers, feedback loops and uncertainty in the modeling data. We propose an overarching framework to represent, model and analyze various socio-cultural processes within network centric environments. The key innovation in our methodology is to simultaneously model the dynamism in both the physical and social layers while providing functional mappings between them. We represent socio-cultural information such as friendships, professional relationships and temperament by leveraging the Culturally Infused Social Network (CISN) framework. The notion of intent is used to relate the underlying socio-cultural factors to observed behavior. We will model intent using Bayesian Knowledge Bases (BKBs), a probabilistic reasoning network, which can represent incomplete and uncertain socio-cultural information. We will leverage previous work on a network performance modeling framework called Network-Centric Operations Performance and Prediction (N-COPP) to incorporate dynamism in various aspects of the physical layer such as node mobility, transmission parameters, etc. We validate our framework by simulating a suitable scenario, incorporating relevant factors and providing analyses of the results.

  19. Development of analytic intermodal freight networks for use within a GIS

    SciTech Connect

    Southworth, F.; Xiong, D.; Middendorf, D.

    1997-05-01

    The paper discusses the practical issues involved in constructing intermodal freight networks that can be used within GIS platforms to support inter-regional freight routing and subsequent (for example, commodity flow) analysis. The procedures described can be used to create freight-routable and traffic flowable interstate and intermodal networks using some combination of highway, rail, water and air freight transportation. Keys to realistic freight routing are the identification of intermodal transfer locations and associated terminal functions, a proper handling of carrier-owned and operated sub-networks within each of the primary modes of transport, and the ability to model the types of carrier services being offered.

  20. Uncovering the role of elementary processes in network evolution

    PubMed Central

    Ghoshal, Gourab; Chi, Liping; Barabási, Albert-László

    2013-01-01

    The growth and evolution of networks have elicited considerable interest from the scientific community, and a number of mechanistic models have been proposed to explain their observed degree distributions. Various microscopic processes have been incorporated in these models, among them, node and edge addition, vertex fitness and the deletion of nodes and edges. The existing models, however, focus on specific combinations of these processes and parameterize them in a way that makes it difficult to elucidate the role of the individual elementary mechanisms. We therefore formulated and solved a model that incorporates the minimal processes governing network evolution. Some contribute to growth, such as the formation of connections between an existing pair of vertices, while others capture deletion; the removal of a node with its corresponding edges, or the removal of an edge between a pair of vertices. We distinguish between these elementary mechanisms, identifying their specific roles in network evolution. PMID:24108146
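    A toy simulation makes the elementary mechanisms concrete. The sketch below is an invented illustration, not the solved model from the paper: the mixing probability `p` and the uniform wiring rule are placeholder assumptions.

    ```python
    import random

    # Toy network-evolution sketch mixing two elementary processes:
    # with probability p, add a node wired to one uniformly chosen existing
    # node; otherwise delete one uniformly chosen edge (if any remain).
    def evolve(steps, p=0.8, seed=1):
        random.seed(seed)
        nodes = [0, 1]
        edges = {(0, 1)}
        for _ in range(steps):
            if random.random() < p:
                new = len(nodes)
                edges.add((random.choice(nodes), new))
                nodes.append(new)
            elif edges:
                edges.discard(random.choice(sorted(edges)))
        return nodes, edges

    nodes, edges = evolve(1000)
    print(len(nodes), len(edges))
    ```

    Because each node addition contributes exactly one edge and deletions only remove edges, the edge count can never exceed the tree bound `len(nodes) - 1` in this toy model.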

  1. High-speed parallel-processing networks for advanced architectures

    SciTech Connect

    Morgan, D.R.

    1988-06-01

    This paper describes various parallel-processing architecture networks that are candidates for eventual airborne use. An attempt at projecting which type of network is suitable or optimum for specific metafunction or stand-alone applications is made. However, specific algorithms will need to be developed and bench marks executed before firm conclusions can be drawn. Also, a conceptual projection of how these processors can be built in small, flyable units through the use of wafer-scale integration is offered. The use of the PAVE PILLAR system architecture to provide system level support for these tightly coupled networks is described. The author concludes that: (1) extremely high processing speeds implemented in flyable hardware is possible through parallel-processing networks if development programs are pursued; (2) dramatic speed enhancements through parallel processing requires an excellent match between the algorithm and computer-network architecture; (3) matching several high speed parallel oriented algorithms across the aircraft system to a limited set of hardware modules may be the most cost-effective approach to achieving speed enhancements; and (4) software-development tools and improved operating systems will need to be developed to support efficient parallel-processor use.

  2. A Process Analytical Technology (PAT) approach to control a new API manufacturing process: development, validation and implementation.

    PubMed

    Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric

    2014-03-01

    Pharmaceutical companies are progressively adopting and introducing Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, aiming to build quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fitted for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometrics methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The dosing ranges were evaluated to 9.0-12.0% w/w for the API and 0.18-1.50% w/w for the residual methanol. As by nature the variability of the sampling method and the reference method are included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and elimination of a rather difficult sampling step and tedious off-line analyses. PMID:24468350

  3. Inferring Transition Rates of Networks from Populations in Continuous-Time Markov Processes.

    PubMed

    Dixit, Purushottam D; Jain, Abhinav; Stock, Gerhard; Dill, Ken A

    2015-11-10

    We are interested in inferring rate processes on networks. In particular, given a network's topology, the stationary populations on its nodes, and a few global dynamical observables, can we infer all the transition rates between nodes? We draw inferences using the principle of maximum caliber (maximum path entropy). We have previously derived results for discrete-time Markov processes. Here, we treat continuous-time processes, such as dynamics among metastable states of proteins. The present work leads to a particularly important analytical result: namely, that when the network is constrained only by a mean jump rate, the rate matrix is given by a square-root dependence of the rate, k_ab ∝ (π_b/π_a)^(1/2), on π_a and π_b, the stationary-state populations at nodes a and b. This leads to a fast way to estimate all of the microscopic rates in the system. As an illustration, we show that the method accurately predicts the nonequilibrium transition rates in an in silico gene expression network and transition probabilities among the metastable states of a small peptide at equilibrium. We note also that the method makes sensible predictions for so-called extra-thermodynamic relationships, such as those of Bronsted, Hammond, and others. PMID:26574334
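    The square-root relation lends itself to a direct numerical check. The sketch below is an illustration under assumed inputs (made-up populations and an arbitrary overall scale `c`), not code from the paper: it builds a rate matrix from k_ab = c·(π_b/π_a)^(1/2) and verifies detailed balance.

    ```python
    import numpy as np

    # Build a continuous-time rate matrix from stationary populations using
    # the square-root relation k_ab = c * sqrt(pi_b / pi_a) on allowed edges.
    def sqrt_rate_matrix(adjacency, pi, c=1.0):
        """adjacency: symmetric 0/1 matrix of allowed transitions;
        pi: stationary populations; c: scale fixed by the mean jump rate."""
        n = len(pi)
        K = np.zeros((n, n))
        for a in range(n):
            for b in range(n):
                if a != b and adjacency[a, b]:
                    K[a, b] = c * np.sqrt(pi[b] / pi[a])
        # Diagonal makes each row sum to zero (master-equation convention).
        np.fill_diagonal(K, -K.sum(axis=1))
        return K

    adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # toy 3-node network
    pi = np.array([0.5, 0.3, 0.2])                      # assumed populations
    K = sqrt_rate_matrix(adj, pi)

    # Detailed balance: pi_a * k_ab == pi_b * k_ba for every pair (a, b).
    assert np.allclose(pi[:, None] * K, (pi[:, None] * K).T)
    ```

    Because k_ab/k_ba = π_b/π_a, detailed balance holds by construction, so π is automatically the stationary distribution of the resulting master equation.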

  4. Network analysis of corticocortical connections reveals ventral and dorsal processing streams in mouse visual cortex

    PubMed Central

    Wang, Quanxin; Sporns, Olaf; Burkhalter, Andreas

    2012-01-01

    Much of the information used for visual perception and visually guided actions is processed in complex networks of connections within the cortex. To understand how this works in the normal brain and to determine the impact of disease, mice are promising models. In primate visual cortex, information is processed in a dorsal stream specialized for visuospatial processing and guided action and a ventral stream for object recognition. Here, we traced the outputs of 10 visual areas and used quantitative graph analytic tools of modern network science to determine, from the projection strengths in 39 cortical targets, the community structure of the network. We found a high density of the cortical graph that exceeded that previously shown in monkey. Each source area showed a unique distribution of projection weights across its targets (i.e. connectivity profile) that was well-fit by a lognormal function. Importantly, the community structure was strongly dependent on the location of the source area: outputs from medial/anterior extrastriate areas were more strongly linked to parietal, motor and limbic cortex, whereas lateral extrastriate areas were preferentially connected to temporal and parahippocampal cortex. These two subnetworks resemble dorsal and ventral cortical streams in primates, demonstrating that the basic layout of cortical networks is conserved across species. PMID:22457489

  5. Beyond business process redesign: redefining Baxter's business network.

    PubMed

    Short, J E; Venkatraman, N

    1992-01-01

    Business process redesign has focused almost exclusively on improving the firm's internal operations. Although internal efficiency and effectiveness are important objectives, the authors argue that business network redesign--reconceptualizing the role of the firm and its key business processes in the larger business network--is of greater strategic importance. To support their argument, they analyze the evolution of Baxter's ASAP system, one of the most publicized but inadequately understood strategic information systems of the 1980s. They conclude by examining whether ASAP's early successes have positioned the firm well for the changing hospital supplies marketplace of the 1990s. PMID:10122293

  6. BiNA: A Visual Analytics Tool for Biological Network Data

    PubMed Central

    Gerasch, Andreas; Faber, Daniel; Küntzer, Jan; Niermann, Peter; Kohlbacher, Oliver; Lenhof, Hans-Peter; Kaufmann, Michael

    2014-01-01

    Interactive visual analysis of biological high-throughput data in the context of the underlying networks is an essential task in modern biomedicine with applications ranging from metabolic engineering to personalized medicine. The complexity and heterogeneity of data sets require flexible software architectures for data analysis. Concise and easily readable graphical representation of data and interactive navigation of large data sets are essential in this context. We present BiNA - the Biological Network Analyzer - a flexible open-source software for analyzing and visualizing biological networks. Highly configurable visualization styles for regulatory and metabolic network data offer sophisticated drawings and intuitive navigation and exploration techniques using hierarchical graph concepts. The generic projection and analysis framework provides powerful functionalities for visual analyses of high-throughput omics data in the context of networks, in particular for the differential analysis and the analysis of time series data. A direct interface to an underlying data warehouse provides fast access to a wide range of semantically integrated biological network databases. A plugin system allows simple customization and integration of new analysis algorithms or visual representations. BiNA is available under the 3-clause BSD license at http://bina.unipax.info/. PMID:24551056

  7. Analytical Model for the Diffusion Process in a In-Situ Combustion Tube

    NASA Astrophysics Data System (ADS)

    Gutierrez, Patricia; Reyes, Adrian

    2015-03-01

    The in-situ combustion process (ISC) is basically an air or oxygen-enriched gas injection oil recovery process inside an extraction well. In contrast to a conventional gas injection process, an ISC process uses heat to create a combustion front that raises the fuel temperature, decreasing its viscosity and making extraction easier. The oil is driven toward the producer well by a vigorous gas thrust as well as a water thrust. To improve and enhance this technique in field wells, experimental laboratory tests have been widely performed, in which an in-situ combustion tube is designed to simulate the extraction process. In the present work we propose to solve the problem analytically, with a parabolic partial differential equation associated with the convection-diffusion phenomenon, the equation which describes the in-situ combustion process. The whole mathematical problem is established by completing this equation with the corresponding boundary and initial conditions, the thickness of the combustion zone, the flow velocity, and other parameters. The theoretically obtained results are compared with those reported in the literature. We further fit the parameters of our model to the mentioned data taken from the literature.
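    As a point of reference for this class of problems, the one-dimensional convection-diffusion equation ∂c/∂t + v ∂c/∂x = D ∂²c/∂x² admits the classical Gaussian pulse solution c(x, t) = exp(−(x − vt)² / 4Dt) / √(4πDt). The sketch below evaluates it with arbitrary placeholder parameters, not the paper's fitted values.

    ```python
    import numpy as np

    # Fundamental (unit-mass) solution of 1-D convection-diffusion:
    # the initial point pulse advects at speed v and spreads with diffusivity D.
    def pulse(x, t, v, D):
        return np.exp(-(x - v * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

    x = np.linspace(-1.0, 5.0, 4001)
    c = pulse(x, t=2.0, v=1.0, D=0.05)   # placeholder v, D, t

    peak = x[np.argmax(c)]               # pulse centre, expected at x = v * t
    mass = np.sum(c) * (x[1] - x[0])     # total "mass", expected to stay 1
    print(peak, mass)
    ```

    The two printed quantities check the defining properties of the solution: advection of the peak to x = vt and conservation of the integral of c.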

  8. IT vendor selection model by using structural equation model & analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model will be applied using a real-life case study to assess its effectiveness. In addition, a what-if analysis technique will be used for model validation purposes.
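    For readers unfamiliar with the AHP step of such hybrid models, the standard priority calculation (principal eigenvector of a pairwise-comparison matrix, plus Saaty's consistency index) can be sketched as follows. The three criteria and the judgment values here are invented placeholders, not data from the study.

    ```python
    import numpy as np

    # Pairwise comparisons on Saaty's 1-9 scale for three hypothetical
    # criteria (e.g. cost vs. quality vs. delivery); A[i, j] = 1 / A[j, i].
    A = np.array([
        [1.0,   3.0,  5.0],
        [1/3.0, 1.0,  2.0],
        [1/5.0, 1/2.0, 1.0],
    ])

    # Priority weights = normalized principal eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Saaty's consistency index CI = (lambda_max - n) / (n - 1);
    # judgments are usually considered acceptable when CI / RI < 0.1.
    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)
    print(w, CI)
    ```

    In a full ANP/AHP study the same calculation is repeated for each level of the hierarchy, and the weights are combined down to the alternatives.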

  9. 77 FR 7214 - Notice of Availability: Programmatic Environmental Assessment for Mail Processing Network...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    ... Availability: Programmatic Environmental Assessment for Mail Processing Network Rationalization Initiative (Formerly Known as the ``Network Optimization'' Initiative), Nationwide AGENCY: Postal Service. ACTION... available a Programmatic Environmental Assessment (PEA) for the Mail Processing Network...

  10. Complete Condensation of Zero Range Process in Fitness Networks

    NASA Astrophysics Data System (ADS)

    Su, Gui-Feng; Li, Xiao-Wen; Zhang, Xiao-Bing; Zhang, Yi; Li, Xue

    2015-12-01

    In the current paper we study the so-called “complete condensation” of the zero range process on a fitness network. It is found that in the high temperature limit, the condensation behavior of the fitness model converges to that of the scale-free network, as expected. However, at some temperatures below the critical temperature of the Bose-Einstein condensate phase on the fitness network, complete condensation occurs as well for some values of δ > δc, which is impossible on a scale-free network according to the criterion. Supported by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry (SRF for ROCS, SEM) of China, and National Natural Science Foundation of China under Grant No. 11505115

  11. Natural Language Processing Neural Network Considering Deep Cases

    NASA Astrophysics Data System (ADS)

    Sagara, Tsukasa; Hagiwara, Masafumi

    In this paper, we propose a novel neural network considering deep cases. It can learn knowledge from natural language documents and can perform recall and inference. Various techniques of natural language processing using neural networks have been proposed. However, the natural language sentences used in these techniques consist of only a few words, and these techniques cannot handle complicated sentences. In order to solve these problems, the proposed network divides natural language sentences into a sentence layer, a knowledge layer, ten kinds of deep case layers and a dictionary layer. It can learn the relations among sentences and among words by dividing sentences. The advantages of the method are as follows: (1) ability to handle complicated sentences; (2) ability to restructure sentences; (3) usage of the conceptual dictionary, Goi-Taikei, as long-term memory in the brain. Two kinds of experiments were carried out by using the goo dictionary and Wikipedia as knowledge sources. The superior performance of the proposed neural network has been confirmed.

  12. Network application and framework for quality of information processing

    NASA Astrophysics Data System (ADS)

    Marcus, Kelvin; Cook, Trevor; Scott, Lisa; Toth, Andrew

    2012-06-01

    To improve the effectiveness of network-centric decision making, we present a distributed network application and framework that provides users with actionable intelligence reports to support counter insurgency operations. ARL's Quality of Information (QoI) Intelligence Report Application uses QoI metrics like timeliness, accuracy, and precision combined with associated network performance data, such as throughput and latency, and mission-specific information requirements to deliver high quality data to users; that is, data delivered in a manner which best supports the ability to make more informed decisions as it relates to the current mission. This application serves as a testing platform for integrated experimentation and validation of QoI processing techniques and methodologies. In this paper, we present the software-system framework and architecture, and show an example scenario that highlights how the framework aids in network integration and enables better data-to-decision.

  13. Applications of neural networks to process control and modeling

    SciTech Connect

    Barnes, C.W.; Brown, S.K.; Flake, G.W.; Jones, R.D.; O'Rourke, M.K.; Lee, Y.C.

    1991-01-01

    Modeling and control of physical processes are universal parts of modern life, from control of chemical plants to riding a bicycle. Often, an effective model of the process is not known, so traditional control theory is of little use. If a process can be represented by a set of data that captures its behavior over a range of parameter settings, a neural net can inductively model the process and form the basis of an optimization procedure. We present a neural network architecture that is particularly effective in process modeling and control. We discuss its effectiveness in several application areas, as well as some of the non-ideal characteristics present in real control problems that affect the form and style of the network architecture and learning algorithm. 8 refs., 6 figs.

  14. Quantification of Process Induced Disorder in Milled Samples Using Different Analytical Techniques

    PubMed Central

    Zimper, Ulrike; Aaltonen, Jaakko; McGoverin, Cushla M.; Gordon, Keith C.; Krauel-Goellner, Karen; Rades, Thomas

    2010-01-01

    The aim of this study was to compare three different analytical methods to detect and quantify the amount of crystalline disorder/amorphousness in two milled model drugs. X-ray powder diffraction (XRPD), differential scanning calorimetry (DSC) and Raman spectroscopy were used as analytical methods, and indomethacin and simvastatin were chosen as the model compounds. These compounds were partly converted from crystalline to disordered forms by milling. Partial least squares regression (PLS) was used to create calibration models for the XRPD and Raman data, which were subsequently used to quantify the milling-induced crystalline disorder/amorphousness under different process conditions. In the DSC measurements the change in heat capacity at the glass transition was used for quantification. Differently prepared amorphous indomethacin standards (prepared by either melt quench cooling or cryo milling) were compared by principal component analysis (PCA) to account for the fact that the choice of standard ultimately influences the quantification outcome. Finally, the calibration models were built using binary mixtures of crystalline and quench-cooled amorphous drug materials. The results imply that the outcome with respect to crystalline disorder for milled drugs depends on the analytical method used and the calibration standard chosen, as well as on the drug itself. From the data presented here, it appears that XRPD tends to give a higher percentage of crystalline disorder than Raman spectroscopy and DSC for the same samples. For the samples milled under the harshest milling conditions applied (60 min, sixty 4 mm balls, 25 Hz), a crystalline disorder/amorphous content of 44.0% (XRPD), 10.8% (Raman spectroscopy) and 17.8% (DSC) was detected for indomethacin. For simvastatin, 18.3% (XRPD), 15.5% (Raman spectroscopy) and 0% (DSC, no glass transition) crystalline disorder/amorphousness was detected.
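
    The PLS calibration step described in this record can be illustrated with a minimal, self-contained sketch: a one-component PLS (NIPALS) model fit to synthetic binary-mixture "spectra". All signal shapes, concentrations, and noise levels below are hypothetical stand-ins for real XRPD or Raman data, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pure-phase signals: two sharp "crystalline" peaks and one
# broad "amorphous" hump (stand-ins for real XRPD/Raman patterns).
wav = np.linspace(0, 1, 200)
s_cryst = np.exp(-((wav - 0.3) / 0.02) ** 2) + np.exp(-((wav - 0.7) / 0.02) ** 2)
s_amorph = np.exp(-((wav - 0.5) / 0.15) ** 2)

# Calibration set: binary mixtures with known amorphous fraction y.
y = np.linspace(0.0, 1.0, 11)
X = np.outer(y, s_amorph) + np.outer(1 - y, s_cryst)
X += rng.normal(0, 0.01, X.shape)          # measurement noise

# One-component PLS (NIPALS) on mean-centered data.
Xm, ym = X.mean(axis=0), y.mean()
Xc, yc = X - Xm, y - ym
w = Xc.T @ yc
w /= np.linalg.norm(w)                     # weight vector
t = Xc @ w                                 # scores
b = (yc @ t) / (t @ t)                     # inner regression coefficient

def predict(x_new):
    """Predict amorphous fraction of a new spectrum."""
    return ym + b * ((x_new - Xm) @ w)

# Quantify an "unknown" milled sample that is 40% amorphous.
x_test = 0.4 * s_amorph + 0.6 * s_cryst + rng.normal(0, 0.01, wav.size)
print(round(float(predict(x_test)), 2))
```

In a real chemometrics workflow one would use several latent variables and cross-validation (e.g. via a dedicated PLS library) rather than this single-component sketch.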

  15. Scenes for Social Information Processing in Adolescence: Item and factor analytic procedures for psychometric appraisal.

    PubMed

    Vagos, Paula; Rijo, Daniel; Santos, Isabel M

    2016-04-01

    Relatively little is known about the measures used to investigate the validity and applications of social information processing theory. The Scenes for Social Information Processing in Adolescence includes items built using a participatory approach to evaluate the attribution of intent, emotion intensity, response evaluation, and response decision steps of social information processing. We evaluated a sample of 802 Portuguese adolescents (61.5% female; mean age = 16.44 years old) using this instrument. Item analysis and exploratory and confirmatory factor analytic procedures were used for psychometric examination. Two measures for attribution of intent were produced, hostile and neutral, along with 3 emotion measures focused on negative emotional states, 8 response evaluation measures, and 4 response decision measures, including prosocial and impaired social behavior. All of these measures achieved good internal consistency values and fit indicators. Boys seemed to favor and choose overt and relational aggression behaviors more often; girls conveyed higher levels of neutral attribution, sadness, assertiveness, and passivity. The Scenes for Social Information Processing in Adolescence achieved adequate psychometric results and seems a valuable alternative for evaluating social information processing, even if it is essential to continue investigating its internal and external validity. PMID:26214013

  16. Specificity, promiscuity, and the structure of complex information processing networks

    NASA Astrophysics Data System (ADS)

    Myers, Christopher

    2006-03-01

    Both the top-down designs of engineered systems and the bottom-up serendipities of biological evolution must negotiate tradeoffs between specificity and control: overly specific interactions between components can make systems brittle and unevolvable, while more generic interactions can require elaborate control in order to aggregate specificity from distributed pieces. Complex information processing systems reveal network organizations that navigate this landscape of constraints: regulatory and signaling networks in cells involve the coordination of molecular interactions that are surprisingly promiscuous, and object-oriented design in software systems emphasizes the polymorphic composition of objects of minimal necessary specificity [C.R. Myers, Phys Rev E 68, 046116 (2003)]. Models of information processing arising both in systems biology and engineered computation are explored to better understand how particular network organizations can coordinate the activity of promiscuous components to achieve robust and evolvable function.

  17. Diffusion processes of fragmentary information on scale-free networks

    NASA Astrophysics Data System (ADS)

    Li, Xun; Cao, Lang

    2016-05-01

    Compartmental models of diffusion over contact networks have proven representative of real-life propagation phenomena among interacting individuals. However, there is a broad class of collective spreading mechanisms departing from compartmental representations, including those for diffusive objects capable of fragmentation and of transmission not necessarily as a whole. Here, we consider a continuous-state susceptible-infected-susceptible (SIS) model as an ideal limit case of diffusion processes of fragmentary information on networks, where individuals possess fractions of the information content and update them by selectively exchanging messages with partners in the vicinity. Specifically, we incorporate local information, such as neighbors' node degrees and carried contents, into the individual partner choice, and examine the roles of a variety of such strategies in the information diffusion process, both qualitatively and quantitatively. Our method provides an effective and flexible route for modulating continuous-state diffusion dynamics on networks and has potential in a wide array of practical applications.

  18. Analytical investigation of torque and flux ripple in induction motor control scheme using wavelet network

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Zhang, Hong; Qin, Aili

    2008-10-01

    An effective scheme of parameter identification based on a wavelet neural network is presented for improving the dynamic performance of a direct torque control system. The wavelet transform is localized in both time and frequency domains, yielding wavelet coefficients at different scales. This gives the wavelet transform much greater compact support for the analysis of signals with localized transient components. The input nodes of the wavelet neural network are the current error and the change in the current error, and the output node is the stator resistance error. To determine the network structure parameters, an improved least squares algorithm is used for initialization. The stator flux vector and electromagnetic torque are acquired accurately by the parameter estimator once the instants are detected. This function enables the induction motor to operate well in the low-speed region and can optimize the inverter control strategy. The simulation results show that the proposed method can efficiently reduce the torque ripple and current ripple.

  19. Analytic treatment of tipping points for social consensus in large random networks.

    PubMed

    Zhang, W; Lim, C; Szymanski, B K

    2012-12-01

    We introduce a homogeneous pair approximation to the naming game (NG) model by deriving a six-dimensional ordinary differential equation (ODE) system for the two-word naming game. Our ODE system reveals the change in dynamical behavior of the naming game as a function of the average degree {k} of an uncorrelated network. This result is in good agreement with the numerical results. We also analyze the extended NG model that allows for the presence of committed nodes and show that there is a shift of the tipping point for social consensus in sparse networks. PMID:23367920

  20. Performance of the analytical solutions for Taylor dispersion process in open channel flow

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Wu, Zi; Fu, Xudong; Wang, Guangqian

    2015-09-01

    The present paper provides a systematic analysis of the concentration distribution of Taylor dispersion in laminar open channel flow, seeking a fundamental understanding of the physical process of solute transport that generally applies to natural rivers. As a continuation and a direct numerical verification of previous theoretical work (Wu, Z., Chen, G.Q., 2014. Journal of Hydrology, 519: 1974-1984), in this paper we attempt to understand to what extent the obtained analytical solutions are valid for the multi-dimensional concentration distribution, which is vital for the key conclusion of the so-called slow-decaying transient effect. It is shown that, even asymptotically, the longitudinal skewness of the concentration distribution should be incorporated as a first estimation in order to predict the vertical concentration correctly. Thus the traditional truncation of the concentration expansion is considered insufficient for the first estimation. The analytical solution by the two-scale perturbation analysis, with modifications up to the second order, is shown to be the most economical solution that gives a reasonably good prediction.

  1. An analysis of a developmentally delayed young girl. Coordinating analytic and developmental processes.

    PubMed

    Olesker, Wendy

    2003-01-01

    Clinical material is presented from a multi-year treatment of a five-year-old girl with a variety of developmental interferences, making it necessary to consider whether standard technique would suffice. Her history includes the fact that she was adopted five days after birth and told about her adoption as early as possible; she was placed in a restrictive brace from four months to twenty months because of congenital hip dysplasia. Sandy's ability to let in the outside world was limited by her intense denial, not looking, not taking in, and by her detachment. Her passivity, whether a defense (modeled on her experience of physical restraint) or an arrest, was a formidable obstacle to the development of active transference moments. I use this case as an opportunity to look at the role of developmental sequences in the context of the analytic process. While I consciously did not do anything different than I would with any child analytic patient, I intuitively stressed certain kinds of interventions. PMID:14982015

  2. Brain Network Interactions in Auditory, Visual and Linguistic Processing

    ERIC Educational Resources Information Center

    Horwitz, Barry; Braun, Allen R.

    2004-01-01

    In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain, and thus are…

  3. An application of neural networks to process and materials control

    SciTech Connect

    Howell, J.A.; Whiteson, R.

    1991-01-01

    Process control consists of two basic elements: a model of the process and knowledge of the desired control algorithm. In some cases the level of the control algorithm is merely supervisory, as in an alarm-reporting or anomaly-detection system. If the model of the process is known, then a set of equations may often be solved explicitly to provide the control algorithm. Otherwise, the model has to be discovered through empirical studies. Neural networks have properties that make them useful in this application. They can learn (make internal models from experience or observations). The problem of anomaly detection in materials control systems fits well into this general control framework. To successfully model a process with a neural network, a good set of observables must be chosen. These observables must in some sense adequately span the space of representable events, so that a signature metric can be built for normal operation. In this way, a non-normal event, one that does not fit within the signature, can be detected. In this paper, we discuss the issues involved in applying a neural network model to anomaly detection in materials control systems. These issues include data selection and representation, network architecture, prediction of events, the use of simulated data, and software tools. 10 refs., 4 figs., 1 tab.

  4. Multi-loop networked process control: a synchronized approach.

    PubMed

    Das, M; Ghosh, R; Goswami, B; Chandra, A K; Balasubramanian, R; Luksch, P; Gupta, A

    2009-01-01

    Modern day process control uses digital controllers which are based on the principle of distributed rather than centralized control. Distributing controllers, sensors and actuators across a plant entails considerable wiring which can be reduced substantially by integrating the components of a control loop over a network. The other advantages include greater flexibility and higher reliability with lower hardware redundancy. The controllers and sensors are on a network and can take over the function of a failed component automatically, without the need of manual reconfiguration, thus eliminating the need of having a redundant component for each and every component. Though elaborate techniques have been developed for Single Input Single Output (SISO) systems, the major challenge lies in extending these ideas to control a practical process plant where de-centralized control is actually achieved through control of individual SISO control loops derived through de-coupling of the original system. Multiple loops increase network load and hence the sampling times associated with the control loops and makes synchronization difficult. This paper presents a methodology by which network based process control can be applied to practical process plants, with a simple direct synchronization mechanism. PMID:19028386

  5. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    ERIC Educational Resources Information Center

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  6. Signal Processing in Periodically Forced Gradient Frequency Neural Networks

    PubMed Central

    Kim, Ji Chul; Large, Edward W.

    2015-01-01

    Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858

  7. Signal processing techniques for synchronization of wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Lee, Jaehan; Wu, Yik-Chung; Chaudhari, Qasim; Qaraqe, Khalid; Serpedin, Erchin

    2010-11-01

    Clock synchronization is a critical component in wireless sensor networks, as it provides a common time frame to different nodes. It supports functions such as fusing voice and video data from different sensor nodes, time-based channel sharing, and sleep/wake-up scheduling. Early studies on clock synchronization for wireless sensor networks mainly focused on protocol design. However, the clock synchronization problem is inherently a parameter estimation problem, and recently, studies of clock synchronization from the signal processing viewpoint have started to emerge. In this article, a survey of the latest advances in clock synchronization is provided from a signal processing viewpoint. We demonstrate that many existing and intuitive clock synchronization protocols can be interpreted in terms of common statistical signal processing methods. Furthermore, the use of advanced signal processing techniques for deriving optimal clock synchronization algorithms under challenging scenarios is illustrated.
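
    As a concrete instance of clock synchronization viewed as parameter estimation, the classical two-way message-exchange offset estimator can be sketched as follows. The timestamps, fixed delay, and exponential-jitter assumption below are illustrative choices for the sketch, not parameters taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

true_offset = 0.37   # seconds the sensor clock is ahead of the reference
prop_delay = 0.01    # fixed propagation delay, assumed symmetric
n_rounds = 200

# Two-way exchange: T1 = send (reference clock), T2 = receive (sensor clock),
# T3 = reply (sensor clock), T4 = receive (reference clock). Queuing jitter
# is modeled as exponential noise, a common assumption in this literature.
t1 = np.sort(rng.uniform(0, 100, n_rounds))
d_up = prop_delay + rng.exponential(0.002, n_rounds)
t2 = t1 + d_up + true_offset          # sensor clock reads reference + offset
t3 = t2 + 0.001                       # small processing time at the sensor
d_down = prop_delay + rng.exponential(0.002, n_rounds)
t4 = t3 - true_offset + d_down        # back on the reference clock

# Classical estimator: offset = ((T2 - T1) - (T4 - T3)) / 2 per round,
# averaged over rounds (symmetric-delay assumption cancels the delay term).
offset_est = ((t2 - t1) - (t4 - t3)) / 2
print(offset_est.mean())
```

Under asymmetric delays or clock skew, this simple average is biased, which is exactly where the estimation-theoretic treatments surveyed in the article come in.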

  8. Distributed process manager for an engineering network computer

    SciTech Connect

    Gait, J.

    1987-08-01

    MP is a manager for systems of cooperating processes in a local area network of engineering workstations. MP supports transparent continuation by maintaining multiple copies of each process on different workstations. Computational bandwidth is optimized by executing processes in parallel on different workstations. Responsiveness is high because workstations compete among themselves to respond to requests. The technique is to select a master from among a set of replicates of a process by a competitive election between the copies. Migration of the master, when a fault occurs or when response slows down, is effected by inducing the election of a new master. Competitive response stabilizes system behavior under load, so MP exhibits real-time behavior.

  9. U.S. EPA's National Dioxin Air Monitoring Network: Analytical Issues

    EPA Science Inventory

    The U.S. EPA has established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric chlorinated dibenzo-p-dioxins (CDDs), furans (CDFs), and coplanar polychlorinated biphenyls (PCBs) at rural and non-impacted locatio...

  10. Tools for Large-Scale Data Analytic Examination of Relational and Epistemic Networks in Engineering Education

    ERIC Educational Resources Information Center

    Madhavan, Krishna; Johri, Aditya; Xian, Hanjun; Wang, G. Alan; Liu, Xiaomo

    2014-01-01

    The proliferation of digital information technologies and related infrastructure has given rise to novel ways of capturing, storing and analyzing data. In this paper, we describe the research and development of an information system called Interactive Knowledge Networks for Engineering Education Research (iKNEER). This system utilizes a framework…

  11. Analytical Study of different types Of network failure detection and possible remedies

    NASA Astrophysics Data System (ADS)

    Saxena, Shikha; Chandra, Somnath

    2012-07-01

    Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and of the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control-plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed to localize an SRLG failure in an arbitrary graph. A procedure for protection and restoration from SRLG failures using a backup re-provisioning algorithm is also discussed.

  12. Using the analytical hierarchy process to assess the environmental vulnerabilities of basins in Taiwan.

    PubMed

    Chang, Chia-Ling; Chao, Yu-Chi

    2012-05-01

    Every year, Taiwan endures typhoons and earthquakes; these natural hazards often induce landslides and debris flows. Therefore, watershed management strategies must consider the environmental vulnerabilities of local basins. Because many factors affect basin ecosystems, this study applied multiple criteria analysis and the analytical hierarchy process (AHP) to evaluate seven criteria in three phases (geographic phase, hydrologic phase, and societal phase). This study focused on five major basins in Taiwan: the Tan-Shui River Basin, the Ta-Chia River Basin, the Cho-Shui River Basin, the Tseng-Wen River Basin, and the Kao-Ping River Basin. The objectives were a comprehensive examination of the environmental characteristics of these basins and a comprehensive assessment of their environmental vulnerabilities. The results of a survey and AHP analysis showed that landslide area is the most important factor for basin environmental vulnerability. Of all these basins, the Cho-Shui River Basin in central Taiwan has the greatest environmental vulnerability. PMID:21713488
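
    The AHP weighting used in this record (and in the AHP records that follow) can be sketched with the standard eigenvector method: criterion weights are the normalized principal eigenvector of an expert pairwise comparison matrix, checked with Saaty's consistency ratio. The 3x3 matrix and criterion count below are hypothetical, not the study's actual seven-criterion survey data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria on Saaty's
# 1-9 scale: entry A[i, j] is how much more important criterion i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# where RI is Saaty's random index (0.58 for n = 3).
n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)
cr = ci / 0.58

print(w, cr)  # CR < 0.1 is conventionally considered acceptable
```

In a full AHP (or ANP) study, such matrices are built at every level of the hierarchy and the resulting weights are aggregated down to the alternatives.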

  13. Testing and analytical modelling for the purging process of a cryogenic line

    NASA Astrophysics Data System (ADS)

    Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.

    2015-12-01

    To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of the upper stage, two test series were conducted. The test article, a 3.35 m long inclined line with a diameter of 20 cm, was filled with liquid or gaseous hydrogen and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer computer program, was utilized to model and simulate selected tests. The test procedures, modelling descriptions, and the results are presented in the accompanying text.

  14. Priority survey between indicators and analytic hierarchy process analysis for green chemistry technology assessment

    PubMed Central

    Kim, Sungjune; Hong, Seokpyo; Ahn, Kilsoo; Gong, Sungyong

    2015-01-01

    Objectives This study presents the indicators and proxy variables for the quantitative assessment of green chemistry technologies and evaluates the relative importance of each assessment element by consulting experts from the fields of ecology, chemistry, safety, and public health. Methods The results collected were subjected to an analytic hierarchy process to obtain the weights of the indicators and the proxy variables. Results These weights may prove useful in avoiding resort to qualitative judgment, in the absence of weights between indicators, when integrating the results of quantitative assessment by indicator. Conclusions This study points to the limitations of current quantitative assessment techniques for green chemistry technologies and seeks to present a future direction for the quantitative assessment of green chemistry technologies. PMID:26206364

  15. Analytic hierarchy process as module for productivity evaluation and decision-making of the operation theater.

    PubMed

    Ezzat, Abdelrahman E M; Hamoud, Hesham S

    2016-01-01

    The analytic hierarchy process (AHP) is a theory of measurement through pairwise comparisons that relies on the judgments of experts to derive priority scales, scales that measure intangibles in relative terms. The aim of the article was to develop a model for productivity measurement of the operation theater (OT), which could be applied as a model for quality improvement and decision-making. AHP is used in this article to evolve such a model. The steps consist of identifying the critical success factors for measuring the productivity of the OT, identifying subfactors that influence the critical factors, making pairwise comparisons, deriving their relative importance and ratings, and calculating the cumulative effect according to the attributes in the OT. The cumulative productivity can then be calculated and compared with the ideal productivity to express the productivity of the OT as a percentage. Hence, AHP is a very useful model to measure productivity in the OT. PMID:26955599

  16. Analytic hierarchy process as module for productivity evaluation and decision-making of the operation theater

    PubMed Central

    Ezzat, Abdelrahman E. M.; Hamoud, Hesham S.

    2016-01-01

    The analytic hierarchy process (AHP) is a theory of measurement through pairwise comparisons that relies on the judgments of experts to derive priority scales, scales that measure intangibles in relative terms. The aim of the article was to develop a model for productivity measurement of the operation theater (OT), which could be applied as a model for quality improvement and decision-making. AHP is used in this article to evolve such a model. The steps consist of identifying the critical success factors for measuring the productivity of the OT, identifying subfactors that influence the critical factors, making pairwise comparisons, deriving their relative importance and ratings, and calculating the cumulative effect according to the attributes in the OT. The cumulative productivity can then be calculated and compared with the ideal productivity to express the productivity of the OT as a percentage. Hence, AHP is a very useful model to measure productivity in the OT. PMID:26955599

  17. Testing and Analytical Modeling for Purging Process of a Cryogenic Line

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.

    2015-01-01

    To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of the upper stage, two test series were conducted. The test article, a 3.35 m long inclined line with a diameter of 20 cm, was filled with liquid or gaseous hydrogen and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer computer program, was utilized to model and simulate selected tests. The test procedures, modeling descriptions, and the results are presented in the following sections.

  18. Testing and Analytical Modeling for Purging Process of a Cryogenic Line

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.

    2013-01-01

    To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of the upper stage, two test series were conducted. The test article, a 3.35 m long inclined line with a diameter of 20 cm, was filled with liquid or gaseous hydrogen and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer computer program, was utilized to model and simulate selected tests. The test procedures, modeling descriptions, and the results are presented in the following sections.

  19. Congestion estimation technique in the optical network unit registration process.

    PubMed

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results. PMID:27367066

  20. Reaction-diffusion processes on interconnected scale-free networks

    NASA Astrophysics Data System (ADS)

    Garas, Antonios

    2015-08-01

    We study the two-particle annihilation reaction A +B →∅ on interconnected scale-free networks, using different interconnecting strategies. We explore how the mixing of particles and the process evolution are influenced by the number of interconnecting links, by their functional properties, and by the interconnectivity strategies in use. We show that the reaction rates on this system are faster than what was observed in other topologies, due to the better particle mixing that suppresses the segregation effect, in line with previous studies performed on single scale-free networks.

  1. Epidemic process on activity-driven modular networks

    NASA Astrophysics Data System (ADS)

    Han, Dun; Sun, Mei; Li, Dandan

    2015-08-01

    In this paper, we propose two novel models of epidemic spreading that take into account activity-driven dynamics and network modularity. First, we consider the susceptible-infected-susceptible (SIS) contagion model and derive the epidemic threshold analytically. The results indicate that the epidemic threshold depends only on the spreading rate and the recovery rate. In addition, the asymptotic density of infected nodes in the different communities exhibits different trends as the modularity factor changes. Then, an infection-driven vaccination model is presented. Simulation results illustrate that the final density of vaccination increases with the response strength of vaccination. Moreover, the final infected density in the originally infected community shows different trends as the response strength of vaccination and the spreading rate change. Infection-driven vaccination is a good way to control epidemic spreading.
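
    The existence of an SIS epidemic threshold, central to this record, can be illustrated with a minimal discrete-time simulation. The random (Erdos-Renyi) network and all parameters below are illustrative stand-ins for the paper's activity-driven modular networks: below the homogeneous mean-field threshold the infection dies out, well above it the infection becomes endemic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative random contact network (not the paper's model).
n, p = 500, 0.02
adj = rng.random((n, n)) < p
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(np.int8)        # symmetric 0/1 adjacency matrix

mu = 0.2                                   # recovery probability per step
k_mean = adj.sum() / n                     # average degree (about n * p)
beta_c = mu / k_mean                       # homogeneous mean-field threshold

def sis_prevalence(beta, steps=300):
    """Discrete-time SIS: infected nodes recover w.p. mu; a susceptible
    node with m infected neighbors is infected w.p. 1 - (1 - beta)**m."""
    infected = rng.random(n) < 0.1         # start with ~10% infected
    for _ in range(steps):
        n_inf_nbrs = adj @ infected
        p_inf = 1 - (1 - beta) ** n_inf_nbrs
        new_inf = (~infected) & (rng.random(n) < p_inf)
        recovered = infected & (rng.random(n) < mu)
        infected = (infected | new_inf) & ~recovered
    return infected.mean()

low = sis_prevalence(0.2 * beta_c)         # well below threshold: dies out
high = sis_prevalence(5.0 * beta_c)        # well above threshold: endemic
print(low, high)
```

On modular networks like those in the paper, community structure additionally shapes how the endemic state is distributed across communities, which is what the record's modularity-factor analysis addresses.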

  2. Automatic data processing and crustal modeling on Brazilian Seismograph Network

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.; Chimpliganond, C.; Peres Rocha, M.; Franca, G.; Marotta, G. S.; Von Huelsen, M. G.

    2014-12-01

    The Brazilian Seismograph Network (RSBR) is a joint project of four Brazilian research institutions with the support of Petrobras, and its main goals are to monitor seismic activity, generate seismic hazard alerts and provide data for research on Brazilian tectonics and structure. Each institution operates and maintains its own seismic network, sharing data over a virtual private network. These networks have seismic stations transmitting raw data in real time (or near real time) to their respective data centers, where the seismogram files are then shared with the other institutions. Currently RSBR has 57 broadband stations, some of them operating since 1994, transmitting data through mobile phone data networks or satellite links. Station management, data acquisition and storage, and earthquake data processing at the Seismological Observatory of the University of Brasilia are performed automatically by SeisComP3 (SC3). However, SC3 data processing is limited to event detection, location and magnitude. An automatic crustal modeling system was therefore designed to process raw seismograms and generate 1D S-velocity profiles. This system automatically calculates receiver function (RF) traces, the Vp/Vs ratio (H-k stacking) and surface wave dispersion (SWD) curves. These traces and curves are then used to calibrate lithospheric seismic velocity models using a joint inversion scheme. An analyst can review the results, change processing parameters, and select or reject the RF traces and SWD curves used in the model calibration. The results obtained from this system will be used to generate and update a quasi-3D crustal model of Brazil's territory.

  3. Novel Applications of Gas-Phase Analytical Methods to Semiconductor Process Emissions

    NASA Astrophysics Data System (ADS)

    Goolsby, Brian; Vartanian, Victor H.

    2003-09-01

    The semiconductor industry currently faces technical challenges in transistor design as traditional materials used for decades are being driven to their physical limits. High-k materials (k>7 for Si3N4) are being developed as gate oxides for sub-100 nm MOSFETs to prevent electron tunneling between source and drain. Organometallic precursors under consideration could produce hazardous byproducts. Low-k materials (k<3.9 for SiO2) are being developed as insulators or barriers in the dielectric stack to reduce RC time delays and cross talk between adjacent conductors. Precursors containing carbon or fluorine may increase the emission of CF4 during chamber cleans. Heavily doped polysilicon or metals currently in use as gate electrodes may be replaced with metals or metal oxides having greater corrosion resistance or other advantageous properties. All of these new materials must be characterized from the standpoint of process byproduct emissions and abatement performance. Gas-phase analysis is critical to the safe and timely incorporation of these novel materials. Several new applications of Fourier transform infrared spectroscopy (FTIR) are presented, including techniques being applied to address some of the current challenges facing the semiconductor industry. This report describes the characterization of various chemical vapor deposition (CVD) processes. Applications of gas-phase analytical methods to process optimization are also described.

  4. THz spectroscopy: An emerging technology for pharmaceutical development and pharmaceutical Process Analytical Technology (PAT) applications

    NASA Astrophysics Data System (ADS)

    Wu, Huiquan; Khan, Mansoor

    2012-08-01

    As an emerging technology, THz spectroscopy has gained increasing attention in the pharmaceutical area during the last decade. This attention is due to the fact that (1) it provides a promising alternative approach for in-depth understanding of both intermolecular interaction among pharmaceutical molecules and pharmaceutical product quality attributes; (2) it provides a promising alternative approach for enhanced process understanding of certain pharmaceutical manufacturing processes; and (3) it supports the FDA pharmaceutical quality initiatives, most notably the Process Analytical Technology (PAT) initiative. In this work, the current status and progress made so far on using THz spectroscopy for pharmaceutical development and pharmaceutical PAT applications are reviewed. In the spirit of demonstrating the utility of a first-principles modeling approach for addressing the model validation challenge and reducing unnecessary model validation "burden" for facilitating THz pharmaceutical PAT applications, two scientific case studies based on published THz spectroscopy measurement results are created and discussed. Furthermore, other technical challenges and opportunities associated with adapting THz spectroscopy as a pharmaceutical PAT tool are highlighted.

  5. Analytical methods to characterize heterogeneous raw material for thermal spray process: cored wire Inconel 625

    NASA Astrophysics Data System (ADS)

    Lindner, T.; Bonebeau, S.; Drehmann, R.; Grund, T.; Pawlowski, L.; Lampke, T.

    2016-03-01

    In wire arc spraying, the raw material needs to exhibit sufficient formability and ductility in order to be processed. By using an electrically conductive, metallic sheath, it is also possible to handle non-conductive and/or brittle materials such as ceramics. In comparison to a solid wire, a cored wire has a heterogeneous material distribution. Due to this fact, and to the complex thermodynamic processes during wire arc spraying, it is very difficult to predict the resulting chemical composition of the coating with sufficient accuracy. An Inconel 625 cored wire was used to investigate this issue. In a comparative study, the analytical results of the raw material were compared to arc-sprayed coatings and to droplets remelted in an arc furnace under argon atmosphere. Energy-dispersive X-ray spectroscopy (EDX) and X-ray fluorescence (XRF) analysis were used to determine the chemical composition. The phase determination was performed by X-ray diffraction (XRD). The results were related to the manufacturer's specifications and evaluated with respect to differences in chemical composition. The comparison between the feedstock powder, the remelted droplets and the thermally sprayed coatings makes it possible to evaluate the influence of the processing methods on the resulting chemical and phase composition.

  6. Performance analysis for wireless networks: an analytical approach by multifarious Sym Teredo.

    PubMed

    Punithavathani, D Shalini; Radley, Sheryl

    2014-01-01

    IPv4-IPv6 transition poses numerous challenges to the world of the Internet as it drifts from IPv4 to IPv6. The IETF recommends a few transition techniques, including dual stack, translation and tunneling. By tunneling IPv6 packets over IPv4 UDP, Teredo maintains IPv4/IPv6 dual-stack nodes in isolated IPv4 networks behind network address translation (NAT). However, the tunneling protocol must work with both symmetric and asymmetric NATs. In order to make Teredo support several symmetric NATs along with several asymmetric NATs, we propose multifarious Sym Teredo (MTS), an extension of Teredo with the capability of traversing several symmetric NATs. The work preserves the Teredo architecture and also offers backward compatibility with the original Teredo protocol. PMID:25506611

  7. Performance Analysis for Wireless Networks: An Analytical Approach by Multifarious Sym Teredo

    PubMed Central

    Punithavathani, D. Shalini; Radley, Sheryl

    2014-01-01

    IPv4-IPv6 transition poses numerous challenges to the world of the Internet as it drifts from IPv4 to IPv6. The IETF recommends a few transition techniques, including dual stack, translation and tunneling. By tunneling IPv6 packets over IPv4 UDP, Teredo maintains IPv4/IPv6 dual-stack nodes in isolated IPv4 networks behind network address translation (NAT). However, the tunneling protocol must work with both symmetric and asymmetric NATs. In order to make Teredo support several symmetric NATs along with several asymmetric NATs, we propose multifarious Sym Teredo (MTS), an extension of Teredo with the capability of traversing several symmetric NATs. The work preserves the Teredo architecture and also offers backward compatibility with the original Teredo protocol. PMID:25506611

  8. Critical behavior of the contact process in a multiscale network

    NASA Astrophysics Data System (ADS)

    Ferreira, Silvio C.; Martins, Marcelo L.

    2007-09-01

    Inspired by dengue and yellow fever epidemics, we investigated the contact process (CP) in a multiscale network constituted by one-dimensional chains connected through a Barabási-Albert scale-free network. In addition to the CP dynamics inside the chains, the exchange of individuals between connected chains (travels) occurs at a constant rate. A finite epidemic threshold and an epidemic mean lifetime diverging exponentially in the subcritical phase, concomitantly with a power law divergence of the outbreak’s duration, were found. A generalized scaling function involving both regular and SF components was proposed for the quasistationary analysis and the associated critical exponents determined, demonstrating that the CP on this hybrid network with nonvanishing travel rates establishes a new universality class.
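    The sub- and supercritical regimes of a contact process can be sketched in a few lines. The discrete-time update rule and parameter values below are crude illustrative assumptions on a plain 1D chain, not the paper's multiscale network; the continuous-time 1D critical rate is λc ≈ 3.298, and the discrete scheme only reproduces the qualitative transition:

    ```python
    import random

    def contact_process(n, lam, steps, seed=0):
        """1D contact process (discrete-time sketch): pick a random infected
        site; it recovers with prob 1/(1+lam), else tries to infect a random
        nearest neighbour. Returns the final infected density."""
        rng = random.Random(seed)
        infected = set(range(n))  # start fully infected
        for _ in range(steps):
            if not infected:
                break
            site = rng.choice(sorted(infected))
            if rng.random() < 1.0 / (1.0 + lam):
                infected.discard(site)          # recovery
            else:
                nb = site + rng.choice((-1, 1))
                if 0 <= nb < n:
                    infected.add(nb)            # infection attempt
        return len(infected) / n

    # well below vs well above the critical rate: extinction vs endemic state
    print(contact_process(200, 1.0, 20000), contact_process(200, 6.0, 20000))
    ```

    The low-rate run decays toward the absorbing (disease-free) state, while the high-rate run settles near a quasistationary density, the regime probed by the paper's quasistationary analysis.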

  9. Collectivism culture, HIV stigma and social network support in Anhui, China: a path analytic model.

    PubMed

    Zang, Chunpeng; Guida, Jennifer; Sun, Yehuan; Liu, Hongjie

    2014-08-01

    HIV stigma is rooted in culture and, therefore, it is essential to investigate it within the context of culture. The objective of this study was to examine the interrelationships among individualism-collectivism, HIV stigma, and social network support. A social network study was conducted among 118 people living with HIV/AIDS in China, who were infected by commercial plasma donation, a nonstigmatized behavior. The Individualism-Collectivism Interpersonal Assessment Inventory (ICIAI) was used to measure cultural norms and values in the context of three social groups: family members, friends, and neighbors. Path analyses revealed that (1) a higher level of family ICIAI was significantly associated with a higher level of HIV self-stigma (β=0.32); (2) a higher level of friend ICIAI was associated with a lower level of self-stigma (β=-0.35); (3) neighbor ICIAI was associated with public stigma (β=-0.61); (4) self-stigma was associated with social support from neighbors (β=-0.27); and (5) public stigma was associated with social support from neighbors (β=-0.24). This study documents that HIV stigma may mediate the relationship between collectivist culture and social network support, providing an empirical basis for interventions to incorporate aspects of culture into HIV intervention strategies. PMID:24853730

  10. [Changes in positive mood states by analytic and creative information processing].

    PubMed

    Otto, J H; Schmitz, B B

    1993-01-01

    Reviews of research on the influence of mood on behavior show (a) that mainly the influence of mood on behavior was investigated and (b) that performance in memory and cognitive tasks was of central concern (Fiedler, 1988; Isen, 1987). Social behavior was analyzed as a function of these factors. Recent reviews restrict themselves to positive feeling states. Summarizing, Fiedler (1988) describes the information-processing style in positive feeling states as "loosening" to capture its qualitative aspects. This study investigates the opposite direction of influence, i.e., the effect of cognitive style on positive feeling states, and restricts itself to positive feeling states. The compatibility thesis, which postulates a necessary interaction of feeling state and information-processing style, is tested: equivalent states and productions have to go together to generate the mood effects. In a 2 x 2 x 5 mixed design, 70 female students (non-psychologists) served as subjects. Using a 20-minute mood induction procedure (autobiographical recollection methodology), a positive or neutral feeling state was elicited in half of the participants. During the next 10 minutes, half of each group worked either on a verbal creativity test (Schoppe, 1975) or on an intelligence test (Amthauer, 1973) to establish a creative or analytic style of information processing. Repeated measurements served as baseline assessments, manipulation checks, and assessments of the feeling states during completion of the creativity or intelligence test. The feeling states were assessed by means of a short version (BSK-1982) of the "Eigenschaftswörterliste" (Janke & Debus, 1978). The results confirm the compatibility thesis. Only the group in which a positive feeling state and a creative processing style interact reported a positive mood throughout the task completion. Unexpectedly, a slight deterioration of mood was found in the group with a neutral

  11. Epidemic processes over adaptive state-dependent networks

    NASA Astrophysics Data System (ADS)

    Ogura, Masaki; Preciado, Victor M.

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.
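    For context, the standard SIS threshold that the paper's lower bound is proportional to is the inverse spectral radius of the adjacency matrix, (β/δ)* = 1/λmax(A). A minimal sketch computing λmax by power iteration (an assumption for illustration, not the paper's ASIS bound or algorithm):

    ```python
    def spectral_radius(adj, iters=200):
        """Largest adjacency eigenvalue via power iteration on A + I
        (the +I shift avoids oscillation on bipartite graphs)."""
        n = len(adj)
        v = [1.0] * n
        lam = 1.0
        for _ in range(iters):
            w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = max(w)
            v = [x / lam for x in w]
        return lam - 1.0

    # 4-node ring: eigenvalues 2*cos(2*pi*k/4), so lambda_max = 2
    ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
    lam = spectral_radius(ring)
    print(lam, 1.0 / lam)  # SIS threshold beta/delta = 1/lambda_max
    ```

    For the ring this gives a threshold of 0.5; on static networks an infection-to-recovery ratio below this value dies out, which is the baseline the adaptive ASIS bound rescales.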

  12. Epidemic processes over adaptive state-dependent networks.

    PubMed

    Ogura, Masaki; Preciado, Victor M

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures. PMID:27415289

  13. Network Detection in Raster Data Using Marked Point Processes

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Kruse, C.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2016-06-01

    We propose a new approach for the automatic detection of network structures in raster data. The model for the network structure is represented by a graph whose nodes and edges correspond to junction points and connecting line segments, respectively; nodes and edges are further described by certain parameters. We embed this model in the probabilistic framework of marked point processes and determine the most probable configuration of objects by stochastic sampling. That is, different graph configurations are constructed randomly by modifying the graph entity parameters and by adding nodes and edges to, or removing them from, the current graph configuration. Each configuration is then evaluated based on the probabilities of the changes and an energy function describing the conformity with a predefined model. Using the Reversible Jump Markov Chain Monte Carlo sampler, a global optimum of the energy function is determined. We apply our method to the detection of river and tidal channel networks in digital terrain models. In comparison to our previous work, we introduce constraints concerning the flow direction of water into the energy function. Our goal is to analyse the influence of different parameter settings on the results of network detection in both synthetic and real data. Our results show the general potential of our method for the detection of river networks in different types of terrain.
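    The stochastic-sampling idea can be sketched with a simplified birth/death Metropolis sampler over candidate edges; a full reversible-jump sampler with node moves and flow-direction constraints is well beyond this sketch. The energy terms, scores and function names below are hypothetical:

    ```python
    import math
    import random

    def energy(edges, score, penalty=0.5):
        """Toy energy: reward data support for each edge, penalise complexity."""
        return -sum(score[e] for e in edges) + penalty * len(edges)

    def mcmc_detect(candidates, score, steps=2000, temp=0.5, seed=0):
        """Birth/death Metropolis moves over edge configurations; returns the
        lowest-energy configuration visited."""
        rng = random.Random(seed)
        cfg, e_cur = set(), 0.0
        best, e_best = set(), 0.0
        for _ in range(steps):
            prop = cfg ^ {rng.choice(candidates)}  # toggle one edge (birth/death)
            e_prop = energy(prop, score)
            if e_prop < e_cur or rng.random() < math.exp((e_cur - e_prop) / temp):
                cfg, e_cur = prop, e_prop
                if e_cur < e_best:
                    best, e_best = cfg, e_cur
        return best

    candidates = ['a', 'b', 'c', 'd']
    score = {'a': 2.0, 'b': 1.5, 'c': 0.1, 'd': 0.0}  # hypothetical data support
    print(sorted(mcmc_detect(candidates, score)))
    ```

    Edges whose data support exceeds the complexity penalty ('a' and 'b' here) survive in the optimal configuration, mirroring how the energy function keeps only line segments supported by the terrain data.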

  14. Temporal Sequence of Hemispheric Network Activation during Semantic Processing: A Functional Network Connectivity Analysis

    ERIC Educational Resources Information Center

    Assaf, Michal; Jagannathan, Kanchana; Calhoun, Vince; Kraut, Michael; Hart, John, Jr.; Pearlson, Godfrey

    2009-01-01

    To explore the temporal sequence of, and the relationship between, the left and right hemispheres (LH and RH) during semantic memory (SM) processing, we identified the neural networks involved in the performance of a functional MRI semantic object retrieval task (SORT) using group independent component analysis (ICA) in 47 healthy individuals. SORT…

  15. Self-Organized Information Processing in Neuronal Networks: Replacing Layers in Deep Networks by Dynamics

    NASA Astrophysics Data System (ADS)

    Kirst, Christoph

    It is astonishing how the sub-parts of a brain co-act to produce coherent behavior. What are the mechanisms that coordinate information processing and communication, and how can they be changed flexibly to cope with variable contexts? Here we show that when information is encoded in the deviations around a collective dynamical reference state of a recurrent network, the propagation of these fluctuations depends strongly on precisely this underlying reference. Information here 'surfs' on top of the collective dynamics, and switching between states enables fast and flexible rerouting of information. This in turn affects local processing and consequently the global reference dynamics that re-regulate the distribution of information. This provides a generic mechanism for self-organized information processing, as we demonstrate with an oscillatory Hopfield network that performs contextual pattern recognition. Deep neural networks have proven very successful recently. Here we show that generating information channels via collective reference dynamics can effectively compress a deep multi-layer architecture into a single layer, making this mechanism a promising candidate for the organization of information processing in biological neuronal networks.
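    The pattern-recognition substrate mentioned here, a Hopfield network, can be sketched in its standard non-oscillatory form (Hebbian weights, asynchronous ±1 updates); the collective reference dynamics of the talk are not modeled, and the pattern below is an arbitrary example:

    ```python
    import random

    def train_hopfield(patterns):
        """Hebbian weight matrix for binary (+1/-1) patterns, zero diagonal."""
        n = len(patterns[0])
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += p[i] * p[j] / len(patterns)
        return w

    def recall(w, state, sweeps=5, seed=0):
        """Asynchronous updates until the network settles on a stored pattern."""
        rng = random.Random(seed)
        s = list(state)
        n = len(s)
        for _ in range(sweeps):
            for i in rng.sample(range(n), n):  # random update order
                h = sum(w[i][j] * s[j] for j in range(n))
                s[i] = 1 if h >= 0 else -1
        return s

    stored = [[1, 1, 1, 1, -1, -1, -1, -1]]
    w = train_hopfield(stored)
    noisy = [1, 1, -1, 1, -1, -1, -1, -1]  # one bit flipped
    print(recall(w, noisy))
    ```

    The corrupted input relaxes back to the stored pattern, the attractor-based recognition that the talk's oscillatory variant builds on.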

  16. An analytical method for 14C in environmental water based on a wet-oxidation process.

    PubMed

    Huang, Yan-Jun; Guo, Gui-Yin; Wu, Lian-Sheng; Zhang, Bing; Chen, Chao-Feng; Zhang, Hai-Ying; Qin, Hong-Juan; Shang-Guan, Zhi-Hong

    2015-04-01

    An analytical method for (14)C in environmental water based on a wet-oxidation process was developed. The method can be used to determine the activity concentrations of organic and inorganic (14)C, or total (14)C, in environmental water, including drinking water, surface water, rainwater and seawater. Wet-oxidation of the organic component converts organic carbon to an inorganic form, and the inorganic (14)C can then be extracted by acidification and nitrogen purging. Environmental water with a volume of 20 L can be used for the wet-oxidation and extraction, and a detection limit of about 0.02 Bq/g(C) can be achieved for water with a carbon content above 15 mg(C)/L, well below the natural level of (14)C in the environment. The collected carbon is sufficient for measurement with a low-level liquid scintillation counter (LSC) for typical samples. Extraction and recovery experiments for inorganic and organic carbon from typical materials, including analytical reagents such as benzoquinone, sucrose, glutamic acid, nicotinic acid, humic acid and ethanediol, were conducted with excellent results based on measurements with a total organic carbon analyzer and LSC. The recovery rate for inorganic carbon ranged between 98.7% and 99.0% with a mean of 98.9(± 0.1)%, while for organic carbon it ranged between 93.8% and 100.0% with a mean of 97.1(± 2.6)%. Verification and an uncertainty budget of the method are also presented for a representative environmental water. The method is appropriate for (14)C analysis in environmental water and can also be applied to the analysis of liquid effluent from nuclear facilities. PMID:25590997

  17. Investigation of potential analytical methods for redox control of the vitrification process. [Moessbauer

    SciTech Connect

    Goldman, D.S.

    1985-11-01

    An investigation was conducted to evaluate several analytical techniques for measuring ferrous/ferric ratios in simulated and radioactive nuclear waste glasses for eventual redox control of the vitrification process. Redox control will minimize the melt foaming that occurs under highly oxidizing conditions and the metal precipitation that occurs under highly reducing conditions. The analytical method selected must have a rapid response for production problems with minimal complexity and analyst involvement. The wet-chemistry, Moessbauer spectroscopy, glass color analysis, and ion chromatography techniques were explored, with particular emphasis placed on the Moessbauer technique. In general, all of these methods can be used for nonradioactive samples. The Moessbauer method can readily analyze glasses containing uranium and thorium. A shielded container was designed and built to analyze fully radioactive glasses with the Moessbauer spectrometer in a hot cell environment. However, analyses conducted with radioactive waste glasses containing 90Sr and 137Cs were unsuccessful, presumably due to background radiation caused by the samples. The color of glass powder can be used to determine the ferrous/ferric ratio for low-chromium glasses, but this method may not be as precise as the others. Ion chromatography was only tested on nonradioactive glasses, but it appears to have the required precision, owing to its analysis of both Fe2+ and Fe3+ and its anticipated adaptability to radioactive samples. This development would be similar to procedures already in use for shielded inductively coupled plasma emission (ICP) spectrometry. Development of the ion chromatography method is therefore recommended; conventional wet-chemistry is recommended as a backup procedure.

  18. The power of event-driven analytics in Large Scale Data Processing

    ScienceCinema

    None

    2011-04-25

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-source and can be licensed both for non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first one aimed at presenting the general scope of FeedZai's activities, and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing: Introduction to FeedZai; FeedZai Pulse and Complex Event Processing; Demonstration; Use-Cases and Applications; Conclusion and Q&A. 11:00-11:15 Coffee break. 11:15-12:30 FeedZai Pulse Under the Hood: A First FeedZai Pulse Application; PulseQL overview; Defining KPIs and Baselines; Conclusion and Q&A. About the speakers: Nuno Sebastião is the CEO of FeedZai. Having worked for many years at the European Space Agency (ESA), he was responsible for the overall design and development of the agency's satellite simulation infrastructure. Having left ESA to found FeedZai, Nuno is

  19. The power of event-driven analytics in Large Scale Data Processing

    SciTech Connect

    2011-02-24

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-source and can be licensed both for non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first one aimed at presenting the general scope of FeedZai's activities, and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing: Introduction to FeedZai; FeedZai Pulse and Complex Event Processing; Demonstration; Use-Cases and Applications; Conclusion and Q&A. 11:00-11:15 Coffee break. 11:15-12:30 FeedZai Pulse Under the Hood: A First FeedZai Pulse Application; PulseQL overview; Defining KPIs and Baselines; Conclusion and Q&A. About the speakers: Nuno Sebastião is the CEO of FeedZai. Having worked for many years at the European Space Agency (ESA), he was responsible for the overall design and development of the agency's satellite simulation infrastructure. Having left ESA to found FeedZai, Nuno is

  20. Analytical solution and scaling of fluctuations in complex networks traversed by damped, interacting random walkers

    NASA Astrophysics Data System (ADS)

    Hamaneh, Mehdi Bagheri; Haber, Jonah; Yu, Yi-Kuo

    2015-11-01

    A general model for random walks (RWs) on networks is proposed. It incorporates damping and time-dependent links, and it includes standard (undamped, noninteracting) RWs (SRWs), coalescing RWs, and coalescing-branching RWs as special cases. The exact, time-dependent solutions for the average numbers of visits (w) to nodes and their fluctuations (σ²) are given, and the long-term σ-w relation is studied. Although σ ∝ w^{1/2} for SRWs, this power law can be fragile when coalescing-branching interaction is present. Damping, however, often strengthens it, but with an exponent generally different from 1/2.
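    The σ ∝ w^{1/2} relation for standard random walks can be checked numerically. The sketch below uses independent, undamped walkers on a ring graph, an assumption far simpler than the paper's damped, interacting model; it just estimates mean visits w and their spread σ over repeated runs:

    ```python
    import random
    import statistics

    def walk_visits(adj, start, steps, rng):
        """Run one random walk and count visits to each node."""
        visits = {v: 0 for v in adj}
        node = start
        for _ in range(steps):
            node = rng.choice(adj[node])
            visits[node] += 1
        return visits

    # small ring graph: every node has two neighbours
    n = 10
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    rng = random.Random(42)
    runs = [walk_visits(adj, 0, 500, rng) for _ in range(200)]
    for node in (0, 5):
        vals = [r[node] for r in runs]
        w = statistics.mean(vals)
        sigma = statistics.stdev(vals)
        print(node, round(w, 1), round(sigma / w ** 0.5, 2))
    ```

    On the symmetric ring each node is visited about steps/n = 50 times on average, and the printed σ/√w ratio stays of order one, consistent with the SRW scaling that interactions or damping can break.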

  1. Structural damage localization by outlier analysis of signal-processed mode shapes - Analytical and experimental validation

    NASA Astrophysics Data System (ADS)

    Ulriksen, M. D.; Damkilde, L.

    2016-02-01

    Contrary to global modal parameters such as eigenfrequencies, mode shapes inherently provide structural information on a local level. Therefore, this particular modal parameter and its derivatives are utilized extensively for damage identification. Typically, more or less advanced mathematical methods are employed to identify damage-induced discontinuities in the spatial mode shape signals, thereby potentially facilitating damage detection and/or localization. However, by being based on distinguishing damage-induced discontinuities from other signal irregularities, an intrinsic deficiency of these methods is their high sensitivity to measurement noise. In the present paper, a damage localization method which, compared to the conventional mode shape-based methods, has greatly enhanced robustness to measurement noise is proposed. The method is based on signal processing of a spatial mode shape by means of continuous wavelet transformation (CWT) and subsequent application of a generalized discrete Teager-Kaiser energy operator (GDTKEO) to identify damage-induced mode shape discontinuities. In order to evaluate whether the identified discontinuities are in fact damage-induced, outlier analysis is conducted by applying the Mahalanobis metric to the major principal scores of the sensor-located bands of the signal-processed mode shape. The method is tested analytically and benchmarked with other mode shape-based damage localization approaches on the basis of a free-vibrating beam, and validated experimentally on a residential-sized wind turbine blade subjected to an impulse load.
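    The discrete Teager-Kaiser energy operator at the core of the method has the simple form ψ[x(n)] = x(n)² − x(n−1)·x(n+1). A minimal sketch on a synthetic mode-shape-like signal (the CWT stage, the generalization, and the outlier analysis are omitted, and the signal below is an invented example):

    ```python
    def teager_kaiser(x):
        """Discrete Teager-Kaiser energy operator:
        psi[n] = x[n]^2 - x[n-1]*x[n+1], for interior samples."""
        return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

    # linear mode-shape-like ramp with a small damage-induced kink at index 10
    shape = [i * 0.1 for i in range(20)]
    shape[10] += 0.05
    energy = teager_kaiser(shape)
    peak = max(range(len(energy)), key=lambda n: abs(energy[n]))
    print(peak + 1)  # sample index of the largest energy response -> 10
    ```

    On a purely linear signal the operator output is constant, so the kink stands out as a sharp local energy spike at the damage location, which is exactly the discontinuity-highlighting behavior the method exploits.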

  2. Analytic hierarchy process helps select site for limestone quarry expansion in Barbados.

    PubMed

    Dey, Prasanta Kumar; Ramcharan, Eugene K

    2008-09-01

    Site selection is a key activity for quarry expansion to support cement production, and is governed by factors such as resource availability, logistics, costs, and socio-economic-environmental factors. Adequate consideration of all the factors facilitates both industrial productivity and sustainable economic growth. This study illustrates the site selection process that was undertaken for the expansion of limestone quarry operations to support cement production in Barbados. First, alternative sites with adequate resources to support a 25-year development horizon were identified. Second, technical and socio-economic-environmental factors were identified. Third, a database was developed for each site with respect to each factor. Fourth, a hierarchical model was developed in an analytic hierarchy process (AHP) framework. Fifth, the relative ranking of the alternative sites was derived through pairwise comparison at all levels and subsequent synthesis of the results across the hierarchy using computer software (Expert Choice). The study reveals that an integrated framework using the AHP can help select a site for the quarry expansion project in Barbados. PMID:17854976
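    The AHP pairwise-comparison step can be illustrated with the row geometric-mean prioritization method; the 3×3 matrix and the criteria names below are hypothetical, not the study's actual judgments:

    ```python
    def ahp_priorities(matrix):
        """Priority vector via the row geometric-mean method, plus an estimate
        of lambda_max for the consistency index CI = (lambda_max - n)/(n - 1)."""
        n = len(matrix)
        gmeans = []
        for row in matrix:
            g = 1.0
            for v in row:
                g *= v
            gmeans.append(g ** (1.0 / n))
        total = sum(gmeans)
        w = [g / total for g in gmeans]
        # lambda_max approximated as the mean of (M w)_i / w_i
        lam = sum(sum(matrix[i][j] * w[j] for j in range(n)) / w[i]
                  for i in range(n)) / n
        return w, lam

    # hypothetical comparison: resources vs logistics vs environment
    M = [[1, 3, 5],
         [1 / 3, 1, 3],
         [1 / 5, 1 / 3, 1]]
    w, lam = ahp_priorities(M)
    print([round(x, 3) for x in w], round((lam - 3) / 2, 3))
    ```

    The weights sum to one and rank the criteria; a consistency index well below about 0.1 indicates that the pairwise judgments are acceptably consistent, the same check performed by tools such as Expert Choice.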

  3. An Accelerated Analytical Process for the Development of STR Profiles for Casework Samples.

    PubMed

    Laurin, Nancy; Frégeau, Chantal J

    2015-07-01

    Significant efforts are being devoted to the development of methods enabling rapid generation of short tandem repeat (STR) profiles in order to reduce turnaround times for the delivery of human identification results from biological evidence. Some of the proposed solutions are still costly and low throughput. This study describes the optimization of an analytical process enabling the generation of complete STR profiles (single-source or mixed profiles) for human identification in approximately 5 h. This accelerated process uses currently available reagents and standard laboratory equipment. It includes a 30-min lysis step, a 27-min DNA extraction using the Promega Maxwell(®) 16 System, DNA quantification in <1 h using the Qiagen Investigator(®) Quantiplex HYres kit, fast amplification (<26 min) of the loci included in AmpFℓSTR(®) Identifiler(®), and analysis of the profiles on the 3500-series Genetic Analyzer. This combination of fast individual steps produces high-quality profiling results and offers a cost-effective alternative approach to rapid DNA analysis. PMID:25782346

  4. On the location selection problem using analytic hierarchy process and multi-choice goal programming

    NASA Astrophysics Data System (ADS)

    Ho, Hui-Ping; Chang, Ching-Ter; Ku, Cheng-Yuan

    2013-01-01

    Location selection is a crucial decision in the cost/benefit analysis of restaurants, coffee shops and other businesses. However, the problem is difficult to solve because it involves many conflicting goals. To address this, the study integrates the analytic hierarchy process (AHP) and multi-choice goal programming (MCGP) as a decision aid for selecting, from many alternative locations, the property that best suits renters' preferences and needs. The study obtains weights from AHP and applies them to each goal in the MCGP model for the location selection problem. Using the multi-aspiration feature of MCGP, decision makers can set multiple aspiration levels for each location goal in order to rank the candidate locations. Compared to unaided selection processes, the integrated AHP-MCGP approach is a more systematic and efficient method than traditional ones for finding a suitable location to buy or rent for business, especially under multiple qualitative and quantitative criteria and within a shorter evaluation time. In addition, a real case is provided to demonstrate the usefulness of the proposed method. The results show that the proposed method is able to provide better-quality decisions than normal manual methods.

  5. Using neural networks in remote sensing monitoring of exogenous processes

    NASA Astrophysics Data System (ADS)

    Sharapov, Ruslan; Varlamov, Alexey

    2015-03-01

    This paper considers the use of remote sensing for monitoring exogenous geological processes. Satellite observations can be used to detect newly formed landslides, landslips and karst collapses. Practice shows that satellite images of the same area taken at different times can differ significantly from one another. For this reason, the images must be corrected to bring them to a common form, removing the impact of changing weather conditions, etc. In addition, clouds must be detected in the images, since they interfere with image analysis. Only after these steps can manifestations of exogenous processes be detected. Neural networks can be used both for image correction and for object detection. The paper presents an algorithm for image correction and the structure of a suitable neural network.

  6. Process for forming synapses in neural networks and resistor therefor

    DOEpatents

    Fu, Chi Y.

    1996-01-01

    Customizable neural network in which one or more resistors form each synapse. All the resistors in the synaptic array are identical, thus simplifying the processing issues. Highly doped, amorphous silicon is used as the resistor material, to create extremely high resistances occupying very small spaces. Connected in series with each resistor in the array is at least one severable conductor whose uppermost layer has a lower reflectivity of laser energy than typical metal conductors at a desired laser wavelength.

  7. Process for forming synapses in neural networks and resistor therefor

    DOEpatents

    Fu, C.Y.

    1996-07-23

    Customizable neural network in which one or more resistors form each synapse is disclosed. All the resistors in the synaptic array are identical, thus simplifying the processing issues. Highly doped, amorphous silicon is used as the resistor material, to create extremely high resistances occupying very small spaces. Connected in series with each resistor in the array is at least one severable conductor whose uppermost layer has a lower reflectivity of laser energy than typical metal conductors at a desired laser wavelength. 5 figs.

  8. Understanding disease processes by partitioned dynamic Bayesian networks.

    PubMed

    Bueno, Marcos L P; Hommersom, Arjen; Lucas, Peter J F; Lappenschaar, Martijn; Janzing, Joost G E

    2016-06-01

    For many clinical problems in patients the underlying pathophysiological process changes in the course of time as a result of medical interventions. In model building for such problems, the typical scarcity of data in a clinical setting has often been compensated by utilizing time-homogeneous models, such as dynamic Bayesian networks. As a consequence, the specificities of the underlying process are lost in the obtained models. In the current work, we propose the new concept of partitioned dynamic Bayesian networks to capture distribution regime changes, i.e. time non-homogeneity, benefiting from an intuitive and compact representation with the solid theoretical foundation of Bayesian network models. In order to balance specificity and simplicity in real-world scenarios, we propose a heuristic algorithm to search for and learn these non-homogeneous models, taking into account a preference for less complex models. An extensive set of experiments was run; the simulation experiments show that the heuristic algorithm was capable of constructing well-suited solutions, in terms of goodness of fit and statistical distance to the original distributions, in consonance with the underlying processes that generated the data, whether homogeneous or non-homogeneous. Finally, a case study on psychotic depression was conducted using non-homogeneous models learned by the heuristic, leading to insightful answers to clinically relevant questions concerning the dynamics of this mental disorder. PMID:27182055

  9. Optimising chemical named entity recognition with pre-processing analytics, knowledge-rich features and heuristics

    PubMed Central

    2015-01-01

    Background The development of robust methods for chemical named entity recognition, a challenging natural language processing task, was previously hindered by the lack of publicly available, large-scale, gold standard corpora. The recent public release of a large chemical entity-annotated corpus as a resource for the CHEMDNER track of the Fourth BioCreative Challenge Evaluation (BioCreative IV) workshop greatly alleviated this problem and allowed us to develop a conditional random fields-based chemical entity recogniser. In order to optimise its performance, we introduced customisations in various aspects of our solution. These include the selection of specialised pre-processing analytics, the incorporation of chemistry knowledge-rich features in the training and application of the statistical model, and the addition of post-processing rules. Results Our evaluation shows that optimal performance is obtained when our customisations are integrated into the chemical entity recogniser. When its performance is compared with that of state-of-the-art methods, under comparable experimental settings, our solution achieves a competitive advantage. We also show that our recogniser, which uses a model trained on the CHEMDNER corpus, is suitable for recognising names in a wide range of corpora, consistently outperforming two popular chemical NER tools. Conclusion The contributions resulting from this work are two-fold. Firstly, we present the details of a chemical entity recognition methodology that has demonstrated performance at a competitive, if not superior, level to that of state-of-the-art methods. Secondly, the developed suite of solutions has been made publicly available as a configurable workflow in the interoperable text mining workbench Argo. This allows interested users to conveniently apply and evaluate our solutions in the context of other chemical text mining tasks. PMID:25810777

  10. A combined approach of simulation and analytic hierarchy process in assessing production facility layouts

    NASA Astrophysics Data System (ADS)

    Ramli, Razamin; Cheng, Kok-Min

    2014-07-01

    Layout design and the material transportation system (conveyor system) are important areas of concern in achieving a competitive level of productivity in a manufacturing system. However, changes in customers' requirements have triggered the need to design alternatives to the existing manufacturing layout. Hence, this paper discusses effective alternatives for the process layout, specifically the conveyor system layout. Two alternative designs for the conveyor system were proposed with the aims of increasing production output and minimizing space allocation: the first includes the installation of a conveyor oven in the relevant manufacturing room based on priority, and the second omits the conveyor oven. Simulation was employed to design the new facility layouts, and simulation experiments were conducted to understand the performance of each conveyor layout design based on operational characteristics, including predicted output. Using the Analytic Hierarchy Process (AHP), the new and improved layout designs were assessed before the final selection was made; for comparison, the existing conveyor system layout was included in the assessment. The relevant criteria in this layout design problem were identified as (i) the space usage of each design, (ii) operator utilization rates, (iii) return on investment (ROI) of the layout, and (iv) output of the layout. In the final stage of the AHP analysis, the overall priority of each alternative layout was obtained, and the management's final selection was made based on the highest priority value. Such efficient planning and design of facility layouts in a manufacturing setting can minimize material handling costs, minimize overall production time, minimize investment in equipment, and optimize the utilization of space.

  11. Cultural congruence with psychotherapy efficacy: A network meta-analytic examination in China.

    PubMed

    Xu, Hui; Tracey, Terence J G

    2016-04-01

    We used network meta-analysis to examine the relative efficacy of 3 treatment modalities in China (i.e., cognitive-psychoeducational therapy, humanistic-experiential therapy, and indigenous therapy) on the basis of a comprehensive review of randomized control trials (n = 235). The cultural congruence hypothesis derived from the contextual model argues that psychotherapy efficacy varies with the extent to which therapy modalities match the cultural context in their description of pathology and healing. Given the experiential-subjective emphasis of Chinese culture, we proposed that indigenous therapy and humanistic-experiential therapy would be more effective than cognitive-psychoeducational therapy. Results based on indirect and direct comparisons supported the hypothesized differences in effectiveness: treatments that more closely matched Chinese understandings of pathology and change experience were more effective. The practical and theoretical implications of the present study are discussed along with its limitations. PMID:26914062

  12. Cancer diagnostics using neural network sorting of processed images

    NASA Astrophysics Data System (ADS)

    Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.

    1996-03-01

    A combination of image processing and neural network sorting was used to demonstrate the feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points bounding the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data were taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; in total, twenty data points were generated per assessed cell. These data point sets, designated neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those showing evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives, with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.

  13. Graphics processing unit-based alignment of protein interaction networks.

    PubMed

    Xie, Jiang; Zhou, Zhonghua; Ma, Jin; Xiang, Chaojuan; Nie, Qing; Zhang, Wu

    2015-08-01

    Network alignment is an important bridge to understanding human protein-protein interactions (PPIs) and functions through model organisms. However, the underlying subgraph isomorphism problem complicates and increases the time required to align protein interaction networks (PINs). Parallel computing technology is an effective solution to the challenge of aligning large-scale networks via sequential computing. In this study, the typical Hungarian-Greedy Algorithm (HGA) is used as an example for PIN alignment. The authors propose a HGA with 2-nearest neighbours (HGA-2N) and implement its graphics processing unit (GPU) acceleration. Numerical experiments demonstrate that HGA-2N can find alignments that are close to those found by HGA while dramatically reducing computing time. The GPU implementation of HGA-2N optimises the parallel pattern, computing mode and storage mode and it improves the computing time ratio between the CPU and GPU compared with HGA when large-scale networks are considered. By using HGA-2N in GPUs, conserved PPIs can be observed, and potential PPIs can be predicted. Among the predictions based on 25 common Gene Ontology terms, 42.8% can be found in the Human Protein Reference Database. Furthermore, a new method of reconstructing phylogenetic trees is introduced, which shows the same relationships among five herpes viruses that are obtained using other methods. PMID:26243827
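
The greedy stage of such an alignment can be sketched as repeatedly matching the highest-similarity unmatched node pair across the two networks (this illustrates only the generic greedy matching idea, not the paper's HGA-2N heuristic or its GPU implementation):

```python
import numpy as np

def greedy_align(S):
    """Greedily match the highest-similarity unmatched node pair.
    S[i, j] = similarity of node i in network A to node j in network B."""
    S = S.astype(float).copy()
    pairs = []
    for _ in range(min(S.shape)):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        if np.isneginf(S[i, j]):
            break
        pairs.append((int(i), int(j)))
        S[i, :] = -np.inf   # node i of A is now matched
        S[:, j] = -np.inf   # node j of B is now matched
    return pairs

# Toy similarity matrix between two 3-node protein interaction networks
S = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.4],
              [0.1, 0.3, 0.7]])
print(greedy_align(S))  # -> [(0, 0), (1, 1), (2, 2)]
```

The Hungarian stage solves the same assignment problem optimally; the greedy pass trades optimality for speed, which is why restricting candidates (as in HGA-2N's 2-nearest-neighbour idea) and GPU parallelism pay off at scale.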

  14. Form, Function, and Information Processing in Stochastic Regulatory Networks

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2009-03-01

    The ability of a biological network to transduce signals, e.g., from chemical information about the abundance of small molecules into regulatory information about the rate of mRNA expression, is thwarted by numerous sources of noise. A great amount has been learned and conjectured in the last decade about the extent to which the form of a network --- specified by the connectivity and sign of regulation --- constrains or guides the network's function --- the particular noisy input-output relation(s) the network is capable of executing. In parallel, a great amount of research has sought to elucidate the role of inescapable or 'intrinsic' noise arising from the finite copy number of the participating molecules, which sets physical limits on information processing in small cells. I'll discuss how information theory may help illuminate these topics by providing a framework for quantifying function which does not rely on specifying the particular task to be performed a priori, as well as by providing a measure of the extent to which form follows function. En route I hope to show how stochastic chemical kinetics, modeled by the (linear) master equation describing the probability of copy counts for all reactants, benefits from the same spectral approaches fundamental to solving the (linear) diffusion equation.

  15. Pattern-recalling processes in quantum Hopfield networks far from saturation

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-ichi

    2011-05-01

    As a mathematical model of associative memory, the Hopfield model is now well established, and many studies have sought to reveal its pattern-recalling process from various approaches. As is well known, a single neuron is itself an uncertain, noisy unit with a finite, non-negligible error in its input-output relation. To model this situation artificially, a kind of 'heat bath' surrounding the neurons is introduced. The heat bath, which is a source of noise, is specified by a 'temperature'. Several studies of the pattern-recalling processes of the Hopfield model governed by Glauber dynamics at finite temperature have already been reported. However, the 'thermal noise' can also be extended to a quantum-mechanical variant. In this paper, in terms of the stochastic process of the quantum-mechanical Markov chain Monte Carlo method (quantum MCMC), we analytically derive macroscopically deterministic equations for order parameters such as the 'overlap' in a quantum-mechanical variant of the Hopfield neural network (the quantum Hopfield model, or quantum Hopfield network). For the case in which a non-extensive number p of patterns is embedded via asymmetric Hebbian connections, namely p/N → 0 as the number of neurons N → ∞ ('far from saturation'), we evaluate the recalling processes for one of the built-in patterns under the influence of quantum-mechanical noise.
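
For intuition, the classical zero-temperature analogue of recall far from saturation can be sketched in a few lines; the overlap order parameter m measures recall quality. This is the standard Hebbian setup with deterministic dynamics, not the paper's quantum MCMC, and the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 500, 3                        # p/N ~ 0: "far from saturation"
xi = rng.choice([-1, 1], size=(p, N))
J = (xi.T @ xi) / N                  # Hebbian couplings
np.fill_diagonal(J, 0.0)

# Noisy cue: pattern 0 with roughly 30% of spins flipped
s = xi[0].astype(float).copy()
s[rng.random(N) < 0.3] *= -1.0

for _ in range(20):                  # zero-noise (T = 0) synchronous dynamics
    s = np.sign(J @ s)
    s[s == 0] = 1.0

overlap = float(s @ xi[0]) / N       # order parameter m = (1/N) sum_i s_i xi^0_i
print(round(overlap, 2))             # close to 1.0: the pattern is recalled
```

In the quantum variant studied in the paper, thermal spin flips are replaced by quantum fluctuations, and the analogous overlap equations are derived analytically rather than simulated.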

  16. Efficient Signal Processing in Random Networks that Generate Variability: A Comparison of Internally Generated and Externally Induced Variability

    NASA Astrophysics Data System (ADS)

    Dasgupta, Sakyasingha; Nishikawa, Isao; Aihara, Kazuyuki; Toyoizumi, Taro

    The source of cortical variability and its influence on signal processing remain open questions. We address the latter by studying two types of balanced, randomly connected networks of quadratic integrate-and-fire neurons with irregular spontaneous activity: (a) a deterministic network with strong connections generating noise through chaotic dynamics, and (b) a stochastic network with weak connections receiving noisy input. Both are analytically tractable in the limit of large network size and channel time constant. Despite the different sources of noise, the spontaneous activity of these networks is identical unless a majority of neurons are recorded simultaneously. However, the two networks show remarkably different sensitivity to external stimuli. In the former, input reverberates internally and can be read out over a long time, whereas in the latter, inputs decay rapidly. This difference is further enhanced by activity-dependent plasticity at input synapses, producing a marked difference in decoding inputs from neural activity. We show that this leads to distinct performance of the two networks in integrating temporally separated signals from multiple sources, with the activity of the deterministic chaotic network serving as a reservoir for Monte Carlo sampling to perform near-optimal Bayesian integration, unlike its stochastic counterpart.

  17. Internal quality control system for non-stationary, non-ergodic analytical processes based upon exponentially weighted estimation of process means and process standard deviation.

    PubMed

    Jansen, Rob T P; Laeven, Mark; Kardol, Wim

    2002-06-01

    The analytical processes in clinical laboratories should be considered non-stationary, non-ergodic and probably non-stochastic processes. Both the process mean and the process standard deviation vary, and the variation can differ at different concentration levels. This behavior is shown in five examples from different analytical systems: alkaline phosphatase on the Hitachi 911 analyzer (Roche), vitamin B12 on the Access analyzer (Beckman), prothrombin time and activated partial thromboplastin time on the STA Compact analyzer (Roche), and PO2 on the ABL 520 analyzer (Radiometer). A model is proposed to assess the status of a process. An exponentially weighted moving average and standard deviation were used to estimate the process mean and standard deviation. Process means were estimated overall and for each control level. The process standard deviation was estimated in terms of the within-run standard deviation. Limits were defined in accordance with state-of-the-art or biological-variance-derived cut-offs. The examples given are real, not simulated, data. Individual control sample results were normalized to a target value and target standard deviation, and the normalized values were used in the exponentially weighted algorithm. The weighting factor was based on a process time constant, which was estimated from the period between two calibration or maintenance procedures. The proposed system was compared with the Westgard rules. The Westgard rules perform well, despite the underlying presumption of ergodicity; this is mainly due to the starting rule of 12s, which proves essential to prevent a large number of rule violations. The probability of reporting a test result with an analytical error exceeding the total allowable error was calculated for the proposed system as well as for the Westgard rules; the proposed method performed better. The proposed algorithm was implemented in a computer program running on the computers to which the analyzers were connected.
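
The core of such a scheme, an exponentially weighted estimate compared against control limits, can be sketched on normalized control results. The weighting factor lam and limit multiplier L below are illustrative defaults, not the paper's time-constant-derived values:

```python
import math

def ewma_qc(z, lam=0.1, L=3.0):
    """Flag normalized QC results z = (x - target) / target_SD whose
    exponentially weighted moving average leaves +/- L times the
    asymptotic EWMA standard error sqrt(lam / (2 - lam))."""
    limit = L * math.sqrt(lam / (2.0 - lam))
    ewma, flags = 0.0, []
    for zi in z:
        ewma = lam * zi + (1.0 - lam) * ewma
        flags.append(abs(ewma) > limit)
    return flags

# 50 in-control results followed by a sustained 1.5 SD shift
z = [0.0] * 50 + [1.5] * 20
flags = ewma_qc(z)
print(flags.index(True))  # -> 55: the shift starting at index 50 is flagged 6 results later
```

Because the average forgets old results at a rate set by lam, the same mechanism can track a slowly drifting (non-stationary) process mean, which is the motivation in the abstract.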

  18. Fuzzy Analytic Hierarchy Process-based Chinese Resident Best Fitness Behavior Method Research

    PubMed Central

    Wang, Dapeng; Zhang, Lan

    2015-01-01

    With the explosive development of the Chinese economy, science and technology, people's pursuit of health has become increasingly intense, and sports fitness activities among Chinese residents have developed rapidly. However, different fitness activities vary in popularity and in their effects on bodily energy consumption. On this basis, the paper studies fitness behaviors and derives an exercise guide for Chinese residents' sports fitness behaviors, providing guidance for implementing the national fitness plan and putting resident fitness on a more scientific footing. Starting from the perspective of energy consumption, the study mainly adopts an empirical method: it determines the energy consumption of Chinese residents' favorite sports fitness activities by observing the energy consumption of various fitness behaviors, and applies a fuzzy analytic hierarchy process to evaluate seven fitness activities: bicycle riding, shadowboxing, swimming, rope skipping, jogging, running, and aerobics. By calculating the memberships of the fuzzy rating model and comparing their magnitudes, it identifies the fitness behaviors that are more beneficial to residents' health, more effective, and more popular. The study concludes that swimming is the best exercise mode, with the highest membership; the memberships of running, rope skipping, and shadowboxing are also relatively high. Individuals should combine several of these fitness activities according to their physical and living conditions to better achieve the purpose of fitness exercise. PMID:26981163

  19. Fuzzy Analytic Hierarchy Process-based Chinese Resident Best Fitness Behavior Method Research.

    PubMed

    Wang, Dapeng; Zhang, Lan

    2015-01-01

    With the explosive development of the Chinese economy, science and technology, people's pursuit of health has become increasingly intense, and sports fitness activities among Chinese residents have developed rapidly. However, different fitness activities vary in popularity and in their effects on bodily energy consumption. On this basis, the paper studies fitness behaviors and derives an exercise guide for Chinese residents' sports fitness behaviors, providing guidance for implementing the national fitness plan and putting resident fitness on a more scientific footing. Starting from the perspective of energy consumption, the study mainly adopts an empirical method: it determines the energy consumption of Chinese residents' favorite sports fitness activities by observing the energy consumption of various fitness behaviors, and applies a fuzzy analytic hierarchy process to evaluate seven fitness activities: bicycle riding, shadowboxing, swimming, rope skipping, jogging, running, and aerobics. By calculating the memberships of the fuzzy rating model and comparing their magnitudes, it identifies the fitness behaviors that are more beneficial to residents' health, more effective, and more popular. The study concludes that swimming is the best exercise mode, with the highest membership; the memberships of running, rope skipping, and shadowboxing are also relatively high. Individuals should combine several of these fitness activities according to their physical and living conditions to better achieve the purpose of fitness exercise. PMID:26981163

  20. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture. PMID:27441149

  1. Evaluation of generic types of drilling fluid using a risk-based analytic hierarchy process.

    PubMed

    Sadiq, Rehan; Husain, Tahir; Veitch, Brian; Bose, Neil

    2003-12-01

    The composition of drilling muds is based on a mixture of clays and additives in a base fluid. There are three generic categories of base fluid--water, oil, and synthetic. Water-based fluids (WBFs) are relatively environmentally benign, but drilling performance is better with oil-based fluids (OBFs). The oil and gas industry developed synthetic-based fluids (SBFs), such as vegetable esters, olefins, ethers, and others, which provide drilling performance comparable to OBFs, but with lower environmental and occupational health effects. The primary objective of this paper is to present a methodology to guide decision-making in the selection and evaluation of three generic types of drilling fluids using a risk-based analytic hierarchy process (AHP). In this paper a comparison of drilling fluids is made considering various activities involved in the life cycle of drilling fluids. This paper evaluates OBFs, WBFs, and SBFs based on four major impacts--operations, resources, economics, and liabilities. Four major activities--drilling, discharging offshore, loading and transporting, and disposing onshore--cause the operational impacts. Each activity involves risks related to occupational injuries (safety), general public health, environmental impact, and energy use. A multicriteria analysis strategy was used for the selection and evaluation of drilling fluids using a risk-based AHP. A four-level hierarchical structure is developed to determine the final relative scores, and the SBFs are found to be the best option. PMID:15160901

  2. Combined surface analytical methods to characterize degradative processes in anti-stiction films in MEMS devices.

    SciTech Connect

    Tallant, David Robert; Zavadil, Kevin Robert; Ohlhausen, James Anthony; Hankins, Matthew Granholm; Kent, Michael Stuart

    2005-03-01

    The performance and reliability of microelectromechanical (MEMS) devices can be highly dependent on the control of the surface energetics in these structures. Examples of this sensitivity include the use of surface modifying chemistries to control stiction, to minimize friction and wear, and to preserve favorable electrical characteristics in surface micromachined structures. Silane modification of surfaces is one classic approach to controlling stiction in Si-based devices. The time-dependent efficacy of this modifying treatment has traditionally been evaluated by studying the impact of accelerated aging on device performance and conducting subsequent failure analysis. Our interest has been in identifying aging related chemical signatures that represent the early stages of processes like silane displacement or chemical modification that eventually lead to device performance changes. We employ a series of classic surface characterization techniques along with multivariate statistical methods to study subtle changes in the silanized silicon surface and relate these to degradation mechanisms. Examples include the use of spatially resolved time-of-flight secondary ion mass spectrometric, photoelectron spectroscopic, photoluminescence imaging, and scanning probe microscopic techniques to explore the penetration of water through a silane monolayer, the incorporation of contaminant species into a silane monolayer, and local displacement of silane molecules from the Si surface. We have applied this analytical methodology at the Si coupon level up to MEMS devices. This approach can be generalized to other chemical systems to address issues of new materials integration into micro- and nano-scale systems.

  3. Markov-CA model using analytical hierarchy process and multiregression technique

    NASA Astrophysics Data System (ADS)

    Omar, N. Q.; Sanusi, S. A. M.; Hussin, W. M. W.; Samat, N.; Mohammed, K. S.

    2014-06-01

    The unprecedented increase in population and the rapid rate of urbanisation have led to extensive land use changes. Cellular automata (CA) are increasingly used to simulate a variety of urban dynamics. This paper introduces a new CA based on an integrated model built on multiple regression and multi-criteria evaluation to improve the representation of the CA transition rule. The multi-criteria evaluation is implemented by utilising data on the environmental and socioeconomic factors in the study area to produce suitability maps (SMs) using the analytical hierarchy process, a well-known method, for the period from 1984 to 2010 under different decision-making scenarios; these maps then condition the next step of CA generation. The suitability maps are compared in order to find the best map based on the regression coefficient (R2), a comparison that can help stakeholders make better decisions. The resulting suitability map thus provides a predefined transition rule for the final step of the CA model. The approach used in this study highlights a mechanism for monitoring and evaluating land-use and land-cover changes in Kirkuk city, Iraq, owing to changes in government structures, wars, and an economic blockade over the past decades. The present study demonstrates the high applicability and flexibility of the Markov-CA model, and the results show that the model and its interrelated concepts perform rather well.
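
The Markov component of such a model is a transition matrix estimated by row-normalizing the cross-tabulation of two co-registered land-use maps. A minimal sketch with hypothetical two-class maps (not the Kirkuk data, and omitting the AHP suitability weighting and the CA neighbourhood rule):

```python
import numpy as np

def transition_matrix(lu_t0, lu_t1, n_classes):
    """Row-normalized cross-tabulation of two co-registered categorical maps:
    P[a, b] = probability that a cell in class a at t0 is in class b at t1."""
    P = np.zeros((n_classes, n_classes))
    for a, b in zip(lu_t0.ravel(), lu_t1.ravel()):
        P[a, b] += 1.0
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

# Hypothetical 3x3 maps at two dates: 0 = open land, 1 = urban
t0 = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 0]])
t1 = np.array([[0, 1, 1], [0, 1, 1], [0, 1, 0]])
P = transition_matrix(t0, t1, 2)
print(P)  # open land urbanizes with p = 1/3 and persists with p = 2/3; urban persists
```

In a full Markov-CA model these probabilities give the expected class demands, while the suitability maps and CA neighbourhood rules decide where on the map each transition is allocated.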

  4. Estimation of the soil strength parameters in Tertiary volcanic regolith (NE Turkey) using analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Ersoy, Hakan; Karsli, Melek Betül; Çellek, Seda; Kul, Bilgehan; Baykan, İdris; Parsons, Robert L.

    2013-12-01

    Costly and time-consuming testing techniques, and the difficulty of obtaining undisturbed samples for these tests, have led researchers to estimate the strength parameters of soils from simple index tests. This paper focuses on estimating the strength parameters of soils as a function of their index properties. A methodology based on the analytical hierarchy process and multiple regression analysis was applied to datasets obtained from soil tests on 41 samples of Tertiary volcanic regolith. While the hierarchy model identified the index properties with the greatest influence on the strength parameters, regression analysis established meaningful relationships between strength parameters and index properties. Negative polynomial correlations between the friction angle and plasticity properties, and positive exponential relations between the cohesion and plasticity properties, were determined; these relations are characterized by a regression coefficient of 0.80. Terzaghi bearing capacity formulas were then used to test the model, by checking whether there is a statistically significant relation between the calculated and the observed bearing capacity values. Based on the model, a positive linear correlation, characterized by a regression coefficient of 0.86, was determined between bearing capacity values obtained by the direct and indirect methods.
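
The two regression forms mentioned above (a negative polynomial for friction angle vs. plasticity and a positive exponential for cohesion vs. plasticity) can be sketched as below. The data points are hypothetical placeholders, not the paper's 41-sample dataset; the fitting approach is standard least squares.

```python
import numpy as np

# hypothetical index-property data: plasticity index PI (%) vs. strength
pi  = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
phi = np.array([34.0, 31.0, 28.0, 26.0, 25.0, 24.5])  # friction angle (deg)
c   = np.array([12.0, 15.0, 19.0, 25.0, 32.0, 41.0])  # cohesion (kPa)

def r2(y, y_hat):
    """Coefficient of determination of a fit."""
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

# second-order polynomial fit: phi decreases with PI
poly = np.polyfit(pi, phi, 2)
phi_hat = np.polyval(poly, pi)

# exponential fit c = A*exp(B*PI), linearised as ln c = ln A + B*PI
B, lnA = np.polyfit(pi, np.log(c), 1)
c_hat = np.exp(lnA + B * pi)
```

With real test data, the `r2` values play the role of the 0.80 regression coefficient reported in the abstract.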

  5. Approach of Decision Making Based on the Analytic Hierarchy Process for Urban Landscape Management

    NASA Astrophysics Data System (ADS)

    Srdjevic, Zorica; Lakicevic, Milena; Srdjevic, Bojan

    2013-03-01

    This paper proposes a two-stage group decision making approach to urban landscape management and planning supported by the analytic hierarchy process. The proposed approach combines an application of the consensus convergence model and the weighted geometric mean method. The application of the proposed approach is shown on a real urban landscape planning problem with a park-forest in Belgrade, Serbia. Decision makers were policy makers, i.e., representatives of several key national and municipal institutions, and experts coming from different scientific fields. As a result, the most suitable management plan from the set of plans is recognized. It includes both native vegetation renewal in degraded areas of park-forest and continued maintenance of its dominant tourism function. Decision makers included in this research consider the approach to be transparent and useful for addressing landscape management tasks. The central idea of this paper can be understood in a broader sense and easily applied to other decision making problems in various scientific fields.
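
The weighted geometric mean method mentioned above aggregates the pairwise-comparison judgments of several decision makers into one group judgment. A minimal sketch follows; the expert weights (which in the paper come from the consensus convergence model) are hypothetical values here.

```python
from math import prod

def aggregate_judgment(judgments, expert_weights):
    """Weighted geometric mean of one pairwise-comparison entry a_ij
    across decision makers; expert weights must sum to 1."""
    assert abs(sum(expert_weights) - 1.0) < 1e-9
    return prod(a ** w for a, w in zip(judgments, expert_weights))
```

A useful property of this aggregation is that it preserves the reciprocity of AHP matrices: aggregating the entries a_ji = 1/a_ij yields exactly the reciprocal of the aggregated a_ij.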

  6. Empirical investigation of radiologists' priorities for PACS selection: an analytical hierarchy process approach.

    PubMed

    Joshi, Vivek; Lee, Kyootai; Melson, David; Narra, Vamsi R

    2011-08-01

    Picture archiving and communication systems (PACS) are being widely adopted in radiology practice. The objective of this study was to find radiologists' perspective on the relative importance of the required features when selecting or developing a PACS. Important features for PACS were identified based on the literature and consultation/interviews with radiologists. These features were categorized and organized into a logical hierarchy consisting of the main dimensions and sub-dimensions. An online survey was conducted to obtain data from 58 radiologists about their relative preferences. Analytical hierarchy process methodology was used to determine the relative priority weights for different dimensions along with the consistency of responses. System continuity and functionality was found to be the most important dimension, followed by system performance and architecture, user interface for workflow management, user interface for image manipulation, and display quality. Among the sub-dimensions, the top two features were: security, backup, and downtime prevention; and voice recognition, transcription, and reporting. Structured reporting was also given very high priority. The results point to the dimensions that can be critical discriminators between different PACS and highlight the importance of faster integration of the emerging developments in radiology into PACS. PMID:20824302
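
The AHP computation behind such priority weights, i.e. extracting the principal eigenvector of a pairwise-comparison matrix and checking the consistency of responses, can be sketched as follows. The 3x3 matrix is a hypothetical illustration, not the survey's actual dimension hierarchy.

```python
def ahp_priorities(M, iters=100):
    """Priority weights (principal eigenvector, via power iteration) and
    consistency ratio of a pairwise-comparison matrix M."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # principal eigenvalue estimate lambda_max
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices
    ci = (lam - n) / (n - 1)
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

# a perfectly consistent matrix built from weights (0.6, 0.3, 0.1)
M = [[1.0, 2.0, 6.0],
     [0.5, 1.0, 3.0],
     [1.0 / 6.0, 1.0 / 3.0, 1.0]]
weights, cr = ahp_priorities(M)
```

In practice a consistency ratio below about 0.1 is taken as acceptable; responses above that are usually revisited.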

  7. Spatial Analytic Hierarchy Process Model for Flood Forecasting: An Integrated Approach

    NASA Astrophysics Data System (ADS)

    Nasir Matori, Abd; Umar Lawal, Dano; Yusof, Khamaruzaman Wan; Hashim, Mustafa Ahmad; Balogun, Abdul-Lateef

    2014-06-01

    Various flood-influencing factors such as rainfall, geology, slope gradient, land use, soil type, drainage density, temperature, etc. are generally considered for flood hazard assessment. However, lack of appropriate handling/integration of data from different sources is a challenge that can make any spatial forecasting difficult and inaccurate. Availability of accurate flood maps and a thorough understanding of the subsurface conditions can greatly enhance flood disaster management. This study presents an approach that attempts to overcome this drawback by using a Geographic Information System (GIS)-based Analytic Hierarchy Process (AHP) model as a spatial forecasting tool. To achieve the set objectives, flood-susceptible zones in the study area were forecast. Five criteria/factors believed to influence flood generation in the study area were selected. Priority weights were assigned to each criterion/factor based on Saaty's nine-point scale of preference, and the weights were then normalized through the AHP. The model was integrated into a GIS in order to produce a flood forecasting map.
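
The GIS overlay step described above is essentially a weighted linear combination of normalised criterion rasters, applied cell by cell. A minimal sketch for a single cell follows; the scores, weights, and class breaks are hypothetical, not the study's calibrated values.

```python
def normalize(weights):
    """Normalise raw priority weights so they sum to 1 (as in AHP)."""
    total = sum(weights)
    return [w / total for w in weights]

def susceptibility(scores, weights):
    """Weighted linear combination of criterion scores (each scaled 0-1)
    for one raster cell."""
    return sum(s * w for s, w in zip(scores, normalize(weights)))

def classify(value, low=0.33, high=0.66):
    """Map a susceptibility value to a flood-risk class."""
    return "low" if value < low else ("moderate" if value < high else "high")
```

Running `susceptibility` over every cell of the five factor rasters, then `classify`, yields a flood forecasting map of the kind described.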

  8. The Prioritization of Clinical Risk Factors of Obstructive Sleep Apnea Severity Using Fuzzy Analytic Hierarchy Process

    PubMed Central

    Maranate, Thaya; Pongpullponsak, Adisak; Ruttanaumpawan, Pimon

    2015-01-01

    Recently, there has been a shortage of sleep laboratories that can accommodate patients in a timely manner. Delayed diagnosis and treatment may lead to worse outcomes, particularly in patients with severe obstructive sleep apnea (OSA). For this reason, prioritization in the polysomnography (PSG) queue should be based on disease severity. To date, there have been conflicting data on whether clinical information can predict OSA severity. A total of 1,042 suspected OSA patients underwent diagnostic PSG study at Siriraj Sleep Center during 2010-2011. A total of 113 variables were obtained from sleep questionnaires and anthropometric measurements. Nineteen groups of clinical risk factors, consisting of 42 variables, were categorized for each OSA severity level. This study aimed to rank these factors by employing a Fuzzy Analytic Hierarchy Process approach based on a normalized weight vector. The results revealed that the first-ranked clinical risk factor in the Severe, Moderate, Mild, and No OSA groups was nighttime symptoms. The overall sensitivity/specificity of the approach for these groups was 92.32%/91.76%, 89.52%/88.18%, 91.08%/84.58%, and 96.49%/81.23%, respectively. We propose that urgent PSG appointments should be granted on the basis of the clinical risk factors of the Severe OSA group. In addition, screening to distinguish Mild from No OSA patients in the sleep center setting using symptoms during sleep is also recommended (sensitivity = 87.12% and specificity = 72.22%). PMID:26221183
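
Fuzzy AHP replaces crisp pairwise judgments with fuzzy numbers before deriving a normalized weight vector. The abstract does not say which fuzzy-AHP variant was used, so the sketch below uses Buckley's geometric-mean method with triangular fuzzy numbers (l, m, u) as one common formulation; the 2x2 matrix is a hypothetical example.

```python
from math import prod

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def fuzzy_weights(rows):
    """Buckley's geometric-mean method: rows is a square matrix of TFNs."""
    n = len(rows)
    geo = [(prod(t[0] for t in row) ** (1 / n),
            prod(t[1] for t in row) ** (1 / n),
            prod(t[2] for t in row) ** (1 / n)) for row in rows]
    sl = sum(g[0] for g in geo)
    sm = sum(g[1] for g in geo)
    su = sum(g[2] for g in geo)
    # fuzzy weight of each row; bounds are reversed when dividing TFNs
    fw = [(g[0] / su, g[1] / sm, g[2] / sl) for g in geo]
    crisp = [defuzzify(w) for w in fw]
    s = sum(crisp)
    return [c / s for c in crisp]

# hypothetical 2-criterion fuzzy comparison: criterion 1 moderately preferred
M = [[(1, 1, 1), (2, 3, 4)],
     [(1/4, 1/3, 1/2), (1, 1, 1)]]
w = fuzzy_weights(M)
```

The resulting normalized weight vector is what ranks the clinical risk-factor groups within each severity level.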

  9. Optimal evaluation of infectious medical waste disposal companies using the fuzzy analytic hierarchy process

    SciTech Connect

    Ho, Chao Chung

    2011-07-15

    Ever since Taiwan's National Health Insurance implemented the diagnosis-related groups payment system in January 2010, hospital income has declined. Therefore, to meet their medical waste disposal needs, hospitals seek suppliers that provide high-quality services at a low cost. The enactment of the Waste Disposal Act in 1974 had facilitated some improvement in the management of waste disposal. However, since the implementation of the National Health Insurance program, the amount of medical waste from disposable medical products has been increasing. Further, of all the hazardous waste types, the amount of infectious medical waste has increased at the fastest rate. This is because of the increase in the number of items considered as infectious waste by the Environmental Protection Administration. The present study used two important findings from previous studies to determine the critical evaluation criteria for selecting infectious medical waste disposal firms. It employed the fuzzy analytic hierarchy process to set the objective weights of the evaluation criteria and select the optimal infectious medical waste disposal firm through calculation and sorting. The aim was to propose a method of evaluation with which medical and health care institutions could objectively and systematically choose appropriate infectious medical waste disposal firms.

  10. Using analytic hierarchy process approach in ontological multicriterial decision making - Preliminary considerations

    NASA Astrophysics Data System (ADS)

    Wasielewska, K.; Ganzha, M.

    2012-10-01

    In this paper we consider combining ontologically demarcated information with Saaty's Analytic Hierarchy Process (AHP) [1] for the multicriterial assessment of offers during contract negotiations. The context for the proposal is provided by the Agents in Grid project (AiG; [2]), which aims at the development of an agent-based infrastructure for efficient resource management in the Grid. In the AiG project, software agents representing users can either (1) join a team and earn money, or (2) find a team to execute a job. Moreover, agents form teams, whose managers negotiate the terms of potential collaboration with clients and workers. Here, ontologically described contracts (Service Level Agreements) are the result of autonomous multiround negotiations. Given the relatively complex nature of the negotiated contracts, multicriterial assessment of proposals plays a crucial role. The AHP method is based on pairwise comparisons of criteria and relies on the judgement of a panel of experts; it measures how well an offer serves the objective of a decision maker. In this paper, we propose how the AHP method can be used to assess ontologically described contract proposals.

  11. Selection of reference standard during method development using the analytical hierarchy process.

    PubMed

    Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun

    2015-03-25

    A reference standard is critical for ensuring reliable and accurate method performance, and one important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of a reference standard during method development, based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision, and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of its priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed comprehensive consideration of the benefits and risks of the alternatives, and proved an effective and practical tool for the optimization of reference standards during method development. PMID:25636165

  12. Exit probability of the one-dimensional q-voter model: Analytical results and simulations for large networks

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2014-05-01

    We discuss the exit probability of the one-dimensional q-voter model and present tools to obtain estimates of this probability, both through simulations in large networks (around 10^7 sites) and analytically in the limit where the network is infinitely large. We argue that the result E(ρ) = ρ^q/[ρ^q + (1-ρ)^q], which was found in three previous works [F. Slanina, K. Sznajd-Weron, and P. Przybyła, Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006; R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007, for the case q = 2; and P. Przybyła, K. Sznajd-Weron, and M. Tabiszewski, Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117, for q > 2] using small networks (around 10^3 sites), is a good approximation, but there are noticeable deviations that appear even for small systems and that do not disappear when the system size is increased (with the notable exception of the case q = 2). We also show that, under some simple and intuitive hypotheses, the exit probability must obey the inequality ρ^q/[ρ^q + (1-ρ)] ≤ E(ρ) ≤ ρ/[ρ + (1-ρ)^q] in the infinite size limit. We believe this settles in the negative the suggestion [S. Galam and A. C. R. Martins, Europhys. Lett. 95, 48005 (2011), 10.1209/0295-5075/95/48005] that this result is a finite size effect, with the exit probability actually being a step function. We also show how the result that the exit probability cannot be a step function can be reconciled with the Galam unified frame, which was also a source of controversy.
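
The exit-probability expression and the bounds quoted in the abstract translate directly into code, which makes the claimed ordering easy to check numerically:

```python
def exit_probability(rho, q):
    """E(rho) = rho^q / (rho^q + (1 - rho)^q), the approximate exit
    probability of the 1D q-voter model discussed above."""
    return rho ** q / (rho ** q + (1 - rho) ** q)

def lower_bound(rho, q):
    """Lower bound rho^q / (rho^q + (1 - rho)) in the infinite-size limit."""
    return rho ** q / (rho ** q + (1 - rho))

def upper_bound(rho, q):
    """Upper bound rho / (rho + (1 - rho)^q) in the infinite-size limit."""
    return rho / (rho + (1 - rho) ** q)
```

Note that E(ρ) + E(1-ρ) = 1 by construction, and for q = 1 all three expressions collapse to E(ρ) = ρ, the ordinary voter model.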

  13. Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.

  14. Tough, processable semi-interpenetrating polymer networks from monomer reactants

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor)

    1994-01-01

    A high temperature semi-interpenetrating polymer network (semi-IPN) was developed which had significantly improved processability, damage tolerance, and mechanical performance, when compared to the commercial Thermid materials. This simultaneous semi-IPN was prepared by mixing the monomer precursors of Thermid AL-600 (a thermoset) and NR-150B2 (a thermoplastic) and allowing the monomers to react randomly upon heating. This reaction occurs at a rate which decreases the flow and broadens the processing window. Upon heating at a higher temperature, there is an increase in flow. Because of the improved flow properties, broadened processing window and enhanced toughness, high strength polymer matrix composites, adhesives and molded articles can now be prepared from the acetylene end-capped polyimides which were previously inherently brittle and difficult to process.

  15. Real-time hierarchically distributed processing network interaction simulation

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Wu, C.

    1987-01-01

    The Telerobot Testbed is a hierarchically distributed processing system which is linked together through a standard, commercial Ethernet. Standard Ethernet systems are primarily designed to manage non-real-time information transfer. Therefore, collisions on the net (i.e., two or more sources attempting to send data at the same time) are managed by randomly rescheduling one of the sources to retransmit at a later time interval. Although acceptable for transmitting noncritical data such as mail, this particular feature is unacceptable for real-time hierarchical command and control systems such as the Telerobot. Data transfer and scheduling simulations, such as token ring, offer solutions to collision management, but do not appropriately characterize real-time data transfer/interactions for robotic systems. Therefore, models like these do not provide a viable simulation environment for understanding real-time network loading. A real-time network loading model is being developed which allows processor-to-processor interactions to be simulated, collisions (and respective probabilities) to be logged, collision-prone areas to be identified, and network control variable adjustments to be reentered as a means of examining and reducing collision-prone regimes that occur in the process of simulating a complete task sequence.

  16. Competing contact processes in the Watts-Strogatz network

    NASA Astrophysics Data System (ADS)

    Rybak, Marcin; Malarz, Krzysztof; Kułakowski, Krzysztof

    2016-06-01

    We investigate two competing contact processes on a set of Watts-Strogatz networks with the clustering coefficient tuned by rewiring. The base for network construction is a one-dimensional chain of N sites, where each site i is directly linked to the nodes labelled i ± 1 and i ± 2; initially, each node thus has the same degree k_i = 4. Periodic boundary conditions are assumed as well. For each node i, the links to sites i + 1 and i + 2 are rewired to two randomly selected nodes not yet connected to node i. An increase of the rewiring probability q influences the node degree distribution and the network clustering coefficient C. For each value of the rewiring probability q, a set N(q) = {N_1, N_2, ..., N_M} of M networks is generated. The network's nodes are decorated with spin-like variables s_i ∈ {S, D}. During the simulation, each node in state S having a D-site in its neighbourhood converts this neighbour from D to S. Conversely, a node in state D having at least one neighbour also in state D converts all nearest neighbours of this pair into state D; the latter conversion is realized with probability p. We plot the dependence of the final density n_S^T of S nodes on their initial fraction n_S^0. Then, we construct the surface of unstable fixed points in (C, p, n_S^0) space. The system evolves more often toward n_S^T = 1 for (C, p, n_S^0) points situated above this surface, while starting the simulation with parameters situated below this surface leads the system to n_S^T = 0. The points on this surface correspond to the value n_S^* of the initial fraction of S nodes (for fixed values of C and p) for which the final density is n_S^T = 1/2.
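
The network construction and the competing S/D rules can be sketched as follows. This is a simplified reading of the dynamics described above (in particular, the sweep order and the exact pair-conversion rule are assumptions), intended only to show the structure of such a simulation:

```python
import random

def ws_network(n, q, seed=42):
    """Ring of n nodes, each initially linked to i±1 and i±2 (degree 4);
    each node's links to i+1 and i+2 are rewired with probability q
    to a randomly chosen node not yet connected to it."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in (1, 2):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    for i in range(n):
        for d in (1, 2):
            j = (i + d) % n
            if j in adj[i] and rng.random() < q:
                candidates = [k for k in range(n) if k != i and k not in adj[i]]
                if candidates:
                    k = rng.choice(candidates)
                    adj[i].remove(j); adj[j].remove(i)
                    adj[i].add(k); adj[k].add(i)
    return adj

def sweep(adj, state, p, rng):
    """One random-order sweep of the two competing contact rules."""
    for i in rng.sample(list(adj), len(adj)):
        if state[i] == 'S':
            for j in adj[i]:
                if state[j] == 'D':
                    state[j] = 'S'      # an S node converts a D neighbour
                    break
        elif any(state[j] == 'D' for j in adj[i]) and rng.random() < p:
            for j in adj[i]:            # a D-D pair converts its neighbourhood
                state[j] = 'D'
    return state
```

Repeating `sweep` until the state stops changing, and recording the final S density over many networks and initial fractions, reproduces the kind of (C, p, n_S^0) scan described in the abstract.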

  17. Elementary processes governing the evolution of road networks

    NASA Astrophysics Data System (ADS)

    Strano, Emanuele; Nicosia, Vincenzo; Latora, Vito; Porta, Sergio; Barthélemy, Marc

    2012-03-01

    Urbanisation is a fundamental phenomenon whose quantitative characterisation is still inadequate. We report here the empirical analysis of a unique data set regarding almost 200 years of evolution of the road network in a large area located north of Milan (Italy). We find that urbanisation is characterised by the homogenisation of cell shapes, and by the stability throughout time of high-centrality roads which constitute the backbone of the urban structure, confirming the importance of historical paths. We show quantitatively that the growth of the network is governed by two elementary processes: (i) `densification', corresponding to an increase in the local density of roads around existing urban centres and (ii) `exploration', whereby new roads trigger the spatial evolution of the urbanisation front. The empirical identification of such simple elementary mechanisms suggests the existence of general, simple properties of urbanisation and opens new directions for its modelling and quantitative description.

  18. The Martian valley networks: Origin by niveo-fluvial processes

    NASA Technical Reports Server (NTRS)

    Rice, J. W., Jr.

    1993-01-01

    The valley networks may hold the key to unlocking the paleoclimatic history of Mars. These enigmatic landforms may be regarded as the Martian equivalent of the Rosetta Stone. Therefore, a more thorough understanding of their origin and evolution is required. However, there is still no consensus among investigators regarding the formation (runoff vs. sapping) of these features. Recent climatic modeling precludes warm (above 0 °C) globally averaged surface temperatures prior to 2 b.y. ago, when solar luminosity was 25-30 percent lower than present levels. This paper advocates snowmelt as the dominant process responsible for the formation of the dendritic valley networks. Evidence for Martian snowfall and subsequent melt has been discussed in previous studies.

  19. Elementary processes governing the evolution of road networks

    PubMed Central

    Strano, Emanuele; Nicosia, Vincenzo; Latora, Vito; Porta, Sergio; Barthélemy, Marc

    2012-01-01

    Urbanisation is a fundamental phenomenon whose quantitative characterisation is still inadequate. We report here the empirical analysis of a unique data set regarding almost 200 years of evolution of the road network in a large area located north of Milan (Italy). We find that urbanisation is characterised by the homogenisation of cell shapes, and by the stability throughout time of high-centrality roads which constitute the backbone of the urban structure, confirming the importance of historical paths. We show quantitatively that the growth of the network is governed by two elementary processes: (i) 'densification', corresponding to an increase in the local density of roads around existing urban centres and (ii) 'exploration', whereby new roads trigger the spatial evolution of the urbanisation front. The empirical identification of such simple elementary mechanisms suggests the existence of general, simple properties of urbanisation and opens new directions for its modelling and quantitative description. PMID:22389765

  20. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in plant operation apart from the initial off-line training data. 46 figs.

  1. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in plant operation apart from the initial off-line training data.

  2. Distributed Signal Processing for Wireless EEG Sensor Networks.

    PubMed

    Bertrand, Alexander

    2015-11-01

    Inspired by ongoing evolutions in the field of wireless body area networks (WBANs), this tutorial paper presents a conceptual and exploratory study of wireless electroencephalography (EEG) sensor networks (WESNs), with an emphasis on distributed signal processing aspects. A WESN is conceived as a modular neuromonitoring platform for high-density EEG recordings, in which each node is equipped with an electrode array, a signal processing unit, and facilities for wireless communication. We first address the advantages of such a modular approach, and we explain how distributed signal processing algorithms make WESNs more power-efficient, in particular by avoiding data centralization. We provide an overview of distributed signal processing algorithms that are potentially applicable in WESNs, and for illustration purposes, we also provide a more detailed case study of a distributed eye blink artifact removal algorithm. Finally, we study the power efficiency of these distributed algorithms in comparison to their centralized counterparts in which all the raw sensor signals are centralized in a near-end or far-end fusion center. PMID:25850092

  3. Competing spreading processes on multiplex networks: Awareness and epidemics

    NASA Astrophysics Data System (ADS)

    Granell, Clara; Gómez, Sergio; Arenas, Alex

    2014-07-01

    Epidemiclike spreading processes on top of multilayered interconnected complex networks reveal a rich phase diagram of intertwined competition effects. A recent study by the authors [C. Granell et al., Phys. Rev. Lett. 111, 128701 (2013)., 10.1103/PhysRevLett.111.128701] presented an analysis of the interrelation between two processes accounting for the spreading of an epidemic, and the spreading of information awareness to prevent infection, on top of multiplex networks. The results in the case in which awareness implies total immunization to the disease revealed the existence of a metacritical point at which the critical onset of the epidemics starts, depending on completion of the awareness process. Here we present a full analysis of these critical properties in the more general scenario where the awareness spreading does not imply total immunization, and where infection does not imply immediate awareness of it. We find the critical relation between the two competing processes for a wide spectrum of parameters representing the interaction between them. We also analyze the consequences of a massive broadcast of awareness (mass media) on the final outcome of the epidemic incidence. Importantly enough, the mass media make the metacritical point disappear. The results reveal that the main finding, i.e., existence of a metacritical point, is rooted in the competition principle and holds for a large set of scenarios.

  4. Survey of Technetium Analytical Production Methods Supporting Hanford Nuclear Materials Processing

    SciTech Connect

    TROYER, G.L.

    1999-11-03

    This document provides a historical survey of analytical methods used for measuring {sup 99}Tc in nuclear fuel reprocessing materials and wastes at Hanford. Method challenges including special sludge matrices tested are discussed. Special problems and recommendations are presented.

  5. Combined Surface Analytical Methods to Characterize Degradative Processes in Anti-Stiction Films in MEMS Devices

    NASA Astrophysics Data System (ADS)

    Zavadil, Kevin

    2005-03-01

    The performance and reliability of microelectromechanical (MEMS) devices can be highly dependent on the control of the surface energetics in these structures. Examples of this sensitivity include the use of surface modifying chemistries to control stiction, to minimize friction and wear, and to preserve favorable electrical characteristics in surface micromachined structures. Silane modification of surfaces is one classic approach to controlling stiction in Si-based devices. The time-dependent efficacy of this modifying treatment has traditionally been evaluated by studying the impact of accelerated aging on device performance and conducting subsequent failure analysis. Our interest has been in identifying aging related chemical signatures that represent the early stages of processes like silane displacement or chemical modification that eventually lead to device performance changes. We employ a series of classic surface characterization techniques along with multivariate statistical methods to study subtle changes in the silanized silicon surface and relate these to degradation mechanisms. Examples include the use of spatially resolved time-of-flight secondary ion mass spectrometric, photoelectron spectroscopic, photoluminescence imaging, and scanning probe microscopic techniques to explore the penetration of water through a silane monolayer, the incorporation of contaminant species into a silane monolayer, and local displacement of silane molecules from the Si surface. We have applied this analytical methodology at the Si coupon level up to MEMS devices. This approach can be generalized to other chemical systems to address issues of new materials integration into micro- and nano-scale systems. * This work was supported by the United States Department of Energy under Contract DE-AC04-94AL85000. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security

  6. Analytical Hierarchy Process modeling for malaria risk zones in Vadodara district, Gujarat

    NASA Astrophysics Data System (ADS)

    Bhatt, B.; Joshi, J. P.

    2014-11-01

    Malaria is one of the most complex spatial epidemic problems in the world. According to the WHO, an estimated 627,000 deaths occurred due to malaria in 2012. In many developing nations with diverse ecological regions, it remains a major cause of human mortality. Owing to the incompleteness of epidemiological data and their spatial origin, quantifying the disease incidence burden for basic public health planning is a major constraint, especially in developing countries. The present study uses an integrated geospatial and multi-criteria evaluation (AHP) technique to determine malaria risk zones. The study was conducted in Vadodara district, which includes 12 talukas, of which 4 are predominantly tribal. The influence of climatic and physical environmental factors, viz. rainfall, hydrogeomorphology, drainage, elevation, and land cover, was scored to assess each factor's share in the evaluation of malariogenic conditions. These scores were synthesized on the basis of the preference assigned to each factor, and the total weights of each data layer were computed and visualized. The district was divided into three zones, viz. high, moderate, and low risk. It was observed that a geographical area of 1885.2 sq. km, comprising 30.3% of the district, falls in the high-risk zone. The risk zones identified on the basis of these parameters and assigned weights show a close resemblance to ground conditions, as the overlaid API distribution for 2011 corresponds to the identified risk zones. The study demonstrates the significance and prospects of integrating geospatial tools and the Analytical Hierarchy Process for delineating malaria risk zones and understanding the dynamics of malaria transmission.

  7. Land Suitability Assessment on a Watershed of Loess Plateau Using the Analytic Hierarchy Process

    PubMed Central

    Yi, Xiaobo; Wang, Li

    2013-01-01

    In order to reduce soil erosion and desertification, the Sloping Land Conversion Program has been conducted in China for more than 15 years, and large areas of farmland have been converted to forest and grassland. However, this large-scale vegetation-restoration project has faced some key problems (e.g. soil drying) that have limited the successful development of the current ecological-recovery policy. Therefore, it is necessary to know about the land use, vegetation, and soil, and their inter-relationships in order to identify the suitability of vegetation restoration. This study was conducted at the watershed level in the ecologically vulnerable region of the Loess Plateau, to evaluate the land suitability using the analytic hierarchy process (AHP). The results showed that (1) the area unsuitable for crops accounted for 73.3% of the watershed, and the main factors restricting cropland development were soil physical properties and soil nutrients; (2) the area suitable for grassland was about 86.7% of the watershed, with the remaining 13.3% being unsuitable; (3) an area of 3.95 km2, accounting for 66.7% of the watershed, was unsuitable for forest. Overall, the grassland was found to be the most suitable land-use to support the aims of the Sloping Land Conversion Program in the Liudaogou watershed. Under the constraints of soil water shortage and nutrient deficits, crops and forests were considered to be inappropriate land uses in the study area, especially on sloping land. When selecting species for re-vegetation, non-native grass species with high water requirements should be avoided so as to guarantee the sustainable development of grassland and effective ecological functioning. Our study provides local land managers and farmers with valuable information about the inappropriateness of growing trees in the study area along with some information on species selection for planting in the semi-arid area of the Loess Plateau. PMID:23922723

  8. Applying the Analytic Hierarchy Process to Oil Sands Environmental Compliance Risk Management

    NASA Astrophysics Data System (ADS)

    Roux, Izak Johannes, III

    Oil companies in Alberta, Canada, invested $32 billion on new oil sands projects in 2013. Despite the size of this investment, there is a demonstrable deficiency in the uniformity and understanding of environmental legislation requirements that manifests as increased project compliance risks. This descriptive study developed 2 prioritized lists of environmental regulatory compliance risks and mitigation strategies and used multi-criteria decision theory for its theoretical framework. Information from compiled lists of environmental compliance risks and mitigation strategies was used to generate a specialized pairwise survey, which was piloted by 5 subject matter experts (SMEs). The survey was validated by a sample of 16 SMEs, after which the Analytic Hierarchy Process (AHP) was used to rank a total of 33 compliance risks and 12 mitigation strategy criteria. A key finding was that the AHP is a suitable tool for ranking of compliance risks and mitigation strategies. Several working hypotheses were also tested regarding how SMEs prioritized 1 compliance risk or mitigation strategy compared to another. The AHP showed that regulatory compliance, company reputation, environmental compliance, and economics ranked the highest and that a multi-criteria mitigation strategy for environmental compliance ranked the highest. The study results will inform Alberta oil sands industry leaders about the ranking and utility of specific compliance risks and mitigation strategies, enabling them to focus on actions that will generate legislative and public trust. Oil sands leaders implementing a risk management program using the risks and mitigation strategies identified in this study will contribute to environmental conservation, economic growth, and positive social change.
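
The core computation behind AHP rankings such as these is the principal-eigenvector method with Saaty's consistency check. The sketch below uses a hypothetical 3x3 judgment matrix, not the study's survey data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale
# (e.g. three compliance risks judged against the goal). The study's
# actual 33-risk judgments are not reproduced here.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priorities = principal right eigenvector, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty's consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lam_max = eigvals.real[k]
RI = 0.58                      # Saaty's random index for n = 3
CR = (lam_max - n) / (n - 1) / RI
print(w.round(3), round(CR, 3))   # CR < 0.1 => judgments acceptably consistent
```

With real survey data the same two steps (eigenvector, then consistency ratio) are applied at every node of the hierarchy before aggregating priorities.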

  9. State-trace analysis: dissociable processes in a connectionist network?

    PubMed

    Yeates, Fayme; Wills, Andy J; Jones, Fergal W; McLaren, Ian P L

    2015-07-01

    Some argue the common practice of inferring multiple processes or systems from a dissociation is flawed (Dunn, 2003). One proposed solution is state-trace analysis (Bamber, 1979), which involves plotting, across two or more conditions of interest, performance measured by either two dependent variables, or two conditions of the same dependent measure. The resulting analysis is considered to provide evidence that either (a) a single process underlies performance (one function is produced) or (b) there is evidence for more than one process (more than one function is produced). This article reports simulations using the simple recurrent network (SRN; Elman, 1990) in which changes to the learning rate produced state-trace plots with multiple functions. We also report simulations using a single-layer error-correcting network that generate plots with a single function. We argue that the presence of different functions on a state-trace plot does not necessarily support a dual-system account, at least as typically defined (e.g. two separate autonomous systems competing to control responding); it can also indicate variation in a single parameter within theories generally considered to be single-system accounts. PMID:25307272

  10. Statistical process control using optimized neural networks: a case study.

    PubMed

    Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid

    2014-09-01

    The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart signals that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two aspects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape features and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Second, a hybrid heuristic recognition system based on the cuckoo optimization algorithm (COA) is introduced to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. PMID:24210290

  11. Marketing Mix Formulation for Higher Education: An Integrated Analysis Employing Analytic Hierarchy Process, Cluster Analysis and Correspondence Analysis

    ERIC Educational Resources Information Center

    Ho, Hsuan-Fu; Hung, Chia-Chi

    2008-01-01

    Purpose: The purpose of this paper is to examine how a graduate institute at National Chiayi University (NCYU), by using a model that integrates analytic hierarchy process, cluster analysis and correspondence analysis, can develop effective marketing strategies. Design/methodology/approach: This is primarily a quantitative study aimed at…

  12. A Pilot Study in the Application of the Analytic Hierarchy Process to Predict Student Performance in Mathematics

    ERIC Educational Resources Information Center

    Warwick, Jon

    2007-01-01

    The decline in the development of mathematical skills in students prior to university entrance has been a matter of concern to UK higher education staff for a number of years. This article describes a pilot study that uses the Analytic Hierarchy Process to quantify the mathematical experiences of computing students prior to the start of a first…

  13. EVALUATION OF AN ESCA (ELECTRON SPECTROSCOPY FOR CHEMICAL ANALYSIS)/LEACHATE ANALYTICAL SCHEME TO CHARACTERIZE PROCESS STREAM WASTES

    EPA Science Inventory

    The report gives results of an evaluation of the ability of an ESCA/leachate analytical scheme to characterize solid waste from combustion processes and hazardous waste incinerators. Samples were analyzed for surface elemental composition by electron spectroscopy for chemical ana...

  14. APPLICATION OF THE ANALYTIC HIERARCHY PROCESS TO COMPARE ALTERNATIVES FOR THE LONG-TERM MANAGEMENT OF SURPLUS MERCURY

    EPA Science Inventory

    This paper describes a systematic method for comparing options for the long-term management of surplus elemental mercury in the U.S., using the Analytic Hierarchy Process (AHP) as embodied in commercially available Expert Choice software. A limited scope multi-criteria decision-a...

  15. Precision and bias of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1983; and January 1980 through September 1984

    USGS Publications Warehouse

    Schroder, L.J.; Bricker, A.W.; Willoughby, T.C.

    1985-01-01

    Blind-audit samples with known analyte concentrations have been prepared by the U.S. Geological Survey and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the analyte concentrations reported by the National Atmospheric Deposition Program and National Trends Network and the known analyte concentrations have been calculated, and the bias has been determined. Calcium, magnesium, sodium, and chloride were biased at the 99-percent confidence limit; potassium and sulfate were unbiased at the 99-percent confidence limit, for 1983 results. Relative percent differences between the measured and known analyte concentrations for calcium, magnesium, sodium, potassium, chloride, and sulfate have been calculated for 1983. The median relative percent difference for calcium was 17.0; magnesium, 6.4; sodium, 10.8; potassium, 6.4; chloride, 17.2; and sulfate, -5.3. These relative percent differences should be used to correct the 1983 data before user analysis of the data. Variances have been calculated for calcium, magnesium, sodium, potassium, chloride, and sulfate determinations. These variances should be applicable to natural-sample analyte concentrations reported by the National Atmospheric Deposition Program and National Trends Network for calendar year 1983. (USGS)
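
A median relative percent difference of the kind tabulated above can be computed as follows. The formula and sign convention (positive = reported high relative to the known concentration) are assumptions here, and the sample values are invented, not the report's data.

```python
import statistics

def relative_percent_difference(measured, known):
    """RPD of a reported value against the known (blind-audit) concentration.
    Assumed convention: 100 * (measured - known) / known."""
    return 100.0 * (measured - known) / known

# Hypothetical blind-audit pairs (reported, known) in mg/L for one analyte.
pairs = [(1.15, 1.00), (0.98, 0.90), (2.30, 2.00)]
rpds = [relative_percent_difference(m, k) for m, k in pairs]
median_rpd = statistics.median(rpds)
print([round(r, 1) for r in rpds], round(median_rpd, 1))
```

Subtracting the median RPD (as a percentage correction) from reported concentrations is the kind of adjustment the abstract recommends before user analysis.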

  16. Aberrant network connectivity during error processing in patients with schizophrenia

    PubMed Central

    Voegler, Rolf; Becker, Michael P.I.; Nitsch, Alexander; Miltner, Wolfgang H.R.; Straube, Thomas

    2016-01-01

    Background Neuroimaging methods have pointed to deficits in the interaction of large-scale brain networks in patients with schizophrenia. Abnormal connectivity of the right anterior insula (AI), a central hub of the salience network, is frequently reported and may underlie patients’ deficits in adaptive salience processing and cognitive control. While most previous studies used resting state approaches, we examined right AI interactions in a task-based fMRI study. Methods Patients with schizophrenia and healthy controls performed an adaptive version of the Eriksen Flanker task that was specifically designed to ensure a comparable number of errors between groups. Results We included 27 patients with schizophrenia and 27 healthy controls in our study. The between-groups comparison replicated the classic finding of reduced activation in the midcingulate cortex (MCC) in patients with schizophrenia during the commission of errors while controlling for confounding factors, such as task performance and error frequency, which have been neglected in many previous studies. Subsequent psychophysiological interaction analysis revealed aberrant functional connectivity (FC) between the right AI and regions in the inferior frontal gyrus and temporoparietal junction. Additionally, FC between the MCC and the dorsolateral prefrontal cortex was reduced. Limitations As we examined a sample of medicated patients, effects of antipsychotic medication may have influenced our results. Conclusion Overall, it appears that schizophrenia is associated with impairment of networks associated with detection of errors, refocusing of attention, superordinate guiding of cognitive control and their respective coordination. PMID:26836622

  17. Universality classes of the generalized epidemic process on random networks

    NASA Astrophysics Data System (ADS)

    Chung, Kihong; Baek, Yongjoo; Ha, Meesoon; Jeong, Hawoong

    2016-05-01

    We present a self-contained discussion of the universality classes of the generalized epidemic process (GEP) on Poisson random networks, which is a simple model of social contagions with cooperative effects. These effects lead to rich phase transitional behaviors that include continuous and discontinuous transitions with tricriticality in between. With the help of a comprehensive finite-size scaling theory, we numerically confirm static and dynamic scaling behaviors of the GEP near continuous phase transitions and at tricriticality, which verifies the field-theoretical results of previous studies. We also propose a proper criterion for the discontinuous transition line, which is shown to coincide with the bond percolation threshold.

  18. Near Real Time Analytics of Human Sensor Networks in the Realm of Big Data

    NASA Astrophysics Data System (ADS)

    Aulov, O.; Halem, M.

    2012-12-01

    With the prolific development of social media, emergency responders have an increasing interest in harvesting social media from outlets such as Flickr, Twitter, and Facebook, in order to assess the scale and specifics of extreme events including wild fires, earthquakes, terrorist attacks, oil spills, etc. A number of experimental platforms have successfully been implemented to demonstrate the utilization of social media data in extreme events, including Twitter Earthquake Detector, which relied on tweets for earthquake monitoring; AirTwitter, which used tweets for air quality reporting; and our previous work, using Flickr data as boundary value forcings to improve the forecast of oil beaching in the aftermath of the Deepwater Horizon oil spill. The majority of these platforms addressed a narrow, specific type of emergency and harvested data from a particular outlet. We demonstrate an interactive framework for monitoring, mining and analyzing a plethora of heterogeneous social media sources for a diverse range of extreme events. Our framework consists of three major parts: a real time social media aggregator, a data processing and analysis engine, and a web-based visualization and reporting tool. The aggregator gathers tweets, Facebook comments from fan pages, Google+ posts, forum discussions, blog posts (such as LiveJournal and Blogger.com), images from photo-sharing platforms (such as Flickr, Picasa), videos from video-sharing platforms (YouTube, Vimeo), and so forth. The data processing and analysis engine pre-processes the aggregated information and annotates it with geolocation and sentiment information. In many cases, the metadata of the social media posts does not contain geolocation information; however, a human reader can easily guess from the body of the text what location is discussed. We are automating this task by use of Named Entity Recognition (NER) algorithms and a gazetteer service. The visualization and reporting tool provides a web-based, user

  19. Modeling Nitrogen Processing in Northeast US River Networks

    NASA Astrophysics Data System (ADS)

    Whittinghill, K. A.; Stewart, R.; Mineau, M.; Wollheim, W. M.; Lammers, R. B.

    2013-12-01

    Due to increased nitrogen (N) pollution from anthropogenic sources, the need for aquatic ecosystem services such as N removal has also increased. River networks provide a buffering mechanism that retains or removes anthropogenic N inputs. However, the effectiveness of N removal in rivers may decline with increased loading and, consequently, excess N is eventually delivered to estuaries. We used a spatially distributed river network N removal model developed within the Framework for Aquatic Modeling in the Earth System (FrAMES) to examine the geography of N removal capacity of Northeast river systems under various land use and climate conditions. FrAMES accounts for accumulation and routing of runoff, water temperatures, and serial biogeochemical processing using reactivity derived from the Lotic Intersite Nitrogen Experiment (LINX2). Nonpoint N loading is driven by empirical relationships with land cover developed from previous research in Northeast watersheds. Point source N loading from wastewater treatment plants is estimated as a function of the population served and the volume of water discharged. We tested model results using historical USGS discharge data and N data from historical grab samples and recently initiated continuous measurements from in-situ aquatic sensors. Model results for major Northeast watersheds illustrate hot spots of ecosystem service activity (i.e. N removal) using high-resolution maps and basin profiles. As expected, N loading increases with increasing suburban or agricultural land use area. Network scale N removal is highest during summer and autumn when discharge is low and river temperatures are high. N removal as the % of N loading increases with catchment size and decreases with increasing N loading, suburban land use, or agricultural land use. Catchments experiencing the highest network scale N removal generally have N inputs (both point and non-point sources) located in lower order streams. Model results can be used to better

  20. Risk assessment in the upstream crude oil supply chain: Leveraging analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Briggs, Charles Awoala

    For an organization to be successful, an effective strategy is required, and if implemented appropriately the strategy will result in a sustainable competitive advantage. The importance of decision making in the oil industry is reflected in the magnitude and nature of the industry. Specific features of the oil industry supply chain, such as its longer chain, the complexity of its transportation system, its complex production and storage processes, etc., pose challenges to its effective management. Hence, understanding the risks, the risk sources, and their potential impacts on the oil industry's operations will be helpful in proposing a risk management model for the upstream oil supply chain. The risk-based model in this research uses a three-level analytic hierarchy process (AHP), a multiple-attribute decision-making technique, to underline the importance of risk analysis and risk management in the upstream crude oil supply chain. Level 1 represents the overall goal of risk management; Level 2 comprises the various risk factors; and Level 3 represents the alternative criteria of the decision maker as indicated on the hierarchical structure of the crude oil supply chain. Several risk management experts from different oil companies around the world were surveyed, and six major types of supply chain risks were identified: (1) exploration and production, (2) environmental and regulatory compliance, (3) transportation, (4) availability of oil, (5) geopolitical, and (6) reputational. Also identified are the preferred methods of managing risks, which include: (1) accept and control the risks, (2) avoid the risk by stopping the activity, or (3) transfer or share the risks to other companies or insurers. The results from the survey indicate that the most important risk to manage is transportation risk with a priority of .263, followed by exploration/production with a priority of .198, with an overall inconsistency of .03. With respect to major objectives the most

  1. Assessment of economic instruments for countries with low municipal waste management performance: An approach based on the analytic hierarchy process.

    PubMed

    Kling, Maximilian; Seyring, Nicole; Tzanova, Polia

    2016-09-01

    Economic instruments provide significant potential for countries with low municipal waste management performance in decreasing landfill rates and increasing recycling rates for municipal waste. In this research, strengths and weaknesses of landfill tax, pay-as-you-throw charging systems, deposit-refund systems and extended producer responsibility schemes are compared, focusing on conditions in countries with low waste management performance. In order to prioritise instruments for implementation in these countries, the analytic hierarchy process is applied using results of a literature review as input for the comparison. The assessment reveals that pay-as-you-throw is the most preferable instrument when utility-related criteria are regarded (wb = 0.35; analytic hierarchy process distributive mode; absolute comparison) mainly owing to its waste prevention effect, closely followed by landfill tax (wb = 0.32). Deposit-refund systems (wb = 0.17) and extended producer responsibility (wb = 0.16) rank third and fourth, with marginal differences owing to their similar nature. When cost-related criteria are additionally included in the comparison, landfill tax seems to provide the highest utility-cost ratio. Data in the literature concerning costs (in contrast to utility-related criteria) are currently not sufficiently available for a robust ranking according to the utility-cost ratio. In general, the analytic hierarchy process is seen as a suitable method for assessing economic instruments in waste management. Independent from the chosen analytic hierarchy process mode, results provide valuable indications for policy-makers on the application of economic instruments, as well as on their specific strengths and weaknesses. Nevertheless, the instruments need to be put in the country-specific context along with the results of this analytic hierarchy process application before practical decisions are made. PMID:27121417

  2. Brain network interactions in auditory, visual and linguistic processing.

    PubMed

    Horwitz, Barry; Braun, Allen R

    2004-05-01

    In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provides data that are obtained essentially simultaneously from much of the brain, and thus is ideal for enabling one to assess interregional functional interactions. Two ways to use these types of data to assess network interactions are presented. First, using PET, we demonstrate that anterior and posterior perisylvian language areas have stronger functional connectivity during spontaneous narrative production than during other less linguistically demanding production tasks. Second, we show how one can use large-scale neural network modeling to relate neural activity to the hemodynamically-based data generated by fMRI and PET. We review two versions of a model of object processing - one for visual and one for auditory objects. The regions comprising the models include primary and secondary sensory cortex, association cortex in the temporal lobe, and prefrontal cortex. Each model incorporates specific assumptions about how neurons in each of these areas function, and how neurons in the different areas are interconnected with each other. Each model is able to perform a delayed match-to-sample task for simple objects (simple shapes for the visual model; tonal contours for the auditory model). We find that the simulated electrical activities in each region are similar to those observed in nonhuman primates performing analogous tasks, and the absolute values of the simulated integrated synaptic activity in each brain region match human fMRI/PET data. Thus, this type of modeling provides a way to understand the neural bases for the sensorimotor and cognitive tasks of interest. PMID:15068921

  3. Fault-tolerant interconnection network and image-processing applications for the PASM parallel processing system

    SciTech Connect

    Adams, G.B. III

    1984-01-01

    The demand for very high speed data processing coupled with falling hardware costs has made large-scale parallel and distributed computer systems both desirable and feasible. Two modes of parallel processing are single instruction stream-multiple data stream (SIMD) and multiple instruction stream-multiple data stream (MIMD). PASM, a partitionable SIMD/MIMD system, is a reconfigurable multimicroprocessor system being designed for image processing and pattern recognition. An important component of these systems is the interconnection network, the mechanism for communication among the computation nodes and memories. Assuring high reliability for such complex systems is a significant task. Thus, a crucial practical aspect of an interconnection network is fault tolerance. In answer to this need, the Extra Stage Cube (ESC), a fault-tolerant, multistage cube-type interconnection network, is defined. The fault tolerance of the ESC is explored for both single and multiple faults, routing tags are defined, and consideration is given to permuting data and partitioning the ESC in the presence of faults. The ESC is compared with other fault-tolerant multistage networks. Finally, reliability of the ESC and an enhanced version of it are investigated.

  4. Phase Transitions in the Quadratic Contact Process on Complex Networks

    NASA Astrophysics Data System (ADS)

    Varghese, Chris; Durrett, Rick

    2013-03-01

    The quadratic contact process (QCP) is a natural extension of the well studied linear contact process where a single infected (1) individual can infect a susceptible (0) neighbor and infected individuals are allowed to recover (1 --> 0). In the QCP, a combination of two 1's is required to effect a 0 --> 1 change. We extend the study of the QCP, which so far has been limited to lattices, to complex networks as a model for the change in a population via sexual reproduction and death. We define two versions of the QCP - vertex centered (VQCP) and edge centered (EQCP) with birth events 1 - 0 - 1 --> 1 - 1 - 1 and 1 - 1 - 0 --> 1 - 1 - 1 respectively, where '-' represents an edge. We investigate the effects of network topology by considering the QCP on regular, Erdős-Rényi and power law random graphs. We perform mean field calculations as well as simulations to find the steady state fraction of occupied vertices as a function of the birth rate. We find that on the homogeneous graphs (regular and Erdős-Rényi) there is a discontinuous phase transition with a region of bistability, whereas on the heavy tailed power law graph, the transition is continuous. The critical birth rate is found to be positive in the former but zero in the latter.
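
The discontinuous transition with bistability on homogeneous graphs can be illustrated with a simplified mean-field rate equation. The equation below is a common textbook form for a pairwise birth process and is an assumption for illustration, not the authors' exact calculation.

```python
import numpy as np

# Assumed mean-field sketch: a 0 flips to 1 at rate lam * u**2 (two
# occupied neighbours required) and 1s die at rate 1, giving
#     du/dt = lam * u**2 * (1 - u) - u.
# Nonzero steady states solve lam * u * (1 - u) = 1, i.e.
#     u = (1 +/- sqrt(1 - 4/lam)) / 2,  real only for lam >= 4,
# so the occupied branch appears discontinuously at lam_c = 4 with a
# bistable region, consistent with the abstract's homogeneous-graph result.

def steady_states(lam):
    """All mean-field steady states in [0, 1] for birth rate lam."""
    states = [0.0]                      # extinction always exists
    disc = 1.0 - 4.0 / lam
    if disc >= 0.0:
        r = np.sqrt(disc)
        states += [(1 - r) / 2, (1 + r) / 2]   # unstable / stable branch
    return states

print(steady_states(3.0))   # below lam_c: only extinction
print(steady_states(5.0))   # above lam_c: bistability
```

The occupied branch jumping in at a finite density (1/2 at lam = 4) rather than growing from zero is exactly what "discontinuous transition" means here.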

  5. Rapid Odor Processing in the Honeybee Antennal Lobe Network

    PubMed Central

    Krofczik, Sabine; Menzel, Randolf; Nawrot, Martin P.

    2008-01-01

    In their natural environment, many insects need to identify and evaluate behaviorally relevant odorants on a rich and dynamic olfactory background. Behavioral studies have demonstrated that bees recognize learned odors within <200 ms, indicating a rapid processing of olfactory input in the sensory pathway. We studied the role of the honeybee antennal lobe network in constructing a fast and reliable code of odor identity using in vivo intracellular recordings of individual projection neurons (PNs) and local interneurons (LNs). We found a complementary ensemble code where odor identity is encoded in the spatio-temporal pattern of response latencies as well as in the pattern of activated and inactivated PN firing. This coding scheme rapidly reaches a stable representation within 50–150 ms after stimulus onset. Testing an odor mixture versus its individual compounds revealed different representations in the two morphologically distinct types of lateral- and median PNs (l- and m-PNs). Individual m-PNs mixture responses were dominated by the most effective compound (elemental representation) whereas l-PNs showed suppressed responses to the mixture but not to its individual compounds (synthetic representation). The onset of inhibition in the membrane potential of l-PNs coincided with the responses of putative inhibitory interneurons that responded significantly faster than PNs. Taken together, our results suggest that processing within the LN network of the AL is an essential component of constructing the antennal lobe population code. PMID:19221584

  6. Incomplete fuzzy data processing systems using artificial neural network

    NASA Technical Reports Server (NTRS)

    Patyra, Marek J.

    1992-01-01

    In this paper, the implementation of a fuzzy data processing system using an artificial neural network (ANN) is discussed. The binary representation of fuzzy data is assumed, where the universe of discourse is discretized into n equal intervals. The value of a membership function is represented by a binary number. It is proposed that incomplete fuzzy data processing be performed in two stages. The first stage performs the 'retrieval' of incomplete fuzzy data, and the second stage performs the desired operation on the retrieved data. The method of incomplete fuzzy data retrieval is proposed based on the linear approximation of missing values of the membership function. The ANN implementation of the proposed system is presented. The system was computationally verified and showed a relatively small total error.
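
The retrieval stage described above reduces to linear interpolation over the discretised universe of discourse. The sketch below shows only that underlying arithmetic, not the paper's ANN realisation; filling boundary gaps by copying the nearest known value is our assumption.

```python
def retrieve(mu):
    """Fill None entries of a sampled membership function by linear
    approximation between the nearest known neighbours.
    Assumes at least one known value is present."""
    out = list(mu)
    known = [i for i, v in enumerate(out) if v is not None]
    for i in range(len(out)):
        if out[i] is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:                 # gap at the start: copy rightward
            out[i] = out[right]
        elif right is None:              # gap at the end: copy leftward
            out[i] = out[left]
        else:                            # interior gap: interpolate
            t = (i - left) / (right - left)
            out[i] = out[left] + t * (out[right] - out[left])
    return out

# Membership function sampled on 6 intervals, with three missing values.
print(retrieve([0.0, None, 1.0, None, None, 0.2]))
```

In the paper's scheme the same mapping would be realised by the first ANN stage, with the second stage then operating on the completed membership vector.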

  7. Coastal vulnerability assessment of Puducherry coast, India using analytical hierarchical process

    NASA Astrophysics Data System (ADS)

    Mani Murali, R.; Ankita, M.; Amrita, S.; Vethamony, P.

    2013-03-01

    Increased frequency of natural hazards such as storm surge, tsunami and cyclone, as a consequence of change in global climate, is predicted to have dramatic effects on the coastal communities and ecosystems by virtue of the devastation they cause during and after their occurrence. The tsunami of December 2004 and the Thane cyclone of 2011 caused extensive human and economic losses along the coastline of Puducherry and Tamil Nadu. The devastation caused by these events highlighted the need for vulnerability assessment to ensure better understanding of the elements causing different hazards and to consequently minimize the after-effects of the future events. This paper advocates an Analytical Hierarchical Process (AHP) based approach to coastal vulnerability studies as an improvement to the existing methodologies for vulnerability assessment. The paper also encourages the inclusion of socio-economic parameters along with the physical parameters to calculate the coastal vulnerability index using AHP derived weights. Seven physical-geological parameters (slope, geomorphology, elevation, shoreline change, sea level rise, significant wave height and tidal range) and four socio-economic factors (population, Land-use/Land-cover (LU/LC), roads and location of tourist places) are considered to measure the Physical Vulnerability Index (PVI) as well as the Socio-economic Vulnerability Index (SVI) of the Puducherry coast. Based on the weights and scores derived using AHP, vulnerability maps are prepared to demarcate areas with very low, medium and high vulnerability. A combination of PVI and SVI values are further utilized to compute the Coastal Vulnerability Index (CVI). Finally, the various coastal segments are grouped into the 3 vulnerability classes to obtain the final coastal vulnerability map. The entire coastal extent between Muthiapet and Kirumampakkam as well as the northern part of Kalapet is designated as the high vulnerability zone which constitutes 50% of the

  8. Coastal vulnerability assessment of Puducherry coast, India, using the analytical hierarchical process

    NASA Astrophysics Data System (ADS)

    Mani Murali, R.; Ankita, M.; Amrita, S.; Vethamony, P.

    2013-12-01

    As a consequence of change in global climate, an increased frequency of natural hazards such as storm surges, tsunamis and cyclones, is predicted to have dramatic effects on the coastal communities and ecosystems by virtue of the devastation they cause during and after their occurrence. The tsunami of December 2004 and the Thane cyclone of 2011 caused extensive human and economic losses along the coastline of Puducherry and Tamil Nadu. The devastation caused by these events highlighted the need for vulnerability assessment to ensure better understanding of the elements causing different hazards and to consequently minimize the after-effects of the future events. This paper demonstrates an analytical hierarchical process (AHP)-based approach to coastal vulnerability studies as an improvement to the existing methodologies for vulnerability assessment. The paper also encourages the inclusion of socio-economic parameters along with the physical parameters to calculate the coastal vulnerability index using AHP-derived weights. Seven physical-geological parameters (slope, geomorphology, elevation, shoreline change, sea level rise, significant wave height and tidal range) and four socio-economic factors (population, land use/land cover (LU/LC), roads and location of tourist areas) are considered to measure the physical vulnerability index (PVI) as well as the socio-economic vulnerability index (SVI) of the Puducherry coast. Based on the weights and scores derived using AHP, vulnerability maps are prepared to demarcate areas with very low, medium and high vulnerability. A combination of PVI and SVI values is further utilized to compute the coastal vulnerability index (CVI). Finally, the various coastal segments are grouped into the 3 vulnerability classes to obtain the coastal vulnerability map. 
The entire coastal extent between Muthiapet and Kirumampakkam as well as the northern part of Kalapet is designated as the high vulnerability zone, which constitutes 50% of the
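
    The AHP weighting step described above can be sketched numerically: criterion weights are taken as the principal eigenvector of a pairwise comparison matrix, and Saaty's consistency ratio checks the coherence of the judgements. The matrix and criteria below are a hypothetical illustration, not the comparisons used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three illustrative criteria
# (slope, elevation, shoreline change) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Criterion weights = principal right eigenvector, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()

# Saaty's consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = 0.58                      # Saaty's random index for n = 3
cr = ci / ri

print("weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))  # CR < 0.1 => judgements acceptable
```

    In a full study, these weights would multiply each criterion's vulnerability score per coastal segment before summing into the PVI or SVI.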

  9. Dynamic Processes in Network Goods: Modeling, Analysis and Applications

    ERIC Educational Resources Information Center

    Paothong, Arnut

    2013-01-01

    The network externality function plays a very important role in the study of economic network industries. Moreover, consumer group dynamic interactions coupled with the network externality concept are going to play a dominant role in network goods in the 21st century. The existing literature stems from a choice of externality function with…

  10. A numerically analytical approach to studying oscillation processes for earth's poles

    NASA Astrophysics Data System (ADS)

    Markov, Yu. G.; Perepelkin, V. V.; Krylov, S. S.

    2015-08-01

    The fine dynamic effects that make it possible to improve the accuracy of predicting the trajectory of pole motion are revealed by using a numerically analytical approach to modeling the pole's oscillatory motion, whose key element is the Chandler component.

  11. Learning Analytics: A Case Study of the Process of Design of Visualizations

    ERIC Educational Resources Information Center

    Olmos, Martin; Corrin, Linda

    2012-01-01

    The ability to visualize student engagement and experience data provides valuable opportunities for learning support and curriculum design. With the rise of the use of learning analytics to provide "actionable intelligence" on students' learning, the challenge is to create visualizations of the data that are clear and useful to the intended…

  12. Anticipatory network models of multicriteria decision-making processes

    NASA Astrophysics Data System (ADS)

    Skulimowski, Andrzej M. J.

    2014-01-01

    In this article, we investigate the properties of a compromise solution selection method based on modelling the consequences of a decision as factors influencing decision making in subsequent problems. Specifically, we assume that the constraints and preference structures in the (k + 1)st multicriteria optimisation problem depend on the values of criteria in the k-th problem. To make a decision in the initial problem, the decision maker should take into account the anticipated outcomes of each linked future decision problem. This model can be extended to a network of linked decision problems, with causal relations defined between the time-ordered nodes. Multiple edges starting from a decision node correspond to different future scenarios of consequences at this node. In addition, we define the relation of anticipatory feedback, assuming that some decision makers take into account the anticipated future consequences of their decisions described by a network of optimisers - a class of information processing units introduced in this article. Both relations (causal and anticipatory) form a feedback information model, which makes it possible to select compromise solutions that take the anticipated consequences into account. We provide constructive algorithms to solve discrete multicriteria decision problems that admit the above preference information structure. An illustrative example is presented in Section 4. Various applications of the above model, including the construction of technology foresight scenarios, are discussed in the final section of this article.

  13. Developmental process emerges from extended brain-body-behavior networks

    PubMed Central

    Byrge, Lisa; Sporns, Olaf; Smith, Linda B.

    2014-01-01

    Studies of brain connectivity have focused on two modes of networks: structural networks describing neuroanatomy and the intrinsic and evoked dependencies of functional networks at rest and during tasks. Each mode constrains and shapes the other across multiple time scales, and each also shows age-related changes. Here we argue that understanding how brains change across development requires understanding the interplay between behavior and brain networks: changing bodies and activities modify the statistics of inputs to the brain; these changing inputs mold brain networks; these networks, in turn, promote further change in behavior and input. PMID:24862251

  14. An Algorithm for Network Real Time Kinematic Processing

    NASA Astrophysics Data System (ADS)

    Malekzadeh, A.; Asgari, J.; Amiri-Simkooei, A. R.

    2015-12-01

    Network Real Time Kinematic (NRTK) is an efficient method to achieve precise real-time positioning from GNSS measurements. In this paper we attempt to improve the NRTK algorithm by introducing a new strategy. In this strategy a precise relocation of master station observations is performed using the Sagnac effect. After processing the double differences, the tropospheric and ionospheric errors of each baseline can be estimated separately. The next step is interpolation of these errors to mitigate the atmospheric errors of the desired baseline. Linear and kriging interpolation methods are implemented in this study. In the new strategy the RINEX (Receiver Independent Exchange Format) data of the master station are relocated and converted to the desired virtual observations. Then the interpolated corrections are applied to the virtual observations. The results are compared with the classical method of VRS generation.
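
    The correction-interpolation step can be illustrated with a toy stand-in. The paper uses linear and kriging interpolators; the sketch below uses simple inverse-distance weighting over hypothetical per-station atmospheric corrections, purely to show the shape of the computation.

```python
import numpy as np

# Hypothetical per-station atmospheric corrections (metres) estimated from
# double-differenced baselines of a reference network; positions in km.
stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
corrections = np.array([0.012, 0.020, 0.016, 0.024])

def idw(point, sites, values, power=2):
    """Inverse-distance-weighted interpolation: a simple stand-in for the
    linear/kriging interpolators used in the paper."""
    d = np.linalg.norm(sites - point, axis=1)
    if np.any(d < 1e-9):               # query coincides with a site
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

rover = np.array([25.0, 25.0])          # desired (virtual) station position
print("interpolated correction:", idw(rover, stations, corrections))
```

    The interpolated value would then be applied to the relocated virtual observations before baseline processing.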

  15. Receptive amusia: evidence for cross-hemispheric neural networks underlying music processing strategies.

    PubMed

    Schuppert, M; Münte, T F; Wieringa, B M; Altenmüller, E

    2000-03-01

    Perceptual musical functions were investigated in patients suffering from unilateral cerebrovascular cortical lesions. Using MIDI (Musical Instrument Digital Interface) technique, a standardized short test battery was established that covers local (analytical) as well as global perceptual mechanisms. These represent the principal cognitive strategies in melodic and temporal musical information processing (local, interval and rhythm; global, contour and metre). Of the participating brain-damaged patients, a total of 69% presented with post-lesional impairments in music perception. Left-hemisphere-damaged patients showed significant deficits in the discrimination of local as well as global structures in both melodic and temporal information processing. Right-hemisphere-damaged patients also revealed an overall impairment of music perception, reaching significance in the temporal conditions. Detailed analysis outlined a hierarchical organization, with an initial right-hemisphere recognition of contour and metre followed by identification of interval and rhythm via left-hemisphere subsystems. Patterns of dissociated and associated melodic and temporal deficits indicate autonomous, yet partially integrated neural subsystems underlying the processing of melodic and temporal stimuli. In conclusion, these data contradict a strong hemispheric specificity for music perception, but indicate cross-hemisphere, fragmented neural substrates underlying local and global musical information processing in the melodic and temporal dimensions. Due to the diverse profiles of neuropsychological deficits revealed in earlier investigations as well as in this study, individual aspects of musicality and musical behaviour very likely contribute to the definite formation of these widely distributed neural networks. PMID:10686177

  16. Parametric Modeling of Welding Processes Using Numerical-Analytical Basis Functions and Equivalent Source Distributions

    NASA Astrophysics Data System (ADS)

    Lambrakos, S. G.

    2016-04-01

    A general methodology for inverse thermal analysis of steady-state energy deposition in plate structures, typically welds, is extended with respect to its formulation. This methodology is in terms of numerical-analytical basis functions, which provide parametric representations of weld-temperature histories that can be adopted as input data to various types of computational procedures, such as those for prediction of solid-state phase transformations and mechanical response. The extension of the methodology presented here concerns construction of numerical-analytical basis functions and their associated parameterizations, which permit optimal and convenient parameter optimization with respect to different types of weld-workpiece boundary conditions, energy source characteristics, and experimental measurements adoptable as weld-temperature history constraints. Prototype inverse thermal analyses of a steel weld are presented that provide proof of concept for inverse thermal analysis using these basis functions.

  17. Process-Hardened, Multi-Analyte Sensor for Characterizing Rocket Plume Constituents

    NASA Technical Reports Server (NTRS)

    Goswami, Kisholoy

    2011-01-01

    A multi-analyte sensor was developed that enables simultaneous detection of rocket engine combustion-product molecules in a launch-vehicle ground test stand. The sensor was developed using a pin-printing method by incorporating multiple sensor elements on a single chip. It demonstrated accurate and sensitive detection of analytes such as carbon dioxide, carbon monoxide, kerosene, isopropanol, and ethylene from a single measurement. The use of pin-printing technology enables high-volume fabrication of the sensor chip, which will ultimately eliminate the need for individual sensor calibration since many identical sensors are made in one batch. Tests were performed using a single-sensor chip attached to a fiber-optic bundle. The use of a fiber bundle allows placement of the opto-electronic readout device at a place remote from the test stand. The sensors are rugged for operation in harsh environments.

  18. Enhanced surface sampler and process for collection and release of analytes

    SciTech Connect

    Addleman, Raymond S; Atkinson, David A; Bays, John T; Chouyyok, Wilaiwan; Cinson, Anthony D; Ewing, Robert G; Gerasimenko, Aleksandr A

    2015-02-03

    An enhanced swipe sampler and method of making are described. The swipe sampler is made of a fabric containing selected glass, metal oxide, and/or oxide-coated glass or metal fibers. Fibers are modified with silane ligands that are directly attached to the surface of the fibers to functionalize the sampling surface of the fabric. The swipe sampler collects various target analytes including explosives and other threat agents on the surface of the sampler.

  19. Managing the Pre- and Post-analytical Phases of the Total Testing Process

    PubMed Central

    2012-01-01

    For many years, the clinical laboratory's focus on analytical quality has resulted in an error rate of 4-5 sigma, which surpasses most other areas in healthcare. However, greater appreciation of the prevalence of errors in the pre- and post-analytical phases and their potential for patient harm has led to increasing requirements for laboratories to take greater responsibility for activities outside their immediate control. Accreditation bodies such as the Joint Commission International (JCI) and the College of American Pathologists (CAP) now require clear and effective procedures for patient/sample identification and communication of critical results. There are a variety of free on-line resources available to aid in managing the extra-analytical phase and the recent publication of quality indicators and proposed performance levels by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) working group on laboratory errors and patient safety provides particularly useful benchmarking data. Managing the extra-laboratory phase of the total testing cycle is the next challenge for laboratory medicine. By building on its existing quality management expertise, quantitative scientific background and familiarity with information technology, the clinical laboratory is well suited to play a greater role in reducing errors and improving patient safety outside the confines of the laboratory. PMID:22259773

  20. Retinal vessel extraction using Lattice Neural Networks with Dendritic Processing.

    PubMed

    Vega, Roberto; Sanchez-Ante, Gildardo; Falcon-Morales, Luis E; Sossa, Humberto; Guevara, Elizabeth

    2015-03-01

    Retinal images can be used to detect and follow up several important chronic diseases. The classification of retinal images requires an experienced ophthalmologist, which has been a bottleneck to implementing routine screenings performed by general physicians. It has been proposed to create automated systems that can perform such a task with little human intervention, with partial success. In this work, we report advances in that endeavor by using a Lattice Neural Network with Dendritic Processing (LNNDP). We report results using several metrics, and compare against well-known methods such as Support Vector Machines (SVM) and Multilayer Perceptrons (MLP). Our proposal shows better performance than other approaches reported in the literature. An additional advantage is that, unlike those other tools, LNNDP requires no parameters and automatically constructs its structure to solve a particular problem. The proposed methodology requires four steps: (1) pre-processing, (2) feature computation, (3) classification and (4) post-processing. Hotelling's T² control chart was used to reduce the dimensionality of the feature vector, from the 7 features used previously to 5 in this work. The experiments were run on images from the DRIVE and STARE databases. The results show that, on average, the F1-score is better with LNNDP than with the SVM and MLP implementations. The same improvement is observed for the MCC and accuracy. PMID:25589415

  1. Understanding Social Contagion in Adoption Processes Using Dynamic Social Networks.

    PubMed

    Herrera, Mauricio; Armelini, Guillermo; Salvaj, Erica

    2015-01-01

    There are many studies in the marketing and diffusion literature of the conditions in which social contagion affects adoption processes. Yet most of these studies assume that social interactions do not change over time, even though actors in social networks exhibit different likelihoods of being influenced across the diffusion period. Rooted in physics and epidemiology theories, this study proposes a Susceptible Infectious Susceptible (SIS) model to assess the role of social contagion in adoption processes, which takes changes in social dynamics over time into account. To study the adoption over a span of ten years, the authors used detailed data sets from a community of consumers and determined the importance of social contagion, as well as how the interplay of social and non-social influences from outside the community drives adoption processes. Although social contagion matters for diffusion, it is less relevant in shaping adoption when the study also includes social dynamics among members of the community. This finding is relevant for managers and entrepreneurs who trust in word-of-mouth marketing campaigns whose effect may be overestimated if marketers fail to acknowledge variations in social interactions. PMID:26505473
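
    The SIS-on-a-changing-network idea can be sketched with a toy simulation in which the contact graph is resampled at every step, so no tie persists across the diffusion period. The model, parameters and seed below are illustrative and unrelated to the study's data.

```python
import random

# Minimal SIS simulation on a contact network whose edges are resampled
# each step, so social interactions change over time (illustrative only).
def sis_dynamic(n=200, k=4, beta=0.3, gamma=0.1, steps=200, seed=1):
    rng = random.Random(seed)
    infected = set(rng.sample(range(n), 5))   # initial adopters
    history = []
    for _ in range(steps):
        # rewire: each node contacts k random others this step
        edges = [(i, rng.randrange(n)) for i in range(n) for _ in range(k)]
        new_inf = set(infected)
        for u, v in edges:
            # transmission across a discordant edge with probability beta
            if (u in infected) != (v in infected) and rng.random() < beta:
                new_inf.add(u); new_inf.add(v)
        # previously infected nodes become susceptible again with prob gamma
        new_inf = {i for i in new_inf
                   if i not in infected or rng.random() > gamma}
        infected = new_inf
        history.append(len(infected) / n)
    return history

prevalence = sis_dynamic()
print("final prevalence:", prevalence[-1])
```

    Freezing the edge list outside the loop would recover the static-network assumption the authors criticise, making the two settings easy to compare.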

  2. Signal processing using artificial neural network for BOTDA sensor system.

    PubMed

    Azad, Abul Kalam; Wang, Liang; Guo, Nan; Tam, Hwa-Yaw; Lu, Chao

    2016-03-21

    We experimentally demonstrate the use of an artificial neural network (ANN) to process sensing signals obtained from a Brillouin optical time domain analyzer (BOTDA). The distributed temperature information is extracted directly from the local Brillouin gain spectra (BGSs) along the fiber under test, without determining the Brillouin frequency shift (BFS) and converting the BFS to temperature. Unlike our previous work for short sensing distances, where the ANN was trained with measured BGSs, here we employ ideal BGSs with different linewidths to train the ANN, in order to account for the linewidth variation between the training and testing phases, making the approach feasible for long-distance sensing. Moreover, the performance of the ANN is compared with two other techniques, Lorentzian curve fitting and the cross-correlation method, and our results show that the ANN has higher accuracy and larger tolerance to measurement error, especially at a large frequency scanning step. We also show that temperature extraction from BOTDA measurements employing the ANN is significantly faster than the other two approaches. Hence the ANN can be an excellent alternative tool to process BGSs measured by BOTDA and obtain the temperature distribution along the fiber, especially when a large frequency scanning step is adopted to significantly reduce the measurement time without sacrificing sensing accuracy. PMID:27136863

  3. Understanding Social Contagion in Adoption Processes Using Dynamic Social Networks

    PubMed Central

    2015-01-01

    There are many studies in the marketing and diffusion literature of the conditions in which social contagion affects adoption processes. Yet most of these studies assume that social interactions do not change over time, even though actors in social networks exhibit different likelihoods of being influenced across the diffusion period. Rooted in physics and epidemiology theories, this study proposes a Susceptible Infectious Susceptible (SIS) model to assess the role of social contagion in adoption processes, which takes changes in social dynamics over time into account. To study the adoption over a span of ten years, the authors used detailed data sets from a community of consumers and determined the importance of social contagion, as well as how the interplay of social and non-social influences from outside the community drives adoption processes. Although social contagion matters for diffusion, it is less relevant in shaping adoption when the study also includes social dynamics among members of the community. This finding is relevant for managers and entrepreneurs who trust in word-of-mouth marketing campaigns whose effect may be overestimated if marketers fail to acknowledge variations in social interactions. PMID:26505473

  4. Using Fuzzy Analytic Hierarchy Process multicriteria and Geographical information system for coastal vulnerability analysis in Morocco: The case of Mohammedia

    NASA Astrophysics Data System (ADS)

    Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha

    2016-04-01

    This paper presents a method to assess vulnerability to coastal risks such as coastal erosion or marine submersion by applying the Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques with a Geographic Information System (GIS). The coast of Mohammedia, located in Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP based methodology. The coastal risk vulnerability mapping follows multi-parametric causative factors: sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to an urban area. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The result shows that the coastline of Mohammedia is characterized by moderate, high and very high levels of vulnerability to coastal risk. The high vulnerability areas are situated in the east at Monika and Sablette beaches. This technical approach relies on the efficiency of the GIS tool combined with the Fuzzy Analytic Hierarchy Process to help decision makers find optimal strategies to minimize coastal risks.
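
    A minimal numeric sketch of FAHP weighting, using Buckley's fuzzy-geometric-mean method with triangular fuzzy numbers (l, m, u). The comparison values and criteria labels are hypothetical, not those of the Mohammedia study, and this is only one of several FAHP variants.

```python
import numpy as np

# Fuzzy pairwise comparisons as triangular fuzzy numbers (l, m, u).
# Rows/columns: three illustrative criteria
# (sea level rise, significant wave height, coastal erosion).
M = np.array([
    [(1, 1, 1),         (1, 2, 3),     (3, 4, 5)],
    [(1/3, 1/2, 1),     (1, 1, 1),     (1, 2, 3)],
    [(1/5, 1/4, 1/3),   (1/3, 1/2, 1), (1, 1, 1)],
])  # shape (3, 3, 3): last axis is (l, m, u)

# Fuzzy geometric mean per row, component-wise.
g = np.prod(M, axis=1) ** (1 / M.shape[0])
# Fuzzy weights: divide by the column sums (u-sum for l, m-sum for m, l-sum for u).
s = g.sum(axis=0)
w_fuzzy = np.stack([g[:, 0] / s[2], g[:, 1] / s[1], g[:, 2] / s[0]], axis=1)
# Defuzzify by centroid (mean of l, m, u) and normalise.
w = w_fuzzy.mean(axis=1)
w /= w.sum()
print("crisp FAHP weights:", np.round(w, 3))
```

    The crisp weights would then multiply the rasterised criterion layers in the GIS overlay, exactly as crisp AHP weights would.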

  5. Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process.

    PubMed

    Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R

    2016-03-30

    A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint, while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints, with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes, which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed: first, using an unstructured between-study covariance matrix, by assuming the treatment effects on all outcomes are correlated; and second, using a structured between-study covariance matrix, by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for summary data on a study level, the individual-level association is taken into account by the use of Prentice's criteria (obtained from individual patient data) to inform the within-study correlations in the models. The modelling techniques are investigated using an example in relapsing-remitting multiple sclerosis, where disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates for the disability progression. PMID:26530518
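
    Stripped of the Bayesian machinery, the core prediction step reduces to conditioning in a multivariate normal: given a between-study mean and covariance, the effect on the final outcome is predicted from the observed surrogate effect. The sketch below uses a single surrogate and entirely hypothetical numbers, not estimates from the cited paper.

```python
import numpy as np

# Toy bivariate-normal sketch: predict the treatment effect on the final
# outcome (d2) from the observed effect on a surrogate (d1).
mu = np.array([0.4, 0.3])            # mean effects: (surrogate, final)
Sigma = np.array([[0.04, 0.03],
                  [0.03, 0.05]])     # between-study covariance (hypothetical)

d1_new = 0.6                         # observed surrogate effect in a new trial
# Conditional normal: d2 | d1 ~ N(mu2 + s12/s11*(d1 - mu1), s22 - s12^2/s11)
mean_d2 = mu[1] + Sigma[0, 1] / Sigma[0, 0] * (d1_new - mu[0])
var_d2 = Sigma[1, 1] - Sigma[0, 1] ** 2 / Sigma[0, 0]
print(f"predicted final-outcome effect: {mean_d2:.3f} (sd {np.sqrt(var_d2):.3f})")
```

    Adding a second surrogate enlarges the covariance matrix to 3x3 and, if the surrogates are informative, shrinks the conditional variance further, which is the motivation for the multivariate extension.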

  6. Dynamics and processing in finite self-similar networks

    PubMed Central

    DeDeo, Simon; Krakauer, David C.

    2012-01-01

    A common feature of biological networks is the geometrical property of self-similarity. Molecular regulatory networks through to circulatory systems, nervous systems, social systems and ecological trophic networks show self-similar connectivity at multiple scales. We analyse the relationship between topology and signalling in contrasting classes of such topologies. We find that networks differ in their ability to contain or propagate signals between arbitrary nodes in a network depending on whether they possess branching or loop-like features. Networks also differ in how they respond to noise, such that one allows for greater integration at high noise, and this performance is reversed at low noise. Surprisingly, small-world topologies, with diameters logarithmic in system size, have slower dynamical time scales, and may be less integrated (more modular) than networks with longer path lengths. All of these phenomena are essentially mesoscopic, vanishing in the infinite limit but producing strong effects at sizes and time scales relevant to biology. PMID:22378750

  7. Cascading processes on multiplex networks: Impact of weak layers

    NASA Astrophysics Data System (ADS)

    Lee, Kyu-Min; Goh, Kwang-Il

    Many real-world complex systems, such as biological and socio-technological systems, consist of manifold layers in multiplex networks. The multiple network layers give rise to nonlinear effects in the emergent dynamics of such systems. In particular, weak layers play a significant role in the nonlinearity of multiplex networks, yet they can be neglected in a single-layer framework that overlays all layers. Here we present a simple model of cascades on multiplex networks of heterogeneous layers. The model is simulated on the multiplex network of international trade. We found that the multiplex model produces more catastrophic cascading failures, which result from collective behaviors of the coupled layers rather than from a simple summation effect. Risks can therefore be systematically underestimated in a simply overlaid network system, because the impact of weak layers is overlooked. Our simple theoretical model has implications for the investigation and design of optimal real-world complex systems.
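
    A toy threshold-cascade model on a two-layer multiplex illustrates the mechanism: a node fails once the failed fraction of its neighbours in any single layer reaches a threshold, so a sparse "weak" layer can trigger failures that an aggregated (overlaid) view would dilute. The model, graph generator and parameters are illustrative, not the authors' trade-network model.

```python
import random

# Threshold cascade on a two-layer multiplex (illustrative parameters).
def cascade(n=300, k_strong=6, k_weak=2, theta=0.4, seed=7):
    rng = random.Random(seed)

    def random_graph(k):
        # crude random graph: each node keeps adding neighbours until it has k
        nbrs = [set() for _ in range(n)]
        for i in range(n):
            while len(nbrs[i]) < k:
                j = rng.randrange(n)
                if j != i:
                    nbrs[i].add(j); nbrs[j].add(i)
        return nbrs

    layers = [random_graph(k_strong), random_graph(k_weak)]  # weak = sparse
    failed = set(rng.sample(range(n), 3))                    # initial shock
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in failed:
                continue
            for nbrs in layers:  # failure if ANY layer crosses the threshold
                frac = sum(j in failed for j in nbrs[i]) / max(len(nbrs[i]), 1)
                if frac >= theta:
                    failed.add(i); changed = True
                    break
    return len(failed) / n

frac_failed = cascade()
print("cascade size (fraction failed):", frac_failed)
```

    Replacing the per-layer rule with a single rule on the union of both neighbour sets gives the overlaid single-layer baseline against which the multiplex cascade can be compared.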

  8. Ergodicity testing using an analytical formula for a dynamical functional of alpha-stable autoregressive fractionally integrated moving average processes

    NASA Astrophysics Data System (ADS)

    Loch, Hanna; Janczura, Joanna; Weron, Aleksander

    2016-04-01

    In this paper we study the asymptotic behavior of a dynamical functional for an α-stable autoregressive fractionally integrated moving average (ARFIMA) process. We find an analytical formula for this important statistic and show its usefulness as a diagnostic tool for ergodic properties. The obtained results point to the very fast convergence of the dynamical functional and show that even for short trajectories one may obtain reliable conclusions on the ergodic properties of the ARFIMA process. Moreover, we use the obtained theoretical results to illustrate how the dynamical functional statistic can be used in the verification of the proper model for the analysis of some biophysical experimental data.
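
    The dynamical functional can be estimated empirically from a single trajectory. The sketch below uses a Gaussian AR(1) series as a lightweight stand-in for ARFIMA (an ARFIMA simulator would replace `simulate`) and checks that the functional decays toward zero with lag, as expected for an ergodic process.

```python
import numpy as np

# Empirical dynamical functional
#   E(n) = <exp(i(X_{t+n} - X_t))> - |<exp(i X_t)>|^2,
# which tends to 0 as n grows for an ergodic stationary process.
def simulate(n=20000, phi=0.6, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def dynamical_functional(x, lag):
    d = np.exp(1j * (x[lag:] - x[:-lag])).mean()
    return d - np.abs(np.exp(1j * x).mean()) ** 2

x = simulate()
E = [dynamical_functional(x, n).real for n in (1, 10, 100)]
print("E(n) for n = 1, 10, 100:", np.round(E, 3))
```

    The fast decay of E(n) here mirrors the paper's observation that even short trajectories can yield reliable ergodicity diagnostics.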

  9. A cognitive information processing framework for distributed sensor networks

    NASA Astrophysics Data System (ADS)

    Wang, Feiyi; Qi, Hairong

    2004-09-01

    In this paper, we present a cognitive agent framework (CAF) based on swarm intelligence and self-organization principles, and demonstrate it through collaborative processing for target classification in sensor networks. The framework involves integrated designs to provide both cognitive behavior at the organization level to conquer complexity and reactive behavior at the individual agent level to retain simplicity. The design tackles various problems in current information processing systems, including overly complex systems, maintenance difficulties, increasing vulnerability to attack, lack of capability to tolerate faults, and inability to identify and cope with low-frequency patterns. An important point distinguishing the presented work from classical AI research is that the acquired intelligence does not pertain to distinct individuals but to groups. It also deviates from multi-agent systems (MAS) due to the sheer quantity of extremely simple agents we are able to accommodate, to the degree that some loss of coordination messages and the behavior of faulty/compromised agents will not affect the collective decision made by the group.

  10. High level cognitive information processing in neural networks

    NASA Technical Reports Server (NTRS)

    Barnden, John A.; Fields, Christopher A.

    1992-01-01

    Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized here, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.

  11. The Scaling of Human Contacts and Epidemic Processes in Metapopulation Networks

    NASA Astrophysics Data System (ADS)

    Tizzoni, Michele; Sun, Kaiyuan; Benusiglio, Diego; Karsai, Márton; Perra, Nicola

    2015-10-01

    We study the dynamics of reaction-diffusion processes on heterogeneous metapopulation networks where interaction rates scale with subpopulation sizes. We first present new empirical evidence, based on the analysis of the interactions of 13 million users on Twitter, that supports the scaling of human interactions with population size with an exponent γ ranging between 1.11 and 1.21, as observed in recent studies based on mobile phone data. We then integrate such observations into a reaction-diffusion metapopulation framework. We provide an explicit analytical expression for the global invasion threshold which sets a critical value of the diffusion rate below which a contagion process is not able to spread to a macroscopic fraction of the system. In particular, we consider the Susceptible-Infectious-Recovered epidemic model. Interestingly, the scaling of human contacts is found to facilitate the spreading dynamics. This behavior is enhanced by increasing heterogeneities in the mobility flows coupling the subpopulations. Our results show that the scaling properties of human interactions can significantly affect dynamical processes mediated by human contacts such as the spread of diseases, ideas and behaviors.
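
    The effect of contact scaling can be sketched with a deterministic two-patch SIR model in which per-capita transmission scales as N^(γ-1), so larger subpopulations transmit super-linearly when γ > 1. This is a drastic simplification of the paper's metapopulation framework, with illustrative parameters and a weak fixed coupling in place of explicit mobility.

```python
import numpy as np

# Two-patch SIR with density-scaled contacts (illustrative only).
def run(gamma=1.2, beta=0.3, mu=0.1, eps=1e-3, days=400):
    N = np.array([1e6, 1e4])                 # heterogeneous patch sizes
    I = np.array([10.0, 0.0])                # seed the large patch
    S, R = N - I, np.zeros(2)
    contact = beta * (N / N.mean()) ** (gamma - 1)   # super-linear contacts
    for _ in range(days):
        # force of infection, with a weak cross-patch coupling eps
        lam = contact * (I + eps * I[::-1]) / N
        new_inf = np.minimum(lam * S, S)     # daily Euler step, capped
        rec = mu * I
        S = S - new_inf
        I = I + new_inf - rec
        R = R + rec
    return R / N                             # final attack rates

attack = run()
print("attack rates (large, small patch):", np.round(attack, 3))
```

    With γ = 1.2 the large patch sustains a much larger effective reproduction number than the small one, loosely echoing the paper's finding that contact scaling facilitates spreading.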

  12. The Scaling of Human Contacts and Epidemic Processes in Metapopulation Networks.

    PubMed

    Tizzoni, Michele; Sun, Kaiyuan; Benusiglio, Diego; Karsai, Márton; Perra, Nicola

    2015-01-01

    We study the dynamics of reaction-diffusion processes on heterogeneous metapopulation networks where interaction rates scale with subpopulation sizes. We first present new empirical evidence, based on the analysis of the interactions of 13 million users on Twitter, that supports the scaling of human interactions with population size with an exponent γ ranging between 1.11 and 1.21, as observed in recent studies based on mobile phone data. We then integrate such observations into a reaction-diffusion metapopulation framework. We provide an explicit analytical expression for the global invasion threshold which sets a critical value of the diffusion rate below which a contagion process is not able to spread to a macroscopic fraction of the system. In particular, we consider the Susceptible-Infectious-Recovered epidemic model. Interestingly, the scaling of human contacts is found to facilitate the spreading dynamics. This behavior is enhanced by increasing heterogeneities in the mobility flows coupling the subpopulations. Our results show that the scaling properties of human interactions can significantly affect dynamical processes mediated by human contacts such as the spread of diseases, ideas and behaviors. PMID:26478209

  13. Aggregation Processes on Networks: Deterministic Equations, Stochastic Model and Numerical Simulation

    SciTech Connect

    Guias, Flavius

    2008-09-01

    We introduce an infinite system of equations modeling the time evolution of the growth process of a network. The nodes are characterized by their degree k ∈ N and a fitness parameter f ∈ [0,h]. Every new node that emerges receives a fitness f' according to a given distribution P and attaches to an existing node with fitness f and degree k at rate fA_k, where the A_k are positive coefficients growing sub-linearly in k. If the parameter f takes only one value, the dynamics of this process can be described by a variant of the Becker-Doering equations, where the growth of the size of clusters of size k occurs only with increment 1. In contrast to the established Becker-Doering equations, the system considered here is nonconservative, since mass (i.e. links) is continuously added. Nevertheless, it has the property of linearity, which is a natural consequence of the process being modeled. The purpose of this paper is to construct a solution of the system based on a stochastic approximation algorithm, which also allows a numerical simulation in order to get insight into its qualitative behaviour. In particular we show analytically and numerically the property of Bose-Einstein condensation, which was observed in the literature on random graphs and which can be described as the emergence of a huge cluster which captures a macroscopic fraction of the total link density.
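
    A minimal stochastic simulation of the growth process described above might look like the sketch below. The uniform fitness distribution and the sublinear kernel A_k = √k are hypothetical choices for illustration, not the paper's specification.

```python
import math
import random

def grow_network(n_nodes, seed=0, a=lambda k: math.sqrt(k)):
    """Grow a network: each new node draws a random fitness f in [0, 1) and
    attaches one link to an existing node chosen with probability
    proportional to f * A_k, with A_k sublinear in the degree k."""
    rng = random.Random(seed)
    fitness = [rng.random(), rng.random()]
    degree = [1, 1]                  # start from a single link between two nodes
    for _ in range(n_nodes - 2):
        weights = [f * a(k) for f, k in zip(fitness, degree)]
        r = rng.random() * sum(weights)
        target, acc = 0, 0.0
        for idx, w in enumerate(weights):
            acc += w
            if r <= acc:
                target = idx
                break
        degree[target] += 1          # chosen node gains a link
        fitness.append(rng.random())
        degree.append(1)             # the new node arrives with one link
    return fitness, degree
```

    Each arrival adds exactly one link, so degrees always sum to twice the link count, 2(n − 1); high-fitness early nodes tend to accumulate a disproportionate share of links, a toy version of the condensation-like behaviour the abstract describes.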

  14. The Scaling of Human Contacts and Epidemic Processes in Metapopulation Networks

    PubMed Central

    Tizzoni, Michele; Sun, Kaiyuan; Benusiglio, Diego; Karsai, Márton; Perra, Nicola

    2015-01-01

    We study the dynamics of reaction-diffusion processes on heterogeneous metapopulation networks where interaction rates scale with subpopulation sizes. We first present new empirical evidence, based on the analysis of the interactions of 13 million users on Twitter, that supports the scaling of human interactions with population size with an exponent γ ranging between 1.11 and 1.21, as observed in recent studies based on mobile phone data. We then integrate such observations into a reaction-diffusion metapopulation framework. We provide an explicit analytical expression for the global invasion threshold which sets a critical value of the diffusion rate below which a contagion process is not able to spread to a macroscopic fraction of the system. In particular, we consider the Susceptible-Infectious-Recovered epidemic model. Interestingly, the scaling of human contacts is found to facilitate the spreading dynamics. This behavior is enhanced by increasing heterogeneities in the mobility flows coupling the subpopulations. Our results show that the scaling properties of human interactions can significantly affect dynamical processes mediated by human contacts such as the spread of diseases, ideas and behaviors. PMID:26478209

  15. Pre-PCR processing in bioterrorism preparedness: improved diagnostic capabilities for laboratory response networks.

    PubMed

    Hedman, Johannes; Knutsson, Rickard; Ansell, Ricky; Rådström, Peter; Rasmusson, Birgitta

    2013-09-01

    Diagnostic DNA analysis using polymerase chain reaction (PCR) has become a valuable tool for rapid detection of biothreat agents. However, analysis is often challenging because of the limited size, quality, and purity of the biological target. Pre-PCR processing is an integrated concept in which the issues of analytical limit of detection and simplicity for automation are addressed in all steps leading up to PCR amplification--that is, sampling, sample treatment, and the chemical composition of PCR. The sampling method should maximize target uptake and minimize uptake of extraneous substances that could impair the analysis--so-called PCR inhibitors. In sample treatment, there is a trade-off between yield and purity, as extensive purification leads to DNA loss. A cornerstone of pre-PCR processing is to apply DNA polymerase-buffer systems that are tolerant to specific sample impurities, thereby lowering the need for expensive purification steps and maximizing DNA recovery. Improved awareness among Laboratory Response Networks (LRNs) regarding pre-PCR processing is important, as ineffective sample processing leads to increased cost and possibly false-negative or ambiguous results, hindering the decision-making process in a bioterrorism crisis. This article covers the nature and mechanisms of PCR-inhibitory substances relevant for agroterrorism and bioterrorism preparedness, methods for quality control of PCR reactions, and applications of pre-PCR processing to optimize and simplify the analysis of various biothreat agents. Knowledge about pre-PCR processing will improve diagnostic capabilities of LRNs involved in the response to bioterrorism incidents. PMID:23971826

  16. On the network convergence process in RPL over IEEE 802.15.4 multihop networks: improvement and trade-offs.

    PubMed

    Kermajani, Hamidreza; Gomez, Carles

    2014-01-01

    The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs. PMID:25004154
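
    RPL disseminates routing (DIO) messages using the Trickle timer of RFC 6206, whose parameters strongly influence convergence time; a simplified sketch of its interval-doubling and suppression logic follows. The parameter values are hypothetical, and this is not the paper's proposed mechanism.

```python
import random

def trickle_intervals(i_min, max_doublings, rounds):
    """Interval lengths of a Trickle timer that starts at i_min and doubles
    each round up to i_min * 2**max_doublings (no inconsistency resets)."""
    i_max = i_min * 2 ** max_doublings
    out, interval = [], i_min
    for _ in range(rounds):
        out.append(interval)
        interval = min(2 * interval, i_max)
    return out

def trickle_round(interval, heard_consistent, k=1, rng=random):
    """One Trickle round: pick a fire time in the second half of the
    interval and suppress transmission if at least k consistent
    messages were already heard (redundancy constant k)."""
    t = rng.uniform(interval / 2, interval)
    transmit = heard_consistent < k
    return t, transmit
```

    Shrinking the minimum interval or raising the redundancy constant trades faster convergence against more control traffic — the kind of trade-off the paper characterizes.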

  17. On the Network Convergence Process in RPL over IEEE 802.15.4 Multihop Networks: Improvement and Trade-Offs

    PubMed Central

    Kermajani, Hamidreza; Gomez, Carles

    2014-01-01

    The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs. PMID:25004154

  18. Demonstrating the use of web analytics and an online survey to understand user groups of a national network of river level data

    NASA Astrophysics Data System (ADS)

    Macleod, Christopher Kit; Braga, Joao; Arts, Koen; Ioris, Antonio; Han, Xiwu; Sripada, Yaji; van der Wal, Rene

    2016-04-01

    The number of local, national and international networks of online environmental sensors is rapidly increasing. Where environmental data are made available online for public consumption, there is a need to advance our understanding of the relationships between the supply of and the different demands for such information. Understanding how individuals and groups of users are using online information resources may provide valuable insights into their activities and decision making. As part of the 'dot.rural wikiRivers' project we investigated the potential of web analytics and an online survey to generate insights into the use of a national network of river level data from across Scotland. These sources of online information were collected alongside phone interviews with volunteers sampled from the online survey, and interviews with providers of online river level data; as part of a larger project that set out to help improve the communication of Scotland's online river data. Our web analytics analysis was based on over 100 online sensors which are maintained by the Scottish Environmental Protection Agency (SEPA). Through use of Google Analytics data accessed via the R Ganalytics package we assessed: whether the quality of data provided by the free Google Analytics service is good enough for research purposes; whether we could demonstrate which sensors were being used, when and where; how the nature and pattern of sensor data may affect web traffic; and whether we could identify and profile these users based on information from traffic sources. Web analytics data consist of a series of quantitative metrics which capture and summarize various dimensions of the traffic to a certain web page or set of pages. Examples of commonly used metrics include the number of total visits to a site and the number of total page views. Our analyses of the traffic sources from 2009 to 2011 identified several different major user groups. To improve our understanding of how the use of this national

  19. An alternative analytical formulation for the Voigt function applied to resonant effects in nuclear processes

    NASA Astrophysics Data System (ADS)

    Palma, Daniel A. P.; Gonçalves, Alessandro da C.; Martinez, Aquilino S.

    2011-10-01

    The Voigt function H(a, v) is defined as the convolution of the Gaussian and Lorentzian functions. Recent papers published in different areas of physics emphasize the importance of fast and accurate calculation of the Voigt function for different orders of magnitude of the variables a and v. An alternative analytical formulation for the Voigt function is proposed in this paper. This formulation is based on the solution of the non-homogeneous ordinary differential equation satisfied by the Voigt function, using the Frobenius and parameter-variation methods. The functional form of the Voigt function, as proposed, proved simple and precise. Systematic tests are presented demonstrating advantages over other existing methods in the literature and over the reference numerical method.
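
    Independent of the paper's series solution, the defining convolution can be evaluated directly by quadrature. The sketch below applies composite Simpson integration to the standard form H(a, v) = (a/π) ∫ exp(−t²) / (a² + (v − t)²) dt; it is a generic reference evaluation, not the authors' method.

```python
import math

def voigt_H(a, v, tmax=8.0, n=4000):
    """H(a, v) = (a/pi) * integral of exp(-t**2) / (a**2 + (v - t)**2) dt,
    evaluated by composite Simpson's rule on [-tmax, tmax]; the Gaussian
    factor makes the truncated tails negligible for moderate |v|."""
    h = 2.0 * tmax / n            # n must be even for Simpson's rule

    def f(t):
        return math.exp(-t * t) / (a * a + (v - t) ** 2)

    s = f(-tmax) + f(tmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(-tmax + i * h)
    return (a / math.pi) * (h / 3.0) * s
```

    As a check, at v = 0 this reproduces the known closed form H(a, 0) = exp(a²)·erfc(a), and H is even in v.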

  20. Reactive gas plasma specimen processing for use in microanalysis and imaging in analytical electron microscopy

    SciTech Connect

    Zaluzec, N.J.; Kestel, B.J.; Henriks, D.

    1997-01-01

    It has long been the bane of analytical electron microscopy (AEM) that the use of focused probes during microanalysis of specimens increases the local rate of hydrocarbon contamination. This is most succinctly observed in the formation of contamination deposits during the focused-probe work typical of AEM studies. While serving to indicate the location of the electron probe, the contamination obliterates the area of the specimen being analyzed and adversely affects all quantitative microanalysis methodologies. A variety of methods, including UV irradiation, electron-beam flooding, and heating and/or cooling, can decrease the rate of contamination; however, none of these methods directly attacks the source of specimen-borne contamination. Research has shown that reactive gas plasmas may be used to clean both the specimen and the stage for AEM. In this study the authors report on quantitative measurements of the reduction in contamination rates in an AEM as a function of operating conditions and plasma gases.

  1. Analysing Learning Processes and Quality of Knowledge Construction in Networked Learning

    ERIC Educational Resources Information Center

    Veldhuis-Diermanse, A. E.; Biemans, H. J. A.; Mulder, M.; Mahdizadeh, H.

    2006-01-01

    Networked learning aims to foster students' knowledge construction processes as well as the quality of knowledge construction. In this respect, it is crucial to be able to analyse both aspects of networked learning. Based on theories on networked learning and the empirical work of relevant authors in this domain, two coding schemes are presented…

  2. Complex Network Structure Influences Processing in Long-Term and Short-Term Memory

    ERIC Educational Resources Information Center

    Vitevitch, Michael S.; Chan, Kit Ying; Roodenrys, Steven

    2012-01-01

    Complex networks describe how entities in systems interact; the structure of such networks is argued to influence processing. One measure of network structure, clustering coefficient, C, measures the extent to which neighbors of a node are also neighbors of each other. Previous psycholinguistic experiments found that the C of phonological…
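
    The clustering coefficient C mentioned above has a compact definition: for a node with k neighbors, C is the fraction of the k(k−1)/2 possible neighbor pairs that are themselves connected. A minimal sketch over an adjacency mapping (the example graphs in the check are invented):

```python
from itertools import combinations

def clustering_coefficient(adj, node):
    """Local clustering coefficient C of `node`: the fraction of the
    k*(k-1)/2 possible pairs of its k neighbors that are linked,
    given adj as a dict mapping each node to its set of neighbors."""
    neighbors = adj[node]
    k = len(neighbors)
    if k < 2:
        return 0.0                # C is undefined/zero for degree < 2
    linked = sum(1 for u, w in combinations(neighbors, 2) if w in adj[u])
    return 2.0 * linked / (k * (k - 1))
```

    A node inside a triangle has C = 1, the middle of a path has C = 0, and intermediate values arise when only some neighbor pairs are linked.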

  3. Irrelevant stimulus processing in ADHD: catecholamine dynamics and attentional networks

    PubMed Central

    Aboitiz, Francisco; Ossandón, Tomás; Zamorano, Francisco; Palma, Bárbara; Carrasco, Ximena

    2014-01-01

    A cardinal symptom of attention deficit and hyperactivity disorder (ADHD) is a general distractibility where children and adults shift their attentional focus to stimuli that are irrelevant to the ongoing behavior. This has been attributed to a deficit in dopaminergic signaling in cortico-striatal networks that regulate goal-directed behavior. Furthermore, recent imaging evidence points to an impairment of large scale, antagonistic brain networks that normally contribute to attentional engagement and disengagement, such as the task-positive networks and the default mode network (DMN). Related networks are the ventral attentional network (VAN) involved in attentional shifting, and the salience network (SN) related to task expectancy. Here we discuss the tonic–phasic dynamics of catecholaminergic signaling in the brain, and attempt to provide a link between this and the activities of the large-scale cortical networks that regulate behavior. More specifically, we propose that an imbalance of tonic catecholamine levels during task performance produces an emphasis on phasic signaling and increased excitability of the VAN, yielding distractibility symptoms. Likewise, immaturity of the SN may relate to abnormal tonic signaling and an incapacity to build up a proper executive system during task performance. We discuss different lines of evidence including pharmacology, brain imaging and electrophysiology, that are consistent with our proposal. Finally, restoring the pharmacodynamics of catecholaminergic signaling seems crucial to alleviate ADHD symptoms; however, the possibility is open to explore cognitive rehabilitation strategies to modulate network dynamics top-down, compensating for the pharmacological deficits. PMID:24723897

  4. Resynchronization in neuronal network divided by femtosecond laser processing.

    PubMed

    Hosokawa, Chie; Kudoh, Suguru N; Kiyohara, Ai; Taguchi, Takahisa

    2008-05-01

    We demonstrated scission of a living neuronal network on multielectrode arrays (MEAs) using a focused femtosecond laser and evaluated the resynchronization of spontaneous electrical activity within the network. By irradiating hippocampal neurons cultured on a multielectrode array dish with the focused femtosecond laser, neurites were cut at the focal point. After the irradiation, synchronization of neuronal activity within the network drastically decreased over the divided area, indicating diminished functional connections between neurons. Cross-correlation analysis revealed that spontaneous activity between the divided areas gradually resynchronized within 10 days. These findings indicate that hippocampal neurons have the potential to regenerate functional connections and to reconstruct a network by self-assembly. PMID:18418255

  5. Development of a frit 202 analytic standard for the Defense Waste Processing Facility

    SciTech Connect

    Schumacher, R.F.; Hardy, B.J.; Sproull, J.F.

    1997-03-30

    During the qualification of Frit 202 samples for the 'DWPF Cold Runs', the need for a reliable chemical frit standard became apparent. A standard was prepared by obtaining a quantity of Frit 202 and grinding it into a fine powder. This material was homogenized as one slurry volume, spray dried to prevent segregation, and hydraulically pressed into discs. These discs were fired and packaged into eleven sub-lots containing approximately 2,000 discs per sub-lot. A number of samples were obtained and analyzed by two analytical laboratories. The chemical analyses were carefully reviewed and evaluated by several statistical means. While there were several statistically significant variations between the sub-lots, it is believed that those variations are partially caused by the variability of the analytical method. These discs should provide a reliable standard for future chemical analyses of DWPF frits similar to Frit 202. It is recommended that these discs be used as a standard material included with the representative frit sample sent to the independent chemical analysis laboratory, and that the order of use of these standards be from sub-lot eleven to sub-lot four. It is further recommended that the NIST standard material (93a) be employed along with the 202 standard until confidence in the new standard is gained. The NIST standard should also be used when initial use of a new sub-lot is begun. This procedure should continue to the end of the DWPF program or until such time as the chemical composition of the frit is extensively modified.

  6. Dendritic network models: Improving isoscapes and quantifying influence of landscape and in-stream processes on strontium isotopes in rivers

    USGS Publications Warehouse

    Brennan, Sean R.; Torgersen, Christian; Hollenbeck, Jeff P.; Fernandez, Diego P.; Jensen, Carrie K; Schindler, Daniel E.

    2016-01-01

    A critical challenge for the Earth sciences is to trace the transport and flux of matter within and among aquatic, terrestrial, and atmospheric systems. Robust descriptions of isotopic patterns across space and time, called “isoscapes,” form the basis of a rapidly growing and wide-ranging body of research aimed at quantifying connectivity within and among Earth's systems. However, isoscapes of rivers have been limited by conventional Euclidean approaches in geostatistics and the lack of a quantitative framework to apportion the influence of processes driven by landscape features versus in-stream phenomena. Here we demonstrate how dendritic network models substantially improve the accuracy of isoscapes of strontium isotopes and partition the influence of hydrologic transport versus local geologic features on strontium isotope ratios in a large Alaska river. This work illustrates the analytical power of dendritic network models for the field of isotope biogeochemistry, particularly for provenance studies of modern and ancient animals.

  7. On-board processing satellite network architecture and control study

    NASA Technical Reports Server (NTRS)

    Campanella, S. Joseph; Pontano, B.; Chalmers, H.

    1987-01-01

    For satellites to remain a vital part of future national and international communications, system concepts that use their inherent advantages to the fullest must be created. Network architectures that take maximum advantage of satellites equipped with onboard processing are explored. Satellite generations must accommodate various services for which satellites constitute the preferred vehicle of delivery. Such services tend to be those that are widely dispersed and present thin to medium loads to the system. Typical systems considered are thin and medium route telephony, maritime, land and aeronautical radio, VSAT data, low bit rate video teleconferencing, and high bit rate broadcast of high definition video. Delivery of services by TDMA and FDMA multiplexing techniques, and by combinations of the two for individual and mixed service types, is studied. The possibilities offered by onboard circuit switched and packet switched architectures are examined and the results strongly support a preference for the latter. A detailed design architecture encompassing the onboard packet switch and its control, the related demand assigned TDMA burst structures, and destination packet protocols for routing traffic are presented. Fundamental onboard hardware requirements comprising speed, memory size, chip count, and power are estimated. The study concludes with the identification of key enabling technologies and a plan to develop a proof-of-concept (POC) model.

  8. Modeling the School System Adoption Process for Library Networking.

    ERIC Educational Resources Information Center

    Kester, Diane Katherine Davies

    The successful inclusion of school library media centers in fully articulated networks involves considerable planning and organization for technological change. In this study a preliminary model of the stages of school system participation in library networks was developed with the major activities for each stage identified. The model follows…

  9. Learning Process of a Stochastic Feed-Forward Neural Network

    NASA Astrophysics Data System (ADS)

    Fujiki, Sumiyoshi; Fujiki, Nahomi

    1995-03-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network by minimizing a relative entropic measure, and a learning equation similar to that of the Boltzmann machine is obtained. The learning of the network actually shows a similar result to that of the Boltzmann machine in the classification problems of AND and XOR, by numerical experiments.
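
    As a toy illustration of learning by minimizing a relative-entropy (cross-entropy) measure, the sketch below trains a single deterministic logistic unit on AND by gradient descent. It is not the authors' stochastic-network algorithm, and XOR, being non-linearly-separable, would additionally require a hidden layer; the learning rate and epoch count are arbitrary choices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_logistic_and(lr=0.5, epochs=5000):
    """Fit y = sigmoid(w1*x1 + w2*x2 + b) to the AND truth table by
    full-batch gradient descent on the cross-entropy loss."""
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        gw, gb = [0.0, 0.0], 0.0
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y                  # d(cross-entropy)/d(logit)
            gw[0] += err * x1
            gw[1] += err * x2
            gb += err
        w[0] -= lr * gw[0]
        w[1] -= lr * gw[1]
        b -= lr * gb
    return w, b

w, b = train_logistic_and()
```

    Because the cross-entropy objective for a single logistic unit is convex and AND is linearly separable, the trained unit ends up classifying all four input patterns correctly.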

  10. Analytical Searching.

    ERIC Educational Resources Information Center

    Pappas, Marjorie L.

    1995-01-01

    Discusses analytical searching, a process that enables searchers of electronic resources to develop a planned strategy by combining words or phrases with Boolean operators. Defines simple and complex searching, and describes search strategies developed with Boolean logic and truncation. Provides guidelines for teaching students analytical…
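
    The Boolean combination of search terms described above maps naturally onto set operations over an inverted index; a minimal sketch (the postings and document ids are invented for illustration):

```python
# Each posting maps a search term to the set of document ids containing it.
postings = {
    "networks": {1, 2, 3, 5},
    "analytic": {2, 3, 4},
    "landfill": {3, 5},
}

def search_and(*terms):
    """term1 AND term2 AND ...: documents containing every term."""
    sets = [postings.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

def search_or(*terms):
    """term1 OR term2 OR ...: documents containing any of the terms."""
    return set().union(*(postings.get(t, set()) for t in terms))

def search_not(term, excluded):
    """term NOT excluded: documents with term but without excluded."""
    return postings.get(term, set()) - postings.get(excluded, set())
```

    AND narrows a result set (intersection), OR broadens it (union), and NOT prunes it (difference) — the planned-strategy building blocks the article teaches.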

  11. Optimal Medical Equipment Maintenance Service Proposal Decision Support System combining Activity Based Costing (ABC) and the Analytic Hierarchy Process (AHP).

    PubMed

    da Rocha, Leticia; Sloane, Elliot; M Bassani, Jose

    2005-01-01

    This study describes a framework to support the choice of the maintenance service (in-house or third party contract) for each category of medical equipment based on: a) the real medical equipment maintenance management system currently used by the biomedical engineering group of the public health system of the Universidade Estadual de Campinas located in Brazil to control the medical equipment maintenance service, b) the Activity Based Costing (ABC) method, and c) the Analytic Hierarchy Process (AHP) method. Results show the cost and performance related to each type of maintenance service. Decision-makers can use these results to evaluate possible strategies for the categories of equipment. PMID:17281912
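
    At the core of AHP is the derivation of priority weights from a pairwise comparison matrix, conventionally via its principal eigenvector. A minimal power-iteration sketch (the comparison matrices in the check are invented examples, not the study's data):

```python
def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a positive pairwise
    comparison matrix by power iteration, normalized to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]    # renormalize each iteration
    return w
```

    For a perfectly consistent matrix the weights are exact: [[1, 3], [1/3, 1]] yields [0.75, 0.25]. For mildly inconsistent matrices the eigenvector still orders the alternatives sensibly.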

  12. [Discussion on Quality Evaluation Method of Medical Device During Life-Cycle in Operation Based on the Analytic Hierarchy Process].

    PubMed

    Zheng, Caixian; Zheng, Kun; Shen, Yunming; Wu, Yunyun

    2016-01-01

    The content related to quality during the life-cycle in operation of a medical device includes daily use, repair volume, preventive maintenance, quality control and adverse event monitoring. In view of this, the article discusses a quality evaluation method for medical devices during their life-cycle in operation based on the Analytic Hierarchy Process (AHP). The presented method is shown to be effective by evaluating patient monitors as an example. The method presented can promote and guide device quality control work, and it can provide valuable inputs to decisions about the purchase of new devices. PMID:27197489

  13. A Semi-Analytic Study of Feedback Processes and Metallicity Profiles in Disc Galaxies

    NASA Astrophysics Data System (ADS)

    Sandford, Nathan Ross; Lu, Yu

    2016-01-01

    The metallicity gradients of disc galaxies contain valuable information about the physics governing their formation and evolution. The observed metallicity profiles have negative gradients that are steeper at high redshifts, indicating an inside-out formation of disc galaxies. We improve on our semi-analytic galaxy formation model (Lu, Mo & Wechsler 2015) by incorporating the radial distribution of metals into the model. With the improved model, we explore how feedback scenarios affect metallicity gradients. The model features 3 feedback scenarios: An Ejective (EJ) model, which includes ejective supernova (SN) feedback, a PRe-Heating (PR) model, which assumes that the intergalactic medium is preheated, preventing it from collapsing onto galaxies, and a Re-Incorporation (RI) model, which also includes strong outflows but allows ejected gas to re-accrete onto the galaxies. We compare the models with observations from Ho et al. (2015) and find that while all models struggle to match the observed metallicity gradient-stellar mass relationship, the PR model predicts metallicity gradients that best match observations. We also find that the RI model predicts a flat gradient because its outflow and re-accretion replenish the disc uniformly with newly accreted enriched gas, erasing the mark of inside-out formation. Our findings suggest feedback plays a key role in shaping the metallicity gradients of disc galaxies and require more detailed theoretical modeling to understand them.

  14. Estimating Information Processing in a Memory System: The Utility of Meta-analytic Methods for Genetics

    PubMed Central

    Yildizoglu, Tugce; Weislogel, Jan-Marek; Mohammad, Farhan; Chan, Edwin S.-Y.; Assam, Pryseley N.; Claridge-Chang, Adam

    2015-01-01

    Genetic studies in Drosophila reveal that olfactory memory relies on a brain structure called the mushroom body. The mainstream view is that each of the three lobes of the mushroom body play specialized roles in short-term aversive olfactory memory, but a number of studies have made divergent conclusions based on their varying experimental findings. Like many fields, neurogenetics uses null hypothesis significance testing for data analysis. Critics of significance testing claim that this method promotes discrepancies by using arbitrary thresholds (α) to apply reject/accept dichotomies to continuous data, which is not reflective of the biological reality of quantitative phenotypes. We explored using estimation statistics, an alternative data analysis framework, to examine published fly short-term memory data. Systematic review was used to identify behavioral experiments examining the physiological basis of olfactory memory and meta-analytic approaches were applied to assess the role of lobular specialization. Multivariate meta-regression models revealed that short-term memory lobular specialization is not supported by the data; it identified the cellular extent of a transgenic driver as the major predictor of its effect on short-term memory. These findings demonstrate that effect sizes, meta-analysis, meta-regression, hierarchical models and estimation methods in general can be successfully harnessed to identify knowledge gaps, synthesize divergent results, accommodate heterogeneous experimental design and quantify genetic mechanisms. PMID:26647168
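
    The estimation framework the authors advocate rests on effect sizes and their precisions. Its simplest building block, an inverse-variance-weighted (fixed-effect) pooled estimate, can be sketched as follows (the numeric values in the check are invented, not the fly-memory data):

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted (fixed-effect) pooled effect size and its
    standard error: each study is weighted by 1/variance, so more precise
    studies pull the pooled estimate harder."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se
```

    Meta-regression extends this by modeling the effects as a function of study-level covariates (here, e.g., the cellular extent of a transgenic driver), but the weighting principle is the same.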

  15. Technosocial Predictive Analytics for Illicit Nuclear Trafficking

    SciTech Connect

    Sanfilippo, Antonio P.; Butner, R. Scott; Cowell, Andrew J.; Dalton, Angela C.; Haack, Jereme N.; Kreyling, Sean J.; Riensche, Roderick M.; White, Amanda M.; Whitney, Paul D.

    2011-03-29

    Illicit nuclear trafficking networks are a national security threat. These networks can directly lead to nuclear proliferation, as state or non-state actors attempt to identify and acquire nuclear weapons-related expertise, technologies, components, and materials. The ability to characterize and anticipate the key nodes, transit routes, and exchange mechanisms associated with these networks is essential to influence, disrupt, interdict or destroy the function of the networks and their processes. The complexities inherent to the characterization and anticipation of illicit nuclear trafficking networks requires that a variety of modeling and knowledge technologies be jointly harnessed to construct an effective analytical and decision making workflow in which specific case studies can be built in reasonable time and with realistic effort. In this paper, we explore a solution to this challenge that integrates evidentiary and dynamic modeling with knowledge management and analytical gaming, and demonstrate its application to a geopolitical region at risk.

  16. ANALYTICAL METHODS FOR HAZARDOUS ORGANICS IN LIQUID WASTES FROM COAL GASIFICATION AND LIQUEFACTION PROCESSES

    EPA Science Inventory

    This study was conducted by the University of Southern California group to provide methods for the analysis of coal liquefaction wastes from coal conversion processing plants. Several methods of preliminary fractionation prior to analysis were considered. The most satisfactory me...

  17. Building the process-drug-side effect network to discover the relationship between biological processes and side effects

    PubMed Central

    2011-01-01

    Background Side effects are unwanted responses to drug treatment and are important resources for human phenotype information. The recent development of a database on side effects, the side effect resource (SIDER), is a first step in documenting the relationship between drugs and their side effects. It is, however, insufficient to simply find the association of drugs with biological processes; that relationship is crucial because drugs that influence biological processes can have an impact on phenotype. Therefore, knowing which processes respond to drugs that influence the phenotype will enable more effective and systematic study of the effect of drugs on phenotype. To the best of our knowledge, the relationship between biological processes and side effects of drugs has not yet been systematically researched. Methods We propose 3 steps for systematically searching relationships between drugs and biological processes: enrichment score (ES) calculation, t-score calculation, and threshold-based filtering. Subsequently, the side effect-related biological processes are found by merging the drug-biological process network and the drug-side effect network. Evaluation is conducted in 2 ways: first, by discerning the number of biological processes discovered by our method that co-occur with Gene Ontology (GO) terms in relation to effects extracted from PubMed records using a text-mining technique; and second, by determining whether there is improvement in performance by limiting the processes responding to drugs that share the same side effect to frequent ones alone. Results The multi-level network (the process-drug-side effect network) was built by merging the drug-biological process network and the drug-side effect network. We generated a network of 74 drugs-168 side effects-2209 biological process relation resources. The preliminary results showed that the process-drug-side effect network was able to find meaningful relationships between biological processes and side effects in an
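
    The merging step — joining a drug→process network with a drug→side-effect network on the shared drug — can be sketched with plain dictionaries. All drug, process, and side-effect names below are invented placeholders, not entries from SIDER.

```python
from collections import defaultdict

# Hypothetical input networks (drug -> associated items).
drug_to_process = {
    "drugA": {"inflammatory response", "lipid metabolism"},
    "drugB": {"lipid metabolism"},
}
drug_to_side_effect = {
    "drugA": {"nausea"},
    "drugB": {"nausea", "headache"},
}

def link_processes_to_side_effects(d2p, d2se):
    """Connect a biological process to a side effect whenever at least one
    drug is associated with both, recording the linking drugs."""
    links = defaultdict(set)
    for drug, processes in d2p.items():
        for se in d2se.get(drug, ()):
            for proc in processes:
                links[(proc, se)].add(drug)
    return dict(links)
```

    In the full method, the ES and t-score filtering steps would prune these candidate links before the merge; here the join alone illustrates the multi-level network construction.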

  18. Study of an ultrasound-based process analytical tool for homogenization of nanoparticulate pharmaceutical vehicles.

    PubMed

    Cavegn, Martin; Douglas, Ryan; Akkermans, Guy; Kuentz, Martin

    2011-08-01

    There are currently no adequate process analyzers for nanoparticulate viscosity enhancers. This article aims to evaluate ultrasonic resonator technology as a monitoring tool for homogenization of nanoparticulate gels. Aqueous dispersions of colloidal microcrystalline cellulose (MCC) and a mixture of clay particles with xanthan gum were compared with colloidal silicon dioxide in oil. The processing was conducted using a laboratory-scale homogenizing vessel. The study first investigated the homogenization kinetics of the different systems and then focused on process factors in the case of colloidal MCC. Moreover, rheological properties were analyzed offline to assess the structure of the resulting gels. Results showed the suitability of ultrasound velocimetry to monitor the homogenization process. The obtained data were fitted using a novel heuristic model. It was possible to identify characteristic homogenization times for each formulation. The subsequent study of the process factors demonstrated that ultrasonic process analysis was as sensitive as offline rheological measurements in detecting subtle manufacturing changes. It can be concluded that the ultrasonic method was able to successfully assess homogenization of nanoparticulate viscosity enhancers. This novel technique can become a vital tool for development and production of pharmaceutical suspensions in the future. PMID:21412782

  19. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    SciTech Connect

    Saini, K. K.; Saini, Sanju

    2008-10-07

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence technique in the modeling of industrial processes.

  20. Real-time determination of critical quality attributes using near-infrared spectroscopy: a contribution for Process Analytical Technology (PAT).

    PubMed

    Rosas, Juan G; Blanco, Marcel; González, Josep M; Alcalà, Manel

    2012-08-15

    Process Analytical Technology (PAT) is playing a central role in current regulations on pharmaceutical production processes. Proper understanding of all operations and variables connecting the raw materials to end products is one of the keys to ensuring product quality and continuous improvement in production. Near infrared spectroscopy (NIRS) has been successfully used to develop faster, non-invasive quantitative methods for real-time prediction of critical quality attributes (CQA) of pharmaceutical granulates (API content, pH, moisture, flowability, angle of repose and particle size). NIR spectra were acquired from the bin blender after the granulation process in a non-classified area without the need for sample withdrawal. The methodology used for data acquisition, calibration modelling and method application in this context is relatively inexpensive and can be easily implemented by most pharmaceutical laboratories. For this purpose, the Partial Least-Squares (PLS) algorithm was used to calculate multivariate calibration models that provided acceptable Root Mean Square Error of Prediction (RMSEP) values (RMSEP(API)=1.0 mg/g; RMSEP(pH)=0.1; RMSEP(Moisture)=0.1%; RMSEP(Flowability)=0.6 g/s; RMSEP(Angle of repose)=1.7° and RMSEP(Particle size)=2.5%), allowing application to routine analyses of production batches. The proposed method affords quality assessment of end products and the determination of important parameters with a view to understanding production processes used by the pharmaceutical industry. As shown here, the NIRS technique is a highly suitable tool for Process Analytical Technology. PMID:22841062
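As a reminder of the figure of merit quoted above, RMSEP is simply the root mean square difference between model predictions and reference values over a validation set. The numbers below are invented for illustration, not taken from the paper:

```python
import math

def rmsep(predicted, reference):
    """Root Mean Square Error of Prediction, the metric reported
    for the NIR/PLS calibration models above."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# Hypothetical API contents (mg/g): NIR predictions vs. reference assay.
pred = [99.2, 101.1, 100.4, 98.7]
ref  = [100.0, 100.5, 100.0, 99.5]
print(round(rmsep(pred, ref), 3))  # 0.671, on the same scale as RMSEP(API)=1.0 mg/g
```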

  1. Applying decision-making tools to national e-waste recycling policy: an example of Analytic Hierarchy Process.

    PubMed

    Lin, Chun-Hsu; Wen, Lihchyi; Tsai, Yue-Mi

    2010-05-01

    As policy making is in essence a process of discussion, decision-making tools have in many cases been proposed to resolve the differences of opinion among the different parties. In our project that sought to promote a country's performance in recycling, we used the Analytic Hierarchy Process (AHP) to evaluate the possibilities and determine the priority of the addition of new mandatory recycled waste, also referred to as Due Recycled Wastes, from candidate waste appliances. The evaluation process started with the collection of data based on telephone interviews and field investigations to understand the behavior of consumers as well as their overall opinions regarding the disposal of certain waste appliances. With the data serving as background information, the research team then implemented the Analytic Hierarchy Process using the information that formed an incomplete hierarchy structure in order to determine the priority for recycling. Since the number of objects to be evaluated exceeded the number that the AHP researchers had suggested, we reclassified the objects into four groups and added one more level of pair-wise comparisons, which substantially reduced the inconsistency in the judgment of the AHP participants. The project was found to serve as a flexible and achievable application of AHP to the environmental policy-making process. In addition, based on the outcomes of the project as a whole, the research team drew conclusions regarding the government's need to take back 15 of the items evaluated, and suggested instruments that could be used or recycling regulations that could be changed in the future. Further analysis on the top three items recommended by the results of the evaluation for recycling, namely, Compact Disks, Cellular Phones and Computer Keyboards, was then conducted to clarify their concrete feasibility. After the trial period for recycling ordered by the Taiwan Environmental Protection Administration, only Computer
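The abstract notes that regrouping the objects substantially reduced judgment inconsistency. In AHP, inconsistency is conventionally measured by Saaty's consistency ratio; ratios below about 0.1 are considered acceptable. A minimal sketch: the random-index values are Saaty's published constants, while both matrices are invented examples:

```python
import numpy as np

def consistency_ratio(A):
    """Saaty consistency ratio for an n x n pairwise comparison matrix.
    Values below ~0.1 are conventionally acceptable in AHP."""
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()  # principal eigenvalue >= n
    ci = (lam_max - n) / (n - 1)               # consistency index
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random indices
    return ci / RI

# Perfectly consistent matrix (each entry equals the product along any path):
A = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
print(consistency_ratio(A))  # ~0 for a consistent matrix
```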

  2. Using Graph-Based Assessments within Socratic Tutorials to Reveal and Refine Students' Analytical Thinking about Molecular Networks

    ERIC Educational Resources Information Center

    Trujillo, Caleb; Cooper, Melanie M.; Klymkowsky, Michael W.

    2012-01-01

    Biological systems, from the molecular to the ecological, involve dynamic interaction networks. To examine student thinking about networks we used graphical responses, since they are easier to evaluate for implied, but unarticulated assumptions. Senior college level molecular biology students were presented with simple molecular level scenarios;…

  3. Students' Personal Networks in Virtual and Personal Learning Environments: A Case Study in Higher Education Using Learning Analytics Approach

    ERIC Educational Resources Information Center

    Casquero, Oskar; Ovelar, Ramón; Romo, Jesús; Benito, Manuel; Alberdi, Mikel

    2016-01-01

    The main objective of this paper is to analyse the effect of the affordances of a virtual learning environment and a personal learning environment (PLE) in the configuration of the students' personal networks in a higher education context. The results are discussed in light of the adaptation of the students to the learning network made up by two…

  4. Cognitive Components of a Mathematical Processing Network in 9-Year-Old Children

    ERIC Educational Resources Information Center

    Szucs, Dénes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence

    2014-01-01

    We determined how various cognitive abilities, including several measures of a proposed domain-specific number sense, relate to mathematical competence in nearly 100 9-year-old children with normal reading skill. Results are consistent with an extended number processing network and suggest that important processing nodes of this network are…

  5. Neural network post-processing of grayscale optical correlator

    NASA Technical Reports Server (NTRS)

    Lu, Thomas T; Hughlett, Casey L.; Zhoua, Hanying; Chao, Tien-Hsin; Hanan, Jay C.

    2005-01-01

    In this paper we present the use of a radial basis function neural network (RBFNN) as a post-processor to assist the optical correlator to identify the objects and to reject false alarms. Image plane features near the correlation peaks are extracted and fed to the neural network for analysis. The approach is capable of handling a large number of object variations and filter sets. Preliminary experimental results are presented and the performance is analyzed.
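A radial basis function network of the kind described, Gaussian hidden units followed by a linear output layer, can be sketched on toy two-class data standing in for correlation-peak features. Everything below (centers, kernel width, synthetic data) is illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for correlation-peak features: class 0 = true targets,
# class 1 = false alarms, well separated in a 2-D feature space.
X0 = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
X1 = rng.normal(loc=2.0, scale=0.3, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# RBF hidden layer: Gaussian activations around fixed centers.
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
sigma = 0.5

def hidden(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Linear output weights fitted by least squares, one common way
# to train the output layer of an RBFNN.
H = np.hstack([hidden(X), np.ones((len(X), 1))])  # bias column
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = (H @ w > 0.5).astype(int)
print((pred == y).mean())  # classification accuracy on the toy data
```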

  6. Decision making using AHP (Analytic Hierarchy Process) and fuzzy set theory in waste management

    SciTech Connect

    Chung, J.Y.; Lee, K.J.; Kim, C.D.

    1995-12-31

    The major problem is how to consider differences of opinion when many experts are involved in the decision-making process. This paper provides a simple general methodology to treat the differences in various opinions. The authors determined the grade of membership through the process of magnitude estimation derived from pairwise comparisons and the AHP developed by Saaty. They used fuzzy set theory to consider the differences in opinions and obtain the priorities for each alternative. An example, which can be applied to radioactive waste management, was also presented. The result shows good agreement with the results of averaging methods.
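Deriving priorities from a Saaty-style pairwise comparison matrix, as the abstract describes, is commonly approximated with the row geometric-mean method rather than an explicit eigenvector computation. A minimal sketch with an invented 3x3 matrix:

```python
import numpy as np

# Saaty-style pairwise comparison matrix for 3 alternatives:
# entry [i, j] states how strongly alternative i is preferred over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Geometric-mean approximation of the principal eigenvector,
# a standard way to extract AHP priority weights.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print(weights)  # weights sum to 1; the first alternative ranks highest here
```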

  7. MALDI based identification of soybean protein markers--possible analytical targets for allergen detection in processed foods.

    PubMed

    Cucu, Tatiana; De Meulenaer, Bruno; Devreese, Bart

    2012-02-01

    Soybean (Glycine max) is extensively used all over the world due to its nutritional qualities. However, soybean is included in the "big eight" list of food allergens. According to the EU directive 2007/68/EC, food products containing soybeans have to be labeled in order to protect the allergic consumers. Nevertheless, soybeans can still inadvertently be present in food products. The development of analytical methods for the detection of traces of allergens is important for the protection of allergic consumers. Mass spectrometry of marker proteolytical fragments of protein allergens is growingly recognized as a detection method in food control. However, quantification of soybean at the peptide level is hindered due to limited information regarding specific stable markers derived after proteolytic digestion. The aim of this study was to use MALDI-TOF/MS and MS/MS as a fast screening tool for the identification of stable soybean derived tryptic markers which were still identifiable even if the proteins were subjected to various changes at the molecular level through a number of reactions typically occurring during food processing (denaturation, the Maillard reaction and oxidation). The peptides (401)Val-Arg(410) from the G1 glycinin (Gly m 6) and the (518)Gln-Arg(528) from the α' chain of the β-conglycinin (Gly m 5) proved to be the most stable. These peptides hold potential to be used as targets for the development of new analytical methods for the detection of soybean protein traces in processed foods. PMID:22212959

  8. Bibliographic Post-Processing with the TIS Intelligent Gateway: Analytical and Communication Capabilities.

    ERIC Educational Resources Information Center

    Burton, Hilary D.

    TIS (Technology Information System) is an intelligent gateway system capable of performing quantitative evaluation and analysis of bibliographic citations using a set of Process functions. Originally developed by Lawrence Livermore National Laboratory (LLNL) to analyze information retrieved from three major federal databases, DOE/RECON,…

  9. Analytical study of space processing of immiscible materials for superconductors and electrical contacts

    NASA Technical Reports Server (NTRS)

    Gelles, S. H.; Collings, E. W.; Abbott, W. H.; Maringer, R. E.

    1977-01-01

    The results of a study conducted to determine the role that space processing, or materials research in space, plays in the superconductor and electrical contact industries are presented. Visits were made to manufacturers, users, and research organizations connected with these products to provide information about the potential benefits of the space environment and to exchange views on the utilization of space facilities for manufacture, process development, or research. In addition, space experiments were suggested which could result in improved terrestrial processes or products. Notable examples of these are, in the case of superconductors, the development of Nb-bronze alloys (Tsuei alloys) and, in the electrical contact field, the production of Ag-Ni or Ag-metal oxide alloys with controlled microstructure for research and development activities as well as for product development. A preliminary experimental effort to produce and evaluate rapidly cooled Pb-Zn and Cu-Nb-Sn alloys, in order to understand the relationship between microstructure and superconducting properties and to simulate the fine structure potentially achievable by space processing, is also described.

  10. Effects of video-game play on information processing: a meta-analytic investigation.

    PubMed

    Powers, Kasey L; Brooks, Patricia J; Aldrich, Naomi J; Palladino, Melissa A; Alfieri, Louis

    2013-12-01

    Do video games enhance cognitive functioning? We conducted two meta-analyses based on different research designs to investigate how video games impact information-processing skills (auditory processing, executive functions, motor skills, spatial imagery, and visual processing). Quasi-experimental studies (72 studies, 318 comparisons) compare habitual gamers with controls; true experiments (46 studies, 251 comparisons) use commercial video games in training. Using random-effects models, video games led to improved information processing in both the quasi-experimental studies, d = 0.61, 95% CI [0.50, 0.73], and the true experiments, d = 0.48, 95% CI [0.35, 0.60]. Whereas the quasi-experimental studies yielded small to large effect sizes across domains, the true experiments yielded negligible effects for executive functions, which contrasted with the small to medium effect sizes in other domains. The quasi-experimental studies appeared more susceptible to bias than were the true experiments, with larger effects being reported in higher-tier than in lower-tier journals, and larger effects reported by the most active research groups in comparison with other labs. The results are further discussed with respect to other moderators and limitations in the extant literature. PMID:23519430
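The random-effects models referred to above are typically fitted with the DerSimonian-Laird estimator, which adds an estimated between-study variance to each study's sampling variance before inverse-variance pooling. A sketch with hypothetical effect sizes and variances, not the paper's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size (DerSimonian-Laird estimator).
    Returns the pooled d and its 95% confidence interval."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * di for wi, di in zip(w, effects)) / sw
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))  # heterogeneity
    k = len(effects)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study standardized mean differences and their variances:
d, ci = dersimonian_laird([0.5, 0.7, 0.4, 0.8], [0.02, 0.03, 0.025, 0.04])
print(d, ci)
```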

  11. Development of process parameters for 22 nm PMOS using 2-D analytical modeling

    SciTech Connect

    Maheran, A. H. Afifah; Menon, P. S.; Shaari, S.; Ahmad, I.; Faizah, Z. A. Noor

    2015-04-24

    Scaling and integration of the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) have become a major challenge. Innovation in transistor structures and integration of novel materials are necessary to sustain this performance trend. CMOS variability is becoming a very important concern in scaled technologies due to the limitations of process control and to statistical variability related to fundamental discreteness and materials. Minimizing transistor variation through technology optimization and ensuring robust product functionality and performance is the major issue. In this article, the continuation study on process parameter variations is extended and delivered thoroughly in order to achieve a minimum leakage current (ILEAK) in a planar PMOS transistor at 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of process parameters for fabrication. A combination of a high-permittivity (high-k) material and a metal gate is utilized as the gate structure, the materials being titanium dioxide (TiO2) and tungsten silicide (WSix). The Taguchi L9 orthogonal array is then used to analyze the device simulations, where the signal-to-noise ratio (SNR) under the Smaller-the-Better (STB) scheme is studied through the percentage influence of each process parameter. The goal is a minimum ILEAK, which the International Technology Roadmap for Semiconductors (ITRS) 2011 predicts should not exceed 100 nA/µm. Final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The optimal combination of process parameters results in a mean ILEAK of 3.96821 nA/µm, far below the predicted limit.

  12. Development of process parameters for 22 nm PMOS using 2-D analytical modeling

    NASA Astrophysics Data System (ADS)

    Maheran, A. H. Afifah; Menon, P. S.; Ahmad, I.; Shaari, S.; Faizah, Z. A. Noor

    2015-04-01

    Scaling and integration of the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) have become a major challenge. Innovation in transistor structures and integration of novel materials are necessary to sustain this performance trend. CMOS variability is becoming a very important concern in scaled technologies due to the limitations of process control and to statistical variability related to fundamental discreteness and materials. Minimizing transistor variation through technology optimization and ensuring robust product functionality and performance is the major issue. In this article, the continuation study on process parameter variations is extended and delivered thoroughly in order to achieve a minimum leakage current (ILEAK) in a planar PMOS transistor at 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of process parameters for fabrication. A combination of a high-permittivity (high-k) material and a metal gate is utilized as the gate structure, the materials being titanium dioxide (TiO2) and tungsten silicide (WSix). The Taguchi L9 orthogonal array is then used to analyze the device simulations, where the signal-to-noise ratio (SNR) under the Smaller-the-Better (STB) scheme is studied through the percentage influence of each process parameter. The goal is a minimum ILEAK, which the International Technology Roadmap for Semiconductors (ITRS) 2011 predicts should not exceed 100 nA/µm. Final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The optimal combination of process parameters results in a mean ILEAK of 3.96821 nA/µm, far below the predicted limit.
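The Smaller-the-Better signal-to-noise ratio used in the Taguchi analyses above has a standard form, SNR = -10·log10(mean of squared responses), so lower responses yield a larger (better) SNR. A minimal sketch with invented leakage readings:

```python
import math

def snr_smaller_the_better(values):
    """Taguchi signal-to-noise ratio for a Smaller-the-Better response,
    e.g. leakage current readings from repeated simulation runs."""
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Hypothetical leakage currents (nA/um) for two process-parameter settings:
low_leak = [3.9, 4.1, 4.0]
high_leak = [40.0, 38.5, 41.2]
print(snr_smaller_the_better(low_leak))   # larger SNR = lower leakage = better
print(snr_smaller_the_better(high_leak))
```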

  13. At-line process analytical technology (PAT) for more efficient scale up of biopharmaceutical microfiltration unit operations.

    PubMed

    Watson, Douglas S; Kerchner, Kristi R; Gant, Sean S; Pedersen, Joseph W; Hamburger, James B; Ortigosa, Allison D; Potgieter, Thomas I

    2016-01-01

    Tangential flow microfiltration (MF) is a cost-effective and robust bioprocess separation technique, but successful full scale implementation is hindered by the empirical, trial-and-error nature of scale-up. We present an integrated approach leveraging at-line process analytical technology (PAT) and mass balance based modeling to de-risk MF scale-up. Chromatography-based PAT was employed to improve the consistency of an MF step that had been a bottleneck in the process used to manufacture a therapeutic protein. A 10-min reverse phase ultra high performance liquid chromatography (RP-UPLC) assay was developed to provide at-line monitoring of protein concentration. The method was successfully validated and method performance was comparable to previously validated methods. The PAT tool revealed areas of divergence from a mass balance-based model, highlighting specific opportunities for process improvement. Adjustment of appropriate process controls led to improved operability and significantly increased yield, providing a successful example of PAT deployment in the downstream purification of a therapeutic protein. The general approach presented here should be broadly applicable to reduce risk during scale-up of filtration processes and should be suitable for feed-forward and feed-back process control. © 2015 American Institute of Chemical Engineers Biotechnol. Prog., 32:108-115, 2016. PMID:26519135

  14. An analytical and numerical study of Galton-Watson branching processes relevant to population dynamics

    NASA Astrophysics Data System (ADS)

    Jang, Sa-Han

    Galton-Watson branching processes of relevance to human population dynamics are the subject of this thesis. We begin with a historical survey of the invention of this model in the middle of the 19th century, for the purpose of modelling the extinction of unusual surnames in France and Britain. We then review the principal developments and refinements of this model, and their applications to a wide variety of problems in biology and physics. Next, we discuss in detail the case where the probability generating function for a Galton-Watson branching process is a geometric series, which can be summed in closed form to yield a fractional linear generating function that can be iterated indefinitely in closed form. We then describe the matrix method of Keyfitz and Tyree, and use it to determine how large a matrix must be chosen to model accurately a Galton-Watson branching process over a very large number of generations, of the order of hundreds or even thousands. Finally, we show that any attempt to explain the recent evidence for the existence, thousands of generations ago, of a 'mitochondrial Eve' and a 'Y-chromosomal Adam' in terms of the standard Galton-Watson branching process, or indeed any statistical model that assumes equal probabilities of passing one's genes to one's descendants in later generations, is unlikely to be successful. We explain that such models take no account of the advantages that the descendants of the most successful individuals in earlier generations enjoy over their contemporaries, which must play a key role in human evolution.
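The geometric-offspring case discussed above is tractable precisely because its probability generating function f(s) = p/(1 - qs) is fractional linear, so iterates stay fractional linear. Even done numerically, iterating the pgf at s = 0 gives the probability of extinction by generation n, which converges to the overall extinction probability. A sketch with illustrative parameters:

```python
# Offspring distribution: geometric, P(k) = p * q**k for k = 0, 1, 2, ...
# Its pgf f(s) = p / (1 - q*s) is fractional linear, so the n-fold iterate
# has a closed form; here we simply iterate numerically.
p, q = 0.4, 0.6  # mean offspring q/p = 1.5, so the process is supercritical

def f(s):
    return p / (1.0 - q * s)

# Probability the line is extinct by generation n is the n-th iterate at s = 0;
# it converges to the smallest root of f(s) = s.
s = 0.0
for _ in range(200):
    s = f(s)
print(s)  # ~0.6667: extinction probability 2/3, survival probability 1/3
```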

  15. Meta-analytic evidence for the non-modularity of pitch processing in congenital amusia.

    PubMed

    Vuvan, Dominique T; Nunes-Silva, Marilia; Peretz, Isabelle

    2015-08-01

    A major theme driving research in congenital amusia is related to the modularity of this musical disorder, with two possible sources of the amusic pitch perception deficit. The first possibility is that the amusic deficit is due to a broad disorder of acoustic pitch processing that has the effect of disrupting downstream musical pitch processing, and the second is that amusia is specific to a musical pitch processing module. To interrogate these hypotheses, we performed a meta-analysis on two types of effect sizes contained within 42 studies in the amusia literature: the performance gap between amusics and controls on tasks of pitch discrimination, broadly defined, and the correlation between specifically acoustic pitch perception and musical pitch perception. To augment the correlation database, we also calculated this correlation using data from 106 participants tested by our own research group. We found strong evidence for the acoustic account of amusia. The magnitude of the performance gap was moderated by the size of pitch change, but not by whether the stimuli were composed of tones or speech. Furthermore, there was a significant correlation between an individual's acoustic and musical pitch perception. However, individual cases show a double dissociation between acoustic and musical processing, which suggests that although most amusic cases are probably explainable by an acoustic deficit, there is heterogeneity within the disorder. Finally, we found that tonal language fluency does not influence the performance gap between amusics and controls, and that there was no evidence that amusics fare worse with pitch direction tasks than pitch discrimination tasks. These results constitute a quantitative review of the current literature of congenital amusia, and suggest several new directions for research, including the experimental induction of amusic behaviour through transcranial magnetic stimulation (TMS) and the systematic exploration of the developmental

  16. Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.

    2016-01-01

    Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.

  17. Implementation of an analytical Raman scattering correction for satellite ocean-color processing.

    PubMed

    McKinna, Lachlan I W; Werdell, P Jeremy; Proctor, Christopher W

    2016-07-11

    Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs. PMID:27410899

  18. Process analytical technology case study part I: feasibility studies for quantitative near-infrared method development.

    PubMed

    Cogdill, Robert P; Anderson, Carl A; Delgado-Lopez, Miriam; Molseed, David; Chisholm, Robert; Bolton, Raymond; Herkert, Thorsten; Afnán, Ali M; Drennen, James K

    2005-01-01

    This article is the first of a series of articles detailing the development of near-infrared (NIR) methods for solid-dosage form analysis. Experiments were conducted at the Duquesne University Center for Pharmaceutical Technology to qualify the capabilities of instrumentation and sample handling systems, evaluate the potential effect of one source of a process signature on calibration development, and compare the utility of reflection and transmission data collection methods. A database of 572 production-scale sample spectra was used to evaluate the interbatch spectral variability of samples produced under routine manufacturing conditions. A second database of 540 spectra from samples produced under various compression conditions was analyzed to determine the feasibility of pooling spectral data acquired from samples produced at diverse scales. Instrument qualification tests were performed, and appropriate limits for instrument performance were established. To evaluate the repeatability of the sample positioning system, multiple measurements of a single tablet were collected. With the application of appropriate spectral preprocessing techniques, sample repositioning error was found to be insignificant with respect to NIR analyses of product quality attributes. Sample shielding was demonstrated to be unnecessary for transmission analyses. A process signature was identified in the reflection data. Additional tests demonstrated that the process signature was largely orthogonal to spectral variation because of hardness. Principal component analysis of the compression sample set data demonstrated the potential for quantitative model development. For the data sets studied, reflection analysis was demonstrated to be more robust than transmission analysis. PMID:16353986

  19. Active content determination of pharmaceutical tablets using near infrared spectroscopy as Process Analytical Technology tool.

    PubMed

    Chavez, Pierre-François; Sacré, Pierre-Yves; De Bleye, Charlotte; Netchacovitch, Lauranne; Mantanus, Jérôme; Motte, Henri; Schubert, Martin; Hubert, Philippe; Ziemons, Eric

    2015-11-01

    The aim of this study was to develop Near infrared (NIR) methods to determine the active content of non-coated pharmaceutical tablets manufactured from a proportional tablet formulation. These NIR methods are intended for monitoring the active content of tablets during the tableting process. Firstly, methods were developed in transmission and reflection modes to quantify the API content of the lowest dosage strength. Secondly, these methods were fully validated for a concentration range of 70-130% of the target active content using the accuracy profile approach based on β-expectation tolerance intervals. The model using the transmission mode showed a better ability to predict the right active content compared to the reflection one. However, the ability of the reflection mode to quantify the API content in the highest dosage strength was assessed. Furthermore, the NIR method based on the transmission mode was successfully used to monitor at-line the tablet active content during the tableting process, providing better insight into the API content during the process. This improvement of control of the product quality provided by this PAT method is thoroughly compliant with the Quality by Design (QbD) concept. Finally, the transfer of the transmission model from the off-line to an on-line spectrometer was efficiently investigated. PMID:26452969

  20. The analytical model for vortex ring pinch-off process based on the energy extremum principle

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Liu, Hong; Qin, Suyang; Wang, Fuxin

    2015-11-01

    The discovery of vortex ring pinch-off has greatly helped clarify the mechanism of optimal vortex formation, which in turn bears on optimal biological propulsion in animals. Pinch-off marks the limiting formation of a vortex ring and is governed by the energy extremum principle. However, pinch-off is found to be a continuous process rather than an event at a single instant. This raises the question of how to identify the onset and end of the pinch-off process. Based on the Kelvin-Benjamin variational principle, a dimensionless energy number is adopted to characterize the energy evolution of vortex rings. The vortex ring flow fields are obtained by DPIV with a piston-cylinder setup, and their geometric structures are identified using Lagrangian coherent structures (LCSs). The results show that the dimensionless energy numbers of steadily translating vortex rings share a critical value. It is then demonstrated that the dimensionless energy number governs the onset and end of the pinch-off process, both of which can also be identified using LCSs. Additionally, based on the dimensionless energy number or the LCSs, the corresponding vortex ring formation times (L/D) for the onset and end of pinch-off are consistent.

  1. A Hybrid Authentication and Authorization Process for Control System Networks

    SciTech Connect

    Manz, David O.; Edgar, Thomas W.; Fink, Glenn A.

    2010-08-25

    The convergence of control system and IT networks requires that security, privacy, and trust be addressed. Trust management continues to plague traditional IT managers, and it becomes even more complex when extended into control system networks, with potentially millions of entities and a mission that requires 100% availability. Yet these very networks necessitate a trusted, secure environment in which controllers and managers can be assured that the systems are secure and functioning properly. We propose a hybrid authentication management protocol that addresses the unique issues inherent in control system networks while leveraging the considerable research and momentum behind existing IT authentication schemes. Our hybrid authentication protocol for control systems provides end-device-to-end-device authentication within a remote station and between remote stations and control centers. Additionally, the hybrid protocol is failsafe and will not interrupt communication or control of vital systems during a network partition or device failure. Finally, the hybrid protocol is resilient to transitory link loss and can operate in island mode until connectivity is reestablished.

  2. Software Analytical Instrument for Assessment of the Process of Casting Slabs

    SciTech Connect

    Franek, Zdenek; Kavicka, Frantisek; Stetina, Josef; Masarik, Milos

    2010-06-15

    The paper describes the original design, operation, and functionality of software for assessing the slab casting process. The program system LITIOS was developed and implemented at EVRAZ Vitkovice Steel Ostrava on the continuous casting equipment (hereafter ECC). The system works on a data warehouse of technological casting parameters and slab quality parameters. It enables an ECC technologist to analyze the course of a casting melt and, using statistical methods, to determine the influence of individual technological parameters on the quality of the final slabs. The system also enables long-term monitoring and optimization of production.

  3. Design process of a photonics network for military platforms

    NASA Astrophysics Data System (ADS)

    Nelson, George F.; Rao, Nagarajan M.; Krawczak, John A.; Stevens, Rick C.

    1999-02-01

    Technology development in photonics is progressing rapidly. The concept of a Unified Network will provide reconfigurable network access to platform sensors, Vehicle Management Systems, Stores, and avionics. The reconfigurable taps into the network will accommodate present interface standards and provide scalability for the insertion of future interfaces. Significant to this development is the design and test of the Optical Backplane Interconnect System (OBIS), funded by Naval Air Systems Command and developed by Lockheed Martin Tactical Defense Systems - Eagan. OBIS merges the electrical and optical backplanes, with the interconnect fabric and card-edge connectors finally providing adequate electrical and optical card access. Presently, OBIS supports 1.2 Gb/s per fiber over multiples of 12 fibers per ribbon cable.

  4. Virtual optical network provisioning with unified service logic processing model for software-defined multidomain optical networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Li, Shikun; Song, Yinan; Sun, Ji; Zhang, Jie

    2015-12-01

    A hierarchical control architecture is designed for software-defined multidomain optical networks (SD-MDONs), and a unified service logic processing model (USLPM) is first proposed for various applications. A USLPM-based virtual optical network (VON) provisioning process is designed, and two VON mapping algorithms are proposed: random node selection with per-controller computation (RNS&PCC) and balanced node selection with hierarchical controller computation (BNS&HCC). An SD-MDON testbed is then built with OpenFlow extensions to support optical transport equipment. Finally, VON provisioning is experimentally demonstrated on the testbed along with performance verification.
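    The abstract does not detail the two mapping algorithms, but "balanced" node selection is commonly implemented as a greedy choice of the substrate node with the largest residual capacity for each virtual node. A minimal sketch under that assumption (node names and capacities are hypothetical):

```python
def balanced_node_mapping(substrate, virtual_demands):
    """Map each virtual node to the substrate node with the most
    residual capacity (a common 'balanced' greedy heuristic)."""
    residual = dict(substrate)
    mapping = []
    for demand in virtual_demands:
        node = max(residual, key=residual.get)
        if residual[node] < demand:
            raise ValueError("no substrate node can host this virtual node")
        residual[node] -= demand
        mapping.append(node)
    return mapping

# Two virtual nodes of demand 4 spread across the least-loaded substrate nodes
vmap = balanced_node_mapping({"A": 10, "B": 5, "C": 8}, [4, 4])  # ['A', 'C']
```

    Random node selection (the RNS&PCC baseline) would simply draw the substrate node uniformly from those with sufficient residual capacity.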

  5. An Analytical Framework for Studying Small-Number Effects in Catalytic Reaction Networks: A Probability Generating Function Approach to Chemical Master Equations

    PubMed Central

    Nakagawa, Masaki; Togashi, Yuichi

    2016-01-01

    Cell activities primarily depend on chemical reactions, especially those mediated by enzymes, and this has led to these activities being modeled as catalytic reaction networks. Although deterministic ordinary differential equations of concentrations (rate equations) have been widely used for modeling purposes in the field of systems biology, it has been pointed out that these catalytic reaction networks may behave in a way that is qualitatively different from such deterministic representation when the number of molecules for certain chemical species in the system is small. Apart from this, representing these phenomena by simple binary (on/off) systems that omit the quantities would also not be feasible. As recent experiments have revealed the existence of rare chemical species in cells, the importance of being able to model potential small-number phenomena is being recognized. However, most preceding studies were based on numerical simulations, and theoretical frameworks to analyze these phenomena have not been sufficiently developed. Motivated by the small-number issue, this work aimed to develop an analytical framework for the chemical master equation describing the distributional behavior of catalytic reaction networks. For simplicity, we considered networks consisting of two-body catalytic reactions. We used the probability generating function method to obtain the steady-state solutions of the chemical master equation without specifying the parameters. We obtained the time evolution equations of the first- and second-order moments of concentrations, and the steady-state analytical solution of the chemical master equation under certain conditions. These results led to the rank conservation law, the connecting state to the winner-takes-all state, and analysis of 2-molecules M-species systems. A possible interpretation of the theoretical conclusion for actual biochemical pathways is also discussed. PMID:27047384
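    For the simplest case, a one-species birth-death system (production at rate k, degradation at rate γ per molecule), the chemical master equation has a Poisson steady state with mean k/γ, which the generating-function method recovers analytically. A numerical sketch of the truncated steady state (a toy example of the same machinery, not the authors' two-body catalytic networks):

```python
def birth_death_steady_state(k, gamma, n_max=100):
    """Steady state of the CME for births at rate k and deaths at rate
    gamma*n, via detailed balance: p_{n+1} = p_n * k / (gamma * (n+1)).
    Returns the normalized probabilities p_0 .. p_{n_max}."""
    p = [1.0]
    for n in range(n_max):
        p.append(p[-1] * k / (gamma * (n + 1)))
    total = sum(p)
    return [x / total for x in p]

p = birth_death_steady_state(k=4.0, gamma=1.0)
mean_n = sum(n * pn for n, pn in enumerate(p))  # should be close to k/gamma = 4
```

    The small-number effects discussed in the paper arise precisely where such distributions are broad or multimodal, so that the mean (the rate-equation prediction) misrepresents the typical state.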

  6. An Analytical Framework for Studying Small-Number Effects in Catalytic Reaction Networks: A Probability Generating Function Approach to Chemical Master Equations.

    PubMed

    Nakagawa, Masaki; Togashi, Yuichi

    2016-01-01

    Cell activities primarily depend on chemical reactions, especially those mediated by enzymes, and this has led to these activities being modeled as catalytic reaction networks. Although deterministic ordinary differential equations of concentrations (rate equations) have been widely used for modeling purposes in the field of systems biology, it has been pointed out that these catalytic reaction networks may behave in a way that is qualitatively different from such deterministic representation when the number of molecules for certain chemical species in the system is small. Apart from this, representing these phenomena by simple binary (on/off) systems that omit the quantities would also not be feasible. As recent experiments have revealed the existence of rare chemical species in cells, the importance of being able to model potential small-number phenomena is being recognized. However, most preceding studies were based on numerical simulations, and theoretical frameworks to analyze these phenomena have not been sufficiently developed. Motivated by the small-number issue, this work aimed to develop an analytical framework for the chemical master equation describing the distributional behavior of catalytic reaction networks. For simplicity, we considered networks consisting of two-body catalytic reactions. We used the probability generating function method to obtain the steady-state solutions of the chemical master equation without specifying the parameters. We obtained the time evolution equations of the first- and second-order moments of concentrations, and the steady-state analytical solution of the chemical master equation under certain conditions. These results led to the rank conservation law, the connecting state to the winner-takes-all state, and analysis of 2-molecules M-species systems. A possible interpretation of the theoretical conclusion for actual biochemical pathways is also discussed. PMID:27047384

  7. Complex surface analytical investigations on hydrogen absorption and desorption processes of a TiMn2-based alloy.

    PubMed

    Schülke, Mark; Kiss, Gábor; Paulus, Hubert; Lammers, Martin; Ramachandran, Vaidyanath; Sankaran, Kannan; Müller, Karl-Heinz

    2009-04-01

    Metal hydrides are one of the most promising technologies in the field of hydrogen storage due to their high volumetric storage density. Important reaction steps take place at the very surface of the solid during hydrogen absorption. Since these reaction steps are drastically influenced by the properties and potential contamination of the solid, it is very important to understand the characteristics of the surface, and a variety of analytical methods are required to achieve this. In this work, a TiMn2-type metal hydride alloy is investigated by means of high-pressure activation measurements, X-ray photoelectron spectroscopy (XPS), secondary neutral mass spectrometry (SNMS) and thermal desorption mass spectrometry (TDMS). In particular, TDMS is an analytical tool that, in contrast to SIMS or SNMS, allows the hydrogen content in a metal to be quantified. Furthermore, it allows the activation energy for desorption to be determined from TDMS profiles; the method used to achieve this is presented here in detail. In the results section, it is shown that the oxide layer formed during manufacture and long-term storage prevents any hydrogen from being absorbed, and so an activation process is required. XPS measurements show the oxide states of the main alloy elements, and a layer 18 nm thick is determined via SNMS. Furthermore, defined oxide layers are produced and characterized in UHV using XPS. The influence of these thin oxide layers on the hydrogen sorption process is examined using TDMS. Finally, the activation energy of desorption is determined for the investigated alloy using the method presented here, and values of 46 kJ/mol for hydrogen sorbed in UHV and 103 kJ/mol for hydrogen originating from the manufacturing process are obtained. PMID:19294368
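    The abstract does not name the method used to extract the activation energy from the TDMS profiles; a standard textbook choice for first-order desorption with a linear temperature ramp is Redhead's peak-maximum formula, E ≈ R·T_p·[ln(ν·T_p/β) − 3.64], valid for ν/β roughly in the 10^8-10^13 K⁻¹ range. A sketch under that assumption (not necessarily the authors' method):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def redhead_activation_energy(t_peak, heating_rate, prefactor=1e13):
    """First-order Redhead estimate of the desorption activation energy (J/mol).

    t_peak       : desorption peak temperature (K)
    heating_rate : beta, linear heating rate (K/s)
    prefactor    : attempt frequency nu (1/s); 1e13 is a common assumption
    """
    return R * t_peak * (math.log(prefactor * t_peak / heating_rate) - 3.64)

# Hypothetical peak at 400 K with a 1 K/s ramp -> on the order of 100 kJ/mol
e_act = redhead_activation_energy(400.0, 1.0)
```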

  8. CONCH: A Visual Basic program for interactive processing of ion-microprobe analytical data

    NASA Astrophysics Data System (ADS)

    Nelson, David R.

    2006-11-01

    A Visual Basic program for flexible, interactive processing of ion-microprobe data acquired for quantitative trace element, 26Al-26Mg, 53Mn-53Cr, 60Fe-60Ni and U-Th-Pb geochronology applications is described. Default but editable run-tables enable software identification of secondary ion species analyzed and for characterization of the standard used. Counts obtained for each species may be displayed in plots against analysis time and edited interactively. Count outliers can be automatically identified via a set of editable count-rejection criteria and displayed for assessment. Standard analyses are distinguished from Unknowns by matching of the analysis label with a string specified in the Set-up dialog, and processed separately. A generalized routine writes background-corrected count rates, ratios and uncertainties, plus weighted means and uncertainties for Standards and Unknowns, to a spreadsheet that may be saved as a text-delimited file. Specialized routines process trace-element concentration, 26Al-26Mg, 53Mn-53Cr, 60Fe-60Ni, and Th-U disequilibrium analysis types, and U-Th-Pb isotopic data obtained for zircon, titanite, perovskite, monazite, xenotime and baddeleyite. Correction to measured Pb-isotopic, Pb/U and Pb/Th ratios for the presence of common Pb may be made using measured 204Pb counts, or the 207Pb or 208Pb counts following subtraction from these of the radiogenic component. Common-Pb corrections may be made automatically, using a (user-specified) common-Pb isotopic composition appropriate for that on the sample surface, or for that incorporated within the mineral at the time of its crystallization, depending on whether the 204Pb count rate determined for the Unknown is substantially higher than the average 204Pb count rate for all session standards. Pb/U inter-element fractionation corrections are determined using an interactive loge-loge plot of common-Pb corrected 206Pb/238U ratios against any nominated fractionation-sensitive species pair
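    The automatic count-rejection step described above can be approximated by iterative sigma clipping; the criteria below (clip factor, iteration to convergence) are illustrative stand-ins for CONCH's editable rejection criteria:

```python
from statistics import mean, stdev

def sigma_clip(counts, k=1.5):
    """Iteratively flag count outliers more than k sample standard
    deviations from the mean, until no further points are rejected."""
    kept = list(counts)
    rejected = []
    while len(kept) > 2:
        m, s = mean(kept), stdev(kept)
        flagged = [c for c in kept if abs(c - m) > k * s]
        if not flagged:
            break
        rejected.extend(flagged)
        kept = [c for c in kept if abs(c - m) <= k * s]
    return kept, rejected

# A spurious count of 500 among otherwise stable counts is rejected
kept, rejected = sigma_clip([100, 102, 98, 101, 500])
```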

  9. Experimental and analytical investigation of the fracture processes of boron/aluminum laminates containing notches

    NASA Technical Reports Server (NTRS)

    Johnson, W. S.; Bigelow, C. A.; Bahei-El-din, Y. A.

    1983-01-01

    Experimental results for five laminate orientations of boron/aluminum composites containing either circular holes or crack-like slits are presented. Specimen stress-strain behavior, stress at first fiber failure, and ultimate strength were determined. Radiographs were used to monitor the fracture process. The specimens were analyzed with a three-dimensional elastic-plastic finite-element model. The first fiber failures in the notched multidirectional specimens occurred at or very near the specimen ultimate strength; for notched unidirectional specimens, the first fiber failure occurred at approximately one-half of the specimen ultimate strength. Acoustic emission events correlated with fiber breaks in unidirectional composites, but not in the other laminates. Circular holes and crack-like slits of the same characteristic length were found to produce approximately the same strength reduction. The predicted stress-strain responses and stresses at first fiber failure compared very well with test data for laminates containing 0 deg fibers.

  10. Investigation of the Application of Process Analytical Technology for a Laser Welding Process in Medical Device Manufacturing

    NASA Astrophysics Data System (ADS)

    Moore, Sean; Conneely, Alan; Stenzel, Eric; Murphy, Eamonn

    In FDA-regulated medical device manufacturing, real-time inspection of manufactured product is limited by the requirement to destructively test random samples of the product post-production. Infrared thermography offers the ability to non-destructively test key critical-to-quality attributes of medical devices during laser welding, and facilitates real-time statistical process control for enhanced product quality and yield. This paper presents results of research focused on non-destructive methods using infrared thermography to potentially replace destructive methods of assessment for laser-welded joints in stent delivery catheters. The approach uses designed experiments in conjunction with IR assessment and also identifies some limitations of the proposed method.

  11. Process and analytical studies of enhanced low severity co-processing using selective coal pretreatment. Final technical report

    SciTech Connect

    Baldwin, R.M.; Miller, R.L.

    1991-12-01

    The findings in the first phase were as follows: (1) both reductive (non-selective) alkylation and selective oxygen alkylation increased liquefaction reactivity for both coals; (2) selective oxygen alkylation is more effective in enhancing the reactivity of low-rank coals. In the second phase of studies, the major findings were: (1) liquefaction reactivity increases with increasing level of alkylation under both hydroliquefaction and co-processing reaction conditions; (2) the increase in reactivity found for O-alkylated Wyodak subbituminous coal is caused by chemical changes at phenolic and carboxylic functional sites; (3) O-methylation of Wyodak subbituminous coal reduced the apparent activation energy for liquefaction of this coal.

  12. The default network and self-generated thought: component processes, dynamic control, and clinical relevance

    PubMed Central

    Andrews-Hanna, Jessica R.; Smallwood, Jonathan; Spreng, R. Nathan

    2014-01-01

    Though only a decade has elapsed since the default network was first emphasized as being a large-scale brain system, recent years have brought great insight into the network’s adaptive functions. A growing theme highlights the default network as playing a key role in internally-directed—or self-generated—thought. Here, we synthesize recent findings from cognitive science, neuroscience, and clinical psychology to focus attention on two emerging topics as current and future directions surrounding the default network. First, we present evidence that self-generated thought is a multi-faceted construct whose component processes are supported by different subsystems within the network. Second, we highlight the dynamic nature of the default network, emphasizing its interaction with executive control systems when regulating aspects of internal thought. We conclude by discussing clinical implications of disruptions to the integrity of the network, and consider disorders when thought content becomes polarized or network interactions become disrupted or imbalanced. PMID:24502540

  13. On-board processing satellite network architecture and control study

    NASA Technical Reports Server (NTRS)

    Campanella, S. Joseph; Pontano, Benjamin A.; Chalmers, Harvey

    1987-01-01

    The market for telecommunications services needs to be segmented into user classes having similar transmission requirements and hence similar network architectures. Use of the following transmission architectures was considered: satellite-switched TDMA; TDMA up, TDM down; scanning (hopping) beam TDMA; FDMA up, TDM down; satellite-switched MF/TDMA; and switching hub earth stations with double-hop transmission. A candidate network architecture will be selected that: comprises multiple access subnetworks optimized for each user; interconnects the subnetworks by means of a baseband processor; and optimizes the marriage of interconnection and access techniques. An overall network control architecture will be provided that will serve the needs of the baseband and satellite-switched RF interconnected subnetworks. The results of the studies shall be used to identify the elements of network architecture and control that require the greatest degree of technology development to realize an operational system. This will be specified in terms of: requirements of the enabling technology; difference from the currently available technology; and an estimate of the development effort needed to achieve an operational system. The results obtained for each of these tasks are presented.

  14. Large-scale network-level processes during entrainment.

    PubMed

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-03-15

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4-30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band "disconnecting" visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557
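    The global density measure used above is one of the simplest graph-theoretical statistics: the fraction of possible edges actually present in the (thresholded) connectivity graph. A minimal sketch on a binary adjacency matrix (toy data, not MEG connectivity):

```python
def global_density(adjacency):
    """Density of an undirected graph given a symmetric 0/1 adjacency
    matrix: 2E / (N(N-1)), i.e. realized over possible edges."""
    n = len(adjacency)
    edges = sum(adjacency[i][j] for i in range(n) for j in range(i + 1, n))
    return 2.0 * edges / (n * (n - 1))

triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
path = [[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]]
d_full = global_density(triangle)  # fully connected -> 1.0
d_path = global_density(path)      # 2 of 3 possible edges -> 2/3
```

    A frequency-unspecific drop in this quantity, as reported above for the alpha band, means edges to and from visual cortex fall away across the whole network, not just at the stimulation frequency.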

  15. Large-scale network-level processes during entrainment

    PubMed Central

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-01-01

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4–30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band “disconnecting” visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557

  16. DEFENSE WASTE PROCESSING FACILITY ANALYTICAL METHOD VERIFICATION FOR THE SLUDGE BATCH 5 QUALIFICATION SAMPLE

    SciTech Connect

    Click, D.; Edwards, T.; Ajo, H.

    2008-07-25

    For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) performs confirmation of the applicability of the digestion method to be used by the DWPF lab for elemental analysis of Sludge Receipt and Adjustment Tank (SRAT) receipt samples and SRAT product process control samples. DWPF SRAT samples are typically dissolved using a room temperature HF-HNO3 acid dissolution (i.e., DWPF Cold Chem Method, see Procedure SW4-15.201) and then analyzed by inductively coupled plasma - atomic emission spectroscopy (ICP-AES). This report contains the results and comparison of data generated from performing the Aqua Regia (AR), Sodium Peroxide/Hydroxide Fusion (PF) and DWPF Cold Chem (CC) method digestion of Sludge Batch 5 (SB5) SRAT Receipt and SB5 SRAT Product samples. The SB5 SRAT Receipt and SB5 SRAT Product samples were prepared in the SRNL Shielded Cells, and the SRAT Receipt material is representative of the sludge that constitutes the SB5 Batch composition. This is the sludge in Tank 51 that is to be transferred into Tank 40, which will contain the heel of Sludge Batch 4 (SB4), to form the SB5 Blend composition. The results for any one particular element should not be used in any way to identify the form or speciation of a particular element in the sludge or used to estimate ratios of compounds in the sludge. A statistical comparison of the data validates the use of the DWPF CC method for SB5 Batch composition. However, the difficulty that was encountered in using the CC method for SB4 brings into question the adequacy of CC for the SB5 Blend. Also, it should be noted that visible solids remained in the final diluted solutions of all samples digested by this method at SRNL (8 samples total), which is typical for the DWPF CC method but not seen in the other methods. Recommendations to the DWPF for application to SB5 based on studies to date: (1) A dissolution study should be performed on the WAPS

  17. Meta-food-chains as a many-layer epidemic process on networks.

    PubMed

    Barter, Edmund; Gross, Thilo

    2016-02-01

    Notable recent works have focused on the multilayer properties of coevolving diseases. We point out that very similar systems play an important role in population ecology. Specifically, we study a meta-food-web model recently proposed by Pillai et al. [Theor. Ecol. 3, 223 (2009)]. This model describes a network of species connected by feeding interactions, which spread over a network of spatial patches. Focusing on the essential case where the network of feeding interactions is a chain, we develop an analytical approach for computing the degree distributions of colonized spatial patches for the different species in the chain. This framework allows us to address ecologically relevant questions. Considering configuration-model ensembles of spatial networks, we find that there is an upper bound on the fraction of patches that a given species can occupy, which depends only on the network's mean degree. For a given mean degree there is then an optimal degree distribution that comes closest to the upper bound. Notably, scale-free degree distributions perform worse than more homogeneous degree distributions when the mean degree is sufficiently high. Because species experience the underlying network differently, the optimal degree distribution for one particular species is generally not the optimal distribution for the other species in the same food web. These results are of interest for conservation ecology, where, for instance, the task of selecting areas of old-growth forest to preserve in an agricultural landscape amounts to the design of a patch network. PMID:26986348

  18. Determination of PASHs by various analytical techniques based on gas chromatography-mass spectrometry: application to a biodesulfurization process.

    PubMed

    Mezcua, Milagros; Fernández-Alba, Amadeo R; Boltes, Karina; Alonso Del Aguila, Raul; Leton, Pedro; Rodríguez, Antonio; García-Calvo, Eloy

    2008-06-15

    Polycyclic aromatic sulphur heterocyclic (PASH) compounds, such as dibenzothiophene (DBT) and its alkylated derivatives, are used as model compounds in biodesulfurization processes. The development of these processes is focused on reducing the concentration of sulphur in gasoline and gas-oil [D.J. Monticello, Curr. Opin. Biotechnol. 11 (2000) 540], in order to meet European Union and United States directives. The evaluation of biodesulfurization processes requires the development of adequate analytical techniques that allow the identification of any transformation products generated; the identification of intermediates and final products permits the evaluation of the degradation process. In this work, seven sulphur-containing compounds and one non-sulphur compound were selected to develop an extraction method and to compare the sensitivity and identification capabilities of three different gas chromatography ionization modes. The selected compounds are: dibenzothiophene (DBT), 4-methyl-dibenzothiophene (4-m-DBT), 4,6-dimethyl-dibenzothiophene (4,6-dm-DBT) and 4,6-diethyl-dibenzothiophene (4,6-de-DBT), all of which can be used as model compounds in biodesulfurization processes; as well as dibenzothiophene sulfoxide (DBTO), dibenzothiophene sulfone (DBTO2) and 2-(2-hydroxybiphenyl)-benzenesulfinate (HBPS), which are intermediate products in the biodesulfurization of DBT [A. Alcon, V.E. Santos, A.B. Martín, P. Yustos, F. García-Ochoa, Biochem. Eng. J. 26 (2005) 168]. Furthermore, a non-sulphur compound, 2-hydroxybiphenyl (2-HBP), was also selected, as it is the final product of the biodesulfurization of DBT [A. Alcon, V.E. Santos, A.B. Martín, P. Yustos, F. García-Ochoa, Biochem. Eng. J. 26 (2005) 168]. Since biodesulfurization reactions typically take place in a biphasic medium, two extraction methods were developed: a liquid-liquid extraction method for the aqueous phase and a solid-phase extraction method for the organic phase

  19. Toward mission-specific service utility estimation using analytic stochastic process models

    NASA Astrophysics Data System (ADS)

    Thornley, David J.; Young, Robert J.; Richardson, James P.

    2009-05-01

    Planning a mission to monitor, control or prevent activity requires postulation of subject behaviours, specification of goals, and the identification of suitable effects, candidate methods, information requirements, and effective infrastructure. In an operation comprising many missions, it is desirable to base decisions to assign assets, computation time or communications bandwidth on the value that doing so in a particular mission contributes to the operation. We describe initial investigations of a holistic approach for judging the value of candidate sensing service designs by stochastic modeling of information delivery, knowledge building, synthesis of situational awareness, selection of actions, and achievement of goals. Abstracting physical and information transformations to interdependent stochastic state-transition models enables calculation of probability distributions over uncertain futures using well-characterized approximations. This complements traditional Monte Carlo war gaming, in which example futures are explored individually, by capturing probability distributions over loci of behaviours that show the importance and value of mission component designs. The overall model is driven by sensing processes constructed by abstracting from the physics of sensing to a stochastic model of the system's trajectories through sensing modes. This is formulated by analysing probabilistic projections of subject behaviours against functions describing the quality of information delivered by the sensing service. The approach supports energy consumption predictions and, when composed into a mission model, calculation of the timing probabilities for situational awareness formulation and command satisfaction. These outcome probabilities then support calculation of relative utility and value.
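    The stochastic state-transition abstraction described here amounts to propagating a full probability distribution through a Markov chain, rather than sampling individual futures as in Monte Carlo war gaming. A toy sketch (the three mission states and their transition probabilities are invented for illustration):

```python
def propagate(distribution, transition, steps):
    """Push a probability distribution through a row-stochastic
    transition matrix for a number of discrete time steps."""
    d = list(distribution)
    n = len(d)
    for _ in range(steps):
        d = [sum(d[i] * transition[i][j] for i in range(n)) for j in range(n)]
    return d

# 3-state mission model: searching -> tracking -> goal (absorbing)
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
start = [1.0, 0.0, 0.0]
p_goal_5 = propagate(start, P, 5)[2]  # P(goal achieved within 5 steps)
```

    Because the goal state is absorbing, the goal probability is non-decreasing in time; comparing such curves for alternative sensing service designs is one way to compute the relative utility the abstract describes.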

  20. Sources and processes of contaminant loss from an intensively grazed catchment inferred from patterns in discharge and concentration of thirteen analytes using high intensity sampling

    NASA Astrophysics Data System (ADS)

    Holz, G. K.

    2010-03-01

    Contaminants in water from intensively grazed catchments have been shown to cause significant environmental impacts. Effective intervention to reduce contaminant loads depends on identifying their sources and the processes of mobilisation and transport. In this study, flow (Q) and analyte concentrations (CA) from a 12 ha catchment in north-west Tasmania used for grazing dairy cattle were monitored at a fine temporal scale and used to infer sources and processes of loss. Three groups of analytes were identified based on CA-Q relationships, which included hysteresis loops. These demonstrated that the TP group (TP, DRP, TSS, TN, E. coli and Enterococcus) was transported by surface runoff processes, while the behaviour of the NO3 group (NO3, TDS, Ca, Mg, Na) was explained by subsurface processes and pathways. The NH4 group (NH4, K) was dominated by the addition of large quantities of analyte from grazing. In addition to the CA-Q relationships, concentrations of most analytes decreased linearly over each season of runoff. NH4 and K concentrations decreased exponentially following grazing events, while TP concentrations decreased linearly. The study demonstrated the importance of understanding surface water and groundwater interactions, and showed that relationships between runoff events, analyte concentrations and management, as revealed by a fine temporal sampling regime, may yield significant insights into the sources and processes of analyte loss in surface flow at a given scale.
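    The CA-Q hysteresis loops mentioned above help distinguish transport pathways: clockwise loops (higher concentration on the rising limb at a given discharge) suggest fast surface mobilisation, while anticlockwise loops suggest delayed subsurface delivery. A deliberately crude sketch of a rising-minus-falling-limb index (real studies interpolate concentration at matched discharge levels; this simplified version just splits the event at peak flow):

```python
def hysteresis_index(discharge, concentration):
    """Crude C-Q hysteresis proxy: mean concentration on the rising limb
    minus mean concentration on the falling limb of a runoff event.
    Positive -> clockwise loop (surface-runoff-like behaviour)."""
    peak = discharge.index(max(discharge))
    rising = concentration[:peak + 1]
    falling = concentration[peak + 1:]
    return sum(rising) / len(rising) - sum(falling) / len(falling)

# Synthetic event: concentration peaks early -> clockwise loop
q = [1, 2, 3, 4, 5, 4, 3, 2, 1]
c = [5, 6, 7, 8, 9, 4, 3, 2, 1]
hi_index = hysteresis_index(q, c)  # positive, i.e. clockwise
```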

  1. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network is a newly emerging network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested from a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we have distributed the processing tasks among the nodes. In this method, each sensor node executes part of the interpolation algorithm to generate the interpolated image, with only local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, which is an object-independent method based on MSE minimization using adaptive filtering. Two methods were proposed for distributing the processing tasks, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), which share image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that FIS-DP has the highest processing speed, followed by PIS-DP, with CP the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. Therefore, PIS-DP is recommended for its better overall performance than CP and FIS-DP.
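    The stripe-wise division of labour described here can be illustrated with a toy blend-based view interpolation (a stand-in, not the paper's ray-space MSE-minimising adaptive filter): each "node" processes only its assigned rows of the output image, and the assembled result matches what centralized processing would produce.

```python
import numpy as np

# Toy stand-in for view interpolation: an intermediate view between two
# cameras approximated as a weighted blend. The partitioning scheme and
# names are illustrative, not the paper's algorithm.
def node_contribution(image, weight, rows):
    """Each sensor node processes only its assigned stripe of rows."""
    return weight * image[rows]

def distributed_interpolate(left, right, alpha, n_nodes=2):
    h = left.shape[0]
    stripes = np.array_split(np.arange(h), n_nodes)
    out = np.empty_like(left, dtype=float)
    for rows in stripes:  # in a real network, each loop body runs on a node
        out[rows] = (node_contribution(left, 1 - alpha, rows)
                     + node_contribution(right, alpha, rows))
    return out

left = np.zeros((4, 4))    # image from camera A
right = np.ones((4, 4))    # image from camera B
mid = distributed_interpolate(left, right, alpha=0.5)
print(mid[0, 0])  # 0.5
```

    The design point the record makes is visible even in this sketch: the per-node work shrinks with the number of nodes, while the communication cost depends on how much image data the stripes must share.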

  2. Implementing an extension of the analytical hierarchy process using ordered weighted averaging operators with fuzzy quantifiers in ArcGIS

    NASA Astrophysics Data System (ADS)

    Boroushaki, Soheil; Malczewski, Jacek

    2008-04-01

    This paper focuses on the integration of GIS and an extension of the analytical hierarchy process (AHP) using the quantifier-guided ordered weighted averaging (OWA) procedure. AHP_OWA is a multicriteria combination operator whose behaviour depends on a set of parameters expressed by means of fuzzy linguistic quantifiers. By changing the linguistic terms, AHP_OWA can generate a wide range of decision strategies. We propose a GIS-multicriteria evaluation (MCE) system through implementation of AHP_OWA within ArcGIS, capable of integrating linguistic labels within conventional AHP for spatial decision making. We suggest that the proposed GIS-MCE would simplify the definition of decision strategies and facilitate an exploratory analysis of multiple criteria by incorporating qualitative information within the analysis.
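    A quantifier-guided OWA combination can be sketched as follows: weights are derived from a regular increasing monotone (RIM) quantifier Q(r) = r**a and applied to the criterion values after sorting them in descending order. The exponent a stands in for the linguistic quantifier; the scores are hypothetical.

```python
def owa_weights(n, a):
    """Weights from a RIM fuzzy quantifier Q(r) = r**a.

    a < 1 ~ 'at least a few' (optimistic), a = 1 ~ 'half' (neutral mean),
    a > 1 ~ 'most' / 'all' (increasingly pessimistic, approaching min).
    """
    Q = lambda r: r ** a
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, a):
    ordered = sorted(values, reverse=True)  # reorder before weighting
    w = owa_weights(len(values), a)
    return sum(wi * vi for wi, vi in zip(w, ordered))

scores = [0.9, 0.4, 0.7]       # hypothetical criterion scores for one cell
print(owa(scores, 1.0))        # a = 1 reduces to the plain mean
print(owa(scores, 100.0))      # large a approaches the minimum (0.4)
```

    In the GIS-MCE setting, each map cell's criterion scores would be combined this way, so switching the linguistic quantifier sweeps the whole map between optimistic and pessimistic decision strategies.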

  3. Biparametric potentiometric analytical microsystem for nitrate and potassium monitoring in water recycling processes for manned space missions.

    PubMed

    Calvo-López, Antonio; Arasa-Puig, Eva; Puyol, Mar; Casalta, Joan Manel; Alonso-Chamarro, Julián

    2013-12-01

    The construction and evaluation of a Low Temperature Co-fired Ceramics (LTCC)-based continuous flow potentiometric microanalyzer prototype to simultaneously monitor two ions (potassium and nitrate) in samples from the water recycling process for future manned space missions is presented. The microsystem integrates microfluidics and the detection system in a single substrate and is smaller than a credit card. The detection system is based on two ion-selective electrodes (ISEs), built using all-solid-state nitrate and potassium polymeric membranes, and a screen-printed Ag/AgCl reference electrode. The analytical features obtained after optimization of the microfluidic design and hydrodynamics are linear ranges from 10 to 1000 mg L(-1) and from 1.9 to 155 mg L(-1) and detection limits of 9.56 mg L(-1) and 0.81 mg L(-1) for nitrate and potassium ions, respectively. PMID:24267081
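    ISE-based detection of this kind relies on a near-Nernstian response, E = E0 + S*log10(c); calibration can be sketched as a least-squares line through potential/concentration pairs, then inverted to read concentrations. The calibration points below are invented for illustration, with a slope near -59 mV per decade as expected for a monovalent anion at 25 degrees C.

```python
import math

# Illustrative nitrate-ISE calibration: (concentration in mg/L,
# measured potential in mV). Values are hypothetical.
cal = [(10, 220.0), (100, 161.0), (1000, 102.0)]

def fit_nernst(points):
    """Least-squares fit of E = E0 + S*log10(c); returns (E0, S)."""
    xs = [math.log10(c) for c, _ in points]
    ys = [e for _, e in points]
    n = len(points)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    S = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - S * xbar, S

def concentration(E, E0, S):
    """Invert the calibration to convert a potential into mg/L."""
    return 10 ** ((E - E0) / S)

E0, S = fit_nernst(cal)
print(round(S, 1))                          # about -59 mV/decade
print(round(concentration(161.0, E0, S)))   # recovers ~100 mg/L
```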

  4. Online flow cytometry for monitoring apoptosis in mammalian cell cultures as an application for process analytical technology.

    PubMed

    Kuystermans, Darrin; Avesh, Mohd; Al-Rubeai, Mohamed

    2016-05-01

    Apoptosis is the main driver of cell death in bioreactor suspension cell cultures during the production of biopharmaceuticals from animal cell lines. It is known that apoptosis also affects the quality and quantity of the expressed recombinant protein, which has raised the importance of studying apoptosis for implementing culture optimization strategies. The work here describes a novel approach to obtain near-real-time data on the proportions of viable, early apoptotic, late apoptotic and necrotic cell populations in a suspension CHO culture using automated sample preparation in conjunction with flow cytometry. The resultant online flow cytometry data can track the progression of apoptotic events in culture, aligning with analogous manual methodologies and giving similar results. The obtained near-real-time apoptosis data are a significant improvement in monitoring capabilities and can lead to improved control strategies and research data on complex biological systems in bioreactor cultures, in both academic and industrial settings focused on process analytical technology applications. PMID:25352493

  5. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

    Since the great mathematician Leonhard Euler initiated the study of graph theory, the network has been one of the most significant research subjects across many disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson process, the nonhomogeneous Poisson process and the birth-death process, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In simulations, we display the modeling process and the degree distribution of empirical data using statistical methods, and assess the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed. PMID:25956002
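    A homogeneous-Poisson variant of such vertex-arrival models can be sketched as follows: vertices arrive with exponential inter-arrival times and each attaches to existing vertices with probability proportional to degree, yielding a scale-free-like network. This is an illustrative toy, not the paper's exact construction.

```python
import random

def grow_network(t_max, rate, m=1, seed=1):
    """Grow a graph whose vertices arrive as a homogeneous Poisson
    process of the given rate; each arrival attaches to m existing
    vertices chosen with probability proportional to degree
    (preferential attachment). Illustrative sketch only.
    """
    rng = random.Random(seed)
    t = 0.0
    edges = [(0, 1)]          # seed graph: a single edge
    stubs = [0, 1]            # each vertex appears once per unit of degree
    n = 2
    while True:
        t += rng.expovariate(rate)   # exponential inter-arrival time
        if t > t_max:
            break
        for _ in range(m):
            target = rng.choice(stubs)   # degree-proportional choice
            edges.append((n, target))
            stubs += [n, target]
        n += 1
    return n, edges

n, edges = grow_network(t_max=50, rate=2.0)
print(n, len(edges))   # roughly rate * t_max arrivals, one edge each
```

    Swapping the constant rate for a time-varying one gives the nonhomogeneous variant; adding vertex removals gives a birth-death flavour.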

  6. Landslide susceptibility mapping by combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process in Dozain basin

    NASA Astrophysics Data System (ADS)

    Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.

    2014-10-01

    Landslides are among the most important natural hazards that modify the environment, so studying this phenomenon is important in many areas. Given the climatic, geologic, and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using Fuzzy Logic, frequency ratio and the Analytical Hierarchy Process (AHP) in the Dozein basin, Iran. First, landslides that occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, including slope, aspect, elevation, lithology, precipitation, land cover, distance from faults, distance from roads and distance from rivers, were obtained from different sources and maps. Using these factors and the identified landslides, fuzzy membership values were calculated by the frequency ratio. Then, to account for the importance of each factor in landslide susceptibility, factor weights were determined based on a questionnaire and the AHP method. Finally, the fuzzy map of each factor was multiplied by its AHP-derived weight. Lastly, to compute prediction accuracy, the produced map was verified by comparison with existing landslide locations. The results indicate that combining the three methods Fuzzy Logic, Frequency Ratio and the Analytical Hierarchy Process gives relatively good estimates of landslide susceptibility in the study area. According to the landslide susceptibility map, about 51% of the observed landslides fall into the high and very high susceptibility zones, while approximately 26% are located in the low and very low susceptibility zones.
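    The AHP weighting step used in studies like this can be sketched with a small pairwise comparison matrix on the Saaty 1-9 scale: priorities are the normalised principal eigenvector, and the consistency ratio checks that the expert judgments are coherent. The matrix values below are hypothetical.

```python
import numpy as np

# Illustrative Saaty-scale pairwise comparison of three landslide
# factors (say slope, lithology, land cover); values are hypothetical.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

def ahp_priorities(A):
    """Priority vector as the principal right eigenvector of the
    pairwise comparison matrix, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def consistency_ratio(lam_max, n, ri={3: 0.58, 4: 0.90, 5: 1.12}):
    """CR = CI / RI; CR < 0.1 is conventionally acceptable."""
    ci = (lam_max - n) / (n - 1)
    return ci / ri[n]

w, lam = ahp_priorities(A)
print(w.round(3))                        # first factor gets the largest weight
print(round(consistency_ratio(lam, 3), 3))
```

    Each factor's fuzzy membership map would then be multiplied by its entry of w before summation, as described in the abstract.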

  7. Digital Signal Processing and Control for the Study of Gene Networks

    PubMed Central

    Shin, Yong-Jun

    2016-01-01

    Thanks to the digital revolution, digital signal processing and control has been widely used in many areas of science and engineering today. It provides practical and powerful tools to model, simulate, analyze, design, measure, and control complex and dynamic systems such as robots and aircraft. Gene networks are also complex dynamic systems which can be studied via digital signal processing and control. Unlike conventional computational methods, this approach is capable of not only modeling but also controlling gene networks since the experimental environment is mostly digital today. The overall aim of this article is to introduce digital signal processing and control as a useful tool for the study of gene networks. PMID:27102828
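    The article's viewpoint, treating a gene network as a discrete-time system under digital control, can be sketched with a hypothetical first-order expression model regulated by a proportional controller; all constants here are invented for illustration.

```python
# Discrete-time gene expression model x[k+1] = a*x[k] + b*u[k], where
# a < 1 captures degradation/dilution per sampling period and u is an
# inducer input, closed with a digital proportional controller.
# All parameters are hypothetical.
def simulate(a=0.9, b=0.5, Kp=1.5, setpoint=10.0, steps=60):
    x, history = 0.0, []
    for _ in range(steps):
        u = max(0.0, Kp * (setpoint - x))  # inducer dose, non-negative
        x = a * x + b * u
        history.append(x)
    return history

traj = simulate()
print(round(traj[-1], 2))  # settles below the setpoint: the steady-state
                           # offset typical of proportional-only control
```

    The closed loop here is x[k+1] = (a - b*Kp)*x[k] + b*Kp*r, stable when |a - b*Kp| < 1; adding an integral term would remove the offset, mirroring standard digital control design.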

  8. Digital Signal Processing and Control for the Study of Gene Networks.

    PubMed

    Shin, Yong-Jun

    2016-01-01

    Thanks to the digital revolution, digital signal processing and control has been widely used in many areas of science and engineering today. It provides practical and powerful tools to model, simulate, analyze, design, measure, and control complex and dynamic systems such as robots and aircraft. Gene networks are also complex dynamic systems which can be studied via digital signal processing and control. Unlike conventional computational methods, this approach is capable of not only modeling but also controlling gene networks since the experimental environment is mostly digital today. The overall aim of this article is to introduce digital signal processing and control as a useful tool for the study of gene networks. PMID:27102828

  9. Energy Efficiency of Distributed Signal Processing in Wireless Networks: A Cross-Layer Analysis

    NASA Astrophysics Data System (ADS)

    Geraci, Giovanni; Wildemeersch, Matthias; Quek, Tony Q. S.

    2016-02-01

    In order to meet the growing mobile data demand, future wireless networks will be equipped with a multitude of access points (APs). Besides the important implications for the energy consumption, the trend towards densification requires the development of decentralized and sustainable radio resource management techniques. It is critically important to understand how the distribution of signal processing operations affects the energy efficiency of wireless networks. In this paper, we provide a cross-layer framework to evaluate and compare the energy efficiency of wireless networks under different levels of distribution of the signal processing load: (i) hybrid, where the signal processing operations are shared between nodes and APs, (ii) centralized, where signal processing is entirely implemented at the APs, and (iii) fully distributed, where all operations are performed by the nodes. We find that in practical wireless networks, hybrid signal processing exhibits a significant energy efficiency gain over both centralized and fully distributed approaches.

  10. Advanced information processing system: Authentication protocols for network communication

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Adams, Stuart J.; Babikyan, Carol A.; Butler, Bryan P.; Clark, Anne L.; Lala, Jaynarayan H.

    1994-01-01

    In safety critical I/O and intercomputer communication networks, reliable message transmission is an important concern. Difficulties of communication and fault identification in networks arise primarily because the sender of a transmission cannot be identified with certainty, an intermediate node can corrupt a message without certainty of detection, and a babbling node cannot be identified and silenced without lengthy diagnosis and reconfiguration. Authentication protocols use digital signature techniques to verify the authenticity of messages with high probability. Such protocols appear to provide an efficient solution to many of these problems. The objective of this program is to develop, demonstrate, and evaluate intercomputer communication architectures which employ authentication. As a context for the evaluation, the authentication protocol-based communication concept was demonstrated under this program by hosting a real-time flight critical guidance, navigation and control algorithm on a distributed, heterogeneous, mixed redundancy system of workstations and embedded fault-tolerant computers.
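    The verify-before-trust pattern behind such protocols can be sketched with Python's standard library. The program described uses digital-signature techniques; the sketch below substitutes a shared-key HMAC, which detects corrupted or forged messages but, unlike true signatures, does not uniquely identify the sender. The key and messages are hypothetical.

```python
import hmac
import hashlib

KEY = b"shared-session-key"   # hypothetical pre-distributed key

def sign(message: bytes) -> bytes:
    """Attach an authentication tag to an outgoing message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Accept a message only if its tag checks out; compare_digest
    runs in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(message), tag)

msg = b"nav-update:heading=042"
tag = sign(msg)
print(verify(msg, tag))                        # True: message accepted
print(verify(b"nav-update:heading=142", tag))  # False: corruption detected
```

    A corrupted relay or babbling node thus fails verification at every receiver, which is the property the report's authentication protocols exploit to localize faults quickly.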

  11. The Processing of Verbs and Nouns in Neural Networks: Insights from Synthetic Brain Imaging

    ERIC Educational Resources Information Center

    Cangelosi, Angelo; Parisi, Domenico

    2004-01-01

    The paper presents a computational model of language in which linguistic abilities evolve in organisms that interact with an environment. Each individual's behavior is controlled by a neural network and we study the consequences in the network's internal functional organization of learning to process different classes of words. Agents are selected…

  12. An Analytic Hierarchy Process-based Method to Rank the Critical Success Factors of Implementing a Pharmacy Barcode System.

    PubMed

    Alharthi, Hana; Sultana, Nahid; Al-Amoudi, Amjaad; Basudan, Afrah

    2015-01-01

    Pharmacy barcode scanning is used to reduce errors during the medication dispensing process. However, this technology has rarely been used in hospital pharmacies in Saudi Arabia. This article describes the barriers to successful implementation of a barcode scanning system in Saudi Arabia. A literature review was conducted to identify the relevant critical success factors (CSFs) for a successful dispensing barcode system implementation. Twenty-eight pharmacists from a local hospital in Saudi Arabia were interviewed to obtain their perception of these CSFs. In this study, planning (process flow issues and training requirements), resistance (fear of change, communication issues, and negative perceptions about technology), and technology (software, hardware, and vendor support) were identified as the main barriers. The analytic hierarchy process (AHP), one of the most widely used tools for decision making in the presence of multiple criteria, was used to compare and rank these identified CSFs. The results of this study suggest that resistance barriers have a greater impact than planning and technology barriers. In particular, fear of change is the most critical factor, and training is the least critical factor. PMID:26807079

  13. Multi-parameter flow cytometry as a process analytical technology (PAT) approach for the assessment of bacterial ghost production.

    PubMed

    Langemann, Timo; Mayr, Ulrike Beate; Meitz, Andrea; Lubitz, Werner; Herwig, Christoph

    2016-01-01

    Flow cytometry (FCM) is a tool for the analysis of single-cell properties in a cell suspension. In this contribution, we present an improved FCM method for the assessment of E-lysis in Enterobacteriaceae. The result of the E-lysis process is empty bacterial envelopes, called bacterial ghosts (BGs), that constitute potential products in the pharmaceutical field. BGs have reduced light scattering properties when compared with intact cells. In combination with viability information obtained from staining samples with the membrane potential-sensitive fluorescent dye bis-(1,3-dibutylbarbituric acid) trimethine oxonol (DiBAC4(3)), the presented method allows differentiation between populations of viable cells, dead cells, and BGs. Using a second fluorescent dye, RH414, as a membrane marker, non-cellular background was excluded from the data, which greatly improved the quality of the results. Using true volumetric absolute counting, the FCM data correlated well with cell count data obtained from colony-forming units (CFU) for viable populations. Applicability of the method to several Enterobacteriaceae (different Escherichia coli strains, Salmonella typhimurium, Shigella flexneri 2a) was shown. The method was validated as a resilient process analytical technology (PAT) tool for the assessment of E-lysis and for particle counting during 20-L batch processes for the production of Escherichia coli Nissle 1917 BGs. PMID:26521248

  14. Priorities determination using novel analytic hierarchy process and median ranked sample set, case study of landfill siting criteria

    NASA Astrophysics Data System (ADS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, N. E. Ahmad; Basri, H.

    2015-02-01

    Integrating environmental, social, political, and economic attributes enhances the decision making process. Multi-criteria decision making (MCDM) involves ambiguity and uncertainty due to varying preferences. This study presents a model to minimize the uncertainty and ambiguity of human judgments by integrating counter-stakeholders with the median ranked sample set (MRSS) and the Analytic Hierarchy Process (AHP). The model uses landfill site selection as a MCDM problem. Sixteen experts belonging to four clusters (government, private, institutional, and non-governmental organisations) participated, and their preferences were ranked in a four-by-four matrix. Then MRSS and AHP were used to obtain the priorities of the landfill siting criteria. Environmental criteria have the highest priority, at 48.1%, and distance from surface water and fault zones are the most important factors, with priorities of 18% and 13.7%, respectively. In conclusion, the hybrid approach that integrates counter-stakeholders, MRSS, and AHP can be applied to complex decision making processes, and its outputs are justified.

  15. Visual Analytics for Comparison of Ocean Model Output with Reference Data: Detecting and Analyzing Geophysical Processes Using Clustering Ensembles.

    PubMed

    Köthur, Patrick; Sips, Mike; Dobslaw, Henryk; Dransch, Doris

    2014-12-01

    Researchers assess the quality of an ocean model by comparing its output to that of a previous model version or to observations. One objective of the comparison is to detect and to analyze differences and similarities between both data sets regarding geophysical processes, such as particular ocean currents. This task involves the analysis of thousands or hundreds of thousands of geographically referenced temporal profiles in the data. To cope with the amount of data, modelers combine aggregation of temporal profiles to single statistical values with visual comparison. Although this strategy is based on experience and a well-grounded body of expert knowledge, our discussions with domain experts have shown that it has two limitations: (1) using a single statistical measure results in a rather limited scope of the comparison and in significant loss of information, and (2) the decisions modelers have to make in the process may lead to important aspects being overlooked. In this article, we propose a Visual Analytics approach that broadens the scope of the analysis, reduces subjectivity, and facilitates comparison of the two data sets. It comprises three steps: First, it allows modelers to consider many aspects of the temporal behavior of geophysical processes by conducting multiple clusterings of the temporal profiles in each data set. Modelers can choose different features describing the temporal behavior of relevant processes, clustering algorithms, and parameterizations. Second, our approach consolidates the clusterings of one data set into a single clustering via a clustering ensembles approach. The consolidated clustering presents an overview of the geospatial distribution of temporal behavior in a data set. Third, a visual interface allows modelers to compare the two consolidated clusterings. It enables them to detect clusters of temporal profiles that represent geophysical processes and to analyze differences and similarities between two data sets. 
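    The consolidation step, merging multiple clusterings into one via a clustering-ensembles approach, can be sketched with a co-association matrix: count how often each pair of profiles is co-clustered across runs, then group pairs that agree often enough. This is one common ensemble rule, not necessarily the authors' exact method.

```python
import numpy as np

def coassociation(labelings):
    """Consolidate multiple clusterings of the same n items into an
    n x n co-association matrix: entry (i, j) is the fraction of
    clusterings that place i and j in the same cluster."""
    L = np.asarray(labelings)          # shape (n_clusterings, n_items)
    n = L.shape[1]
    C = np.zeros((n, n))
    for labels in L:
        C += (labels[:, None] == labels[None, :])
    return C / L.shape[0]

def consensus(labelings, threshold=0.5):
    """Single-link grouping over the thresholded co-association matrix:
    items end up together if they are co-clustered often enough."""
    C = coassociation(labelings) >= threshold
    n = C.shape[0]
    label = [-1] * n
    cur = 0
    for i in range(n):
        if label[i] == -1:
            stack, label[i] = [i], cur
            while stack:
                j = stack.pop()
                for k in range(n):
                    if C[j, k] and label[k] == -1:
                        label[k] = cur
                        stack.append(k)
            cur += 1
    return label

runs = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]   # three clusterings
print(consensus(runs))   # items 0-1 and 2-3 end up grouped together
```

    In the article's setting, each run would be a clustering of temporal profiles under a different feature set, algorithm, or parameterization, and the consolidated labels give the overview map that modelers then compare visually.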

  16. Using fuzzy sets to model the uncertainty in the fault location process of distribution networks

    SciTech Connect

    Jaerventausta, P.; Verho, P.; Partanen, J.

    1994-04-01

    In the computerized fault diagnosis of distribution networks, the heuristic knowledge of the control center operators can be combined with the information obtained from the network database and SCADA system. However, the nature of the heuristic knowledge is inexact and uncertain. Also, the information obtained from the remote control system contains uncertainty and may be incorrect, conflicting or inadequate. This paper proposes a method based on fuzzy set theory to deal with the uncertainty involved in the process of locating faults in distribution networks. The method is implemented in a prototype version of the distribution network operation support system.
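    The fuzzy-set treatment of uncertain fault evidence can be sketched as follows: each feeder section receives a membership grade in the fuzzy set "faulted" from several sources, and the grades are combined with the standard min/max operators of fuzzy set theory. Sections, sources, and grades are all hypothetical.

```python
# Toy fuzzy inference for fault location: membership grades in the
# fuzzy set "faulted" from three uncertain evidence sources per
# section of a feeder. All names and numbers are hypothetical.
evidence = {
    "S1": {"fault_indicator": 0.9, "operator_hint": 0.6, "load_anomaly": 0.7},
    "S2": {"fault_indicator": 0.9, "operator_hint": 0.2, "load_anomaly": 0.3},
    "S3": {"fault_indicator": 0.1, "operator_hint": 0.2, "load_anomaly": 0.4},
}

def faulted_grade(grades, rule="min"):
    """Conjunctive (min) combination: a section is 'faulted' only to the
    degree that all sources agree; max gives a credulous alternative."""
    vals = grades.values()
    return min(vals) if rule == "min" else max(vals)

ranking = sorted(evidence, key=lambda s: faulted_grade(evidence[s]),
                 reverse=True)
print(ranking[0])   # the most plausible faulted section
```

    Presenting operators with such a graded ranking, rather than a single crisp answer, is exactly how a fuzzy approach keeps conflicting or inadequate SCADA information from producing a misleadingly confident diagnosis.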

  17. Neural networks type MLP in the process of identification chosen varieties of maize

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Tomczak, R.

    2011-06-01

    During the adaptation of the weights vector that occurs in the iterative presentation of the teaching vectors, the MLP-type (MultiLayer Perceptron) artificial neural network attempts to learn the structure of the data. Such a network can learn to recognise aggregates occurring in the input data set regardless of the assumed criteria of similarity and the quantity of the data explored. The MLP-type neural network can also be used to detect regularities occurring in graphic empirical data; neural image analysis is thus a new field of digital signal processing. It is possible to use it to identify chosen objects given in the form of bitmaps. If a new, unknown case appears at the network input which the network is unable to recognise, it is different from all previously known classes. An MLP-type artificial neural network taught in this way can therefore serve as a detector signalling the appearance of a widely understood novelty. Such a network can also look for similarities between known data and noisy data, and in this way is able to identify fragments of images presented in photographs of, for example, maize grain. The purpose of the research was to use MLP neural networks to identify chosen varieties of maize using image analysis methods. The neural classification of grain shapes was performed using the Johan Gielis superformula.
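    The Gielis superformula mentioned here gives radius as a function of angle, r(phi) = (|cos(m*phi/4)/a|^n2 + |sin(m*phi/4)/b|^n3)^(-1/n1), and can parameterise grain-like outlines for shape classification. A sketch with illustrative parameters:

```python
import math

def superformula(phi, m, n1, n2, n3, a=1.0, b=1.0):
    """Gielis superformula: radius as a function of angle."""
    t = abs(math.cos(m * phi / 4) / a) ** n2 \
        + abs(math.sin(m * phi / 4) / b) ** n3
    return t ** (-1.0 / n1)

def shape(m, n1, n2, n3, points=360):
    """Sample a closed outline as (x, y) pairs."""
    pts = []
    for k in range(points):
        phi = 2 * math.pi * k / points
        r = superformula(phi, m, n1, n2, n3)
        pts.append((r * math.cos(phi), r * math.sin(phi)))
    return pts

circle = shape(m=0, n1=2, n2=2, n3=2)   # m = 0 degenerates to a unit circle
print(round(math.hypot(*circle[10]), 6))  # 1.0
```

    Fitting the parameters (m, n1, n2, n3, a, b) to a segmented grain outline yields a compact feature vector that could feed an MLP classifier; the specific parameter values above are illustrative, not those used in the study.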

  18. Network reliability: The effect of local network structure on diffusive processes

    NASA Astrophysics Data System (ADS)

    Youssef, Mina; Khorramzadeh, Yasamin; Eubank, Stephen

    2013-11-01

    This paper reintroduces the network reliability polynomial, introduced by Moore and Shannon [Moore and Shannon, J. Franklin Inst.JFINAB0016-003210.1016/0016-0032(56)90559-2 262, 191 (1956)], for studying the effect of network structure on the spread of diseases. We exhibit a representation of the polynomial that is well suited for estimation by distributed simulation. We describe a collection of graphs derived from Erdős-Rényi and scale-free-like random graphs in which we have manipulated assortativity-by-degree and the number of triangles. We evaluate the network reliability for all of these graphs under a reliability rule that is related to the expected size of a connected component. Through these extensive simulations, we show that for positively or neutrally assortative graphs, swapping edges to increase the number of triangles does not increase the network reliability. Also, positively assortative graphs are more reliable than neutral or disassortative graphs with the same number of edges. Moreover, we show the combined effect of both assortativity-by-degree and the presence of triangles on the critical point and the size of the smallest subgraph that is reliable.
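    The paper's point that the reliability polynomial is well suited to estimation by distributed simulation can be sketched (serially) with a Monte Carlo estimate under one concrete reliability rule, here that the sampled subgraph keeps all vertices connected; for a triangle the exact value is 3p^2(1-p) + p^3. Graph and parameters are illustrative.

```python
import random

def reliability(edges, n, p, trials=2000, seed=7):
    """Monte Carlo estimate of Moore-Shannon reliability: the probability
    that a subgraph, keeping each edge independently with probability p,
    leaves all n vertices connected. Connectedness is just one example
    of a 'reliability rule'; others (e.g. component-size thresholds)
    slot in by changing the acceptance test."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        kept = [e for e in edges if rng.random() < p]
        parent = list(range(n))          # union-find for connectivity

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for u, v in kept:
            parent[find(u)] = find(v)
        hits += len({find(v) for v in range(n)}) == 1
    return hits / trials

triangle = [(0, 1), (1, 2), (0, 2)]
print(reliability(triangle, 3, p=0.9))   # near 3*0.9**2*0.1 + 0.9**3 = 0.972
```

    Because the trials are independent, they distribute trivially across machines, which is the estimation strategy the representation in the paper is designed for; the edge-swapped Erdos-Renyi and scale-free graphs of the study would simply replace the toy triangle here.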

  19. Network reliability: the effect of local network structure on diffusive processes.

    PubMed

    Youssef, Mina; Khorramzadeh, Yasamin; Eubank, Stephen

    2013-11-01

    This paper reintroduces the network reliability polynomial, introduced by Moore and Shannon [Moore and Shannon, J. Franklin Inst. 262, 191 (1956)], for studying the effect of network structure on the spread of diseases. We exhibit a representation of the polynomial that is well suited for estimation by distributed simulation. We describe a collection of graphs derived from Erdős-Rényi and scale-free-like random graphs in which we have manipulated assortativity-by-degree and the number of triangles. We evaluate the network reliability for all of these graphs under a reliability rule that is related to the expected size of a connected component. Through these extensive simulations, we show that for positively or neutrally assortative graphs, swapping edges to increase the number of triangles does not increase the network reliability. Also, positively assortative graphs are more reliable than neutral or disassortative graphs with the same number of edges. Moreover, we show the combined effect of both assortativity-by-degree and the presence of triangles on the critical point and the size of the smallest subgraph that is reliable. PMID:24329321

  20. Network Reliability: The effect of local network structure on diffusive processes

    PubMed Central

    Youssef, Mina; Khorramzadeh, Yasamin; Eubank, Stephen

    2014-01-01

    This paper re-introduces the network reliability polynomial – introduced by Moore and Shannon in 1956 – for studying the effect of network structure on the spread of diseases. We exhibit a representation of the polynomial that is well-suited for estimation by distributed simulation. We describe a collection of graphs derived from Erdős-Rényi and scale-free-like random graphs in which we have manipulated assortativity-by-degree and the number of triangles. We evaluate the network reliability for all these graphs under a reliability rule that is related to the expected size of a connected component. Through these extensive simulations, we show that for positively or neutrally assortative graphs, swapping edges to increase the number of triangles does not increase the network reliability. Also, positively assortative graphs are more reliable than neutral or disassortative graphs with the same number of edges. Moreover, we show the combined effect of both assortativity-by-degree and the presence of triangles on the critical point and the size of the smallest subgraph that is reliable. PMID:24329321