A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows
NASA Astrophysics Data System (ADS)
Lei, Xin; Li, Jiequan
2018-04-01
This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with the isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. We then construct a conservative Godunov-based scheme with a high order accurate extension using the generalized Riemann problem solver, through a detailed analysis of the kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of scheme and demonstrate that nonphysical oscillations around material interfaces are substantially suppressed.
Simulation of Automated Vehicles' Drive Cycles
DOT National Transportation Integrated Search
2018-02-28
This research has two objectives: 1) To develop algorithms for plausible and legally-justifiable freeway car-following and arterial-street gap acceptance driving behavior for AVs 2) To implement these algorithms on a representative road network, in o...
Welch, Vivian; Brand, Kevin; Kristjansson, Elizabeth; Smylie, Janet; Wells, George; Tugwell, Peter
2012-12-19
Systematic reviews have been challenged to consider effects on disadvantaged groups. A priori specification of subgroup analyses is recommended to increase the credibility of these analyses. This study aimed to develop and assess inter-rater agreement for an algorithm for systematic review authors to predict whether differences in effect measures are likely for disadvantaged populations relative to advantaged populations (only relative effect measures were addressed). A health equity plausibility algorithm was developed using clinimetric methods with three items based on literature review, key informant interviews and methodology studies. The three items dealt with the plausibility of differences in relative effects across sex or socioeconomic status (SES) due to: 1) patient characteristics; 2) intervention delivery (i.e., implementation); and 3) comparators. Thirty-five respondents (consisting of clinicians, methodologists and research users) assessed the likelihood of differences across sex and SES for ten systematic reviews with these questions. We assessed inter-rater reliability using Fleiss multi-rater kappa. The proportion agreement was 66% for patient characteristics (95% confidence interval: 61%-71%), 67% for intervention delivery (95% confidence interval: 62% to 72%) and 55% for the comparator (95% confidence interval: 50% to 60%). Inter-rater kappa, assessed with Fleiss kappa, ranged from 0 to 0.199, representing very low agreement beyond chance. Users of systematic reviews rated that important differences in relative effects across sex and socioeconomic status were plausible for a range of individual and population-level interventions. However, there was very low inter-rater agreement for these assessments. There is an unmet need for discussion of plausibility of differential effects in systematic reviews. Increased consideration of external validity and applicability to different populations and settings is warranted in systematic reviews to meet this need.
Ant Lion Optimization algorithm for kidney exchanges.
Hamouda, Eslam; El-Metwally, Sara; Tarek, Mayada
2018-01-01
The kidney exchange programs bring new insights in the field of organ transplantation. They make the previously not allowed surgery of incompatible patient-donor pairs easier to be performed on a large scale. Mathematically, the kidney exchange is an optimization problem for the number of possible exchanges among the incompatible pairs in a given pool. Also, the optimization modeling should consider the expected quality-adjusted life of transplant candidates and the shortage of computational and operational hospital resources. In this article, we introduce a bio-inspired stochastic-based Ant Lion Optimization, ALO, algorithm to the kidney exchange space to maximize the number of feasible cycles and chains among the pool pairs. Ant Lion Optimizer-based program achieves comparable kidney exchange results to the deterministic-based approaches like integer programming. Also, ALO outperforms other stochastic-based methods such as Genetic Algorithm in terms of the efficient usage of computational resources and the quantity of resulting exchanges. Ant Lion Optimization algorithm can be adopted easily for on-line exchanges and the integration of weights for hard-to-match patients, which will improve the future decisions of kidney exchange programs. A reference implementation for ALO algorithm for kidney exchanges is written in MATLAB and is GPL licensed. It is available as free open-source software from: https://github.com/SaraEl-Metwally/ALO_algorithm_for_Kidney_Exchanges.
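To make the combinatorial objective above concrete, the following is a minimal sketch that builds a pool compatibility digraph and greedily selects vertex-disjoint 2- and 3-cycles. It only illustrates the cycle-packing objective; it is not the authors' Ant Lion Optimization implementation, and the 5-pair compatibility matrix is a made-up example.

```python
# Minimal sketch: greedy selection of vertex-disjoint 2- and 3-cycles in a
# kidney-exchange compatibility digraph. Illustrates the objective being
# optimized; it is NOT the Ant Lion Optimization algorithm itself.
from itertools import permutations

def find_cycles(compat, max_len=3):
    """Enumerate simple directed cycles of length 2..max_len.
    compat[i][j] = True means the donor of pair i is compatible with the
    patient of pair j."""
    n = len(compat)
    cycles = set()
    for k in range(2, max_len + 1):
        for combo in permutations(range(n), k):
            if combo[0] != min(combo):      # keep one canonical rotation only
                continue
            if all(compat[combo[i]][combo[(i + 1) % k]] for i in range(k)):
                cycles.add(combo)
    return [list(c) for c in cycles]

def greedy_exchange(compat, max_len=3):
    """Pick vertex-disjoint cycles greedily, longest first (a simple baseline,
    not an optimal matching)."""
    chosen, used = [], set()
    for cyc in sorted(find_cycles(compat, max_len), key=len, reverse=True):
        if used.isdisjoint(cyc):
            chosen.append(cyc)
            used.update(cyc)
    return chosen

if __name__ == "__main__":
    # Hypothetical 5-pair pool.
    compat = [[False, True,  False, False, False],
              [True,  False, True,  False, False],
              [False, False, False, True,  False],
              [False, False, False, False, True ],
              [False, False, True,  False, False]]
    print(greedy_exchange(compat))   # e.g. [[2, 3, 4], [0, 1]]
```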
A model of proto-object based saliency
Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph
2013-01-01
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object based, bottom-up attention. The model rivals the performance of state of the art non-biologically plausible feature based algorithms (and outperforms biologically plausible feature based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
Multilevel models for estimating incremental net benefits in multinational studies.
Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John
2007-08-01
Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of assumptions underlying MLMs used in CEA needs to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-20
... to employ algorithms for quoting and trading consistent with NYSE and SEC regulations. As such, DMM units at the Exchange all use algorithms to engage in quoting and trading activity at the Exchange. \\3... technological change to enable DMM units to use algorithms to close a security as well, i.e., to effectuate a...
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
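For readers unfamiliar with the exchange algorithm that the adaptive exchange sampler extends, here is a minimal sketch of the basic exchange move on a toy model whose auxiliary data can be drawn exactly by enumeration (a 4-spin Ising-style model). It is not the paper's ERGM sampler or its adaptive importance-sampling machinery; the model, prior and tuning constants are illustrative assumptions.

```python
# Minimal sketch of the exchange algorithm for a doubly-intractable posterior,
# here a tiny 4-spin Ising-style model whose auxiliary data can be sampled
# exactly by enumeration. Illustrative only; not the paper's ERGM sampler.
import itertools, math, random

STATES = list(itertools.product([-1, 1], repeat=4))   # 2^4 configurations
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]              # a 4-cycle graph

def suff(x):                      # sufficient statistic: sum of edge products
    return sum(x[i] * x[j] for i, j in EDGES)

def unnorm(x, theta):             # unnormalized likelihood q(x | theta)
    return math.exp(theta * suff(x))

def sample_exact(theta):          # exact auxiliary draw by enumeration
    weights = [unnorm(x, theta) for x in STATES]
    return random.choices(STATES, weights=weights, k=1)[0]

def exchange_sampler(y, iters=5000, step=0.5, prior_sd=2.0):
    theta, chain = 0.0, []
    for _ in range(iters):
        prop = theta + random.gauss(0.0, step)
        w = sample_exact(prop)                       # auxiliary data/network
        # Normalizing constants cancel in the exchange acceptance ratio.
        log_alpha = (theta * suff(w) + prop * suff(y)
                     - theta * suff(y) - prop * suff(w))
        log_alpha += (theta**2 - prop**2) / (2 * prior_sd**2)  # N(0, sd) prior
        if math.log(random.random() + 1e-300) < log_alpha:
            theta = prop
        chain.append(theta)
    return chain

if __name__ == "__main__":
    y = (1, 1, 1, -1)             # one observed configuration
    chain = exchange_sampler(y)
    print(sum(chain[1000:]) / len(chain[1000:]))     # posterior mean estimate
```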
Graph-drawing algorithms geometries versus molecular mechanics in fullerenes
NASA Astrophysics Data System (ADS)
Kaufman, M.; Pisanski, T.; Lukman, D.; Borštnik, B.; Graovac, A.
1996-09-01
The algorithms of Kamada-Kawai (KK) and Fruchterman-Reingold (FR) have been recently generalized (Pisanski et al., Croat. Chem. Acta 68 (1995) 283) in order to draw molecular graphs in three-dimensional space. The quality of KK and FR geometries is studied here by comparing them with the molecular mechanics (MM) and the adjacency matrix eigenvectors (AME) algorithm geometries. In order to compare different layouts of the same molecule, an appropriate method has been developed. Its application to a series of experimentally detected fullerenes indicates that the KK, FR and AME algorithms are able to reproduce plausible molecular geometries.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-28
... Change The Exchange proposes to modify the wording of Rule 6.12 relating to the C2 matching algorithm... matching algorithm and subsequently overlay certain priorities over the selected base algorithm. There are currently two base algorithms: price-time (often referred to as first in, first out or FIFO) in which...
Functionality limit of classical simulated annealing
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2015-09-01
By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.
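A minimal sketch of classical simulated annealing on a random Euclidean TSP instance is given below for orientation; the cooling schedule and move set are illustrative choices, not the settings analyzed in the paper.

```python
# Minimal sketch of classical simulated annealing on a random Euclidean TSP
# instance (segment-reversal moves, geometric cooling). Parameters are
# illustrative placeholders.
import math, random

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(cities, t0=1.0, t_end=1e-3, cooling=0.995, moves_per_temp=200):
    n = len(cities)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(cities, tour)
    t = t0
    while t > t_end:
        for _ in range(moves_per_temp):
            i, j = sorted(random.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reversal move
            delta = tour_length(cities, cand) - tour_length(cities, tour)
            if delta < 0 or random.random() < math.exp(-delta / t):
                tour = cand
                if tour_length(cities, tour) < best_len:
                    best, best_len = tour[:], tour_length(cities, tour)
        t *= cooling                                              # geometric cooling
    return best, best_len

if __name__ == "__main__":
    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(30)]
    print(anneal(cities)[1])
```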
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
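For reference, a minimal sketch of the standard approximate-majority population protocol follows; it illustrates the interaction rules behind the AM learning rule, with the population size and initial split chosen arbitrarily rather than taken from the bird experiments.

```python
# Minimal sketch of the approximate-majority (AM) population protocol:
# two options 'A'/'B' plus an undecided state 'U'; in each random pairwise
# encounter an opposing pair knocks one agent into 'U', and an undecided
# agent adopts its partner's option. Parameters are illustrative only.
import random
from collections import Counter

def approximate_majority(n_a=55, n_b=45, n_u=0, steps=20000, seed=1):
    random.seed(seed)
    pop = ['A'] * n_a + ['B'] * n_b + ['U'] * n_u
    for _ in range(steps):
        i, j = random.sample(range(len(pop)), 2)      # random encounter
        a, b = pop[i], pop[j]
        if {a, b} == {'A', 'B'}:
            pop[j] = 'U'                              # conflict -> undecided
        elif a == 'U' and b != 'U':
            pop[i] = b                                # undecided adopts option
        elif b == 'U' and a != 'U':
            pop[j] = a
    return Counter(pop)

if __name__ == "__main__":
    # The initial majority is typically amplified towards consensus.
    print(approximate_majority())
```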
Calculation algorithms for breath-by-breath alveolar gas exchange: the unknowns!
Golja, Petra; Cettolo, Valentina; Francescato, Maria Pia
2018-06-25
Several papers (algorithm papers) describe computational algorithms that assess alveolar breath-by-breath gas exchange by accounting for changes in lung gas stores. It is unclear, however, if the effects of the latter are actually considered in the literature. We evaluated dissemination of algorithm papers and the relevant provided information. The list of documents investigating exercise transients (in 1998-2017) was extracted from the Scopus database. Documents citing the algorithm papers in the same period were analyzed in full text to check consistency of the relevant information provided. Less than 8% (121/1522) of documents dealing with exercise transients cited at least one algorithm paper; the paper of Beaver et al. (J Appl Physiol 51:1662-1675, 1981) was cited most often, with others being cited tenfold less. Among the documents citing the algorithm paper of Beaver et al. (J Appl Physiol 51:1662-1675, 1981) (N = 251), only 176 cited it for the application of their algorithm/s; in turn, 61% (107/176) of them stated the alveolar breath-by-breath gas exchange measurement, but only 1% (1/107) of the latter also reported the assessment of volunteers' functional residual capacity, a crucial parameter for the application of the algorithm. Information related to gas exchange was provided consistently in the methods and in the results in 1 of the 107 documents. Dissemination of algorithm papers in the literature investigating exercise transients is by far narrower than expected. The information provided about the actual application of gas exchange algorithms is often inadequate and/or ambiguous. Some guidelines are provided that can help to improve the quality of future publications in the field.
Multidimensional generalized-ensemble algorithms for complex systems.
Mitsutake, Ayori; Okamoto, Yuko
2009-06-07
We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.
Rethinking exchange market models as optimization algorithms
NASA Astrophysics Data System (ADS)
Luquini, Evandro; Omar, Nizam
2018-02-01
The exchange market model has mainly been used to study the inequality problem. Although the inequality problem in human society is very important, the dynamics of exchange market models up to the stationary state, and their capability of ranking individuals, are interesting in themselves. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models.
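A minimal sketch of a standard kinetic exchange market model follows, showing the pairwise wealth-exchange dynamics that the authors reinterpret as an optimization procedure; the random-split rule, saving parameter and population size are illustrative, not the specific models studied in the paper.

```python
# Minimal sketch of a kinetic exchange market model: at each step two random
# agents pool a fraction of their wealth and split it at random. Parameter
# values are illustrative placeholders.
import random

def exchange_market(n=200, steps=100_000, save_rate=0.0, seed=0):
    random.seed(seed)
    wealth = [1.0] * n                         # start from perfect equality
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        pot = (1 - save_rate) * (wealth[i] + wealth[j])
        eps = random.random()                  # random split of the pooled wealth
        wealth[i] = save_rate * wealth[i] + eps * pot
        wealth[j] = save_rate * wealth[j] + (1 - eps) * pot
    return wealth

if __name__ == "__main__":
    w = sorted(exchange_market(), reverse=True)
    print(w[:5])    # the stationary state ranks agents by accumulated wealth
```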
NASA Astrophysics Data System (ADS)
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models that produce better prediction capability. In this paper, an evolutionary framework is proposed by using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of the currency exchange rate. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rate prediction compared to the other models included in the study.
Peinemann, Frank; Kleijnen, Jos
2015-01-01
Objectives To develop an algorithm that aims to provide guidance and awareness for choosing multiple study designs in systematic reviews of healthcare interventions. Design Method study: (1) To summarise the literature base on the topic. (2) To apply the integration of various study types in systematic reviews. (3) To devise decision points and outline a pragmatic decision tree. (4) To check the plausibility of the algorithm by backtracking its pathways in four systematic reviews. Results (1) The results of our systematic review of the published literature have already been published. (2) We recaptured the experience from our four previously conducted systematic reviews that required the integration of various study types. (3) We chose length of follow-up (long, short), frequency of events (rare, frequent) and types of outcome as decision points (death, disease, discomfort, disability, dissatisfaction) and aligned the study design labels according to the Cochrane Handbook. We also considered practical or ethical concerns, and the problem of unavailable high-quality evidence. While applying the algorithm, disease-specific circumstances and aims of interventions should be considered. (4) We confirmed the plausibility of the pathways of the algorithm. Conclusions We propose that the algorithm can assist to bring seminal features of a systematic review with multiple study designs to the attention of anyone who is planning to conduct a systematic review. It aims to increase awareness and we think that it may reduce the time burden on review authors and may contribute to the production of a higher quality review. PMID:26289450
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
..., as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms June 17, 2010. On... Hybrid System. Each rule currently provides allocation algorithms the Exchange can utilize when executing incoming electronic orders, including the Ultimate Matching Algorithm (``UMA''), and price-time and pro...
PERFORMANCE, RELIABILITY, AND IMPROVEMENT OF A TISSUE-SPECIFIC METABOLIC SIMULATOR
A methodology is described that has been used to build and enhance a simulator for rat liver metabolism providing reliable predictions within a large chemical domain. The tissue metabolism simulator (TIMES) utilizes a heuristic algorithm to generate plausible metabolic maps using...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-27
... provides a ``menu'' of matching algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different matching algorithms on a class-by-class basis. The menu includes, among other choices, the ultimate matching algorithm (``UMA''), as well as price-time...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-18
... Change, as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms May 12, 2010... allocation algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different allocation algorithms on a class-by-class basis. The menu includes, among...
Efficiency of exchange schemes in replica exchange
NASA Astrophysics Data System (ADS)
Lingenheil, Martin; Denschlag, Robert; Mathias, Gerald; Tavan, Paul
2009-08-01
In replica exchange simulations a fast diffusion of the replicas through the temperature space maximizes the efficiency of the statistical sampling. Here, we compare the diffusion speed as measured by the round trip rates for four exchange algorithms. We find different efficiency profiles with optimal average acceptance probabilities ranging from 8% to 41%. The best performance is determined by benchmark simulations for the most widely used algorithm, which alternately tries to exchange all even and all odd replica pairs. By analytical mathematics we show that the excellent performance of this exchange scheme is due to the high diffusivity of the underlying random walk.
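The alternating even/odd swap scheme that performs best in the benchmark above can be sketched as follows on a toy double-well system; the temperatures, step sizes and potential are illustrative only.

```python
# Minimal sketch of the alternating even/odd replica-exchange scheme applied
# to independent Metropolis walkers on a double-well potential. Temperatures
# and step sizes are illustrative placeholders.
import math, random

def energy(x):
    return (x * x - 1.0) ** 2          # double-well potential

def metropolis_step(x, temp, step=0.5):
    cand = x + random.uniform(-step, step)
    if random.random() < math.exp(min(0.0, -(energy(cand) - energy(x)) / temp)):
        return cand
    return x

def replica_exchange(temps, sweeps=10000):
    xs = [random.uniform(-2, 2) for _ in temps]
    accepted = attempted = 0
    for sweep in range(sweeps):
        xs = [metropolis_step(x, t) for x, t in zip(xs, temps)]
        start = sweep % 2                      # alternate even and odd pairs
        for i in range(start, len(temps) - 1, 2):
            attempted += 1
            delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                    (energy(xs[i]) - energy(xs[i + 1]))
            if random.random() < math.exp(min(0.0, delta)):   # swap acceptance
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                accepted += 1
    return accepted / attempted

if __name__ == "__main__":
    random.seed(0)
    print(replica_exchange([0.1, 0.2, 0.4, 0.8, 1.6]))   # average swap acceptance
```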
DNA motif alignment by evolving a population of Markov chains.
Bi, Chengpeng
2009-01-30
Deciphering cis-regulatory elements, or de novo motif-finding in genomes, still remains elusive although much algorithmic effort has been expended. Markov chain Monte Carlo (MCMC) methods such as Gibbs motif samplers have been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, the MCMC-based motif samplers still suffer from local maxima, like EM. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often independently run a multitude of times, but without information exchange between different chains. Hence a new algorithm design enabling such information exchange would be worthwhile. This paper presents a novel motif-finding algorithm that evolves a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment and run by the Metropolis-Hastings sampler (MHS). It is progressively updated through a series of stochastically sampled local alignments. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method for running multiple independent Markov chains (IMC) without information exchange, dubbed the IMC motif algorithm, is also devised to compare with its PMC counterpart. Experimental studies demonstrate that the performance could be improved if pooled information were used to run a population of motif samplers. The new PMC algorithm was able to improve the convergence and outperformed other popular algorithms tested using simulated and biological motif sequences.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-04
.... Those public customers who continue to receive priority in the execution algorithm are called Priority... standard execution algorithm: \\3\\ Securities Exchange Act Release No. 59287 (January 23, 2009), 74 FR 5694...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... modifiers available to algorithms used by Floor brokers to route interest to the Exchange's matching engine...-Quotes entered into the matching engine by an algorithm on behalf of a Floor broker. STP modifiers would... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...
Crystal structure and cation exchanging properties of a novel open framework phosphate of Ce (IV)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevara, Samatha; Achary, S. N., E-mail: sachary@barc.gov.in; Tyagi, A. K.
2016-05-23
Herein we report the preparation, crystal structure and ion exchange properties of a new phosphate of tetravalent cerium, K2Ce(PO4)2. A monoclinic structure having a framework-type arrangement of Ce(PO4)6 units formed by CeO8 square antiprisms and PO4 tetrahedra is assigned to K2Ce(PO4)2. The K+ ions occupy the channels formed by the Ce(PO4)6 units and provide overall charge neutrality. The unique channel-type arrangement of the K+ ions makes them exchangeable with other cations. The ion exchange properties of K2Ce(PO4)2 have been investigated by equilibrating with a solution of 90Sr followed by radiometric analysis. Under optimum conditions, significant exchange of K+ with Sr2+, with Kd ~ 8000 mL/g, is observed. The details of the crystal structure and ion exchange properties are explained and a plausible mechanism for ion exchange is presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-05
... electronic matching algorithm from CBOE Rule 6.45B shall apply to SAL executions (e.g., pro-rata, price-time... entitlement when the pro-rata algorithm is in effect for SAL in selected Hybrid 3.0 classes as part of a pilot... what it would have been under the pre-pilot allocation algorithm. The Exchange will reduce the DPM/LMM...
Multiphase complete exchange: A theoretical analysis
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Complete Exchange requires each of N processors to send a unique message to each of the remaining N-1 processors. For a circuit switched hypercube with N = 2(sup d) processors, the Direct and Standard algorithms for Complete Exchange are optimal for very large and very small message sizes, respectively. For intermediate sizes, a hybrid Multiphase algorithm is better. This carries out Direct exchanges on a set of subcubes whose dimensions are a partition of the integer d. The best such algorithm for a given message size m could hitherto only be found by enumerating all partitions of d. The Multiphase algorithm is analyzed assuming a high performance communication network. It is proved that only algorithms corresponding to equipartitions of d (partitions in which the maximum and minimum elements differ by at most 1) can possibly be optimal. The run times of these algorithms plotted against m form a hull of optimality. It is proved that, although there is an exponential number of partitions, (1) the number of faces on this hull is Theta(square root of d), (2) the hull can be found in Theta(square root of d) time, and (3) once it has been found, the optimal algorithm for any given m can be found in Theta(log d) time. These results provide a very fast technique for minimizing communication overhead in many important applications, such as matrix transpose, Fast Fourier transform, and ADI.
Marr's levels and the minimalist program.
Johnson, Mark
2017-02-01
A simple change to a cognitive system at Marr's computational level may entail complex changes at the other levels of description of the system. The implementational level complexity of a change, rather than its computational level complexity, may be more closely related to the plausibility of a discrete evolutionary event causing that change. Thus the formal complexity of a change at the computational level may not be a good guide to the plausibility of an evolutionary event introducing that change. For example, while the Minimalist Program's Merge is a simple formal operation (Berwick & Chomsky, 2016), the computational mechanisms required to implement the language it generates (e.g., to parse the language) may be considerably more complex. This has implications for the theory of grammar: theories of grammar which involve several kinds of syntactic operations may be no less evolutionarily plausible than a theory of grammar that involves only one. A deeper understanding of human language at the algorithmic and implementational levels could strengthen the Minimalist Program's account of the evolution of language.
Feature reduction and payload location with WAM steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.; Lubenko, Ivans
2009-02-01
WAM steganalysis is a feature-based classifier for detecting LSB matching steganography, presented in 2006 by Goljan et al. and demonstrated to be sensitive even to small payloads. This paper makes three contributions to the development of the WAM method. First, we benchmark some variants of WAM in a number of sets of cover images, and we are able to quantify the significance of differences in results between different machine learning algorithms based on WAM features. It turns out that, like many of its competitors, WAM is not effective in certain types of cover, and furthermore it is hard to predict which types of cover are suitable for WAM steganalysis. Second, we demonstrate that only a few of the features used in WAM steganalysis do almost all of the work, so that a simplified WAM steganalyser can be constructed in exchange for a little less detection power. Finally, we demonstrate how the WAM method can be extended to provide forensic tools to identify the location (and potentially content) of LSB matching payload, given a number of stego images with payload placed in the same locations. Although easily evaded, this is a plausible situation if the same stego key is mistakenly re-used for embedding in multiple images.
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
Silletta, Emilia V; Franzoni, María B; Monti, Gustavo A; Acosta, Rodolfo H
2018-01-01
Two-dimensional (2D) Nuclear Magnetic Resonance relaxometry experiments are a powerful tool extensively used to probe the interaction among different pore structures, mostly in inorganic systems. The analysis of the collected experimental data generally consists of a 2D numerical inversion of time-domain data from which T2-T2 maps are generated. Through the years, different algorithms for the numerical inversion have been proposed. In this paper, two different algorithms for numerical inversion are tested and compared under different conditions of exchange dynamics: the method based on the Butler-Reeds-Dawson (BRD) algorithm and the fast iterative shrinkage-thresholding algorithm (FISTA) method. By constructing a theoretical model, the algorithms were tested for two- and three-site porous media, varying the exchange rate parameters, the pore sizes and the signal to noise ratio. In order to test the methods under realistic experimental conditions, a challenging organic system was chosen. The molecular exchange rates of water confined in hierarchical porous polymeric networks were obtained for two- and three-site porous media. Data processed with the BRD method were found to be accurate only under certain conditions of the exchange parameters, while data processed with the FISTA method are precise for all the studied parameters, except when SNR conditions are extreme. Copyright © 2017 Elsevier Inc. All rights reserved.
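As background on the second method, a minimal FISTA-type sketch for the non-negative, regularized least-squares inversion underlying T2-T2 maps is given below; the kernels, regularization weight and synthetic data are placeholders, not the BRD or FISTA implementations actually compared in the paper.

```python
# Minimal sketch of a FISTA-type solver for the regularized, non-negative
# least-squares problem behind T2-T2 map inversion:
#     min_F  0.5*||K1 F K2^T - S||_F^2 + 0.5*alpha*||F||_F^2,   F >= 0,
# with exponential-decay kernels K1, K2. Sizes, alpha and data are synthetic.
import numpy as np

def kernel(times, t2_grid):
    return np.exp(-np.outer(times, 1.0 / t2_grid))

def fista_2d(K1, K2, S, alpha=1e-2, iters=500):
    F = np.zeros((K1.shape[1], K2.shape[1]))
    Y, t = F.copy(), 1.0
    # Lipschitz constant of the gradient of the smooth part.
    L = (np.linalg.norm(K1, 2) * np.linalg.norm(K2, 2)) ** 2 + alpha
    for _ in range(iters):
        grad = K1.T @ (K1 @ Y @ K2.T - S) @ K2 + alpha * Y
        F_new = np.maximum(Y - grad / L, 0.0)          # gradient step + projection
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = F_new + ((t - 1.0) / t_new) * (F_new - F)  # Nesterov extrapolation
        F, t = F_new, t_new
    return F

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tau1, tau2 = np.linspace(1e-3, 1.0, 60), np.linspace(1e-3, 1.0, 60)
    t2_grid = np.logspace(-3, 0, 32)
    K1, K2 = kernel(tau1, t2_grid), kernel(tau2, t2_grid)
    F_true = np.zeros((32, 32))
    F_true[10, 20], F_true[25, 8] = 1.0, 0.5           # two synthetic peaks
    S = K1 @ F_true @ K2.T + 1e-3 * rng.standard_normal((60, 60))
    print(fista_2d(K1, K2, S).max())
```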
NASA Technical Reports Server (NTRS)
Han, Jongil; Arya, S. Pal; Shaohua, Shen; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2000-01-01
Algorithms are developed to extract atmospheric boundary layer profiles for turbulence kinetic energy (TKE) and energy dissipation rate (EDR), with data from a meteorological tower as input. The profiles are based on similarity theory and scalings for the atmospheric boundary layer. The calculated profiles of EDR and TKE are required to match the observed values at 5 and 40 m. The algorithms are coded for operational use and yield plausible profiles over the diurnal variation of the atmospheric boundary layer.
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial
2015-08-01
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
Numerical calculation of a sea water heat exchanger using Simulink software
NASA Astrophysics Data System (ADS)
Preda, A.; Popescu, L. L.; Popescu, R. S.
2017-08-01
To highlight the heat exchange taking place between seawater as the primary agent and the working fluid (water, glycol or Freon) as the secondary agent, we used the Simulink software to create a new sequence for the numerical calculation of heat exchange. For optimum heat transfer we opted for a counter-flow arrangement. The model developed to view the dynamic behavior of the exchanger consists of four interconnected levels. In the simulations it was found that a finer mesh of the whole exchanger leads to results much closer to reality. Various meshing models were tested, starting from a single cell and then refining, and an improvement in the results was observed. Simulations were made for both summer and winter, using process water and a glycol solution as the secondary agent. Studying the heat transfer that occurs in the primary exchanger of a heat pump whose primary fluid is seawater, we obtain with this program plausible data worthy of consideration. Inserting into the program the seasonal water temperatures of the Black Sea water layers, we obtain an encouraging picture of the storage capacity and heat transfer of seawater.
Voidage correction algorithm for unresolved Euler-Lagrange simulations
NASA Astrophysics Data System (ADS)
Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan
2018-04-01
The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value compared to results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm can significantly improve the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p . It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error of the total exchange rate of up to 30%.
Complete exchange on the iPSC-860
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1991-01-01
The implementation of complete exchange on the circuit switched Intel iPSC-860 hypercube is described. This pattern, also known as all-to-all personalized communication, is the densest requirement that can be imposed on a network. On the iPSC-860, care needs to be taken to avoid edge contention, which can have a disastrous impact on communication time. There are basically two classes of algorithms that achieve contention-free complete exchange. The first contains the classical standard exchange algorithm that is generally useful for small message sizes. The second includes a number of optimal or near-optimal algorithms that are best for large messages. Measurements of communication overhead on the iPSC-860 are given and a notation for analyzing communication link usage is developed. It is shown that for the two classes of algorithms, there is substantial variation in performance with synchronization technique and choice of message protocol. Timings of six implementations are given; each of these is useful over a particular range of message size and cube dimension. Since the complete exchange is a superset of communication patterns, these timings represent upper bounds on the time required by an arbitrary communication requirement. These results indicate that the programmer needs to evaluate several possibilities before finalizing an implementation - a careful choice can lead to very significant savings in time.
Genetic algorithm for neural networks optimization
NASA Astrophysics Data System (ADS)
Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta
2004-11-01
This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB.
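A minimal sketch of the genetic-algorithm side of such a comparison follows: a GA evolving the weights of a small feed-forward network on a toy regression task. The network size, GA settings and target function are illustrative and are not the paper's Yen/USD configuration.

```python
# Minimal sketch of a genetic algorithm evolving the weights of a small
# feed-forward network on a toy regression task. All settings are illustrative.
import math, random

def predict(weights, x, hidden=4):
    # 1-input, `hidden`-unit, 1-output network; weights packed in a flat list.
    out, idx = 0.0, 0
    for _ in range(hidden):
        w, b, v = weights[idx], weights[idx + 1], weights[idx + 2]
        out += v * math.tanh(w * x + b)
        idx += 3
    return out + weights[idx]                      # output bias

def fitness(weights, data):
    return -sum((predict(weights, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=60, genes=13, gens=200, mut_sd=0.3, seed=0):
    random.seed(seed)
    pop = [[random.gauss(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = pop[:pop_size // 4]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, mut_sd) if random.random() < 0.1 else g
                     for g in child]               # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, data))

if __name__ == "__main__":
    data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-30, 31)]
    best = evolve(data)
    print(-fitness(best, data))                    # final mean squared error
```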
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high DOF human-like characters. The planning problem is decomposed into a sequence of low dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the statically stable constraint on CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40 DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
Allam, Ahmed M; Abbas, Hazem M
2010-12-01
Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits) and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. Security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process. Therefore, diminishing the probability of such a threat improves the reliability of exchanging the output bits through a public channel. The synchronization with feedback algorithm is one of the existing algorithms that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker's confidence in the exchanged outputs and input patterns during training. The first algorithm is called "Do not Trust My Partner" (DTMP), which relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting this error. The second algorithm is called "Synchronization with Common Secret Feedback" (SCSFB), where inputs are kept partially secret and the attacker has to train its network on input patterns that are different from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of the DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization with feedback algorithm in the time needed for the parties to synchronize.
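For context, a minimal sketch of plain tree parity machine (TPM) mutual learning, the baseline neural key-exchange setup that the three proposed algorithms modify, is shown below; K, N, L and the Hebbian update are standard illustrative choices, and none of the DTMP/SCSFB enhancements are implemented here.

```python
# Minimal sketch of mutual learning between two tree parity machines (TPMs),
# the basic neural key-exchange setup. Plain synchronization only; the paper's
# DTMP/SCSFB modifications are not implemented. K, N, L are illustrative.
import random

K, N, L = 3, 4, 3                      # hidden units, inputs per unit, weight bound

def tpm_output(weights, inputs):
    sigmas = []
    for k in range(K):
        h = sum(weights[k][n] * inputs[k][n] for n in range(N))
        sigmas.append(1 if h >= 0 else -1)
    tau = 1
    for s in sigmas:
        tau *= s
    return tau, sigmas

def hebbian_update(weights, inputs, sigmas, tau):
    # Update only the hidden units that agree with the common output bit.
    for k in range(K):
        if sigmas[k] == tau:
            for n in range(N):
                w = weights[k][n] + inputs[k][n] * tau
                weights[k][n] = max(-L, min(L, w))     # clip to [-L, L]

def synchronize(max_steps=100_000, seed=0):
    random.seed(seed)
    wa = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
    wb = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
    for step in range(max_steps):
        x = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
        ta, sa = tpm_output(wa, x)
        tb, sb = tpm_output(wb, x)
        if ta == tb:                                   # only outputs are public
            hebbian_update(wa, x, sa, ta)
            hebbian_update(wb, x, sb, tb)
        if wa == wb:
            return step                                # shared weights = key
    return None

if __name__ == "__main__":
    print("synchronized after", synchronize(), "exchanged outputs")
```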
Noniterative accurate algorithm for the exact exchange potential of density-functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cinal, M.; Holas, A.
2007-10-15
An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature.
Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho
We considered a higher-dimensional extension for the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of Wang-Landau and Replica-Exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousands of computing cores. This approach allows us to split the parameter space of the system to be simulated into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, in which a similar scheme of parallel simulation was implemented without using replica exchange and with a different way to combine the result from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this difficulty, allowing exploration of higher parameter phase space by keeping track of the joint density of states.
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering
ERIC Educational Resources Information Center
Chahine, Firas Safwan
2012-01-01
Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but has a major…
Explicit B-spline regularization in diffeomorphic image registration
Tustison, Nicholas J.; Avants, Brian B.
2013-01-01
Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
Boolean network inference from time series data incorporating prior biological knowledge.
Haider, Saad; Pal, Ranadip
2012-01-01
Numerous approaches exist for modeling of genetic regulatory networks (GRNs) but the low sampling rates often employed in biological studies prevent the inference of detailed models from experimental data. In this paper, we analyze the issues involved in estimating a model of a GRN from single cell line time series data with limited time points. We present an inference approach for a Boolean Network (BN) model of a GRN from limited transcriptomic or proteomic time series data based on prior biological knowledge of connectivity, constraints on attractor structure and robust design. We applied our inference approach to 6 time point transcriptomic data on Human Mammary Epithelial Cell line (HMEC) after application of Epidermal Growth Factor (EGF) and generated a BN with a plausible biological structure satisfying the data. We further defined and applied a similarity measure to compare synthetic BNs and BNs generated through the proposed approach constructed from transitions of various paths of the synthetic BNs. We have also compared the performance of our algorithm with two existing BN inference algorithms. Through theoretical analysis and simulations, we showed the rarity of arriving at a BN from limited time series data with plausible biological structure using random connectivity and absence of structure in data. The framework, when applied to experimental data and data generated from synthetic BNs, was able to estimate BNs with high similarity scores. Comparison with existing BN inference algorithms showed the better performance of our proposed algorithm for limited time series data. The proposed framework can also be applied to optimize the connectivity of a GRN from experimental data when the prior biological knowledge on regulators is limited or not unique.
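A minimal sketch of the kind of model object being inferred, a synchronous Boolean network and its attractor, is shown below; the three-gene rules are hypothetical placeholders, not the HMEC network estimated in the paper.

```python
# Minimal sketch of simulating a synchronous Boolean network and reading off
# the attractor reached from a given start state. The rules are made-up
# placeholders for illustration.
def step(state, rules):
    return tuple(rule(state) for rule in rules)

def find_attractor(start, rules):
    seen, state, t = {}, start, 0
    while state not in seen:
        seen[state] = t
        state = step(state, rules)
        t += 1
    return list(seen)[seen[state]:]          # states on the attractor cycle

if __name__ == "__main__":
    # Hypothetical 3-gene network: x0' = x1 AND x2, x1' = NOT x0, x2' = x0 OR x1
    rules = [lambda s: int(s[1] and s[2]),
             lambda s: int(not s[0]),
             lambda s: int(s[0] or s[1])]
    print(find_attractor((0, 0, 0), rules))
```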
Brijesh Thapa
2000-01-01
There is a positive correlation between the debt crisis of countries. To combat the crisis, Lovejoy (1984) introduced the debt-for-nature swap process that involves a mechanism of exchange in which a certain amount of the debtor's foreign debt is cancelled or forgiven, in return for local currency from the debtor government to be invested in domestic environmental...
Multiphase complete exchange on a circuit switched hypercube
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1991-01-01
On a distributed memory parallel computer, the complete exchange (all-to-all personalized) communication pattern requires each of n processors to send a different block of data to each of the remaining n - 1 processors. This pattern is at the heart of many important algorithms, most notably the matrix transpose. For a circuit switched hypercube of dimension d (n = 2(sup d)), two algorithms for achieving complete exchange are known. These are (1) the Standard Exchange approach that employs d transmissions of size 2(sup d-1) blocks each and is useful for small block sizes, and (2) the Optimal Circuit Switched algorithm that employs 2(sup d) - 1 transmissions of 1 block each and is best for large block sizes. A unified multiphase algorithm is described that includes these two algorithms as special cases. The complete exchange on a hypercube of dimension d and block size m is achieved by carrying out k partial exchanges on subcubes of dimensions d(sub i), where d(sub 1) + ... + d(sub k) = d, with effective block sizes m(sub i) = m 2(sup d - d(sub i)). When k = d and all d(sub i) = 1, this corresponds to algorithm (1) above. For the case of k = 1 and d(sub i) = d, this becomes the circuit switched algorithm (2). Changing the subcube dimensions d(sub i) varies the effective block size and permits a compromise between the data permutation and block transmission overhead of (1) and the startup overhead of (2). For a hypercube of dimension d, the number of possible combinations of subcubes is p(d), the number of partitions of the integer d. This is an exponential but very slowly growing function and it is feasible over these partitions to discover the best combination for a given message size. The approach was analyzed for, and implemented on, the Intel iPSC-860 circuit switched hypercube. Measurements show good agreement with predictions and demonstrate that the multiphase approach can substantially improve performance for block sizes in the 0 to 160 byte range. This range, which corresponds to 0 to 40 floating point numbers per processor, is commonly encountered in practical numeric applications. The multiphase technique is applicable to all circuit-switched hypercubes that use the common e-cube routing strategy.
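The partition-based trade-off can be illustrated with a small sketch that enumerates the partitions of d and picks the cheapest schedule under a simple linear communication cost model (a startup time plus a per-byte time for each transmission, with 2(sup d_i) - 1 transmissions of blocks of size m 2(sup d - d_i) in phase i); the timing constants are illustrative assumptions, not iPSC-860 measurements.

```python
# Minimal sketch: enumerate the partitions of the cube dimension d, estimate
# the run time of each multiphase schedule under a simple linear cost model,
# and keep the cheapest. Timing constants are illustrative assumptions only.
def partitions(d, largest=None):
    """Yield all partitions of the integer d as non-increasing tuples."""
    if largest is None:
        largest = d
    if d == 0:
        yield ()
        return
    for first in range(min(d, largest), 0, -1):
        for rest in partitions(d - first, first):
            yield (first,) + rest

def schedule_time(parts, d, m, t_startup=100e-6, t_byte=0.4e-6):
    total = 0.0
    for di in parts:
        block = m * 2 ** (d - di)                 # effective block size (bytes)
        total += (2 ** di - 1) * (t_startup + block * t_byte)
    return total

def best_schedule(d, m):
    return min(partitions(d), key=lambda p: schedule_time(p, d, m))

if __name__ == "__main__":
    d = 5
    for m in (8, 64, 512, 4096):                  # bytes per block
        print(m, best_schedule(d, m))             # small m favours many phases
```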
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real world image processing has no effective solution within the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that should be solved for the creation of an effective artificial visual system to process real world images is the search for new algorithms of low-level image processing that, to a great extent, determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on a local space-variant filter, context encoding of the visual information presented in the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is supposed that the proposed algorithms may significantly improve model performance in real world image processing during memorizing, search, and recognition.
Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K
2015-03-27
Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched on a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of the simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of the simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements, and its simulations represent plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception of facial disfigurement.
Three-pass protocol scheme for bitmap image security by using vernam cipher algorithm
NASA Astrophysics Data System (ADS)
Rachmawati, D.; Budiman, M. A.; Aulya, L.
2018-02-01
Confidentiality, integrity, and efficiency are crucial aspects of data security. Among digital data, image data is especially prone to abuse such as duplication and modification. There are several data security techniques; one of them is cryptography. The security of the Vernam cipher algorithm depends heavily on the key exchange process: if the key is leaked, the security of this algorithm collapses. Therefore, a method that minimizes key leakage during the exchange of messages is required. The method used here is known as the Three-Pass Protocol. This protocol enables the message delivery process without any key exchange, so messages can reach the receiver safely without fear of key leakage. The system is built using the Java programming language. The materials used for system testing are images of size 200×200, 300×300, 500×500, 800×800, and 1000×1000 pixels. The experiments showed that the Vernam cipher algorithm in the Three-Pass Protocol scheme could restore the original image.
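As a concrete illustration of the message flow described above, here is a minimal Python sketch of the three-pass exchange with a Vernam (XOR) keystream; the byte strings and key generation are placeholders, and the image I/O of the paper's Java system is omitted.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"bitmap pixel data ..."          # plaintext (e.g. raw image bytes)
key_a = secrets.token_bytes(len(message))   # sender's secret, never transmitted
key_b = secrets.token_bytes(len(message))   # receiver's secret, never transmitted

pass1 = xor_bytes(message, key_a)   # sender -> receiver: message masked with key_a
pass2 = xor_bytes(pass1, key_b)     # receiver -> sender: receiver adds key_b
pass3 = xor_bytes(pass2, key_a)     # sender -> receiver: sender removes key_a
recovered = xor_bytes(pass3, key_b) # receiver removes key_b

assert recovered == message
# Known caveat of plain XOR in this flow: an eavesdropper who records all three
# passes recovers the message, since pass1 ^ pass2 ^ pass3 == message.
```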
Finding long chains in kidney exchange using the traveling salesman problem.
Anderson, Ross; Ashlagi, Itai; Gamarnik, David; Roth, Alvin E
2015-01-20
As of May 2014 there were more than 100,000 patients on the waiting list for a kidney transplant from a deceased donor. Although the preferred treatment is a kidney transplant, every year there are fewer donors than new patients, so the wait for a transplant continues to grow. To address this shortage, kidney paired donation (KPD) programs allow patients with living but biologically incompatible donors to exchange donors through cycles or chains initiated by altruistic (nondirected) donors, thereby increasing the supply of kidneys in the system. In many KPD programs a centralized algorithm determines which exchanges will take place to maximize the total number of transplants performed. This optimization problem has proven challenging both in theory, because it is NP-hard, and in practice, because the algorithms previously used were unable to optimally search over all long chains. We give two new algorithms that use integer programming to optimally solve this problem, one of which is inspired by the techniques used to solve the traveling salesman problem. These algorithms provide the tools needed to find optimal solutions in practice.
Finding long chains in kidney exchange using the traveling salesman problem
Anderson, Ross; Ashlagi, Itai; Gamarnik, David; Roth, Alvin E.
2015-01-01
As of May 2014 there were more than 100,000 patients on the waiting list for a kidney transplant from a deceased donor. Although the preferred treatment is a kidney transplant, every year there are fewer donors than new patients, so the wait for a transplant continues to grow. To address this shortage, kidney paired donation (KPD) programs allow patients with living but biologically incompatible donors to exchange donors through cycles or chains initiated by altruistic (nondirected) donors, thereby increasing the supply of kidneys in the system. In many KPD programs a centralized algorithm determines which exchanges will take place to maximize the total number of transplants performed. This optimization problem has proven challenging both in theory, because it is NP-hard, and in practice, because the algorithms previously used were unable to optimally search over all long chains. We give two new algorithms that use integer programming to optimally solve this problem, one of which is inspired by the techniques used to solve the traveling salesman problem. These algorithms provide the tools needed to find optimal solutions in practice. PMID:25561535
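For intuition about the optimization problem the two records above describe, the following toy Python sketch (not the authors' integer-programming formulation) enumerates short cycles among incompatible pairs and chains seeded by an altruistic donor, then brute-forces a vertex-disjoint selection that maximizes transplants; the compatibility graph, node names, and cycle/chain caps are illustrative assumptions.

```python
from itertools import combinations

pairs = {1, 2, 3, 4}                 # patient-donor pairs (toy data)
altruists = {'A'}                    # non-directed donors (toy data)
edges = {('A', 1), (1, 2), (2, 3), (3, 1), (2, 4), (4, 2), (3, 4)}

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)

def enumerate_exchanges(max_cycle=3, max_chain=4):
    found = []

    def grow_cycle(path):
        for nxt in adj.get(path[-1], ()):
            if nxt == path[0] and len(path) >= 2 and path[0] == min(path):
                found.append(('cycle', tuple(path)))        # canonical rotation only
            elif nxt not in path and nxt in pairs and len(path) < max_cycle:
                grow_cycle(path + [nxt])

    def grow_chain(path):
        if len(path) > 1:
            found.append(('chain', tuple(path)))
        if len(path) < max_chain:
            for nxt in adj.get(path[-1], ()):
                if nxt not in path and nxt in pairs:
                    grow_chain(path + [nxt])

    for p in pairs:
        grow_cycle([p])
    for a in altruists:
        grow_chain([a])
    return found

def best_plan(exchanges):
    """Brute force over subsets: keep vertex-disjoint sets, maximize transplants."""
    best, best_count = (), 0
    for r in range(1, len(exchanges) + 1):
        for subset in combinations(exchanges, r):
            used, count, ok = set(), 0, True
            for kind, nodes in subset:
                if used & set(nodes):
                    ok = False
                    break
                used |= set(nodes)
                count += len(nodes) if kind == 'cycle' else len(nodes) - 1
            if ok and count > best_count:
                best, best_count = subset, count
    return best, best_count

plan, transplants = best_plan(enumerate_exchanges())
print(transplants, plan)   # here: 4 transplants (chain A->1 plus cycle 2->3->4->2)
```

The integer-programming algorithms in the paper solve the same selection problem at realistic scale, where brute force is infeasible.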
Fe atom exchange between aqueous Fe2+ and magnetite.
Gorski, Christopher A; Handler, Robert M; Beard, Brian L; Pasakarnis, Timothy; Johnson, Clark M; Scherer, Michelle M
2012-11-20
The reaction between magnetite and aqueous Fe(2+) has been extensively studied due to its role in contaminant reduction, trace-metal sequestration, and microbial respiration. Previous work has demonstrated that the reaction of Fe(2+) with magnetite (Fe(3)O(4)) results in the structural incorporation of Fe(2+) and an increase in the bulk Fe(2+) content of magnetite. It is unclear, however, whether significant Fe atom exchange occurs between magnetite and aqueous Fe(2+), as has been observed for other Fe oxides. Here, we measured the extent of Fe atom exchange between aqueous Fe(2+) and magnetite by reacting isotopically "normal" magnetite with (57)Fe-enriched aqueous Fe(2+). The extent of Fe atom exchange between magnetite and aqueous Fe(2+) was significant (54-71%), and went well beyond the amount of Fe atoms found at the near surface. Mössbauer spectroscopy of magnetite reacted with (56)Fe(2+) indicates that no preferential exchange of octahedral or tetrahedral sites occurred. Exchange experiments conducted with Co-ferrite (Co(2+)Fe(2)(3+)O(4)) showed little impact of Co substitution on the rate or extent of atom exchange. Bulk electron conduction, as previously invoked to explain Fe atom exchange in goethite, is a possible mechanism, but if it is occurring, conduction does not appear to be the rate-limiting step. The lack of a significant impact of Co substitution on the kinetics of Fe atom exchange, and the relatively high diffusion coefficients reported for magnetite, suggest that for magnetite, unlike goethite, Fe atom diffusion is a plausible mechanism to explain the rapid rates of Fe atom exchange in magnetite.
Local Estimators for Spacecraft Formation Flying
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh
2011-01-01
A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained from a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimate for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated into the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurements available to each estimator.
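A minimal sketch of the agreement-type (average-consensus) iteration that such local estimators build on, assuming a toy ring topology and step size; this is a generic illustration, not the paper's communicating-Kalman-filter architecture.

```python
import numpy as np

def consensus(x0, neighbors, eps=0.3, iters=50):
    """x0: initial local estimates (one per node); neighbors: adjacency list per node."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            # Move toward the local neighborhood average (a graph-Laplacian step).
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# Four nodes on a ring converge toward the average of their initial estimates.
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(consensus([1.0, 2.0, 4.0, 7.0], ring))   # approx. [3.5, 3.5, 3.5, 3.5]
```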
How Ag Nanospheres Are Transformed into AgAu Nanocages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreau, Liane M.; Schurman, Charles A.; Kewalramani, Sumit
Bimetallic hollow, porous noble metal nanoparticles are of broad interest for biomedical, optical and catalytic applications. The most straightforward method for preparing such structures involves the reaction between HAuCl4 and well-formed Ag particles, typically spheres, cubes, or triangular prisms, yet the mechanism underlying their formation is poorly understood at the atomic scale. By combining in situ nanoscopic and atomic-scale characterization techniques (XAFS, SAXS, XRF, and electron microscopy) to follow the process, we elucidate a plausible reaction pathway for the conversion of citrate-capped Ag nanospheres to AgAu nanocages; importantly, the hollowing event cannot be explained by the nanoscale Kirkendall effect, nor by Galvanic exchange alone, two processes that have been previously proposed. We propose a modification of the bulk Galvanic exchange process that takes into account considerations that can only occur with nanoscale particles. This nanoscale Galvanic exchange process explains the novel morphological and chemical changes associated with the typically observed hollowing process.
Sun, Cheng; Boutis, Gregory S
2011-02-28
We report on the direct measurement of the exchange rate of waters of hydration in elastin by T(2)-T(2) exchange spectroscopy. The exchange rates in bovine nuchal ligament elastin and aortic elastin at temperatures near, below and at the physiological temperature are reported. Using an Inverse Laplace Transform (ILT) algorithm, we are able to identify four components in the relaxation times. While three of the components are in good agreement with previous measurements that used multi-exponential fitting, the ILT algorithm distinguishes a fourth component having relaxation times close to that of free water and is identified as water between fibers. With the aid of scanning electron microscopy, a model is proposed allowing for the application of a two-site exchange analysis between any two components for the determination of exchange rates between reservoirs. The results of the measurements support a model (described elsewhere [1]) wherein the net entropy of bulk waters of hydration should increase upon increasing temperature in the inverse temperature transition.
Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network
NASA Astrophysics Data System (ADS)
Oda, Akihiro; Nishi, Hiroaki
Recently, the importance of data sharing structures in autonomous distributed networks has been increasing. A wireless sensor network is used for managing distributed data. This type of distributed network requires effective information exchange methods for data sharing. To reduce the traffic of broadcast messages, reduction of the amount of redundant information is indispensable. In order to reduce packet loss in mobile ad-hoc networks, QoS-sensitive routing algorithms have been frequently discussed. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information exchange algorithm built on a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcast messages, and it can guarantee QoS in a wireless network environment. It can be applied to a routing algorithm in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and the neighborhood information is periodically broadcast according to this table. The proposed hash-based routing entry management using an extended MAC address eliminates the overhead of message flooding. An analysis of hash-value collisions guides the choice of the minimally required hash length, and based on this mathematical analysis an optimum hash function and hash length can be given. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless network routing algorithm.
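A minimal Python sketch, under stated assumptions, of the general idea of summarizing a neighbor table as a fixed-length hash bitmap so that nodes can exchange membership information with Boolean operations instead of flooding full routing entries; the bitmap width, RSSI threshold, and SHA-1 choice are illustrative, not the paper's parameters.

```python
import hashlib

M = 128                    # bitmap length in bits (assumed)
RSSI_THRESHOLD = -80       # dBm; keep only links better than this (assumed)

def slot(mac: str) -> int:
    """Map an (extended) MAC address to a bitmap slot with a stable hash."""
    digest = hashlib.sha1(mac.encode()).digest()
    return int.from_bytes(digest[:4], 'big') % M

def summarize(neighbors: dict) -> int:
    """neighbors: {mac: rssi}. Returns an M-bit integer bitmap (OR of slots)."""
    bitmap = 0
    for mac, rssi in neighbors.items():
        if rssi >= RSSI_THRESHOLD:
            bitmap |= 1 << slot(mac)
    return bitmap

def might_contain(bitmap: int, mac: str) -> bool:
    """Membership test; with n entries the false-positive rate is roughly n/M."""
    return bool(bitmap >> slot(mac) & 1)
```

The trade-off the abstract analyzes mathematically is visible here: a larger M lowers the collision (false-positive) probability but increases the size of each broadcast summary.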
Instrument-induced spatial crosstalk deconvolution algorithm
NASA Technical Reports Server (NTRS)
Wright, Valerie G.; Evans, Nathan L., Jr.
1986-01-01
An algorithm has been developed which reduces the effects of (deconvolves) instrument-induced spatial crosstalk in satellite image data by several orders of magnitude where highly precise radiometry is required. The algorithm is based upon radiance transfer ratios, which are defined as the fractional bilateral exchange of energy between pixels A and B.
A distributed algorithm to maintain and repair the trail networks of arboreal ants.
Chandrasekhar, Arjun; Gordon, Deborah M; Navlakha, Saket
2018-06-18
We study how the arboreal turtle ant (Cephalotes goniodontus) solves a fundamental computing problem: maintaining a trail network and finding alternative paths to route around broken links in the network. Turtle ants form a routing backbone of foraging trails linking several nests and temporary food sources. This species travels only in the trees, so their foraging trails are constrained to lie on a natural graph formed by overlapping branches and vines in the tangled canopy. Links between branches, however, can be ephemeral, easily destroyed by wind, rain, or animal movements. Here we report a biologically feasible distributed algorithm, parameterized using field data, that can plausibly describe how turtle ants maintain the routing backbone and find alternative paths to circumvent broken links in the backbone. We validate the ability of this probabilistic algorithm to circumvent simulated breaks in synthetic and real-world networks, and we derive an analytic explanation for why certain features are crucial to improve the algorithm's success. Our proposed algorithm uses fewer computational resources than common distributed graph search algorithms, and thus may be useful in other domains, such as for swarm computing or for coordinating molecular robots.
Kalter, Henry D.; Roubanatou, Abdoulaye–Mamadou; Koffi, Alain; Black, Robert E.
2015-01-01
Background This study was one of a set of verbal autopsy investigations undertaken by the WHO/UNICEF–supported Child Health Epidemiology Reference Group (CHERG) to derive direct estimates of the causes of neonatal and child deaths in high priority countries of sub–Saharan Africa. The objective of the study was to determine the cause distributions of neonatal (0–27 days) and child (1–59 months) mortality in Niger. Methods Verbal autopsy interviews were conducted of random samples of 453 neonatal deaths and 620 child deaths from 2007 to 2010 identified by the 2011 Niger National Mortality Survey. The cause of each death was assigned using two methods: computerized expert algorithms arranged in a hierarchy and physician completion of a death certificate for each child. The findings of the two methods were compared to each other, and plausibility checks were conducted to assess which is the preferred method. Comparison of some direct measures from this study with CHERG modeled cause of death estimates is discussed. Findings The cause distributions of neonatal deaths as determined by expert algorithms and the physician were similar, with the same top three causes by both methods and all but two other causes within one rank of each other. Although child causes of death differed more, the reasons often could be discerned by analyzing algorithmic criteria alongside the physician’s application of required minimal diagnostic criteria. Including all algorithmic (primary and co–morbid) and physician (direct, underlying and contributing) diagnoses in the comparison minimized the differences, with kappa coefficients greater than 0.40 for five of 11 neonatal diagnoses and nine of 13 child diagnoses. By algorithmic diagnosis, early onset neonatal infection was significantly associated (χ2 = 13.2, P < 0.001) with maternal infection, and the geographic distribution of child meningitis deaths closely corresponded with that for meningitis surveillance cases and deaths. Conclusions Verbal autopsy conducted in the context of a national mortality survey can provide useful estimates of the cause distributions of neonatal and child deaths. While the current study found reasonable agreement between the expert algorithm and physician analyses, it also demonstrated greater plausibility for two algorithmic diagnoses, and validation work is needed to ascertain the findings. Direct, large–scale measurement of causes of death can complement, strengthen, and in some settings may be preferred over modeled estimates. PMID:25969734
Algorithms for Lightweight Key Exchange.
Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio
2017-06-27
Public-key cryptography is too slow for general purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make a much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determining those that are better suited to the requirements of critical infrastructure and emergency applications, and we propose a security framework based on these algorithms and study its application to decentralized node or sensor networks.
Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system
NASA Astrophysics Data System (ADS)
Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU
2018-03-01
The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can be adopted to more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 × 0.600 × 0.627 (m3) to the optimized 1.854 × 0.420 × 0.340 (m3), with a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool to achieve optimization in an actual engineering project during the practical design process.
NASA Astrophysics Data System (ADS)
Jo, Sunhwan; Jiang, Wei
2015-12-01
Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency relative to the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 for quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is written in the NAMD Tcl script interface, which enables an on-the-fly simulation parameter change. Our implementation of REST2 is within communication-enabled Tcl script built on top of Charm++; thus the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
NASA Astrophysics Data System (ADS)
Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.
2015-09-01
Earth-System and regional models forecasting climate change and its impacts simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator while neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain and surfactants. These have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis for novel couplers of the atmospheric and oceanographic model components. We tested performance with measured and simulated data from the European coastal ocean and found that our algorithm forecasts greenhouse gas exchanges that differ largely from those forecast by the generalization currently in use. Our algorithm allows vectorized calculation and parallel processing, improving computational speed roughly 12× on a single CPU core, an essential feature for Earth-System model applications.
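To illustrate why such bulk formulas vectorize naturally, here is a hedged NumPy sketch contrasting a wind-only quadratic transfer-velocity parameterization with a version scaled by a placeholder modifier standing in for the additional factors (agitation, stability, etc.); the coefficient, Schmidt-number exponent, and the modifier itself are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def transfer_velocity_wind_only(u10, schmidt):
    """Gas transfer velocity (arbitrary units) from 10-m wind speed, quadratic form."""
    A = 0.25  # assumed coefficient, for illustration only
    return A * u10**2 * (schmidt / 660.0) ** -0.5

def transfer_velocity_extended(u10, schmidt, modifier):
    """Same, scaled by a placeholder factor standing in for stability/agitation terms."""
    return transfer_velocity_wind_only(u10, schmidt) * modifier

# Flux = k * (Cw - Ca): entirely elementwise, so whole grids vectorize at once.
u10 = np.array([3.0, 7.0, 12.0])                       # wind speed, m/s
modifier = np.array([0.9, 1.0, 1.2])                   # hypothetical per-cell correction
delta_c = np.array([2.0, 1.5, 1.0])                    # air-sea concentration difference
flux = transfer_velocity_extended(u10, 660.0, modifier) * delta_c
```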
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Rahmani, Amirreza
2011-01-01
Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation-flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are proposed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms rely on a local information exchange network, relaxing the assumptions on existing algorithms. Distributed space systems rely on a signal transmission network among multiple spacecraft for their operation. Control and coordination among multiple spacecraft in a formation are facilitated via a network of relative sensing and interspacecraft communications. Guidance, navigation, and control rely on the sensing network. This network becomes more complex the more spacecraft are added, or as mission requirements become more complex. The observability of a formation state was assessed via a set of local observations from a particular node in the formation. Formation observability can be parameterized in terms of the matrices appearing in the formation dynamics and observation matrices. An agreement protocol was used as a mechanism for observing formation states from local measurements. An agreement protocol is essentially an unforced dynamic system whose trajectory is governed by the interconnection geometry and initial condition of each node, with a goal of reaching a common value of interest. The observability of the interconnected system depends on the geometry of the network, as well as the position of the observer relative to the topology. For the first time, critical GN&C (guidance, navigation, and control estimation) subsystems are synthesized by bringing the contribution of the spacecraft information-exchange network to the forefront of algorithmic analysis and design. The result is a formation estimation algorithm that is modular and robust to variations in the topology and link properties of the underlying formation network.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... class-by-class basis which electronic allocation algorithm \\6\\ would apply for rotations. Currently Rule... opening price (with multiple quotes and orders being ranked in accordance with the allocation algorithm in... and quotes ranked in accordance with the allocation algorithm in effect for the class). Any remaining...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-25
... same class as an affiliate if CBOE uses in that class an allocation algorithm that allocates electronic... in a particular options class an allocation algorithm that does not allocate electronic trades, in... bid or offer. Unlike the CBOE, the ISE allocation algorithm does not provide for the potential...
Sabreliner flight test for airborne wind shear forward looking detection and avoidance radar systems
NASA Technical Reports Server (NTRS)
Mathews, Bruce D.
1991-01-01
Westinghouse conducted a flight test with its Sabreliner AN/APG-68 instrumented radar to assess the urban discrete/ground moving vehicle clutter environment. Glideslope approaches were flown into Washington National, BWI, and Georgetown, Delaware, airports employing radar mode timing, waveform, and processing configurations plausible for microburst windshear avoidance. These general and specific characterizations of the clutter environment furnish an empirical foundation for beginning development of low-false-alarm detection algorithms.
Crypto-Watermarking of Transmitted Medical Images.
Al-Haj, Ali; Mohammad, Ahmad; Amer, Alaa'
2017-02-01
Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secured scheme to guarantee confidentiality and verify authenticity and integrity of exchanged medical data. In this paper, we describe a region-based, crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in images' region of non-interest using SVD in the DWT domain. Integrity is provided in two levels: strict integrity implemented by a cryptographic hash watermark, and content-based integrity implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding patient's data in the image. Performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.
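As a rough illustration of the general technique named above (robust embedding via SVD in the DWT domain), the sketch below perturbs the singular values of the LL subband by a watermark's singular values; the Haar wavelet, the strength ALPHA, and the use of stored side information for extraction are assumptions of this sketch, not the authors' region-of-non-interest scheme.

```python
import numpy as np
import pywt

ALPHA = 0.05  # embedding strength (assumed value)

def embed(cover, watermark):
    """cover: 2-D array with even dimensions; watermark: same shape as the LL band."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(cA, full_matrices=False)
    Uw, Sw, Vtw = np.linalg.svd(watermark.astype(float), full_matrices=False)
    S_marked = S + ALPHA * Sw                       # perturb singular values
    cA_marked = U @ np.diag(S_marked) @ Vt
    marked = pywt.idwt2((cA_marked, (cH, cV, cD)), 'haar')
    return marked, (U, S, Vt, Uw, Vtw)              # side info kept for extraction

def extract(marked, side_info):
    U, S, Vt, Uw, Vtw = side_info
    cA, _ = pywt.dwt2(marked.astype(float), 'haar')
    _, S_marked, _ = np.linalg.svd(cA, full_matrices=False)
    Sw_rec = (S_marked - S) / ALPHA
    return Uw @ np.diag(Sw_rec) @ Vtw               # recovered watermark estimate
```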
Relabeling exchange method (REM) for learning in neural networks
NASA Astrophysics Data System (ADS)
Wu, Wen; Mammone, Richard J.
1994-02-01
The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
Feature Based Retention Time Alignment for Improved HDX MS Analysis
NASA Astrophysics Data System (ADS)
Venable, John D.; Scuba, William; Brock, Ansgar
2013-04-01
An algorithm for retention time alignment of mass shifted hydrogen-deuterium exchange (HDX) data based on an iterative distance minimization procedure is described. The algorithm performs pairwise comparisons in an iterative fashion between a list of features from a reference file and a file to be time aligned to calculate a retention time mapping function. Features are characterized by their charge, retention time and mass of the monoisotopic peak. The algorithm is able to align datasets with mass shifted features, which is a prerequisite for aligning hydrogen-deuterium exchange mass spectrometry datasets. Confidence assignments from the fully automated processing of a commercial HDX software package are shown to benefit significantly from retention time alignment prior to extraction of deuterium incorporation values.
Algorithms for Lightweight Key Exchange †
Santonja, Juan; Zamora, Antonio
2017-01-01
Public-key cryptography is too slow for general purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make a much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determining those that are better suited to the requirements of critical infrastructure and emergency applications, and we propose a security framework based on these algorithms and study its application to decentralized node or sensor networks. PMID:28654006
Ermer, Elsa; Guerin, Scott A; Cosmides, Leda; Tooby, John; Miller, Michael B
2006-01-01
Baron-Cohen (1995) proposed that the theory of mind (ToM) inference system evolved to promote strategic social interaction. Social exchange--a form of co-operation for mutual benefit--involves strategic social interaction and requires ToM inferences about the contents of other individuals' mental states, especially their desires, goals, and intentions. There are behavioral and neuropsychological dissociations between reasoning about social exchange and reasoning about equivalent problems tapping other, more general content domains. It has therefore been proposed that social exchange behavior is regulated by social contract algorithms: a domain-specific inference system that is functionally specialized for reasoning about social exchange. We report an fMRI study using the Wason selection task that provides further support for this hypothesis. Precautionary rules share so many properties with social exchange rules--they are conditional, deontic, and involve subjective utilities--that most reasoning theories claim they are processed by the same neurocomputational machinery. Nevertheless, neuroimaging shows that reasoning about social exchange activates brain areas not activated by reasoning about precautionary rules, and vice versa. As predicted, neural correlates of ToM (anterior and posterior temporal cortex) were activated when subjects interpreted social exchange rules, but not precautionary rules (where ToM inferences are unnecessary). We argue that the interaction between ToM and social contract algorithms can be reciprocal: social contract algorithms require ToM inferences, but their functional logic also allows ToM inferences to be made. By considering interactions between ToM in the narrower sense (belief-desire reasoning) and all the social inference systems that create the logic of human social interaction--ones that enable as well as use inferences about the content of mental states--a broader conception of ToM may emerge: a computational model embodying a Theory of Human Nature (ToHN).
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, strip exchanges. Given a permutation, the challenge is to sort it using the minimum number of strip exchanges. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips. The strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present here the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
A Review of Industrial Heat Exchange Optimization
NASA Astrophysics Data System (ADS)
Yao, Junjie
2018-01-01
A heat exchanger is an energy exchange device: it transfers heat from one working medium to another and has been widely used in the petrochemical industry, HVAC and refrigeration, aerospace and many other fields. The optimal design and efficient operation of heat exchangers and heat transfer networks are of great significance to the process industry for realizing energy conservation, production cost reduction and energy consumption reduction. In this paper, the optimization of heat exchangers, optimization algorithms, and heat exchanger optimization with different objective functions are discussed. Then, optimization of the heat exchanger and the heat exchanger network considering different conditions is compared and analysed. Finally, all the problems discussed are summarized and future directions are proposed.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
2014-08-01
consensus algorithm called randomized gossip is more suitable [7, 8]. In asynchronous randomized gossip algorithms, pairs of neighboring nodes exchange...messages and perform updates in an asynchronous and unattended manner, and they also 1 The class of broadcast gossip algorithms [9, 10, 11, 12] are...dynamics [2] and asynchronous pairwise randomized gossip [7, 8], broadcast gossip algorithms do not require that nodes know the identities of their
Enhanced diffie-hellman algorithm for reliable key exchange
NASA Astrophysics Data System (ADS)
Aryan; Kumar, Chaithanya; Vincent, P. M. Durai Raj
2017-11-01
The Diffie-Hellman is one of the first public-key procedures and is a well-established way of exchanging cryptographic keys securely. The concept was introduced by Ralph Merkle, and it is named after Whitfield Diffie and Martin Hellman. In the Diffie-Hellman algorithm, the sender and receiver establish a common secret key and then communicate with each other over the public channel, which is known to everyone. A number of internet services are secured by Diffie-Hellman. In a public-key cryptosystem, the sender has to trust the public key received from the receiver and vice versa, and this is the challenge of public-key cryptosystems. A man-in-the-middle attack is quite possible on the existing Diffie-Hellman algorithm. In a man-in-the-middle attack, the attacker sits in the public channel, intercepts the public keys of both sender and receiver, and sends each of them public keys that he generated himself. This is how a man-in-the-middle attack is possible on the Diffie-Hellman algorithm. A denial-of-service attack is another attack commonly mounted against Diffie-Hellman. In this attack, the attacker tries to stop the communication between sender and receiver, which he can do by deleting messages or by confusing the parties with miscommunication. Further attacks, such as insider and outsider attacks, are also possible on Diffie-Hellman. To reduce the possibility of attacks on the Diffie-Hellman algorithm, we have enhanced the Diffie-Hellman algorithm to the next level. In this paper, we extend the Diffie-Hellman algorithm by applying the Diffie-Hellman concept twice: a first exchange produces a stronger secret key, and that secret key is further exchanged between the sender and the receiver so that a new shared secret key is generated for each message. The second secret key is generated by taking a primitive root of the first secret key.
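For reference, a minimal textbook Diffie-Hellman agreement in Python; the small prime and generator are toy values for illustration only, and the paper's two-stage enhancement is not reproduced here.

```python
import secrets

# Toy parameters: real deployments use large standardized groups.
p = 4294967291        # a prime (2**32 - 5), far too small for real security
g = 5                 # public base

a = secrets.randbelow(p - 2) + 2      # sender's private exponent
b = secrets.randbelow(p - 2) + 2      # receiver's private exponent
A = pow(g, a, p)                      # sent over the public channel
B = pow(g, b, p)                      # sent over the public channel

shared_sender = pow(B, a, p)
shared_receiver = pow(A, b, p)
assert shared_sender == shared_receiver   # both sides derive the same secret
```

Because only A and B cross the public channel, an eavesdropper who cannot solve the discrete logarithm learns nothing about the shared secret; an active man-in-the-middle, however, can substitute his own A and B, which is exactly the weakness the paper's enhancement targets.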
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-20
... needed. Rule 104(b) further provides that DMM units shall have the ability to employ algorithms for... use algorithms to engage in quoting and trading activity at the Exchange. \\3\\ Rule 104 is operating on... technological change to enable DMM units to use algorithms to close a security as well, i.e., to effectuate a...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-26
... allocation algorithm shall apply for COB and/or COA executions on a class-by-class basis, subject to certain conditions. Currently, as described in more detail below, the allocation algorithms for COB and COA default to the allocation algorithms in effect for a given options class. As proposed, the rule change would...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... make the STP modifiers available to algorithms used by Floor brokers to route interest to the Exchange..., pegging e- Quotes, and g-Quotes entered into the matching engine by an algorithm on behalf of a Floor... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
... priority allocation algorithm for the SPXPM option class,\\5\\ subject to certain conditions. \\5\\ SPXPM is... algorithm in effect for the class, subject to various conditions set forth in subparagraphs (b)(3)(A... permit the allocation algorithm in effect for AIM in the SPXPM option class to be the price-time priority...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... algorithm \\5\\ for HOSS and to make related changes to Interpretation and Policy .03. Currently, there are... applicable allocation algorithm for the HOSS and modified HOSS rotation procedures. Paragraph (c)(iv) of the... allocation algorithm in effect for the option class pursuant to Rule 6.45A or 6.45B), then to limit orders...
Processes of Ammonia Air-Surface Exchange in a Fertilized Zea Mays Canopy
Recent incorporation of coupled soil biogeochemical and bi-directional NH3 air-surface exchange algorithms into regional air quality models holds promise for further reducing uncertainty in estimates of NH3 emissions from fertilized soils. While this advancement represents a sig...
Extension of analog network coding in wireless information exchange
NASA Astrophysics Data System (ADS)
Chen, Cheng; Huang, Jiaqing
2012-01-01
Ever since the concept of analog network coding (ANC) was put forward by S. Katti, much attention has been focused on how to utilize analog network coding to take advantage of wireless interference, which used to be considered generally harmful, to improve throughput performance. Previously, only the case of two nodes that need to exchange information has been fully discussed, while the issue of extending analog network coding to three or more nodes remains undeveloped. In this paper, we propose a practical transmission scheme to extend analog network coding to more than two nodes that need to exchange information among themselves. We start with the case of three nodes that need to exchange information and demonstrate that, by utilizing our algorithm, the throughput can achieve a 33% and 20% increase compared with that of traditional transmission scheduling and digital network coding, respectively. Then, we generalize the algorithm so that it fits scenarios with any number of nodes. We also discuss some technical issues and provide a throughput analysis as well as the bit error rate.
Large unbalanced credit scoring using Lasso-logistic regression ensemble.
Wang, Hong; Xu, Qingsong; Zhou, Lifeng
2015-01-01
Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.
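A minimal scikit-learn sketch along these lines, assuming class 0 is the majority: the majority class is randomly undersampled into balanced bags, an L1 (Lasso) regularized logistic regression is fit on each, and probabilities are averaged; the bag count and C are illustrative, and the paper's clustering step is replaced here by simple random undersampling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lasso_logistic_ensemble(X, y, n_bags=10, C=0.1, seed=0):
    """Fit n_bags L1-regularized logistic regressions on balanced resamples."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)       # assumed to be the larger class
    models = []
    for _ in range(n_bags):
        maj_sample = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, maj_sample])
        clf = LogisticRegression(penalty='l1', solver='liblinear', C=C)
        clf.fit(X[idx], y[idx])
        models.append(clf)
    return models

def predict_proba(models, X):
    """Average the positive-class probabilities across the bagged models."""
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
```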
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
NASA Technical Reports Server (NTRS)
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
Insights from mathematical modeling of renal tubular function.
Weinstein, A M
1998-01-01
Mathematical models of proximal tubule have been developed which represent the important solute species within the constraints of known cytosolic concentrations, transport fluxes, and overall epithelial permeabilities. In general, model simulations have been used to assess the quantitative feasibility of what appear to be qualitatively plausible mechanisms, or alternatively, to identify incomplete rationalization of experimental observations. The examples considered include: (1) proximal water reabsorption, for which the lateral interspace is a locus for solute-solvent coupling; (2) ammonia secretion, for which the issue is prioritizing driving forces - transport on the Na+/H+ exchanger, on the Na,K-ATPase, or ammoniagenesis; (3) formate-stimulated NaCl reabsorption, for which simple addition of a luminal membrane chloride/formate exchanger fails to represent experimental observation, and (4) balancing luminal entry and peritubular exit, in which ATP-dependent peritubular K+ channels have been implicated, but appear unable to account for the bulk of proximal tubule cell volume homeostasis.
NASA Technical Reports Server (NTRS)
Katsuda, Satoru; Tsunemi, Hiroshi; Mori, Koji; Uchida, Hiroyuki; Petre, Robert; Yamada, Shinya; Akamatsu, Hiroki; Konami, Saori; Tamagawa, Toru
2012-01-01
We present high-resolution X-ray spectra of cloud-shock interaction regions in the eastern and northern rims of the Galactic supernova remnant Puppis A, using the Reflection Grating Spectrometer onboard the XMM-Newton satellite. A number of emission lines including K(alpha) triplets of He-like N, O, and Ne are clearly resolved for the first time. Intensity ratios of forbidden to resonance lines in the triplets are found to be higher than predictions by thermal emission models having plausible plasma parameters. The anomalous line ratios cannot be reproduced by effects of resonance scattering, recombination, or inner-shell ionization processes, but could be explained by charge-exchange emission that should arise at interfaces between the cold/warm clouds and the hot plasma. Our observations thus provide observational support for charge-exchange X-ray emission in supernova remnants.
NASA Astrophysics Data System (ADS)
Hay, C.; Creveling, J. R.; Huybers, P. J.
2016-12-01
Excursions in the stable carbon isotopic composition of carbonate rocks (δ13Ccarb) can facilitate correlation of Precambrian and Phanerozoic sedimentary successions at a higher temporal resolution than radiometric and biostratigraphic frameworks typically afford. Within the bounds of litho- and biostratigraphic constraints, stratigraphers often correlate isotopic patterns between distant stratigraphic sections through visual alignment of local maxima and minima of isotopic values. The reproducibility of this method can prove challenging and, thus, evaluating the statistical robustness of intrabasinal composite carbon isotope curves, and global correlations to these reference curves, remains difficult. To assess the reproducibility of stratigraphic alignment of δ13Ccarb data, and correlations between carbon isotope excursions, we employ a numerical dynamic time warping methodology that stretches and squeezes the time axis of a record to obtain an optimal correlation (in a least-squares sense) between time-uncertain series of data. In particular, we assess various alignments between series of Early Cambrian δ13Ccarb data with respect to plausible matches. We first show that an alignment of these records obtained visually, and published previously, is broadly reproducible using dynamic time warping. Alternative alignments with similar goodness of fit are also obtainable, and their stratigraphic plausibility is discussed. This approach should be generalizable to an algorithm for the purposes of developing a library of plausible alignments between multiple time-uncertain stratigraphic records.
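A minimal pure-NumPy dynamic time warping sketch (not the authors' implementation) that computes the least-squares alignment cost between two time-uncertain series and backtracks the optimal warping path.

```python
import numpy as np

def dtw(a, b):
    """Return the squared-difference DTW cost and the optimal warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal alignment path from the corner.
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        moves = {(i - 1, j - 1): D[i - 1, j - 1],
                 (i - 1, j): D[i - 1, j],
                 (i, j - 1): D[i, j - 1]}
        i, j = min(moves, key=moves.get)
    return D[n, m], path[::-1]

# Example: a stretched copy of a signal still aligns with low cost.
cost, path = dtw([0.0, 1.0, 2.0, 1.0, 0.0], [0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0])
```

Alignments with nearly equal cost correspond to the alternative, similarly plausible correlations that the abstract discusses.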
Balancing Contention and Synchronization on the Intel Paragon
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Nicol, David M.
1996-01-01
The Intel Paragon is a mesh-connected distributed memory parallel computer. It uses an oblivious and deterministic message routing algorithm: this permits us to develop highly optimized schedules for frequently needed communication patterns. The complete exchange is one such pattern. Several approaches are available for carrying it out on the mesh. We study an algorithm developed by Scott. This algorithm assumes that a communication link can carry one message at a time and that a node can only transmit one message at a time. It requires global synchronization to enforce a schedule of transmissions. Unfortunately, global synchronization has substantial overhead on the Paragon. At the same time, the powerful interconnection mechanism of this machine permits 2 or 3 messages to share a communication link with minor overhead. It can also overlap multiple message transmissions from the same node to some extent. We develop a generalization of Scott's algorithm that executes complete exchange with a prescribed contention. Schedules that incur greater contention require fewer synchronization steps. This permits us to trade off contention against synchronization overhead. We describe the performance of this algorithm and compare it with Scott's original algorithm as well as with a naive algorithm that does not take interconnection structure into account. The bounded-contention algorithm is always better than Scott's algorithm and outperforms the naive algorithm for all but the smallest message sizes. The naive algorithm fails to work on meshes larger than 12 x 12. These results show that due consideration of processor interconnect and machine performance parameters is necessary to obtain peak performance from the Paragon and its successor mesh machines.
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
NASA Astrophysics Data System (ADS)
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
Optimal Deployment of Sensor Nodes Based on Performance Surface of Underwater Acoustic Communication
Choi, Jee Woong
2017-01-01
The underwater acoustic sensor network (UWASN) is a system that exchanges data between numerous sensor nodes deployed in the sea. The UWASN uses an underwater acoustic communication technique to exchange data. Therefore, it is important to design a robust system that will function even in severely fluctuating underwater communication conditions, along with variations in the ocean environment. In this paper, a new algorithm to find the optimal deployment positions of underwater sensor nodes is proposed. The algorithm uses the communication performance surface, which is a map showing the underwater acoustic communication performance of a targeted area. A virtual force-particle swarm optimization algorithm is then used as an optimization technique to find the optimal deployment positions of the sensor nodes, using the performance surface information to estimate the communication radii of the sensor nodes in each generation. The algorithm is evaluated by comparing simulation results between two different seasons (summer and winter) for an area located off the eastern coast of Korea as the selected targeted area. PMID:29053569
Huang, Yu-Ming M; McCammon, J Andrew; Miao, Yinglong
2018-04-10
Through adding a harmonic boost potential to smooth the system potential energy surface, Gaussian accelerated molecular dynamics (GaMD) provides enhanced sampling and free energy calculation of biomolecules without the need of predefined reaction coordinates. This work continues to improve the acceleration power and energy reweighting of the GaMD by combining the GaMD with replica exchange algorithms. Two versions of replica exchange GaMD (rex-GaMD) are presented: force constant rex-GaMD and threshold energy rex-GaMD. During simulations of force constant rex-GaMD, the boost potential can be exchanged between replicas of different harmonic force constants with fixed threshold energy. However, the algorithm of threshold energy rex-GaMD tends to switch the threshold energy between lower and upper bounds for generating different levels of boost potential. Testing simulations on three model systems, including the alanine dipeptide, chignolin, and HIV protease, demonstrate that through continuous exchanges of the boost potential, the rex-GaMD simulations not only enhance the conformational transitions of the systems but also narrow down the distribution width of the applied boost potential for accurate energetic reweighting to recover biomolecular free energy profiles.
Improved treatment of exact exchange in Quantum ESPRESSO
Barnes, Taylor A.; Kurth, Thorsten; Carrier, Pierre; ...
2017-01-18
Here, we present an algorithm and implementation for the parallel computation of exact exchange in Quantum ESPRESSO (QE) that exhibits greatly improved strong scaling. QE is an open-source software package for electronic structure calculations using plane wave density functional theory, and supports the use of local, semi-local, and hybrid DFT functionals. Wider application of hybrid functionals is desirable for the improved simulation of electronic band energy alignments and thermodynamic properties, but the computational complexity of evaluating the exact exchange potential limits the practical application of hybrid functionals to large systems and requires efficient implementations. We demonstrate that existing implementations of hybrid DFT that utilize a single data structure for both the local and exact exchange regions of the code are significantly limited in the degree of parallelization achievable. We present a band-pair parallelization approach, in which the calculation of exact exchange is parallelized and evaluated independently from the parallelization of the remainder of the calculation, with the wavefunction data being efficiently transformed on-the-fly into a form that is optimal for each part of the calculation. For a 64 water molecule supercell, our new algorithm reduces the overall time to solution by nearly an order of magnitude.
User authentication based on the NFC host-card-emulation technology
NASA Astrophysics Data System (ADS)
Kološ, Jan; Kotyrba, Martin
2017-11-01
This paper deals with the implementation of algorithms for data exchange between mobile devices supporting NFC HCE (Host-Card-Emulation) and a contactless NFC reader communicating in a read/write mode. This solution provides a multiplatform architecture for data exchange between devices with a focus on safe and simple user authentication.
Previous exposure assessment panel studies have observed considerable seasonal, between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure ...
Zhou, Ruhong
2004-05-01
A highly parallel replica exchange method (REM) that couples with a newly developed molecular dynamics algorithm particle-particle particle-mesh Ewald (P3ME)/RESPA has been proposed for efficient sampling of protein folding free energy landscape. The algorithm is then applied to two separate protein systems, beta-hairpin and a designed protein Trp-cage. The all-atom OPLSAA force field with an explicit solvent model is used for both protein folding simulations. Up to 64 replicas of solvated protein systems are simulated in parallel over a wide range of temperatures. The combined trajectories in temperature and configurational space allow a replica to overcome free energy barriers present at low temperatures. These large scale simulations reveal detailed results on folding mechanisms, intermediate state structures, thermodynamic properties and the temperature dependences for both protein systems.
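As a reminder of the core step in temperature replica exchange, the following minimal sketch implements the standard Metropolis swap criterion between two replicas; the temperatures, energies and the value of kB are illustrative and not taken from the paper.

```python
import math, random

def attempt_swap(E_i, E_j, T_i, T_j, kB=0.0019872041):  # kB in kcal/(mol K)
    """Metropolis acceptance test for exchanging configurations between
    two replicas at temperatures T_i and T_j (standard REM criterion)."""
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Hypothetical potential energies (kcal/mol) of neighbouring replicas.
print(attempt_swap(E_i=-1201.3, E_j=-1188.7, T_i=300.0, T_j=320.0))
```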
NASA Astrophysics Data System (ADS)
Plag, H.
2009-12-01
Local Sea Level (LSL) rise is one of the major anticipated impacts of future global warming with potentially devastating consequences, particularly in many low-lying, often subsiding, and densely populated coastal areas. Risk and vulnerability assessments in support of informed decisions ask for predictions of the plausible range of future LSL trajectories as input, while mitigation and adaptation to potentially rapid LSL changes would benefit from a forecasting of LSL changes on decadal time scales. Low-frequency to secular changes in LSL are the result of a number of location-dependent processes including ocean temperature and salinity changes, ocean and atmospheric circulation changes, mass exchange of the oceans with other reservoirs in the water cycle, and vertical land motion. Mass exchange between oceans and the ice sheets, glaciers, and land water storage has the potential to change coastal LSL in many geographical regions. LSL changes in response to mass exchange with land-based ice sheets, glaciers and water storage are spatially variable due to vertical land motion induced by the shifting loads and gravitational effects resulting from both the relocation of surface water mass and the deformation of the solid Earth under the load. As a consequence, close to a melting ice mass LSL will fall significantly and far away increase more than the global average. The so-called sea level equation expresses LSL as a function of current and past mass changes in ice sheets, glaciers, land water storage, and the resulting mass redistribution in the oceans. Predictions of mass-induced LSL changes exhibit significant inter-model differences, which introduce a large uncertainty in the prediction of LSL variations caused by changes in ice sheets, glaciers, and land water storage. Together with uncertainties in other contributions, this uncertainty produces a large range of plausible future LSL trajectories, which hampers the development of reasonable adaptation strategies for the coastal zone. While the sea level equation has been tested extensively in postglacial rebound studies for the viscous (post-mass change) contribution, a thorough validation of the elastic (co-mass change) contribution has yet to be done. Accurate observations of concurrent LSL changes, vertical land motion, and gravity changes required for such a test were missing until very recently. For the validation, new observations of LSL changes, vertical land motion, and gravity changes close to rapidly changing ice sheets and glaciers in Greenland, Svalbard, and other regions, as well as satellite altimetry observations of sea surface height changes and satellite gravity mission observations of mass changes in the hydrosphere are now available. With a validated solution, we will be able to better characterize LSL changes due to mass exchange of the oceans with, in particular, ice sheets and glaciers as an important contribution to the plausible range of future LSL trajectories in coastal zones. The current "error budget" will be assessed, and the impact of the uncertainties in LSL forecasts (on decadal time scales) and long-term projections (century time scales) on adaptation and mitigation strategies will be discussed.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
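To make the single-particle idea concrete, the hedged sketch below sequentially assigns discrete-featured stimuli to clusters using a CRP prior and a Dirichlet-multinomial likelihood, keeping only one sampled hypothesis at a time; the stimuli, alpha and beta are invented, and Anderson's exact coupling probability and feature likelihoods are replaced by this generic stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign(stimuli, alpha=1.0, beta=1.0, n_values=2):
    """Single-particle sequential approximation to CRP-based categorization.

    Each stimulus is a vector of discrete features. It joins an existing
    cluster (or a new one) with probability proportional to the CRP prior
    times a Dirichlet-smoothed likelihood of its features.
    """
    clusters = []       # per cluster: list of member indices
    counts = []         # per cluster: n_dims x n_values feature counts
    assignments = []
    for t, x in enumerate(stimuli):
        n_dims = len(x)
        weights = []
        for c, cnt in zip(clusters, counts):
            prior = len(c) / (t + alpha)
            lik = np.prod((cnt[np.arange(n_dims), x] + beta)
                          / (len(c) + n_values * beta))
            weights.append(prior * lik)
        # option of starting a new cluster
        weights.append((alpha / (t + alpha)) * (1.0 / n_values) ** n_dims)
        weights = np.array(weights) / np.sum(weights)
        z = rng.choice(len(weights), p=weights)   # sample one assignment
        if z == len(clusters):
            clusters.append([t])
            counts.append(np.zeros((n_dims, n_values), dtype=int))
        else:
            clusters[z].append(t)
        counts[z][np.arange(n_dims), x] += 1
        assignments.append(z)
    return assignments

stimuli = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0], [1, 1, 0], [0, 1, 1]])
print(assign(stimuli))
```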
Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah
2017-04-01
Inversion of surface wave dispersion curves, with its highly nonlinear nature, poses difficulties for traditional linearized inverse methods due to the strong dependence on the initial model, the possibility of trapping in local minima and the evaluation of partial derivatives. There are modern global optimization methods to overcome these difficulties in surface wave analysis, such as the genetic algorithm (GA) and particle swarm optimization (PSO). GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks or fish schools. The utility of these methods requires a plausible convergence rate, acceptable relative error and optimum computation cost, which are important for modelling studies. Even though the PSO and GA processes are similar in appearance, the crossover operation of GA is not used in PSO, and the mutation operation in GA is a stochastic process for changing the genes within chromosomes. Unlike GA, the particles in the PSO algorithm change their positions with logical velocities according to the particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate S-wave velocities and thicknesses of a layered earth model by using the Rayleigh wave dispersion curve, compared these results with GA, and emphasize the advantage of using the PSO algorithm for geophysical modelling studies considering its rapid convergence, low misfit error and computation cost.
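For illustration, a minimal particle swarm update and a toy inversion loop are sketched below; the inertia and acceleration coefficients, parameter bounds and the stand-in misfit function are assumptions, and a real application would replace the misfit with the discrepancy between observed and forward-modelled Rayleigh-wave dispersion curves.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, bounds=(1.0, 5.0)):
    """One canonical PSO iteration: velocities blend inertia, the particle's
    own best position (cognitive term) and the swarm's best (social term)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, *bounds)   # keep model parameters in a physical range
    return x, v

# Toy usage: fit a two-layer model (Vs1, Vs2) to a synthetic target by
# minimizing a stand-in misfit.
target = np.array([2.1, 3.4])
misfit = lambda m: np.sum((m - target) ** 2, axis=-1)

n, dim = 20, 2
x = rng.uniform(1.0, 5.0, (n, dim)); v = np.zeros_like(x)
pbest, pbest_f = x.copy(), misfit(x)
for _ in range(50):
    gbest = pbest[np.argmin(pbest_f)]
    x, v = pso_step(x, v, pbest, gbest)
    f = misfit(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
print(pbest[np.argmin(pbest_f)])   # converges close to the target model
```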
Weiss, Carolin; Tursunova, Irada; Neuschmelting, Volker; Lockau, Hannah; Nettekoven, Charlotte; Oros-Peusquens, Ana-Maria; Stoffels, Gabriele; Rehme, Anne K.; Faymonville, Andrea Maria; Shah, N. Jon; Langen, Karl Josef; Goldbrunner, Roland; Grefkes, Christian
2015-01-01
Imaging of the course of the corticospinal tract (CST) by diffusion tensor imaging (DTI) is useful for function-preserving tumour surgery. The integration of functional localizer data into tracking algorithms offers to establish a direct structure–function relationship in DTI data. However, alterations of MRI signals in and adjacent to brain tumours often lead to spurious tracking results. We here compared the impact of subcortical seed regions placed at different positions and the influences of the somatotopic location of the cortical seed and clinical co-factors on fibre tracking plausibility in brain tumour patients. The CST of 32 patients with intracranial tumours was investigated by means of deterministic DTI and neuronavigated transcranial magnetic stimulation (nTMS). The cortical seeds were defined by the nTMS hot spots of the primary motor area (M1) of the hand, the foot and the tongue representation. The CST originating from the contralesional M1 hand area was mapped as intra-individual reference. As subcortical region of interests (ROI), we used the posterior limb of the internal capsule (PLIC) and/or the anterior inferior pontine region (aiP). The plausibility of the fibre trajectories was assessed by a-priori defined anatomical criteria. The following potential co-factors were analysed: Karnofsky Performance Scale (KPS), resting motor threshold (RMT), T1-CE tumour volume, T2 oedema volume, presence of oedema within the PLIC, the fractional anisotropy threshold (FAT) to elicit a minimum amount of fibres and the minimal fibre length. The results showed a higher proportion of plausible fibre tracts for the aiP-ROI compared to the PLIC-ROI. Low FAT values and the presence of peritumoural oedema within the PLIC led to less plausible fibre tracking results. Most plausible results were obtained when the FAT ranged above a cut-off of 0.105. In addition, there was a strong effect of somatotopic location of the seed ROI; best plausibility was obtained for the contralateral hand CST (100%), followed by the ipsilesional hand CST (>95%), the ipsilesional foot (>85%) and tongue (>75%) CST. In summary, we found that the aiP-ROI yielded better tracking results compared to the IC-ROI when using deterministic CST tractography in brain tumour patients, especially when the M1 hand area was tracked. In case of FAT values lower than 0.10, the result of the respective CST tractography should be interpreted with caution with respect to spurious tracking results. Moreover, the presence of oedema within the internal capsule should be considered a negative predictor for plausible CST tracking. PMID:25685709
An ATR architecture for algorithm development and testing
NASA Astrophysics Data System (ADS)
Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym
2013-05-01
A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and testing of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
Markov Chain Monte Carlo Bayesian Learning for Neural Networks
NASA Technical Reports Server (NTRS)
Goodrich, Michael S.
2011-01-01
Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
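The sketch below shows the general flavour of such weight-space sampling: a random-walk Metropolis chain over the weights of a tiny network, with a Gaussian prior standing in for the paper's modified Jeffreys prior; the data, network size, noise level and step size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 1-D regression target with noise (purely illustrative).
X = np.linspace(-2, 2, 40)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

H = 5                                  # hidden units
n_w = 3 * H + 1                        # w1, b1, w2, b2 for a 1-H-1 network

def unpack(w):
    return w[:H], w[H:2*H], w[2*H:3*H], w[3*H]

def predict(w, X):
    w1, b1, w2, b2 = unpack(w)
    return np.tanh(X * w1 + b1) @ w2 + b2

def log_post(w, sigma=0.1, tau=1.0):
    """Log posterior = Gaussian likelihood + Gaussian prior on the weights
    (a simple stand-in for the modified Jeffreys prior of the paper)."""
    resid = y - predict(w, X)
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - 0.5 * np.sum(w ** 2) / tau ** 2)

# Random-walk Metropolis over the weight vector.
w = rng.normal(size=n_w)
lp = log_post(w)
samples = []
for i in range(5000):
    prop = w + 0.05 * rng.normal(size=n_w)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        w, lp = prop, lp_prop
    if i > 1000 and i % 10 == 0:
        samples.append(w.copy())

preds = np.array([predict(s, X) for s in samples])
print(preds.mean(axis=0)[:5], preds.std(axis=0)[:5])  # predictive mean and spread
```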
Large Unbalanced Credit Scoring Using Lasso-Logistic Regression Ensemble
Wang, Hong; Xu, Qingsong; Zhou, Lifeng
2015-01-01
Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data. PMID:25706988
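A minimal sketch of the general recipe is given below, assuming random undersampling in place of the paper's clustering step and using scikit-learn's L1-penalized logistic regression as the base classifier; the synthetic data, class-imbalance level and ensemble size are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def lasso_logistic_ensemble(X, y, n_models=25, C=1.0):
    """Balance each bootstrap sample by undersampling the majority class,
    fit an L1-penalized logistic regression on it, and keep the models."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    models = []
    for _ in range(n_models):
        maj = rng.choice(majority, size=len(minority), replace=False)
        mino = rng.choice(minority, size=len(minority), replace=True)
        idx = np.concatenate([maj, mino])
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X[idx], y[idx])
        models.append(clf)
    return models

def predict_proba(models, X):
    # Average the predicted default probabilities across the ensemble.
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

# Synthetic unbalanced data: roughly 10% positives, 10 numeric features.
X = rng.normal(size=(4000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=4000) > 3.0).astype(int)
models = lasso_logistic_ensemble(X, y)
print(predict_proba(models, X[:5]))
```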
between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.
Hula, Andreas; Montague, P Read; Dayan, Peter
2015-06-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference.
Prostate contouring in MRI guided biopsy.
Vikal, Siddharth; Haker, Steven; Tempany, Clare; Fichtinger, Gabor
2009-03-27
With MRI possibly becoming a modality of choice for detection and staging of prostate cancer, fast and accurate outlining of the prostate is required in the volume of clinical interest. We present a semi-automatic algorithm that uses a priori knowledge of prostate shape to arrive at the final prostate contour. The contour of one slice is then used as initial estimate in the neighboring slices. Thus we propagate the contour in 3D through steps of refinement in each slice. The algorithm makes only minimum assumptions about the prostate shape. A statistical shape model of prostate contour in polar transform space is employed to narrow search space. Further, shape guidance is implicitly imposed by allowing only plausible edge orientations using template matching. The algorithm does not require region-homogeneity, discriminative edge force, or any particular edge profile. Likewise, it makes no assumption on the imaging coils and pulse sequences used and it is robust to the patient's pose (supine, prone, etc.). The contour method was validated using expert segmentation on clinical MRI data. We recorded a mean absolute distance of 2.0 ± 0.6 mm and dice similarity coefficient of 0.93 ± 0.3 in midsection. The algorithm takes about 1 second per slice.
Energy design for protein-protein interactions
Ravikant, D. V. S.; Elber, Ron
2011-01-01
Proteins bind to other proteins efficiently and specifically to carry on many cell functions such as signaling, activation, transport, enzymatic reactions, and more. To determine the geometry and strength of binding of a protein pair, an energy function is required. An algorithm to design an optimal energy function, based on empirical data of protein complexes, is proposed and applied. Emphasis is made on negative design in which incorrect geometries are presented to the algorithm that learns to avoid them. For the docking problem the search for plausible geometries can be performed exhaustively. The possible geometries of the complex are generated on a grid with the help of a fast Fourier transform algorithm. A novel formulation of negative design makes it possible to investigate iteratively hundreds of millions of negative examples while monotonically improving the quality of the potential. Experimental structures for 640 protein complexes are used to generate positive and negative examples for learning parameters. The algorithm designed in this work finds the correct binding structure as the lowest energy minimum in 318 cases of the 640 examples. Further benchmarks on independent sets confirm the significant capacity of the scoring function to recognize correct modes of interactions. PMID:21842951
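The grid-based exhaustive search the authors describe rests on the correlation theorem; the sketch below scores all integer translations of a ligand grid against a receptor grid with FFTs. Both grids here are random binary occupancy maps, so the score simply counts overlap; a real docking potential would encode surface and core regions (and the learned parameters) to reward contacts and penalize clashes.

```python
import numpy as np

def correlation_scores(receptor, ligand):
    """Score all integer translations of a ligand grid against a receptor
    grid via the correlation theorem: corr = IFFT(FFT(R) * conj(FFT(L)))."""
    F_r = np.fft.fftn(receptor)
    F_l = np.fft.fftn(ligand, s=receptor.shape)  # zero-pad ligand to grid size
    return np.real(np.fft.ifftn(F_r * np.conj(F_l)))

# Tiny illustrative grids: random binary occupancy maps standing in for
# the shape/energy grids used in real docking.
rng = np.random.default_rng(4)
receptor = (rng.random((32, 32, 32)) > 0.8).astype(float)
ligand = (rng.random((8, 8, 8)) > 0.8).astype(float)

scores = correlation_scores(receptor, ligand)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best translation (voxels):", best, "score:", scores[best])
```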
A stochastic modeling of isotope exchange reactions in glutamine synthetase
NASA Astrophysics Data System (ADS)
Kazmiruk, N. V.; Boronovskiy, S. E.; Nartsissov, Ya R.
2017-11-01
The model presented in this work allows simulation of isotopic exchange reactions at chemical equilibrium catalyzed by glutamine synthetase. To simulate the functioning of the enzyme, an algorithm based on a stochastic approach was applied. The dependence of the exchange rates for 14C and 32P on metabolite concentration was estimated. The simulation results confirmed the validity of the preferred-order random binding mechanism. Corresponding values of K0.5 were also obtained.
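As a generic illustration of the stochastic approach, the sketch below runs a Gillespie simulation of a reversible exchange A ⇌ B at equilibrium; it is not the glutamine synthetase mechanism (which involves preferred-order random binding of several substrates), and the copy numbers and rate constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def gillespie_exchange(n_a, n_b, k_f, k_r, t_end):
    """Stochastic simulation (Gillespie SSA) of a reversible exchange A <-> B
    at chemical equilibrium; returns a sampled trajectory of times and counts.
    Rate constants and copy numbers are illustrative, not fitted values."""
    t, times, na_traj = 0.0, [0.0], [n_a]
    while t < t_end:
        a_f, a_r = k_f * n_a, k_r * n_b        # reaction propensities
        a_tot = a_f + a_r
        if a_tot == 0.0:
            break
        t += rng.exponential(1.0 / a_tot)      # time to the next event
        if rng.random() < a_f / a_tot:         # forward: A -> B
            n_a, n_b = n_a - 1, n_b + 1
        else:                                  # reverse: B -> A
            n_a, n_b = n_a + 1, n_b - 1
        times.append(t); na_traj.append(n_a)
    return np.array(times), np.array(na_traj)

t, na = gillespie_exchange(n_a=500, n_b=500, k_f=1.0, k_r=1.0, t_end=0.05)
print(len(t), na[-1])
```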
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute and memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness, security of the proposed algorithm and the acceptable compression performance.
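A hedged sketch of the measurement-matrix construction is shown below: a logistic map, keyed by its initial value and control parameter, generates the first row of a circulant matrix from which m rows are kept. The ±1 quantization, the specific key values and the omission of the block-wise pixel exchange are simplifications relative to the paper.

```python
import numpy as np

def logistic_sequence(x0, mu, n):
    """Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    x = np.empty(n)
    for k in range(n):
        x0 = mu * x0 * (1.0 - x0)
        x[k] = x0
    return x

def measurement_matrix(m, n, x0=0.37, mu=3.99):
    """Build an m x n partial circulant measurement matrix whose first row is
    generated by the logistic map (keyed by x0, mu), in the spirit of
    chaos-controlled circulant constructions for compressive sensing."""
    row = np.sign(logistic_sequence(x0, mu, n) - 0.5)   # +/-1 entries
    C = np.stack([np.roll(row, k) for k in range(n)])   # full circulant matrix
    return C[:m] / np.sqrt(m)                           # keep m rows

Phi = measurement_matrix(m=64, n=256)
block = np.random.default_rng(6).random(256)            # one flattened image block
y = Phi @ block                                          # compressed measurements
print(Phi.shape, y.shape)
```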
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
Interpreting Chromosome Aberration Spectra
NASA Technical Reports Server (NTRS)
Levy, Dan; Reeder, Christopher; Loucas, Bradford; Hlatky, Lynn; Chen, Allen; Cornforth, Michael; Sachs, Rainer
2007-01-01
Ionizing radiation can damage cells by breaking both strands of DNA in multiple locations, essentially cutting chromosomes into pieces. The cell has enzymatic mechanisms to repair such breaks; however, these mechanisms are imperfect and, in an exchange process, may produce a large-scale rearrangement of the genome, called a chromosome aberration. Chromosome aberrations are important in killing cells, during carcinogenesis, in characterizing repair/misrepair pathways, in retrospective radiation biodosimetry, and in a number of other ways. DNA staining techniques such as mFISH ( multicolor fluorescent in situ hybridization) provide a means for analyzing aberration spectra by examining observed final patterns. Unfortunately, an mFISH observed final pattern often does not uniquely determine the underlying exchange process. Further, resolution limitations in the painting protocol sometimes lead to apparently incomplete final patterns. We here describe an algorithm for systematically finding exchange processes consistent with any observed final pattern. This algorithm uses aberration multigraphs, a mathematical formalism that links the various aspects of aberration formation. By applying a measure to the space of consistent multigraphs, we will show how to generate model-specific distributions of aberration processes from mFISH experimental data. The approach is implemented by software freely available over the internet. As a sample application, we apply these algorithms to an aberration data set, obtaining a distribution of exchange cycle sizes, which serves to measure aberration complexity. Estimating complexity, in turn, helps indicate how damaging the aberrations are and may facilitate identification of radiation type in retrospective biodosimetry.
Dynamical simulation priors for human motion tracking.
Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke
2013-01-01
We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameters(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
A distributed algorithm for machine learning
NASA Astrophysics Data System (ADS)
Chen, Shihong
2018-04-01
This paper considers a distributed learning problem in which a group of machines in a connected network, each learning its own local dataset, aim to reach a consensus at an optimal model, by exchanging information only with their neighbors but without transmitting data. A distributed algorithm is proposed to solve this problem under appropriate assumptions.
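Since the abstract does not spell out the algorithm, the sketch below shows a generic decentralized gradient method of the kind described: each machine averages models with its neighbours through a doubly stochastic mixing matrix and then takes a gradient step on its private data; the ring topology, step size and synthetic least-squares data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ring network of 4 machines, each holding its own local least-squares data.
n_nodes, dim = 4, 3
W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
A = [rng.normal(size=(20, dim)) for _ in range(n_nodes)]
x_true = np.array([1.0, -2.0, 0.5])
b = [Ai @ x_true + 0.01 * rng.normal(size=20) for Ai in A]

x = np.zeros((n_nodes, dim))              # each row is one machine's model
for t in range(300):
    grads = np.stack([Ai.T @ (Ai @ xi - bi) / len(bi)
                      for Ai, bi, xi in zip(A, b, x)])
    # Exchange models with neighbours (weighted averaging), then take a local
    # gradient step on the private data; no raw data is transmitted.
    x = W @ x - 0.1 * grads
print(x.round(3))                          # all rows end up near x_true
```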
Guo, Yingkun; Zheng, Hairong; Sun, Phillip Zhe
2015-01-01
Chemical exchange saturation transfer (CEST) MRI is a versatile imaging method that probes the chemical exchange between bulk water and exchangeable protons. CEST imaging indirectly detects dilute labile protons via bulk water signal changes following selective saturation of exchangeable protons, which offers substantial sensitivity enhancement and has sparked numerous biomedical applications. Over the past decade, CEST imaging techniques have rapidly evolved due to contributions from multiple domains, including the development of CEST mathematical models, innovative contrast agent designs, sensitive data acquisition schemes, efficient field inhomogeneity correction algorithms, and quantitative CEST (qCEST) analysis. The CEST system that underlies the apparent CEST-weighted effect, however, is complex. The experimentally measurable CEST effect depends not only on parameters such as CEST agent concentration, pH and temperature, but also on relaxation rate, magnetic field strength and more importantly, experimental parameters including repetition time, RF irradiation amplitude and scheme, and image readout. Thorough understanding of the underlying CEST system using qCEST analysis may augment the diagnostic capability of conventional imaging. In this review, we provide a concise explanation of CEST acquisition methods and processing algorithms, including their advantages and limitations, for optimization and quantification of CEST MRI experiments. PMID:25641791
High definition urethral pressure profilometry: Evaluating a novel microtip catheter.
Klünder, Mario; Amend, Bastian; Vaegler, Martin; Kelp, Alexandra; Feuer, Ronny; Sievert, Karl-Dietrich; Stenzl, Arnulf; Sawodny, Oliver; Ederer, Michael
2016-11-01
Urethral pressure profilometry (UPP) is used in the diagnosis of stress urinary incontinence (SUI). SUI is a significant medical, social, and economic problem, affecting about 12.5% of the population. A novel microtip catheter was developed for UPP featuring an inclination sensor and higher angular resolution compared to systems in clinical use today. Therewith, the location of each measured pressure sample can be determined and the spatial pressure distribution inside the urethra reconstructed. In order to assess the performance and plausibility of data from the microtip catheter, we compare it to data from a double balloon air charged system. Both catheters are used on sedated female minipigs. Data from the microtip catheter are processed through a signal reconstruction algorithm, plotted and compared against data from the air-charged catheter. The microtip catheter delivers results in agreement with previous comparisons of microtip and air-charged systems. It additionally provides a new level of detail in the reconstructed UPPs which may lead to new insights into the sphincter mechanism of minipigs. The ability of air-charged catheters to measure pressure circumferentially is widely considered a main advantage over microtip catheters. However, directional pressure readings can provide additional information on angular fluctuations in the urethral pressure distribution. It is shown that the novel microtip catheter in combination with a signal reconstruction algorithm delivers plausible data. It offers the opportunity to evaluate urethral structures, especially the sphincter, in context of the correct location within the anatomical location of the pelvic floor. Neurourol. Urodynam. 35:888-894, 2016. © 2015 Wiley Periodicals, Inc.
Simard, Valérie; Bernier, Annie; Bélanger, Marie-Ève; Carrier, Julie
2013-06-01
To investigate relations between children's attachment and sleep, using objective and subjective sleep measures. Secondarily, to identify the most accurate actigraphy algorithm for toddlers. 55 mother-child dyads took part in the Strange Situation Procedure (18 months) to assess attachment. At 2 years, children wore an Actiwatch for a 72-hr period, and their mothers completed a sleep diary. The high sensitivity (80) and smoothed actigraphy algorithms provided the most plausible sleep data. Maternal diaries yielded longer estimated sleep duration and shorter wake duration at night and showed poor agreement with actigraphy. More resistant attachment behavior was not associated with actigraphy-assessed sleep, but was associated with longer nocturnal wake duration as estimated by mothers, and with a reduced actigraphy-diary discrepancy. Mothers of children with resistant attachment are more aware of their child's nocturnal awakenings. Researchers and clinicians should select the best sleep measurement method for their specific needs.
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs linear mapping to a lower-dimensional subspace. The principal component subspace is the one that is analyzed. The idea implemented in this paper represents a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature which is usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, will also be introduced. The structural similarity of the proposed network with part of the retinal circuit will be presented, too.
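For context, the sketch below implements plain Oja's rule, the best-known local single-unit rule for principal component extraction; the MHO rule proposed in the paper modulates the Hebbian term differently, so this is only a neighbouring illustration, and the 2-D synthetic data and learning rate are arbitrary. The extracted weight vector matches the leading eigenvector of the covariance up to sign.

```python
import numpy as np

rng = np.random.default_rng(8)

# Zero-mean data whose leading principal component we want to extract.
C = np.array([[3.0, 1.0], [1.0, 1.0]])               # covariance matrix
X = rng.multivariate_normal(np.zeros(2), C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # Oja's rule: Hebbian term with local decay

w /= np.linalg.norm(w)
print(w, np.linalg.eigh(C)[1][:, -1])   # compare with the leading eigenvector
```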
Recombinant Temporal Aberration Detection Algorithms for Enhanced Biosurveillance
Murphy, Sean Patrick; Burkom, Howard
2008-01-01
Objective Broadly, this research aims to improve the outbreak detection performance and, therefore, the cost effectiveness of automated syndromic surveillance systems by building novel, recombinant temporal aberration detection algorithms from components of previously developed detectors. Methods This study decomposes existing temporal aberration detection algorithms into two sequential stages and investigates the individual impact of each stage on outbreak detection performance. The data forecasting stage (Stage 1) generates predictions of time series values a certain number of time steps in the future based on historical data. The anomaly measure stage (Stage 2) compares features of this prediction to corresponding features of the actual time series to compute a statistical anomaly measure. A Monte Carlo simulation procedure is then used to examine the recombinant algorithms’ ability to detect synthetic aberrations injected into authentic syndromic time series. Results New methods obtained with procedural components of published, sometimes widely used, algorithms were compared to the known methods using authentic datasets with plausible stochastic injected signals. Performance improvements were found for some of the recombinant methods, and these improvements were consistent over a range of data types, outbreak types, and outbreak sizes. For gradual outbreaks, the WEWD MovAvg7+WEWD Z-Score recombinant algorithm performed best; for sudden outbreaks, the HW+WEWD Z-Score performed best. Conclusion This decomposition was found not only to yield valuable insight into the effects of the aberration detection algorithms but also to produce novel combinations of data forecasters and anomaly measures with enhanced detection performance. PMID:17947614
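The two-stage decomposition can be illustrated with the simplest recombination: a 7-day moving-average forecaster (Stage 1) paired with a plain Z-score anomaly measure (Stage 2). The weighting used in the paper's WEWD variants is not reproduced, and the simulated counts and injected outbreak below are illustrative.

```python
import numpy as np

def movavg7_forecast(series, t):
    """Stage 1: predict today's count from the mean of the previous 7 days."""
    window = series[t-7:t]
    return window.mean(), window.std(ddof=1)

def zscore_alerts(series, threshold=3.0):
    """Stage 2: compare each observation with its forecast and flag days whose
    standardized residual (Z-score) exceeds the threshold."""
    alerts = []
    for t in range(7, len(series)):
        mu, sd = movavg7_forecast(series, t)
        z = (series[t] - mu) / max(sd, 1.0)   # guard against near-zero spread
        if z > threshold:
            alerts.append((t, round(z, 2)))
    return alerts

rng = np.random.default_rng(9)
counts = rng.poisson(20, size=60).astype(float)
counts[45:50] += np.array([4, 8, 12, 16, 20])   # injected gradual outbreak
print(zscore_alerts(counts))
```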
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is assigned a weight which represents the probability of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low-likelihood area to the high-likelihood area, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
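For reference, the Bayesian model-weighting relation that motivates the marginal-likelihood estimation can be written as follows (the notation is assumed, not copied from the paper):

```latex
P(M_k \mid D) = \frac{P(M_k)\, p(D \mid M_k)}{\sum_{j} P(M_j)\, p(D \mid M_j)},
\qquad
p(D \mid M_k) = \int p(D \mid \boldsymbol{\theta}_k, M_k)\, p(\boldsymbol{\theta}_k \mid M_k)\, \mathrm{d}\boldsymbol{\theta}_k
```

Here p(D | M_k) is the marginal likelihood (model evidence) of conceptual model M_k, the quantity that the nested sampling estimator approximates.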
Freeze-thaw cycles induce content exchange between cell-sized lipid vesicles
NASA Astrophysics Data System (ADS)
Litschel, Thomas; Ganzinger, Kristina A.; Movinkel, Torgeir; Heymann, Michael; Robinson, Tom; Mutschler, Hannes; Schwille, Petra
2018-05-01
Early protocells are commonly assumed to consist of an amphiphilic membrane enclosing an RNA-based self-replicating genetic system and a primitive metabolism without protein enzymes. Thus, protocell evolution must have relied on simple physicochemical self-organization processes within and across such vesicular structures. We investigate freeze-thaw (FT) cycling as a potential environmental driver for the necessary content exchange between vesicles. To this end, we developed a conceptually simple yet statistically powerful high-throughput procedure based on nucleic acid-containing giant unilamellar vesicles (GUVs) as model protocells. GUVs are formed by emulsion transfer in glass bottom microtiter plates and hence can be manipulated and monitored by fluorescence microscopy without additional pipetting and sample handling steps. This new protocol greatly minimizes artefacts, such as unintended GUV rupture or fusion by shear forces. Using DNA-encapsulating phospholipid GUVs fabricated by this method, we quantified the extent of content mixing between GUVs under different FT conditions. We found evidence of nucleic acid exchange in all detected vesicles if fast freezing of GUVs at ‑80 °C is followed by slow thawing at room temperature. In contrast, slow freezing and fast thawing both adversely affected content mixing. Surprisingly, and in contrast to previous reports for FT-induced content mixing, we found that the content is not exchanged through vesicle fusion and fission, but that vesicles largely maintain their membrane identity and even large molecules are exchanged via diffusion across the membranes. Our approach supports efficient screening of prebiotically plausible molecules and environmental conditions, to yield universal mechanistic insights into how cellular life may have emerged.
An Interferometry Imaging Beauty Contest
NASA Technical Reports Server (NTRS)
Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Monnier, John D.; Zhaod, Ming; Young, John S.; Thorsteinsson, Hrobjartur; Meimon, Serge C.; Mugnier, Laurent; LeBesnerais, Guy;
2004-01-01
We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Six different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the interferometry Data Exchange Standard and is designed to simulate a specific problem relevant to long-baseline imaging. The data are calibrated power spectra and bispectra measured with a fictitious array, intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.
Evaluation of a Delay-Doppler Imaging Algorithm Based on the Wigner-Ville Distribution
1989-10-18
Report fragment: "... exchanging the frequency and time variables"; Section 2.3, Properties of the Wigner-Ville Distribution, provides a partial list of the properties of the WVD. Front matter: Evaluation of a Delay-Doppler Imaging Algorithm Based on the Wigner-Ville Distribution, K.I. Schultz, Group 52, Technical Report 855, 18 October 1989; approved for public release.
Automated detection of slum area change in Hyderabad, India using multitemporal satellite imagery
NASA Astrophysics Data System (ADS)
Kit, Oleksandr; Lüdeke, Matthias
2013-09-01
This paper presents an approach to automated identification of slum area change patterns in Hyderabad, India, using multi-year and multi-sensor very high resolution satellite imagery. It relies upon a lacunarity-based slum detection algorithm, combined with Canny- and LSD-based imagery pre-processing routines. This method outputs plausible and spatially explicit slum locations for the whole urban agglomeration of Hyderabad in years 2003 and 2010. The results indicate a considerable growth of area occupied by slums between these years and allow identification of trends in slum development in this urban agglomeration.
Elbert, Yevgeniy; Burkom, Howard S
2009-11-20
This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
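For concreteness, the sketch below implements the textbook additive Holt-Winters recursions (level, trend and a length-m seasonal cycle) with one-step-ahead forecasts; the paper's generalized smoothing, initialization rules and alerting thresholds are not reproduced, and the weekly-cycled counts are simulated.

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.4, beta=0.05, gamma=0.2):
    """One-step-ahead Holt-Winters forecasts with additive trend and a
    seasonal cycle of length m (e.g. m=7 for day-of-week effects)."""
    level, trend = y[:m].mean(), 0.0
    season = list(y[:m] - y[:m].mean())          # crude seasonal initialization
    forecasts = []
    for t in range(m, len(y)):
        forecasts.append(level + trend + season[t % m])
        prev_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    return np.array(forecasts)

rng = np.random.default_rng(10)
days = np.arange(120)
y = 50 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 3, size=120)
f = holt_winters_additive(y, m=7)
print(np.round(np.abs(y[7:] - f).mean(), 2))     # mean absolute forecast error
```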
Dietrich, Susanne; Borst, Nadine; Schlee, Sandra; Schneider, Daniel; Janda, Jan-Oliver; Sterner, Reinhard; Merkl, Rainer
2012-07-17
The analysis of a multiple-sequence alignment (MSA) with correlation methods identifies pairs of residue positions whose occupation with amino acids changes in a concerted manner. It is plausible to assume that positions that are part of many such correlation pairs are important for protein function or stability. We have used the algorithm H2r to identify positions k in the MSAs of the enzymes anthranilate phosphoribosyl transferase (AnPRT) and indole-3-glycerol phosphate synthase (IGPS) that show a high conn(k) value, i.e., a large number of significant correlations in which k is involved. The importance of the identified residues was experimentally validated by performing mutagenesis studies with sAnPRT and sIGPS from the archaeon Sulfolobus solfataricus. For sAnPRT, five H2r mutant proteins were generated by replacing nonconserved residues with alanine or the prevalent residue of the MSA. As a control, five residues with conn(k) values of zero were chosen randomly and replaced with alanine. The catalytic activities and conformational stabilities of the H2r and control mutant proteins were analyzed by steady-state enzyme kinetics and thermal unfolding studies. Compared to wild-type sAnPRT, the catalytic efficiencies (k(cat)/K(M)) were largely unaltered. In contrast, the apparent thermal unfolding temperature (T(M)(app)) was lowered in most proteins. Remarkably, the strongest observed destabilization (ΔT(M)(app) = 14 °C) was caused by the V284A exchange, which pertains to the position with the highest correlation signal [conn(k) = 11]. For sIGPS, six H2r mutant and four control proteins with alanine exchanges were generated and characterized. The k(cat)/K(M) values of four H2r mutant proteins were reduced between 13- and 120-fold, and their T(M)(app) values were decreased by up to 5 °C. For the sIGPS control proteins, the observed activity and stability decreases were much less severe. Our findings demonstrate that positions with high conn(k) values have an increased probability of being important for enzyme function or stability.
NASA Astrophysics Data System (ADS)
Kurosawa, Kosuke; Okamoto, Takaya; Genda, Hidenori
2018-02-01
Hypervelocity ejection of material by impact spallation is considered a plausible mechanism for material exchange between two planetary bodies. We have modeled the spallation process during vertical impacts over a range of impact velocities from 6 to 21 km/s using both grid- and particle-based hydrocode models. The Tillotson equations of state, which are able to treat the nonlinear dependence of density on pressure and thermal pressure in strongly shocked matter, were used to study the hydrodynamic-thermodynamic response after impacts. The effects of material strength and gravitational acceleration were not considered. A two-dimensional time-dependent pressure field within a 1.5-fold projectile radius from the impact point was investigated in cylindrical coordinates to address the generation of spalled material. A resolution test was also performed to reject ejected materials with peak pressures that were too low due to artificial viscosity. The relationship between ejection velocity v_eject and peak pressure P_peak was also derived. Our approach shows that "late-stage acceleration" in an ejecta curtain occurs due to the compressible nature of the ejecta, resulting in an ejection velocity that can be higher than the ideal maximum of the resultant particle velocity after passage of a shock wave. We also calculate the ejecta mass that can escape from a planet like Mars (i.e., v_eject > 5 km/s) that matches the petrographic constraints from Martian meteorites, and which occurs when P_peak = 30-50 GPa. Although the mass of such ejecta is limited to 0.1-1 wt% of the projectile mass in vertical impacts, this is sufficient for spallation to have been a plausible mechanism for the ejection of Martian meteorites. Finally, we propose that impact spallation is a plausible mechanism for the generation of tektites.
An Effective Hybrid Evolutionary Algorithm for Solving the Numerical Optimization Problems
NASA Astrophysics Data System (ADS)
Qian, Xiaohong; Wang, Xumei; Su, Yonghong; He, Liu
2018-04-01
There are many different algorithms for solving complex optimization problems. Each algorithm has been applied successfully to some optimization problems, but not efficiently to others. In this paper, the Cauchy mutation and the multi-parent hybrid operator are combined to propose a communication-based hybrid evolutionary algorithm (Mixed Evolutionary Algorithm based on Communication, hereinafter referred to as CMEA). The basic idea of the CMEA algorithm is that the initial population is divided into two subpopulations. The Cauchy mutation operator and the multi-parent crossover operator are applied to the two subpopulations, which evolve in parallel until the stopping conditions are met. When the subpopulations are reorganized, individuals are exchanged together with information. The algorithm flow is given and the performance of the algorithm is compared using a number of standard test functions. Simulation results show that this algorithm converges significantly faster than the FEP (Fast Evolutionary Programming) algorithm, has good global convergence and stability, and is superior to the other compared algorithms.
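Generic forms of the two operators named in the abstract are sketched below: a heavy-tailed Cauchy mutation and a multi-parent arithmetic recombination with random weights summing to one. CMEA's subpopulation communication scheme, parameter settings and stopping rules are not reproduced, and the toy population is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

def cauchy_mutation(x, scale=0.1):
    """Perturb a candidate with heavy-tailed Cauchy noise, which produces
    occasional long jumps that help escape local optima."""
    return x + scale * rng.standard_cauchy(size=x.shape)

def multi_parent_crossover(parents):
    """Combine several parents with random weights that sum to one
    (a simple multi-parent arithmetic recombination)."""
    w = rng.random(len(parents))
    w /= w.sum()
    return w @ np.asarray(parents)

pop = rng.uniform(-5, 5, size=(6, 2))          # toy 2-D population
child_a = cauchy_mutation(pop[0])
child_b = multi_parent_crossover(pop[:3])
print(child_a, child_b)
```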
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty from the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results have been obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
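As a small illustration of how such error statistics can be computed (not the study's actual workflow, which relied on GIS geostatistics tools), the sketch below performs leave-one-out cross-validation of inverse distance weighting on made-up well data and reports ME, MAE and RMSE.

```python
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, (40, 2))                       # hypothetical well coordinates [m]
head = 300 + 0.01 * xy[:, 0] + rng.normal(0, 0.2, 40)    # hypothetical heads [m a.s.l.]

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighted estimate at one location."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * z_obs) / np.sum(w)

residuals = []
for i in range(len(head)):                               # leave-one-out cross-validation
    mask = np.arange(len(head)) != i
    z_hat = idw(xy[mask], head[mask], xy[i])
    residuals.append(z_hat - head[i])

res = np.array(residuals)
print(f"ME={res.mean():.3f}  MAE={np.abs(res).mean():.3f}  RMSE={np.sqrt((res**2).mean()):.3f}")
```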
Kim, Chang-Sei; Ansermino, J. Mark; Hahn, Jin-Oh
2016-01-01
The goal of this study is to derive a minimally complex but credible model of respiratory CO2 gas exchange that may be used in systematic design and pilot testing of closed-loop end-tidal CO2 controllers in mechanical ventilation. We first derived a candidate model that captures the essential mechanisms involved in the respiratory CO2 gas exchange process. Then, we simplified the candidate model to derive two lower-order candidate models. We compared these candidate models for predictive capability and reliability using experimental data collected from 25 pediatric subjects undergoing dynamically varying mechanical ventilation during surgical procedures. A two-compartment model equipped with transport delay to account for CO2 delivery between the lungs and the tissues showed modest but statistically significant improvement in predictive capability over the same model without transport delay. Aggregating the lungs and the tissues into a single compartment further degraded the predictive fidelity of the model. In addition, the model equipped with transport delay demonstrated superior reliability to the one without transport delay. Further, the respiratory parameters derived from the model equipped with transport delay, but not the one without transport delay, were physiologically plausible. The results suggest that gas transport between the lungs and the tissues must be taken into account to accurately reproduce the respiratory CO2 gas exchange process under wide-ranging and dynamically varying mechanical ventilation conditions. PMID:26870728
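A minimal sketch of a generic two-compartment (lung/tissue) CO2 balance, with the transport delay approximated by a first-order lag so it can be integrated with a standard ODE solver; this is an illustrative toy model with assumed parameter values, not one of the candidate models identified in the study.

```python
from scipy.integrate import solve_ivp

# Assumed, illustrative parameters (not fitted values from the study)
V_L, V_T = 3.0, 15.0     # effective lung / tissue CO2 storage volumes [L]
Q = 5.0                  # cardiac output [L/min]
VA = 4.0                 # alveolar ventilation [L/min]
VCO2 = 0.2               # metabolic CO2 production [L/min]
tau = 0.3                # lag time constant standing in for the transport delay [min]

def rhs(t, y):
    cL, cT, cD = y       # lung, tissue, and delayed (arterialized) CO2 fractions
    dcL = (Q * (cT - cL) - VA * cL) / V_L    # venous inflow minus ventilatory wash-out
    dcT = (Q * (cD - cT) + VCO2) / V_T       # delayed arterial inflow plus metabolism
    dcD = (cL - cD) / tau                    # first-order lag approximating transport delay
    return [dcL, dcT, dcD]

sol = solve_ivp(rhs, (0.0, 30.0), [0.05, 0.06, 0.05], max_step=0.01)
print("steady-state alveolar CO2 fraction ~", round(sol.y[0, -1], 3))   # ~VCO2/VA = 0.05
```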
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moryakov, A. V., E-mail: sailor@orc.ru
2016-12-15
An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by its simplicity and by the possibility of solving nonlinear problems through a correction of the operator in accordance with the solution obtained in the previous iteration.
OTACT: ONU Turning with Adaptive Cycle Times in Long-Reach PONs
NASA Astrophysics Data System (ADS)
Zare, Sajjad; Ghaffarpour Rahbar, Akbar
2015-01-01
As PON networks are extended into Long-Reach PON (LR-PON) networks, the efficiency of centralized bandwidth allocation algorithms degrades because of the high propagation delay. These algorithms rely on bandwidth negotiation messages frequently exchanged between the optical line terminal (OLT) in the Central Office and the optical network units (ONUs) near the users, and these messages become seriously delayed when the network is extended. To address this problem, decentralized algorithms have been proposed in which the bandwidth negotiation messages are exchanged between the Remote Node (RN)/Local Exchange (LX) and the ONUs near the users. The network still has a relatively high delay because of the relatively large distances between the RN/LX and the ONUs, so control messages must travel twice between the ONUs and the RN/LX in order to go from one ONU to another. In this paper, we propose a novel framework, called ONU Turning with Adaptive Cycle Times (OTACT), that uses Power Line Communication (PLC) to connect two adjacent ONUs. Because of the high population density in urban areas, ONUs are close to each other, so the efficiency of the proposed method is high. We investigate the performance of the proposed scheme in comparison with other decentralized schemes under worst-case conditions. Simulation results show that the average upstream packet delay can be decreased under the proposed scheme.
Social signals and algorithmic trading of Bitcoin.
Garcia, David; Schweitzer, Frank
2015-09-01
The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.
Analysis of the Dryden Wet Bulb Globe Temperature Algorithm for White Sands Missile Range
NASA Technical Reports Server (NTRS)
LaQuay, Ryan Matthew
2011-01-01
In locations where the workforce is exposed to high relative humidity and light winds, heat stress is a significant concern. Such is the case at the White Sands Missile Range in New Mexico. Heat stress is characterized by the wet bulb globe temperature, which is the official measurement used by the American Conference of Governmental Industrial Hygienists. The wet bulb globe temperature is measured by an instrument that was designed to be portable but requires routine maintenance. As an alternative, algorithms have been created to calculate the wet bulb globe temperature from basic meteorological observations. The algorithms are location dependent; therefore a specific algorithm is usually not suitable for multiple locations. Due to climatology similarities, the algorithm developed for use at the Dryden Flight Research Center was applied to data from the White Sands Missile Range. A study was performed that compared a wet bulb globe instrument to data from two Surface Atmospheric Measurement Systems that were applied to the Dryden wet bulb globe temperature algorithm. The period of study was from June to September of 2009, with focus on the period from 0900 to 1800 local time. Analysis showed that the algorithm worked well, with a few exceptions. The algorithm becomes less accurate when the dew point temperature is above 10 °C. Cloud cover also has a significant effect on the measured wet bulb globe temperature. The algorithm does not capture red and black heat stress flags well because such events occur on shorter time scales. The results of this study show that it is plausible that the Dryden Flight Research Center wet bulb globe temperature algorithm is compatible with the White Sands Missile Range, except when there are increased dew point temperatures, cloud cover, or precipitation. During such occasions, the wet bulb globe temperature instrument would be the preferred method of measurement. Of the 30 dates examined, 23 showed good accuracy.
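The Dryden algorithm itself is not reproduced in this abstract; for orientation only, the sketch below shows a commonly used simplified WBGT approximation from air temperature and relative humidity (the Australian Bureau of Meteorology formula), which is the kind of estimate such site-specific algorithms refine with additional predictors such as wind and cloud cover.

```python
import math

def wbgt_approx(t_air_c: float, rh_pct: float) -> float:
    """Approximate wet bulb globe temperature [deg C] from dry-bulb temperature and RH.
    Simplified formula; it ignores wind and solar radiation, so it is not the Dryden algorithm."""
    # water vapour pressure [hPa]
    e = rh_pct / 100.0 * 6.105 * math.exp(17.27 * t_air_c / (237.7 + t_air_c))
    return 0.567 * t_air_c + 0.393 * e + 3.94

print(wbgt_approx(35.0, 40.0))   # WBGT estimate for a 35 deg C afternoon at 40% relative humidity
```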
Queue and stack sorting algorithm optimization and performance analysis
NASA Astrophysics Data System (ADS)
Qian, Mingzhu; Wang, Xiaobao
2018-04-01
Sorting is one of the basic operations in a wide variety of software, and data structures courses cover many kinds of sorting algorithms in detail. The performance of a sorting algorithm is directly related to the efficiency of the software that uses it. Much research has been devoted to optimizing queue-based sorting; here the authors further study sorting algorithms that combine a queue with a stack. The algorithm mainly exploits the complementary storage properties of the queue and the stack by operating on them alternately, thus avoiding the large number of exchange or move operations needed in traditional sorts. Building on existing work, improvements and optimizations are proposed with a focus on reducing time complexity, and the time complexity, space complexity, and stability of the algorithm are analyzed accordingly. The experimental results show that the improvement is effective and that the improved and optimized algorithm is more practical.
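The paper's exact algorithm is not given in the abstract; the sketch below only illustrates the general idea of sorting with a queue plus an auxiliary stack, where elements are moved between the two structures instead of being swapped inside an array.

```python
from collections import deque

def sort_queue_with_stack(q: deque) -> None:
    """Sort q into ascending order in place, using one auxiliary stack."""
    stack = []
    while q:
        item = q.popleft()
        # Keep the stack sorted (largest on top): anything larger than the
        # current item is sent to the back of the queue to be re-processed.
        while stack and stack[-1] > item:
            q.append(stack.pop())
        stack.append(item)
    # The stack now holds the sorted sequence with the largest element on top;
    # pour it back so the queue pops elements in ascending order.
    while stack:
        q.appendleft(stack.pop())

q = deque([5, 1, 4, 2, 3])
sort_queue_with_stack(q)
print(list(q))   # [1, 2, 3, 4, 5]
```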
Enhanced ID Pit Sizing Using Multivariate Regression Algorithm
NASA Astrophysics Data System (ADS)
Krzywosz, Kenji
2007-03-01
EPRI is funding a program to enhance and improve the reliability of inside diameter (ID) pit sizing for balance-of-plant heat exchangers, such as condensers and component cooling water heat exchangers. More traditional approaches to ID pit sizing involve the use of frequency-specific amplitudes or phase angles. The enhanced multivariate regression algorithm for ID pit depth sizing incorporates three simultaneous input parameters: frequency, amplitude, and phase angle. A set of calibration data consisting of machined pits of various rounded and elongated shapes and depths was acquired in the frequency range of 100 kHz to 1 MHz for stainless steel tubing having a nominal wall thickness of 0.028 inch. To add noise to the acquired data set, each test sample was rotated and test data were acquired at the 3, 6, 9, and 12 o'clock positions. The ID pit depths were estimated using second-order and fourth-order regression functions, relying on normalized amplitude and phase angle information from multiple frequencies. Due to the unique damage morphology associated with microbiologically influenced ID pits, it was necessary to modify the elongated-calibration-standard-based algorithms by relying on an algorithm developed solely from the destructive sectioning results. This paper presents the use of a transformed multivariate regression algorithm to estimate ID pit depths and compares the results with the traditional univariate phase angle analysis. Both estimates were then compared with the destructive sectioning results.
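A hedged sketch of the general approach (not EPRI's calibrated algorithm, and with synthetic placeholder data): fit a polynomial multivariate regression that maps normalized amplitude, phase angle and test frequency to pit depth, here using scikit-learn as one possible toolkit.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 60
X = np.column_stack([
    rng.uniform(0.1, 1.0, n),                 # normalized amplitude (placeholder)
    rng.uniform(0, 180, n),                   # phase angle [deg] (placeholder)
    rng.choice([100, 300, 600, 1000], n),     # test frequency [kHz] (placeholder)
])
# synthetic "true" depths as a fraction of wall thickness (placeholder relationship)
depth = 0.3 * X[:, 0] + 0.001 * X[:, 1] + rng.normal(0, 0.01, n)

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
model.fit(X, depth)
print("estimated depth fraction of wall:", model.predict(X[:1]))
```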
NASA Astrophysics Data System (ADS)
Paramestha, D. L.; Santosa, B.
2018-04-01
Two-dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) is a combination of the Heterogeneous Fleet VRP and a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). 2L-HFVRP is a Heterogeneous Fleet VRP in which the customer demands are formed by a set of two-dimensional rectangular weighted items. These demands must be served from the depot by a heterogeneous fleet of vehicles with fixed and variable costs. The objective of the 2L-HFVRP is to minimize the total transportation cost. All formed routes must be consistent with the capacity and loading process of the vehicle. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic that combines the Genetic Algorithm (GA) and the Cross Entropy (CE) method, named the Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept of the GA is used to speed up the CE algorithm in finding the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 Exchange, and 1-0 Exchange). The probability transition matrix mechanism of CE is used to avoid getting stuck in a local optimum. The effectiveness of CEGA was tested on benchmark instances of the 2L-HFVRP. The experimental results show a competitive performance compared with other algorithms.
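As a small illustration of one of the local-improvement moves mentioned above (not the CEGA code itself), the sketch below applies a basic 2-opt improvement, which reverses a segment of a route whenever that shortens it, to a single route with made-up coordinates.

```python
import math, random

def route_length(route, pts):
    return sum(math.dist(pts[route[i]], pts[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def two_opt(route, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j][::-1] + route[j:]   # reverse segment [i, j)
                if route_length(cand, pts) < route_length(route, pts):
                    route, improved = cand, True
    return route

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]   # made-up customer locations
route = list(range(12))
print(route_length(route, pts), "->", route_length(two_opt(route, pts), pts))
```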
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, C.E.
1997-05-01
This report reviews the safety characteristics of hydrogen as an energy carrier for a fuel cell vehicle (FCV), with emphasis on high pressure gaseous hydrogen onboard storage. The authors consider normal operation of the vehicle in addition to refueling, collisions, operation in tunnels, and storage in garages. They identify the most likely risks and failure modes leading to hazardous conditions, and provide potential countermeasures in the vehicle design to prevent or substantially reduce the consequences of each plausible failure mode. They then compare the risks of hydrogen with those of more common motor vehicle fuels including gasoline, propane, and natural gas.
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
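For reference, the sketch below shows the standard Metropolis swap criterion that temperature replica-exchange methods build on; the STDR and VREX bookkeeping described in the paper is not reproduced, and the energies, temperatures, and units are illustrative.

```python
import math, random

def swap_accepted(E_i, E_j, T_i, T_j, kB=0.0083145):   # kB in kJ/(mol K)
    """Metropolis criterion for exchanging the configurations held at temperatures T_i and T_j."""
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)             # log of the acceptance ratio
    return delta >= 0 or random.random() < math.exp(delta)

print(swap_accepted(E_i=-500.0, E_j=-480.0, T_i=300.0, T_j=310.0))
```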
Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security
NASA Astrophysics Data System (ADS)
Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra
2018-03-01
The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, something unreadable and meaningless, so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, a Monoalphabetic algorithm and an XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing each letter into a new letter based on existing keywords, while the XOR algorithm works by using the XOR logic operation. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it to its original form (plaintext), so data integrity is still ensured.
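A toy sketch of the described scheme, with made-up keys and no security guarantees: a monoalphabetic substitution over the uppercase alphabet followed by a repeating-key XOR, with decryption applied in the reverse order.

```python
import string

ALPHABET = string.ascii_uppercase
SUB_KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"      # a permutation of the alphabet (assumed key)
XOR_KEY = b"K3Y"                            # assumed repeating XOR key

def mono_encrypt(plaintext: str) -> str:
    return plaintext.upper().translate(str.maketrans(ALPHABET, SUB_KEY))

def mono_decrypt(ciphertext: str) -> str:
    return ciphertext.translate(str.maketrans(SUB_KEY, ALPHABET))

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = "SUPER ENCRYPTION"
stage1 = mono_encrypt(msg)                       # classical layer
stage2 = xor_bytes(stage1.encode(), XOR_KEY)     # modern layer
recovered = mono_decrypt(xor_bytes(stage2, XOR_KEY).decode())   # reverse the two layers
print(recovered == msg)
```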
Interferometric tomography of fuel cells for monitoring membrane water content.
Waller, Laura; Kim, Jungik; Shao-Horn, Yang; Barbastathis, George
2009-08-17
We have developed a system that uses two 1D interferometric phase projections for reconstruction of 2D water content changes over time in situ in a proton exchange membrane (PEM) fuel cell system. By modifying the filtered backprojection tomographic algorithm, we are able to incorporate a priori information about the object distribution into a fast reconstruction algorithm which is suitable for real-time monitoring.
Nonlinear Computational Aeroelasticity: Formulations and Solution Algorithms
2003-03-01
problem is proposed. Fluid-structure coupling algorithms are then discussed with some emphasis on distributed computing strategies. Numerical results...the structure and the exchange of structure motion to the fluid. The computational fluid dynamics code PFES is our finite element code for the numerical ...unstructured meshes). It was numerically demonstrated [1-3] that EBS can be less diffusive than SUPG [4-6] and the standard Finite Volume schemes
Fuel management optimization using genetic algorithms and expert knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1996-09-01
The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*
Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.
2011-01-01
Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of the Spectral Dictionaries quickly grow with the peptide length making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small thus opening a possibility of using them to speed-up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags and introduce gapped tags that have advantages over the conventional peptide sequence tags in MS/MS database searches. PMID:21444829
A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2013-05-01
With the evolution of digital data storage and exchange, it is essential to protect confidential information from any unauthorized access. High performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking ciphertexts have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
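As a small, purely illustrative example of the kind of probabilistic combination such a mediator can perform (this is not the actual CDM logic), independent magnitude estimates with Gaussian uncertainties can be fused into a precision-weighted estimate:

```python
import numpy as np

estimates = np.array([6.1, 6.4, 5.9])     # hypothetical magnitudes from three algorithms
sigmas = np.array([0.3, 0.2, 0.5])        # hypothetical 1-sigma uncertainties

w = 1.0 / sigmas ** 2                     # precision weights
m_post = np.sum(w * estimates) / np.sum(w)
s_post = np.sqrt(1.0 / np.sum(w))
print(f"combined magnitude: {m_post:.2f} +/- {s_post:.2f}")
```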
Advances in Landslide Nowcasting: Evaluation of a Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia Bach; Peters-Lidard, Christa; Adler, Robert; Hong, Yang; Kumar, Sujay; Lerner-Lam, Arthur
2011-01-01
The increasing availability of remotely sensed data offers a new opportunity to address landslide hazard assessment at larger spatial scales. A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that may experience landslide activity. This system combines a calculation of static landslide susceptibility with satellite-derived rainfall estimates and uses a threshold approach to generate a set of nowcasts that classify potentially hazardous areas. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale near real-time landslide hazard assessment efforts, it requires several modifications before it can be fully realized as an operational tool. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and hazard at the regional scale. This case study calculates a regional susceptibility map using remotely sensed and in situ information and a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America. The susceptibility map is evaluated with a regional rainfall intensity-duration triggering threshold and results are compared with the global algorithm framework for the same event. Evaluation of this regional system suggests that this empirically based approach provides one plausible way to approach some of the data and resolution issues identified in the global assessment. The presented methodology is straightforward to implement, improves upon the global approach, and allows for results to be transferable between regions. The results also highlight several remaining challenges, including the empirical nature of the algorithm framework and adequate information for algorithm validation. Conclusions suggest that integrating additional triggering factors such as soil moisture may help to improve algorithm performance. The regional algorithm scenario represents an important step forward in advancing regional and global-scale landslide hazard assessment.
Spin and orbital exchange interactions from Dynamical Mean Field Theory
NASA Astrophysics Data System (ADS)
Secchi, A.; Lichtenstein, A. I.; Katsnelson, M. I.
2016-02-01
We derive a set of equations expressing the parameters of the magnetic interactions characterizing a strongly correlated electronic system in terms of single-electron Green's functions and self-energies. This allows us to establish a mapping between the initial electronic system and a spin model including up to quadratic interactions between the effective spins, with a general interaction (exchange) tensor that accounts for anisotropic exchange, Dzyaloshinskii-Moriya interaction and other symmetric terms such as dipole-dipole interaction. We present the formulas in a format that can be used for computations via Dynamical Mean Field Theory algorithms.
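A standard way to write such a quadratic spin model, shown here as a generic sketch rather than the paper's specific result, decomposes the exchange tensor into an isotropic part, a traceless symmetric (anisotropic) part, and an antisymmetric part equivalent to a Dzyaloshinskii-Moriya vector:

```latex
% Generic quadratic spin model with a full exchange tensor (standard decomposition,
% not a formula quoted from the paper)
H = \sum_{i \neq j} \mathbf{S}_i^{\mathsf{T}} \,\mathcal{J}_{ij}\, \mathbf{S}_j ,
\qquad
\mathcal{J}_{ij} = J_{ij}\,\mathbb{1} + \mathcal{J}^{\mathrm{S}}_{ij} + \mathcal{J}^{\mathrm{A}}_{ij},
\qquad
\mathbf{S}_i^{\mathsf{T}} \mathcal{J}^{\mathrm{A}}_{ij} \mathbf{S}_j
  = \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right),
```

where J_ij is the isotropic (Heisenberg) exchange, the traceless symmetric part describes anisotropic and dipole-dipole-like terms, and D_ij is the Dzyaloshinskii-Moriya vector built from the antisymmetric part.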
Landwehr, Jurate Maciunas
2002-01-01
This report presents the data for the Vostok - Devils Hole chronology, termed V-DH chronology, for the Antarctic Vostok ice core record. This depth - age relation is based on a join between the Vostok deuterium profile (D) and the stable oxygen isotope ratio (18O) record of paleotemperature from a calcitic core at Devils Hole, Nevada, using the algorithm developed by Landwehr and Winograd (2001). Both the control points defining the V-DH chronology and the numeric values for the chronology are given. In addition, a plausible chronology for a deformed bottom portion of the Vostok core developed with this algorithm is presented. Landwehr and Winograd (2001) demonstrated the broader utility of their algorithm by applying it to another appropriate Antarctic paleotemperature record, the Antarctic Dome Fuji ice core 18O record. Control points for this chronology are also presented in this report but deemed preliminary because, to date, investigators have published only the visual trace and not the numeric values for the Dome Fuji 18O record. The total uncertainty that can be associated with the assigned ages is also given.
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
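The construction used in the paper is not reproduced here; as a generic illustration of the inverse transform method it relies on, the sketch below turns uniform random numbers into exponential inter-event waiting times and accumulates assumed heavy-tailed jumps into a piecewise-constant stress signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def waiting_times(rate, n):
    """Inverse-CDF sampling of exponential inter-event times: t = -ln(1 - U) / rate."""
    u = rng.random(n)
    return -np.log1p(-u) / rate

t_events = np.cumsum(waiting_times(rate=200.0, n=1000))   # event times [s], assumed rate
jumps = rng.standard_cauchy(1000) * 1e-3                  # assumed heavy-tailed jump sizes
stress = np.cumsum(jumps)                                 # piecewise-constant jump process
print(t_events[-1], stress[-1])
```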
Hesselmann, Andreas; Görling, Andreas
2011-01-21
A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
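A small sketch of the linear-algebra step described above, with random placeholder matrices rather than the actual TDEXX response matrices: the lowest eigenvalues of a symmetric generalized eigenvalue problem A x = ω B x, here obtained with a dense solver instead of a Davidson-type iterative method.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, k = 200, 5
M = rng.standard_normal((n, n))
A = M + M.T + n * np.eye(n)            # symmetric "response" matrix (placeholder)
B = np.eye(n) + 0.01 * (M @ M.T) / n   # symmetric positive definite metric (placeholder)

omega, X = eigh(A, B)                  # eigenvalues are returned in ascending order
print("lowest excitation-like eigenvalues:", omega[:k])
```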
NASA Astrophysics Data System (ADS)
Nikitin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.; Botygin, I. A.
2017-02-01
The paper discusses the results of a study of existing routing protocols in wireless networks and their main features. Based on these protocol data, routing protocols for wireless networks, including route search algorithms and phone directory exchange algorithms, are designed with the ‘WiFi-Direct’ technology. Algorithms that do not rely on the IP protocol were designed, which increases their efficiency by working only with the MAC addresses of the devices. The developed algorithms are intended for mobile software engineering on the Android platform. Simpler algorithms and message formats than those of the well-known routing protocols, together with the rejection of the IP protocol, make it possible to use the developed protocols on more primitive mobile devices. Applying the protocols in industry makes it possible to create data transmission networks between workstations and mobile robots without any access points.
Community detection in complex networks by using membrane algorithm
NASA Astrophysics Data System (ADS)
Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren
Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve community detection in complex networks. The proposed algorithm is based on membrane systems, which consist of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects and employ the differential evolution algorithm to evolve them. Communication rules implement the information exchange among membranes. Finally, the proposed algorithm is evaluated on synthetic networks, on real-world networks with known partitions, and on large-scale networks whose real partitions are unknown. The experimental results indicate the superior performance of the proposed algorithm in comparison with the other tested algorithms.
Gupta, Sebanti; Bhattacharjya, Surajit
2014-11-01
The sterile alpha motif or SAM domain is one of the most frequently occurring protein interaction modules, with diverse functional attributions. The SAM domain of the Ste11 protein of budding yeast plays important roles in mitogen-activated protein kinase cascades. In the current study, urea-induced structural and dynamical changes in the Ste11 SAM domain, at subdenaturing urea concentrations, have been investigated by nuclear magnetic resonance spectroscopy. Our study revealed that a number of residues from Helix 1 and Helix 5 of the Ste11 SAM domain display plausible alternate conformational states and the largest chemical shift perturbations at low urea concentrations. Amide proton (H/D) exchange experiments indicated that Helix 1, the loop, and Helix 5 become more susceptible to solvent exchange with increased concentrations of urea. Notably, Helix 1 and Helix 5 are directly involved in the binding interactions of the Ste11 SAM domain. Our data further demonstrate the existence of alternate conformational states around the regions involved in dimeric interactions under native or near-native conditions. © 2014 Wiley Periodicals, Inc.
Evaporation from a partially wet forest canopy
NASA Technical Reports Server (NTRS)
Hancock, N. H.; Sellers, P. J.; Crowther, J. M.
1983-01-01
The results of experimental studies of water storage in a Sitka-spruce canopy are presented and analyzed in terms of model simulations of evaporation. Wet-branch cantilever deflection was measured along with meteorological data on three days in August, 1976, to determine the relationship of canopy evaporation to wind speed and (hence) aerodynamic resistance. Two versions of a simple unilayer model of sensible and latent heat transport from a partially wet canopy were tested in the data analysis: model F1 forbids the exchange of heat between wet and dry foliage surfaces; model F2 assumes that this exchange is highly efficient. Model F1 is found to give results consistent with the rainfall-interception model of Rutter et al. (1971, 1975, 1977), but model F2 gives results which are more plausible and correspond to the multilayer simulations of Sellers and Lockwood (1981) and the experimental findings of Hancock and Crowther (1979). It is inferred that the role of eddy diffusivity for water vapor is enhanced relative to momentum transport, and that the similarity hypothesis used in conventional models may fail in the near vicinity of a forest canopy.
The Universal Plausibility Metric (UPM) & Principle (UPP).
Abel, David L
2009-12-03
Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of xi is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (xi < 1).
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth. First, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous, macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes, and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of OPAL, the Occam Plausibility Algorithm, which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
Shi, Fanrong; Tuo, Xianguo; Yang, Simon X.; Li, Huailiang; Shi, Rui
2017-01-01
Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors, such as temperature, vibration and strain sensors. Since multiple sensors are distributed on the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on shape of the bridge, a spanning tree is employed to build linear topology WSNs and achieve time synchronization in this paper. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. The time out restriction and retry mechanism are employed to avoid the estimation errors that are caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm could avoid the estimation errors caused by clock drift and minimize the estimation error due to the large random variable delay jitter. The proposed algorithm is an accurate and low complexity time synchronization algorithm for bridge health monitoring. PMID:28471418
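For reference, the standard TTME estimator of clock offset and path delay from one round of timestamps is shown below; the paper's maximum likelihood estimation over multiple TTME rounds, its timeout restriction, and its retry mechanism are not reproduced, and the example timestamps are made up.

```python
def ttme_offset_delay(t1, t2, t3, t4):
    """t1: request sent (node A clock), t2: request received (node B clock),
    t3: reply sent (node B clock),      t4: reply received (node A clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # clock offset of B relative to A
    delay = ((t4 - t1) - (t3 - t2)) / 2.0    # one-way propagation plus processing delay
    return offset, delay

print(ttme_offset_delay(t1=100.000, t2=100.120, t3=100.125, t4=100.020))
```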
Neural mechanisms underlying sensitivity to reverse-phi motion in the fly
Leonhardt, Aljoscha; Meier, Matthias; Serbe, Etienne; Eichner, Hubert; Borst, Alexander
2017-01-01
Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics. PMID:29261684
Schlink, Uwe; Ragas, Ad M J
2011-01-01
Receptor-oriented approaches can assess individual-specific exposure to air pollution. In such an individual-based model we analyse the impact of human mobility on the personal exposure perceived by individuals simulated in an exemplary urban area. The mobility models comprise random walk (reference point mobility, RPM), truncated Lévy flights (TLF), and agenda-based walk (RPMA). We describe and review the general concepts and provide an inter-comparison of them. Stationary and ergodic behaviour are explained and applied, as are performance criteria for a comparative evaluation of the investigated algorithms. We find that none of the studied algorithms results in purely random trajectories. TLF and RPMA prove to be suitable for human mobility modelling, because they provide conditions for very individual-specific trajectories and exposure. Recommending these models, we demonstrate the plausibility of their results for exposure to airborne benzene and for the combined exposure to benzene and nonane. Copyright © 2011 Elsevier Ltd. All rights reserved.
Real-time dual-band haptic music player for mobile devices.
Hwang, Inwook; Lee, Hyeseon; Choi, Seungmoon
2013-01-01
We introduce a novel dual-band haptic music player for real-time simultaneous vibrotactile playback with music in mobile devices. Our haptic music player features a new miniature dual-mode actuator that can produce vibrations consisting of two principal frequencies and a real-time vibration generation algorithm that can extract vibration commands from a music file for dual-band playback (bass and treble). The algorithm uses a "haptic equalizer" and provides plausible sound-to-touch modality conversion based on human perceptual data. In addition, we present a user study carried out to evaluate the subjective performance (precision, harmony, fun, and preference) of the haptic music player, in comparison with the current practice of bass-band-only vibrotactile playback via a single-frequency voice-coil actuator. The evaluation results indicated that the new dual-band playback outperforms the bass-only rendering, also providing several insights for further improvements. The developed system and experimental findings have implications for improving the multimedia experience with mobile devices.
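A rough sketch of the dual-band idea, with assumed filter types and cutoff frequencies rather than the paper's haptic equalizer: split a mono signal into a bass band and a treble band whose magnitudes could drive the two vibration frequencies of the actuator.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
audio = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)   # toy "music"

sos_bass = butter(4, 200, btype="lowpass", fs=fs, output="sos")     # assumed cutoff
sos_treble = butter(4, 1000, btype="highpass", fs=fs, output="sos") # assumed cutoff

bass = sosfilt(sos_bass, audio)
treble = sosfilt(sos_treble, audio)
print(np.abs(bass).mean(), np.abs(treble).mean())   # crude per-band "intensity" commands
```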
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. Usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
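For comparison, the classic Oja rule (a related but not identical learning rule; the paper's model additionally uses the averaged postsynaptic activity and feedback connections) drives a single linear neuron's weight vector toward the first principal component of a stationary input stream:

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0], [1.0, 1.0]])                  # assumed input covariance
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)   # stationary input vector sequence

w = rng.standard_normal(2)
eta = 0.002
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # Hebbian term with a built-in decay that bounds |w|

print("learned direction:", w / np.linalg.norm(w))
print("leading eigenvector:", np.linalg.eigh(C)[1][:, -1])   # equal up to sign
```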
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a “new” computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups. PMID:22984533
NETWORK ASSISTED ANALYSIS TO REVEAL THE GENETIC BASIS OF AUTISM
Liu, Li; Lei, Jing; Roeder, Kathryn
2016-01-01
While studies show that autism is highly heritable, the nature of the genetic basis of this disorder remains elusive. Based on the idea that highly correlated genes are functionally interrelated and more likely to affect risk, we develop a novel statistical tool to find more potential autism risk genes by combining the genetic association scores with gene co-expression in specific brain regions and periods of development. The gene dependence network is estimated using a novel partial neighborhood selection (PNS) algorithm, where node-specific properties are incorporated into network estimation for improved statistical and computational efficiency. Then we adopt a hidden Markov random field (HMRF) model to combine the estimated network and the genetic association scores in a systematic manner. The proposed modeling framework can be naturally extended to incorporate additional structural information concerning the dependence between genes. Using currently available genetic association data from whole exome sequencing studies and brain gene expression levels, the proposed algorithm successfully identified 333 genes that plausibly affect autism risk. PMID:27134692
A multi-group firefly algorithm for numerical optimization
NASA Astrophysics Data System (ADS)
Tong, Nan; Fu, Qiang; Zhong, Caiming; Wang, Pengjun
2017-08-01
To address the problem of premature convergence of the firefly algorithm (FA), this paper analyzes the evolution mechanism of the algorithm and proposes an improved firefly algorithm based on a modified evolution model and a multi-group learning mechanism (IMGFA). The firefly colony is divided into several subgroups with different model parameters. Within each subgroup, the best firefly leads the other fireflies in the early global evolution and establishes a mutual information system among the fireflies. Each firefly then performs a local search by following the brighter fireflies among its neighbors. At the same time, a learning mechanism among the best fireflies of the various subgroups, which exchange information, helps the population reach the global optimum more effectively. Experimental results verify the effectiveness of the proposed algorithm.
The high performance parallel algorithm for Unified Gas-Kinetic Scheme
NASA Astrophysics Data System (ADS)
Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu
2016-11-01
A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and the velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for the sum reduction needed by the moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with the results of existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) numbers of processors. The measured speed-up ratio is nearly linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.
Multiple-variable neighbourhood search for the single-machine total weighted tardiness problem
NASA Astrophysics Data System (ADS)
Chung, Tsui-Ping; Fu, Qunjie; Liao, Ching-Jong; Liu, Yi-Ting
2017-07-01
The single-machine total weighted tardiness (SMTWT) problem is a typical discrete combinatorial optimization problem in the scheduling literature. This problem has been proved to be NP hard and thus provides a challenging area for metaheuristics, especially the variable neighbourhood search algorithm. In this article, a multiple variable neighbourhood search (m-VNS) algorithm with multiple neighbourhood structures is proposed to solve the problem. Special mechanisms named matching and strengthening operations are employed in the algorithm, which has an auto-revising local search procedure to explore the solution space beyond local optimality. Two aspects, searching direction and searching depth, are considered, and neighbourhood structures are systematically exchanged. Experimental results show that the proposed m-VNS algorithm outperforms all the compared algorithms in solving the SMTWT problem.
The Universal Plausibility Metric (UPM) & Principle (UPP)
2009-01-01
Background Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of ξ is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. Conclusion No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (ξ < 1). PMID:19958539
Network and data security design for telemedicine applications.
Makris, L; Argiriou, N; Strintzis, M G
1997-01-01
The maturing of telecommunication technologies has ushered in a whole new era of applications and services in the health care environment. Teleworking, teleconsultation, multimedia conferencing and medical data distribution are rapidly becoming commonplace in clinical practice. As a result, a set of problems arises concerning data confidentiality and integrity. Public computer networks, such as the emerging ISDN technology, are vulnerable to eavesdropping. It is therefore important for telemedicine applications to employ end-to-end encryption mechanisms that secure the data channel from unauthorized access or modification. We propose a network access and encryption system that is both economical and easily implemented for integration in developing or existing applications, using well-known and thoroughly tested encryption algorithms. Public-key cryptography is used for session-key exchange, while symmetric algorithms are used for bulk encryption. Mechanisms for session-key generation and exchange are also provided.
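The session-key pattern described above (public-key cryptography for key exchange, a symmetric cipher for bulk encryption) can be sketched in Python with the cryptography package; the specific ciphers below (RSA-OAEP and AES-GCM) are illustrative choices, not necessarily those of the original system.

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# receiver generates a key pair; the public key is shared with the sender
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# sender: generate a random session key and wrap it with the receiver's public key
session_key = AESGCM.generate_key(bit_length=128)
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# sender: bulk-encrypt the medical record with the symmetric session key
nonce = os.urandom(12)
record = b"patient data ..."
ciphertext = AESGCM(session_key).encrypt(nonce, record, None)

# receiver: unwrap the session key, then decrypt the record
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == record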
An uncertainty-based distributed fault detection mechanism for wireless sensor networks.
Yang, Yang; Gao, Zhipeng; Zhou, Hang; Qiu, Xuesong
2014-04-25
Exchanging too many messages for fault detection will cause not only a degradation of the network quality of service, but also represents a huge burden on the limited energy of sensors. Therefore, we propose an uncertainty-based distributed fault detection through aided judgment of neighbors for wireless sensor networks. The algorithm considers the serious influence of sensing measurement loss and therefore uses Markov decision processes for filling in missing data. Most important of all, fault misjudgments caused by uncertainty conditions are the main drawbacks of traditional distributed fault detection mechanisms. We draw on the experience of evidence fusion rules based on information entropy theory and the degree of disagreement function to increase the accuracy of fault detection. Simulation results demonstrate our algorithm can effectively reduce communication energy overhead due to message exchanges and provide a higher detection accuracy ratio.
NASA Astrophysics Data System (ADS)
Li, Yuzhong
Using a genetic algorithm (GA) to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, is difficult because the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which affect the efficiency and quality of the algorithm. This paper presents an improved MKGA that includes three operators: preprocessing, bid insertion, and exchange recombination, and that uses a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA outperforms the SGA in required population size and computation. Problems that the traditional branch-and-bound algorithm finds hard to solve can be handled by the improved MKGA with better results.
Bjørgesaeter, Anders; Ugland, Karl Inne; Bjørge, Arne
2004-10-01
The male harbor seal (Phoca vitulina) produces broadband nonharmonic vocalizations underwater during the breeding season. In total, 120 vocalizations from six colonies were analyzed to provide a description of the acoustic structure and for the presence of geographic variation. The complex harbor seal vocalizations may be described by how the frequency bandwidth varies over time. An algorithm that identifies the boundaries between noise and signal from digital spectrograms was developed in order to extract a frequency bandwidth contour. The contours were used as inputs for multivariate analysis. The vocalizations' sound types (e.g., pulsed sound, whistle, and broadband nonharmonic sound) were determined by comparing the vocalizations' spectrographic representations with sound waves produced by known sound sources. Comparison between colonies revealed differences in the frequency contours, as well as some geographical variation in use of sound types. The vocal differences may reflect a limited exchange of individuals between the six colonies due to long distances and strong site fidelity. Geographically different vocal repertoires have potential for identifying discrete breeding colonies of harbor seals, but more information is needed on the nature and extent of early movements of young, the degree of learning, and the stability of the vocal repertoire. A characteristic feature of many vocalizations in this study was the presence of tonal-like introductory phrases that fit into the categories pulsed sound and whistles. The functions of these phrases are unknown but may be important in distance perception and localization of the sound source. The potential behavioral consequences of the observed variability may be indicative of adaptations to different environmental properties influencing determination of distance and direction and plausible different male mating tactics.
NASA Astrophysics Data System (ADS)
Bogaard, T. A.
2003-04-01
This paper’s objectives are twofold: to test the potential of cation exchange capacity (CEC) analysis for refinement of the knowledge of the hydrological system in landslide areas; and to examine two laboratory CEC analysis techniques on their applicability to partly weathered marls. The NH4Ac and NaCl laboratory techniques are tested. The geochemical results are compared with the core descriptions and interpreted with respect to their usefulness. Both analysis techniques give identical results for CEC, and are plausible on the basis of the available clay content information. The determination of the exchangeable cations was more difficult, since part of the marls dissolved. With the ammonium-acetate method more of the marls are dissolved than with the sodium-chloride method. This negatively affects the results of the exchangeable cations. Therefore, the NaCl method is to be preferred for the determination of the cation fractions at the complex, be it that this method has the disadvantage that the sodium fraction cannot be determined. To overcome this problem it is recommended to try and use another salt e.g. SrCl2 as displacement fluid. Both Alvera and Boulc-Mondorès examples show transitions in cation composition with depth. It was shown that the exchangeable cation fractions can be useful in locating boundaries between water types, especially the boundary between the superficial, rain fed hydrological system and the lower, regional ground water system. This information may be important for landslide interventions since the hydrological system and the origin of the water need to be known in detail. It is also plausible that long-term predictions of slope stability may be improved by knowledge of the hydrogeochemical evolution of clayey landslides. In the Boulc-Mondorès example the subsurface information that can be extracted from CEC analyses was presented. In the Boulc-Mondorès cores deviant intervals of CEC could be identified. These are interpreted as weathered layers that may develop or have already developed into slip surfaces. The CEC analyses of the cores revealed ‘differences in chemical composition’ that can have an influence on slope stability. It is known that the chemical composition of a soil may have a large effect on the strength parameters of the material. The technique described here can also be used before core sampling for laboratory strength tests. The major problem of the CEC analyses turned out to be the explanation of the origin of the differences found in the core samples. From the above it is concluded that geochemistry is a potentially valuable technique for e.g. landslide research, but it is recognised that still a lot of work has to be done before the technique can be applied in engineering practice.
Willits, Iain; Cole, Helen; Jones, Roseanne; Carter, Kimberley; Arber, Mick; Jenks, Michelle; Craig, Joyce; Sims, Andrew
2017-08-01
The Spectra Optia ® automated apheresis system, indicated for red blood cell exchange in people with sickle cell disease, underwent evaluation by the National Institute for Health and Care Excellence, which uses its Medical Technologies Advisory Committee to make recommendations. The company (Terumo Medical Corporation) produced a submission making a case for adoption of its technology, which was critiqued by the Newcastle and York external assessment centre. Thirty retrospective observational studies were identified in their clinical submission. The external assessment centre considered these were of low methodological and reporting quality. Most were single-armed studies, with only six studies providing comparative data. The available data showed that, compared with manual red blood cell exchange, Spectra Optia reduces the frequency of exchange procedures as well as their duration, but increases the requirement for donor blood. However, other clinical and patient benefits were equivocal because of an absence of robust clinical evidence. The company provided a de novo model to support the economic proposition of the technology, and reported that in most scenarios Spectra Optia was cost saving, primarily through reduced requirement of chelation therapy to manage iron overload. The external assessment centre considered that although the cost-saving potential of Spectra Optia was plausible, the model and its clinical inputs were not sufficiently robust to demonstrate this. However, taking the evidence together with expert and patient advice, the Medical Technologies Advisory Committee considered Spectra Optia was likely to save costs, provide important patient benefits, and reduce inequality, and gave the technology a positive recommendation in Medical Technology Guidance 28.
NASA Astrophysics Data System (ADS)
Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz
2015-02-01
In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary-genetic algorithm (B-GA), a binary-simulated annealing algorithm (B-SA) and a binary-tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as an evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem show that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm
Chen, Jui-Le; Yang, Chu-Sing
2013-01-01
The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate the docking prediction algorithm. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specially for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864
NASA Astrophysics Data System (ADS)
Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.
2010-10-01
We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.
Do we understand the temperature profile of air-water interface?
NASA Astrophysics Data System (ADS)
Solcerova, A.; van Emmerik, T. H. M.; Uittenbogaard, R.; van de Ven, F. H. M.; Van De Giesen, N.
2017-12-01
Lakes and reservoirs exchange energy with the atmosphere through long-wave radiation and turbulent heat fluxes. Calculation of those fluxes often depends on the surface temperature. Several recent studies used high-resolution Distributed Temperature Sensing (DTS) to measure the temperature of the air-water interface. We present results of three such studies conducted at three different locations with three different climates (Ghana, Israel, the Netherlands). Measurements from all presented studies show a distinct temperature drop close to the water surface during daytime. We provide several possible explanations for the existence of such a temperature deviation and discuss the plausibility of each. Explaining the measured temperature drop is crucial for a better understanding of the energy balance of lake surfaces and for the estimation of the surface energy balance.
NASA Astrophysics Data System (ADS)
Rachmawati, D.; Budiman, M. A.; Siburian, W. S. E.
2018-05-01
In the process of exchanging files, security is indispensable to avoid data theft. Cryptography is one of the sciences used to secure data by encoding it. The Fast Data Encipherment Algorithm (FEAL) is a symmetric block-cipher cryptographic algorithm, so the file to be protected is encrypted and decrypted with FEAL. To strengthen the security of the data, the session key used by FEAL is encoded with the Goldwasser-Micali algorithm, an asymmetric cryptographic algorithm based on a probabilistic concept. In the encryption process the key is converted into binary form, and the random selection of the value x causes the resulting cipher of the key to differ for each binary value. The combination of symmetric and asymmetric algorithms is called a hybrid cryptosystem. Using FEAL together with Goldwasser-Micali restores the message to its original form; the time FEAL requires for encryption and decryption is directly proportional to the length of the message, whereas for the Goldwasser-Micali algorithm the encryption and decryption time is not directly proportional to the message length.
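A toy sketch of the Goldwasser-Micali step described above: each bit b of the session key is encrypted as x^2 * y^b mod N, where y is a quadratic non-residue modulo both prime factors, and decryption tests quadratic residuosity using the factors p and q. The parameters are deliberately tiny and insecure, and FEAL itself is not implemented here.

import random

# toy key (insecure sizes, for illustration only)
p, q = 499, 547
N = p * q
y = 2            # 2 is a quadratic non-residue mod both p and q here (Jacobi symbol +1 mod N)

def is_qr(c, prime):
    """Quadratic residue test modulo an odd prime via Euler's criterion."""
    return pow(c % prime, (prime - 1) // 2, prime) in (0, 1)

def gm_encrypt_bit(b):
    x = random.randrange(1, N)
    return (pow(x, 2, N) * pow(y, b, N)) % N

def gm_decrypt_bit(c):
    # c is a residue mod both p and q  <=>  the encrypted bit was 0
    return 0 if (is_qr(c, p) and is_qr(c, q)) else 1

session_key_bits = [1, 0, 1, 1, 0, 0, 1, 0]          # binary form of the (hypothetical) FEAL session key
cipher = [gm_encrypt_bit(b) for b in session_key_bits]
assert [gm_decrypt_bit(c) for c in cipher] == session_key_bits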
Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing
2016-03-03
This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as that given by the centralized clustering algorithms.
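A simplified sketch of the consensus idea: each node computes per-cluster sums and counts from its own data, and an average-consensus step over a fixed doubly stochastic weight matrix (a simplification of the paper's time-varying WSN setting) lets every node recover the same global centroids without a fusion center. The topology, weights, and initialization below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, k, dim = 4, 2, 2
data = [rng.normal(loc=c, size=(20, dim))                 # each node observes its own samples
        for c in rng.uniform(-3, 3, size=(n_nodes, dim))]

# ring topology with doubly stochastic mixing weights
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

def consensus(values, rounds=50):
    """Average consensus: every node converges to the network-wide mean."""
    v = np.array(values, dtype=float)
    for _ in range(rounds):
        v = np.tensordot(W, v, axes=1)
    return v

centroids = np.tile(rng.uniform(-3, 3, size=(k, dim)), (n_nodes, 1, 1))   # identical initial centroids
for _ in range(10):                                       # distributed k-means iterations
    sums = np.zeros((n_nodes, k, dim))
    counts = np.zeros((n_nodes, k, 1))
    for i in range(n_nodes):
        labels = np.argmin(((data[i][:, None, :] - centroids[i])**2).sum(-1), axis=1)
        for c in range(k):
            sums[i, c] = data[i][labels == c].sum(axis=0)
            counts[i, c] = (labels == c).sum()
    avg_sums = consensus(sums)                            # nodes agree on (scaled) global sums
    avg_counts = consensus(counts)                        # and on (scaled) global counts
    centroids = avg_sums / np.maximum(avg_counts, 1e-9)   # the scaling cancels in the ratio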
Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E
2017-04-15
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
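A minimal scikit-learn sketch contrasting the two families of constraints on toy data: dictionary learning with an L1 penalty on the per-voxel codes yields voxel loadings that are sparse across networks (few networks active per voxel), while spatial ICA maximizes independence instead. The data shape, component count, and penalty are illustrative and do not reproduce the study's preprocessing or classifier.

import numpy as np
from sklearn.decomposition import DictionaryLearning, FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))                 # stand-in for a (time points x voxels) fMRI matrix

# sparse coding: fit on voxels-as-samples so each voxel loads on only a few networks
dl = DictionaryLearning(n_components=10, alpha=1.0,
                        transform_algorithm="lasso_lars", max_iter=20, random_state=0)
spatial_maps = dl.fit_transform(X.T)                # (voxels x networks), sparse rows
time_courses = dl.components_                       # (networks x time points)

# spatial ICA baseline: independent, but generally non-sparse, voxel loadings
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
ica_maps = ica.fit_transform(X.T)                   # (voxels x networks)
ica_courses = ica.components_                       # (networks x time points)

print("near-zero voxel loadings, sparse coding:", float((np.abs(spatial_maps) < 1e-8).mean()))
print("near-zero voxel loadings, ICA:", float((np.abs(ica_maps) < 1e-8).mean()))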
A Sequence of Sorting Strategies.
ERIC Educational Resources Information Center
Duncan, David R.; Litwiller, Bonnie H.
1984-01-01
Describes eight increasingly sophisticated and efficient sorting algorithms including linear insertion, binary insertion, shellsort, bubble exchange, shakersort, quick sort, straight selection, and tree selection. Provides challenges for the reader and the student to program these efficiently. (JM)
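For instance, binary insertion differs from linear insertion only in how each element's position is located; a short Python sketch:

from bisect import bisect_right

def binary_insertion_sort(items):
    """Insertion sort that finds each insertion point with binary search."""
    result = []
    for x in items:
        pos = bisect_right(result, x)   # O(log n) comparisons to locate the slot
        result.insert(pos, x)           # O(n) shifts, as in ordinary insertion sort
    return result

print(binary_insertion_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]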
Cooperative Optimal Coordination for Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Ren, Wei
In this paper, we consider the optimal coordination problem for distributed energy resources (DERs) including distributed generators and energy storage devices. We propose an algorithm based on the push-sum and gradient method to optimally coordinate storage devices and distributed generators in a distributed manner. In the proposed algorithm, each DER only maintains a set of variables and updates them through information exchange with a few neighbors over a time-varying directed communication network. We show that the proposed distributed algorithm solves the optimal DER coordination problem if the time-varying directed communication network is uniformly jointly strongly connected, which is a mild condition on the connectivity of communication topologies. The proposed distributed algorithm is illustrated and validated by numerical simulations.
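A minimal sketch of the push-sum averaging step that underlies such methods: each node keeps a value and a weight, pushes half of each to its out-neighbour on a directed ring, and the ratio converges to the network-wide average. The gradient update and DER constraints of the actual algorithm are omitted, and the ring topology is an illustrative assumption.

import numpy as np

n = 5
values = np.array([4.0, 7.0, 1.0, 9.0, 3.0])     # stand-in for local quantities to average
x = values.copy()                                # push-sum numerators
w = np.ones(n)                                   # push-sum weights

for _ in range(100):
    out = (np.arange(n) + 1) % n                 # directed ring: node i sends to node i+1
    x_new, w_new = x / 2.0, w / 2.0              # keep half ...
    np.add.at(x_new, out, x / 2.0)               # ... push the other half to the out-neighbour
    np.add.at(w_new, out, w / 2.0)
    x, w = x_new, w_new

estimates = x / w                                # every node's estimate of the network average
assert np.allclose(estimates, values.mean())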
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
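A serial Python sketch of the two move types described (cell exchanges and cell displacements) under a standard Metropolis acceptance rule and a half-perimeter wirelength cost; the hypercube mapping, parallel cost evaluation, and tree broadcasting of the original algorithm are not reproduced, and the cell and net lists below are hypothetical.

import math, random

def wirelength(placement, nets):
    """Half-perimeter wirelength summed over all nets."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(cells, nets, grid=10, T=10.0, cooling=0.995, steps=20000, seed=0):
    random.seed(seed)
    placement = {c: (random.randrange(grid), random.randrange(grid)) for c in cells}
    cost = wirelength(placement, nets)
    for _ in range(steps):
        trial = dict(placement)
        if random.random() < 0.5:                     # move type 1: cell exchange
            a, b = random.sample(cells, 2)
            trial[a], trial[b] = trial[b], trial[a]
        else:                                         # move type 2: cell displacement
            a = random.choice(cells)
            trial[a] = (random.randrange(grid), random.randrange(grid))
        new_cost = wirelength(trial, nets)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
            placement, cost = trial, new_cost         # Metropolis acceptance
        T *= cooling                                  # geometric cooling schedule
    return placement, cost

cells = list("ABCDEF")
nets = [("A", "B", "C"), ("C", "D"), ("D", "E", "F"), ("A", "F")]
print(anneal(cells, nets)[1])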
Mountain, James E.; Santer, Peter; O’Neill, David P.; Smith, Nicholas M. J.; Ciaffoni, Luca; Couper, John H.; Ritchie, Grant A. D.; Hancock, Gus; Whiteley, Jonathan P.
2018-01-01
Inhomogeneity in the lung impairs gas exchange and can be an early marker of lung disease. We hypothesized that highly precise measurements of gas exchange contain sufficient information to quantify many aspects of the inhomogeneity noninvasively. Our aim was to explore whether one parameterization of lung inhomogeneity could both fit such data and provide reliable parameter estimates. A mathematical model of gas exchange in an inhomogeneous lung was developed, containing inhomogeneity parameters for compliance, vascular conductance, and dead space, all relative to lung volume. Inputs were respiratory flow, cardiac output, and the inspiratory and pulmonary arterial gas compositions. Outputs were expiratory and pulmonary venous gas compositions. All values were specified every 10 ms. Some parameters were set to physiologically plausible values. To estimate the remaining unknown parameters and inputs, the model was embedded within a nonlinear estimation routine to minimize the deviations between model and data for CO2, O2, and N2 flows during expiration. Three groups, each of six individuals, were studied: young (20–30 yr); old (70–80 yr); and patients with mild to moderate chronic obstructive pulmonary disease (COPD). Each participant undertook a 15-min measurement protocol six times. For all parameters reflecting inhomogeneity, highly significant differences were found between the three participant groups (P < 0.001, ANOVA). Intraclass correlation coefficients were 0.96, 0.99, and 0.94 for the parameters reflecting inhomogeneity in deadspace, compliance, and vascular conductance, respectively. We conclude that, for the particular participants selected, highly repeatable estimates for parameters reflecting inhomogeneity could be obtained from noninvasive measurements of respiratory gas exchange. NEW & NOTEWORTHY This study describes a new method, based on highly precise measures of gas exchange, that quantifies three distributions that are intrinsic to the lung. These distributions represent three fundamentally different types of inhomogeneity that together give rise to ventilation-perfusion mismatch and result in impaired gas exchange. The measurement technique has potentially broad clinical applicability because it is simple for both patient and operator, it does not involve ionizing radiation, and it is completely noninvasive. PMID:29074714
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
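A one-dimensional illustration of the surrogate idea: an uncertain output is represented as a polynomial chaos expansion in a standard Gaussian germ, with coefficients fitted by least squares to noisy samples standing in for MD output. The model, expansion order, and sample sizes are illustrative; the atomistic-continuum coupling and Bayesian machinery are not reproduced.

import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def expensive_model(xi):
    """Stand-in for a noisy atomistic (MD-like) response to an uncertain input xi."""
    return np.sinh(0.8 * xi) + 0.05 * rng.standard_normal(xi.shape)

order = 5
xi_train = rng.standard_normal(200)                      # samples of the standard-normal germ
y_train = expensive_model(xi_train)

Psi = hermevander(xi_train, order)                       # probabilists' Hermite basis He_0..He_5
coeffs, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)   # PCE coefficients by least squares

# the surrogate is now cheap to evaluate, e.g. inside coupling iterations
xi_test = np.linspace(-2, 2, 5)
surrogate = hermevander(xi_test, order) @ coeffs
print(np.round(surrogate, 3))

# because E[He_k^2] = k!, the output mean and variance follow directly from the coefficients
mean = coeffs[0]
variance = sum(coeffs[k]**2 * math.factorial(k) for k in range(1, order + 1))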
Exchange inlet optimization by genetic algorithm for improved RBCC performance
NASA Astrophysics Data System (ADS)
Chorkawy, G.; Etele, J.
2017-09-01
A genetic algorithm based on real parameter representation using a variable selection pressure and variable probability of mutation is used to optimize an annular air breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air massflows to between 1% and 9% of numerically simulated values depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to be able to identify beneficial values for particular alleles when they exist while showing the ability to handle cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air breathing engine based on a hydrogen fuelled rocket an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
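A sketch of a real-parameter GA in the spirit described, with a selection pressure that grows and a mutation probability that decays over the run; the exchange-inlet performance model is replaced here by a toy fitness function, and the operator details are illustrative assumptions rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(3)

def fitness(x):                                  # toy stand-in for the inlet performance model
    return -np.sum((x - 0.3)**2)

def real_ga(dim=6, pop_size=40, gens=200, lo=0.0, hi=1.0):
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for g in range(gens):
        frac = g / gens
        tour_k = 2 + int(3 * frac)               # selection pressure grows over the run
        p_mut = 0.20 * (1 - frac) + 0.02 * frac  # mutation probability decays over the run
        fit = np.array([fitness(ind) for ind in pop])
        new_pop = [pop[np.argmax(fit)].copy()]   # elitism
        while len(new_pop) < pop_size:
            parents = []
            for _ in range(2):                   # tournament selection of two parents
                idx = rng.choice(pop_size, size=tour_k, replace=False)
                parents.append(pop[idx[np.argmax(fit[idx])]])
            w = rng.random(dim)                  # blend (arithmetic) crossover on real genes
            child = w * parents[0] + (1 - w) * parents[1]
            mask = rng.random(dim) < p_mut       # per-gene Gaussian mutation
            child[mask] += 0.1 * rng.standard_normal(mask.sum())
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)], fit.max()

best, best_fit = real_ga()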
Melanoma detection using smartphone and multimode hyperspectral imaging
NASA Astrophysics Data System (ADS)
MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.
2016-04-01
This project's goal is to determine how to effectively implement a technology continuum from a low-cost, remotely deployable imaging device to a more sophisticated multimode imaging system within standard clinical practice. In this work a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus that are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. The relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary care practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.
Coupled Protein Diffusion and Folding in the Cell
Guo, Minghao; Gelman, Hannah; Gruebele, Martin
2014-01-01
When a protein unfolds in the cell, its diffusion coefficient is affected by its increased hydrodynamic radius and by interactions of exposed hydrophobic residues with the cytoplasmic matrix, including chaperones. We characterize protein diffusion by photobleaching whole cells at a single point, and imaging the concentration change of fluorescent-labeled protein throughout the cell as a function of time. As a folded reference protein we use green fluorescent protein. The resulting region-dependent anomalous diffusion is well characterized by 2-D or 3-D diffusion equations coupled to a clustering algorithm that accounts for position-dependent diffusion. Then we study diffusion of a destabilized mutant of the enzyme phosphoglycerate kinase (PGK) and of its stable control inside the cell. Unlike the green fluorescent protein control's diffusion coefficient, PGK's diffusion coefficient is a non-monotonic function of temperature, signaling ‘sticking’ of the protein in the cytosol as it begins to unfold. The temperature-dependent increase and subsequent decrease of the PGK diffusion coefficient in the cytosol is greater than a simple size-scaling model suggests. Chaperone binding of the unfolding protein inside the cell is one plausible candidate for even slower diffusion of PGK, and we test the plausibility of this hypothesis experimentally, although we do not rule out other candidates. PMID:25436502
Coupled protein diffusion and folding in the cell.
Guo, Minghao; Gelman, Hannah; Gruebele, Martin
2014-01-01
When a protein unfolds in the cell, its diffusion coefficient is affected by its increased hydrodynamic radius and by interactions of exposed hydrophobic residues with the cytoplasmic matrix, including chaperones. We characterize protein diffusion by photobleaching whole cells at a single point, and imaging the concentration change of fluorescent-labeled protein throughout the cell as a function of time. As a folded reference protein we use green fluorescent protein. The resulting region-dependent anomalous diffusion is well characterized by 2-D or 3-D diffusion equations coupled to a clustering algorithm that accounts for position-dependent diffusion. Then we study diffusion of a destabilized mutant of the enzyme phosphoglycerate kinase (PGK) and of its stable control inside the cell. Unlike the green fluorescent protein control's diffusion coefficient, PGK's diffusion coefficient is a non-monotonic function of temperature, signaling 'sticking' of the protein in the cytosol as it begins to unfold. The temperature-dependent increase and subsequent decrease of the PGK diffusion coefficient in the cytosol is greater than a simple size-scaling model suggests. Chaperone binding of the unfolding protein inside the cell is one plausible candidate for even slower diffusion of PGK, and we test the plausibility of this hypothesis experimentally, although we do not rule out other candidates.
EMISSION AND SURFACE EXCHANGE PROCESS
This task supports the development, evaluation, and application of emission and dry deposition algorithms in air quality simulation models, such as the Models-3/Community Multiscale Air Quality (CMAQ) modeling system. Emission estimates influence greatly the accuracy of air qual...
An Uncertainty-Based Distributed Fault Detection Mechanism for Wireless Sensor Networks
Yang, Yang; Gao, Zhipeng; Zhou, Hang; Qiu, Xuesong
2014-01-01
Exchanging too many messages for fault detection will cause not only a degradation of the network quality of service, but also represents a huge burden on the limited energy of sensors. Therefore, we propose an uncertainty-based distributed fault detection through aided judgment of neighbors for wireless sensor networks. The algorithm considers the serious influence of sensing measurement loss and therefore uses Markov decision processes for filling in missing data. Most important of all, fault misjudgments caused by uncertainty conditions are the main drawbacks of traditional distributed fault detection mechanisms. We draw on the experience of evidence fusion rules based on information entropy theory and the degree of disagreement function to increase the accuracy of fault detection. Simulation results demonstrate our algorithm can effectively reduce communication energy overhead due to message exchanges and provide a higher detection accuracy ratio. PMID:24776937
Global optimization algorithm for heat exchanger networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quesada, I.; Grossmann, I.E.
This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator that involves linear and nonlinear estimators for fractional and bilinear terms which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method requires only a few nodes in the branch and bound search.
NASA Astrophysics Data System (ADS)
Pacheco-Vega, Arturo
2016-09-01
In this work a new set of correlation equations is developed and introduced to accurately describe the thermal performance of compact heat exchangers with possible condensation. The feasible operating conditions for the thermal system correspond to dry-surface, dropwise condensation, and film condensation. Using a prescribed form for each condition, a global regression analysis for the best-fit correlation to experimental data is carried out with a simulated annealing optimization technique. The experimental data were taken from the literature and algorithmically classified into three groups (related to the possible operating conditions) with a previously-introduced Gaussian-mixture-based methodology. Prior to their use in the analysis, the correct data classification was assessed and confirmed via artificial neural networks. Predictions from the correlations obtained for the different conditions are within the uncertainty of the experiments and substantially more accurate than those commonly used.
NASA Astrophysics Data System (ADS)
Longmore, S. P.; Knaff, J. A.; Schumacher, A.; Dostalek, J.; DeMaria, R.; Chirokova, G.; Demaria, M.; Powell, D. C.; Sigmund, A.; Yu, W.
2014-12-01
The Colorado State University (CSU) Cooperative Institute for Research in the Atmosphere (CIRA) has recently deployed a tropical cyclone (TC) intensity and surface wind radii estimation algorithm that utilizes Suomi National Polar-orbiting Partnership (S-NPP) satellite Advanced Technology Microwave Sounder (ATMS) and Advanced Microwave Sounding Unit (AMSU) from the NOAA18, NOAA19 and METOPA polar orbiting satellites for testing, integration and operations for the Product System Development and Implementation (PSDI) projects at NOAA's National Environmental Satellite, Data, and Information Service (NESDIS). This presentation discusses the evolution of the CIRA NPP/AMSU TC algorithms internally at CIRA and its migration and integration into the NOAA Data Exploitation (NDE) development and testing frameworks. The discussion will focus on 1) the development cycle of internal NPP/AMSU TC algorithms components by scientists and software engineers, 2) the exchange of these components into the NPP/AMSU TC software systems using the subversion version control system and other exchange methods, 3) testing, debugging and integration of the NPP/AMSU TC systems both at CIRA/NESDIS and 4) the update cycle of new releases through continuous integration. Lastly, a discussion of the methods that were effective and those that need revision will be detailed for the next iteration of the NPP/AMSU TC system.
Structure-preserving and rank-revealing QR-factorizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Hansen, P.C.
1991-11-01
The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular, for sparse matrices. A sparse RRQR-algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture safely the numerical rank. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then applies the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR factorization will exploit the fact that certain column exchanges do not change the sparsity structure, and compute a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Due to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce the RRQR-factorization.
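The basic rank-revealing idea can be sketched with SciPy's column-pivoted QR; the sparsity-preserving restricted pivoting and incremental condition estimation of the paper are not reproduced, and the tolerance below is a simple illustrative choice.

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
# build a 50x40 matrix of numerical rank 12
A = rng.standard_normal((50, 12)) @ rng.standard_normal((12, 40))

Q, R, piv = qr(A, pivoting=True)                 # column-pivoted (rank-revealing) QR
diag = np.abs(np.diag(R))                        # pivoting keeps these roughly decreasing
tol = 1e-10 * diag[0]                            # simple relative tolerance for this sketch
numerical_rank = int(np.sum(diag > tol))
print(numerical_rank)                            # -> 12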
Ratner, Lloyd E; Ratner, Emily R; Kelly, Joan; Carrol, Maureen; Cherwinski, Karyn; Ernst, Victoria; Rana, Abbas
2008-01-01
Paired kidney exchanges are being used with increasing frequency to overcome humoral immunologic incompatibilities between patients in need of renal transplantation and their potential live donors. Altruistic unbalanced exchanges utilize compatible donor/recipient pairs in order to facilitate the transplantation of a patient with an incompatible donor. We have now performed several altruistic unbalanced paired kidney exchanges at our institution. Also, we have surveyed potential donors and recipients regarding their attitudes toward participating in altruistic unbalanced paired kidney exchanges. Patients are most amenable to participation if they perceive a benefit from trading away a compatible donor. Given the number of compatible live donor transplants performed annually, if practiced on a broad scale, altruistic unbalanced paired kidney exchanges can have a profound impact upon the supply of kidneys for transplantation. These exchanges can be performed at individual centers without the requirement for large sharing pools or complex computer algorithms. However, there are a number of ethical and logistical considerations that must be addressed. Altruistic unbalanced paired kidney exchanges represent a major paradigm shift in renal transplantation, in that a private resource (i.e. the live kidney donor) is converted to a shared or public one.
1976-11-11
exchange. The basis for this choice was derived from several factors. One was a timing analysis that was made for certain basic time-critical software... candidate system designs were developed and examined with respect to their capability to demonstrate the workability of the basic concept and for factors ... algorithm requires a bit time completion, while SOF production allows byte timing and the involved SOF correlation procedure may be performed during
Coordination Logic for Repulsive Resolution Maneuvers
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.; Dutle, Aaron M.
2016-01-01
This paper presents an algorithm for determining the direction an aircraft should maneuver in the event of a potential conflict with another aircraft. The algorithm is implicitly coordinated, meaning that with perfectly reliable computations and information, it will independently provide directional information that is guaranteed to be coordinated without any additional information exchange or direct communication. The logic is inspired by the logic of TCAS II, the airborne system designed to reduce the risk of mid-air collisions between aircraft. TCAS II provides pilots with only vertical resolution advice, while the proposed algorithm, using a similar logic, provides implicitly coordinated vertical and horizontal directional advice.
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
Svoboda, David; Ulman, Vladimir
2017-01-01
The proper analysis of biological microscopy images is an important and complex task. Therefore, it requires verification of all steps involved in the process, including image segmentation and tracking algorithms. It is generally better to verify algorithms with computer-generated ground truth datasets, which, compared to manually annotated data, nowadays have reached high quality and can be produced in large quantities even for 3D time-lapse image sequences. Here, we propose a novel framework, called MitoGen, which is capable of generating ground truth datasets with fully 3D time-lapse sequences of synthetic fluorescence-stained cell populations. MitoGen shows biologically justified cell motility, shape and texture changes as well as cell divisions. Standard fluorescence microscopy phenomena such as photobleaching, blur with real point spread function (PSF), and several types of noise, are simulated to obtain realistic images. The MitoGen framework is scalable in both space and time. MitoGen generates visually plausible data that shows good agreement with real data in terms of image descriptors and mean square displacement (MSD) trajectory analysis. Additionally, it is also shown in this paper that four publicly available segmentation and tracking algorithms exhibit similar performance on both real and MitoGen-generated data. The implementation of MitoGen is freely available.
Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with application of quantitative soft computing prediction models into financial area as reliable and accurate prediction models can be very helpful in management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate genetic algorithm as an optimizing technique for adapting parameters of ANN which is then compared with standard backpropagation and backpropagation combined with K-means clustering algorithm. Finally, the authors find out that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making the bad decision in decision-making process. PMID:26977450
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with application of quantitative soft computing prediction models into financial area as reliable and accurate prediction models can be very helpful in management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate genetic algorithm as an optimizing technique for adapting parameters of ANN which is then compared with standard backpropagation and backpropagation combined with K-means clustering algorithm. Finally, the authors find out that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making the bad decision in decision-making process.
Development and evaluation of a predictive algorithm for telerobotic task complexity
NASA Technical Reports Server (NTRS)
Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.
1993-01-01
There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.
DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach
NASA Astrophysics Data System (ADS)
Tchagang, Alain B.; Tewfik, Ahmed H.
2006-12-01
Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
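A sketch of the simplest case mentioned above, biclusters with constant values, using only elementary array operations: starting from a seed row and a target value, the rows and columns whose intersection is uniformly that value are collected greedily. The planted matrix and the greedy refinement are illustrative; the coherent-value and coherent-evolution cases require the fuller algorithms of the paper.

import numpy as np

def constant_bicluster(X, value, seed_row):
    """Greedy search for a constant-value bicluster containing the given seed row."""
    cols = np.where(X[seed_row] == value)[0]                 # columns where the seed row has the value
    if cols.size == 0:
        return np.array([seed_row]), cols
    rows = np.where((X[:, cols] == value).all(axis=1))[0]    # rows constant over those columns
    cols = np.where((X[rows] == value).all(axis=0))[0]       # re-tighten columns over those rows
    return rows, cols

# toy "expression matrix" with a planted 3x3 bicluster of constant value 7
X = np.random.default_rng(1).integers(0, 5, size=(6, 8))
X[np.ix_([1, 3, 4], [2, 5, 6])] = 7

rows, cols = constant_bicluster(X, value=7, seed_row=1)
print(rows, cols)                                            # -> [1 3 4] [2 5 6]
assert (X[np.ix_(rows, cols)] == 7).all()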
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model carries a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE works by searching the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling, but M-H is not an efficient sampler for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient sampling algorithm DREAMzs is incorporated into the local sampling of NSE. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from considerable instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse grid surrogates.
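A minimal nested sampling loop for a one-dimensional Gaussian toy problem, with the local sampling step done by a short random walk constrained to exceed the current likelihood threshold; in the improved estimator discussed above, a sampler such as DREAMzs would replace that step. The toy likelihood, prior, and tuning values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):                              # toy likelihood: a single N(0,1) datum observed at 0
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

LO, HI = -5.0, 5.0                               # uniform prior on theta

def constrained_draw(start, threshold, steps=30, scale=0.5):
    """Random-walk moves accepted only while staying above the likelihood threshold."""
    theta = start
    for _ in range(steps):
        prop = theta + scale * rng.standard_normal()
        if LO <= prop <= HI and loglike(prop) > threshold:
            theta = prop
    return theta

def nested_sampling(n_live=100, n_iter=1500):
    live = rng.uniform(LO, HI, n_live)
    logL = loglike(live)
    Z, X_prev = 0.0, 1.0
    for i in range(n_iter):
        worst = int(np.argmin(logL))
        X_curr = np.exp(-(i + 1) / n_live)       # expected prior-volume shrinkage
        Z += np.exp(logL[worst]) * (X_prev - X_curr)
        X_prev = X_curr
        # replace the worst live point by a new draw above its likelihood
        others = np.delete(np.arange(n_live), worst)
        seed = live[rng.choice(others)]
        live[worst] = constrained_draw(seed, logL[worst])
        logL[worst] = loglike(live[worst])
    Z += np.exp(logL).mean() * X_prev            # contribution of the remaining live points
    return Z

# analytic marginal likelihood for this toy problem is about 1/(HI - LO) = 0.1
print(nested_sampling(), 1.0 / (HI - LO))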
Resolving Conflicts Between Syntax and Plausibility in Sentence Comprehension
Andrews, Glenda; Ogden, Jessica E.; Halford, Graeme S.
2017-01-01
Comprehension of plausible and implausible object- and subject-relative clause sentences with and without prepositional phrases was examined. Undergraduates read each sentence then evaluated a statement as consistent or inconsistent with the sentence. Higher acceptance of consistent than inconsistent statements indicated reliance on syntactic analysis. Higher acceptance of plausible than implausible statements reflected reliance on semantic plausibility. There was greater reliance on semantic plausibility and lesser reliance on syntactic analysis for more complex object-relatives and sentences with prepositional phrases than for less complex subject-relatives and sentences without prepositional phrases. Comprehension accuracy and confidence were lower when syntactic analysis and semantic plausibility yielded conflicting interpretations. The conflict effect on comprehension was significant for complex sentences but not for less complex sentences. Working memory capacity predicted resolution of the syntax-plausibility conflict in more and less complex items only when sentences and statements were presented sequentially. Fluid intelligence predicted resolution of the conflict in more and less complex items under sequential and simultaneous presentation. Domain-general processes appear to be involved in resolving syntax-plausibility conflicts in sentence comprehension. PMID:28458748
Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard
2008-01-01
Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205
Counterfactual Plausibility and Comparative Similarity.
Stanley, Matthew L; Stewart, Gregory W; Brigard, Felipe De
2017-05-01
Counterfactual thinking involves imagining hypothetical alternatives to reality. Philosopher David Lewis (1973, 1979) argued that people estimate the subjective plausibility that a counterfactual event might have occurred by comparing an imagined possible world in which the counterfactual statement is true against the current, actual world in which the counterfactual statement is false. Accordingly, counterfactuals considered to be true in possible worlds comparatively more similar to ours are judged as more plausible than counterfactuals deemed true in possible worlds comparatively less similar. Although Lewis did not originally develop his notion of comparative similarity to be investigated as a psychological construct, this study builds upon his idea to empirically investigate comparative similarity as a possible psychological strategy for evaluating the perceived plausibility of counterfactual events. More specifically, we evaluate judgments of comparative similarity between episodic memories and episodic counterfactual events as a factor influencing people's judgments of plausibility in counterfactual simulations, and we also compare it against other factors thought to influence judgments of counterfactual plausibility, such as ease of simulation and prior simulation. Our results suggest that the greater the perceived similarity between the original memory and the episodic counterfactual event, the greater the perceived plausibility that the counterfactual event might have occurred. While similarity between actual and counterfactual events, ease of imagining, and prior simulation of the counterfactual event were all significantly related to counterfactual plausibility, comparative similarity best captured the variance in ratings of counterfactual plausibility. Implications for existing theories on the determinants of counterfactual plausibility are discussed. Copyright © 2016 Cognitive Science Society, Inc.
Successful attack on permutation-parity-machine-based neural cryptography.
Seoane, Luís F; Ruttor, Andreas
2012-02-01
An algorithm is presented which implements a probabilistic attack on the key-exchange protocol based on permutation parity machines. Instead of imitating the synchronization of the communicating partners, the strategy consists of a Monte Carlo method to sample the space of possible weights during inner rounds and an analytic approach to convey the extracted information from one outer round to the next one. The results show that the protocol under attack fails to synchronize faster than an eavesdropper using this algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Utgikar, Vivek; Sun, Xiaodong; Christensen, Richard
2016-12-29
The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchange system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate the models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and developing steady-state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal hydraulic performance of the two prototype heat exchangers designed and fabricated for the project at steady-state and transient conditions to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized two test facilities at The Ohio State University (OSU), including the existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.
Deformable image registration for adaptive radiotherapy with guaranteed local rigidity constraints.
König, Lars; Derksen, Alexander; Papenberg, Nils; Haas, Benjamin
2016-09-20
Deformable image registration (DIR) is a key component in many radiotherapy applications. However, often resulting deformations are not satisfying, since varying deformation properties of different anatomical regions are not considered. To improve the plausibility of DIR in adaptive radiotherapy in the male pelvic area, this work integrates a local rigidity deformation model into a DIR algorithm. A DIR framework is extended by constraints, enforcing locally rigid deformation behavior for arbitrary delineated structures. The approach restricts those structures to rigid deformations, while surrounding tissue is still allowed to deform elastically. The algorithm is tested on ten CT/CBCT male pelvis datasets with active rigidity constraints on bones and prostate and compared to the Varian SmartAdapt deformable registration (VSA) on delineations of bladder, prostate and bones. The approach with no rigid structures (REG0) obtains an average dice similarity coefficient (DSC) of 0.87 ± 0.06 and a Hausdorff-Distance (HD) of 8.74 ± 5.95 mm. The new approach with rigid bones (REG1) yields a DSC of 0.87 ± 0.07, HD 8.91 ± 5.89 mm. Rigid deformation of bones and prostate (REG2) obtains 0.87 ± 0.06, HD 8.73 ± 6.01 mm, while VSA yields a DSC of 0.86 ± 0.07, HD 10.22 ± 6.62 mm. No deformation grid foldings are observed for REG0 and REG1 in 7 of 10 cases; for REG2 in 8 of 10 cases, with no grid foldings in prostate, an average of 0.08 % in bladder (REG2: no foldings) and 0.01 % inside the body contour. VSA exhibits grid foldings in each case, with an average percentage of 1.81 % for prostate, 1.74 % for bladder and 0.12 % for the body contour. While REG1 and REG2 keep bones rigid, elastic bone deformations are observed with REG0 and VSA. An average runtime of 26.2 s was achieved with REG1; 31.1 s with REG2, compared to 10.5 s with REG0 and 10.7 s with VMS. With accuracy in the range of VSA, the new approach with constraints delivers physically more plausible deformations in the pelvic area with guaranteed rigidity of arbitrary structures. Although the algorithm uses an advanced deformation model, clinically feasible runtimes are achieved.
Extent of Fock-exchange mixing for a hybrid van der Waals density functional?
NASA Astrophysics Data System (ADS)
Jiao, Yang; Schröder, Elsebeth; Hyldgaard, Per
2018-05-01
The vdW-DF-cx0 exchange-correlation hybrid design [K. Berland et al., J. Chem. Phys. 146, 234106 (2017)] has a truly nonlocal correlation component and aims to facilitate concurrent descriptions of both covalent and non-covalent molecular interactions. The vdW-DF-cx0 design mixes a fixed ratio, a, of the Fock exchange into the consistent-exchange van der Waals density functional, vdW-DF-cx [K. Berland and P. Hyldgaard, Phys. Rev. B 89, 035412 (2014)]. The mixing value a is sometimes taken as a semi-empirical parameter in hybrid formulations. Here, instead, we assert a plausible optimum average a value for the vdW-DF-cx0 design from a formal analysis; A new, independent determination of the mixing a is necessary since the Becke fit [A. D. Becke, J. Chem. Phys. 98, 5648 (1993)], yielding a' = 0.2, is restricted to semilocal correlation and does not reflect non-covalent interactions. To proceed, we adapt the so-called two-legged hybrid construction [K. Burke et al., Chem. Phys. Lett. 265, 115 (1997)] to a starting point in the vdW-DF-cx functional. For our approach, termed vdW-DF-tlh, we estimate the properties of the adiabatic-connection specification of the exact exchange-correlation functional, by combining calculations of the Fock exchange and of the coupling-constant variation in vdW-DF-cx. We find that such vdW-DF-tlh hybrid constructions yield accurate characterizations of molecular interactions (even if they lack self-consistency). The accuracy motivates trust in the vdW-DF-tlh determination of system-specific values of the Fock-exchange mixing. We find that an average value a' = 0.2 best characterizes the vdW-DF-tlh description of covalent and non-covalent interactions, although there exists some scatter. This finding suggests that the original Becke value, a' = 0.2, also represents an optimal average Fock-exchange mixing for the new, truly nonlocal-correlation hybrids. To enable self-consistent calculations, we furthermore define and test a zero-parameter hybrid functional vdW-DF-cx0p (having fixed mixing a' = 0.2) and document that this truly nonlocal correlation hybrid works for general molecular interactions (at reference and at relaxed geometries). It is encouraging that the vdW-DF-cx0p functional remains useful also for descriptions of some extended systems.
Faulkner, Jonathan; Hu, Bill X; Kish, Stephen; Hua, Fei
2009-11-03
New mathematical and laboratory methods have been developed for simulating groundwater flow and solute transport in karst aquifers having conduits embedded in a porous medium, such as limestone. The Stokes equations are used to model the flow in the conduits and the Darcy equation is used for the flow in the matrix. The Beavers-Joseph interface boundary conditions are adopted to describe the flow exchange at the interface boundary between the two domains. A laboratory analog is used to simulate the conduit and matrix domains of a karst aquifer. The conduit domain is located at the bottom of the transparent plexiglas laboratory analog and glass beads occupy the remaining space to represent the matrix domain. Water flows into and out of the two domains separately and each has its own supply and outflow reservoirs. Water and solute are exchanged through an interface between the two domains. Pressure transducers located within the matrix and conduit domains of the analog provide data that are processed and stored in digital format. Dye tracing experiments are recorded using time-lapse imaging. The data and images produced are analyzed by a spatial analysis program. The experiments provide not only hydraulic head distributions but also solute front images and mass exchange measurements between the conduit and matrix domains. In the experiment, we measure and record pressures, and quantify flow rates and solute transport. The results present a plausible argument that laboratory analogs can characterize groundwater flow, solute transport, and mass exchange between the conduit and matrix domains in a karst aquifer. The analog validates the predictions of a numerical model and demonstrates the need for laboratory analogs to provide verification of proposed theories and the calibration of mathematical models.
Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun
2014-01-01
An effective hybrid cuckoo search algorithm (CS) with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of the SFLA, an improved frog-leap operator is designed that combines the influence of the global optimal information on frog leaping, information exchange between frog individuals, and genetic mutation applied with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
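As a concrete illustration of the repair step mentioned above, the sketch below shows a minimal greedy transform for the 0-1 knapsack problem: an infeasible solution is pruned by dropping its lowest value-density items, and a feasible one is then topped up greedily. The function name, the repair order, and the example data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a greedy repair/optimization operator for the 0-1 knapsack problem.
# Assumption: `solution` is a list of 0/1 flags; this is not the paper's exact operator.
def greedy_transform(solution, values, weights, capacity):
    n = len(solution)
    # Items sorted by value density, best first.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    # Repair: drop the least dense selected items until the knapsack fits.
    total_w = sum(w for s, w in zip(solution, weights) if s)
    for i in reversed(order):
        if total_w <= capacity:
            break
        if solution[i]:
            solution[i] = 0
            total_w -= weights[i]

    # Optimize: greedily add the densest unselected items that still fit.
    for i in order:
        if not solution[i] and total_w + weights[i] <= capacity:
            solution[i] = 1
            total_w += weights[i]
    return solution

# Example: repair an overweight candidate solution.
print(greedy_transform([1, 1, 1, 1], values=[10, 7, 4, 3], weights=[5, 4, 3, 2], capacity=8))
```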
Xu, Lei; Jeavons, Peter
2015-11-01
Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
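To make the one-bit restriction concrete, here is a toy synchronous simulation of the general coin-flip-and-broadcast idea in a complete anonymous network: in each round, candidates that stay silent while someone else broadcasts withdraw. This is only an illustrative sketch of the style of algorithm, with an assumed broadcast probability and a global termination check that real anonymous nodes would not have; it is not the authors' fly-inspired protocol.

```python
import random

def toy_leader_election(n, p=0.5, seed=0, max_rounds=10_000):
    """Toy simulation: each active candidate broadcasts one bit with probability p.
    If at least one broadcast occurs, silent candidates withdraw.
    NOTE: the len(active) check below is a global view used only to stop the
    simulation; it is not available to anonymous nodes in the real setting."""
    rng = random.Random(seed)
    active = set(range(n))          # anonymous candidates; ids used only for bookkeeping
    rounds = 0
    while len(active) > 1 and rounds < max_rounds:
        rounds += 1
        broadcasters = {i for i in active if rng.random() < p}
        if broadcasters:            # nodes only sense "silence" vs "one or more messages"
            active = broadcasters   # silent candidates drop out
    return active, rounds

leaders, rounds = toy_leader_election(n=16)
print(f"remaining candidates: {len(leaders)} after {rounds} rounds")
```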
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
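A minimal numerical sketch of the offspring construction described above is given below: a child is placed at a weighted point on the line joining two parents and then perturbed normally along and orthogonally to that line. The parameter names and the choice of a random orthogonal direction are assumptions made for illustration, not details from the paper.

```python
import numpy as np

def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_orth=0.1, rng=None):
    """Sketch of a bell-curve-based child: weighted point on the parent-parent line,
    plus normally distributed deviations parallel and orthogonal to that line."""
    rng = np.random.default_rng() if rng is None else rng
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    u = d / np.linalg.norm(d)                    # unit vector along the connecting line
    base = p1 + w * d                            # weighted point between the parents
    # Random direction orthogonal to u (Gram-Schmidt on a random vector).
    r = rng.standard_normal(p1.size)
    r -= (r @ u) * u
    v = r / np.linalg.norm(r)
    return base + rng.normal(0.0, sigma_par) * u + rng.normal(0.0, sigma_orth) * v

child = bcb_child([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], rng=np.random.default_rng(1))
print(child)
```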
A Multipopulation Coevolutionary Strategy for Multiobjective Immune Algorithm
Shi, Jiao; Gong, Maoguo; Ma, Wenping; Jiao, Licheng
2014-01-01
How to maintain the population diversity is an important issue in designing a multiobjective evolutionary algorithm. This paper presents an enhanced nondominated neighbor-based immune algorithm in which a multipopulation coevolutionary strategy is introduced for improving the population diversity. In the proposed algorithm, subpopulations evolve independently; thus the unique characteristics of each subpopulation can be effectively maintained, and the diversity of the entire population is effectively increased. Besides, the dynamic information of multiple subpopulations is obtained with the help of the designed cooperation operator which reflects a mutually beneficial relationship among subpopulations. Subpopulations gain the opportunity to exchange information, thereby expanding the search range of the entire population. Subpopulations make use of the reference experience from each other, thereby improving the efficiency of evolutionary search. Compared with several state-of-the-art multiobjective evolutionary algorithms on well-known and frequently used multiobjective and many-objective problems, the proposed algorithm achieves comparable results in terms of convergence, diversity metrics, and running time on most test problems. PMID:24672330
Direct reciprocity in animals: The roles of bonding and affective processes.
Freidin, Esteban; Carballo, Fabricio; Bentosela, Mariana
2017-04-01
The presence of direct reciprocity in animals is a debated topic, because, despite its evolutionary plausibility, it is believed to be uncommon. Some authors claim that stable reciprocal exchanges require sophisticated cognition which has acted as a constraint on its evolution across species. In contrast, a more recent trend of research has focused on the possibility that direct reciprocity occurs within long-term bonds and relies on simple as well as more complex affective mechanisms such as emotional book-keeping, rudimentary and higher forms of empathy, and inequity aversion, among others. First, we present evidence supporting the occurrence of long-term reciprocity in the context of existing bonds in social birds and mammals. Second, we discuss the evidence for affective responses which, modulated by bonding, may underlie altruistic behaviours in different species. We conclude that the mechanisms that may underlie reciprocal exchanges are diverse, and that some act in interaction with bonding processes. From simple associative learning in social contexts, through emotional contagion and behavioural mimicry, to empathy and a sense of fairness, widespread and diverse social affective mechanisms may explain why direct reciprocity may not be a rare phenomenon among social vertebrates. © 2015 International Union of Psychological Science.
Modeling Europa's Ice-Ocean Interface
NASA Astrophysics Data System (ADS)
Elsenousy, A.; Vance, S.; Bills, B. G.
2014-12-01
This work focuses on modeling the ice-ocean interface of Jupiter's moon Europa, mainly from the standpoint of the heat and salt transfer relationship, with emphasis on the basal ice growth rate and its implications for Europa's tidal response. Modeling the heat and salt flux at Europa's ice/ocean interface is necessary to understand the dynamics of Europa's ocean and its interaction with the upper ice shell, as well as the history of active turbulence in this area. To achieve this goal, we used the McPhee et al. (2008) parameterizations of Earth's ice/ocean interface, adapted to Europa's ocean dynamics. We varied one parameter at a time to test its influence on both "h", the basal ice growth rate, and "R", the double diffusion tendency strength. The double diffusion tendency "R" was calculated as the ratio of the interface heat exchange coefficient αh to the interface salt exchange coefficient αs. Our preliminary results showed a strong double diffusion tendency, R ~200, at Europa's ice-ocean interface for plausible changes in the heat flux due to the onset or elimination of hydrothermal activity, suggesting supercooling and a strong tendency for forming frazil ice.
Adsorption of nucleotides onto Fe-Mg-Al rich swelling clays
NASA Astrophysics Data System (ADS)
Feuillie, Cécile; Daniel, Isabelle; Michot, Laurent J.; Pedreira-Segade, Ulysse
2013-11-01
Mineral surfaces may have played a role in the origin of the first biopolymers by concentrating organic monomers from a dilute ocean. Swelling clays provide a high surface area for the concentration of prebiotic monomers and have therefore been the subject of numerous investigations. In that context, montmorillonite, the most abundant swelling clay in modern environments, has been extensively studied with regard to adsorption and polymerization of nucleic acids. However, montmorillonite was probably rather marginal on the primitive ocean floor compared to iron-magnesium rich phyllosilicates such as nontronite, which results from the hydrothermal alteration of a mafic or ultramafic oceanic crust. In the present paper, we study the adsorption of nucleotides on montmorillonite and nontronite at various pH and ionic strength conditions plausible for Archean seawater. A thorough characterization of the mineral surfaces shows that nucleotides adsorb mainly on the edge faces of the smectites by ligand exchange between the phosphate groups of the nucleotides and the -OH groups of the edge sites over a wide pH range (4-10). Nontronite is more reactive than montmorillonite. At low pH, additional ion exchange may play a role as the nucleotides become positively charged.
The Long and Viscous Road: Uncovering Nuclear Diffusion Barriers in Closed Mitosis
Zavala, Eder; Marquez-Lago, Tatiana T.
2014-01-01
Diffusion barriers are effective means for constraining protein lateral exchange in cellular membranes. In Saccharomyces cerevisiae, they have been shown to sustain parental identity through asymmetric segregation of ageing factors during closed mitosis. Even though barriers have been extensively studied in the plasma membrane, their identity and organization within the nucleus remains poorly understood. Based on different lines of experimental evidence, we present a model of the composition and structural organization of a nuclear diffusion barrier during anaphase. By means of spatial stochastic simulations, we propose how specialised lipid domains, protein rings, and morphological changes of the nucleus may coordinate to restrict protein exchange between mother and daughter nuclear lobes. We explore distinct, plausible configurations of these diffusion barriers and offer testable predictions regarding their protein exclusion properties and the diffusion regimes they generate. Our model predicts that, while a specialised lipid domain and an immobile protein ring at the bud neck can compartmentalize the nucleus during early anaphase; a specialised lipid domain spanning the elongated bridge between lobes would be entirely sufficient during late anaphase. Our work shows how complex nuclear diffusion barriers in closed mitosis may arise from simple nanoscale biophysical interactions. PMID:25032937
16O enrichments in aluminum-rich chondrules from ordinary chondrites
NASA Astrophysics Data System (ADS)
Russell, Sara S.; MacPherson, Glenn J.; Leshin, Laurie A.; McKeegan, Kevin D.
2000-12-01
The oxygen isotopic compositions of seven Al-rich chondrules from four unequilibrated ordinary chondrites were measured in situ using an ion microprobe. On an oxygen three isotope plot, the data are continuous with the ordinary chondrite ferromagnesian chondrule field but extend it to more 16O-enriched values along a mixing line of slope=0.83±0.09, with the lightest value recorded at δ18O=-15.7±1.8‰ and δ17O=-13.5±2.6‰. If Al-rich chondrules were mixtures of ferromagnesian chondrules and CAI material, their bulk chemical compositions would require them to exhibit larger 16O enrichments than we observe. Therefore, Al-rich chondrules are not simple mixtures of these two components. Three chondrules exhibit significant internal isotopic heterogeneity indicative of partial exchange with a gaseous reservoir. Porphyritic Al-rich chondrules are consistently 16O-rich relative to nonporphyritic ones, suggesting that degree of melting is a key factor and pointing to a nebular setting for the isotopic exchange process. Because Al-rich chondrules are closely related to ferromagnesian chondrules, their radiogenic Mg isotopic abundances can plausibly be applied to help constrain the timing or location of chondrule formation.
Effect of barium on diffusion of sodium in borosilicate glass.
Mishra, R K; Kumar, Sumit; Tomar, B S; Tyagi, A K; Kaushik, C P; Raj, Kanwar; Manchanda, V K
2008-08-15
Diffusion coefficients of sodium in barium borosilicate glasses having varying concentrations of barium were determined by a heterogeneous isotopic exchange method using (24)Na as the radiotracer for sodium. The measurements were carried out at various temperatures (748-798 K) to obtain the activation energy (E(a)) of diffusion. The E(a) values were found to increase with increasing barium content of the glass, indicating that introduction of barium into the borosilicate glass hinders the diffusion of alkali metal ions from the glass matrix. The results have been explained in terms of electrostatic and structural factors, with the increasing barium concentration resulting in population of low energy sites by Na(+) ions and, plausibly, formation of a tighter glass network. The leach rate measurements on the glass samples show a similar trend.
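For readers unfamiliar with how an activation energy is extracted from such measurements, the following sketch fits an Arrhenius law, D = D0·exp(-Ea/RT), to diffusion coefficients measured at several temperatures. The numerical values are made-up placeholders, not data from this study.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_activation_energy(T, D):
    """Fit ln(D) = ln(D0) - Ea/(R*T); returns Ea in kJ/mol and the prefactor D0."""
    T, D = np.asarray(T, float), np.asarray(D, float)
    slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
    return -slope * R / 1000.0, np.exp(intercept)

# Placeholder values only (illustrative, not the paper's measurements).
T = [748.0, 763.0, 778.0, 798.0]            # K
D = [1.2e-13, 1.9e-13, 2.9e-13, 5.1e-13]    # m^2/s
Ea, D0 = arrhenius_activation_energy(T, D)
print(f"Ea ≈ {Ea:.0f} kJ/mol, D0 ≈ {D0:.2e} m^2/s")
```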
Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package
NASA Astrophysics Data System (ADS)
Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack
Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi ``Knights Landing'' architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
NASA Astrophysics Data System (ADS)
Bogaard, T. A.; Buma, J. T.; Klawer, C. J. M.
2004-03-01
This paper's objective is to determine how useful geochemistry can be in landslide investigations; more specifically, what additional information can be gained by analysing the cation exchange capacity (CEC) and cation composition with respect to the hydrological system of a landslide area in clayey material. Two cores from the Boulc-Mondorès landslide (France) and one core from the Alvera landslide (Italy) were analysed. The NH4Ac and NaCl laboratory techniques are tested. The geochemical results are compared with the core descriptions and interpreted with respect to their usefulness. Both analysis techniques give identical results for CEC, and are plausible on the basis of the available clay content information. The determination of the exchangeable cations was more difficult, since part of the marls dissolved. With the ammonium-acetate method more of the marls are dissolved than with the sodium-chloride method. The NaCl method is preferred for the determination of the cation fractions at the complex, although this method has the disadvantage that the sodium fraction cannot be determined. To overcome this problem, it is recommended to try other displacement fluids. In the Boulc-Mondorès example, the subsurface information that can be extracted from CEC analyses was presented. In the Boulc-Mondorès cores, deviant intervals of CEC could be identified. These are interpreted as weathered layers (and preferential flow paths) that may develop or have already developed into slip surfaces. The major problem of the CEC analyses was to explain the origin of the differences found in the core samples. Both the Alvera and Boulc-Mondorès examples show transitions in cation composition with depth. It was shown that the exchangeable cation fractions can be useful in locating boundaries between water types, especially the boundary between the superficial, rain-fed hydrological system and the lower, regional groundwater system. This information may be important for landslide interventions, since the hydrological system and the origin of the water need to be known in detail. It is also plausible that long-term predictions of slope stability may be improved by knowledge of the hydrogeochemical evolution of clayey landslides. From the analysis, it is concluded that geochemistry is a potentially valuable technique for landslide research, but it is recognized that a lot of work still has to be done before the technique can be applied in engineering practice.
Atom exchange between aqueous Fe(II) and structural Fe in clay minerals.
Neumann, Anke; Wu, Lingling; Li, Weiqiang; Beard, Brian L; Johnson, Clark M; Rosso, Kevin M; Frierdich, Andrew J; Scherer, Michelle M
2015-03-03
Due to their stability toward reductive dissolution, Fe-bearing clay minerals are viewed as a renewable source of Fe redox activity in diverse environments. Recent findings of interfacial electron transfer between aqueous Fe(II) and structural Fe in clay minerals and electron conduction in octahedral sheets of nontronite, however, raise the question of whether Fe interaction with clay minerals is more dynamic than previously thought. Here, we use an enriched isotope tracer approach to simultaneously trace Fe atom movement from the aqueous phase to the solid ((57)Fe) and from the solid into the aqueous phase ((56)Fe). Over 6 months, we observed a significant decrease in the aqueous (57)Fe isotope fraction, with a fast initial decrease which slowed after 3 days and stabilized after about 50 days. For the aqueous (56)Fe isotope fraction, we observed a similar but opposite trend, indicating that Fe atom movement had occurred in both directions: from the aqueous phase into the solid and from the solid into the aqueous phase. We calculated that 5-20% of structural Fe in clay minerals NAu-1, NAu-2, and SWa-1 exchanged with aqueous Fe(II), which significantly exceeds the Fe atom layer exposed directly to solution. Calculations based on electron-hopping rates in nontronite suggest that the bulk conduction mechanism previously demonstrated for hematite [1] and suggested as an explanation for the significant Fe atom exchange observed in goethite [2] may be a plausible mechanism for Fe atom exchange in Fe-bearing clay minerals. Our finding of 5-20% Fe atom exchange in clay minerals indicates that we need to rethink how Fe mobility affects the macroscopic properties of Fe-bearing phyllosilicates and its role in Fe biogeochemical cycling, as well as its use in a variety of engineered applications, such as landfill liners and nuclear repositories.
New latent heat storage system with nanoparticles for thermal management of electric vehicles
NASA Astrophysics Data System (ADS)
Javani, N.; Dincer, I.; Naterer, G. F.
2014-12-01
In this paper, a new passive thermal management system for electric vehicles is developed. A latent heat thermal energy storage system with nanoparticles is designed and optimized. A genetic algorithm method is employed to minimize the length of the heat exchanger tubes. The results show that even the optimum length of a shell-and-tube heat exchanger becomes too large to be employed in a vehicle. This is mainly due to the very low thermal conductivity of the phase change material (PCM) which fills the shell side of the heat exchanger. A carbon nanotube (CNT) and PCM mixture is then studied, where the probability of nanotubes in a series configuration is defined as a deterministic design parameter. Various heat transfer rates, ranging from 300 W to 600 W, are utilized to optimize battery cooling options in the heat exchanger. The optimization results show that smaller tube diameters minimize the heat exchanger length. Furthermore, finned tubes lead to a greater heat exchanger length due to added heat transfer resistance. Increasing the CNT concentration decreases the optimum length of the heat exchanger and makes the improved thermal management system more efficient and competitive with air and liquid thermal management systems.
Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin
2013-01-01
Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) algorithm and the ensemble Kalman filter (EnKF) algorithm can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.
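The constrained update described above can be illustrated, under simplifying assumptions, as a small quadratic program that projects an analysis member back onto the set of nonnegative states with a prescribed total mass. This toy projection in the Euclidean norm (rather than the filter's error-covariance norm) is only meant to show the idea, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_analysis(x_analysis, total_mass):
    """Project an ensemble member onto {x >= 0, sum(x) = total_mass}.
    Toy Euclidean-norm projection; a full scheme would weight by the error covariance."""
    x_analysis = np.asarray(x_analysis, float)
    n = x_analysis.size
    res = minimize(
        fun=lambda x: 0.5 * np.sum((x - x_analysis) ** 2),
        x0=np.clip(x_analysis, 0.0, None),
        jac=lambda x: x - x_analysis,
        method="SLSQP",
        bounds=[(0.0, None)] * n,
        constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - total_mass}],
    )
    return res.x

member = np.array([0.4, -0.1, 0.3, 0.2])     # raw analysis with a negative value
print(constrained_analysis(member, total_mass=member.sum()))
```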
A formal model of interpersonal inference
Moutoussis, Michael; Trujillo-Barreto, Nelson J.; El-Deredy, Wael; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Introduction: We propose that active Bayesian inference—a general framework for decision-making—can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regards to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: (1) Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to “mentalizing” in the psychological literature, is based upon the outcomes of interpersonal exchanges. (2) We show how some well-known social-psychological phenomena (e.g., self-serving biases) can be explained in terms of active interpersonal inference. (3) Mentalizing naturally entails Bayesian updating of how people value social outcomes. Crucially this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modeling intersubject variability in mentalizing during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalizing is distorted. PMID:24723872
Using MODFLOW with CFP to understand conduit-matrix exchange in a karst aquifer during flooding
NASA Astrophysics Data System (ADS)
Spellman, P.; Screaton, E.; Martin, J. B.; Gulley, J.; Brown, A.
2011-12-01
Karst springs may reverse flow when allogenic runoff increases river stage faster than groundwater heads, driving exchange of surface water with groundwater in the surrounding aquifer matrix. Recharged flood water is rich in nutrients, metals, and organic matter and is undersaturated with respect to calcite. Understanding the physical processes controlling this exchange of water is critical to understanding metal cycling, redox chemistry and dissolution in the subsurface. Ultimately, the magnitude of conduit-matrix exchange should be governed by head gradients between the conduit and the aquifer, which are affected by the hydraulic conductivity of the matrix, conduit properties and antecedent groundwater heads. These parameters are interrelated and it is unknown which ones exert the greatest control over the magnitude of exchange. This study uses MODFLOW-2005 coupled with the Conduit Flow Processes (CFP) package to determine how physical properties of conduits and aquifers influence the magnitude of surface water-groundwater exchange. We use hydraulic data collected during spring reversals in a mapped underwater cave that sources Madison Blue Spring in north-central Florida to explore which factors are most important in governing exchange. The simulation focused on a major flood in 2009, when river stage increased by about 10 meters over 9 days. In a series of simulations, we varied hydraulic conductivity, conduit diameter, roughness height and tortuosity in addition to antecedent groundwater heads to estimate the relative effects of each parameter on the magnitude of conduit-matrix exchange. Each parameter was varied across plausible ranges for karst aquifers. Antecedent groundwater heads were varied using well data recorded through wet and dry seasons throughout the spring shed. We found hydraulic conductivity was the most important factor governing exchange. The volume of exchange increased by about 61% from the lowest value (1.8×10^-6 m/d) to the highest value (6 m/d) of matrix hydraulic conductivity. Other factors increased the amount of exchange by 1% or less, with tortuosity (which varied from 1 to 2) being most significant with a 1% increase, followed by conduit diameter (1 to 5 m) and roughness height (0.1 to 5 m) with increases in exchange of 0.4% and 0.3%, respectively. Antecedent aquifer conditions were also seen to exert important controls on exchange, with greater exchange occurring in floods following dry periods than during wet periods. These preliminary results indicate that heterogeneity of the hydraulic conductivity across karst aquifers will control the distribution of flood waters that enter into the aquifer matrix. Because flood waters are typically undersaturated with respect to the carbonate minerals, the routing of this infiltrated water into the highest hydraulic conductivity zones should enhance dissolution, thereby increasing hydraulic conductivity in a feedback loop that will enhance future infiltration of floodwater. Portions of the aquifer prone to infiltrating flood water and dissolution will also be most sensitive to contamination from surface water infiltration.
Bays, Rebecca B; Zabrucky, Karen M; Gagne, Phill
2012-01-01
In the current study we examined whether prevalence information and imagery encoding influence participants' general plausibility, personal plausibility, belief, and memory ratings for suggested childhood events. Results showed decreases in general and personal plausibility ratings for low prevalence events when encoding instructions were not elaborate; however, instructions to repeatedly imagine suggested events elicited personal plausibility increases for low-prevalence events, evidence that elaborate imagery negated the effect of our prevalence manipulation. We found no evidence of imagination inflation or false memory construction. We discuss critical differences in researchers' manipulations of plausibility and imagery that may influence results of false memory studies in the literature. In future research investigators should focus on the specific nature of encoding instructions when examining the development of false memories.
Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I
2011-09-26
A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
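The abstract does not give the penalty formula, so the sketch below shows one common style of adaptive penalty, in which the penalty weight grows with the fraction of infeasible individuals in the current population. It illustrates the general idea only, not the Nanakorrn formulation or the authors' hybrid GA, and the scaling constants are assumptions.

```python
def adaptive_penalized_fitness(population, objective, violation):
    """Minimization: fitness = objective + lambda * violation, where lambda grows
    with the fraction of infeasible individuals in the current population.
    `objective(x)` and `violation(x) >= 0` are user-supplied; this is a generic
    illustration, not the specific adaptive penalty used in the paper."""
    viols = [violation(x) for x in population]
    infeasible_fraction = sum(v > 0 for v in viols) / len(population)
    lam = 1.0 + 100.0 * infeasible_fraction      # assumed scaling constants
    return [objective(x) + lam * v for x, v in zip(population, viols)]

# Example: minimize x^2 subject to x >= 1 (violation = max(0, 1 - x)).
pop = [-0.5, 0.2, 1.3, 2.0]
scores = adaptive_penalized_fitness(pop, objective=lambda x: x * x,
                                    violation=lambda x: max(0.0, 1.0 - x))
print(scores)
```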
Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks
NASA Technical Reports Server (NTRS)
Rahmani, Amirreza; Mesbahi, Mehran; Fathpour, Nanaz; Hadaegh, Fred Y.
2008-01-01
In this work, we develop an approach to formation estimation by explicitly characterizing formation's system-theoretic attributes in terms of the underlying inter-spacecraft information-exchange network. In particular, we approach the formation observer/estimator design by relaxing the accessibility to the global state information by a centralized observer/estimator- and in turn- providing an analysis and synthesis framework for formation observers/estimators that rely on local measurements. The noveltyof our approach hinges upon the explicit examination of the underlying distributed spacecraft network in the realm of guidance, navigation, and control algorithmic analysis and design. The overarching goal of our general research program, some of whose results are reported in this paper, is the development of distributed spacecraft estimation algorithms that are scalable, modular, and robust to variations inthe topology and link characteristics of the formation information exchange network. In this work, we consider the observability of a spacecraft formation from a single observation node and utilize the agreement protocol as a mechanism for observing formation states from local measurements. Specifically, we show how the symmetry structure of the network, characterized in terms of its automorphism group, directly relates to the observability of the corresponding multi-agent system The ramification of this notion of observability over networks is then explored in the context of distributed formation estimation.
Chodera, John D; Shirts, Michael R
2011-11-21
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
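A minimal sketch of the key idea, updating the thermodynamic state index by sampling directly from its conditional distribution given the current configuration, is shown below for an expanded-ensemble-style setting where the state is an inverse temperature. The state weights and energies are placeholders, and the sketch illustrates only the general Gibbs-style state update, not the specific schemes analyzed in the paper.

```python
import numpy as np

def gibbs_update_state_index(U_x, betas, log_weights, rng):
    """Expanded-ensemble style Gibbs update: sample state index k from
    p(k | x) ∝ exp(-beta_k * U(x) + g_k), where U(x) is the potential energy of
    the current configuration and g_k are (assumed known) state weights."""
    log_w = -np.asarray(betas) * U_x + np.asarray(log_weights)
    log_w -= log_w.max()                 # numerical stability
    p = np.exp(log_w)
    p /= p.sum()
    return rng.choice(len(betas), p=p)

rng = np.random.default_rng(0)
betas = 1.0 / np.array([1.0, 1.5, 2.0, 3.0])     # inverse temperatures (illustrative)
g = np.zeros(4)                                  # flat weights for the sketch
print(gibbs_update_state_index(U_x=2.7, betas=betas, log_weights=g, rng=rng))
```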
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
Chattopadhyay, Aditya; Zheng, Min; Waller, Mark Paul; Priyakumar, U Deva
2018-05-23
Knowledge of the structure and dynamics of biomolecules is essential for elucidating the underlying mechanisms of biological processes. Given the stochastic nature of many biological processes, like protein unfolding, it's almost impossible that two independent simulations will generate the exact same sequence of events, which makes direct analysis of simulations difficult. Statistical models like Markov Chains, transition networks etc. help in shedding some light on the mechanistic nature of such processes by predicting long-time dynamics of these systems from short simulations. However, such methods fall short in analyzing trajectories with partial or no temporal information, for example, replica exchange molecular dynamics or Monte Carlo simulations. In this work we propose a probabilistic algorithm, borrowing concepts from graph theory and machine learning, to extract reactive pathways from molecular trajectories in the absence of temporal data. A suitable vector representation was chosen to represent each frame in the macromolecular trajectory (as a series of interaction and conformational energies) and dimensionality reduction was performed using principal component analysis (PCA). The trajectory was then clustered using a density-based clustering algorithm, where each cluster represents a metastable state on the potential energy surface (PES) of the biomolecule under study. A graph was created with these clusters as nodes with the edges learnt using an iterative expectation maximization algorithm. The most reactive path is conceived as the widest path along this graph. We have tested our method on RNA hairpin unfolding trajectory in aqueous urea solution. Our method makes the understanding of the mechanism of unfolding in RNA hairpin molecule more tractable. As this method doesn't rely on temporal data it can be used to analyze trajectories from Monte Carlo sampling techniques and replica exchange molecular dynamics (REMD).
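The "widest path" step described above can be illustrated with a standard maximum-bottleneck-path search on a weighted graph (a Dijkstra-style variant). The toy graph and edge weights below stand in for the learned cluster-transition graph and are not taken from the paper.

```python
import heapq

def widest_path(graph, source, target):
    """Maximum-bottleneck path: maximize the minimum edge weight along the path.
    graph[u] is a dict of neighbor -> positive edge weight (e.g., learned transition weight)."""
    best = {source: float("inf")}
    prev = {}
    heap = [(-float("inf"), source)]           # max-heap on bottleneck width
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u == target:
            break
        for v, w in graph[u].items():
            cand = min(width, w)
            if cand > best.get(v, 0.0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], best[target]

# Toy cluster-transition graph (weights are illustrative).
g = {"A": {"B": 0.9, "C": 0.4}, "B": {"D": 0.5}, "C": {"D": 0.8}, "D": {}}
print(widest_path(g, "A", "D"))
```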
Singer, Y
1997-08-01
A constant rebalanced portfolio is an asset allocation algorithm which keeps the same distribution of wealth among a set of assets along a period of time. Recently, there has been work on on-line portfolio selection algorithms which are competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991; Helmbold et al., 1996; Cover and Ordentlich, 1996). By their nature, these algorithms employ the assumption that high returns can be achieved using a fixed asset allocation strategy. However, stock markets are far from being stationary and in many cases the wealth achieved by a constant rebalanced portfolio is much smaller than the wealth achieved by an ad hoc investment strategy that adapts to changes in the market. In this paper we present an efficient portfolio selection algorithm that is able to track a changing market. We also describe a simple extension of the algorithm for the case of a general transaction cost, including the transactions cost models recently investigated in (Blum and Kalai, 1997). We provide a simple analysis of the competitiveness of the algorithm and check its performance on real stock data from the New York Stock Exchange accumulated during a 22-year period. On this data, our algorithm outperforms all the algorithms referenced above, with and without transaction costs.
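For context, the sketch below computes the wealth achieved by a constant rebalanced portfolio over a sequence of price relatives, which is the benchmark these online algorithms compete against. The price relatives shown are made-up, and transaction costs are ignored.

```python
import numpy as np

def crp_wealth(b, price_relatives):
    """Wealth of a constant rebalanced portfolio b (weights summing to 1) over
    price relatives x_t, where x_t[i] is the close/open price ratio of asset i on day t."""
    b = np.asarray(b, float)
    wealth = 1.0
    for x in np.asarray(price_relatives, float):
        wealth *= float(b @ x)          # rebalance back to b at the start of each day
    return wealth

# Two assets, three trading days (illustrative numbers only).
x = [[1.02, 0.98], [0.95, 1.10], [1.05, 1.00]]
print(crp_wealth([0.5, 0.5], x))
```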
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve algorithm performance accuracy include incorporating additional triggering factors such as tectonic activity, anthropogenic impacts and soil moisture into the algorithm calculation. Despite these limitations, the methodology presented in this regional evaluation is both straightforward to calculate and easy to interpret, making results transferable between regions and allowing findings to be placed within an inter-comparison framework. The regional algorithm scenario represents an important step in advancing regional and global-scale landslide hazard assessment and forecasting.
Preserving the Boltzmann ensemble in replica-exchange molecular dynamics.
Cooke, Ben; Schmidler, Scott C
2008-10-28
We consider the convergence behavior of replica-exchange molecular dynamics (REMD) [Sugita and Okamoto, Chem. Phys. Lett. 314, 141 (1999)] based on properties of the numerical integrators in the underlying isothermal molecular dynamics (MD) simulations. We show that a variety of deterministic algorithms favored by molecular dynamics practitioners for constant-temperature simulation of biomolecules fail either to be measure invariant or irreducible, and are therefore not ergodic. We then show that REMD using these algorithms also fails to be ergodic. As a result, the entire configuration space may not be explored even in an infinitely long simulation, and the simulation may not converge to the desired equilibrium Boltzmann ensemble. Moreover, our analysis shows that for initial configurations with unfavorable energy, it may be impossible for the system to reach a region surrounding the minimum energy configuration. We demonstrate these failures of REMD algorithms for three small systems: a Gaussian distribution (simple harmonic oscillator dynamics), a bimodal mixture of Gaussians distribution, and the alanine dipeptide. Examination of the resulting phase plots and equilibrium configuration densities indicates significant errors in the ensemble generated by REMD simulation. We describe a simple modification to address these failures based on a stochastic hybrid Monte Carlo correction, and prove that this is ergodic.
Wavefront Control Toolbox for James Webb Space Telescope Testbed
NASA Technical Reports Server (NTRS)
Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin
2007-01-01
We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to the optical models of the James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments present on the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed to converge to a local minimum of the root-mean-square (rms) wavefront error using a singular value decomposition technique on the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and the wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows for trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
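The core linear-algebra step described above, computing actuator corrections from an influence-function matrix via a truncated singular value decomposition, can be sketched as follows. The matrix sizes, truncation threshold, and gain are placeholders rather than values from the toolbox, and the sketch is written in Python rather than Matlab purely for illustration.

```python
import numpy as np

def svd_control_update(influence, wavefront_error, rel_tol=1e-3, gain=1.0):
    """Least-squares correction: solve influence @ delta ≈ -wavefront_error,
    truncating small singular values of the influence-function matrix for robustness."""
    U, s, Vt = np.linalg.svd(influence, full_matrices=False)
    keep = s > rel_tol * s[0]
    s_inv = np.where(keep, 1.0 / s, 0.0)
    pinv = (Vt.T * s_inv) @ U.T                 # truncated pseudoinverse
    return -gain * pinv @ wavefront_error

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 12))      # 200 wavefront samples, 12 degrees of freedom
true_cmd = rng.standard_normal(12)
wfe = A @ true_cmd                      # synthetic wavefront error
delta = svd_control_update(A, wfe)
print(np.linalg.norm(wfe + A @ delta))  # residual rms error driven toward zero
```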
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model utilizing housing characteristics and meteorological data, with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AERs based on region-specific inputs were compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of
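The structure of such an infiltration-based estimate can be illustrated with a commonly cited LBL/ASHRAE-style form, in which the infiltration airflow is driven by a stack term (indoor-outdoor temperature difference) and a wind term and divided by house volume to give an AER. The coefficients and inputs below are order-of-magnitude placeholders, and the sketch omits the window-opening adjustment described in the abstract.

```python
import math

def lbl_air_exchange_rate(leakage_area_cm2, volume_m3, delta_T_K, wind_speed_ms, Cs, Cw):
    """Sketch of an LBL/ASHRAE-style infiltration estimate:
    Q [L/s] = A_L [cm^2] * sqrt(Cs*|dT| + Cw*U^2),  AER [1/h] = 3.6 * Q / V [m^3].
    Cs and Cw are stack and wind coefficients that depend on house height and
    shielding; they must be supplied, and the values used below are placeholders."""
    q_ls = leakage_area_cm2 * math.sqrt(Cs * abs(delta_T_K) + Cw * wind_speed_ms ** 2)
    return 3.6 * q_ls / volume_m3   # convert L/s to m^3/h, then divide by house volume

# Illustrative inputs and placeholder coefficients (not calibrated values).
aer = lbl_air_exchange_rate(leakage_area_cm2=500.0, volume_m3=350.0,
                            delta_T_K=10.0, wind_speed_ms=3.0,
                            Cs=0.000145, Cw=0.000104)
print(f"AER ≈ {aer:.2f} per hour")
```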
Bouhaddou, Omar; Bennett, Jamie; Cromwell, Tim; Nixon, Graham; Teal, Jennifer; Davis, Mike; Smith, Robert; Fischetti, Linda; Parker, David; Gillen, Zachary; Mattison, John
2011-01-01
The Nationwide Health Information Network allows for the secure exchange of Electronic Health Records over the Internet. The Department of Veterans Affairs, Department of Defense, and Kaiser Permanente participated in an implementation of the NwHIN specifications in San Diego, California. This paper focuses primarily on patient involvement. Specifically, it describes how the shared patients were identified, were invited to participate and to provide consent for disclosing parts of their medical record, and were matched across organizations. A total of 1,144 patients were identified as shared patients. Invitation letters containing consent forms were mailed and resulted in 42% participation. Invalid consent forms were a significant issue (25%). Initially, the identity matching algorithms yielded a low success rate (5%). However, elimination of certain traits and abbreviations and the use of probabilistic algorithms significantly increased the matching rate. Access to information from external sources better informs providers, improves decisions and efficiency, and helps meet the meaningful use criteria. PMID:22195064
Modelling Trial-by-Trial Changes in the Mismatch Negativity
Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.
2013-01-01
The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
Uninformative Prior Multiple Target Tracking Using Evidential Particle Filters
NASA Astrophysics Data System (ADS)
Worthy, J. L., III; Holzinger, M. J.
Space situational awareness requires the ability to initialize state estimation from short measurements and the reliable association of observations to support the characterization of the space environment. The electro-optical systems used to observe space objects cannot fully characterize the state of an object given a short, unobservable sequence of measurements. Further, it is difficult to associate these short-arc measurements if many such measurements are generated through the observation of a cluster of satellites, debris from a satellite break-up, or spurious detections of an object. An optimization-based, probabilistic short-arc observation association approach coupled with a Dempster-Shafer based evidential particle filter in a multiple target tracking framework is developed and proposed to address these problems. The optimization-based approach has been shown in the literature to be computationally efficient and can produce probabilities of association, state estimates, and covariances while accounting for systematic errors. Rigorous application of Dempster-Shafer theory is shown to be effective at enabling ignorance to be properly accounted for in estimation by augmenting probability with belief and plausibility. The proposed multiple hypothesis framework uses a non-exclusive hypothesis formulation of Dempster-Shafer theory to assign belief mass to candidate association pairs and generates tracks based on the belief-to-plausibility ratio. The proposed algorithm is demonstrated using simulated observations of a GEO satellite breakup scenario.
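Belief and plausibility, which the framework above uses to augment association probabilities, are both derived from a basic mass assignment over sets of hypotheses. A minimal sketch with hypothetical hypothesis labels (not the paper's actual association pairs):

```python
# Basic probability assignment over subsets of the frame {A, B, C}.
mass = {
    frozenset({'A'}): 0.4,
    frozenset({'B'}): 0.2,
    frozenset({'A', 'B'}): 0.3,       # mass committed to "A or B" (partial ignorance)
    frozenset({'A', 'B', 'C'}): 0.1,  # total ignorance
}

def belief(hypothesis):
    """Sum of mass over all focal sets fully contained in the hypothesis."""
    return sum(m for s, m in mass.items() if s <= hypothesis)

def plausibility(hypothesis):
    """Sum of mass over all focal sets that intersect the hypothesis."""
    return sum(m for s, m in mass.items() if s & hypothesis)

h = frozenset({'A'})
print(belief(h), plausibility(h))      # 0.4 and 0.8
print(belief(h) / plausibility(h))     # a belief-to-plausibility style ratio
```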
Dyspnoea after antiplatelet agents: the AZD6140 controversy.
Serebruany, V L; Stebbing, J; Atar, D
2007-03-01
Recent randomised studies suggest that the experimental oral reversible platelet P2Y12 receptor inhibitor AZD6140 causes dyspnoea. This also raises similar concerns about the parent compound and another adenosine triphosphate (ATP) analogue (AR-69931MX or cangrelor), which is currently in a Phase 3 trial in patients undergoing coronary interventions. We analysed package inserts and available clinical trial safety data for antiplatelet agents with regard to the incidence of dyspnoea. We found that dyspnoea is a very rare complication of the presently approved platelet inhibitors, mostly caused by underlying disease rather than by antiplatelet therapy per se. The main reasons for respiratory distress after oral (AZD6140) and intravenous (cangrelor) agents may be the development of mild asymptomatic thrombotic thrombocytopenic purpura, fluid retention and dyspnoea related to the reversible nature of these drugs. Also, these agents are ATP analogues that rapidly metabolise to adenosine, a well-known bronchoprovocator that can itself cause dyspnoea. In summary, dyspnoea is seldom considered, there are no treatment algorithms when it does occur, and although plausible mechanisms exist, the true cause of dyspnoea in exposed individuals is unknown. Additional pulmonary function testing, immunological investigations and platelet receptor studies are urgently needed to determine the cause of dyspnoea after AZD6140 and to establish how such serious adverse reactions can be prevented, or at least minimised, given the potential concerns these findings raise about this drug.
IoT security with one-time pad secure algorithm based on the double memory technique
NASA Astrophysics Data System (ADS)
Wiśniewski, Remigiusz; Grobelny, Michał; Grobelna, Iwona; Bazydło, Grzegorz
2017-11-01
Secure encryption of data in the Internet of Things is especially important, as large amounts of information are exchanged every day and the number of attack vectors on IoT elements continues to increase. In this paper a novel symmetric encryption method is proposed. The idea is based on the one-time pad technique. The proposed solution applies a double-memory concept to secure transmitted data. The presented algorithm is intended as part of a communication protocol and has been initially validated against known security issues.
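The one-time pad core of such a scheme reduces to a bitwise XOR of the message with an equally long, single-use random pad; the double-memory key handling proposed in the paper is not reproduced here. A minimal sketch:

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR each message byte with a same-length, single-use pad."""
    if len(pad) != len(data):
        raise ValueError("pad must be exactly as long as the data")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"sensor reading: 21.5 C"
pad = secrets.token_bytes(len(message))     # must be random, secret, used once
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # XOR with the same pad decrypts
```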
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dmitriy Morozov, Tom Peterka
2014-07-29
Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets. As the scale of simulations and observations surpasses billions of particles, a distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this software is a distributed-memory parallel Delaunay and Voronoi tessellation algorithm based on existing serial computational geometry libraries that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include the addition of periodic and wall boundary conditions.
Zewdie, Getie A.; Cox, Dennis D.; Neely Atkinson, E.; Cantor, Scott B.; MacAulay, Calum; Davies, Kalatu; Adewole, Isaac; Buys, Timon P. H.; Follen, Michele
2012-01-01
Abstract. Optical spectroscopy has been proposed as an accurate and low-cost alternative for detection of cervical intraepithelial neoplasia. We previously published an algorithm using optical spectroscopy as an adjunct to colposcopy and found good accuracy (sensitivity=1.00 [95% confidence interval (CI)=0.92 to 1.00], specificity=0.71 [95% CI=0.62 to 0.79]). Those results used measurements taken by expert colposcopists as well as the colposcopy diagnosis. In this study, we trained and tested an algorithm for the detection of cervical intraepithelial neoplasia (i.e., identifying those patients who had histology reading CIN 2 or worse) that did not include the colposcopic diagnosis. Furthermore, we explored the interaction between spectroscopy and colposcopy, examining the importance of probe placement expertise. The colposcopic diagnosis-independent spectroscopy algorithm had a sensitivity of 0.98 (95% CI=0.89 to 1.00) and a specificity of 0.62 (95% CI=0.52 to 0.71). The difference in the partial area under the ROC curves between spectroscopy with and without the colposcopic diagnosis was statistically significant at the patient level (p=0.05) but not the site level (p=0.13). The results suggest that the device has high accuracy over a wide range of provider accuracy and hence could plausibly be implemented by providers with limited training. PMID:22559693
Welch, Catherine A; Petersen, Irene; Bartlett, Jonathan W; White, Ian R; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Carpenter, James
2014-01-01
Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures ignoring temporal ordering of data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, some alternative strategies must be considered. One approach is to divide data into time blocks and implement MI independently at each block. An alternative approach is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues, by only conditioning on measurements, which are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared efficiency of estimated coefficients from a complete records analysis, MI of data in the baseline time block and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of data available, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases plausibility of the missing at random assumption by using repeated measures over time of variables whose baseline values may be missing. PMID:24782349
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation of the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variations in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of the equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape then takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with the largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of the primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms such as R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
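As a minimal illustration of the basic MUSIC step described above, the sketch below projects candidate signatures onto the noise subspace of a sample covariance in a generic setting; the steering vectors are hypothetical rather than ERT secondary-source signatures:

```python
import numpy as np

def music_spectrum(snapshots, candidate_signatures, n_sources):
    """Project candidate source signatures onto the noise subspace.

    snapshots            : (n_sensors, n_snapshots) data matrix.
    candidate_signatures : (n_sensors, n_candidates) signatures of plausible
                           sources; hypothetical here.
    Returns a pseudospectrum: large values indicate likely source signatures.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
    noise_subspace = eigvecs[:, :-n_sources]                  # smallest-eigenvalue vectors
    proj = noise_subspace.conj().T @ candidate_signatures
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# Two uncorrelated sources mixed into 8 sensors, plus weak noise
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 2))
X = A @ rng.normal(size=(2, 200)) + 0.1 * rng.normal(size=(8, 200))
candidates = np.hstack([A, rng.normal(size=(8, 20))])   # true + random signatures
spectrum = music_spectrum(X, candidates, n_sources=2)
print(np.argsort(spectrum)[-2:])   # the true signatures (indices 0, 1) should peak
```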
Lin, Long-Hui; Ji, Xiang; Diong, Cheong-Hoong; Du, Yu; Lin, Chi-Xian
2010-08-01
Butterfly lizards of the genus Leiolepis (Agamidae) are widely distributed in coastal regions of Southeast Asia and South China, with Reeves's butterfly lizard Leiolepis reevesii having the most northerly distribution, ranging from Vietnam to South China. To assess the genetic diversity within L. reevesii, and its population structure and evolutionary history, we sequenced 1004 bp of cytochrome b for 448 individuals collected from 28 localities covering almost the whole range of the lizard. One hundred and forty variable sites were observed, and 93 haplotypes were defined. We identified three genetically distinct clades, of which Clade A includes haplotypes mainly from southeastern Hainan, Clade B from Guangdong and northern Hainan, and Clade C from Vietnam and the other localities in China. Clade A was well distinguished and divergent from the other two. The Wuzhishan and Yinggeling mountain ranges were important barriers limiting gene exchange between populations on both sides of the mountains, whereas the Gulf of Tonkin and the Qiongzhou Strait were not. One plausible scenario to explain our genetic data is a historical dispersal of L. reevesii from Vietnam to Hainan, followed by a second wave of dispersal from Hainan to Guangdong and Guangxi. Another equally plausible scenario is a historically widespread population that has been structured by vicariant factors such as the mountains in Hainan and sea level fluctuations. Copyright 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang
2015-12-15
Highlights: •Problems concerning multi-compartment population balance equations are studied. •A class of fragmentation weight transfer functions is presented. •Three stochastic weighted algorithms are compared against the direct simulation algorithm. •The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. •The algorithms are applied to a multi-dimensional granulation model. Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to those of the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant number of large particles and high fragmentation rates.
Bernatowicz, Piotr; Nowakowski, Michał; Dodziuk, Helena; Ejchart, Andrzej
2006-08-01
Association constants of weak molecular complexes can be determined by analysis of chemical shift variations resulting from changes in the guest-to-host concentration ratio. In the regime of very fast exchange, i.e., when the exchange rate is several orders of magnitude larger than the difference in Larmor angular frequency of the observed resonance between the free and complexed molecule, the apparent position of the averaged resonance is a population-weighted mean of the resonances of the particular forms involved in the equilibrium. The assumption of very fast exchange is often, however, tacitly adopted in the literature even in cases where the process of interest is much slower than required. We show that such an unjustified simplification may, under certain circumstances, lead to significant underestimation of the association constant and, in consequence, to non-negligible errors in the determined Gibbs free energy. We present a general method, based on iterative numerical NMR line shape analysis, which allows one to compensate for chemical exchange effects and delivers both the correct association constants and the exchange rates. The latter are not delivered by the simpler titration analysis. Practical application of our algorithm is illustrated by the case of camphor-alpha-cyclodextrin complexes.
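For reference, in the very-fast-exchange limit assumed by the simpler titration treatment, the observed shift is a population-weighted average, which is what links the titration curve to the association constant; a compact statement of the standard 1:1 binding case (symbols introduced here for illustration) is:

```latex
\[
\delta_{\mathrm{obs}} = p_{\mathrm{free}}\,\delta_{\mathrm{free}}
                      + p_{\mathrm{bound}}\,\delta_{\mathrm{bound}},
\qquad p_{\mathrm{free}} + p_{\mathrm{bound}} = 1,
\qquad K_a = \frac{[\mathrm{HG}]}{[\mathrm{H}][\mathrm{G}]},
\qquad \Delta G^{\circ} = -RT \ln K_a .
\]
```

When the exchange rate is not much larger than the frequency difference, this averaging relation no longer holds exactly, which is the regime the line-shape analysis above is designed to handle.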
Asynchronous Replica Exchange Software for Grid and Heterogeneous Computing.
Gallicchio, Emilio; Xia, Junchao; Flynn, William F; Zhang, Baofeng; Samlalsingh, Sade; Mentes, Ahmet; Levy, Ronald M
2015-11-01
Parallel replica exchange sampling is an extended ensemble technique often used to accelerate the exploration of the conformational ensemble of atomistic molecular simulations of chemical systems. Inter-process communication and coordination requirements have historically discouraged the deployment of replica exchange on distributed and heterogeneous resources. Here we describe the architecture of a software (named ASyncRE) for performing asynchronous replica exchange molecular simulations on volunteered computing grids and heterogeneous high performance clusters. The asynchronous replica exchange algorithm on which the software is based avoids centralized synchronization steps and the need for direct communication between remote processes. It allows molecular dynamics threads to progress at different rates and enables parameter exchanges among arbitrary sets of replicas independently from other replicas. ASyncRE is written in Python following a modular design conducive to extensions to various replica exchange schemes and molecular dynamics engines. Applications of the software for the modeling of association equilibria of supramolecular and macromolecular complexes on BOINC campus computational grids and on the CPU/MIC heterogeneous hardware of the XSEDE Stampede supercomputer are illustrated. They show the ability of ASyncRE to utilize large grids of desktop computers running the Windows, MacOS, and/or Linux operating systems as well as collections of high performance heterogeneous hardware devices.
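For orientation, the exchange decision at the heart of any replica exchange scheme is a Metropolis test on an arbitrary replica pair; the sketch below shows the simple temperature-exchange case (ASyncRE supports more general parameter exchanges), with hypothetical replica states and arbitrary units:

```python
import math
import random

def try_temperature_swap(beta_i, energy_i, beta_j, energy_j):
    """Metropolis criterion for swapping two replicas at inverse temperatures
    beta_i and beta_j with instantaneous potential energies energy_i, energy_j.
    Returns True if the exchange is accepted."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Hypothetical replica states (inverse temperature, current energy); units arbitrary
replicas = [(1.0 / t, random.uniform(-120.0, -80.0)) for t in (300, 330, 360, 400)]
i, j = random.sample(range(len(replicas)), 2)    # any idle pair, as in an asynchronous scheme
(bi, ei), (bj, ej) = replicas[i], replicas[j]
if try_temperature_swap(bi, ei, bj, ej):
    replicas[i], replicas[j] = (bi, ej), (bj, ei)  # exchange configurations/energies
```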
NASA Astrophysics Data System (ADS)
Tóth, B.; Lillo, F.; Farmer, J. D.
2010-11-01
We introduce an algorithm for the segmentation of a class of regime-switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime-switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.
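The recursive segmentation idea being generalized can be sketched with a plain mean-shift t-statistic criterion, as below; this is a simplified stand-in for the compound-Poisson statistics of the paper, and the significance threshold is a hypothetical constant rather than the published significance function:

```python
import numpy as np

def best_split(x, min_len):
    """Return (index, |t|-statistic) of the strongest mean-shift cut point in x."""
    best_i, best_t = None, 0.0
    for i in range(min_len, len(x) - min_len):
        left, right = x[:i], x[i:]
        s = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
        t = abs(left.mean() - right.mean()) / s if s > 0 else 0.0
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t

def segment(x, t_threshold=4.0, min_len=20):
    """Recursively cut x into patches while the strongest cut is significant."""
    if len(x) < 2 * min_len:
        return [x]
    i, t = best_split(x, min_len)
    if i is None or t < t_threshold:
        return [x]
    return segment(x[:i], t_threshold, min_len) + segment(x[i:], t_threshold, min_len)

rng = np.random.default_rng(2)
series = np.concatenate([rng.poisson(3, 300), rng.poisson(8, 300),
                         rng.poisson(2, 300)]).astype(float)
print([len(p) for p in segment(series)])   # roughly [300, 300, 300] for this example
```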
Sriram, Vinay K; Montgomery, Doug
2017-07-01
The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attaining greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes various optimization algorithms for the validation of digitally signed BGP updates and evaluates their performance and efficiency. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
Distributed Coordination of Energy Storage with Distributed Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Stoorvogel, Antonie A.
2016-07-18
With a growing emphasis on energy efficiency and system flexibility, a great effort has been made recently in developing distributed energy resources (DER), including distributed generators and energy storage systems. This paper first formulates an optimal coordination problem considering constraints at both system and device levels, including the power balance constraint, generator output limits, storage energy and power capacity, and charging/discharging efficiencies. An algorithm is then proposed to dynamically and automatically coordinate DERs in a distributed manner. With the proposed algorithm, the agent at each DER only maintains a local incremental cost and updates it through information exchange with a few neighbors, without relying on any central decision maker. Simulation results are used to illustrate and validate the proposed algorithm.
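A toy sketch of a consensus-style incremental-cost update for a quadratic-cost dispatch problem is shown below; the network weights, cost coefficients, and feedback gain are hypothetical, device limits and storage efficiencies are omitted, and the update rule is a generic consensus-plus-mismatch-tracking scheme rather than the paper's exact algorithm.

```python
import numpy as np

# Toy 4-agent system: quadratic costs C_i(P) = a_i P^2 + b_i P (hypothetical numbers)
a = np.array([0.10, 0.08, 0.12, 0.09])
b = np.array([2.0, 2.5, 1.8, 2.2])
demand = np.array([30.0, 20.0, 25.0, 25.0])      # local loads, total 100

# Doubly stochastic weights of a ring communication graph (self + two neighbors)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

lam = b.copy()                 # local incremental-cost estimates
P = np.zeros(4)                # local generation
y = demand - P                 # local variables tracking the power mismatch
eps = 0.02                     # small feedback gain (an assumption)

for _ in range(2000):
    lam = W @ lam + eps * y                # neighbor averaging + mismatch feedback
    P_new = (lam - b) / (2 * a)            # local optimal output at the current price
    y = W @ y - (P_new - P)                # dynamic average consensus on the mismatch
    P = P_new

print(np.round(lam, 3))        # incremental costs should agree across agents
print(round(P.sum(), 2))       # total generation should approach total demand (100)
```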
Solution of the Fokker-Planck equation with mixing of angular harmonics by beam-beam charge exchange
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, D.R.
1989-09-01
A method for solving the linear Fokker-Planck equation with anisotropic beam-beam charge exchange loss is presented. The 2-D equation is transformed to a system of coupled 1-D equations which are solved iteratively as independent equations. Although isotropic approximations to the beam-beam losses lead to inaccurate fast ion distributions, typically only a few angular harmonics are needed to include accurately the effect of the beam-beam charge exchange loss on the usual integrals of the fast ion distribution. Consequently, the algorithm converges very rapidly and is much more efficient than a 2-D finite difference method. A convenient recursion formula for the coupling coefficients is given and generalization of the method is discussed. 13 refs., 2 figs.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
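The splitting into local and global tasks can be illustrated with a plain power iteration in which the only network-wide quantities are an average and a norm; in the sketch below the average consensus (or CoMAC) step is emulated by an exact mean, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, dim = 30, 5
# Each node holds one local observation vector (a row of X); the network's goal
# is the dominant eigenvalue of the sample covariance R = X^T X / n_nodes.
X = rng.normal(size=(n_nodes, dim)) @ np.diag([3.0, 1.0, 1.0, 0.5, 0.5])

v = rng.normal(size=dim)
v /= np.linalg.norm(v)
for _ in range(100):
    # Local task at node k: y_k = x_k * (x_k . v)  -- no raw data leaves the node
    local = X * (X @ v)[:, None]
    # Global task: the network-wide average of the y_k, which in a decentralized
    # implementation would be obtained via average consensus or CoMAC.
    Rv = local.mean(axis=0)
    eigval = v @ Rv                 # Rayleigh quotient estimate of the eigenvalue
    v = Rv / np.linalg.norm(Rv)     # normalization (another global quantity)

print(round(eigval, 3))
print(round(np.linalg.eigvalsh(X.T @ X / n_nodes)[-1], 3))   # centralized reference
```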
NASA Astrophysics Data System (ADS)
Nawir, Mukrimah; Amir, Amiza; Lynn, Ong Bi; Yaakob, Naimah; Badlishah Ahmad, R.
2018-05-01
The rapid growth of networked technologies exposes them to various attacks, because devices frequently exchange data over the Internet and generate large-scale data that must be handled. Moreover, network anomaly detection using machine learning is hampered by the scarcity of publicly available labelled network datasets, which has led many researchers to keep using the most common network dataset (KDDCup99) even though it is no longer well suited for evaluating machine learning (ML) algorithms for classification. Several issues regarding the available labelled network datasets are discussed in this paper. The aim of this paper is to build a network anomaly detection system using machine learning algorithms that is efficient, effective and fast. The findings show that the AODE algorithm performs well in terms of accuracy and processing time for binary classification on the UNSW-NB15 dataset.
New Secure E-mail System Based on Bio-Chaos Key Generation and Modified AES Algorithm
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Radi, A. M.
2018-05-01
E-mail messages are exchanged between the sender's mailbox and the recipient's mailbox over open systems and insecure networks. These messages may be vulnerable to eavesdropping, which poses a real threat to privacy and data integrity from unauthorized persons. E-mail security includes the following properties: confidentiality, authentication and message integrity. A strong encryption algorithm is therefore needed to encrypt e-mail messages, such as the Advanced Encryption Standard (AES) or the Data Encryption Standard (DES), possibly combined with biometric recognition and a chaotic system. The proposed secure e-mail system uses a modified AES algorithm with a secret bio-chaos key that consists of a biometric component (fingerprint) and chaotic systems (Lu and Lorenz). This modification makes the proposed system more sensitive and random. The execution time for both encryption and decryption of the proposed system is much lower than that of the original AES, and the system remains compatible with all mail servers.
Computing rank-revealing QR factorizations of dense matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science
1998-06-01
We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out but currently is ignored because of the computational cost of dealing with it.
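For orientation only, the sketch below uses SciPy's column-pivoted QR (not the authors' windowed block algorithm) to show how the diagonal of R exposes numerical rank; the tolerance is an illustrative choice:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)
# Build a 200 x 100 matrix of numerical rank 30
A = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 100))

Q, R, piv = qr(A, pivoting=True, mode='economic')   # QR with column pivoting
diag = np.abs(np.diag(R))
tol = diag[0] * 1e-10            # illustrative threshold on the R diagonal
rank = int(np.sum(diag > tol))
print(rank)                      # 30 for this example
```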
Pilgrims sailing the Titanic: plausibility effects on memory for misinformation.
Hinze, Scott R; Slaten, Daniel G; Horton, William S; Jenkins, Ryan; Rapp, David N
2014-02-01
People rely on information they read even when it is inaccurate (Marsh, Meade, & Roediger, Journal of Memory and Language 49:519-536, 2003), but how ubiquitous is this phenomenon? In two experiments, we investigated whether this tendency to encode and rely on inaccuracies from text might be influenced by the plausibility of misinformation. In Experiment 1, we presented stories containing inaccurate plausible statements (e.g., "The Pilgrims' ship was the Godspeed"), inaccurate implausible statements (e.g., . . . the Titanic), or accurate statements (e.g., . . . the Mayflower). On a subsequent test of general knowledge, participants relied significantly less on implausible than on plausible inaccuracies from the texts but continued to rely on accurate information. In Experiment 2, we replicated these results with the addition of a think-aloud procedure to elicit information about readers' noticing and evaluative processes for plausible and implausible misinformation. Participants indicated more skepticism and less acceptance of implausible than of plausible inaccuracies. In contrast, they often failed to notice, completely ignored, and at times even explicitly accepted the misinformation provided by plausible lures. These results offer insight into the conditions under which reliance on inaccurate information occurs and suggest potential mechanisms that may underlie reported misinformation effects.
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
Lang, Jun
2012-01-30
In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
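The three-pass message flow underlying the scheme can be illustrated with the classic commutative-exponentiation variant over a prime field; the paper instead uses the MPFRFT as the commutative operation, so this is only an analogy for the exchange pattern, with hypothetical key names:

```python
import math
import secrets

p = 2**127 - 1          # a Mersenne prime; large enough for illustration only

def keypair():
    """Random exponent e coprime with p-1, plus its inverse d, so (m^e)^d = m mod p."""
    while True:
        e = secrets.randbelow(p - 3) + 2
        if math.gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

def lock(x, e):
    """Apply a commutative 'lock': modular exponentiation."""
    return pow(x, e, p)

m = int.from_bytes(b"shared image tile", "big") % p   # the message, as an integer

a_enc, a_dec = keypair()     # Alice's private exponents
b_enc, b_dec = keypair()     # Bob's private exponents

c1 = lock(m, a_enc)          # pass 1: Alice -> Bob, locked with Alice's key
c2 = lock(c1, b_enc)         # pass 2: Bob -> Alice, locked with both keys
c3 = lock(c2, a_dec)         # pass 3: Alice removes her lock, sends to Bob
assert lock(c3, b_dec) == m  # Bob removes his lock and recovers the message
```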
Guan, Hongjun; Dai, Zongli; Zhao, Aiwu; He, Jie
2018-01-01
In this paper, we propose a hybrid method to forecast stock prices, called the High-order-fuzzy-fluctuation-Trends-based Back Propagation (HTBP) Neural Network model. First, we compare each value of the historical training data with the previous day's value to obtain a fluctuation trend time series (FTTS). On this basis, the FTTS is fuzzified into a fuzzy time series (FFTS) according to the amplitude and direction of increasing, equal and decreasing fluctuations. Since the relationship between the FFTS and future fluctuation trends is nonlinear, the HTBP neural network algorithm is used to find the mapping rules through self-learning. Finally, the output of the algorithm is used to predict future fluctuations. The proposed model provides some innovative features: (1) it combines fuzzy set theory and a neural network algorithm to avoid the overfitting problems that exist in traditional models; (2) the BP neural network algorithm can intelligently explore the internal rules of the actual sequential data, without the need to analyze the influence factors of specific rules and their paths of action; (3) the hybrid model can reasonably remove noise from the internal rules by proper fuzzy treatment. This paper takes the TAIEX data set of the Taiwan stock exchange as an example, and compares and analyzes the prediction performance of the model. The experimental results show that this method can predict the stock market in a very simple way. At the same time, we use this method to predict the Shanghai stock exchange composite index, further verifying the effectiveness and universality of the method. PMID:29420584
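As an illustration of the first two preprocessing steps (fluctuation series and its coarse fuzzification into increase/equal/decrease trends), here is a small sketch with a hypothetical threshold; the paper's actual fuzzification and the HTBP network itself are not reproduced:

```python
import numpy as np

def fuzzy_fluctuation_trends(prices, equal_band=0.5):
    """Turn a price series into coarse trend symbols.

    Differences smaller than `equal_band` (in price units; an illustrative
    threshold, not the paper's fuzzification) are treated as 'equal'.
    Returns the fluctuation series and one 'up'/'eq'/'down' label per day after the first.
    """
    diffs = np.diff(np.asarray(prices, dtype=float))   # fluctuation trend series
    labels = np.where(diffs > equal_band, "up",
             np.where(diffs < -equal_band, "down", "eq"))
    return diffs, labels.tolist()

closes = [100.0, 101.2, 101.4, 99.8, 99.9, 101.0]
diffs, trends = fuzzy_fluctuation_trends(closes)
print(list(np.round(diffs, 1)))   # [1.2, 0.2, -1.6, 0.1, 1.1]
print(trends)                     # ['up', 'eq', 'down', 'eq', 'up']
```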
Ma, Jian; Lu, Chen; Liu, Hongmei
2015-01-01
The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010
Impacts of Freshets on Hyporheic Exchange Flow under Neutral Conditions
NASA Astrophysics Data System (ADS)
Singh, T.; Wu, L.; Worman, A. L. E.; Hannah, D. M.; Krause, S.; Gomez-Velez, J. D.
2016-12-01
Hyporheic zones (HZs) are characterized by the exchange of water, solutes, momentum and energy between streams and aquifers. Hyporheic exchange flow (HEF) is driven by pressure gradients along the sediment-water interface, which in turn are caused by interactions between channel flow and bed topography. With this in mind, changes in channel flow can have significant effects in the hydrodynamic and transport characteristics of HZs. While previous research has improved our understanding of the drivers and controls of HEF, little attention has been paid to the potential impacts of transient dynamic hydrologic forcing, such as freshets. In this study, we use a two-dimensional, homogeneous flow and transport model with a time-varying pressure distribution at the sediment-water interface to explore the dynamic development of HZ characteristics in response to discharge fluctuations (i.e., freshets). With this model, we explore a wide range of plausible scenarios for discharge and bed geometry. Our modelling results show that a single freshet alters the spatial extent and penetration of the HZ, though quantitatively different, when investigated using hydrological (streamlines/flow field) and geochemical (>90% of surface water in streambed) approaches of HZ. We summarize the results of a detailed sensitivity analysis where the effects of hydraulic geometry (slope, amplitude of the streambed), flood characteristics (duration, skewness and magnitude of the flood wave) and biogeochemical timescales (time-scale for oxygen consumption) on the HZ's extent, mean age, and oxic/anoxic zonation are explored. Taking into consideration these multiple morphological characteristics along with variable hydrological controls has clear potential to facilitate process understanding and upscaling.
Experimental Concepts for Testing Seismic Hazard Models
NASA Astrophysics Data System (ADS)
Marzocchi, W.; Jordan, T. H.
2015-12-01
Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.
The morphing of geographical features by Fourier transformation.
Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model for vector geographical data based on the Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of this model is linearly related to the number of points on the shape boundary and the truncation value n of the Fourier expansion. The morphing produced by the Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
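A minimal sketch of the three steps (boundary to Fourier coefficients, coefficient blending between the two scales, inverse transform) using complex Fourier descriptors of a closed outline; equal resampling of the two boundaries and the mirror trick for open lines are assumed to be handled elsewhere:

```python
import numpy as np

def morph_shapes(boundary_a, boundary_b, t, n_terms=16):
    """Blend two closed boundaries (same point count) in the Fourier domain.

    boundary_a, boundary_b : (N, 2) arrays of x, y vertices sampled the same way.
    t                      : 0.0 gives shape A, 1.0 gives shape B.
    n_terms                : number of low-frequency coefficients retained.
    """
    za = boundary_a[:, 0] + 1j * boundary_a[:, 1]     # encode vertices as complex numbers
    zb = boundary_b[:, 0] + 1j * boundary_b[:, 1]
    Fa, Fb = np.fft.fft(za), np.fft.fft(zb)
    freqs = np.fft.fftfreq(len(za)) * len(za)
    mask = (np.abs(freqs) <= n_terms).astype(float)   # truncate the expansion
    F = ((1 - t) * Fa + t * Fb) * mask                # interpolate the coefficients
    z = np.fft.ifft(F)                                # back to vertex coordinates
    return np.column_stack([z.real, z.imag])

# Example: morph a boxy outline halfway towards a circle (64 samples each)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
box = np.column_stack([np.clip(np.cos(theta) * 1.5, -1, 1),
                       np.clip(np.sin(theta) * 1.5, -1, 1)])
halfway = morph_shapes(box, circle, t=0.5)
print(halfway.shape)          # (64, 2) intermediate outline
```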
Meshless Modeling of Deformable Shapes and their Motion
Adams, Bart; Ovsjanikov, Maks; Wand, Michael; Seidel, Hans-Peter; Guibas, Leonidas J.
2010-01-01
We present a new framework for interactive shape deformation modeling and key frame interpolation based on a meshless finite element formulation. Starting from a coarse nodal sampling of an object’s volume, we formulate rigidity and volume preservation constraints that are enforced to yield realistic shape deformations at interactive frame rates. Additionally, by specifying key frame poses of the deforming shape and optimizing the nodal displacements while targeting smooth interpolated motion, our algorithm extends to a motion planning framework for deformable objects. This allows reconstructing smooth and plausible deformable shape trajectories in the presence of possibly moving obstacles. The presented results illustrate that our framework can handle complex shapes at interactive rates and hence is a valuable tool for animators to realistically and efficiently model and interpolate deforming 3D shapes. PMID:24839614
Bouhaddou, Omar; Davis, Mike; Donahue, Margaret; Mallia, Anthony; Griffin, Stephania; Teal, Jennifer; Nebeker, Jonathan
2016-01-01
Care coordination across healthcare organizations depends upon health information exchange. Various policies and laws govern permissible exchange, particularly when the information includes privacy-sensitive conditions. The Department of Veterans Affairs (VA) privacy policy has required either blanket consent or manual sensitivity review prior to exchanging any health information. The VA experience has been an expensive, administratively demanding burden on staff and Veterans alike, particularly for patients without privacy-sensitive conditions. Until recently, automatic sensitivity determination has not been feasible. This paper proposes a policy-driven algorithmic approach (Security Labeling Service, or SLS) to health information exchange that automatically detects the presence or absence of specific privacy-sensitive conditions and then requires a Veteran-signed consent for release only when such conditions are actually present. The SLS was applied successfully to a sample of real patient Consolidated Clinical Document Architecture (C-CDA) documents. The SLS identified standard terminology codes both by parsing structured entries and by analyzing textual information using Natural Language Processing (NLP). PMID:28269828
NASA Astrophysics Data System (ADS)
Damle, R. M.; Ardhapurkar, P. M.; Atrey, M. D.
2016-12-01
In J-T cryocoolers operating with mixed refrigerants (nitrogen-hydrocarbons), the recuperative heat exchange takes place under two-phase conditions. Simultaneous boiling of the low-pressure stream and condensation of the high-pressure stream results in higher heat transfer coefficients. The mixture composition, operating conditions and heat exchanger design are crucial for obtaining the required cryogenic temperature. In this work, a one-dimensional transient algorithm is developed for the simulation of two-phase heat transfer in the recuperative heat exchanger of a mixed refrigerant J-T cryocooler. A modified correlation is used for flow boiling of the high-pressure fluid, while different condensation correlations, with and without a correction, are employed for the low-pressure fluid. Simulations are carried out for different mixture compositions and the numerical predictions are compared with experimental data. The overall heat transfer is predicted reasonably well and the qualitative trends of the temperature profiles are also captured by the developed numerical model.
Plausibility Judgments in Conceptual Change and Epistemic Cognition
ERIC Educational Resources Information Center
Lombardi, Doug; Nussbaum, E. Michael; Sinatra, Gale M.
2016-01-01
Plausibility judgments rarely have been addressed empirically in conceptual change research. Recent research, however, suggests that these judgments may be pivotal to conceptual change about certain topics where a gap exists between what scientists and laypersons find plausible. Based on a philosophical and empirical foundation, this article…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livingood, W.; Stein, J.; Considine, T.
Retailers who participate in the U.S. Department of Energy Commercial Building Energy Alliances (CBEA) identified the need to enhance communication standards. The means are available to collect massive numbers of buildings operational data, but CBEA members have difficulty transforming the data into usable information and energy-saving actions. Implementing algorithms for automated fault detection and diagnostics and linking building operational data to computerized maintenance management systems are important steps in the right direction, but have limited scalability for large building portfolios because the algorithms must be configured for each building.
Source Effects and Plausibility Judgments When Reading about Climate Change
ERIC Educational Resources Information Center
Lombardi, Doug; Seyranian, Viviane; Sinatra, Gale M.
2014-01-01
Gaps between what scientists and laypeople find plausible may act as a barrier to learning complex and/or controversial socioscientific concepts. For example, individuals may consider scientific explanations that human activities are causing current climate change as implausible. This plausibility judgment may be due, in part, to individuals'…
Plausibility and Perspective Influence the Processing of Counterfactual Narratives
ERIC Educational Resources Information Center
Ferguson, Heather J.; Jayes, Lewis T.
2018-01-01
Previous research has established that readers' eye movements are sensitive to the difficulty with which a word is processed. One important factor that influences processing is the fit of a word within the wider context, including its plausibility. Here we explore the influence of plausibility in counterfactual language processing. Counterfactuals…
Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.
Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam
2016-01-01
We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to extract the underlying dynamics present in the time series expression data accurately. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has been first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a reduction of 50% in the number of time points does not have an effect on the accuracy of the proposed methodology significantly, with a maximum of just over 15% deterioration in the worst case.
Predicting the transmembrane secondary structure of ligand-gated ion channels.
Bertaccini, E; Trudell, J R
2002-06-01
Recent mutational analyses of ligand-gated ion channels (LGICs) have demonstrated a plausible site of anesthetic action within their transmembrane domains. Although there is a consensus that the transmembrane domain is formed from four membrane-spanning segments, the secondary structure of these segments is not known. We utilized 10 state-of-the-art bioinformatics techniques to predict the transmembrane topology of the tetrameric regions within six members of the LGIC family that are relevant to anesthetic action. They are the human forms of the GABA alpha 1 receptor, the glycine alpha 1 receptor, the 5HT3 serotonin receptor, the nicotinic AChR alpha 4 and alpha 7 receptors and the Torpedo nAChR alpha 1 receptor. The algorithms utilized were HMMTOP, TMHMM, TMPred, PHDhtm, DAS, TMFinder, SOSUI, TMAP, MEMSAT and TOPPred2. The resulting predictions were superimposed on to a multiple sequence alignment of the six amino acid sequences created using the CLUSTAL W algorithm. There was a clear statistical consensus for the presence of four alpha helices in those regions experimentally thought to span the membrane. The consensus of 10 topology prediction techniques supports the hypothesis that the transmembrane subunits of the LGICs are tetrameric bundles of alpha helices.
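A minimal sketch of the consensus idea described above, assuming each predictor's output has already been reduced to a per-residue membrane/non-membrane string aligned to the same sequence (real tools such as HMMTOP or TMHMM emit richer formats that would need parsing first):

def consensus_topology(predictions, threshold=0.5):
    # Majority vote across predictors: 'M' = predicted membrane-spanning residue.
    length = len(next(iter(predictions.values())))
    votes = [sum(p[i] == 'M' for p in predictions.values()) for i in range(length)]
    frac = [v / len(predictions) for v in votes]
    return ''.join('M' if f >= threshold else '-' for f in frac)

preds = {
    "toolA": "---MMMMMMM----MMMMMMM----",
    "toolB": "--MMMMMMMM-----MMMMMM----",
    "toolC": "---MMMMMM-----MMMMMMMM---",
}
print(consensus_topology(preds))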
Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.
Slażyński, Leszek; Bohte, Sander
2012-01-01
The arrival of graphics processing (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better-than-realtime plausible spiking neural networks of up to 50 000 neurons, processing over 35 million spiking events per second.
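To make the additive membrane-potential update that the abstract credits for the extra parallelism concrete, here is a CPU/NumPy stand-in; the actual work uses hand-tuned GPU kernels, and the threshold, reset rule, and weights below are illustrative assumptions.

import numpy as np

def step(state, spikes_in, weights, decay=0.95):
    # Filter-based (Spike Response Model style) neurons: the membrane potential is a
    # sum of decaying contributions, so each timestep is a purely additive, per-neuron
    # independent update, which is what maps well onto massively parallel hardware.
    state *= decay                      # decay all filtered traces in parallel
    state += weights @ spikes_in        # add new weighted spike contributions
    fired = state > 1.0                 # toy threshold; SRM kernel details omitted
    state[fired] = 0.0                  # reset
    return state, fired

n = 1000
rng = np.random.default_rng(0)
state = np.zeros(n)
weights = rng.normal(0, 0.05, (n, n))
spikes = (rng.random(n) < 0.02).astype(float)
state, fired = step(state, spikes, weights)
print(int(fired.sum()), "neurons fired")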
Biomedical Terminology Mapper for UML projects.
Thibault, Julien C; Frey, Lewis
2013-01-01
As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, then lookups in a UMLS-based dictionary are performed. For the evaluation of the algorithm, 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. Resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed up the tedious process of mapping local implementations to standard biomedical terminologies.
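A toy version of the normalize/tokenize/lookup pipeline described above; the dictionary stands in for a UMLS-derived lookup table and the concept codes are placeholders rather than verified identifiers.

import re

UMLS_LIKE_DICT = {
    "patient": "C-PLACEHOLDER-1",
    "identifier": "C-PLACEHOLDER-2",
    "specimen": "C-PLACEHOLDER-3",
}

def tokenize_uml_name(name):
    # Split camelCase / PascalCase and underscores, then lowercase.
    parts = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ").split()
    return [p.lower() for p in parts]

def map_name(name):
    # Look up each normalized token in the terminology dictionary.
    return {tok: UMLS_LIKE_DICT.get(tok) for tok in tokenize_uml_name(name)}

print(map_name("PatientIdentifier"))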
NASA Astrophysics Data System (ADS)
Wagner, Rick; Castanotto, Giuseppe; Goldberg, Kenneth A.
1995-11-01
The Internet offers tremendous potential for rapid development of mechanical products to meet global competition. In the past several years, a number of geometric algorithms have been developed to evaluate manufacturing properties such as feedability, fixturability, assemblability, etc. This class of algorithms is sometimes termed 'DFX: Design for X'. One problem is that most of these algorithms are tailored to a particular CAD system and format and so have not been widely tested by industry. The World Wide Web may offer a solution: its simple interface language may become a de facto standard for the exchange of geometric data. In this preliminary paper we describe one model for remote analysis of CAD models that we believe holds promise for use in industry (e.g., during the design cycle) and in research (e.g., to encourage verification of results).
Semantic and Plausibility Preview Benefit Effects in English: Evidence from Eye Movements
Schotter, Elizabeth R.; Jia, Annie
2016-01-01
Theories of preview benefit in reading hinge on integration across saccades and the idea that preview benefit is greater the more similar the preview and target are. Schotter (2013) reported preview benefit from a synonymous preview, but it is unclear whether this effect occurs because of similarity between the preview and target (integration), or because of contextual fit of the preview—synonyms satisfy both accounts. Studies in Chinese have found evidence for preview benefit for words that are unrelated to the target, but are contextually plausible (Yang, Li, Wang, Slattery, & Rayner, 2014; Yang, Wang, Tong, & Rayner, 2012), which is incompatible with an integration account but supports a contextual fit account. Here, we used plausible and implausible unrelated previews in addition to plausible synonym, antonym, and identical previews to further investigate these accounts for readers of English. Early reading measures were shorter for all plausible preview conditions compared to the implausible preview condition. In later reading measures, a benefit for the plausible unrelated preview condition was not observed. In a second experiment, we asked questions that probed whether the reader encoded the preview or target. Readers were more likely to report the preview when they had skipped the word and not regressed to it, and when the preview was plausible. Thus, under certain circumstances, the preview word is processed to a high level of representation (i.e., semantic plausibility) regardless of its relationship to the target, but its influence on reading is relatively short-lived, being replaced by the target word, when fixated. PMID:27123754
Günther, Fritz; Marelli, Marco
2016-01-01
Noun compounds, consisting of two nouns (the head and the modifier) that are combined into a single concept, differ in terms of their plausibility: school bus is a more plausible compound than saddle olive. The present study investigates which factors influence the plausibility of attested and novel noun compounds. Distributional Semantic Models (DSMs) are used to obtain formal (vector) representations of word meanings, and compositional methods in DSMs are employed to obtain such representations for noun compounds. From these representations, different plausibility measures are computed. Three of those measures contribute in predicting the plausibility of noun compounds: The relatedness between the meaning of the head noun and the compound (Head Proximity), the relatedness between the meaning of modifier noun and the compound (Modifier Proximity), and the similarity between the head noun and the modifier noun (Constituent Similarity). We find non-linear interactions between Head Proximity and Modifier Proximity, as well as between Modifier Proximity and Constituent Similarity. Furthermore, Constituent Similarity interacts non-linearly with the familiarity with the compound. These results suggest that a compound is perceived as more plausible if it can be categorized as an instance of the category denoted by the head noun, if the contribution of the modifier to the compound meaning is clear but not redundant, and if the constituents are sufficiently similar in cases where this contribution is not clear. Furthermore, compounds are perceived to be more plausible if they are more familiar, but mostly for cases where the relation between the constituents is less clear. PMID:27732599
Dube, Timothy; Mutanga, Onisimo; Adam, Elhadi; Ismail, Riyad
2014-01-01
The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have been proven useful in assessing forest biomass in general, more is required to investigate their capabilities in predicting intra- and inter-species biomass, which are mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees, to predict intra- and inter-species biomass using high resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra- and inter-species biomass prediction, using all the predictor variables as well as based on the most important selected variables. For example, using the most important variables the algorithm produced an R2 of 0.80 and RMSE of 16.93 t·ha−1 for E. grandis; R2 of 0.79, RMSE of 17.27 t·ha−1 for P. taeda; and R2 of 0.61, RMSE of 43.39 t·ha−1 for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunii (R2 of 0.79; RMSE of 7.18 t·ha−1). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results than SGB when applied to the combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high resolution RapidEye sensor with strategically positioned bands. PMID:25140631
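For readers who want to reproduce the comparison in spirit, a hedged sketch using scikit-learn's gradient boosting and random forest regressors follows; the feature columns and the synthetic response are placeholders for the RapidEye bands, vegetation indices, and field-measured biomass used in the study.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: band reflectances / vegetation indices per plot; y: biomass (t/ha). Synthetic here.
rng = np.random.default_rng(1)
X = rng.random((200, 8))
y = 50 + 120 * X[:, 0] - 40 * X[:, 3] + rng.normal(0, 10, 200)

sgb = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
rf = RandomForestRegressor(n_estimators=500)

for name, model in [("SGB", sgb), ("RF", rf)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(r2, 3))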
Building test data from real outbreaks for evaluating detection algorithms.
Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
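A minimal sketch of the core idea, stretching a historical epidemic curve homothetically and drawing case onset days by inverse transform sampling (ITSM); the other resampling schemes evaluated in the paper are not shown, and the input curve is invented.

import numpy as np

def simulate_outbreak(historical_counts, target_days, target_cases, seed=0):
    # Stretch a historical daily-case curve to target_days (homothetic transform),
    # normalise it to a probability distribution over days, then draw target_cases
    # onset days by inverse transform sampling.
    rng = np.random.default_rng(seed)
    hist = np.asarray(historical_counts, dtype=float)
    old_t = np.linspace(0, 1, hist.size)
    new_t = np.linspace(0, 1, target_days)
    shape = np.interp(new_t, old_t, hist)        # stretched/compressed time axis
    p = shape / shape.sum()
    cdf = np.cumsum(p)
    days = np.searchsorted(cdf, rng.random(target_cases))   # ITSM draw
    return np.bincount(days, minlength=target_days)

print(simulate_outbreak([1, 3, 9, 20, 14, 6, 2], target_days=10, target_cases=120))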
ERIC Educational Resources Information Center
Gauld, Colin
1998-01-01
Reports that many students do not believe Newton's law of action and reaction and suggests ways in which its plausibility might be enhanced. Reviews how this law has been made more plausible over time by Newton and those who succeeded him. Contains 25 references. (DDR)
Plausibility Reappraisals and Shifts in Middle School Students' Climate Change Conceptions
ERIC Educational Resources Information Center
Lombardi, Doug; Sinatra, Gale M.; Nussbaum, E. Michael
2013-01-01
Plausibility is a central but under-examined topic in conceptual change research. Climate change is an important socio-scientific topic; however, many view human-induced climate change as implausible. When learning about climate change, students need to make plausibility judgments but they may not be sufficiently critical or reflective. The…
A Method to Estimate the Hydraulic Conductivity of the Ground by TRT Analysis.
Liuzzo Scorpo, Alberto; Nordell, Bo; Gehlin, Signhild
2017-01-01
The knowledge of hydraulic properties of aquifers is important in many engineering applications. Careful design of ground-coupled heat exchangers requires that the hydraulic characteristics and thermal properties of the aquifer be well understood. Knowledge of groundwater flow rate and aquifer thermal properties is the basis for proper design of such plants. Different methods have been developed to estimate hydraulic conductivity by evaluating the transport of various tracers (chemical, heat, etc.); thermal response testing (TRT) is a specific type of heat tracer that allows the hydraulic properties to be included in an effective thermal conductivity value. Starting from these considerations, an expeditious, graphical method was proposed to estimate the hydraulic conductivity of the aquifer using TRT data and plausible assumptions. The suggested method, which has not yet been verified or proven reliable, should encourage further studies and development in this direction. © 2016, National Ground Water Association.
Interaction of packaging motor with the polymerase complex of dsRNA bacteriophage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisal, Jiri; Kainov, Denis E.; Lam, TuKiet T.
2006-07-20
Many viruses employ molecular motors to package their genomes into preformed empty capsids (procapsids). In dsRNA bacteriophages the packaging motor is a hexameric ATPase P4, which is an integral part of the multisubunit procapsid. Structural and biochemical studies revealed a plausible RNA-translocation mechanism for the isolated hexamer. However, little is known about the structure and regulation of the hexamer within the procapsid. Here we use hydrogen-deuterium exchange and mass spectrometry to delineate the interactions of the P4 hexamer with the bacteriophage phi12 procapsid. P4 associates with the procapsid via its C-terminal face. The interactions also stabilize subunit interfaces within the hexamer. The conformation of the virus-bound hexamer is more stable than the hexamer in solution, which is prone to spontaneous ring openings. We propose that the stabilization within the viral capsid increases the packaging processivity and confers selectivity during RNA loading.
NASA Astrophysics Data System (ADS)
van Buren, Simon; Hertle, Ellen; Figueiredo, Patric; Kneer, Reinhold; Rohlfs, Wilko
2017-11-01
Frost formation is a common, often undesired phenomenon in heat exchangers such as air coolers. Thus, air coolers have to be defrosted periodically, causing significant energy consumption. For design and optimization, prediction of defrosting by a CFD tool is desired. This paper presents a one-dimensional transient model approach suitable for use as a zero-dimensional wall function in CFD for modeling the defrost process at the fin and tube interfaces. In accordance with previous work, a multi-stage defrost model is introduced (e.g. [1, 2]). In the first instance the multi-stage model is implemented and validated using MATLAB. The defrost process of a one-dimensional frost segment is investigated. Fixed boundary conditions are provided at the frost interfaces. The simulation results verify the plausibility of the designed model. The evaluation of the simulated defrost process shows the expected convergent behavior of the three-stage sequence.
A theoretical study of thorium titanium-based alloys
NASA Astrophysics Data System (ADS)
Obodo, K. O.; Chetty, N.
2013-09-01
Using theoretical quantum chemical methods, we investigate the dearth of ordered alloys involving thorium and titanium. Whereas both these elements are known to alloy very readily with various other elements, for example with oxygen, current experimental data suggests that Th and Ti do not alloy very readily with each other. In this work, we consider a variety of ordered alloys at varying stoichiometries involving these elements within the framework of density functional theory using the generalized gradient approximation for the exchange and correlation functional. By probing the energetics, electronic, phonon and elastic properties of these systems, we confirm the scarcity of ordered alloys involving Th and Ti, since for a variety of reasons many of the systems that we considered were found to be unfavorable. However, our investigations resulted in one plausible ordered structure: We propose ThTi3 in the Cr3Si structure as a metastable ordered alloy.
Forsythe, Jay G; Yu, Sheng-Sheng; Mamajanov, Irena; Grover, Martha A; Krishnamurthy, Ramanarayanan; Fernández, Facundo M; Hud, Nicholas V
2015-08-17
Although it is generally accepted that amino acids were present on the prebiotic Earth, the mechanism by which α-amino acids were condensed into polypeptides before the emergence of enzymes remains unsolved. Here, we demonstrate a prebiotically plausible mechanism for peptide (amide) bond formation that is enabled by α-hydroxy acids, which were likely present along with amino acids on the early Earth. Together, α-hydroxy acids and α-amino acids form depsipeptides (oligomers with a combination of ester and amide linkages) in model prebiotic reactions that are driven by wet-cool/dry-hot cycles. Through a combination of ester-amide bond exchange and ester bond hydrolysis, depsipeptides are enriched with amino acids over time. These results support a long-standing hypothesis that peptides might have arisen from ester-based precursors. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wealth Transmission and Inequality Among Hunter-Gatherers
Hill, Kim; Marlowe, Frank; Nolin, David; Wiessner, Polly; Gurven, Michael; Bowles, Samuel; Mulder, Monique Borgerhoff; Hertz, Tom; Bell, Adrian
2010-01-01
We report quantitative estimates of intergenerational transmission and population-wide inequality for wealth measures in a set of hunter-gatherer populations. Wealth is defined broadly as factors that contribute to individual or household well-being, ranging from embodied forms such as weight and hunting success to material forms such as household goods, as well as relational wealth in exchange partners. Intergenerational wealth transmission is low to moderate in these populations, but is still expected to have measurable influence on an individual's life chances. Wealth inequality (measured with Gini coefficients) is moderate for most wealth types, matching what qualitative ethnographic research has generally indicated (if not the stereotype of hunter-gatherers as extreme egalitarians). We discuss some plausible mechanisms for these patterns, and suggest ways in which future research could resolve questions about the role of wealth in hunter-gatherer social and economic life. PMID:21151711
Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu
2014-09-11
The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art, including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.
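As a hedged illustration of what a conservative synchronization loop looks like in general: this is the generic lookahead-based baseline, not the paper's Active Set Conservative or Reactive Conservative algorithms, and the simulator interface (advance_to, enqueue, lookahead) is an assumption for the sketch.

class StubSim:
    # Minimal stand-in exposing the interface the loop assumes.
    def __init__(self, name, lookahead):
        self.name, self.lookahead, self.inbox = name, lookahead, []
    def advance_to(self, t):
        return []            # a real simulator would return cross-domain events here
    def enqueue(self, event):
        self.inbox.append(event)

def co_simulate(simulators, end_time):
    clocks = {s.name: 0.0 for s in simulators}
    while min(clocks.values()) < end_time:
        for s in simulators:
            # Safe horizon: earliest time any peer could still send an event to s,
            # so advancing to it can never make a message arrive "in the past".
            horizon = min(clocks[p.name] + p.lookahead for p in simulators if p is not s)
            if clocks[s.name] < horizon:
                for event in s.advance_to(min(horizon, end_time)):
                    event.target.enqueue(event)
                clocks[s.name] = min(horizon, end_time)
    return clocks

print(co_simulate([StubSim("grid", 0.5), StubSim("network", 0.1)], end_time=2.0))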
Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation.
Liu, Yang; Liu, Junfei; Tian, Liwei; Ma, Lianbo
2016-01-01
This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm the new growth operators of branching, regrowing, and shrinkage are initially designed to optimize continuous space search by combining root-to-root communication and a coevolution mechanism. With the auxin-regulated scheme, various root growth operators are guided systematically. With root-to-root communication, individuals exchange information in different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, the hierarchical spatial population driven by evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. The comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to handle the complex image segmentation problem based on multilevel thresholding. Computational results of this approach on a set of tested images show that the proposed algorithm outperforms the alternatives in terms of optimization accuracy and computational efficiency.
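To make the optimization target concrete, the sketch below implements a standard multilevel between-class-variance (Otsu-style) objective that a population-based optimizer such as HARFO would maximise over candidate threshold vectors; the optimizer itself and its growth operators are not reproduced here.

import numpy as np

def between_class_variance(hist, thresholds):
    # Thresholds split the grey-level histogram into classes; return the between-class
    # variance, the quantity a multilevel-thresholding optimiser maximises.
    levels = np.arange(hist.size)
    p = hist / hist.sum()
    cuts = [0, *sorted(int(t) for t in thresholds), hist.size]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

hist = np.bincount(np.random.default_rng(0).integers(0, 256, 100_000), minlength=256)
print(round(between_class_variance(hist, thresholds=[85, 170]), 2))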
A new algorithm for DNS of turbulent polymer solutions using the FENE-P model
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance; Robert, Ashish; Brasseur, James
2004-11-01
Direct numerical simulations (DNS) of polymer solutions based on the finite extensible nonlinear elastic model with the Peterlin closure (FENE-P) solve for a conformation tensor with properties that must be maintained by the numerical algorithm. In particular, the eigenvalues of the tensor are all positive (to maintain positive definiteness) and their sum is bounded by the maximum extension length. Loss of either of these properties will give rise to unphysical instabilities. In earlier work, Vaithianathan & Collins (2003) devised an algorithm based on an eigendecomposition that allows the eigenvalues of the conformation tensor to be updated directly, making it easier to maintain the necessary conditions for a stable calculation. However, simple fixes (such as ceilings and floors) yield results that violate overall conservation. The present finite-difference algorithm is inherently designed to satisfy all of the bounds on the eigenvalues, and thus restores overall conservation. New results suggest that the earlier algorithm may have exaggerated the energy exchange at high wavenumbers. In particular, feedback of the polymer elastic energy to the isotropic turbulence is now greatly reduced.
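The constraints at stake can be made concrete with a small sketch that eigendecomposes a conformation tensor and enforces positive eigenvalues and a bounded trace. Note that this is exactly the kind of after-the-fact clipping the abstract says can violate conservation, so it only illustrates the bounds themselves, not the paper's conservative finite-difference scheme; the sample tensor and tolerance are invented.

import numpy as np

def project_conformation(C, L_max, eps=1e-12):
    # Enforce the FENE-P bounds: positive eigenvalues (positive definiteness) and
    # trace(C) <= L_max^2 (bounded polymer extension).
    w, V = np.linalg.eigh(0.5 * (C + C.T))   # symmetrise, then eigendecompose
    w = np.clip(w, eps, None)                # positive definiteness
    if w.sum() > L_max**2:                   # bounded extension
        w *= (L_max**2 - eps) / w.sum()
    return (V * w) @ V.T

C = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, -0.01]])            # slightly unphysical input
print(project_conformation(C, L_max=10.0))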
Emergency management of heat exchanger leak on cardiopulmonary bypass with hypothermia.
Gukop, P; Tiezzi, A; Mattam, K; Sarsam, M
2015-11-01
Heat exchanger leak on cardiopulmonary bypass is very rare, but serious. The exact incidence is not known. It is an emergency associated with the potential risk of blood contamination, air embolism and haemolysis, difficulty with re-warming, acidosis, subsequent septic shock, multi-organ failure and death. We present a prompt, highly co-ordinated algorithm for the successful management of this important rare complication. There is need for further research to look for safety devices that detect leaks and techniques to reduce bacterial load. It is essential that teams practice oxygenator change-out routines and have a well-established change-out protocol. © The Author(s) 2015.
Usage of the hybrid encryption in a cloud instant messages exchange system
NASA Astrophysics Data System (ADS)
Kvyetnyy, Roman N.; Romanyuk, Olexander N.; Titarchuk, Evgenii O.; Gromaszek, Konrad; Mussabekov, Nazarbek
2016-09-01
A new approach for constructing cloud instant messaging presented in this article allows users to encrypt data locally by using the Diffie-Hellman key exchange protocol. The described approach allows the construction of a cloud service which operates only on users' encrypted messages; encryption and decryption take place locally on the user side using symmetric AES encryption. A feature of the service is support for conferences without the need to re-encrypt messages for each participant. The article gives an example of the protocol implementation based on the ECC and RSA encryption algorithms, as well as a comparison of these implementations.
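A hedged sketch of the underlying pattern, deriving a shared key with Diffie-Hellman and then encrypting locally with an AES-based symmetric scheme, follows. The prime is a toy value (a real deployment must use standardized group parameters and key sizes), Fernet stands in for the article's plain AES, and none of this reflects the service's actual ECC/RSA implementations.

import base64, hashlib, secrets
from cryptography.fernet import Fernet

# Toy finite-field Diffie-Hellman parameters; NOT secure, for illustration only.
p = 2**127 - 1
g = 3

a_secret = secrets.randbelow(p - 2) + 2          # Alice's private exponent
b_secret = secrets.randbelow(p - 2) + 2          # Bob's private exponent
A, B = pow(g, a_secret, p), pow(g, b_secret, p)  # public values exchanged via the cloud

shared_a = pow(B, a_secret, p)                   # both sides derive the same secret
shared_b = pow(A, b_secret, p)
assert shared_a == shared_b

# Derive a symmetric key locally and encrypt; only ciphertext ever reaches the cloud.
key = base64.urlsafe_b64encode(hashlib.sha256(str(shared_a).encode()).digest())
token = Fernet(key).encrypt(b"hello, encrypted locally")
print(Fernet(key).decrypt(token))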
Surface Ocean pCO2 Seasonality and Sea-Air CO2 Flux Estimates for the North American East Coast
NASA Technical Reports Server (NTRS)
Signorini, Sergio; Mannino, Antonio; Najjar, Raymond G., Jr.; Friedrichs, Marjorie A. M.; Cai, Wei-Jun; Salisbury, Joe; Wang, Zhaohui Aleck; Thomas, Helmuth; Shadwick, Elizabeth
2013-01-01
Underway and in situ observations of surface ocean pCO2, combined with satellite data, were used to develop pCO2 regional algorithms to analyze the seasonal and interannual variability of surface ocean pCO2 and sea-air CO2 flux for five physically and biologically distinct regions of the eastern North American continental shelf: the South Atlantic Bight (SAB), the Mid-Atlantic Bight (MAB), the Gulf of Maine (GoM), Nantucket Shoals and Georges Bank (NS+GB), and the Scotian Shelf (SS). Temperature and dissolved inorganic carbon variability are the most influential factors driving the seasonality of pCO2. Estimates of the sea-air CO2 flux were derived from the available pCO2 data, as well as from the pCO2 reconstructed by the algorithm. Two different gas exchange parameterizations were used. The SS, GB+NS, MAB, and SAB regions are net sinks of atmospheric CO2 while the GoM is a weak source. The estimates vary depending on the use of surface ocean pCO2 from the data or algorithm, as well as with the use of the two different gas exchange parameterizations. Most of the regional estimates are in general agreement with previous studies when the range of uncertainty and interannual variability are taken into account. According to the algorithm, the average annual uptake of atmospheric CO2 by eastern North American continental shelf waters is found to be between 3.4 and 5.4 Tg C/yr (areal average of 0.7 to 1.0 mol CO2 /sq m/yr) over the period 2003-2010.
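The flux calculation behind such estimates is a bulk formula, F = k · K0 · (pCO2,sea − pCO2,air). The sketch below uses a quadratic wind-speed (Wanninkhof-style) transfer velocity with an illustrative coefficient and solubility; these are stand-in values, not the two parameterizations actually used in the study.

def co2_flux(pco2_sea_uatm, pco2_air_uatm, wind_u10_ms, schmidt=660.0,
             k0_mol_per_L_atm=0.035):
    # Gas transfer velocity (cm/hr), quadratic in 10-m wind speed; coefficient illustrative.
    k_cm_per_hr = 0.31 * wind_u10_ms**2 * (schmidt / 660.0) ** -0.5
    k_m_per_yr = k_cm_per_hr * 0.01 * 24 * 365
    dpco2_atm = (pco2_sea_uatm - pco2_air_uatm) * 1e-6
    # mol C m^-2 yr^-1; negative values mean ocean uptake of atmospheric CO2.
    return k_m_per_yr * k0_mol_per_L_atm * 1000.0 * dpco2_atm

print(round(co2_flux(360.0, 395.0, wind_u10_ms=7.0), 2))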
SPEEDUP™ ion exchange column model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hang, T.
2000-03-06
A transient model to describe the process of loading a solute onto the granular fixed bed in an ion exchange (IX) column has been developed using the SpeedUp™ software package. SpeedUp offers the advantage of smooth integration into other existing SpeedUp flowsheet models. The mathematical algorithm of a porous particle diffusion model was adopted to account for convection, axial dispersion, film mass transfer, and pore diffusion. The method of orthogonal collocation on finite elements was employed to solve the governing transport equations. The model allows the use of a non-linear Langmuir isotherm based on an effective binary ionic exchange process. The SpeedUp column model was tested by comparing to the analytical solutions of three transport problems from the ion exchange literature. In addition, a sample calculation of a train of three crystalline silicotitanate (CST) IX columns in series was made using both the SpeedUp model and Purdue University's VERSE-LC code. All test cases showed excellent agreement between the SpeedUp model results and the test data. The model can be readily used for SuperLig™ ion exchange resins, once the experimental data are complete.
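A heavily reduced sketch of fixed-bed loading with a Langmuir isotherm follows, just to make the modeled quantities concrete; the actual SpeedUp model also treats axial dispersion, film mass transfer, pore diffusion, and phase-ratio effects and is solved by orthogonal collocation on finite elements, none of which is reproduced here, and all parameter values are arbitrary.

import numpy as np

def langmuir(c, q_max=2.0, K=5.0):
    # Equilibrium resin loading as a function of liquid concentration.
    return q_max * K * c / (1.0 + K * c)

def load_column(n_cells=50, n_steps=8000, dt=0.5, u=0.05, dz=1.0,
                k_ldf=0.01, c_feed=1.0):
    c = np.zeros(n_cells)          # liquid-phase concentration along the column
    q = np.zeros(n_cells)          # resin loading
    outlet = []
    for _ in range(n_steps):
        uptake = k_ldf * (langmuir(c) - q)          # linear-driving-force exchange
        q += dt * uptake
        c_up = np.concatenate(([c_feed], c[:-1]))   # upstream (plug-flow) values
        c += dt * (-u * (c - c_up) / dz - uptake)
        outlet.append(c[-1] / c_feed)
    return outlet

print(round(load_column()[-1], 3))   # outlet concentration relative to feed (breakthrough)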
Protein hydrogen exchange: Testing current models
Skinner, John J; Lim, Woon K; Bédard, Sabrina; Black, Ben E; Englander, S Walter
2012-01-01
To investigate the determinants of protein hydrogen exchange (HX), HX rates of most of the backbone amide hydrogens of Staphylococcal nuclease were measured by NMR methods. A modified analysis was used to improve accuracy for the faster hydrogens. HX rates of both near surface and well buried hydrogens are spread over more than 7 orders of magnitude. These results were compared with previous hypotheses for HX rate determination. Contrary to a common assumption, proximity to the surface of the native protein does not usually produce fast exchange. The slow HX rates for unprotected surface hydrogens are not well explained by local electrostatic field. The ability of buried hydrogens to exchange is not explained by a solvent penetration mechanism. The exchange rates of structurally protected hydrogens are not well predicted by algorithms that depend only on local interactions or only on transient unfolding reactions. These observations identify some of the present difficulties of HX rate prediction and suggest the need for returning to a detailed hydrogen by hydrogen analysis to examine the bases of structure-rate relationships, as described in the companion paper (Skinner et al., Protein Sci 2012;21:996–1005). PMID:22544567
NASA Astrophysics Data System (ADS)
Domino, Krzysztof
2017-02-01
The cumulant analysis plays an important role in the analysis of non-Gaussian distributed data, and share price returns are a good example of such data. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. The algorithm is based on the Alternating Least Squares method and involves the simultaneous minimisation of the 2nd-6th cumulants of the multidimensional random variable (percentage share returns of many companies). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect the incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within an examination window determined by low values of the Hurst exponent. Remark that the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is the novel idea. It can be expected that the algorithm would be useful in financial data analysis on a worldwide scale as well as in the analysis of other types of non-Gaussian distributed data.
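The Hurst-exponent gate used for the entry and exit signals can be illustrated with a standard detrended fluctuation analysis; the cumulant-tensor portfolio optimization itself is not sketched here, and the window sizes below are arbitrary choices.

import numpy as np

def dfa_hurst(returns, min_win=8, max_win=None):
    # Integrate the (demeaned) return series, detrend it linearly in windows of varying
    # size, and fit the log-log slope of fluctuation versus window size.
    x = np.cumsum(np.asarray(returns) - np.mean(returns))
    n = x.size
    max_win = max_win or n // 4
    sizes = np.unique(np.logspace(np.log10(min_win), np.log10(max_win), 20).astype(int))
    flucts = []
    for s in sizes:
        rms = []
        for i in range(n // s):
            seg = x[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(sizes), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
print(round(dfa_hurst(rng.normal(0, 1, 4000)), 2))   # ~0.5 for uncorrelated returns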
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deptuch, G. W.; Fahim, F.; Grybos, P.
An on-chip implementable algorithm for allocation of an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels to one virtual pixel that recovers composite signals, and event-driven strobes to control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32×32 pixels and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in the paper, assessing the physical implementation of the algorithm.
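An off-line, array-level illustration of the allocation rule may clarify the idea: within an activated neighborhood the pixel holding the largest peak amplitude wins the hit, while the "virtual pixel" recovers the full deposited charge by summing its neighbors. The real algorithm runs asynchronously in per-pixel circuitry with inter-pixel strobes; this sketch simply scans a frame of peak amplitudes, and the threshold and test pattern are invented.

import numpy as np

def allocate_hits(peaks, activation_threshold=5.0):
    hits = []
    active = peaks > activation_threshold
    for r, c in zip(*np.nonzero(active)):
        r0, r1 = max(r - 1, 0), min(r + 2, peaks.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, peaks.shape[1])
        neighborhood = peaks[r0:r1, c0:c1]
        if peaks[r, c] == neighborhood.max():                    # winner-takes-all comparison
            hits.append(((int(r), int(c)), float(neighborhood.sum())))  # virtual-pixel sum
    return hits

peaks = np.zeros((6, 6))
peaks[2:4, 2:4] = [[9.0, 6.0], [4.0, 3.0]]   # one photon's charge shared over a 2x2 cluster
print(allocate_hits(peaks))                  # single hit allocated to the dominant pixel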
A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations
Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary
2016-01-01
There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus yielding 128 460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidates of 3D segmentation algorithms, the most accurate 3D segmentation algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index where values greater than 0.7 indicate a good spatial overlap. A probability of segmentation success was 0.85 based on visual verification, and a computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699
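The Dice similarity index used in the evaluation above is straightforward to compute; a minimal version for binary 3D masks (the example volumes are invented):

import numpy as np

def dice_index(seg_a, seg_b):
    # 2|A ∩ B| / (|A| + |B|); values above ~0.7 are read as good spatial overlap.
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((4, 4, 4), bool); auto[1:3, 1:3, 1:3] = True
truth = np.zeros((4, 4, 4), bool); truth[1:4, 1:3, 1:3] = True
print(round(dice_index(auto, truth), 2))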
NASA Astrophysics Data System (ADS)
Lombardi, D.
2011-12-01
Plausibility judgments-although well represented in conceptual change theories (see, for example, Chi, 2005; diSessa, 1993; Dole & Sinatra, 1998; Posner et al., 1982)-have received little empirical attention until our recent work investigating teachers' and students' understanding of and perceptions about human-induced climate change (Lombardi & Sinatra, 2010, 2011). In our first study with undergraduate students, we found that greater plausibility perceptions of human-induced climate accounted for significantly greater understanding of weather and climate distinctions after instruction, even after accounting for students' prior knowledge (Lombardi & Sinatra, 2010). In a follow-up study with inservice science and preservice elementary teachers, we showed that anger about the topic of climate change and teaching about climate change was significantly related to implausible perceptions about human-induced climate change (Lombardi & Sinatra, 2011). Results from our recent studies helped to inform our development of a model of the role of plausibility judgments in conceptual change situations. The model applies to situations involving cognitive dissonance, where background knowledge conflicts with an incoming message. In such situations, we define plausibility as a judgment on the relative potential truthfulness of incoming information compared to one's existing mental representations (Rescher, 1976). Students may not consciously think when making plausibility judgments, expending only minimal mental effort in what is referred to as an automatic cognitive process (Stanovich, 2009). However, well-designed instruction could facilitate students' reappraisal of plausibility judgments in more effortful and conscious cognitive processing. Critical evaluation specifically may be one effective method to promote plausibility reappraisal in a classroom setting (Lombardi & Sinatra, in progress). In science education, critical evaluation involves the analysis of how evidentiary data support a hypothesis and its alternatives. The presentation will focus on how instruction promoting critical evaluation can encourage individuals to reappraise their plausibility judgments and initiate knowledge reconstruction. In a recent pilot study, teachers experienced an instructional scaffold promoting critical evaluation of two competing climate change theories (i.e., human-induced and increasing solar irradiance) and significantly changed both their plausibility judgments and perceptions of correctness toward the scientifically-accepted model of human-induced climate change. A comparison group of teachers who did not experience the critical evaluation activity showed no significant change. The implications of these studies for future research and instruction will be discussed in the presentation, including effective ways to increase students' and teachers' ability to be critically evaluative and reappraise their plausibility judgments. With controversial science issues, such as climate change, such abilities may be necessary to facilitate conceptual change.
NASA Astrophysics Data System (ADS)
Atanassov, E.; Dimitrov, D.; Gurov, T.
2015-10-01
The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators such as GPUs and the Intel Xeon Phi has become mainstream, and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
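As a representative (not the authors') option-pricing kernel of the kind benchmarked here, a quasi-Monte Carlo European call pricer using SciPy's Sobol generator might look like this; the contract parameters are arbitrary examples.

import numpy as np
from scipy.stats import norm, qmc

def qmc_call_price(s0, strike, r, sigma, T, n_pow2=14, seed=0):
    # European call under Black-Scholes dynamics, priced with a scrambled Sobol sequence.
    u = qmc.Sobol(d=1, scramble=True, seed=seed).random_base2(m=n_pow2).ravel()
    z = norm.ppf(u)                                             # map to standard normals
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.mean(np.maximum(s_T - strike, 0.0))

print(round(qmc_call_price(100.0, 100.0, r=0.05, sigma=0.2, T=1.0), 3))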
NASA Astrophysics Data System (ADS)
Lund, M.; Zona, D.; Jackowicz-Korczynski, M.; Xu, X.
2017-12-01
The eddy covariance methodology is the primary tool for studying landscape-scale land-atmosphere exchange of greenhouse gases. Since the choice of instrumental setup and processing algorithms may influence the results, efforts within the international flux community have been made towards methodological harmonization and standardization. Performing eddy covariance measurements in high-latitude, Arctic tundra sites involves several challenges, related not only to remoteness and harsh climate conditions but also to the choice of processing algorithms. Partitioning of net ecosystem exchange (NEE) of CO2 into gross primary production (GPP) and ecosystem respiration (Reco) in the FLUXNET2015 dataset is made using either Nighttime or Daytime methods. These variables, GPP and Reco, are essential for calibration and validation of Earth system models. North of the Arctic Circle, sun remains visible at local midnight for a period of time, the number of days per year with midnight sun being dependent on latitude. The absence of nighttime conditions during Arctic summers renders the Nighttime method uncertain, however, no extensive assessment on the implications for flux partitioning has yet been made. In this study, we will assess the performance and validity of both partitioning methods along a latitudinal transect of northern sites included in the FLUXNET2015 dataset. We will evaluate the partitioned flux components against model simulations using the Community Land Model (CLM). Our results will be valuable for users interested in simulating Arctic and global carbon cycling.
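A compact sketch of the Nighttime partitioning logic under discussion: fit a temperature response of ecosystem respiration to nighttime NEE, extrapolate it to all times, and take GPP = Reco − NEE. A Lloyd & Taylor-type exponential is commonly used for this; the constants and the synthetic dataset below are illustrative, the sign convention is assumed (NEE = Reco − GPP), and the sketch makes visible why the method becomes ill-posed when no true nighttime data exist.

import numpy as np
from scipy.optimize import curve_fit

def lloyd_taylor(t_kelvin, r_ref, e0, t_ref=283.15, t0=227.13):
    # Temperature response of respiration; reference/zero temperatures are typical values.
    return r_ref * np.exp(e0 * (1.0 / (t_ref - t0) - 1.0 / (t_kelvin - t0)))

def partition(nee, temp_k, is_night):
    popt, _ = curve_fit(lloyd_taylor, temp_k[is_night], nee[is_night], p0=(2.0, 200.0))
    reco = lloyd_taylor(temp_k, *popt)     # extrapolate fitted respiration to all times
    gpp = reco - nee                       # NEE = Reco - GPP
    return gpp, reco

rng = np.random.default_rng(0)
temp = 273.15 + 10 + 8 * rng.random(200)
night = rng.random(200) < 0.3
nee = lloyd_taylor(temp, 2.5, 250.0) - np.where(night, 0.0, 6.0)   # daytime uptake added
gpp, reco = partition(nee, temp, night)
print(round(float(gpp[~night].mean()), 2))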
The morphing of geographical features by Fourier transformation
Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and that it can be used for the continuous scale transformation of vector map features. The efficiency of this model is linearly related to the number of points on the shape boundary and to the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344
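The core operation can be sketched with Fourier descriptors of a closed boundary: transform both shapes, interpolate and truncate the coefficients, and transform back. The shapes and term count below are illustrative, and the paper's mirror processing for open (linear) features is omitted.

import numpy as np

def morph(boundary_a, boundary_b, alpha, n_terms=32):
    # boundary_a/b: complex x + iy point sequences of equal length around closed shapes;
    # alpha = 0 returns a smoothed shape A, alpha = 1 a smoothed shape B.
    fa, fb = np.fft.fft(boundary_a), np.fft.fft(boundary_b)
    mix = (1.0 - alpha) * fa + alpha * fb
    keep = np.zeros_like(mix)
    keep[:n_terms] = mix[:n_terms]          # low-order terms
    keep[-n_terms:] = mix[-n_terms:]        # and their conjugate-frequency counterparts
    return np.fft.ifft(keep)

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = np.exp(1j * t)
ellipse = 1.5 * np.cos(t) + 0.7j * np.sin(t)
halfway = morph(circle, ellipse, alpha=0.5)
print(halfway.shape)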
Pitch-informed solo and accompaniment separation towards its use in music education applications
NASA Astrophysics Data System (ADS)
Cano, Estefanía; Schuller, Gerald; Dittmar, Christian
2014-12-01
We present a system for the automatic separation of solo instruments and music accompaniment in polyphonic music recordings. Our approach is based on a pitch detection front-end and a tone-based spectral estimation. We assess the plausibility of using sound separation technologies to create practice material in a music education context. To better understand the sound separation quality requirements in music education, a listening test was conducted to determine the most perceptually relevant signal distortions that need to be improved. Results from the listening test show that solo and accompaniment tracks pose different quality requirements and should be optimized differently. We propose and evaluate algorithm modifications to better understand their effects on objective perceptual quality measures. Finally, we outline possible ways of optimizing our separation approach to better suit the requirements of music education applications.
Sharma, Arun; Raghavendra, Kamaraju; Adak, Tridibesh; Dash, Aditya P
2008-01-01
Background The diverse physiological and pathological role of nitric oxide in innate immune defenses against many intra and extracellular pathogens, have led to the development of various methods for determining nitric oxide (NO) synthesis. NO metabolites, nitrite (NO2-) and nitrate (NO3-) are produced by the action of an inducible Anopheles culicifacies NO synthase (AcNOS) in mosquito mid-guts and may be central to anti-parasitic arsenal of these mosquitoes. Method While exploring a plausible mechanism of refractoriness based on nitric oxide synthase physiology among the sibling species of An. culicifacies, a sensitive, specific and cost effective high performance liquid chromatography (HPLC) method was developed, which is not influenced by the presence of biogenic amines, for the determination of NO2- and NO3- from mosquito mid-guts and haemolymph. Results This method is based on extraction, efficiency, assay reproducibility and contaminant minimization. It entails de-proteinization by centrifugal ultra filtration through ultracel 3 K filter and analysis by high performance anion exchange liquid chromatography (Sphereclone, 5 μ SAX column) with UV detection at 214 nm. The lower detection limit of the assay procedure is 50 pmoles in all midgut and haemolymph samples. Retention times for NO2- and NO3- in standards and in mid-gut samples were 3.42 and 4.53 min. respectively. Assay linearity for standards ranged between 50 nM and 1 mM. Recoveries of NO2- and NO3- from spiked samples (1–100 μM) and from the extracted standards (1–100 μM) were calculated to be 100%. Intra-assay and inter assay variations and relative standard deviations (RSDs) for NO2- and NO3- in spiked and un-spiked midgut samples were 5.7% or less. Increased levels NO2- and NO3- in midguts and haemolymph of An. culicifacies sibling species B in comparison to species A reflect towards a mechanism of refractoriness based on AcNOS physiology. Conclusion HPLC is a sensitive and accurate technique for identification and quantifying pmole levels of NO metabolites in mosquito midguts and haemolymph samples that can be useful for clinical investigations of NO biochemistry, physiology and pharmacology in various biological samples. PMID:18442373
Negotiating plausibility: intervening in the future of nanotechnology.
Selin, Cynthia
2011-12-01
The national-level scenarios project NanoFutures focuses on the social, political, economic, and ethical implications of nanotechnology, and is initiated by the Center for Nanotechnology in Society at Arizona State University (CNS-ASU). The project involves novel methods for the development of plausible visions of nanotechnology-enabled futures, elucidates public preferences for various alternatives, and, using such preferences, helps refine future visions for research and outreach. In doing so, the NanoFutures project aims to address a central question: how to deliberate the social implications of an emergent technology whose outcomes are not known. The solution pursued by the NanoFutures project is twofold. First, NanoFutures limits speculation about the technology to plausible visions. This ambition introduces a host of concerns about the limits of prediction, the nature of plausibility, and how to establish plausibility. Second, it subjects these visions to democratic assessment by a range of stakeholders, thus raising methodological questions as to who are relevant stakeholders and how to activate different communities so as to engage the far future. This article makes the dilemmas posed by decisions about such methodological issues transparent and therefore articulates the role of plausibility in anticipatory governance.
Arbitrary temporal shape pulsed fiber laser based on SPGD algorithm
NASA Astrophysics Data System (ADS)
Jiang, Min; Su, Rongtao; Zhang, Pengfei; Zhou, Pu
2018-06-01
A novel adaptive pulse shaping method for a pulsed master oscillator power amplifier fiber laser to deliver an arbitrary pulse shape is demonstrated. Numerical simulation has been performed to validate the feasibility of the scheme and to provide meaningful guidance for the design of the algorithm control parameters. In the proof-of-concept experiment, information on the temporal property of the laser is exchanged and evaluated through a local area network, and the laser automatically adjusts the parameters of the seed laser according to the monitored output of the system. Various pulse shapes, including a rectangular shape, an ‘M’ shape, and an elliptical shape, are achieved through experimental iterations.
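The abstract does not give implementation details, but the stochastic parallel gradient descent (SPGD) update it names is simple enough to sketch. The toy below shapes a control vector toward an assumed Gaussian target using the standard two-sided SPGD rule; the target, gain and perturbation amplitude are illustrative, and a real system would obtain the metric from a measured pulse trace rather than from the control vector itself.

```python
# Minimal SPGD sketch: random bipolar perturbations, a two-sided metric
# difference, and the update u += gain * dJ * du. Toy objective only.
import numpy as np

rng = np.random.default_rng(0)
n = 32
target = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)   # assumed target pulse shape

def metric(u):
    """Fitness to maximize: negative squared error to the target waveform."""
    return -np.sum((u - target) ** 2)

u = np.full(n, 0.5)            # initial flat control vector
gain, amp = 1.0, 0.05          # SPGD gain and perturbation amplitude (illustrative)
print("initial error:", -metric(u))
for _ in range(2000):
    du = amp * rng.choice([-1.0, 1.0], size=n)            # random bipolar perturbation
    dJ = metric(u + du) - metric(u - du)                  # two-sided metric difference
    u += gain * dJ * du                                   # SPGD update
print("final error:", -metric(u))
```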
NASA Astrophysics Data System (ADS)
Castagnoli, Giuseppe
2017-05-01
The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete as it lacks the initial measurement. We extend it to the process of setting the problem. An initial measurement selects a problem setting at random, and a unitary transformation sends it into the desired setting. The extended representation must be with respect to Bob, the problem setter, and any external observer. It cannot be with respect to Alice, the problem solver. It would tell her the problem setting and thus the solution of the problem implicit in it. In the representation to Alice, the projection of the quantum state due to the initial measurement should be postponed until the end of the quantum algorithm. In either representation, there is a unitary transformation between the initial and final measurement outcomes. As a consequence, the final measurement of any ℛ-th part of the solution could select back in time a corresponding part of the random outcome of the initial measurement; the associated projection of the quantum state should be advanced by the inverse of that unitary transformation. This, in the representation to Alice, would tell her, before she begins her problem solving action, that part of the solution. The quantum algorithm should be seen as a sum over classical histories in each of which Alice knows in advance one of the possible ℛ-th parts of the solution and performs the oracle queries still needed to find it - this for the value of ℛ that explains the algorithm's speedup. We have a relation between retrocausality ℛ and the number of oracle queries needed to solve an oracle problem quantumly. All the oracle problems examined can be solved with any value of ℛ up to an upper bound attained by the optimal quantum algorithm. This bound is always in the vicinity of 1/2 . Moreover, ℛ =1/2 always provides the order of magnitude of the number of queries needed to solve the problem in an optimal quantum way. If this were true for any oracle problem, as plausible, it would solve the quantum query complexity problem.
Uncertainty in eddy covariance flux estimates resulting from spectral attenuation [Chapter 4]
W. J. Massman; R. Clement
2004-01-01
Surface exchange fluxes measured by eddy covariance tend to be underestimated as a result of limitations in sensor design, signal processing methods, and finite flux-averaging periods. But, careful system design, modern instrumentation, and appropriate data processing algorithms can minimize these losses, which, if not too large, can be estimated and corrected using...
Li, Hongzhi; Yang, Wei
2007-03-21
An approach is developed in the replica exchange framework to enhance conformational sampling for quantum mechanical (QM) potential based molecular dynamics simulations. Importantly, with our enhanced sampling treatment, decent convergence of the electronic structure self-consistent-field calculation is robustly guaranteed; this is made possible in our replica exchange design by avoiding direct structure exchanges between the QM-related replicas and the activated (scaled by low scaling parameters or treated with high "effective temperatures") molecular mechanical (MM) replicas. Although the present approach represents one of the early efforts in enhanced sampling development specifically for quantum mechanical potentials, QM-based simulations treated with the present technique can possess sampling efficiency similar to that of MM-based simulations treated with the Hamiltonian replica exchange method (HREM). In the present paper, by combining this sampling method with one of our recent developments (the dual-topology alchemical HREM approach), we also introduce a method for sampling-enhanced QM-based free energy calculations.
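For readers unfamiliar with the replica exchange machinery this work builds on, the sketch below shows the generic Metropolis swap criterion used in temperature replica exchange. The temperatures and energies are toy values in reduced units, and the code does not reproduce the paper's QM/MM-specific exchange design.

```python
# Generic replica-exchange swap step: neighbouring replicas at inverse
# temperatures beta exchange configurations with probability
# min(1, exp[(beta_i - beta_j)(E_i - E_j)]). Energies are toy values.
import math
import random

random.seed(0)

def attempt_swap(beta_i, beta_j, E_i, E_j):
    """Metropolis criterion for exchanging configurations between two replicas."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)

betas = [1.0 / T for T in (1.0, 1.3, 1.7, 2.2)]        # reduced temperatures
energies = [-120.0, -123.0, -109.0, -101.0]            # toy energies; first pair inverted,
                                                       # so its swap is always accepted
for i in range(len(betas) - 1):                        # sweep over neighbouring pairs
    if attempt_swap(betas[i], betas[i + 1], energies[i], energies[i + 1]):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
        print(f"swap accepted between replicas {i} and {i + 1}")
```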
Foundations and latest advances in replica exchange transition interface sampling.
Cabriolu, Raffaela; Skjelbred Refsnes, Kristin M; Bolhuis, Peter G; van Erp, Titus S
2017-10-21
Nearly 20 years ago, transition path sampling (TPS) emerged as an alternative method to free energy based approaches for the study of rare events such as nucleation, protein folding, chemical reactions, and phase transitions. TPS effectively performs Monte Carlo simulations with relatively short molecular dynamics trajectories, with the advantage of not having to alter the actual potential energy surface nor the underlying physical dynamics. Although the TPS approach also introduced a methodology to compute reaction rates, this approach was for a long time considered theoretically attractive, providing the exact same results as extensively long molecular dynamics simulations, but still expensive for most relevant applications. With the increase of computer power and improvements in the algorithmic methodology, quantitative path sampling is finding applications in more and more areas of research. In particular, the transition interface sampling (TIS) and the replica exchange TIS (RETIS) algorithms have, in turn, improved the efficiency of quantitative path sampling significantly, while maintaining the exact nature of the approach. Also, open-source software packages are making these methods, for which implementation is not straightforward, now available for a wider group of users. In addition, a blooming development takes place regarding both applications and algorithmic refinements. Therefore, it is timely to explore the wide panorama of the new developments in this field. This is the aim of this article, which focuses on the most efficient exact path sampling approach, RETIS, as well as its recent applications, extensions, and variations.
Foundations and latest advances in replica exchange transition interface sampling
NASA Astrophysics Data System (ADS)
Cabriolu, Raffaela; Skjelbred Refsnes, Kristin M.; Bolhuis, Peter G.; van Erp, Titus S.
2017-10-01
Nearly 20 years ago, transition path sampling (TPS) emerged as an alternative method to free energy based approaches for the study of rare events such as nucleation, protein folding, chemical reactions, and phase transitions. TPS effectively performs Monte Carlo simulations with relatively short molecular dynamics trajectories, with the advantage of not having to alter the actual potential energy surface nor the underlying physical dynamics. Although the TPS approach also introduced a methodology to compute reaction rates, this approach was for a long time considered theoretically attractive, providing the exact same results as extensively long molecular dynamics simulations, but still expensive for most relevant applications. With the increase of computer power and improvements in the algorithmic methodology, quantitative path sampling is finding applications in more and more areas of research. In particular, the transition interface sampling (TIS) and the replica exchange TIS (RETIS) algorithms have, in turn, improved the efficiency of quantitative path sampling significantly, while maintaining the exact nature of the approach. Also, open-source software packages are making these methods, for which implementation is not straightforward, now available for a wider group of users. In addition, a blooming development takes place regarding both applications and algorithmic refinements. Therefore, it is timely to explore the wide panorama of the new developments in this field. This is the aim of this article, which focuses on the most efficient exact path sampling approach, RETIS, as well as its recent applications, extensions, and variations.
ERIC Educational Resources Information Center
Staub, Adrian; Rayner, Keith; Pollatsek, Alexander; Hyona, Jukka; Majewski, Helen
2007-01-01
Readers' eye movements were monitored as they read sentences containing noun-noun compounds that varied in frequency (e.g., elevator mechanic, mountain lion). The left constituent of the compound was either plausible or implausible as a head noun at the point at which it appeared, whereas the compound as a whole was always plausible. When the head…
Nanomaterials Versus Ambient Ultrafine Particles: An Opportunity to Exchange Toxicology Knowledge
Miller, Mark R.; Clift, Martin J.D.; Elder, Alison; Mills, Nicholas L.; Møller, Peter; Schins, Roel P.F.; Vogel, Ulla; Kreyling, Wolfgang G.; Alstrup Jensen, Keld; Kuhlbusch, Thomas A.J.; Schwarze, Per E.; Hoet, Peter; Pietroiusti, Antonio; De Vizcaya-Ruiz, Andrea; Baeza-Squiban, Armelle; Teixeira, João Paulo; Tran, C. Lang; Cassee, Flemming R.
2017-01-01
Background: A rich body of literature exists that has demonstrated adverse human health effects following exposure to ambient air particulate matter (PM), and there is strong support for an important role of ultrafine (nanosized) particles. At present, relatively few human health or epidemiology data exist for engineered nanomaterials (NMs) despite clear parallels in their physicochemical properties and biological actions in in vitro models. Objectives: NMs are available with a range of physicochemical characteristics, which allows a more systematic toxicological analysis. Therefore, the study of ultrafine particles (UFP, <100 nm in diameter) provides an opportunity to identify plausible health effects for NMs, and the study of NMs provides an opportunity to facilitate the understanding of the mechanism of toxicity of UFP. Methods: A workshop of experts systematically analyzed the available information and identified 19 key lessons that can facilitate knowledge exchange between these discipline areas. Discussion: Key lessons range from the availability of specific techniques and standard protocols for physicochemical characterization and toxicology assessment to understanding and defining dose and the molecular mechanisms of toxicity. This review identifies a number of key areas in which additional research prioritization would facilitate both research fields simultaneously. Conclusion: There is now an opportunity to apply knowledge from NM toxicology and use it to better inform PM health risk research and vice versa. https://doi.org/10.1289/EHP424 PMID:29017987
Nanomaterials Versus Ambient Ultrafine Particles: An Opportunity to Exchange Toxicology Knowledge.
Stone, Vicki; Miller, Mark R; Clift, Martin J D; Elder, Alison; Mills, Nicholas L; Møller, Peter; Schins, Roel P F; Vogel, Ulla; Kreyling, Wolfgang G; Alstrup Jensen, Keld; Kuhlbusch, Thomas A J; Schwarze, Per E; Hoet, Peter; Pietroiusti, Antonio; De Vizcaya-Ruiz, Andrea; Baeza-Squiban, Armelle; Teixeira, João Paulo; Tran, C Lang; Cassee, Flemming R
2017-10-10
A rich body of literature exists that has demonstrated adverse human health effects following exposure to ambient air particulate matter (PM), and there is strong support for an important role of ultrafine (nanosized) particles. At present, relatively few human health or epidemiology data exist for engineered nanomaterials (NMs) despite clear parallels in their physicochemical properties and biological actions in in vitro models. NMs are available with a range of physicochemical characteristics, which allows a more systematic toxicological analysis. Therefore, the study of ultrafine particles (UFP, <100 nm in diameter) provides an opportunity to identify plausible health effects for NMs, and the study of NMs provides an opportunity to facilitate the understanding of the mechanism of toxicity of UFP. A workshop of experts systematically analyzed the available information and identified 19 key lessons that can facilitate knowledge exchange between these discipline areas. Key lessons range from the availability of specific techniques and standard protocols for physicochemical characterization and toxicology assessment to understanding and defining dose and the molecular mechanisms of toxicity. This review identifies a number of key areas in which additional research prioritization would facilitate both research fields simultaneously. There is now an opportunity to apply knowledge from NM toxicology and use it to better inform PM health risk research and vice versa. https://doi.org/10.1289/EHP424.
Two-component mixture model: Application to palm oil and exchange rate
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-12-01
Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics, household products and others. Palm oil is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years while supplies are running short. This phenomenon has caused illegal logging of trees and destroyed natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
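As a hedged illustration of the kind of fit described, the sketch below estimates a two-component normal mixture by maximum likelihood on synthetic data. A quasi-Newton optimizer (SciPy's BFGS) stands in for the paper's Newton-Raphson iteration, and the data are simulated rather than the palm oil and exchange rate series.

```python
# Sketch: maximum-likelihood fit of a two-component normal mixture.
# Synthetic data; BFGS used as a stand-in for Newton-Raphson.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.5, 700), rng.normal(1.5, 1.2, 300)])

def neg_log_lik(theta):
    logit_w, mu1, mu2, log_s1, log_s2 = theta
    w = 1.0 / (1.0 + np.exp(-logit_w))            # mixing weight constrained to (0, 1)
    pdf = w * norm.pdf(data, mu1, np.exp(log_s1)) + \
          (1 - w) * norm.pdf(data, mu2, np.exp(log_s2))
    return -np.sum(np.log(pdf + 1e-300))

x0 = np.array([0.0, -0.5, 1.0, 0.0, 0.0])         # crude starting values
res = minimize(neg_log_lik, x0, method="BFGS")
logit_w, mu1, mu2, log_s1, log_s2 = res.x
print("weight:", 1 / (1 + np.exp(-logit_w)), "means:", mu1, mu2,
      "sds:", np.exp(log_s1), np.exp(log_s2))
```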
Quantum communication complexity of establishing a shared reference frame.
Rudolph, Terry; Grover, Lov
2003-11-21
We discuss the aligning of spatial reference frames from a quantum communication complexity perspective. This enables us to analyze multiple rounds of communication and give several simple examples demonstrating tradeoffs between the number of rounds and the type of communication. Using a distributed variant of a quantum computational algorithm, we give an explicit protocol for aligning spatial axes via the exchange of spin-1/2 particles which makes no use of either exchanged entangled states, or of joint measurements. This protocol achieves a worst-case fidelity for the problem of "direction finding" that is asymptotically equivalent to the optimal average case fidelity achievable via a single forward communication of entangled states.
NASA Astrophysics Data System (ADS)
Plumer, M. L.; Almudallal, A. M.; Mercer, J. I.; Whitehead, J. P.; Fal, T. J.
The kinetic Monte Carlo (KMC) method developed for thermally activated magnetic reversal processes in single-layer recording media has been extended to study dual-layer Exchange Coupled Composite (ECC) media used in current and next generations of disc drives. The attempt frequency is derived from the Langer formalism, with the saddle point determined using a variant of the Bellman-Ford algorithm. Complications (such as stagnation) arising from coupled grains having metastable states are addressed. MH-hysteresis loops are calculated over a wide range of anisotropy ratios, sweep rates and inter-layer coupling parameters. Results are compared with standard micromagnetics at fast sweep rates and with experimental results at slow sweep rates.
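The KMC machinery described here rests on Arrhenius rates and rate-proportional event selection; the sketch below shows that basic step on a handful of hypothetical energy barriers. The attempt frequency is taken as a constant for simplicity, whereas the paper derives it from the Langer formalism.

```python
# Basic kinetic Monte Carlo step: Arrhenius rates r = f0 * exp(-E_b / kT),
# one event chosen with probability proportional to its rate, and time
# advanced by an exponential increment. Barriers and f0 are illustrative.
import numpy as np

rng = np.random.default_rng(0)
kT = 0.025                      # thermal energy (eV), roughly room temperature
f0 = 1e9                        # assumed constant attempt frequency (1/s)
barriers = np.array([0.45, 0.52, 0.60, 0.75])   # candidate reversal barriers (eV)

t = 0.0
for step in range(5):
    rates = f0 * np.exp(-barriers / kT)
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)   # pick event ~ its rate
    t += rng.exponential(1.0 / total)                 # advance physical time
    print(f"step {step}: event {event}, t = {t:.3e} s")
```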
Effects of plausibility on structural priming.
Christianson, Kiel; Luke, Steven G; Ferreira, Fernanda
2010-03-01
We report a replication and extension of Ferreira (2003), in which it was observed that native adult English speakers misinterpret passive sentences that relate implausible but not impossible semantic relationships (e.g., The angler was caught by the fish) significantly more often than they do plausible passives or plausible or implausible active sentences. In the experiment reported here, participants listened to the same plausible and implausible passive and active sentences as in Ferreira (2003), answered comprehension questions, and then orally described line drawings of simple transitive actions. The descriptions were analyzed as a measure of structural priming (Bock, 1986). Question accuracy data replicated Ferreira (2003). Production data yielded an interaction: Passive descriptions were produced more often after plausible passives and implausible actives. We interpret these results as indicative of a language processor that proceeds along differentiated morphosyntactic and semantic routes. The processor may end up adjudicating between conflicting outputs from these routes by settling on a "good enough" representation that is not completely faithful to the input.
Zhang, Ying; Wang, Jun; Hao, Guan
2018-01-08
With the development of autonomous unmanned intelligent systems, such as unmanned boats, unmanned aircraft and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and network connectivity is prone to being lost. Network self-healing mechanisms have therefore become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on a finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and to update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node is relocated to replace it. For the case of multiple abnormal nodes, a regional network restoration algorithm is introduced, designed to reduce the overhead of node movements during restoration. Simulation results indicate that the proposed algorithm outperforms some other representative restoration algorithms in terms of total moving distance and the number of relocated nodes.
Zhang, Ying; Wang, Jun; Hao, Guan
2018-01-01
With the development of autonomous unmanned intelligent systems, such as unmanned boats, unmanned aircraft and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and network connectivity is prone to being lost. Network self-healing mechanisms have therefore become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on a finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and to update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node is relocated to replace it. For the case of multiple abnormal nodes, a regional network restoration algorithm is introduced, designed to reduce the overhead of node movements during restoration. Simulation results indicate that the proposed algorithm outperforms some other representative restoration algorithms in terms of total moving distance and the number of relocated nodes. PMID:29316702
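A minimal sketch of the critical-node idea on an assumed toy topology: here criticality is detected with networkx articulation points rather than the paper's finite-state-machine bookkeeping, and the "relocation" of the nearest non-critical node is just a printout.

```python
# Sketch: a node is critical if removing it disconnects the network
# (an articulation point). On failure of a critical node, the nearest
# non-critical node is chosen as its replacement. Toy graph and layout.
import networkx as nx

G = nx.Graph()
positions = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (2, 1), 4: (3, 0)}  # toy layout
G.add_edges_from([(0, 1), (1, 2), (2, 3), (2, 4)])

critical = set(nx.articulation_points(G))
print("critical nodes:", critical)            # {1, 2} for this topology

failed = 2
if failed in critical:
    # pick the nearest non-critical node (squared Euclidean distance) as replacement
    dist = lambda a, b: sum((pa - pb) ** 2 for pa, pb in zip(positions[a], positions[b]))
    candidates = [n for n in G.nodes if n != failed and n not in critical]
    mover = min(candidates, key=lambda n: dist(n, failed))
    print(f"relocate node {mover} to position {positions[failed]}")
```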
Simultaneous beam sampling and aperture shape optimization for SPORT.
Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei
2015-02-01
Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head-and-neck case and a prostate case. It significantly improved target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head-and-neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem maximum doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans compared with conventional IMRT plans.
Simultaneous beam sampling and aperture shape optimization for SPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head-and-neck case and a prostate case. It significantly improved target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head-and-neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem maximum doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. Conclusions: The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans compared with conventional IMRT plans.
The Plausibility of a String Quartet Performance in Virtual Reality.
Bergstrom, Ilias; Azevedo, Sergio; Papiotis, Panos; Saldanha, Nuno; Slater, Mel
2017-04-01
We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, the musicians sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted the methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at their highest setting. Then five times participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, and also probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work as both a contribution to the methodology of assessing presence without questionnaires, and showing how various aspects of a musical performance can influence plausibility.
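The matching methodology above reduces to counting transitions between system configurations. The sketch below estimates a row-stochastic Markov transition matrix from a hypothetical transition log; the state labels and observations are invented and do not correspond to the study's actual configurations.

```python
# Sketch: estimate a Markov transition matrix from observed configuration
# transitions by counting and row-normalizing. States and data are hypothetical.
import numpy as np

states = ["low-all", "env-up", "gaze-up", "env+gaze", "match"]
index = {s: i for i, s in enumerate(states)}

# each tuple is one observed transition (from_state, to_state)
observed = [("low-all", "env-up"), ("env-up", "gaze-up"), ("gaze-up", "env+gaze"),
            ("low-all", "gaze-up"), ("gaze-up", "env+gaze"), ("env+gaze", "match"),
            ("env+gaze", "match"), ("low-all", "env-up"), ("env-up", "env+gaze")]

counts = np.zeros((len(states), len(states)))
for src, dst in observed:
    counts[index[src], index[dst]] += 1

row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(P, 2))   # row-stochastic estimate of the transition probabilities
```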
What if? Neural activity underlying semantic and episodic counterfactual thinking.
Parikh, Natasha; Ruzic, Luka; Stewart, Gregory W; Spreng, R Nathan; De Brigard, Felipe
2018-05-25
Counterfactual thinking (CFT) is the process of mentally simulating alternative versions of known facts. In the past decade, cognitive neuroscientists have begun to uncover the neural underpinnings of CFT, particularly episodic CFT (eCFT), which activates regions in the default network (DN) also activated by episodic memory (eM) recall. However, the engagement of DN regions is different for distinct kinds of eCFT. More plausible counterfactuals and counterfactuals about oneself show stronger activity in DN regions compared to implausible and other- or object-focused counterfactuals. The current study sought to identify a source for this difference in DN activity. Specifically, self-focused counterfactuals may also be more plausible, suggesting that DN core regions are sensitive to the plausibility of a simulation. On the other hand, plausible and self-focused counterfactuals may involve more episodic information than implausible and other-focused counterfactuals, which would imply DN sensitivity to episodic information. In the current study, we compared episodic and semantic counterfactuals generated to be plausible or implausible against episodic and semantic memory reactivation using fMRI. Taking multivariate and univariate approaches, we found that the DN is engaged more during episodic simulations, including eM and all eCFT, than during semantic simulations. Semantic simulations engaged more inferior temporal and lateral occipital regions. The only region that showed strong plausibility effects was the hippocampus, which was significantly engaged for implausible CFT but not for plausible CFT, suggestive of binding more disparate information. Consequences of these findings for the cognitive neuroscience of mental simulation are discussed. Published by Elsevier Inc.
Schmid, Annina B; Coppieters, Michel W
2011-12-01
A high prevalence of dual nerve disorders is frequently reported. How a secondary nerve disorder may develop following a primary nerve disorder remains largely unknown. Although still frequently cited, most explanatory theories were formulated many years ago. Considering recent advances in neuroscience, it is uncertain whether these theories still reflect current expert opinion. A Delphi study was conducted to update views on potential mechanisms underlying dual nerve disorders. In three rounds, seventeen international experts in the field of peripheral nerve disorders were asked to list possible mechanisms and rate their plausibility. Mechanisms with a median plausibility rating of ≥7 out of 10 were considered highly plausible. The experts identified fourteen mechanisms associated with a first nerve disorder that may predispose to the development of another nerve disorder. Of these fourteen mechanisms, nine have not previously been linked to double crush. Four mechanisms were considered highly plausible (impaired axonal transport, ion channel up or downregulation, inflammation in the dorsal root ganglia and neuroma-in-continuity). Eight additional mechanisms were listed which are not triggered by a primary nerve disorder, but may render the nervous system more vulnerable to multiple nerve disorders, such as systemic diseases and neurotoxic medication. Even though many mechanisms were classified as plausible or highly plausible, overall plausibility ratings varied widely. Experts indicated that a wide range of mechanisms has to be considered to better understand dual nerve disorders. Previously listed theories cannot be discarded, but may be insufficient to explain the high prevalence of dual nerve disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.
Buske, Orion J.; Schiettecatte, François; Hutton, Benjamin; Dumitriu, Sergiu; Misyura, Andriy; Huang, Lijia; Hartley, Taila; Girdea, Marta; Sobreira, Nara; Mungall, Chris; Brudno, Michael
2016-01-01
Despite the increasing prevalence of clinical sequencing, the difficulty of identifying additional affected families is a key obstacle to solving many rare diseases. There may only be a handful of similar patients worldwide, and their data may be stored in diverse clinical and research databases. Computational methods are necessary to enable finding similar patients across the growing number of patient repositories and registries. We present the Matchmaker Exchange Application Programming Interface (MME API), a protocol and data format for exchanging phenotype and genotype profiles to enable matchmaking among patient databases, facilitate the identification of additional cohorts, and increase the rate with which rare diseases can be researched and diagnosed. We designed the API to be straightforward and flexible in order to simplify its adoption on a large number of data types and workflows. We also provide a public test data set, curated from the literature, to facilitate implementation of the API and development of new matching algorithms. The initial version of the API has been successfully implemented by three members of the Matchmaker Exchange and was immediately able to reproduce previously-identified matches and generate several new leads currently being validated. The API is available at https://github.com/ga4gh/mme-apis. PMID:26255989
Buske, Orion J; Schiettecatte, François; Hutton, Benjamin; Dumitriu, Sergiu; Misyura, Andriy; Huang, Lijia; Hartley, Taila; Girdea, Marta; Sobreira, Nara; Mungall, Chris; Brudno, Michael
2015-10-01
Despite the increasing prevalence of clinical sequencing, the difficulty of identifying additional affected families is a key obstacle to solving many rare diseases. There may only be a handful of similar patients worldwide, and their data may be stored in diverse clinical and research databases. Computational methods are necessary to enable finding similar patients across the growing number of patient repositories and registries. We present the Matchmaker Exchange Application Programming Interface (MME API), a protocol and data format for exchanging phenotype and genotype profiles to enable matchmaking among patient databases, facilitate the identification of additional cohorts, and increase the rate with which rare diseases can be researched and diagnosed. We designed the API to be straightforward and flexible in order to simplify its adoption on a large number of data types and workflows. We also provide a public test data set, curated from the literature, to facilitate implementation of the API and development of new matching algorithms. The initial version of the API has been successfully implemented by three members of the Matchmaker Exchange and was immediately able to reproduce previously identified matches and generate several new leads currently being validated. The API is available at https://github.com/ga4gh/mme-apis. © 2015 WILEY PERIODICALS, INC.
An improved molecular dynamics algorithm to study thermodiffusion in binary hydrocarbon mixtures
NASA Astrophysics Data System (ADS)
Antoun, Sylvie; Saghir, M. Ziad; Srinivasan, Seshasai
2018-03-01
In multicomponent liquid mixtures, the diffusion flow of chemical species can be induced by temperature gradients, which leads to a separation of the constituent components. This cross effect between temperature and concentration is known as thermodiffusion or the Ludwig-Soret effect. The performance of boundary-driven non-equilibrium molecular dynamics along with the enhanced heat exchange (eHEX) algorithm was studied by assessing the thermodiffusion process in n-pentane/n-decane (nC5-nC10) binary mixtures. The eHEX algorithm consists of an extended version of the HEX algorithm with an improved energy conservation property. In addition to this, the transferable potentials for phase equilibria-united atom force field were employed in all molecular dynamics (MD) simulations to precisely model the molecular interactions in the fluid. The Soret coefficients of the n-pentane/n-decane (nC5-nC10) mixture for three different compositions (at 300.15 K and 0.1 MPa) were calculated and compared with the experimental data and other MD results available in the literature. Results of our newly employed MD algorithm showed good agreement with experimental data and better accuracy than other MD procedures.
EDDA: An Efficient Distributed Data Replication Algorithm in VANETs.
Zhu, Junyu; Huang, Chuanhe; Fan, Xiying; Guo, Sipei; Fu, Bin
2018-02-10
Efficient data dissemination in vehicular ad hoc networks (VANETs) is a challenging issue due to the dynamic nature of the network. To improve the performance of data dissemination, we study distributed data replication algorithms in VANETs for exchanging information and computing in an arbitrarily-connected network of vehicle nodes. To achieve low dissemination delay and improve the network performance, we control the number of message copies that can be disseminated in the network and then propose an efficient distributed data replication algorithm (EDDA). The key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process. We calculate the number of communication stages for the network to enter into a balanced status and show that the proposed distributed algorithm can converge to a consensus in a small number of communication stages. Most of the theoretical results described in this paper concern the complexity of network convergence. Lower and upper bounds are also provided in the analysis of the algorithm. Simulation results show that the proposed EDDA can efficiently disseminate messages to vehicles in a specific area with low dissemination delay and system overhead.
EDDA: An Efficient Distributed Data Replication Algorithm in VANETs
Zhu, Junyu; Huang, Chuanhe; Fan, Xiying; Guo, Sipei; Fu, Bin
2018-01-01
Efficient data dissemination in vehicular ad hoc networks (VANETs) is a challenging issue due to the dynamic nature of the network. To improve the performance of data dissemination, we study distributed data replication algorithms in VANETs for exchanging information and computing in an arbitrarily-connected network of vehicle nodes. To achieve low dissemination delay and improve the network performance, we control the number of message copies that can be disseminated in the network and then propose an efficient distributed data replication algorithm (EDDA). The key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process. We calculate the number of communication stages for the network to enter into a balanced status and show that the proposed distributed algorithm can converge to a consensus in a small number of communication stages. Most of the theoretical results described in this paper concern the complexity of network convergence. Lower and upper bounds are also provided in the analysis of the algorithm. Simulation results show that the proposed EDDA can efficiently disseminate messages to vehicles in a specific area with low dissemination delay and system overhead. PMID:29439443
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equal-sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Studies in the literature have shown the problem to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be extended to incorporate local search operators that include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another avenue for future research.
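A minimal sketch of the island-model idea on a toy graph: several subpopulations evolve balanced bisections independently and periodically exchange their best individuals. Crossover and the paper's fuzzy migration controller are omitted for brevity; the graph, GA settings and fixed migration interval are illustrative.

```python
# Toy island-model GA for balanced two-way graph partitioning (min-cut bisection).
# Mutation-only (swap moves preserve balance); migration at a fixed interval.
import random

random.seed(0)
N = 16
edges = [(i, (i + 1) % N) for i in range(N)] + [(0, 8), (3, 11), (5, 13)]  # toy "circuit" graph

def cut_size(part):
    return sum(1 for u, v in edges if part[u] != part[v])

def random_individual():
    bits = [0] * (N // 2) + [1] * (N // 2)      # balanced bisection by construction
    random.shuffle(bits)
    return bits

def mutate(ind):
    """Swap one 0-node with one 1-node so the partition stays balanced."""
    i = random.choice([k for k in range(N) if ind[k] == 0])
    j = random.choice([k for k in range(N) if ind[k] == 1])
    ind = ind[:]
    ind[i], ind[j] = ind[j], ind[i]
    return ind

islands = [[random_individual() for _ in range(20)] for _ in range(3)]
for gen in range(200):
    for isl in islands:
        parent = min(random.sample(isl, 3), key=cut_size)       # tournament selection
        child = mutate(parent)
        worst = max(range(len(isl)), key=lambda k: cut_size(isl[k]))
        if cut_size(child) <= cut_size(isl[worst]):
            isl[worst] = child                                  # steady-state replacement
    if gen % 25 == 0:                                           # periodic migration of best individuals
        best = [min(isl, key=cut_size) for isl in islands]
        for k, isl in enumerate(islands):
            isl[random.randrange(len(isl))] = best[(k + 1) % len(islands)][:]

print("best cut:", min(cut_size(ind) for isl in islands for ind in isl))
```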
Dewitte, V; Cagnie, B; Barbe, T; Beernaert, A; Vanthillo, B; Danneels, L
2015-06-01
Recent systematic reviews have demonstrated reasonable evidence that lumbar mobilization and manipulation techniques are beneficial. However, knowledge of optimal techniques and doses, and of the underlying clinical reasoning, is currently lacking. To address this, a clinical algorithm is presented to guide therapists in their clinical reasoning to identify patients who are likely to respond to lumbar mobilization and/or manipulation and to direct appropriate technique selection. Key features in the subjective and clinical examination suggestive of mechanical nociceptive pain, probably arising from articular structures, can categorize patients into distinct articular dysfunction patterns. Based on these patterns, specific mobilization and manipulation techniques are suggested. This clinical algorithm is based purely on empirical clinical expertise, complemented through knowledge exchange between international colleagues. The added value of the proposed articular dysfunction patterns should be considered within a broader perspective. Copyright © 2014 Elsevier Ltd. All rights reserved.
ePMV embeds molecular modeling into professional animation software environments.
Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J
2011-03-09
Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
ePMV Embeds Molecular Modeling into Professional Animation Software Environments
Johnson, Graham T.; Autin, Ludovic; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.
2011-01-01
SUMMARY Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. PMID:21397181
A hybrid dynamic harmony search algorithm for identical parallel machines scheduling
NASA Astrophysics Data System (ADS)
Chen, Jing; Pan, Quan-Ke; Wang, Ling; Li, Jun-Qing
2012-02-01
In this article, a dynamic harmony search (DHS) algorithm is proposed for the identical parallel machines scheduling problem with the objective to minimize makespan. First, an encoding scheme based on a list scheduling rule is developed to convert the continuous harmony vectors to discrete job assignments. Second, the whole harmony memory (HM) is divided into multiple small-sized sub-HMs, and each sub-HM performs evolution independently and exchanges information with others periodically by using a regrouping schedule. Third, a novel improvisation process is applied to generate a new harmony by making use of the information of harmony vectors in each sub-HM. Moreover, a local search strategy is presented and incorporated into the DHS algorithm to find promising solutions. Simulation results show that the hybrid DHS (DHS_LS) is very competitive in comparison to its competitors in terms of mean performance and average computational time.
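A sketch of the kind of decoding step described above, under the assumption that larger harmony values mean earlier scheduling: jobs are ordered by their continuous harmony values and list-scheduled onto the currently least-loaded machine. The processing times and the harmony vector are invented.

```python
# Decode a continuous harmony vector into a job-to-machine assignment via a
# list-scheduling rule. All numbers are illustrative.
import numpy as np

processing_times = np.array([7, 3, 9, 4, 6, 5, 8, 2])   # one entry per job
harmony_vector = np.array([0.62, 0.11, 0.83, 0.40, 0.95, 0.27, 0.74, 0.05])
n_machines = 3

order = np.argsort(-harmony_vector)          # larger harmony value => scheduled earlier
loads = np.zeros(n_machines)
assignment = np.empty(len(processing_times), dtype=int)
for job in order:
    m = int(np.argmin(loads))                # least-loaded machine (list scheduling)
    assignment[job] = m
    loads[m] += processing_times[job]

print("assignment:", assignment, "makespan:", loads.max())
```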
Distributed optimisation problem with communication delay and external disturbance
NASA Astrophysics Data System (ADS)
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for MASs with the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn
2014-11-14
Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, Gerald G.
What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar frequency Doppler spectra along with lidar and passive constraints where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed phase snow cloud systems, and cirrus.
Auction-based Security Game for Multiuser Cooperative Networks
NASA Astrophysics Data System (ADS)
Wang, An; Cai, Yueming; Yang, Wendong; Cheng, Yunpeng
2013-04-01
In this paper, we develop an auction-based algorithm to allocate the relay power efficiently to improve the system secrecy rate in a cooperative network, where several source-destination pairs and one cooperative relay are involved. On the one hand, the cooperative relay assists these pairs to transmit under a peak power constraint. On the other hand, the relay is untrusted and is also a passive eavesdropper. The whole auction process is completely distributed and no instantaneous channel state information exchange is needed. We also prove the existence and uniqueness of the Nash Equilibrium (NE) for the proposed power auction game. Moreover, the Pareto optimality is also validated. Simulation results show that our proposed auction-based algorithm can effectively improve the system secrecy rate. In addition, the proposed auction-based algorithm can converge to the unique NE point within a finite number of iterations. More interestingly, we also find that the proposed power auction mechanism is cheat-proof.
NASA Astrophysics Data System (ADS)
Whalen, Daniel; Norman, Michael L.
2006-02-01
Radiation hydrodynamical transport of ionization fronts (I-fronts) in the next generation of cosmological reionization simulations holds the promise of predicting UV escape fractions from first principles as well as investigating the role of photoionization in feedback processes and structure formation. We present a multistep integration scheme for radiative transfer and hydrodynamics for accurate propagation of I-fronts and ionized flows from a point source in cosmological simulations. The algorithm is a photon-conserving method that correctly tracks the position of I-fronts at much lower resolutions than nonconservative techniques. The method applies direct hierarchical updates to the ionic species, bypassing the need for the costly matrix solutions required by implicit methods while retaining sufficient accuracy to capture the true evolution of the fronts. We review the physics of ionization fronts in power-law density gradients, whose analytical solutions provide excellent validation tests for radiation coupling schemes. The advantages and potential drawbacks of direct and implicit schemes are also considered, with particular focus on problem time-stepping, which if not properly implemented can lead to morphologically plausible I-front behavior that nonetheless departs from theory. We also examine the effect of radiation pressure from very luminous central sources on the evolution of I-fronts and flows.
The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian
2010-01-01
Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g., by activating air-conditioning systems. The most promising development prospects are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is to detect the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames of proprietary data collected from 39 persons of both sexes, different ages and body heights, as well as different objects such as bags and rearward/forward-facing child restraint systems.
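A minimal sketch of running the Viola-Jones detector, as shipped with OpenCV, on a single cabin frame. The image path is a placeholder, and the detector parameters would need tuning for the variable in-vehicle illumination discussed above.

```python
# Sketch: Haar-cascade (Viola-Jones) face detection on one frame with OpenCV.
# "cabin_frame.png" is a placeholder path; parameters are illustrative.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("cabin_frame.png")                    # placeholder image path
if frame is None:
    raise SystemExit("provide a real frame path")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                            # crude illumination normalisation

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(40, 40))
print("seat occupied" if len(faces) > 0 else "no face found", faces)
```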
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
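A hedged sketch of the idea on a toy spin-glass-like problem rather than a phylogeny search: simulated annealing with a geometric cooling schedule, recording C(T) = Var(E)/T^2 at each temperature so that peaks analogous to the specific-heat transitions described above become visible. The energy function and schedule are illustrative.

```python
# Simulated annealing on a toy problem with per-temperature specific heat tracking.
import random
import math

random.seed(0)
n = 40
weights = [random.uniform(-1, 1) for _ in range(n)]

def energy(state):
    # toy energy: randomly weighted agreement between neighbouring spins on a ring
    return sum(w * state[i] * state[(i + 1) % n] for i, w in enumerate(weights))

state = [random.choice([-1, 1]) for _ in range(n)]
E = energy(state)
T = 2.0
while T > 0.05:
    samples = []
    for _ in range(2000):
        i = random.randrange(n)
        state[i] *= -1                       # propose a single-spin flip
        E_new = energy(state)
        if E_new <= E or random.random() < math.exp((E - E_new) / T):
            E = E_new                        # accept
        else:
            state[i] *= -1                   # reject: undo the flip
        samples.append(E)
    mean = sum(samples) / len(samples)
    var = sum((e - mean) ** 2 for e in samples) / len(samples)
    print(f"T = {T:.3f}  <E> = {mean:+.3f}  C(T) = {var / T**2:.3f}")
    T *= 0.8                                 # geometric cooling schedule
```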
A combinatorial morphospace for angiosperm pollen
NASA Astrophysics Data System (ADS)
Mander, Luke
2016-04-01
The morphology of angiosperm (flowering plant) pollen is extraordinarily diverse. This diversity results from variations in the morphology of discrete anatomical components. These components include the overall shape of a pollen grain, the stratification of the exine, the number and form of any apertures, the type of dispersal unit, and the nature of any surface ornamentation. Different angiosperm pollen morphotypes reflect different combinations of these discrete components. In this talk, I ask the following question: given the anatomical components of angiosperm pollen that are known to exist in the plant kingdom, how many unique biologically plausible combinations of these components are there? I explore this question from the perspective of enumerative combinatorics using an algorithm I have written in the Python programming language. This algorithm (1) calculates the number of combinations of these components; (2) enumerates those combinations; and (3) graphically displays those combinations. The result is a combinatorial morphospace that reflects an underlying notion that the process of morphogenesis in angiosperm pollen can be thought of as an n choose k counting problem. I compare the morphology of extant and fossil angiosperm pollen grains to this morphospace, and suggest that from a combinatorial point of view angiosperm pollen is not as diverse as it could be, which may be a result of developmental constraints.
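A sketch of the enumeration idea, assuming a simplified set of components and character states that are placeholders for the study's actual character list: itertools.product generates every combination, and the total is simply the product of the per-component state counts.

```python
# Enumerate all combinations of discrete anatomical components.
# Component categories and states below are simplified placeholders.
from itertools import product

components = {
    "shape":         ["spheroidal", "prolate", "oblate"],
    "aperture_no":   ["0", "1", "3", "many"],
    "aperture_type": ["pore", "colpus", "colporate"],
    "ornamentation": ["psilate", "reticulate", "echinate", "striate"],
    "dispersal":     ["monad", "tetrad", "polyad"],
}

combos = list(product(*components.values()))
print("total combinations:", len(combos))          # product of the state counts
for combo in combos[:5]:                           # show a few example morphotypes
    print(dict(zip(components.keys(), combo)))
```

In a real analysis one would prune combinations that are not biologically plausible before comparing the remainder with observed pollen morphotypes.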
Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin
2014-01-01
This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but are negative, even though the actual states must be nonnegative. In such situations if negative values are set to zero, or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensemble.
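A minimal sketch of the constrained correction described above, assuming a standard EnKF analysis step has already produced an (unconstrained) ensemble member: the member is projected onto the set of non-negative states with the prescribed total mass by solving a small quadratic program, here with SciPy's SLSQP as a generic stand-in for the paper's quadratic-programming formulation.

```python
# Project an EnKF analysis member onto {x >= 0, sum(x) = total_mass}.
import numpy as np
from scipy.optimize import minimize

def constrain_member(x_analysis, total_mass):
    """Closest state (least squares) to the analysis member that is non-negative
    and conserves the prescribed total mass."""
    n = len(x_analysis)
    res = minimize(
        lambda x: 0.5 * np.sum((x - x_analysis) ** 2),
        x0=np.clip(x_analysis, 0.0, None),
        jac=lambda x: x - x_analysis,
        bounds=[(0.0, None)] * n,
        constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - total_mass}],
        method="SLSQP",
    )
    return res.x

member = np.array([0.4, -0.1, 0.3, 0.6])      # analysis member with a negative value
print(constrain_member(member, total_mass=member.sum()))
```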
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG.
Ball, Kenneth; Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals.
Predictive representations can link model-based reinforcement learning to model-free mechanisms.
Russek, Evan M; Momennejad, Ida; Botvinick, Matthew M; Gershman, Samuel J; Daw, Nathaniel D
2017-09-01
Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
Simulation of minimally invasive vascular interventions for training purposes.
Alderliesten, Tanja; Konings, Maurits K; Niessen, Wiro J
2004-01-01
To master the skills required to perform minimally invasive vascular interventions, proper training is essential. A computer simulation environment has been developed to provide such training. The simulation is based on an algorithm specifically developed to simulate the motion of a guide wire--the main instrument used during these interventions--in the human vasculature. In this paper, the design and model of the computer simulation environment is described and first results obtained with phantom and patient data are presented. To simulate minimally invasive vascular interventions, a discrete representation of a guide wire is used which allows modeling of guide wires with different physical properties. An algorithm for simulating the propagation of a guide wire within a vascular system, on the basis of the principle of minimization of energy, has been developed. Both longitudinal translation and rotation are incorporated as possibilities for manipulating the guide wire. The simulation is based on quasi-static mechanics. Two types of energy are introduced: internal energy related to the bending of the guide wire, and external energy resulting from the elastic deformation of the vessel wall. A series of experiments were performed on phantom and patient data. Simulation results are qualitatively compared with 3D rotational angiography data. The results indicate plausible behavior of the simulation.
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG
Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals. PMID:27340397
Predictive representations can link model-based reinforcement learning to model-free mechanisms
Botvinick, Matthew M.
2017-01-01
Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation. PMID:28945743
The optimization on flow scheme of helium liquefier with genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, H. R.; Xiong, L. Y.; Peng, N.; Liu, L. Q.
2017-01-01
There are several ways to organize the flow scheme of the helium liquefiers, such as arranging the expanders in parallel (reverse Brayton stage) or in series (modified Brayton stages). In this paper, the inlet mass flow and temperatures of expanders in Collins cycle are optimized using genetic algorithm (GA). Results show that maximum liquefaction rate can be obtained when the system is working at the optimal parameters. However, the reliability of the system is not well due to high wheel speed of the first turbine. Study shows that the scheme in which expanders are arranged in series with heat exchangers between them has higher operation reliability but lower plant efficiency when working at the same situation. Considering both liquefaction rate and system stability, another flow scheme is put forward hoping to solve the dilemma. The three configurations are compared from different aspects, they are respectively economic cost, heat exchanger size, system reliability and exergy efficiency. In addition, the effect of heat capacity ratio on heat transfer efficiency is discussed. A conclusion of choosing liquefier configuration is given in the end, which is meaningful for the optimal design of helium liquefier.
Providing integrity, authenticity, and confidentiality for header and pixel data of DICOM images.
Al-Haj, Ali
2015-04-01
Exchange of medical images over public networks is subjected to different types of security threats. This has triggered persisting demands for secured telemedicine implementations that will provide confidentiality, authenticity, and integrity for the transmitted images. The medical image exchange standard (DICOM) offers mechanisms to provide confidentiality for the header data of the image but not for the pixel data. On the other hand, it offers mechanisms to achieve authenticity and integrity for the pixel data but not for the header data. In this paper, we propose a crypto-based algorithm that provides confidentially, authenticity, and integrity for the pixel data, as well as for the header data. This is achieved by applying strong cryptographic primitives utilizing internally generated security data, such as encryption keys, hashing codes, and digital signatures. The security data are generated internally from the header and the pixel data, thus a strong bond is established between the DICOM data and the corresponding security data. The proposed algorithm has been evaluated extensively using DICOM images of different modalities. Simulation experiments show that confidentiality, authenticity, and integrity have been achieved as reflected by the results we obtained for normalized correlation, entropy, PSNR, histogram analysis, and robustness.
Learning in engineered multi-agent systems
NASA Astrophysics Data System (ADS)
Menon, Anup
Consider the problem of maximizing the total power produced by a wind farm. Due to aerodynamic interactions between wind turbines, each turbine maximizing its individual power---as is the case in present-day wind farms---does not lead to optimal farm-level power capture. Further, there are no good models to capture the said aerodynamic interactions, rendering model based optimization techniques ineffective. Thus, model-free distributed algorithms are needed that help turbines adapt their power production on-line so as to maximize farm-level power capture. Motivated by such problems, the main focus of this dissertation is a distributed model-free optimization problem in the context of multi-agent systems. The set-up comprises of a fixed number of agents, each of which can pick an action and observe the value of its individual utility function. An individual's utility function may depend on the collective action taken by all agents. The exact functional form (or model) of the agent utility functions, however, are unknown; an agent can only measure the numeric value of its utility. The objective of the multi-agent system is to optimize the welfare function (i.e. sum of the individual utility functions). Such a collaborative task requires communications between agents and we allow for the possibility of such inter-agent communications. We also pay attention to the role played by the pattern of such information exchange on certain aspects of performance. We develop two algorithms to solve this problem. The first one, engineered Interactive Trial and Error Learning (eITEL) algorithm, is based on a line of work in the Learning in Games literature and applies when agent actions are drawn from finite sets. While in a model-free setting, we introduce a novel qualitative graph-theoretic framework to encode known directed interactions of the form "which agents' action affect which others' payoff" (interaction graph). We encode explicit inter-agent communications in a directed graph (communication graph) and, under certain conditions, prove convergence of agent joint action (under eITEL) to the welfare optimizing set. The main condition requires that the union of interaction and communication graphs be strongly connected; thus the algorithm combines an implicit form of communication (via interactions through utility functions) with explicit inter-agent communications to achieve the given collaborative goal. This work has kinship with certain evolutionary computation techniques such as Simulated Annealing; the algorithm steps are carefully designed such that it describes an ergodic Markov chain with a stationary distribution that has support over states where agent joint actions optimize the welfare function. The main analysis tool is perturbed Markov chains and results of broader interest regarding these are derived as well. The other algorithm, Collaborative Extremum Seeking (CES), uses techniques from extremum seeking control to solve the problem when agent actions are drawn from the set of real numbers. In this case, under the assumption of existence of a local minimizer for the welfare function and a connected undirected communication graph between agents, a result regarding convergence of joint action to a small neighborhood of a local optimizer of the welfare function is proved. 
Since extremum seeking control uses a simultaneous gradient estimation-descent scheme, gradient information available in the continuous action space formulation is exploited by the CES algorithm to yield improved convergence speeds. The effectiveness of this algorithm for the wind farm power maximization problem is evaluated via simulations. Lastly, we turn to a different question regarding role of the information exchange pattern on performance of distributed control systems by means of a case study for the vehicle platooning problem. In the vehicle platoon control problem, the objective is to design distributed control laws for individual vehicles in a platoon (or a road-train) that regulate inter-vehicle distances at a specified safe value while the entire platoon follows a leader-vehicle. While most of the literature on the problem deals with some inadequacy in control performance when the information exchange is of the nearest neighbor-type, we consider an arbitrary graph serving as information exchange pattern and derive a relationship between how a certain indicator of control performance is related to the information pattern. Such analysis helps in understanding qualitative features of the `right' information pattern for this problem.
Courellis, Hristos; Mullen, Tim; Poizner, Howard; Cauwenberghs, Gert; Iversen, John R.
2017-01-01
Quantification of dynamic causal interactions among brain regions constitutes an important component of conducting research and developing applications in experimental and translational neuroscience. Furthermore, cortical networks with dynamic causal connectivity in brain-computer interface (BCI) applications offer a more comprehensive view of brain states implicated in behavior than do individual brain regions. However, models of cortical network dynamics are difficult to generalize across subjects because current electroencephalography (EEG) signal analysis techniques are limited in their ability to reliably localize sources across subjects. We propose an algorithmic and computational framework for identifying cortical networks across subjects in which dynamic causal connectivity is modeled among user-selected cortical regions of interest (ROIs). We demonstrate the strength of the proposed framework using a “reach/saccade to spatial target” cognitive task performed by 10 right-handed individuals. Modeling of causal cortical interactions was accomplished through measurement of cortical activity using (EEG), application of independent component clustering to identify cortical ROIs as network nodes, estimation of cortical current density using cortically constrained low resolution electromagnetic brain tomography (cLORETA), multivariate autoregressive (MVAR) modeling of representative cortical activity signals from each ROI, and quantification of the dynamic causal interaction among the identified ROIs using the Short-time direct Directed Transfer function (SdDTF). The resulting cortical network and the computed causal dynamics among its nodes exhibited physiologically plausible behavior, consistent with past results reported in the literature. This physiological plausibility of the results strengthens the framework's applicability in reliably capturing complex brain functionality, which is required by applications, such as diagnostics and BCI. PMID:28566997
Ampudia-Blasco, Francisco Javier; García-Soidán, Francisco Javier; Rubio Sánchez, Manuela; Phan, Tra-Mi
2017-03-01
DiaScope ® is a software to help in individualized prescription of antidiabetic treatment in type 2 diabetes. This study assessed its value and acceptability by different professionals. DiaScope ® was developed based on the ADA-EASD 2012 algorithm and on the recommendation of 12 international diabetes experts using the RAND/UCLA appropriateness method. The current study was performed at a single session. In the first phase, 5 clinical scenarios were evaluated, selecting the most appropriated therapeutic option among 4 possibilities (initial test). In a second phase, the same clinical cases were evaluated with DiaScope ® (final test).Opinion surveys on DiaScope ® were also performed (questionnaire). DiaScope ® changed the selected option 1 or more times in 70.5% of cases. Among 275 evaluated questionnaires, 54.0% strongly agree that DiaScope ® allowed finding easily a similar therapeutic scenario to the corresponding patient, and 52.5 among the obtained answers were clinically plausible. Up to 58.3% will recommend it to a colleague. In particular, primary care physicians with >20 years of professional dedication found with DiaScope ® the most appropriate option for a particular situation against specialists or those with less professional dedication (p<.05). DiaScope ® is an easy to use tool for antidiabetic drug prescription that provides plausible solutions and is especially useful for primary care physicians with more years of professional practice. Copyright © 2017 SEEN. Publicado por Elsevier España, S.L.U. All rights reserved.
Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam
2018-04-01
Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., k-SVD sparse learning or the traditional Principal Component Analysis. Discrete regularization is introduced as a penalty functions to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
Lowther, Andrew D; Lydersen, Christian; Fedak, Mike A; Lovell, Phil; Kovacs, Kit M
2015-01-01
Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species however data are prone to varying amounts of spatial error; the recent application of state-space models (SSMs) to the location estimation problem have provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and generate a greater number of locations that its predecessor. We provide a comprehensive assessment of this new estimation process performance on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily-available SSM in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSM highlights the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.
Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.
Walter, Florian; Röhrbein, Florian; Knoll, Alois
2015-12-01
The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview over selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
He, Chenlong; Feng, Zuren; Ren, Zhigang
2018-01-01
In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.
Feng, Zuren; Ren, Zhigang
2018-01-01
In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared. PMID:29462217
A novel approach for connecting temporal-ontologies with blood flow simulations.
Weichert, F; Mertens, C; Walczak, L; Kern-Isberner, G; Wagner, M
2013-06-01
In this paper an approach for developing a temporal domain ontology for biomedical simulations is introduced. The ideas are presented in the context of simulations of blood flow in aneurysms using the Lattice Boltzmann Method. The advantages in using ontologies are manyfold: On the one hand, ontologies having been proven to be able to provide medical special knowledge e.g., key parameters for simulations. On the other hand, based on a set of rules and the usage of a reasoner, a system for checking the plausibility as well as tracking the outcome of medical simulations can be constructed. Likewise, results of simulations including data derived from them can be stored and communicated in a way that can be understood by computers. Later on, this set of results can be analyzed. At the same time, the ontologies provide a way to exchange knowledge between researchers. Lastly, this approach can be seen as a black-box abstraction of the internals of the simulation for the biomedical researcher as well. This approach is able to provide the complete parameter sets for simulations, part of the corresponding results and part of their analysis as well as e.g., geometry and boundary conditions. These inputs can be transferred to different simulation methods for comparison. Variations on the provided parameters can be automatically used to drive these simulations. Using a rule base, unphysical inputs or outputs of the simulation can be detected and communicated to the physician in a suitable and familiar way. An example for an instantiation of the blood flow simulation ontology and exemplary rules for plausibility checking are given. Copyright © 2013 Elsevier Inc. All rights reserved.
Chemical evolution of groundwater in the Wilcox aquifer of the northern Gulf Coastal Plain, USA
NASA Astrophysics Data System (ADS)
Haile, Estifanos; Fryar, Alan E.
2017-12-01
The Wilcox aquifer is a major groundwater resource in the northern Gulf Coastal Plain (lower Mississippi Valley) of the USA, yet the processes controlling water chemistry in this clastic aquifer have received relatively little attention. The current study combines analyses of solutes and stable isotopes in groundwater, petrography of core samples, and geochemical modeling to identify plausible reactions along a regional flow path ˜300 km long. The hydrochemical facies evolves from Ca-HCO3 upgradient to Na-HCO3 downgradient, with a sequential zonation of terminal electron-accepting processes from Fe(III) reduction through SO4 2- reduction to methanogenesis. In particular, decreasing SO4 2- and increasing δ34S of SO4 2- along the flow path, as well as observations of authigenic pyrite in core samples, provide evidence of SO4 2- reduction. Values of δ13C in groundwater suggest that dissolved inorganic carbon is contributed both by oxidation of sedimentary organic matter and calcite dissolution. Inverse modeling identified multiple plausible sets of reactions between sampled wells, which typically involved cation exchange, pyrite precipitation, CH2O oxidation, and dissolution of amorphous Fe(OH)3, calcite, or siderite. These reactions are consistent with processes identified in previous studies of Atlantic Coastal Plain aquifers. Contrasts in groundwater chemistry between the Wilcox and the underlying McNairy and overlying Claiborne aquifers indicate that confining units are relatively effective in limiting cross-formational flow, but localized cross-formational mixing could occur via fault zones. Consequently, increased pumping in the vicinity of fault zones could facilitate upward movement of saline water into the Wilcox.
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient, by moving from conventional channel sizes (~ 9mm) to smaller channel sizes (< 5mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameter ranging from 0.5 to 2.0mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state of the art designs. The air-side performance of various tube bundle configurationsmore » are analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast-parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. Maximum Entropy Design method is used for sampling and Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of air-side performance heat exchangers including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air side pressure drop and doubled air heat transfer coefficients compared to a high performance compact micro channel heat exchanger with same capacity and flow rates.« less
Secure Localization in the Presence of Colluders in WSNs
Barbeau, Michel; Corriveau, Jean-Pierre; Garcia-Alfaro, Joaquin; Yao, Meng
2017-01-01
We address the challenge of correctly estimating the position of wireless sensor network (WSN) nodes in the presence of malicious adversaries. We consider adversarial situations during the execution of node localization under three classes of colluding adversaries. We describe a decentralized algorithm that aims at determining the position of nodes in the presence of such colluders. Colluders are assumed to either forge or manipulate the information they exchange with the other nodes of the WSN. This algorithm allows location-unknown nodes to successfully detect adversaries within their communication range. Numeric simulation is reported to validate the approach. Results show the validity of the proposal, both in terms of localization and adversary detection. PMID:28817077
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
Mori, Takaharu; Jung, Jaewoon; Sugita, Yuji
2013-12-10
Conformational sampling is fundamentally important for simulating complex biomolecular systems. The generalized-ensemble algorithm, especially the temperature replica-exchange molecular dynamics method (T-REMD), is one of the most powerful methods to explore structures of biomolecules such as proteins, nucleic acids, carbohydrates, and also of lipid membranes. T-REMD simulations have focused on soluble proteins rather than membrane proteins or lipid bilayers, because explicit membranes do not keep their structural integrity at high temperature. Here, we propose a new generalized-ensemble algorithm for membrane systems, which we call the surface-tension REMD method. Each replica is simulated in the NPγT ensemble, and surface tensions in a pair of replicas are exchanged at certain intervals to enhance conformational sampling of the target membrane system. We test the method on two biological membrane systems: a fully hydrated DPPC (1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine) lipid bilayer and a WALP23-POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) membrane system. During these simulations, a random walk in surface tension space is realized. Large-scale lateral deformation (shrinking and stretching) of the membranes takes place in all of the replicas without collapse of the lipid bilayer structure. There is accelerated lateral diffusion of DPPC lipid molecules compared with conventional MD simulation, and a much wider range of tilt angle of the WALP23 peptide is sampled due to large deformation of the POPC lipid bilayer and through peptide-lipid interactions. Our method could be applicable to a wide variety of biological membrane systems.
Quantum Mechanics, Pattern Recognition, and the Mammalian Brain
NASA Astrophysics Data System (ADS)
Chapline, George
2008-10-01
Although the usual way of representing Markov processes is time asymmetric, there is a way of describing Markov processes, due to Schrodinger, which is time symmetric. This observation provides a link between quantum mechanics and the layered Bayesian networks that are often used in automated pattern recognition systems. In particular, there is a striking formal similarity between quantum mechanics and a particular type of Bayesian network, the Helmholtz machine, which provides a plausible model for how the mammalian brain recognizes important environmental situations. One interesting aspect of this relationship is that the "wake-sleep" algorithm for training a Helmholtz machine is very similar to the problem of finding the potential for the multi-channel Schrodinger equation. As a practical application of this insight it may be possible to use inverse scattering techniques to study the relationship between human brain wave patterns, pattern recognition, and learning. We also comment on whether there is a relationship between quantum measurements and consciousness.
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes are assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes are also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data, however, the NSS code was found to be significantly more efficient than the F3D code.
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems occurring due to drifting image acquisition conditions, background noise and high variation in colony features in experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB) that implements a supervised automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm taking segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation method. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
Neural Mechanism for Stochastic Behavior During a Competitive Game
Soltani, Alireza; Lee, Daeyeol; Wang, Xiao-Jing
2006-01-01
Previous studies have shown that non-human primates can generate highly stochastic choice behavior, especially when this is required during a competitive interaction with another agent. To understand the neural mechanism of such dynamic choice behavior, we propose a biologically plausible model of decision making endowed with synaptic plasticity that follows a reward-dependent stochastic Hebbian learning rule. This model constitutes a biophysical implementation of reinforcement learning, and it reproduces salient features of behavioral data from an experiment with monkeys playing a matching pennies game. Due to interaction with an opponent and learning dynamics, the model generates quasi-random behavior robustly in spite of intrinsic biases. Furthermore, non-random choice behavior can also emerge when the model plays against a non-interactive opponent, as observed in the monkey experiment. Finally, when combined with a meta-learning algorithm, our model accounts for the slow drift in the animal’s strategy based on a process of reward maximization. PMID:17015181
Alterations in choice behavior by manipulations of world model.
Green, C S; Benson, C; Kersten, D; Schrater, P
2010-09-14
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
Alterations in choice behavior by manipulations of world model
Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.
2010-01-01
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507
Using Grid Cells for Navigation
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-01-01
Summary Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
Fast Construction of Near Parsimonious Hybridization Networks for Multiple Phylogenetic Trees.
Mirzaei, Sajad; Wu, Yufeng
2016-01-01
Hybridization networks represent plausible evolutionary histories of species that are affected by reticulate evolutionary processes. An established computational problem on hybridization networks is constructing the most parsimonious hybridization network such that each of the given phylogenetic trees (called gene trees) is "displayed" in the network. There have been several previous approaches, including an exact method and several heuristics, for this NP-hard problem. However, the exact method is only applicable to a limited range of data, and heuristic methods can be less accurate and also slow sometimes. In this paper, we develop a new algorithm for constructing near parsimonious networks for multiple binary gene trees. This method is more efficient for large numbers of gene trees than previous heuristics. This new method also produces more parsimonious results on many simulated datasets as well as a real biological dataset than a previous method. We also show that our method produces topologically more accurate networks for many datasets.
Learning invariance from natural images inspired by observations in the primary visual cortex.
Teichmann, Michael; Wiltschut, Jan; Hamker, Fred
2012-05-01
The human visual system has the remarkable ability to largely recognize objects invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only few studies at hand cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulations of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
Entropy and long-range memory in random symbolic additive Markov chains
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2016-06-01
The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
Entropy and long-range memory in random symbolic additive Markov chains.
Melnik, S S; Usatenko, O V
2016-06-01
The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
Graph Matching for the Registration of Persistent Scatterers to Optical Oblique Imagery
NASA Astrophysics Data System (ADS)
Schack, L.; Soergel, U.; Heipke, C.
2016-06-01
Matching Persistent Scatterers (PS) to airborne optical imagery is one possibility to augment applications and deepen the understanding of SAR processing and products. While recently this data registration task was done with PS and optical nadir images the alternatively available optical oblique imagery is mostly neglected. Yet, the sensing geometry of oblique images is very similar in terms of viewing direction with respect to SAR.We exploit the additional information coming with these optical sensors to assign individual PS to single parts of buildings. The key idea is to incorporate topology information which is derived by grouping regularly aligned PS at facades and use it together with a geometry based measure in order to establish a consistent and meaningful matching result. We formulate this task as an optimization problem and derive a graph matching based algorithm with guaranteed convergence in order to solve it. Two exemplary case studies show the plausibility of the presented approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Devarakonda, Aditya; Racah, Evan
We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely-used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity) and CX (for data interpretability). We apply these methods to 1.6TB particle physics, 2.2TB and 16TB climate modeling and 1.1TB bioimaging data. The data matrices are tall-and-skinny which enable the algorithms to map conveniently into Spark’s data parallel model. We perform scalingmore » experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.« less
NASA Astrophysics Data System (ADS)
Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao
2018-01-01
Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image by a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further creates robust and effective depictions of the targets' change in posture and viewpoint. To validate the model with a human visual search mechanism, two eyetrack experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention. Moreover, it indicates the plausibility of utilizing visual track data to identify targets.
Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza
2017-01-01
Capturing complete medical knowledge is challenging-often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mechanisms apply patterns from human thought processes, such as generalization, similarity and interpolation, based on attributional, hierarchical, and relational knowledge. Plausible reasoning mechanisms include inductive reasoning , which generalizes the commonalities among the data to induce new rules, and analogical reasoning , which is guided by data similarities to infer new facts. By further leveraging rich, biomedical Semantic Web ontologies to represent medical knowledge, both known and tentative, we increase the accuracy and expressivity of plausible reasoning, and cope with issues such as data heterogeneity, inconsistency and interoperability. In this paper, we present a Semantic Web-based, multi-strategy reasoning approach, which integrates deductive and plausible reasoning and exploits Semantic Web technology to solve complex clinical decision support queries. We evaluated our system using a real-world medical dataset of patients with hepatitis, from which we randomly removed different percentages of data (5%, 10%, 15%, and 20%) to reflect scenarios with increasing amounts of incomplete medical knowledge. To increase the reliability of the results, we generated 5 independent datasets for each percentage of missing values, which resulted in 20 experimental datasets (in addition to the original dataset). The results show that plausibly inferred knowledge extends the coverage of the knowledge base by, on average, 2%, 7%, 12%, and 16% for datasets with, respectively, 5%, 10%, 15%, and 20% of missing values. This expansion in the KB coverage allowed solving complex disease diagnostic queries that were previously unresolvable, without losing the correctness of the answers. However, compared to deductive reasoning, data-intensive plausible reasoning mechanisms yield a significant performance overhead. We observed that plausible reasoning approaches, by generating tentative inferences and leveraging domain knowledge of experts, allow us to extend the coverage of medical knowledge bases, resulting in improved clinical decision support. Second, by leveraging OWL ontological knowledge, we are able to increase the expressivity and accuracy of plausible reasoning methods. Third, our approach is applicable to clinical decision support systems for a range of chronic diseases.
Fuzzy-logic based Q-Learning interference management algorithms in two-tier networks
NASA Astrophysics Data System (ADS)
Xu, Qiang; Xu, Zezhong; Li, Li; Zheng, Yan
2017-10-01
Unloading from macrocell network and enhancing coverage can be realized by deploying femtocells in the indoor scenario. However, the system performance of the two-tier network could be impaired by the co-tier and cross-tier interference. In this paper, a distributed resource allocation scheme is studied when each femtocell base station is self-governed and the resource cannot be assigned centrally through the gateway. A novel Q-Learning interference management scheme is proposed, that is divided into cooperative and independent part. In the cooperative algorithm, the interference information is exchanged between the cell-edge users which are classified by the fuzzy logic in the same cell. Meanwhile, we allocate the orthogonal subchannels to the high-rate cell-edge users to disperse the interference power when the data rate requirement is satisfied. The resource is assigned directly according to the minimum power principle in the independent algorithm. Simulation results are provided to demonstrate the significant performance improvements in terms of the average data rate, interference power and energy efficiency over the cutting-edge resource allocation algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, T.E.; Bennett, D.H.
2002-08-01
In multimedia mass-balance models, the soil compartment is an important sink as well as a conduit for transfers to vegetation and shallow groundwater. Here a novel approach for constructing soil transport algorithms for multimedia fate models is developed and evaluated. The resulting algorithms account for diffusion in gas and liquid components; advection in gas, liquid, or solid phases; and multiple transformation processes. They also provide an explicit quantification of the characteristic soil penetration depth. We construct a compartment model using three and four soil layers to replicate with high reliability the flux and mass distribution obtained from the exact analytical solution describing the transient dispersion, advection, and transformation of chemicals in soil with fixed properties and boundary conditions. Unlike the analytical solution, which requires fixed boundary conditions, the soil compartment algorithms can be dynamically linked to other compartments (air, vegetation, ground water, surface water) in multimedia fate models. We demonstrate and evaluate the performance of the algorithms in a model with applications to benzene, benzo(a)pyrene, MTBE, TCDD, and tritium.
Quantum plug n’ play: modular computation in the quantum regime
NASA Astrophysics Data System (ADS)
Thompson, Jayne; Modi, Kavan; Vedral, Vlatko; Gu, Mile
2018-01-01
Classical computation is modular. It exploits plug n’ play architectures which allow us to use pre-fabricated circuits without knowing their construction. This bestows advantages such as allowing parts of the computational process to be outsourced, and permitting individual circuit components to be exchanged and upgraded. Here, we introduce a formal framework to describe modularity in the quantum regime. We demonstrate a ‘no-go’ theorem, stipulating that it is not always possible to make use of quantum circuits without knowing their construction. This has significant consequences for quantum algorithms, forcing the circuit implementation of certain quantum algorithms to be rebuilt almost entirely from scratch after incremental changes in the problem—such as changing the number being factored in Shor’s algorithm. We develop a workaround capable of restoring modularity, and apply it to design a modular version of Shor’s algorithm that exhibits increased versatility and reduced complexity. In doing so we pave the way to a realistic framework whereby ‘quantum chips’ and remote servers can be invoked (or assembled) to implement various parts of a more complex quantum computation.
Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer
NASA Technical Reports Server (NTRS)
Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw
2000-01-01
Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared with a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.
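A minimal sketch of the setup described above, assuming Python's standard process pool as a stand-in for the multiprocessor machine: fitness evaluations of individual designs run concurrently, while selection and Gaussian-perturbation reproduction remain in the serial, non-distributable part. The sphere objective replaces the hub-frame structural analysis and is purely illustrative.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def fitness(design):
    # Placeholder objective (sphere function); the paper's hub-frame structural
    # analysis would be evaluated here, one design per processor.
    return sum(x * x for x in design)

def gaussian_offspring(parent, sigma=0.1):
    # Reproduction by Gaussian perturbation rather than bit exchange.
    return [x + random.gauss(0.0, sigma) for x in parent]

def run_ga(n_vars=10, pop_size=32, generations=50, workers=4):
    population = [[random.uniform(-1, 1) for _ in range(n_vars)]
                  for _ in range(pop_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            # Concurrent part: independent fitness evaluations.
            scores = list(pool.map(fitness, population))
            # Non-distributable part: selection and reproduction.
            ranked = [p for _, p in sorted(zip(scores, population))]
            parents = ranked[: pop_size // 2]
            population = parents + [gaussian_offspring(random.choice(parents))
                                    for _ in range(pop_size - len(parents))]
        return min(pool.map(fitness, population))

if __name__ == "__main__":
    print("Best fitness found:", run_ga())
```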
Application of plausible reasoning to AI-based control systems
NASA Technical Reports Server (NTRS)
Berenji, Hamid; Lum, Henry, Jr.
1987-01-01
Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.
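Of the techniques surveyed above, the Dempster-Shafer theory of evidence lends itself to a compact illustration; the sketch below (not taken from the paper) combines two mass functions with Dempster's rule and reports belief and plausibility for a hypothesis. The frame of discernment and the mass assignments are invented.

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

def belief(m, hypothesis):
    return sum(mass for s, mass in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    return sum(mass for s, mass in m.items() if s & hypothesis)

# Illustrative frame of discernment and evidence from two sources.
frame = frozenset({"fault", "no_fault"})
m_sensor = {frozenset({"fault"}): 0.6, frame: 0.4}
m_expert = {frozenset({"fault"}): 0.3, frozenset({"no_fault"}): 0.2, frame: 0.5}

m = combine(m_sensor, m_expert)
h = frozenset({"fault"})
print("Bel(fault) =", round(belief(m, h), 3), " Pl(fault) =", round(plausibility(m, h), 3))
```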
Parametric Optimization of Thermoelectric Generators for Waste Heat Recovery
NASA Astrophysics Data System (ADS)
Huang, Shouyuan; Xu, Xianfan
2016-10-01
This paper presents a methodology for design optimization of thermoelectric-based waste heat recovery systems called thermoelectric generators (TEGs). The aim is to maximize the power output from thermoelectrics which are used as add-on modules to an existing gas-phase heat exchanger, without negative impacts, e.g., maintaining a minimum heat dissipation rate from the hot side. A numerical model is proposed for TEG coupled heat transfer and electrical power output. This finite-volume-based model simulates different types of heat exchangers, i.e., counter-flow and cross-flow, for TEGs. Multiple-filled skutterudites and bismuth-telluride-based thermoelectric modules (TEMs) are applied, respectively, in higher and lower temperature regions. The response surface methodology is implemented to determine the optimized TEG size along and across the flow direction and the height of thermoelectric couple legs, and to analyze their covariance and relative sensitivity. A genetic algorithm is employed to verify the globality of the optimum. The presented method will be generally useful for optimizing heat-exchanger-based TEG performance.
Price dynamics and market power in an agent-based power exchange
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Guerci, Eric; Raberto, Marco
2005-05-01
This paper presents an agent-based model of a power exchange. Supply of electric power is provided by competing generating companies, whereas demand is assumed to be inelastic with respect to price and constant over time. The transmission network topology is assumed to be a fully connected graph and no transmission constraints are taken into account. The price formation process follows a common scheme for real power exchanges: a clearing house mechanism with uniform price, i.e., with the price set equal across all matched buyer-seller pairs. A single class of generating companies is considered, characterized by a linear cost function for each technology. Generating companies compete for the sale of electricity through repeated rounds of the uniform auction and determine their supply functions according to production costs. However, an individual reinforcement learning algorithm characterizes the generating companies' behaviors in order to attain the expected maximum possible profit in each auction round. The paper investigates how the market competitive equilibrium is affected by market microstructure and production costs.
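A minimal sketch of the clearing-house mechanism with uniform price assumed in the model: offers are sorted by price and matched against the inelastic demand, and the marginal accepted offer sets the single market price. Offer quantities, prices, and the demand level are invented.

```python
# Uniform-price clearing with inelastic demand: offers are sorted by price and
# accepted until demand is met; the last accepted offer sets the market price.
# Offer quantities/prices and the demand level below are illustrative only.

def clear_market(offers, demand):
    """offers: list of (price, quantity, seller); demand: inelastic total load."""
    accepted, served = [], 0.0
    for price, quantity, seller in sorted(offers):
        if served >= demand:
            break
        take = min(quantity, demand - served)
        accepted.append((seller, take, price))
        served += take
    clearing_price = accepted[-1][2] if accepted else None
    # Every matched buyer-seller pair trades at the same (uniform) price.
    return clearing_price, [(seller, qty) for seller, qty, _ in accepted]

offers = [(22.0, 50, "GenCo A"), (18.5, 40, "GenCo B"), (30.0, 60, "GenCo C")]
price, dispatch = clear_market(offers, demand=80)
print("Clearing price:", price)
print("Dispatch:", dispatch)
```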
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian
2018-06-01
In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme exploiting the quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, the quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has a larger key space and higher security since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness and unpredictability than those of low-dimensional hyper-chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.
Karen Schleeweis; Samuel N. Goward; Chengquan Huang; John L. Dwyer; Jennifer L. Dungan; Mary A. Lindsey; Andrew Michaelis; Khaldoun Rishmawi; Jeffery G. Masek
2016-01-01
Using the NASA Earth Exchange platform, the North American Forest Dynamics (NAFD) project mapped forest history wall-to-wall, annually for the contiguous US (1986-2010) using the Vegetation Change Tracker algorithm. As with any effort to identify real changes in remotely sensed time-series, data gaps, shifts in seasonality, misregistration, inconsistent radiometry and...
J. Chris Toney; Karen G. Schleeweis; Jennifer Dungan; Andrew Michaelis; Todd Schroeder; Gretchen G. Moisen
2015-01-01
The North American Forest Dynamics (NAFD) project's Attribution Team is completing nationwide processing of historic Landsat data to provide a comprehensive annual, wall-to-wall analysis of US disturbance history, with attribution, over the last 25+ years. Per-pixel time series analysis based on a new nonparametric curve fitting algorithm yields several metrics useful...
NASA Astrophysics Data System (ADS)
Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.
2018-05-01
A hybrid Landau Lifshitz Gilbert/kinetic Monte Carlo algorithm is used to simulate experimental magnetic hysteresis loops for dual layer exchange coupled composite media. The calculation of the rate coefficients and difficulties arising from low energy barriers, a fundamental problem of the kinetic Monte Carlo method, are discussed and the methodology used to treat them in the present work is described. The results from simulations are compared with experimental vibrating sample magnetometer measurements on dual layer CoPtCrB/CoPtCrSiO media and a quantitative relationship between the thickness of the exchange control layer separating the layers and the effective exchange constant between the layers is obtained. Estimates of the energy barriers separating magnetically reversed states of the individual grains in zero applied field as well as the saturation field at sweep rates relevant to the bit write speeds in magnetic recording are also presented. The significance of this comparison between simulations and experiment and the estimates of the material parameters obtained from it are discussed in relation to optimizing the performance of magnetic storage media.
Li, Xianfeng; Murthy, Sanjeeva; Latour, Robert A.
2011-01-01
A new empirical sampling method termed “temperature intervals with global exchange of replicas and reduced radii” (TIGER3) is presented and demonstrated to efficiently equilibrate entangled long-chain molecular systems such as amorphous polymers. The TIGER3 algorithm is a replica exchange method in which simulations are run in parallel over a range of temperature levels at and above a designated baseline temperature. The replicas sampled at temperature levels above the baseline are run through a series of cycles with each cycle containing four stages – heating, sampling, quenching, and temperature level reassignment. The method allows chain segments to pass through one another at elevated temperature levels during the sampling stage by reducing the van der Waals radii of the atoms, thus eliminating chain entanglement problems. Atomic radii are then returned to their regular values and re-equilibrated at elevated temperature prior to quenching to the baseline temperature. Following quenching, replicas are compared using a Metropolis Monte Carlo exchange process for the construction of an approximate Boltzmann-weighted ensemble of states and then reassigned to the elevated temperature levels for additional sampling. Further system equilibration is performed by periodic implementation of the previously developed TIGER2 algorithm between cycles of TIGER3, which applies thermal cycling without radii reduction. When coupled with a coarse-grained modeling approach, the combined TIGER2/TIGER3 algorithm yields fast equilibration of bulk-phase models of amorphous polymer, even for polymers with complex, highly branched structures. The developed method was tested by modeling the polyethylene melt. The calculated properties of chain conformation and chain segment packing agreed well with published data. The method was also applied to generate equilibrated structural models of three increasingly complex amorphous polymer systems: poly(methyl methacrylate), poly(butyl methacrylate), and DTB-succinate copolymer. Calculated glass transition temperature (Tg) and structural parameter profile (S(q)) for each resulting polymer model were found to be in close agreement with experimental Tg values and structural measurements obtained by x-ray diffraction, thus validating that the developed methods provide realistic models of amorphous polymer structure. PMID:21769156
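The following is not the TIGER2/TIGER3 code itself, but a sketch of the Metropolis-style replica-comparison step that temperature replica exchange methods share, applied here to a toy ladder of temperature levels; the energies, temperatures, and energy unit are illustrative values.

```python
import math
import random

K_B = 0.0019872041  # kcal/(mol K), assuming energies in kcal/mol

def swap_accepted(energy_i, temp_i, energy_j, temp_j):
    """Standard replica-exchange (parallel tempering) Metropolis criterion."""
    delta = (1.0 / (K_B * temp_i) - 1.0 / (K_B * temp_j)) * (energy_j - energy_i)
    return random.random() < min(1.0, math.exp(-delta))

# Toy demonstration: replicas at a ladder of temperature levels with toy energies.
temperatures = [300.0, 350.0, 410.0, 480.0]
energies = [-120.0, -112.0, -101.0, -95.0]
replica_at = list(range(len(temperatures)))  # which replica sits at each level

for sweep in range(5):
    for i in range(len(temperatures) - 1):   # attempt neighbor-level swaps
        ei, ej = energies[replica_at[i]], energies[replica_at[i + 1]]
        if swap_accepted(ei, temperatures[i], ej, temperatures[i + 1]):
            replica_at[i], replica_at[i + 1] = replica_at[i + 1], replica_at[i]

print("Replica order across temperature levels:", replica_at)
```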
NASA Astrophysics Data System (ADS)
Rinne, J.; Tuittila, E. S.; Peltola, O.; Li, X.; Raivonen, M.; Alekseychik, P.; Haapanala, S.; Pihlatie, M.; Aurela, M.; Mammarella, I.; Vesala, T.
2017-12-01
Models for calculating methane emission from wetland ecosystems typically relate the methane emission to carbon dioxide assimilation. Other parameters that control emission in these models are e.g. peat temperature and water table position. Many of these relations are derived from spatial variation between chamber measurements by a space-for-time approach. Continuous, longer-term, ecosystem-scale methane emission measurements by the eddy covariance method provide us independent data to assess the validity of the relations derived by the space-for-time approach. We have analyzed an eleven-year methane flux data set, measured at a boreal fen, together with data on environmental parameters and carbon dioxide exchange to assess the relations to typical model drivers. The data were obtained by the eddy covariance method at the Siikaneva mire complex, Southern Finland, during 2005-2015. The methane flux showed seasonal cycles in methane emission, with the strongest correlation with peat temperature at 35 cm depth. The temperature relation was exponential throughout the whole peat temperature range of 0-16°C. The methane emission normalized to remove temperature dependence showed a non-monotonic relation to water table position and a positive correlation with gross primary production (GPP). However, inclusion of these as explaining variables improved the algorithm-measurement correlation only slightly, with r2=0.74 for the exponential temperature-dependent algorithm, r2=0.76 for the temperature-water table algorithm, and r2=0.79 for the temperature-GPP algorithm. The methane emission lagged behind net ecosystem exchange (NEE) and GPP by two to three weeks. Annual methane emission ranged from 8.3 to 14 gC m-2, and was 20 % of NEE and 2.8 % of GPP. The inter-annual variation of methane emission was of similar magnitude as that of GPP and ecosystem respiration (Reco), but much smaller than that of NEE. The interannual variability of June-September average methane emission correlated significantly with that of GPP, indicating a close link between these two processes in boreal fen ecosystems.
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, which is a channel load forecasting algorithm based on a recursive Kalman filter and employs multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, which is a transmission power control algorithm, is proposed. To verify KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City is employed. By comparing this forecast with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification process indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
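A hedged sketch of the two ingredients described above: a one-dimensional Kalman-filter forecast of channel load followed by a threshold-based beacon power adjustment. The noise variances, target load window, power steps, and the toy load model are invented and are not the KF-BCLF/CLF-BTPC parameters.

```python
import random

# 1-D Kalman filter forecasting channel load, followed by a simple
# threshold-based beacon power adjustment. All constants are illustrative.
Q, R = 1e-4, 4e-4            # process / measurement noise variances
TARGET_RANGE = (0.4, 0.6)    # desired channel load window
POWER_MIN, POWER_MAX, STEP = 5.0, 30.0, 1.0  # dBm

x_est, p_est = 0.5, 1.0      # initial load estimate and its variance
tx_power = 20.0

def measure_channel_load(power):
    # Placeholder measurement: load grows with transmission power plus noise.
    return min(1.0, max(0.0, 0.02 * power + 0.1 + random.gauss(0, 0.02)))

for t in range(200):
    # Predict (random-walk load model), then correct with the new measurement.
    x_pred, p_pred = x_est, p_est + Q
    z = measure_channel_load(tx_power)
    k_gain = p_pred / (p_pred + R)
    x_est = x_pred + k_gain * (z - x_pred)
    p_est = (1.0 - k_gain) * p_pred
    # Pre-adjust power so the forecast load stays inside the target window.
    if x_est > TARGET_RANGE[1]:
        tx_power = max(POWER_MIN, tx_power - STEP)
    elif x_est < TARGET_RANGE[0]:
        tx_power = min(POWER_MAX, tx_power + STEP)

print("Final beacon power: %.1f dBm, forecast load: %.2f" % (tx_power, x_est))
```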
Geochemical Reaction Mechanism Discovery from Molecular Simulation
Stack, Andrew G.; Kent, Paul R. C.
2014-11-10
Methods to explore reactions using computer simulation are becoming increasingly quantitative, versatile, and robust. In this review, a rationale for how molecular simulation can help build better geochemical kinetics models is first given. We summarize some common methods that geochemists use to simulate reaction mechanisms, specifically classical molecular dynamics and quantum chemical methods, and discuss their strengths and weaknesses. Useful tools such as umbrella sampling and metadynamics that enable one to explore reactions are discussed. Several case studies wherein geochemists have used these tools to understand reaction mechanisms are presented, including water exchange and sorption on aqueous species and mineral surfaces, surface charging, crystal growth and dissolution, and electron transfer. The impact that molecular simulation has had on our understanding of geochemical reactivity is highlighted in each case. In the future, it is anticipated that molecular simulation of geochemical reaction mechanisms will become more commonplace as a tool to validate and interpret experimental data, and provide a check on the plausibility of geochemical kinetic models.
Massive isotopic effect in vacuum UV photodissociation of N2 and implications for meteorite data
Chakraborty, Subrata; Muskatel, B. H.; Jackson, Teresa L.; Ahmed, Musahid; Levine, R. D.; Thiemens, Mark H.
2014-01-01
Nitrogen isotopic distributions in the solar system extend across an enormous range, from −400‰ in the solar wind and Jovian atmosphere to about 5,000‰ in organic matter in carbonaceous chondrites. Distributions such as these require complex processing of nitrogen reservoirs and extraordinary isotope effects. While theoretical models invoke ion-neutral exchange reactions outside the protoplanetary disk and photochemical self-shielding on the disk surface to explain the variations, there are no experiments to substantiate these models. Experimental results of N2 photolysis at vacuum UV wavelengths in the presence of hydrogen are presented here, which show a wide range of enriched δ15N values from 648‰ to 13,412‰ in product NH3, depending upon photodissociation wavelength. The measured enrichment range in photodissociation of N2 plausibly explains the range of δ15N in extraterrestrial materials. This study suggests the importance of photochemical processing of the nitrogen reservoirs within the solar nebula. PMID:25267643
HOW MUCH FAVORABLE SELECTION IS LEFT IN MEDICARE ADVANTAGE?
PRICE, MARY; MCWILLIAMS, J. MICHAEL; HSU, JOHN; MCGUIRE, THOMAS G.
2015-01-01
The health economics literature contains two models of selection, one with endogenous plan characteristics to attract good risks and one with fixed plan characteristics; neither model contains a regulator. Medicare Advantage, a principal example of selection in the literature, is, however, subject to anti-selection regulations. Because selection causes economic inefficiency and because the historically favorable selection into Medicare Advantage plans increased government cost, the effectiveness of the anti-selection regulations is an important policy question, especially since the Medicare Advantage program has grown to comprise 30 percent of Medicare beneficiaries. Moreover, similar anti-selection regulations are being used in health insurance exchanges for those under 65. Contrary to earlier work, we show that the strengthened anti-selection regulations that Medicare introduced starting in 2004 markedly reduced government overpayment attributable to favorable selection in Medicare Advantage. At least some of the remaining selection is plausibly related to fixed plan characteristics of Traditional Medicare versus Medicare Advantage rather than changed selection strategies by Medicare Advantage plans. PMID:26389127
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tinsley, B.A.; Rohrbaugh, R.P.; Sahai, Y.
Observations have been made at Mt. Haleakala, Hawaii (dip lat. approx. 22°N) and Cachoeira Paulista, Brasil (dip lat. approx. 12°S) of emissions excited by particle precipitation during periods of magnetic activity. The first negative bands of N2+ were found to have a high degree of vibrational excitation at both sites, and with the absence of emissions attributable to hydrogen and helium, this finding leads to the interpretation that the excitation was due to a flux of precipitating oxygen atoms or ions, more plausibly the former, produced by charge exchange of ring current O+ ions with exospheric neutral constituents. More laboratory work is needed to properly interpret the data, but crude estimates of the associated energy deposition and ionization production fall in the ranges 10^-1 to 10^1 mW m^-2 and 10^0 to 10^2 cm^-3 s^-1, respectively.
The next generation of scenarios for climate change research and assessment.
Moss, Richard H; Edmonds, Jae A; Hibbard, Kathy A; Manning, Martin R; Rose, Steven K; van Vuuren, Detlef P; Carter, Timothy R; Emori, Seita; Kainuma, Mikiko; Kram, Tom; Meehl, Gerald A; Mitchell, John F B; Nakicenovic, Nebojsa; Riahi, Keywan; Smith, Steven J; Stouffer, Ronald J; Thomson, Allison M; Weyant, John P; Wilbanks, Thomas J
2010-02-11
Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth's climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.
Petrenko, Taras; Kossmann, Simone; Neese, Frank
2011-02-07
In this paper, we present the implementation of efficient approximations to time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation (TDA) for hybrid density functionals. For the calculation of the TDDFT/TDA excitation energies and analytical gradients, we combine the resolution of identity (RI-J) algorithm for the computation of the Coulomb terms and the recently introduced "chain of spheres exchange" (COSX) algorithm for the calculation of the exchange terms. It is shown that for extended basis sets, the RIJCOSX approximation leads to speedups of up to 2 orders of magnitude compared to traditional methods, as demonstrated for hydrocarbon chains. The accuracy of the adiabatic transition energies, excited state structures, and vibrational frequencies is assessed on a set of 27 excited states for 25 molecules with the configuration interaction singles and hybrid TDDFT/TDA methods using various basis sets. Compared to the canonical values, the typical error in transition energies is of the order of 0.01 eV. Similar to the ground-state results, excited state equilibrium geometries differ by less than 0.3 pm in the bond distances and 0.5° in the bond angles from the canonical values. The typical error in the calculated excited state normal coordinate displacements is of the order of 0.01, and relative error in the calculated excited state vibrational frequencies is less than 1%. The errors introduced by the RIJCOSX approximation are, thus, insignificant compared to the errors related to the approximate nature of the TDDFT methods and basis set truncation. For TDDFT/TDA energy and gradient calculations on Ag-TB2-helicate (156 atoms, 2732 basis functions), it is demonstrated that the COSX algorithm parallelizes almost perfectly (speedup ~26-29 for 30 processors). The exchange-correlation terms also parallelize well (speedup ~27-29 for 30 processors). The solution of the Z-vector equations shows a speedup of ~24 on 30 processors. The parallelization efficiency for the Coulomb terms can be somewhat smaller (speedup ~15-25 for 30 processors), but their contribution to the total calculation time is small. Thus, the parallel program completes a Becke3-Lee-Yang-Parr energy and gradient calculation on the Ag-TB2-helicate in less than 4 h on 30 processors. We also present the necessary extension of the Lagrangian formalism, which enables the calculation of the TDDFT excited state properties in the frozen-core approximation. The algorithms described in this work are implemented into the ORCA electronic structure system.
NASA Astrophysics Data System (ADS)
Biswas, Rahul; Blackburn, Lindy; Cao, Junwei; Essick, Reed; Hodge, Kari Alison; Katsavounidis, Erotokritos; Kim, Kyungmin; Kim, Young-Min; Le Bigot, Eric-Olivier; Lee, Chang-Hwan; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.; Tao, Ye; Vaulin, Ruslan; Wang, Xiaoge
2013-09-01
The sensitivity of searches for astrophysical transients in data from the Laser Interferometer Gravitational-wave Observatory (LIGO) is generally limited by the presence of transient, non-Gaussian noise artifacts, which occur at a high enough rate such that accidental coincidence across multiple detectors is non-negligible. These “glitches” can easily be mistaken for transient gravitational-wave signals, and their robust identification and removal will help any search for astrophysical gravitational waves. We apply machine-learning algorithms (MLAs) to the problem, using data from auxiliary channels within the LIGO detectors that monitor degrees of freedom unaffected by astrophysical signals. Noise sources may produce artifacts in these auxiliary channels as well as the gravitational-wave channel. The number of auxiliary-channel parameters describing these disturbances may also be extremely large; high dimensionality is an area where MLAs are particularly well suited. We demonstrate the feasibility and applicability of three different MLAs: artificial neural networks, support vector machines, and random forests. These classifiers identify and remove a substantial fraction of the glitches present in two different data sets: four weeks of LIGO’s fourth science run and one week of LIGO’s sixth science run. We observe that all three algorithms agree on which events are glitches to within 10% for the sixth-science-run data, and support this by showing that the different optimization criteria used by each classifier generate the same decision surface, based on a likelihood-ratio statistic. Furthermore, we find that all classifiers obtain similar performance to the benchmark algorithm, the ordered veto list, which is optimized to detect pairwise correlations between transients in LIGO auxiliary channels and glitches in the gravitational-wave data. This suggests that most of the useful information currently extracted from the auxiliary channels is already described by this model. Future performance gains are thus likely to involve additional sources of information, rather than improvements in the classification algorithms themselves. We discuss several plausible sources of such new information as well as the ways of propagating it through the classifiers into gravitational-wave searches.
Dornay, M; Sanger, T D
1993-01-01
A planar 17-muscle model of the monkey's arm based on realistic biomechanical measurements was simulated on a Symbolics Lisp Machine. The simulator implements the equilibrium point hypothesis for the control of arm movements. Given initial and final desired positions, it generates a minimum-jerk desired trajectory of the hand and uses the backdriving algorithm to determine an appropriate sequence of motor commands to the muscles (Flash 1987; Mussa-Ivaldi et al. 1991; Dornay 1991b). These motor commands specify a temporal sequence of stable (attractive) equilibrium positions which lead to the desired hand movement. A strong disadvantage of the simulator is that it has no memory of previous computations. Determining the desired trajectory using the minimum-jerk model is instantaneous, but the laborious backdriving algorithm is slow, and can take up to one hour for some trajectories. The complexity of the required computations makes it a poor model for biological motor control. We propose a computationally simpler and more biologically plausible method for control which achieves the benefits of the backdriving algorithm. A fast-learning, tree-structured network (Sanger 1991c) was trained to remember the knowledge obtained by the backdriving algorithm. The neural network learned the nonlinear mapping from a 2-dimensional Cartesian planar hand position (x, y) to a 17-dimensional motor command space (u1, ..., u17). Learning 20 training trajectories, each composed of 26 sample points [[x, y], [u1, ..., u17]], took only 20 min on a Sun-4 Sparc workstation. After the learning stage, new, untrained test trajectories as well as the original trajectories of the hand were given to the neural network as input. The network calculated the required motor commands for these movements. The resulting movements were close to the desired ones for both the training and test cases.
PGCA: An algorithm to link protein groups created from MS/MS data
Sasaki, Mayu; Hollander, Zsuzsanna; Smith, Derek; McManus, Bruce; McMaster, W. Robert; Ng, Raymond T.; Cohen Freue, Gabriela V.
2017-01-01
The quantitation of proteins using shotgun proteomics has gained popularity in the last decades, simplifying sample handling procedures, removing extensive protein separation steps and achieving a relatively high throughput readout. The process starts with the digestion of the protein mixture into peptides, which are then separated by liquid chromatography and sequenced by tandem mass spectrometry (MS/MS). At the end of the workflow, recovering the identity of the proteins originally present in the sample is often a difficult and ambiguous process, because more than one protein identifier may match a set of peptides identified from the MS/MS spectra. To address this identification problem, many MS/MS data processing software tools combine all plausible protein identifiers matching a common set of peptides into a protein group. However, this solution introduces new challenges in studies with multiple experimental runs, which can be characterized by three main factors: i) protein groups' identifiers are local, i.e., they vary from run to run, ii) the composition of each group may change across runs, and iii) the supporting evidence of proteins within each group may also change across runs. Since in general there is no conclusive evidence about the absence of proteins in the groups, protein groups need to be linked across different runs in subsequent statistical analyses. We propose an algorithm, called the Protein Group Code Algorithm (PGCA), to link groups from multiple experimental runs by forming global protein groups from connected local groups. The algorithm is computationally inexpensive and enables the connection and analysis of lists of protein groups across runs needed in biomarker studies. We illustrate the identification problem and the stability of the PGCA mapping using 65 iTRAQ experimental runs. Further, we use two biomarker studies to show how PGCA enables the discovery of relevant candidate protein group markers with similar but non-identical compositions in different runs. PMID:28562641
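What follows is not the published PGCA implementation, but a sketch of the underlying idea: run-local protein groups that share a protein identifier are merged into global groups, here via a union-find pass over all runs. The accession numbers are made up.

```python
# Link run-local protein groups into global groups by connecting any two local
# groups that share a protein identifier (a union-find sketch of the idea
# behind PGCA; the accession numbers below are made up).

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def global_groups(runs):
    """runs: list of runs; each run is a list of local groups (sets of protein IDs)."""
    uf = UnionFind()
    for run in runs:
        for group in run:
            ids = list(group)
            for other in ids[1:]:
                uf.union(ids[0], other)   # proteins in one local group stay together
    clusters = {}
    for run in runs:
        for group in run:
            root = uf.find(next(iter(group)))
            clusters.setdefault(root, set()).update(group)
    return list(clusters.values())

run1 = [{"P12345", "P12345-2"}, {"Q99999"}]
run2 = [{"P12345", "A00001"}, {"Q99999", "Q88888"}]
print(global_groups([run1, run2]))
```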
Ferrante, Andrea; Anderson, Matthew W; Klug, Candice S; Gorski, Jack
2008-01-01
HLA-DM (DM) mediates exchange of peptides bound to MHC class II (MHCII) during the epitope selection process. Although DM has been shown to have two activities, peptide release and MHC class II refolding, a clear characterization of the mechanism by which DM facilitates peptide exchange has remained elusive. We have previously demonstrated that peptide binding to and dissociation from MHCII in the absence of DM are cooperative processes, likely related to conformational changes in the peptide-MHCII complex. Here we show that DM promotes peptide release by a non-cooperative process, whereas it enhances cooperative folding of the exchange peptide. Through electron paramagnetic resonance (EPR) and fluorescence polarization (FP) we show that DM releases prebound peptide very poorly in the absence of a candidate peptide for the exchange process. The affinity and concentration of the candidate peptide are also important for the release of the prebound peptide. Increased fluorescence energy transfer between the prebound and exchange peptides in the presence of DM is evidence for a tetramolecular complex which resolves in favor of the peptide that has superior folding properties. This study shows that both the peptide-releasing activity on loaded MHCII and the facilitation of MHCII binding by a candidate exchange peptide are integral to DM-mediated epitope selection. The exchange process is initiated only in the presence of candidate peptides, avoiding possible release of a prebound peptide and loss of a potential epitope. In a tetramolecular transitional complex, the candidate peptides are checked for their ability to replace the pre-bound peptide with a geometry that allows the rebinding of the original peptide. Thus, DM promotes a "compare-exchange" sorting algorithm on an available peptide pool. Such a "third party"-mediated mechanism may be generally applicable for diverse ligand recognition in other biological systems.
Foy, Jeffrey E; LoCasto, Paul C; Briner, Stephen W; Dyar, Samantha
2017-02-01
Readers rapidly check new information against prior knowledge during validation, but research is inconsistent as to whether source credibility affects validation. We argue that readers are likely to accept highly plausible assertions regardless of source, but that high source credibility may boost acceptance of claims that are less plausible based on general world knowledge. In Experiment 1, participants read narratives with assertions for which the plausibility varied depending on the source. For high credibility sources, we found that readers were faster to read information confirming these assertions relative to contradictory information. We found the opposite patterns for low credibility characters. In Experiment 2, readers read claims from the same high or low credibility sources, but the claims were always plausible based on general world knowledge. Readers consistently took longer to read contradictory information, regardless of source. In Experiment 3, participants read modified versions of "The Tell-Tale Heart," which was narrated entirely by an unreliable source. We manipulated the plausibility of a target event, as well as whether high credibility characters within the story provided confirmatory or contradictory information about the narrator's description of the target event. Though readers rated the narrator as being insane, they were more likely to believe the narrator's assertions about the target event when it was plausible and corroborated by other characters. We argue that sourcing research would benefit from focusing on the relationship between source credibility, message credibility, and multiple sources within a text.
NASA Astrophysics Data System (ADS)
Wang, Liping; Ji, Yusheng; Liu, Fuqiang
The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in the OFDMA two-hop relay system is more complex than that in the conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to various traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms into multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
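As a hedged illustration of the end-to-end extension of proportional fair (PF) scheduling mentioned above, the sketch below treats a relayed user's rate as the bottleneck of its two hops and, in each slot, serves the user with the largest ratio of instantaneous rate to average throughput; all rates and the smoothing constant are invented.

```python
import random

# Proportional fairness (PF) scheduling sketch. For a relayed user the
# end-to-end rate is taken as the bottleneck of its two hops, loosely in the
# spirit of the end-to-end extension described above. All channel rates and
# the smoothing constant are illustrative placeholders.

BETA = 0.05  # smoothing factor for the average-throughput history

def instantaneous_rate(user):
    if user["single_hop"]:
        return random.uniform(*user["link"])
    hop1 = random.uniform(*user["bs_rs"])
    hop2 = random.uniform(*user["rs_ue"])
    return min(hop1, hop2)            # two-hop end-to-end bottleneck rate

users = [
    {"name": "UE1 (single-hop)", "single_hop": True, "link": (2, 10), "avg": 1e-3},
    {"name": "UE2 (relayed)", "single_hop": False, "bs_rs": (4, 12),
     "rs_ue": (1, 6), "avg": 1e-3},
]

for slot in range(1000):
    inst = [instantaneous_rate(u) for u in users]
    # PF metric: instantaneous rate divided by long-run average throughput.
    chosen = max(range(len(users)), key=lambda i: inst[i] / users[i]["avg"])
    for i, u in enumerate(users):
        served = inst[i] if i == chosen else 0.0
        u["avg"] = (1 - BETA) * u["avg"] + BETA * served

for u in users:
    print(u["name"], "average throughput:", round(u["avg"], 2))
```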
Wang, Yue; Luo, Jin; Hao, Shiying; Xu, Haihua; Shin, Andrew Young; Jin, Bo; Liu, Rui; Deng, Xiaohong; Wang, Lijuan; Zheng, Le; Zhao, Yifan; Zhu, Chunqing; Hu, Zhongkai; Fu, Changlin; Hao, Yanpeng; Zhao, Yingzhen; Jiang, Yunliang; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Todd, Rogow; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B
2015-12-01
In order to proactively manage congestive heart failure (CHF) patients, an effective CHF case finding algorithm is required to process both structured and unstructured electronic medical records (EMR) to allow complementary and cost-efficient identification of CHF patients. We set out to identify CHF cases from both EMR-codified and natural language processing (NLP) found cases. Using narrative clinical notes from all Maine Health Information Exchange (HIE) patients, the NLP case finding algorithm was retrospectively (July 1, 2012-June 30, 2013) developed with a random subset of HIE-associated facilities, and blind-tested with the remaining facilities. The NLP-based method was integrated into a live HIE population exploration system and validated prospectively (July 1, 2013-June 30, 2014). A total of 18,295 codified CHF patients were included in the Maine HIE. Among the 253,803 subjects without CHF codings, our case finding algorithm prospectively identified 2411 uncodified CHF cases. The positive predictive value (PPV) is 0.914, and 70.1% of these 2411 cases were found to have CHF histories in the clinical notes. A CHF case finding algorithm was developed, tested and prospectively validated. The successful integration of the CHF case finding algorithm into the Maine HIE live system is expected to improve Maine CHF care. Copyright © 2015. Published by Elsevier Ireland Ltd.
New Retrieval Algorithms for Geophysical Products from GLI and MODIS Data
NASA Technical Reports Server (NTRS)
Dodge, James C.; Simpson, James J.
2004-01-01
Below is the 1st year progress report for NAG5-13435 "New Retrieval Algorithms for Geophysical Products from GLI and MODIS Data". Activity on this project has been coordinated with our NASA DB project NAG5-9604. For your convenience, this report has six sections and an Appendix. Sections I - III discuss specific activities undertaken during the past year to analyze/use MODIS data. Section IV formally states our intention to no longer pursue any research using JAXA's (formerly NASDA's) GLI instrument which catastrophically failed very early after launch (also see the Appendix). Section V provides some indications of directions for second year activities based on our January 2004 telephone discussions and email exchanges. A brief summary is given in Section VI.
Estimation of Carbon Flux of Forest Ecosystem over Qilian Mountains by BIOME-BGC Model
NASA Astrophysics Data System (ADS)
Yan, Min; Tian, Xin; Li, Zengyuan; Chen, Erxue; Li, Chunmei
2014-11-01
The gross primary production (GPP) and net ecosystem exchange (NEE) are important indicators for carbon fluxes. This study aims at evaluating the forest GPP and NEE over the Qilian Mountains using meteorological, remotely sensed and other ancillary data at a large scale. To realize this, the widely used ecological-process-based model, Biome-BGC, and the remote-sensing-based model, the MODIS GPP algorithm, were selected for the simulation of the forest carbon fluxes. The combination of these two models was based on calibrating the Biome-BGC with the optimized MODIS GPP algorithm. The simulated GPP and NEE values were evaluated against the eddy covariance observed GPPs and NEEs, and good agreement was reached, with R2 = 0.76 and 0.67, respectively.
Multivariate Cryptography Based on Clipped Hopfield Neural Network.
Wang, Jia; Cheng, Lee-Ming; Su, Tong
2018-02-01
Designing secure and efficient multivariate public key cryptosystems [multivariate cryptography (MVC)] to strengthen the security of RSA and ECC in conventional and quantum computational environments continues to be a challenging research topic in recent years. In this paper, we describe multivariate public key cryptosystems based on an extended Clipped Hopfield Neural Network (CHNN) and implement them using the MVC (CHNN-MVC) framework operated in space. The Diffie-Hellman key exchange algorithm is extended into the matrix field, which illustrates the feasibility of its new applications in both classic and postquantum cryptography. The efficiency and security of our proposed public key cryptosystem, CHNN-MVC, are evaluated by simulation, and its underlying problem is found to be NP-hard. The proposed algorithm will strengthen multivariate public key cryptosystems and allows practical hardware realization.
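The abstract states that the Diffie-Hellman key exchange is extended into the matrix field; the sketch below shows only the generic idea, using powers of a shared public matrix modulo a prime, and is not the CHNN-MVC construction. The toy matrix and prime are far too small to be secure.

```python
# Diffie-Hellman-style key exchange using powers of a public matrix modulo a
# prime. Toy parameters for illustration only; this is not the CHNN-MVC scheme
# and the sizes below offer no real security.

P = 1_000_003  # public prime modulus

def mat_mult(a, b, p=P):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def mat_pow(m, e, p=P):
    n = len(m)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            result = mat_mult(result, m, p)
        m = mat_mult(m, m, p)
        e >>= 1
    return result

G = [[2, 3], [1, 4]]          # public base matrix
alice_secret, bob_secret = 123457, 654321

A = mat_pow(G, alice_secret)            # Alice -> Bob
B = mat_pow(G, bob_secret)              # Bob -> Alice
shared_alice = mat_pow(B, alice_secret)  # (G^b)^a
shared_bob = mat_pow(A, bob_secret)      # (G^a)^b

assert shared_alice == shared_bob        # both equal G^(a*b) mod P
print("Shared secret matrix:", shared_alice)
```

Powers of the same matrix commute, so both parties arrive at the same G^(ab), exactly as in the scalar Diffie-Hellman case.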
Self-learning Monte Carlo method and cumulative update in fermion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Junwei; Shen, Huitao; Qi, Yang
2017-06-07
In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub “cumulative update”, to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
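A toy illustration of the generic self-learning Monte Carlo step (not the fermionic cumulative update of the paper): a large move is proposed by running local updates under a cheap effective model and then accepted or rejected against the true energy, which restores exactness. The 1D Ising chain, couplings, and temperature below are invented.

```python
import math
import random

# Toy self-learning Monte Carlo step on a 1D Ising chain: a batch of local
# updates is generated with a cheap *effective* coupling, then accepted or
# rejected against the *true* energy. Couplings, sizes, and temperature are
# illustrative; the fermionic cumulative update in the paper is more involved.

N, BETA = 64, 0.5
J_TRUE, J_EFF = 1.0, 0.9          # J_EFF plays the role of the learned model

def energy(spins, coupling):
    return -coupling * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

def local_sweeps(spins, coupling, n_sweeps):
    """Reversible single-spin Metropolis sweeps under the given coupling."""
    s = spins[:]
    for _ in range(n_sweeps * N):
        i = random.randrange(N)
        d_e = 2.0 * coupling * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])
        if d_e <= 0 or random.random() < math.exp(-BETA * d_e):
            s[i] = -s[i]
    return s

spins = [random.choice([-1, 1]) for _ in range(N)]
accepted = 0
for step in range(500):
    candidate = local_sweeps(spins, J_EFF, n_sweeps=2)   # proposal via effective model
    d_true = energy(candidate, J_TRUE) - energy(spins, J_TRUE)
    d_eff = energy(candidate, J_EFF) - energy(spins, J_EFF)
    # Metropolis-Hastings correction: accept with min(1, exp(-beta*(dE_true - dE_eff))).
    if random.random() < min(1.0, math.exp(-BETA * (d_true - d_eff))):
        spins, accepted = candidate, accepted + 1

print("Acceptance rate of global (self-learned) moves:", accepted / 500)
```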
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1981-01-01
A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
NASA Astrophysics Data System (ADS)
Hamuro, Yoshitomo
2017-03-01
A new strategy to analyze amide hydrogen/deuterium exchange mass spectrometry (HDX-MS) data is proposed, utilizing a wider time window and isotope envelope analysis of each peptide. While most current scientific reports present HDX-MS data as a set of time-dependent deuteration levels of peptides, the ideal HDX-MS data presentation is a complete set of backbone amide hydrogen exchange rates. The ideal data set can provide single amide resolution, coverage of all exchange events, and the open/close ratio of each amide hydrogen in EX2 mechanism. Toward this goal, a typical HDX-MS protocol was modified in two aspects: measurement of a wider time window in HDX-MS experiments and deconvolution of isotope envelope of each peptide. Measurement of a wider time window enabled the observation of deuterium incorporation of most backbone amide hydrogens. Analysis of the isotope envelope instead of centroid value provides the deuterium distribution instead of the sum of deuteration levels in each peptide. A one-step, global-fitting algorithm optimized exchange rate and deuterium retention during the analysis of each amide hydrogen by fitting the deuterated isotope envelopes at all time points of all peptides in a region. Application of this strategy to cytochrome c yielded 97 out of 100 amide hydrogen exchange rates. A set of exchange rates determined by this approach is more appropriate for a patent or regulatory filing of a biopharmaceutical than a set of peptide deuteration levels obtained by a typical protocol. A wider time window of this method also eliminates false negatives in protein-ligand binding site identification.
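As a small, hedged illustration of extracting a backbone amide exchange rate from deuterium-uptake time-course data (a per-amide fit, not the paper's global isotope-envelope fit), the sketch below fits the usual single-exponential uptake model to synthetic data; it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a single amide's exchange rate from deuterium-uptake time points using
# the single-exponential model D(t) = a * (1 - exp(-k * t)). This is a
# per-amide illustration only; the strategy described above fits isotope
# envelopes of all peptides in a region globally.

def uptake(t, amplitude, rate):
    return amplitude * (1.0 - np.exp(-rate * t))

# Synthetic time points (seconds) and deuteration levels with a little noise.
rng = np.random.default_rng(0)
times = np.array([10, 30, 100, 300, 1000, 3000, 10000, 30000], dtype=float)
true_amplitude, true_rate = 0.9, 3e-3
levels = uptake(times, true_amplitude, true_rate) + rng.normal(0, 0.01, times.size)

params, _ = curve_fit(uptake, times, levels, p0=[1.0, 1e-2])
print("fitted amplitude (deuterium retention): %.3f" % params[0])
print("fitted exchange rate k: %.2e s^-1" % params[1])
```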
AIDA - from Airborne Data Inversion to In-Depth Analysis
NASA Astrophysics Data System (ADS)
Meyer, U.; Goetze, H.; Schroeder, M.; Boerner, R.; Tezkan, B.; Winsemann, J.; Siemon, B.; Alvers, M.; Stoll, J. B.
2011-12-01
The rising competition in land use, especially between water economy, agriculture, forestry, building material economy and other industries, often leads to irreversible deterioration in the water and soil system (such as salinization and degradation), which results in long-term damage to natural resources. A sustainable exploitation of the near subsurface by industry, economy and private households is a fundamental demand of a modern society. To fulfill this demand, a sound and comprehensive knowledge of structures and processes of the near subsurface is an important prerequisite. A spatial survey of the usable underground by aerogeophysical means and a subsequent ground geophysics survey targeted at special locations will deliver essential contributions within a short time that make it possible to gain the needed additional knowledge. The complementary use of airborne and ground geophysics as well as the validation, assimilation and improvement of current findings by geological and hydrogeological investigations and plausibility tests leads to the following key questions: a) Which new and/or improved automatic algorithms (joint inversion, data assimilation and the like) are useful to describe the structural setting of the usable subsurface by user-specific characteristics such as water volume, layer thicknesses, porosities, etc.? b) What are the physical relations of the measured parameters (such as electrical conductivities, magnetic susceptibilities, densities, etc.)? c) How can we deduce characteristics or parameters from the observations which describe near-subsurface structures such as groundwater systems, their charge, discharge and recharge, vulnerabilities and other quantities? d) How plausible and realistic are the numerically obtained results in relation to user-specific questions and parameters? e) Is it possible to compile material flux balances that describe spatial and time-dependent impacts of environmental changes on aquifers and soils by repeated airborne surveys? To follow up on the questions raised, the project aims to achieve the following goals: a) Development of new and expansion of existing inversion strategies to improve structural parameter information on different space and time scales. b) Development, modification, and tests of a multi-parameter inversion (joint inversion). c) Development of new quantitative approaches in data assimilation and plausibility studies. d) Compilation of optimized workflows for fast employment by end users. e) The primary goal is to solve comparable society-related problems (such as salinization, erosion, contamination, degradation, etc.) in regions within Germany and abroad by generalization of the project results.
National Aerospace Leadership Initiative - Phase I
2008-09-30
Devised and validated CFD code for operation of a micro-channel heat exchanger. The work was published at the 2008 AIAA Annual Meeting and Exposition...and (3) preparation to implement this algorithm in TURBO. Heat Transfer Capability In the short and medium term, the following plan has been adopted...to provide heat transfer capability to the TURBO code: • Incorporation of a constant wall temperature boundary condition. This capability will be
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
Implementation of Dynamic Extensible Adaptive Locally Exchangeable Measures (IDEALEM) v 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sim, Alex; Lee, Dongeun; Wu, K. John
2016-03-04
Handling large streaming data is essential for various applications such as network traffic analysis, social networks, energy cost trends, and environment modeling. However, it is in general intractable to store, compute, search, and retrieve large streaming data. This software addresses a fundamental issue, which is to reduce the size of large streaming data and still obtain accurate statistical analysis. As an example, when a high-speed network such as a 100 Gbps network is monitored, the collected measurement data rapidly grows so that polynomial time algorithms (e.g., Gaussian processes) become intractable. One possible solution to reduce the storage of vast amounts of measured data is to store a random sample, such as one out of 1000 network packets. However, such static sampling methods (linear sampling) have drawbacks: (1) it is not scalable for high-rate streaming data, and (2) there is no guarantee of reflecting the underlying distribution. In this software, we implemented a dynamic sampling algorithm, based on the recent technology from the relational dynamic bayesian online locally exchangeable measures, that reduces the storage of data records in a large scale, and still provides accurate analysis of large streaming data. The software can be used for both online and offline data records.
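The following is not the IDEALEM implementation itself, but a sketch of the general idea of distribution-aware stream reduction: a new block of measurements is stored only if its empirical histogram differs sufficiently from every block kept so far, otherwise only a reference to the closest stored block is recorded. The block size, histogram bins, distance measure, and threshold are invented.

```python
import random

# Sketch of distribution-aware stream reduction: a new block of measurements is
# stored only if its empirical histogram differs enough from every block kept
# so far; otherwise only a reference to the closest stored block is recorded.
# This illustrates the general idea of exchangeability-based reduction, not the
# actual IDEALEM algorithm or its statistical test. Parameters are invented.

BLOCK, BINS, THRESHOLD = 256, 16, 0.25

def histogram(block, lo=0.0, hi=1.0):
    counts = [0] * BINS
    for x in block:
        counts[min(BINS - 1, int((x - lo) / (hi - lo) * BINS))] += 1
    return [c / len(block) for c in counts]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

stored, references = [], []
stream = ([random.betavariate(2, 5) for _ in range(BLOCK * 30)] +
          [random.betavariate(5, 2) for _ in range(BLOCK * 10)])

for start in range(0, len(stream), BLOCK):
    h = histogram(stream[start:start + BLOCK])
    distances = [l1_distance(h, s) for s in stored]
    if not stored or min(distances) > THRESHOLD:
        stored.append(h)                                     # keep a genuinely new block
        references.append(len(stored) - 1)
    else:
        references.append(distances.index(min(distances)))  # reuse an existing block

print("blocks seen:", len(references), " blocks stored:", len(stored))
```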
Schwaiberger, David; Pickerodt, Philipp A; Pomprapa, Anake; Tjarks, Onno; Kork, Felix; Boemke, Willehad; Francis, Roland C E; Leonhardt, Steffen; Lachmann, Burkhard
2018-06-01
Adherence to low tidal volume (V T ) ventilation and selected positive end-expiratory pressures are low during mechanical ventilation for treatment of the acute respiratory distress syndrome. Using a pig model of severe lung injury, we tested the feasibility and physiological responses to a novel fully closed-loop mechanical ventilation algorithm based on the "open lung" concept. Lung injury was induced by surfactant washout in pigs (n = 8). Animals were ventilated following the principles of the "open lung approach" (OLA) using a fully closed-loop physiological feedback algorithm for mechanical ventilation. Standard gas exchange, respiratory- and hemodynamic parameters were measured. Electrical impedance tomography was used to quantify regional ventilation distribution during mechanical ventilation. Automatized mechanical ventilation provided strict adherence to low V T -ventilation for 6 h in severely lung injured pigs. Using the "open lung" approach, tidal volume delivery required low lung distending pressures, increased recruitment and ventilation of dorsal lung regions and improved arterial blood oxygenation. Physiological feedback closed-loop mechanical ventilation according to the principles of the open lung concept is feasible and provides low tidal volume ventilation without human intervention. Of importance, the "open lung approach"-ventilation improved gas exchange and reduced lung driving pressures by opening atelectasis and shifting of ventilation to dorsal lung regions.
Shen, Hujun; Czaplewski, Cezary; Liwo, Adam; Scheraga, Harold A.
2009-01-01
The kinetic-trapping problem in simulating protein folding can be overcome by using a Replica Exchange Method (REM). However, in implementing REM in molecular dynamics simulations, synchronization between processors on parallel computers is required, and communication between processors limits its ability to sample conformational space in a complex system efficiently. To minimize communication between processors during the simulation, a Serial Replica Exchange Method (SREM) has been proposed recently by Hagan et al. (J. Phys. Chem. B 2007, 111, 1416–1423). Here, we report the implementation of this new SREM algorithm with our physics-based united-residue (UNRES) force field. The method has been tested on the protein 1E0L with a temperature-independent UNRES force field and on terminally blocked deca-alanine (Ala10) and 1GAB with the recently introduced temperature-dependent UNRES force field. With the temperature-independent force field, SREM reproduces the results of REM but is more efficient in terms of wall-clock time and scales better on distributed-memory machines. However, exact application of SREM to the temperature-dependent UNRES algorithm requires the determination of a four-dimensional distribution of UNRES energy components instead of a one-dimensional energy distribution for each temperature, which is prohibitively expensive. Hence, we assumed that the temperature dependence of the force field can be ignored for neighboring temperatures. This version of SREM worked for Ala10 which is a simple system but failed to reproduce the thermodynamic results as well as regular REM on the more complex 1GAB protein. Hence, SREM can be applied to the temperature-independent but not to the temperature-dependent UNRES force field. PMID:20011673
Monte Carlo Analysis of Reservoir Models Using Seismic Data and Geostatistical Models
NASA Astrophysics Data System (ADS)
Zunino, A.; Mosegaard, K.; Lange, K.; Melnikova, Y.; Hansen, T. M.
2013-12-01
We present a study on the analysis of petroleum reservoir models consistent with seismic data and geostatistical constraints performed on a synthetic reservoir model. Our aim is to invert directly for structure and rock bulk properties of the target reservoir zone. To infer the rock facies, porosity and oil saturation seismology alone is not sufficient but a rock physics model must be taken into account, which links the unknown properties to the elastic parameters. We then combine a rock physics model with a simple convolutional approach for seismic waves to invert the "measured" seismograms. To solve this inverse problem, we employ a Markov chain Monte Carlo (MCMC) method, because it offers the possibility to handle non-linearity, complex and multi-step forward models and provides realistic estimates of uncertainties. However, for large data sets the MCMC method may be impractical because of a very high computational demand. To face this challenge one strategy is to feed the algorithm with realistic models, hence relying on proper prior information. To address this problem, we utilize an algorithm drawn from geostatistics to generate geologically plausible models which represent samples of the prior distribution. The geostatistical algorithm learns the multiple-point statistics from prototype models (in the form of training images), then generates thousands of different models which are accepted or rejected by a Metropolis sampler. To further reduce the computation time we parallelize the software and run it on multi-core machines. The solution of the inverse problem is then represented by a collection of reservoir models in terms of facies, porosity and oil saturation, which constitute samples of the posterior distribution. We are finally able to produce probability maps of the properties we are interested in by performing statistical analysis on the collection of solutions.
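The sampling loop described above can be caricatured as follows: candidate reservoir models are drawn from a geostatistical prior generator and are accepted or rejected using only the likelihood ratio, so the prior density never has to be evaluated explicitly. The independent prior draws, the Gaussian misfit, and the placeholder prior_sampler and forward functions are assumptions for illustration; in practice the proposals are usually local perturbations of the current model rather than independent realizations.

```python
import math, random

def metropolis_with_prior(prior_sampler, forward, d_obs, sigma, n_iter=10000, rng=random):
    """Accept/reject prior realizations using only the likelihood ratio.

    prior_sampler() -> one reservoir model drawn from the geostatistical prior (hypothetical helper)
    forward(m)      -> synthetic seismograms predicted for model m (hypothetical helper)
    """
    def log_like(m):
        pred = forward(m)
        return -0.5 * sum((p - d) ** 2 for p, d in zip(pred, d_obs)) / sigma ** 2

    current = prior_sampler()
    ll_cur = log_like(current)
    samples = []
    for _ in range(n_iter):
        proposal = prior_sampler()                 # proposal distribution equals the prior
        ll_prop = log_like(proposal)
        if math.log(rng.random()) < ll_prop - ll_cur:
            current, ll_cur = proposal, ll_prop
        samples.append(current)                    # the collection approximates the posterior
    return samples
```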
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winlaw, Manda; De Sterck, Hans; Sanders, Geoffrey
In very simple terms a network can be defined as a collection of points joined together by lines. Thus, networks can be used to represent connections between entities in a wide variety of fields including engineering, science, medicine, and sociology. Many large real-world networks share a surprising number of properties, leading to a strong interest in model development research, and techniques for building synthetic networks have been developed that capture these similarities and replicate real-world graphs. Modeling these real-world networks serves two purposes. First, building models that mimic the patterns and properties of real networks helps to understand the implications of these patterns and helps determine which patterns are important. If we develop a generative process to synthesize real networks we can also examine which growth processes are plausible and which are not. Secondly, high-quality, large-scale network data is often not available, because of economic, legal, technological, or other obstacles [7]. Thus, there are many instances where the systems of interest cannot be represented by a single exemplar network. As one example, consider the field of cybersecurity, where systems require testing across diverse threat scenarios and validation across diverse network structures. In these cases, where there is no single exemplar network, the systems must instead be modeled as a collection of networks in which the variation among them may be just as important as their common features. By developing processes to build synthetic models, so-called graph generators, we can build synthetic networks that capture both the essential features of a system and realistic variability. Then we can use such synthetic graphs to perform tasks such as simulations, analysis, and decision making. We can also use synthetic graphs to performance-test graph analysis algorithms, including clustering algorithms and anomaly detection algorithms.
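As a small, concrete illustration of the graph-generator idea (not the generators studied in this report), the following uses networkx to build an ensemble of synthetic scale-free graphs that share target statistics while varying realistically from one realization to the next.

```python
import networkx as nx

def synthetic_ensemble(n_graphs=10, n_nodes=1000, m_edges=3, seed0=0):
    """Generate a collection of scale-free graphs with controlled run-to-run variability."""
    return [nx.barabasi_albert_graph(n=n_nodes, m=m_edges, seed=seed0 + k)
            for k in range(n_graphs)]

ensemble = synthetic_ensemble()
mean_degrees = [2 * g.number_of_edges() / g.number_of_nodes() for g in ensemble]
print("mean degree per synthetic graph:", mean_degrees)   # close to 2*m, with some variation
```

Ensembles like this can then feed simulations or be used to stress-test clustering and anomaly-detection algorithms across structurally similar but non-identical graphs.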
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
Scellier, Benjamin; Bengio, Yoshua
2017-01-01
We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task. PMID:28522969
NASA Astrophysics Data System (ADS)
Zhang, Rui; Yao, Yi-bin; Hu, Yue-ming; Song, Wei-wei
2017-12-01
The Global Navigation Satellite System presents a plausible and cost-effective way of computing the total electron content (TEC). However, the estimated TEC can be seriously affected by the frequency-dependent differential code biases (DCB) of satellites and receivers. Unlike GPS and other satellite systems, GLONASS adopts a frequency-division multiple access mode to distinguish different satellites. This strategy leads to different wavelengths and inter-frequency biases (IFBs) for both pseudo-range and carrier phase observations, whose impacts are rarely considered in ionospheric modeling. We obtained observations from four groups of co-located stations to analyze the characteristics of the GLONASS receiver P1P2 pseudo-range IFB with a double-difference method. The results showed that the GLONASS P1P2 pseudo-range IFB remained stable over time and could reach several meters, which cannot be absorbed by the receiver DCB during ionospheric modeling. Given these characteristics, we proposed a two-step ionosphere modeling method that uses a priori IFB information. The experimental analysis showed that the new algorithm can effectively eliminate the adverse effects on ionospheric model and hardware delay parameter estimation in different space environments. During a high solar activity period, compared to the traditional GPS + GLONASS modeling algorithm, the absolute average deviation of TEC decreased from 2.17 to 2.07 TECu (TEC units); simultaneously, the average RMS of the GPS satellite DCB decreased from 0.225 to 0.219 ns, and the average deviation of the GLONASS satellite DCB decreased from 0.253 to 0.113 ns, an improvement of over 55%.
MIDAS: a database-searching algorithm for metabolite identification in metabolomics.
Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle
2014-10-07
A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
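A toy version of the core matching step clarifies the idea: predicted fragment masses are compared against observed peaks within a mass tolerance and the explained intensity is accumulated into a score. The ppm tolerance and intensity-weighted scoring below are illustrative assumptions, not MIDAS's actual scoring function.

```python
def match_score(predicted_fragment_masses, observed_peaks, tol_ppm=10.0):
    """Score a metabolite-spectrum match (MSM) by fragment/peak agreement.

    observed_peaks: list of (m/z, intensity) pairs from the measured MS/MS spectrum.
    Returns (score, annotations), where annotations map an observed m/z to its matched fragment.
    """
    score, annotations = 0.0, {}
    total_intensity = sum(i for _, i in observed_peaks) or 1.0
    for mz, intensity in observed_peaks:
        tol = mz * tol_ppm * 1e-6                       # absolute tolerance from ppm
        hits = [f for f in predicted_fragment_masses if abs(f - mz) <= tol]
        if hits:
            annotations[mz] = min(hits, key=lambda f: abs(f - mz))
            score += intensity / total_intensity        # reward explained intensity
    return score, annotations

print(match_score([133.0142, 115.0037, 89.0244], [(133.014, 800.0), (89.025, 150.0), (60.1, 40.0)]))
```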
Henrich, Andrea; Joerger, Markus; Kraff, Stefanie; Jaehde, Ulrich; Huisinga, Wilhelm; Kloft, Charlotte; Parra-Guillen, Zinnia Patricia
2017-08-01
Paclitaxel is a commonly used cytotoxic anticancer drug with potentially life-threatening toxicity at therapeutic doses and high interindividual pharmacokinetic variability. Thus, drug and effect monitoring is indicated to control dose-limiting neutropenia. Joerger et al. (2016) developed a dose individualization algorithm based on a pharmacokinetic (PK)/pharmacodynamic (PD) model describing paclitaxel and neutrophil concentrations. Furthermore, the algorithm was prospectively compared in a clinical trial against standard dosing (Central European Society for Anticancer Drug Research Study of Paclitaxel Therapeutic Drug Monitoring; 365 patients, 720 cycles) but did not substantially improve neutropenia. This might be caused by misspecifications in the PK/PD model underlying the algorithm, especially the lack of consideration of the observed cumulative pattern of neutropenia or of the platinum-based combination therapy, both of which impact neutropenia. This work aimed to externally evaluate the original PK/PD model for potential misspecifications and to refine the PK/PD model while considering the cumulative neutropenia pattern and the combination therapy. An underprediction was observed for the PK (658 samples), and the PK parameters were therefore re-estimated using the original estimates as prior information. Neutrophil concentrations (3274 samples) were overpredicted by the PK/PD model, especially for later treatment cycles when the cumulative pattern aggravated neutropenia. Three different modeling approaches (two from the literature and one newly developed) were investigated. The newly developed model, which implemented the bone marrow hypothesis semiphysiologically, was superior. This model further included an additive effect for toxicity of carboplatin combination therapy. Overall, a physiologically plausible PK/PD model was developed that can be used for dose adaptation simulations and prospective studies to further improve paclitaxel/carboplatin combination therapy. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
An optimization formulation for characterization of pulsatile cortisol secretion.
Faghih, Rose T; Dahleh, Munther A; Brown, Emery N
2015-01-01
Cortisol is released to relay information to cells to regulate metabolism and reaction to stress and inflammation. In particular, cortisol is released in the form of pulsatile signals. This low-energy method of signaling seems to be more efficient than continuous signaling. We hypothesize that there is a controller in the anterior pituitary that leads to pulsatile release of cortisol, and propose a mathematical formulation for such controller, which leads to impulse control as opposed to continuous control. We postulate that this controller is minimizing the number of secretory events that result in cortisol secretion, which is a way of minimizing the energy required for cortisol secretion; this controller maintains the blood cortisol levels within a specific circadian range while complying with the first order dynamics underlying cortisol secretion. We use an ℓ0-norm cost function for this controller, and solve a reweighed ℓ1-norm minimization algorithm for obtaining the solution to this optimization problem. We use four examples to illustrate the performance of this approach: (i) a toy problem that achieves impulse control, (ii) two examples that achieve physiologically plausible pulsatile cortisol release, (iii) an example where the number of pulses is not within the physiologically plausible range for healthy subjects while the cortisol levels are within the desired range. This novel approach results in impulse control where the impulses and the obtained blood cortisol levels have a circadian rhythm and an ultradian rhythm that are in agreement with the known physiology of cortisol secretion. The proposed formulation is a first step in developing intermittent controllers for curing cortisol deficiency. This type of bio-inspired pulse controllers can be employed for designing non-continuous controllers in brain-machine interface design for neuroscience applications.
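The reweighted ℓ1 idea admits a compact sketch: solve a weighted ℓ1-regularized least-squares problem, update the weights from the current solution, and repeat, so that small coefficients are driven to zero and the solution approaches the sparse behaviour of the ℓ0 objective. The column-rescaling trick and the use of scikit-learn's Lasso solver are implementation assumptions, not the authors' code, and the toy deconvolution example only stands in for the first-order cortisol dynamics.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reweighted_l1(A, y, lam=0.01, n_outer=5, eps=1e-3):
    """Iteratively reweighted l1 minimization (a common surrogate for an l0 objective)."""
    n = A.shape[1]
    w = np.ones(n)
    u = np.zeros(n)
    for _ in range(n_outer):
        A_scaled = A / w                      # scaling column i by 1/w_i encodes the weights
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
        lasso.fit(A_scaled, y)
        u = lasso.coef_ / w                   # undo the rescaling
        w = 1.0 / (np.abs(u) + eps)           # small coefficients get large weights next round
    return u

# Toy use: recover a sparse train of secretory impulses from first-order-smoothed observations.
rng = np.random.default_rng(0)
n = 200
u_true = np.zeros(n); u_true[[20, 75, 140]] = [1.0, 0.6, 0.8]
A = np.tril(np.exp(-(np.arange(n)[:, None] - np.arange(n)[None, :]) / 25.0))  # decay kernel
y = A @ u_true + 0.01 * rng.standard_normal(n)
support = np.nonzero(np.abs(reweighted_l1(A, y)) > 0.05)[0]
print(support)    # should lie close to the true impulse locations 20, 75, 140
```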
Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity
Albers, Christian; Westkott, Maren; Pawelzik, Klaus
2016-01-01
Precise spatio-temporal patterns of neuronal action potentials underly e.g. sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns. PMID:26900845
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable narrowing of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
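A compressed sketch of the emulate-then-rank workflow is given below: an SVR emulator is fitted to a modest number of training runs, used to predict a much larger parameter ensemble, and a random-forest permutation importance then ranks the input parameters. The synthetic response and all hyperparameters stand in for the CLASS training cases and output variables and are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 5))                         # 400 training cases, 5 snow parameters
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(400)   # stand-in land output

emulator = SVR(kernel="rbf", C=10.0).fit(X, y)         # cheap surrogate for the dynamical model
X_big = rng.uniform(size=(5000, 5))                    # emulate a much larger set of cases
y_emulated = emulator.predict(X_big)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_big, y_emulated)
importance = permutation_importance(rf, X_big, y_emulated, n_repeats=5, random_state=0)
print("parameters ranked by influence:", np.argsort(importance.importances_mean)[::-1])
```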
Large Scale Frequent Pattern Mining using MPI One-Sided Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Agarwal, Khushbu
In this paper, we propose a work-stealing runtime, the Library for Work Stealing (LibWS), using the MPI one-sided model for designing a scalable FP-Growth (the de facto frequent pattern mining algorithm) on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.
Data quality system using reference dictionaries and edit distance algorithms
NASA Astrophysics Data System (ADS)
Karbarz, Radosław; Mulawka, Jan
2015-09-01
In the real art of management it is important to make smart decisions, which in most cases is not a trivial task. Such decisions may determine production levels, the allocation of funds for investments, etc. Most of the parameters in the decision-making process, such as interest rates, the value of goods or exchange rates, may change. It is well known that these parameters are based on the data contained in data marts or data warehouses. However, if the information derived from the processed data sets is the basis for the most important management decisions, the data must be accurate, complete and current. In order to achieve high-quality data and to gain measurable business benefits from it, a data quality system should be used. The article describes the approach to the problem, presents the algorithms in detail and shows their usage. Finally, the test results are provided; they show the best algorithms (in terms of quality and quantity) for different parameters and data distributions.
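A minimal example of the reference-dictionary-plus-edit-distance approach is sketched below: each incoming value is mapped to the nearest dictionary entry when the Levenshtein distance falls within a threshold, and is left unchanged otherwise. The threshold and the tiny dictionary are purely illustrative.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (0 cost if equal)
        prev = cur
    return prev[-1]

def cleanse(value, reference, max_distance=2):
    """Replace a dirty value with the closest dictionary entry, if it is close enough."""
    best = min(reference, key=lambda r: levenshtein(value.lower(), r.lower()))
    return best if levenshtein(value.lower(), best.lower()) <= max_distance else value

print(cleanse("Warszwa", ["Warszawa", "Krakow", "Gdansk"]))   # -> "Warszawa"
```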
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh
This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error control coding, in this paper we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches to solve the SBS problem. Based on the factor graph framework, the soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processing units in the proposed framework via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.
Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem
NASA Technical Reports Server (NTRS)
Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith
2011-01-01
The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates. The total weighted tardiness is used to evaluate a schedule's quality. Therefore, ARSP can be modeled as a parallel machine scheduling problem with release times and due dates to minimize the total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop approximate or heuristic algorithms to obtain solutions in reasonable computation times. In this paper, a Meta-RaPS-ATC algorithm is implemented to create high quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness to a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with a tardiness objective, is used to construct initial solutions, which are then improved by an exchange operation. Results are presented for generated instances.
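For illustration, the sketch below pairs the standard Apparent Tardiness Cost priority index with a simple randomized construction in the spirit of Meta-RaPS. The priority percentage, the look-ahead parameter k, and the handling of not-yet-released jobs are stand-in choices, not the parameter settings used in the paper, and the improvement (exchange) phase is omitted.

```python
import math, random

def atc_index(job, t, k, p_bar):
    """Apparent Tardiness Cost priority of a job at time t (higher = more urgent)."""
    w, p, d = job["weight"], job["proc"], job["due"]
    return (w / p) * math.exp(-max(d - p - t, 0.0) / (k * p_bar))

def randomized_atc_schedule(jobs, n_machines, k=2.0, priority_pct=0.7, rng=random):
    """Greedy ATC construction with Meta-RaPS-style randomization on parallel machines."""
    p_bar = sum(j["proc"] for j in jobs) / len(jobs)
    machine_free = [0.0] * n_machines
    unscheduled, schedule = list(jobs), []
    while unscheduled:
        m = min(range(n_machines), key=lambda i: machine_free[i])   # next machine to free up
        t = machine_free[m]
        candidates = [j for j in unscheduled if j["release"] <= t] or unscheduled
        ranked = sorted(candidates, key=lambda j: atc_index(j, t, k, p_bar), reverse=True)
        job = ranked[0] if rng.random() < priority_pct else rng.choice(ranked)
        start = max(t, job["release"])
        machine_free[m] = start + job["proc"]
        schedule.append((job["id"], m, start, machine_free[m]))
        unscheduled.remove(job)
    return schedule

jobs = [{"id": i, "proc": 5 + i, "release": 2 * i, "due": 20 + 3 * i, "weight": 1 + i % 3}
        for i in range(6)]
print(randomized_atc_schedule(jobs, n_machines=2))
```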
Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji
2017-09-30
GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to extend limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in AMBER and GROMACS packages now become available in addition to CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David
2017-10-01
We report on particle-in-cell developments for the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral beam heating, these simulations include ionization, elastic and charge exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of the Poisson-corrected direct implicit, magnetic implicit, as well as second-order cloud-in-cell techniques. The hybrid algorithm, ignoring electron inertial effects, is two orders of magnitude faster than the kinetic approaches but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.
NASA Astrophysics Data System (ADS)
Li, Shuai; Wang, Yiping; Wang, Tao; Yang, Xue; Deng, Yadong; Su, Chuqi
2017-05-01
Thermoelectric generators (TEGs) have become a topic of interest for vehicle exhaust energy recovery. Electrical power generation is deeply influenced by temperature differences, temperature uniformity and the topological structure of TEGs. When dimpled surfaces are adopted in heat exchangers, the heat transfer rates can be augmented with a minimal pressure drop. However, the temperature distribution shows a large gradient along the flow direction, which has adverse effects on the power generation. In the current study, the heat exchanger performance was studied in a computational fluid dynamics (CFD) model. The dimple depth, dimple print diameter, and channel height were chosen as design variables. The objective function was defined as a combination of average temperature, temperature uniformity and pressure loss. The optimal Latin hypercube method was used as the design-of-experiments method to determine the experimental points and to analyze the sensitivity of the design variables. A Kriging surrogate model was built and verified against the database resulting from the CFD simulations. A multi-island genetic algorithm was used to optimize the structure of the heat exchanger based on the surrogate model. The results showed that the average temperature of the heat exchanger was most sensitive to the dimple depth. The pressure loss and temperature uniformity were most sensitive to the channel rear height, h2. With an optimal design of the channel structure, the temperature uniformity can be greatly improved compared with the initial exchanger, though the additional pressure loss also increased.
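A compact sketch of the surrogate-assisted loop is given below: sample the design space with a Latin hypercube, fit a Gaussian-process (Kriging) model to the sampled objective, and optimize the surrogate with an evolutionary algorithm. The toy objective stands in for the CFD-derived combination of average temperature, uniformity and pressure loss, and scipy's differential evolution stands in for the multi-island genetic algorithm.

```python
import numpy as np
from scipy.stats import qmc                              # Latin hypercube sampling
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cfd_objective(x):
    """Stand-in for the CFD-evaluated objective (dimple depth, print diameter, channel height)."""
    depth, diameter, height = x
    return (depth - 0.4) ** 2 + 0.5 * (diameter - 0.6) ** 2 + 0.2 * np.sin(5.0 * height)

design = qmc.LatinHypercube(d=3, seed=0).random(40)      # 40 "simulation" points in [0, 1]^3
responses = np.array([cfd_objective(x) for x in design])

kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
kriging.fit(design, responses)                           # surrogate replaces further CFD runs

result = differential_evolution(lambda x: kriging.predict(x.reshape(1, -1))[0],
                                bounds=[(0.0, 1.0)] * 3, seed=0)
print("surrogate optimum:", result.x, "predicted objective:", result.fun)
```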
Distributed control using linear momentum exchange devices
NASA Technical Reports Server (NTRS)
Sharkey, J. P.; Waites, Henry; Doane, G. B., III
1987-01-01
MSFC has successfully employed the use of the Vibrational Control of Space Structures (VCOSS) Linear Momentum Exchange Devices (LMEDs), which was an outgrowth of the Air Force Wright Aeronautical Laboratory (AFWAL) program, in a distributed control experiment. The control experiment was conducted in MSFC's Ground Facility for Large Space Structures Control Verification (GF/LSSCV). The GF/LSSCV's test article was well suited for this experiment in that the LMED could be judiciously placed on the ASTROMAST. The LMED placements were such that vibrational mode information could be extracted from the accelerometers on the LMED. The LMED accelerometer information was processed by the control algorithms so that the LMED masses could be accelerated to produce forces which would dampen the vibrational modes of interest. Experimental results are presented showing the LMED's capabilities.
A practical guide to replica-exchange Wang-Landau simulations
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Li, Ying Wai; Landau, David P.
2018-04-01
This paper is based on a series of tutorial lectures about the replica-exchange Wang-Landau (REWL) method given at the IX Brazilian Meeting on Simulational Physics (BMSP 2017). It provides a practical guide for the implementation of the method. A complete example code for a model system is available online. In this paper, we discuss the main parallel features of this code after a brief introduction to the REWL algorithm. The tutorial section is mainly directed at users who have written a single-walker Wang–Landau program already but might have just taken their first steps in parallel programming using the Message Passing Interface (MPI). In the last section, we answer “frequently asked questions” from users about the implementation of REWL for different scientific problems.
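For readers coming from a single-walker code, a minimal Wang-Landau loop for a toy system (N non-interacting spins in a field, E = -sum of spins) is sketched below to fix notation before parallelization; the 0.8 flatness criterion and the f to sqrt(f) schedule are standard textbook choices, and this is not the tutorial's example program.

```python
import math, random

def wang_landau(n_spins=20, flatness=0.8, ln_f_final=1e-5, seed=1):
    """Single-walker Wang-Landau estimate of ln g(E) for N non-interacting spins."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    ln_g = {E: 0.0 for E in range(-n_spins, n_spins + 1, 2)}
    hist = {E: 0 for E in ln_g}
    ln_f, E = 1.0, -sum(spins)
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.randrange(n_spins)
            E_new = E + 2 * spins[i]                     # flipping spin i changes E by +2*s_i
            if rng.random() < math.exp(ln_g[E] - ln_g[E_new]):
                spins[i] = -spins[i]
                E = E_new
            ln_g[E] += ln_f                              # update the density-of-states estimate
            hist[E] += 1
        visited = [h for h in hist.values() if h > 0]
        if min(visited) > flatness * sum(visited) / len(visited):   # histogram "flat enough"
            hist = {k: 0 for k in hist}
            ln_f *= 0.5                                  # f -> sqrt(f) modification schedule
    return ln_g

ln_g = wang_landau()
print({E: round(v - ln_g[0], 2) for E, v in sorted(ln_g.items())})   # ln g(E) relative to E = 0
```

For this toy model the output should approach ln C(N, n_up) - ln C(N, N/2), the exact relative log-degeneracy, which makes it a convenient check before moving to the parallel, multi-window version.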
Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min
2013-09-01
The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, this requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating an estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result produced by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index = 0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). Also, in terms of segmentation accuracy, measured via the false positive and false negative ratios, the proposed method (precision = 0.76±0.04, recall = 0.86±0.05) produced better results than the conventional method (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
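The overlap metrics quoted above are straightforward to reproduce; a small helper is sketched below, assuming binary NumPy masks and that the reported similarity index is the Dice coefficient.

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """Dice similarity index, precision and recall for two binary segmentations."""
    auto, manual = auto_mask.astype(bool), manual_mask.astype(bool)
    tp = np.logical_and(auto, manual).sum()       # voxels labeled hippocampus by both
    fp = np.logical_and(auto, ~manual).sum()      # automatic label only (false positives)
    fn = np.logical_and(~auto, manual).sum()      # manual label only (false negatives)
    dice = 2.0 * tp / (2.0 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall
```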
Surface Modeling to Support Small-Body Spacecraft Exploration and Proximity Operations
NASA Technical Reports Server (NTRS)
Riedel, Joseph E.; Mastrodemos, Nickolaos; Gaskell, Robert W.
2011-01-01
In order to simulate physically plausible surfaces that represent geologically evolved surfaces, demonstrating demanding surface-relative guidance navigation and control (GN&C) actions, such surfaces must be made to mimic the geological processes themselves. A report describes how, using software and algorithms to model body surfaces as a series of digital terrain maps, a series of processes was put in place that evolve the surface from some assumed nominal starting condition. The physical processes modeled in this algorithmic technique include fractal regolith substrate texturing, fractally textured rocks (of empirically derived size and distribution power laws), cratering, and regolith migration under potential energy gradient. Starting with a global model that may be determined observationally or created ad hoc, the surface evolution is begun. First, material of some assumed strength is layered on the global model in a fractally random pattern. Then, rocks are distributed according to power laws measured on the Moon. Cratering then takes place in a temporal fashion, including modeling of ejecta blankets and taking into account the gravity of the object (which determines how much of the ejecta blanket falls back to the surface), and causing the observed phenomena of older craters being progressively buried by the ejecta of earlier impacts. Finally, regolith migration occurs which stratifies finer materials from coarser, as the fine material progressively migrates to regions of lower potential energy.
Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou
2013-01-01
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe. PMID:24223789
Plant Classification from Bat-Like Echolocation Signals
Yovel, Yossi; Franz, Matthias Otto; Stilz, Peter; Schnitzler, Hans-Ulrich
2008-01-01
Classification of plants according to their echoes is an elementary component of bat behavior that plays an important role in spatial orientation and food acquisition. Vegetation echoes are, however, highly complex stochastic signals: from an acoustical point of view, a plant can be thought of as a three-dimensional array of leaves reflecting the emitted bat call. The received echo is therefore a superposition of many reflections. In this work we suggest that the classification of these echoes might not be such a troublesome routine for bats as formerly thought. We present a rather simple approach to classifying signals from a large database of plant echoes that were created by ensonifying plants with a frequency-modulated bat-like ultrasonic pulse. Our algorithm uses the spectrogram of a single echo from which it only uses features that are undoubtedly accessible to bats. We used a standard machine learning algorithm (SVM) to automatically extract suitable linear combinations of time and frequency cues from the spectrograms such that classification with high accuracy is enabled. This demonstrates that ultrasonic echoes are highly informative about the species membership of an ensonified plant, and that this information can be extracted with rather simple, biologically plausible analysis. Thus, our findings provide a new explanatory basis for the poorly understood observed abilities of bats in classifying vegetation and other complex objects. PMID:18369425
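A schematic of such a pipeline is sketched below: compute a spectrogram for each echo, flatten it into a feature vector, and train a linear SVM. The synthetic "echoes", the sampling rate and the simple log-spectrogram features are placeholders for illustration and do not reproduce the paper's ensonification data or feature extraction.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250_000                                          # assumed ultrasonic sampling rate

def fake_echo(species, n=2048):
    """Synthetic stand-in for a recorded plant echo: noise-modulated tone per 'species'."""
    t = np.arange(n) / fs
    carrier = 40_000 + 15_000 * species               # each species gets a different spectral peak
    return np.sin(2 * np.pi * carrier * t) * rng.standard_normal(n)

X, y = [], []
for species in range(3):
    for _ in range(30):
        _, _, S = spectrogram(fake_echo(species), fs=fs, nperseg=256)
        X.append(np.log1p(S).ravel())                 # log-spectrogram as the feature vector
        y.append(species)

clf = SVC(kernel="linear")
print("cross-validated accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```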
All-sky search for gravitational-wave bursts in the first joint LIGO-GEO-Virgo run
NASA Astrophysics Data System (ADS)
Abadie, J.; Abbott, B. P.; Abbott, R.; Accadia, T.; Acernese, F.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amador Ceron, E.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Arain, M. A.; Araya, M.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Bauer, Th. S.; Behnke, B.; Beker, M. G.; Belletoile, A.; Benacquista, M.; Betzwieser, J.; Beyersdorf, P. T.; Bigotta, S.; Bilenko, I. A.; Billingsley, G.; Birindelli, S.; Biswas, R.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Boccara, C.; Bock, O.; Bodiya, T. P.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Breyer, J.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Budzyński, R.; Bulik, T.; Bullington, A.; Bulten, H. J.; Buonanno, A.; Burmeister, O.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cain, J.; Calloni, E.; Camp, J. B.; Campagna, E.; Cannizzo, J.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Cardenas, L.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande-Mottin, E.; Chatterji, S.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R. C.; Cornish, N.; Corsi, A.; Coulon, J.-P.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Davier, M.; Davies, G.; Daw, E. J.; Day, R.; Dayanga, T.; de Rosa, R.; Debra, D.; Degallaix, J.; Del Prete, M.; Dergachev, V.; Desalvo, R.; Dhurandhar, S.; di Fiore, L.; di Lieto, A.; di Paolo Emilio, M.; di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Drago, M.; Drever, R. W. P.; Driggers, J.; Dueck, J.; Duke, I.; Dumas, J.-C.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Etzel, T.; Evans, M.; Evans, T.; Fafone, V.; Fairhurst, S.; Faltas, Y.; Fan, Y.; Fazi, D.; Fehrmann, H.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flasch, K.; Foley, S.; Forrest, C.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Galimberti, M.; Gammaitoni, L.; Garofoli, J. A.; Garufi, F.; Gemme, G.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Goetz, E.; Goggin, L. M.; González, G.; Goßler, S.; Gouaty, R.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hallam, J. M.; Hammer, D.; Hammond, G. D.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Hayler, T.; Heefner, J.; Heitmann, H.; Hello, P.; Heng, I. 
S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. J.; Hough, J.; Howell, E.; Hoyland, D.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Ingram, D. R.; Isogai, T.; Ivanov, A.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khan, R.; Khazanov, E.; Kim, H.; King, P. J.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kullman, J.; Kumar, R.; Kwee, P.; Lam, P. K.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lazzarini, A.; Leaci, P.; Lei, M.; Leindecker, N.; Leonor, I.; Leroy, N.; Letendre, N.; Li, T. G. F.; Lin, H.; Lindquist, P. E.; Littenberg, T. B.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Lubiński, M.; Lucianetti, A.; Lück, H.; Lundgren, A.; Machenschalk, B.; Macinnis, M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Mak, C.; Maksimovic, I.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Markowitz, J.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McKechan, D. J. A.; Mehmet, M.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Merill, L.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mino, Y.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moreau, J.; Moreno, G.; Morgado, N.; Morgia, A.; Mors, K.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; Mowlowry, C.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Nash, T.; Nawrodt, R.; Nelson, J.; Neri, I.; Newton, G.; Nishida, E.; Nishizawa, A.; Nocera, F.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; Oldenburg, R.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Pardi, S.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pathak, D.; Pedraza, M.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Postiglione, F.; Prato, M.; Principe, M.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabeling, D. S.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raics, Z.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Rehbein, H.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Röver, C.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sakata, S.; Salemi, F.; Sammut, L.; Sancho de La Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santostasi, G.; Saraf, S.; Sarin, P.; Sassolas, B.; Sathyaprakash, B. 
S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Sperandio, L.; Stein, A. J.; Stein, L. C.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szokoly, G. P.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thorne, K. A.; Thorne, K. S.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Trummer, J.; Turner, L.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; van den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; Vass, S.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; van Veggel, A. A.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Wilmut, I.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zanolin, M.; Zhang, L.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2010-05-01
We present results from an all-sky search for unmodeled gravitational-wave bursts in the data collected by the LIGO, GEO 600 and Virgo detectors between November 2006 and October 2007. The search is performed by three different analysis algorithms over the frequency band 50-6000 Hz. Data are analyzed for times with at least two of the four LIGO-Virgo detectors in coincident operation, with a total live time of 266 days. No events produced by the search algorithms survive the selection cuts. We set a frequentist upper limit on the rate of gravitational-wave bursts impinging on our network of detectors. When combined with the previous LIGO search of the data collected between November 2005 and November 2006, the upper limit on the rate of detectable gravitational-wave bursts in the 64-2048 Hz band is 2.0 events per year at 90% confidence. We also present event rate versus strength exclusion plots for several types of plausible burst waveforms. The sensitivity of the combined search is expressed in terms of the root-sum-squared strain amplitude for a variety of simulated waveforms and lies in the range 6×10^(-22) Hz^(-1/2) to 2×10^(-20) Hz^(-1/2). This is the first untriggered burst search to use data from the LIGO and Virgo detectors together, and the most sensitive untriggered burst search performed so far.
Interpretation of magnetic anomalies using a genetic algorithm
NASA Astrophysics Data System (ADS)
Kaftan, İlknur
2017-08-01
A genetic algorithm (GA) is an artificial intelligence method used for optimization. We applied a GA to the inversion of magnetic anomalies over a thick dike. Inversion of nonlinear geophysical problems using a GA has advantages because it does not require model gradients or well-defined initial model parameters. The evolution process consists of selection, crossover, and mutation genetic operators that look for the best fit to the observed data and a solution consisting of plausible compact sources. The efficiency of the GA has been shown on both synthetic and real magnetic anomalies of dikes by estimating model parameters such as the depth to the top of the dike (H), the half-width of the dike (B), the distance from the origin to the reference point (D), the dip of the thick dike (δ), and the susceptibility contrast (k). For the synthetic anomaly case, both noise-free and noisy magnetic data have been considered. In the real case, the vertical magnetic anomaly from the Pima copper mine in Arizona, USA, and the vertical magnetic anomaly in the Bayburt-Sarıhan skarn zone in northeastern Turkey have been inverted and interpreted. We compared the estimated parameters with the results of conventional inversion methods used in previous studies. We can conclude that the GA method used in this study is a useful tool for evaluating magnetic anomalies for dike models.
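A bare-bones real-coded GA of the selection/crossover/mutation type described above is sketched below, minimizing a data-misfit function over bounded model parameters (H, B, D, dip, k). The quadratic toy misfit stands in for the thick-dike forward model, which is not reproduced here, and the operators and settings are generic illustrative choices.

```python
import numpy as np

def genetic_minimize(misfit, bounds, pop_size=60, n_gen=200, p_mut=0.1, seed=0):
    """Bare-bones real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        fit = np.array([misfit(ind) for ind in pop])
        new_pop = [pop[np.argmin(fit)]]                           # elitism: keep the best model
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            parent_a = pop[i] if fit[i] < fit[j] else pop[j]      # tournament selection
            i, j = rng.integers(pop_size, size=2)
            parent_b = pop[i] if fit[i] < fit[j] else pop[j]
            alpha = rng.uniform(size=len(bounds))
            child = alpha * parent_a + (1 - alpha) * parent_b     # blend crossover
            mutate = rng.uniform(size=len(bounds)) < p_mut
            child = np.where(mutate,
                             child + 0.1 * (hi - lo) * rng.standard_normal(len(bounds)),
                             child)                               # Gaussian mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([misfit(ind) for ind in pop])
    return pop[np.argmin(fit)]

# Toy misfit standing in for the thick-dike forward model; parameters are (H, B, D, dip, k).
target = np.array([10.0, 5.0, 2.0, 45.0, 0.01])
best = genetic_minimize(lambda m: float(np.sum((m - target) ** 2)),
                        bounds=[(1, 50), (1, 20), (-10, 10), (0, 90), (0, 0.1)])
print("estimated parameters:", np.round(best, 3))
```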
Bacciu, Davide; Starita, Antonina
2008-11-01
Determining a compact neural coding for a set of input stimuli is an issue that encompasses several biological memory mechanisms as well as various artificial neural network models. In particular, establishing the optimal network structure is still an open problem when dealing with unsupervised learning models. In this paper, we introduce a novel learning algorithm, named competitive repetition-suppression (CoRe) learning, inspired by a cortical memory mechanism called repetition suppression (RS). We show how such a mechanism is used, at various levels of the cerebral cortex, to generate compact neural representations of visual stimuli. From the general CoRe learning model, we derive a clustering algorithm, named CoRe clustering, that can automatically estimate the unknown cluster number from the data without using a priori information concerning the input distribution. We illustrate how CoRe clustering, besides its biological plausibility, possesses strong theoretical properties in terms of robustness to noise and outliers, and we provide an error function describing the CoRe learning dynamics. Such a description is used to analyze CoRe's relationships with state-of-the-art clustering models and to highlight CoRe's similarity with rival penalized competitive learning (RPCL), showing how CoRe extends such a model by strengthening the rival penalization estimation by means of loss functions from robust statistics.
Jane: a new tool for the cophylogeny reconstruction problem.
Conow, Chris; Fielder, Daniel; Ovadia, Yaniv; Libeskind-Hadas, Ran
2010-02-03
This paper describes the theory and implementation of a new software tool, called Jane, for the study of historical associations. This problem arises in parasitology (associations of hosts and parasites), molecular systematics (associations of organisms and genes), and biogeography (associations of regions and organisms). The underlying problem is that of reconciling pairs of trees subject to biologically plausible events and costs associated with these events. Existing software tools for this problem have strengths and limitations, and the new Jane tool described here provides functionality that complements existing tools. The Jane software tool uses a polynomial-time dynamic programming algorithm in conjunction with a genetic algorithm to find very good, and often optimal, solutions even for relatively large pairs of trees. The tool allows the user to provide rich timing information on both the host and parasite trees. In addition, the user can limit host switch distance and specify multiple host switch costs by specifying regions in the host tree and costs for host switches between pairs of regions. Jane also provides a graphical user interface that allows the user to interactively experiment with modifications to the solutions found by the program. Jane is shown to be a useful tool for cophylogenetic reconstruction. Its functionality complements existing tools and it is therefore likely to be of use to researchers in the areas of parasitology, molecular systematics, and biogeography.
Causal discovery in the geosciences-Using synthetic data to learn how to interpret results
NASA Astrophysics Data System (ADS)
Ebert-Uphoff, Imme; Deng, Yi
2017-02-01
Causal discovery algorithms based on probabilistic graphical models have recently emerged in geoscience applications for the identification and visualization of dynamical processes. The key idea is to learn the structure of a graphical model from observed spatio-temporal data, thus finding pathways of interactions in the observed physical system. Studying those pathways allows geoscientists to learn subtle details about the underlying dynamical mechanisms governing our planet. Initial studies using this approach on real-world atmospheric data have shown great potential for scientific discovery. However, in these initial studies no ground truth was available, so that the resulting graphs have been evaluated only by whether a domain expert thinks they seemed physically plausible. The lack of ground truth is a typical problem when using causal discovery in the geosciences. Furthermore, while most of the connections found by this method match domain knowledge, we encountered one type of connection for which no explanation was found. To address both of these issues we developed a simulation framework that generates synthetic data of typical atmospheric processes (advection and diffusion). Applying the causal discovery algorithm to the synthetic data allowed us (1) to develop a better understanding of how these physical processes appear in the resulting connectivity graphs, and thus how to better interpret such connectivity graphs when obtained from real-world data; (2) to solve the mystery of the previously unexplained connections.
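The specific causal discovery algorithm used in the study is not named in the abstract. As a hedged illustration of the general idea (constraint-based structure learning on multivariate data), the sketch below removes edges whose partial correlation given small conditioning sets is negligible; real applications would use proper significance tests and time-lagged variables rather than the fixed threshold assumed here.

```python
import numpy as np
from itertools import combinations

def partial_corr(x, y, Z):
    """Partial correlation of x and y given the columns of Z (via regression residuals)."""
    if Z.shape[1] == 0:
        rx, ry = x, y
    else:
        A = np.column_stack([Z, np.ones(len(x))])
        rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
        ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def pc_skeleton(data, threshold=0.05, max_cond=2):
    """PC-style skeleton discovery sketch on data of shape (n_samples, n_vars):
    drop edge i-j if some small conditioning set makes their partial correlation
    fall below the threshold; what remains is the undirected connectivity graph."""
    n_var = data.shape[1]
    edges = {frozenset(e) for e in combinations(range(n_var), 2)}
    for i, j in combinations(range(n_var), 2):
        others = [k for k in range(n_var) if k not in (i, j)]
        for size in range(max_cond + 1):
            for S in combinations(others, size):
                pc = partial_corr(data[:, i], data[:, j], data[:, list(S)])
                if abs(pc) < threshold:
                    edges.discard(frozenset((i, j)))
                    break
            if frozenset((i, j)) not in edges:
                break
    return edges
```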
Bayesian tomography by interacting Markov chains
NASA Astrophysics Data System (ADS)
Romary, T.
2017-12-01
In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and once one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation. Running a large number of chains in parallel may be suboptimal because the information gathered by each chain is not shared. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but it only exchanges information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class enables the design of interacting schemes that can take advantage of the whole history of the chains, by authorizing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival traveltime tomography.
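As background for the parallel tempering baseline discussed above, here is a minimal random-walk Metropolis implementation with neighbour swaps; log_post, the temperature ladder, and the step size are user-supplied assumptions (the cold chain should have temperature 1), and the interacting-chain extensions of the talk are not implemented.

```python
import numpy as np

rng = np.random.default_rng(1)

def parallel_tempering(log_post, x0, temps, n_iter=5000, step=0.1):
    """Minimal parallel tempering: one random-walk Metropolis chain per
    temperature, with periodic state swaps between neighbouring temperatures."""
    n_chains, dim = len(temps), len(x0)
    x = np.tile(np.asarray(x0, float), (n_chains, 1))
    lp = np.array([log_post(xi) for xi in x])
    samples = []
    for _ in range(n_iter):
        for c in range(n_chains):                       # within-chain Metropolis move
            prop = x[c] + step * rng.normal(size=dim)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < (lp_prop - lp[c]) / temps[c]:
                x[c], lp[c] = prop, lp_prop
        c = rng.integers(n_chains - 1)                  # attempt one neighbour swap
        dlog = (lp[c] - lp[c + 1]) * (1 / temps[c + 1] - 1 / temps[c])
        if np.log(rng.random()) < dlog:
            x[[c, c + 1]] = x[[c + 1, c]]
            lp[[c, c + 1]] = lp[[c + 1, c]]
        samples.append(x[0].copy())                     # keep the cold chain only
    return np.array(samples)
```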
Enhanced calculation of eigen-stress field and elastic energy in atomistic interdiffusion of alloys
NASA Astrophysics Data System (ADS)
Cecilia, José M.; Hernández-Díaz, A. M.; Castrillo, Pedro; Jiménez-Alonso, J. F.
2017-02-01
The structural evolution of alloys is affected by the elastic energy associated with eigen-stress fields. However, efficient calculation of the elastic energy in evolving geometries remains a great challenge for promising atomistic simulation techniques such as Kinetic Monte Carlo (KMC) methods. In this paper, we report two complementary algorithms to calculate the eigen-stress field by linear superposition (the Linear Superposition Algorithm, LSA) and the elastic energy modification in atomistic interdiffusion of alloys (the Atom Exchange Elastic Energy Evaluation (AE4) algorithm). LSA is shown to be appropriate for fast incremental stress calculation in highly nanostructured materials, whereas AE4 provides the required input for KMC and, additionally, can be used to evaluate the accuracy of the eigen-stress field calculated by LSA. Consequently, both are suitable for on-the-fly use with KMC. The two algorithms are massively parallel by construction and thus well suited to parallelization on modern Graphics Processing Units (GPUs). Our computational studies confirm significant improvements over conventional Finite Element Methods, and the use of GPUs opens up new possibilities for the development of these methods in atomistic simulation of materials.
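Neither LSA nor AE4 is specified in the abstract, so the following is only a generic sketch of the linear-superposition idea: a precomputed single-defect stress kernel is shifted to each defect and summed, and an atom exchange is handled by an incremental update instead of a full recomputation. The periodic grid, the scalar stress field, and the quadratic energy stand-in are all simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def superpose_stress(grid_shape, defect_positions, kernel_full):
    """kernel_full: single-defect stress field on the full periodic grid, centered
    at the origin. The total eigen-stress field is approximated as the sum of the
    kernel rolled to each defect position (linear superposition)."""
    field = np.zeros(grid_shape)
    axes = tuple(range(len(grid_shape)))
    for pos in defect_positions:          # pos: tuple of grid indices
        field += np.roll(kernel_full, shift=pos, axis=axes)
    return field

def delta_energy_on_exchange(field, kernel_full, old_pos, new_pos, stiffness=1.0):
    """Incremental (AE4-like in spirit only) energy change when one defect moves:
    subtract its old kernel, add the new one, and compare a crude quadratic
    elastic-energy stand-in, sigma^2 / (2*stiffness), before and after."""
    axes = tuple(range(field.ndim))
    new_field = (field
                 - np.roll(kernel_full, old_pos, axis=axes)
                 + np.roll(kernel_full, new_pos, axis=axes))
    dE = 0.5 / stiffness * (np.sum(new_field ** 2) - np.sum(field ** 2))
    return dE, new_field
```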
NASA Astrophysics Data System (ADS)
Zheng, Yan
2015-03-01
The Internet of Things (IoT), which focuses on providing users with information exchange and intelligent control, has attracted considerable attention from researchers worldwide since the beginning of this century. An IoT system consists of a large number of sensor nodes and data processing units, and its most important characteristics are constrained energy, the need for efficient communication, and high redundancy. As the number of sensor nodes increases, communication efficiency and the available communication bandwidth become bottlenecks. Much existing work assumes queries with only a few joins, which is not appropriate for the growing number of multi-join queries across the whole Internet of Things. To improve the communication efficiency between parallel units in a distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The algorithm takes into account the storage distribution of relations and the network communication cost, and establishes an optimized information exchange rule. The experimental results show that the algorithm performs well and makes effective use of the resources of each node in the distributed sensor network, thereby improving the execution efficiency of multi-join queries across different nodes.
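The paper's cost-graph construction and information-exchange rule are not detailed in the abstract; purely as an illustration of ordering distributed joins by estimated communication cost, the sketch below greedily adds the relation that is cheapest to ship to the running intermediate result. Relation sizes, per-link costs, and the cost model itself are hypothetical.

```python
import itertools

def cost(a, b, sizes, link):
    """Communication cost of shipping the smaller relation to the other's node."""
    return min(sizes[a], sizes[b]) * link[frozenset((a, b))]

def greedy_join_order(sizes, link):
    """Greedy multi-join ordering over relations hosted on different sensor nodes."""
    remaining = set(sizes)
    a, b = min(itertools.combinations(remaining, 2),
               key=lambda p: cost(*p, sizes, link))      # cheapest starting pair
    order, remaining = [a, b], remaining - {a, b}
    while remaining:
        nxt = min(remaining,
                  key=lambda r: min(cost(r, o, sizes, link) for o in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# hypothetical example: three relations on three sensor nodes, unit link costs
sizes = {"R": 1000, "S": 200, "T": 5000}
link = {frozenset(p): 1.0 for p in itertools.combinations(sizes, 2)}
print(greedy_join_order(sizes, link))
```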
Detrending moving average algorithm for multifractals
NASA Astrophysics Data System (ADS)
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces; it contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, as a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
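A minimal sketch of the one-dimensional backward MFDMA (θ=0) described above: detrend the profile with a backward moving average, form q-th order fluctuation functions over non-overlapping segments, and read h(q) and τ(q) = q·h(q) − 1 from log-log slopes. The scale and q grids are left to the caller.

```python
import numpy as np

def mfdma_backward(x, scales, qs):
    """One-dimensional backward MFDMA (theta = 0) sketch returning h(q) and tau(q)."""
    y = np.cumsum(x - np.mean(x))                            # profile of the series
    logF = np.zeros((len(qs), len(scales)))
    for j, n in enumerate(scales):
        ma = np.convolve(y, np.ones(n) / n, mode="valid")    # backward moving average
        eps = y[n - 1:] - ma                                 # detrended residual series
        n_seg = len(eps) // n
        seg = eps[:n_seg * n].reshape(n_seg, n)
        F2 = np.mean(seg ** 2, axis=1)                       # per-segment variance
        for i, q in enumerate(qs):
            if q == 0:
                logF[i, j] = 0.5 * np.mean(np.log(F2))       # logarithmic average for q = 0
            else:
                logF[i, j] = np.log(np.mean(F2 ** (q / 2))) / q
    h = np.array([np.polyfit(np.log(scales), logF[i], 1)[0] for i in range(len(qs))])
    tau = np.array(qs) * h - 1
    return h, tau
```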
New Scheduling Algorithms for Agile All-Photonic Networks
NASA Astrophysics Data System (ADS)
Mehri, Mohammad Saleh; Ghaffarpour Rahbar, Akbar
2017-12-01
An optical overlaid star network is a class of agile all-photonic networks that consists of one or more core nodes at the center of the star and a number of edge nodes around them. In this architecture, a core node may use a scheduling algorithm for the transmission of traffic through the network: it is responsible for scheduling optical packets that arrive from edge nodes and switching them toward their destinations. Nowadays, most edge nodes use a virtual output queue (VOQ) architecture for buffering client packets to achieve high throughput. This paper presents two efficient scheduling algorithms called discretionary iterative matching (DIM) and adaptive DIM. These schedulers find a maximum matching in a small number of iterations, provide high throughput, and incur low delay. The number of arbiters in these schedulers and the number of messages exchanged between the inputs and outputs of a core node are reduced. We show that DIM and adaptive DIM can provide better performance than iterative round-robin matching with SLIP (iSLIP), where "SLIP" refers to the scheduler's pointers sliding a short distance to select one of the requested connections.
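DIM and adaptive DIM are not specified in the abstract; the sketch below shows only the generic request/grant/accept structure that iSLIP-style schedulers iterate over a VOQ switch, with lowest-index tie-breaking standing in for the round-robin pointers.

```python
def iterative_match(requests, n_iter=3):
    """Request/grant/accept matching sketch: requests[i] is the set of outputs
    input i has queued cells for. Each iteration adds matches among the inputs
    and outputs left unmatched so far."""
    n_in = len(requests)
    n_out = max((o for r in requests for o in r), default=-1) + 1
    matched_in, matched_out = {}, {}
    for _ in range(n_iter):
        # grant: each unmatched output grants to the lowest-index requesting input
        grants = {}
        for o in range(n_out):
            if o in matched_out:
                continue
            cand = [i for i in range(n_in)
                    if i not in matched_in and o in requests[i]]
            if cand:
                grants[o] = min(cand)
        # accept: each unmatched input accepts the lowest-index granting output
        for i in range(n_in):
            if i in matched_in:
                continue
            offers = [o for o, g in grants.items() if g == i]
            if offers:
                o = min(offers)
                matched_in[i], matched_out[o] = o, i
    return matched_in

# example: 3x3 switch, input 0 requests outputs {0, 1}, input 1 requests {0}, input 2 requests {2}
print(iterative_match([{0, 1}, {0}, {2}]))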
Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.
Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar
2017-03-01
We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data revealed to it. The aim of the multiagent system, on the other hand, is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for applications involving big data.
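The paper's exact update and its performance bounds are not given in the abstract; the following is a hedged sketch of the general recipe it describes: each agent keeps a fixed random ELM hidden layer, takes a subgradient step on its local squared loss over the output weights, and mixes those weights with its neighbours via a row-stochastic consensus matrix. All dimensions and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def hidden(X, W, b):
    """Fixed random SLFN hidden layer (sigmoid), as in extreme learning machines."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def distributed_elm(data, adjacency, n_hidden=20, lr=0.05, n_rounds=200):
    """data: list of (X, y) per agent; adjacency: (n_agents, n_agents) 0/1 array.
    Each round: consensus-average the output weights, then take a local gradient step."""
    n_agents = len(data)
    d = data[0][0].shape[1]
    W = rng.normal(size=(d, n_hidden))          # shared random hidden weights
    b = rng.normal(size=n_hidden)
    beta = [np.zeros(n_hidden) for _ in range(n_agents)]
    mix = adjacency + np.eye(n_agents)          # row-stochastic mixing matrix
    mix = mix / mix.sum(axis=1, keepdims=True)
    for _ in range(n_rounds):
        grads = []
        for a, (X, y) in enumerate(data):
            H = hidden(X, W, b)
            grads.append(H.T @ (H @ beta[a] - y) / len(y))   # local squared-loss gradient
        beta = [sum(mix[a, j] * beta[j] for j in range(n_agents)) - lr * grads[a]
                for a in range(n_agents)]
    return beta, (W, b)
```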
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Wang, W.; Ganguly, S.; Li, S.; Michaelis, A.; Higuchi, A.; Takenaka, H.; Nemani, R. R.
2017-12-01
New geostationary sensors such as the AHI (Advanced Himawari Imager on Himawari-8) and the ABI (Advanced Baseline Imager on GOES-16) have the potential to advance ecosystem modeling, particularly of diurnally varying phenomena, through frequent observations. These sensors have channels similar to those of MODIS (MODerate resolution Imaging Spectroradiometer), allowing us to draw on the knowledge and experience gained in MODIS data processing. Here, we developed a sub-hourly Gross Primary Production (GPP) algorithm, leveraging the MOD17 GPP algorithm. We ran the model at 1-km resolution over Japan and Australia using geo-corrected AHI data. Solar radiation was calculated directly from AHI using a neural network technique, and the other necessary climate data were derived from weather stations and other satellite data. The sub-hourly estimates of GPP were first compared with ground-measured GPP at various Fluxnet sites. We also compared the AHI GPP with MOD17 GPP and analyzed the differences in spatial patterns and the effect of diurnal changes in climate forcing. The sub-hourly GPP products require massive storage and strong computational power, so we use the NEX (NASA Earth Exchange) facility to produce them. This GPP algorithm can be applied to other geostationary satellites, including GOES-16, in the future.
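A light-use-efficiency GPP computation in the spirit of MOD17, applied per (sub-hourly) time step; the maximum light-use efficiency and the Tmin/VPD ramp limits are illustrative, biome-dependent values, not the ones used in the study.

```python
import numpy as np

def ramp(x, lo, hi):
    """Linear ramp from 0 (at lo) to 1 (at hi), clipped outside the range."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def gpp_lue(sw_rad, tmin, vpd, fpar, eps_max=1.2e-3,
            tmin_range=(-8.0, 11.0), vpd_range=(650.0, 3100.0)):
    """MOD17-style light-use-efficiency GPP (kg C m-2 per time step):
    GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR, with PAR ~ 0.45 * shortwave.
    sw_rad in MJ m-2 per time step, tmin in deg C, vpd in Pa, fpar dimensionless."""
    par = 0.45 * sw_rad
    f_t = ramp(tmin, *tmin_range)            # cold-temperature down-regulation
    f_v = 1.0 - ramp(vpd, *vpd_range)        # dryness (VPD) down-regulation
    return eps_max * f_t * f_v * fpar * par
```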
The Ensemble Kalman filter: a signal processing perspective
NASA Astrophysics Data System (ADS)
Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik
2017-12-01
The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions in the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are provided, as well as relations to sigma point KF and particle filters. The relevant EnKF literature is summarized in an extensive survey and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and high-dimensional nonlinear and non-Gaussian filtering in general.
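A minimal stochastic (perturbed-observation) EnKF analysis step, to make the review's starting point concrete; the observation operator here is linear and the example dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step. X: (n_state, N) forecast ensemble,
    y: (n_obs,) observation, H: (n_obs, n_state) observation operator,
    R: (n_obs, n_obs) observation error covariance. Returns the updated ensemble."""
    n_state, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)              # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)           # predicted-observation anomalies
    P_hy = A @ HA.T / (N - 1)                          # state-observation cross covariance
    P_yy = HA @ HA.T / (N - 1) + R                     # innovation covariance
    K = P_hy @ np.linalg.solve(P_yy, np.eye(len(y)))   # ensemble Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)                            # perturbed-observation update

# toy usage: 5-dimensional state, 2 observed components, 50 ensemble members
X = rng.normal(size=(5, 50))
H = np.zeros((2, 5)); H[0, 0] = H[1, 3] = 1.0
R = 0.1 * np.eye(2)
Xa = enkf_update(X, np.array([0.5, -0.2]), H, R)
```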
Medical Image Encryption: An Application for Improved Padding Based GGH Encryption Algorithm
Sokouti, Massoud; Zakerolhosseini, Ali; Sokouti, Babak
2016-01-01
Medical images are regarded as important and sensitive data in medical informatics systems. For transferring medical images over an insecure network, developing a secure encryption algorithm is necessary. Among the three main properties of security services (i.e., confidentiality, integrity, and availability), confidentiality is the most essential feature for exchanging medical images among physicians. The Goldreich-Goldwasser-Halevi (GGH) algorithm can be a good choice for encrypting medical images, as both the algorithm and the sensitive data are represented by numeric matrices. Additionally, the GGH algorithm does not increase the size of the image, and hence its complexity remains as simple as O(n²). However, one of the disadvantages of the GGH algorithm is its vulnerability to the chosen-ciphertext attack. In our strategy, this shortcoming of the GGH algorithm has been taken into consideration and improved by applying padding (i.e., snail tour XORing) before the GGH encryption process. For evaluating performance, three measurement criteria are considered: (i) Number of Pixels Change Rate (NPCR), (ii) Unified Average Changing Intensity (UACI), and (iii) the avalanche effect. The results on three different image sizes showed that the padded GGH approach improved UACI, NPCR, and avalanche by almost 100%, 35%, and 45%, respectively, in comparison to the standard GGH algorithm. These outcomes also make the padded GGH resistant to ciphertext-only, chosen-ciphertext, and statistical attacks. Furthermore, increasing the avalanche effect to more than 50% is a promising achievement in comparison to the increased complexity of the proposed method in terms of encryption and decryption processes. PMID:27857824
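NPCR and UACI are standard metrics with fixed definitions, so they can be shown directly; this helper compares two cipher images and reports both in percent. It is a generic utility, not code from the paper.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI between two cipher images (uint8 arrays of equal shape):
    NPCR = percentage of differing pixels; UACI = mean absolute intensity
    difference normalized by 255. Both are returned in percent."""
    c1 = np.asarray(c1, dtype=np.int16)
    c2 = np.asarray(c2, dtype=np.int16)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci
```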
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-12-01
We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm: a minimum set of cells describing the interface is defined, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated on a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase, and the interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid, and the computational efficiency is higher than for standard and typical high-order iterative reinitialization methods: we observe a 2- to 6-fold efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface-interaction methods, shows about a 10-fold efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
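The forward/backward tracing algorithms themselves are not given in the abstract. As a crude stand-in for what reinitialization must accomplish, the sketch below rebuilds a signed-distance-like field from the set of interface cells by brute force; it is O(N·n_interface) and measures distance to cell centers only, so it is illustrative rather than the paper's single-step scheme.

```python
import numpy as np

def reinit_brute_force(phi, dx=1.0):
    """Locate cells adjacent to a sign change of phi ('interface cells'), then
    reset every cell to the distance to the nearest interface cell, keeping the
    original sign of phi. 2-D only; purely illustrative."""
    sign = np.sign(phi)
    interface = np.zeros_like(phi, dtype=bool)
    interface[:-1, :] |= sign[:-1, :] != sign[1:, :]
    interface[1:, :] |= sign[:-1, :] != sign[1:, :]
    interface[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    interface[:, 1:] |= sign[:, :-1] != sign[:, 1:]
    pts = np.argwhere(interface) * dx
    ij = np.indices(phi.shape).reshape(2, -1).T * dx
    d = np.min(np.linalg.norm(ij[:, None, :] - pts[None, :, :], axis=2), axis=1)
    return sign * d.reshape(phi.shape)
```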
Kamil, Atif; Falk, Knut; Sharma, Animesh; Raae, Arnt; Berven, Frode; Koppang, Erling Olaf; Hordvik, Ivar
2011-09-01
Atlantic salmon (Salmo salar) and brown trout (Salmo trutta) possess two distinct subpopulations of IgM which can be separated by anion exchange chromatography. Accordingly, there are two isotypic μ genes in these species, related to ancestral tetraploidy. In the present work it was verified by mass spectrometry that IgM of peak 1 (subpopulation 1) has heavy chains previously designated as μB type, whereas IgM of peak 2 (subpopulation 2) has heavy chains of μA type. Two adjacent cysteine residues are present near the C-terminal part of μB, in contrast to one cysteine residue in μA. Salmon IgM of both peak 1 and peak 2 contains light chains of the two most common isotypes: IgL1 and IgL3. In contrast to salmon and brown trout, IgM of rainbow trout (Oncorhynchus mykiss) is eluted in a single peak when subjected to anion exchange chromatography. Surprisingly, a monoclonal antibody against rainbow trout IgM, MAb4C10, reacted with μA in salmon, whereas in brown trout it reacted with μB. It is plausible to assume that DNA has been exchanged between the paralogous A and B loci during evolution while maintaining the two sub-variants, with and without the extra cysteine. MAb4C10 was conjugated to magnetic beads and used to separate cells, demonstrating that μ transcripts from the captured cells were primarily of A type in salmon and B type in brown trout. An analysis of amino acid substitutions in μA and μB of salmon and brown trout indicated that the third constant domain is essential for MAb4C10 binding. This was supported by 3D modeling and finally verified by studies of MAb4C10 reactivity with a series of recombinant μ3 constructs. Copyright © 2011 Elsevier Ltd. All rights reserved.
Biologically Plausible, Human-scale Knowledge Representation
ERIC Educational Resources Information Center
Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris
2016-01-01
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), "mesh" binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990). Recent theoretical work has suggested that…
Matott, L Shawn; Jiang, Zhengzheng; Rabideau, Alan J; Allen-King, Richelle M
2015-01-01
Numerous isotherm expressions have been developed for describing sorption of hydrophobic organic compounds (HOCs), including "dual-mode" approaches that combine nonlinear behavior with a linear partitioning component. Choosing among these alternative expressions for describing a given dataset is an important task that can significantly influence subsequent transport modeling and/or mechanistic interpretation. In this study, a series of numerical experiments were undertaken to identify "best-in-class" isotherms by refitting 10 alternative models to a suite of 13 previously published literature datasets. The corrected Akaike Information Criterion (AICc) was used for ranking these alternative fits and distinguishing between plausible and implausible isotherms for each dataset. The occurrence of multiple plausible isotherms was inversely correlated with dataset "richness", such that datasets with fewer observations and/or a narrow range of aqueous concentrations resulted in a greater number of plausible isotherms. Overall, only the Polanyi-partition dual-mode isotherm was classified as "plausible" across all 13 of the considered datasets, indicating substantial statistical support consistent with current advances in sorption theory. However, these findings are predicated on the use of the AICc measure as an unbiased ranking metric and the adoption of a subjective, but defensible, threshold for separating plausible and implausible isotherms. Copyright © 2015 Elsevier B.V. All rights reserved.
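The AICc for least-squares fits has a standard closed form, so the ranking step can be sketched directly; the plausibility threshold of 2 AICc units below is a common rule of thumb standing in for the paper's own (explicitly subjective) cutoff, and the example fits are hypothetical.

```python
import numpy as np

def aicc(rss, n, k):
    """Corrected Akaike Information Criterion for least-squares fits:
    AICc = n*ln(RSS/n) + 2k + 2k(k+1)/(n-k-1), where k counts the fitted
    parameters (the error variance is often included as one of them)."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def rank_isotherms(fits, n_obs, delta_plausible=2.0):
    """Rank candidate isotherm fits by AICc and flag models whose AICc lies
    within delta_plausible of the best model as 'plausible'."""
    scores = {name: aicc(rss, n_obs, k) for name, (rss, k) in fits.items()}
    best = min(scores.values())
    return [(name, s, s - best <= delta_plausible)     # (model, AICc, plausible?)
            for name, s in sorted(scores.items(), key=lambda kv: kv[1])]

# hypothetical example: (residual sum of squares, parameter count) per isotherm
fits = {"Freundlich": (0.84, 3), "Langmuir": (1.10, 3), "Polanyi-partition": (0.52, 4)}
print(rank_isotherms(fits, n_obs=20))
```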