Sample records for the query "computationally tractable manner"

  1. The tractable cognition thesis.

    PubMed

    van Rooij, Iris

    2008-09-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories of cognition. To utilize this constraint, a precise and workable definition of "computational tractability" is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial-time computability, leading to the P-Cognition thesis. This article explains how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation. An argument is made to replace the P-Cognition thesis by the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed-parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified. 2008 Cognitive Science Society, Inc.

  2. The Tractable Cognition Thesis

    ERIC Educational Resources Information Center

    van Rooij, Iris

    2008-01-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the "Tractable Cognition thesis": Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories…

  3. Tractable Goal Selection with Oversubscribed Resources

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2009-01-01

    We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
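
    To make the flavor of such online selection concrete, here is a minimal, hedged Python sketch of greedy goal selection under an oversubscribed, non-temporal resource budget. It is illustrative only; the field names and the `select_goals` helper are hypothetical and this is not the algorithm from the NASA report.

    ```python
    # Minimal sketch of priority-greedy goal selection under an oversubscribed,
    # fixed resource budget. Illustrative only; not the report's algorithm.

    def select_goals(requests, capacity):
        """Pick a subset of goal requests that fits within `capacity`.

        Each request is a dict with 'id', 'priority', and 'resource' (its fully
        specified resource demand). Requests are considered in order of
        decreasing priority, so late-arriving requests can simply be merged in
        and the selection recomputed cheaply ("just in time").
        """
        selected, used = [], 0.0
        for req in sorted(requests, key=lambda r: r["priority"], reverse=True):
            if used + req["resource"] <= capacity:
                selected.append(req["id"])
                used += req["resource"]
        return selected, used


    if __name__ == "__main__":
        goals = [
            {"id": "img-A", "priority": 9, "resource": 40.0},
            {"id": "img-B", "priority": 7, "resource": 35.0},
            {"id": "img-C", "priority": 5, "resource": 30.0},
        ]
        print(select_goals(goals, capacity=70.0))  # oversubscribed: only a subset fits
    ```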

  4. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
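
    The paper itself provides R code; as a language-neutral illustration of the same idea (grow a tree, judge it by predictive accuracy rather than significance tests), here is a short Python sketch using scikit-learn's CART implementation. The simulated predictors and response are placeholders, not data from the paper.

    ```python
    # Binary recursive partitioning sketch with scikit-learn's CART trees.
    # Illustrative analogue of the paper's R code, on synthetic data.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 3))                      # three predictor variables
    y = ((X[:, 0] > 0) & (X[:, 1] + rng.normal(scale=0.5, size=n) > 0)).astype(int)

    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
    # "Goodness" is judged by out-of-sample predictive accuracy, not p-values.
    acc = cross_val_score(tree, X, y, cv=5, scoring="accuracy").mean()
    tree.fit(X, y)
    print(f"cross-validated accuracy: {acc:.2f}")
    print(export_text(tree, feature_names=["x1", "x2", "x3"]))  # the fitted decision tree
    ```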

  5. A computationally tractable version of the collective model

    NASA Astrophysics Data System (ADS)

    Rowe, D. J.

    2004-05-01

    A computationally tractable version of the Bohr-Mottelson collective model is presented which makes it possible to diagonalize realistic collective models and obtain convergent results in relatively small appropriately chosen subspaces of the collective model Hilbert space. Special features of the proposed model are that it makes use of the beta wave functions given analytically by the softened-beta version of the Wilets-Jean model, proposed by Elliott et al., and a simple algorithm for computing SO(5)⊃SO(3) spherical harmonics. The latter has much in common with the methods of Chacon, Moshinsky, and Sharp but is conceptually and computationally simpler. Results are presented for collective models ranging from the spherical vibrator to the Wilets-Jean and axially symmetric rotor-vibrator models.

  6. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    PubMed

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation-one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system-such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  7. Motion of a Distinguishable Impurity in the Bose Gas: Arrested Expansion Without a Lattice and Impurity Snaking

    NASA Astrophysics Data System (ADS)

    Robinson, Neil J.; Caux, Jean-Sébastien; Konik, Robert M.

    2016-04-01

    We consider the real-time dynamics of an initially localized distinguishable impurity injected into the ground state of the Lieb-Liniger model. Focusing on the case where integrability is preserved, we numerically compute the time evolution of the impurity density operator in regimes far from analytically tractable limits. We find that the injected impurity undergoes a stuttering motion as it moves and expands. For an initially stationary impurity, the interaction-driven formation of a quasibound state with a hole in the background gas leads to arrested expansion—a period of quasistationary behavior. When the impurity is injected with a finite center-of-mass momentum, the impurity moves through the background gas in a snaking manner, arising from a quantum Newton's cradlelike scenario where momentum is exchanged back and forth between the impurity and the background gas.

  8. Motion of a distinguishable Impurity in the Bose gas: Arrested expansion without a lattice and impurity snaking

    DOE PAGES

    Robinson, Neil J.; Caux, Jean-Sébastien; Konik, Robert M.

    2016-04-07

    We consider the real-time dynamics of an initially localized distinguishable impurity injected into the ground state of the Lieb-Liniger model. Focusing on the case where integrability is preserved, we numerically compute the time evolution of the impurity density operator in regimes far from analytically tractable limits. We find that the injected impurity undergoes a stuttering motion as it moves and expands. For an initially stationary impurity, the interaction-driven formation of a quasibound state with a hole in the background gas leads to arrested expansion—a period of quasistationary behavior. In conclusion, when the impurity is injected with a finite center-of-mass momentum, the impurity moves through the background gas in a snaking manner, arising from a quantum Newton’s cradlelike scenario where momentum is exchanged back and forth between the impurity and the background gas.

  9. Unfolding of Proteins: Thermal and Mechanical Unfolding

    NASA Technical Reports Server (NTRS)

    Hur, Joe S.; Darve, Eric

    2004-01-01

    We have employed a Hamiltonian model based on a self-consistent Gaussian approximation to examine the unfolding process of proteins in external force fields, both mechanical and thermal. The motivation was to investigate the unfolding pathways of proteins by including only the essence of the important interactions of the native-state topology. Furthermore, if such a model can indeed correctly predict the physics of protein unfolding, it can complement more computationally expensive simulations and theoretical work. The self-consistent Gaussian approximation by Micheletti et al. has been incorporated in our model to make the model mathematically tractable by significantly reducing the computational cost. All thermodynamic properties and pair contact probabilities are calculated by simply evaluating the values of a series of incomplete Gamma functions in an iterative manner. We have compared our results to previous molecular dynamics simulation and experimental data for the mechanical unfolding of the giant muscle protein Titin (1TIT). Our model, especially in light of its simplicity and excellent agreement with experiment and simulation, demonstrates the basic physical elements necessary to capture the mechanism of protein unfolding in an external force field.

  10. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems

    NASA Astrophysics Data System (ADS)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-01

    We report a new limitation on the ability of physical systems to perform computation—one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system—such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  11. Multilinear Computing and Multilinear Algebraic Geometry

    DTIC Science & Technology

    2016-08-10

    No abstract available; the record text is report-documentation boilerplate. Subject terms: Tensors, multilinearity, algebraic geometry, numerical computations, computational tractability, high…

  12. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  13. Computational Nonlinear Morphology with Emphasis on Semitic Languages. Studies in Natural Language Processing.

    ERIC Educational Resources Information Center

    Kiraz, George Anton

    This book presents a tractable computational model that can cope with complex morphological operations, especially in Semitic languages, and less complex morphological systems present in Western languages. It outlines a new generalized regular rewrite rule system that uses multiple finite-state automata to cater to root-and-pattern morphology,…

  14. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  15. Numerical Study of Charged Inertial Particles in Turbulence using a Coupled Fluid-P3M Approach

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Capecelatro, Jesse

    2017-11-01

    Non-trivial interactions between charged particles and turbulence play an important role in many engineering and environmental flows, including clouds, fluidized bed reactors, charged hydrocarbon sprays and dusty plasmas. Due to the long-range nature of electrostatic forces, Coulomb interactions in systems with many particles must be handled carefully to avoid O(N2) computations. The particle-mesh (PM) method is typically employed in Eulerian-Lagrangian (EL) simulations as it avoids computing direct pairwise sums, but it fails to capture short-range interactions that are anticipated to be important when particles cluster. In this presentation, the particle-particle-particle-mesh (P3M) method that scales with O(NlogN) is implemented within a EL framework to simulate charged particles accurately in a tractable manner. The EL-P3M method is used to assess the competition between drag and Coulomb forces for a range of Stokes numbers and charges. Simulations of like- and oppositely-charged particles suspended in a two-dimensional Taylor-Green vortex and three-dimensional homogeneous isotropic turbulence are reported. One-point and two-point statistics obtained using PM and P3M are compared to assess the effect of added accuracy on collision rate and clustering.
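
    For context on the scaling issue P3M addresses, the sketch below evaluates the direct O(N^2) pairwise Coulomb force sum that becomes prohibitive for many particles; P3M instead keeps only a short-range pairwise correction inside a cutoff and handles the smooth long-range part on a mesh (typically via FFT-based solves). This is an illustrative toy, not the EL-P3M solver described in the abstract, and the softening constant is an arbitrary choice.

    ```python
    # Direct O(N^2) Coulomb force sum -- the cost that mesh-based splittings such
    # as P3M avoid by treating long-range interactions on a grid. Toy code only.
    import numpy as np

    def coulomb_forces_direct(pos, charge, k=1.0, eps=1e-12):
        """Pairwise Coulomb forces on N particles; cost scales as O(N^2)."""
        n = len(charge)
        forces = np.zeros_like(pos)
        for i in range(n):
            r = pos[i] - pos                        # vectors from every particle j to i
            d2 = np.sum(r * r, axis=1) + eps        # softened squared distances
            d2[i] = np.inf                          # exclude self-interaction
            forces[i] = np.sum(k * charge[i] * charge[:, None] * r / d2[:, None] ** 1.5,
                               axis=0)
        return forces

    rng = np.random.default_rng(1)
    pos = rng.uniform(size=(200, 3))
    q = rng.choice([-1.0, 1.0], size=200)
    print(coulomb_forces_direct(pos, q).shape)      # (200, 3)
    ```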

  16. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data.

    PubMed

    Tom, Jennifer A; Sinsheimer, Janet S; Suchard, Marc A

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, often compared using point estimates that fail to account for the variability within and correlation between the distributions these realizations approximate. However, although the initial concession to stratify generally precludes the more sensible analysis using a single joint hierarchical model, we can circumvent this outcome and capitalize on the intermediate realizations by extending the dynamic iterative reweighting MCMC algorithm. In doing so, we reuse the available realizations by reweighting them with importance weights, recycling them into a now tractable joint hierarchical model. We apply this technique to intermediate realizations generated from stratified analyses of 687 influenza A genomes spanning 13 years allowing us to revisit hypotheses regarding the evolutionary history of influenza within a hierarchical statistical framework.
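
    As a schematic of the reuse-by-reweighting idea (not the authors' dynamic iterative reweighting MCMC implementation), the sketch below takes posterior draws from a stratified analysis and reweights them with importance weights proportional to the ratio of a joint-model target density to the stratified proposal density. Toy Gaussian densities stand in for both posteriors.

    ```python
    # Reuse-and-reweight sketch: recycle existing posterior draws toward a new
    # target via importance weights. Toy Gaussian densities stand in for the
    # stratified (proposal) and joint hierarchical (target) posteriors.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    draws = rng.normal(loc=0.0, scale=1.0, size=5000)   # realizations from a stratified analysis

    log_w = norm.logpdf(draws, loc=0.5, scale=0.8) - norm.logpdf(draws, loc=0.0, scale=1.0)
    w = np.exp(log_w - log_w.max())                      # stabilize before normalizing
    w /= w.sum()

    ess = 1.0 / np.sum(w ** 2)                           # effective size of the recycled sample
    print(f"reweighted mean = {np.sum(w * draws):.3f}, ESS = {ess:.0f} of {draws.size}")
    ```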

  17. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data

    PubMed Central

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2015-01-01

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, often compared using point estimates that fail to account for the variability within and correlation between the distributions these realizations approximate. However, although the initial concession to stratify generally precludes the more sensible analysis using a single joint hierarchical model, we can circumvent this outcome and capitalize on the intermediate realizations by extending the dynamic iterative reweighting MCMC algorithm. In doing so, we reuse the available realizations by reweighting them with importance weights, recycling them into a now tractable joint hierarchical model. We apply this technique to intermediate realizations generated from stratified analyses of 687 influenza A genomes spanning 13 years allowing us to revisit hypotheses regarding the evolutionary history of influenza within a hierarchical statistical framework. PMID:26681992

  18. A Tractable Numerical Model for Exploring Nonadiabatic Quantum Dynamics

    ERIC Educational Resources Information Center

    Camrud, Evan; Turner, Daniel B.

    2017-01-01

    Numerous computational and spectroscopic studies have demonstrated the decisive role played by nonadiabatic coupling in photochemical reactions. Nonadiabatic coupling drives photochemistry when potential energy surfaces are nearly degenerate at avoided crossings or truly degenerate at unavoided crossings. The dynamics induced by nonadiabatic…

  19. Exploring the Universe with WISE and Cloud Computing

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.

    2011-01-01

    WISE is a recently-completed astronomical survey mission that has imaged the entire sky in four infrared wavelength bands. The large quantity of science images returned consists of 2,776,922 individual snapshots in various locations in each band which, along with ancillary data, totals around 110TB of raw, uncompressed data. Making the most use of this data requires advanced computing resources. I will discuss some initial attempts in the use of cloud computing to make this large problem tractable.

  20. Anytime Prediction: Efficient Ensemble Methods for Any Computational Budget

    DTIC Science & Technology

    2014-01-21

    Fragmentary full-text excerpts (no abstract available): "…difficult problem and is the focus of this work. 1.1 Motivation: The number of machine learning applications which involve real time and latency sensitive pre…" "…significantly increasing latency, and the computational costs associated with hosting a service are often critical to its viability. For such…" "…balancing training costs, concerns such as scalability and tractability are often more important, as opposed to factors such as latency which are more…"

  1. Parametric Study of a YAV-8B Harrier in Ground Effect Using Time-Dependent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With successful reduction of computation time, the issues of accuracy and appropriateness of turbulence models will become more tractable.

  2. A non-linear programming approach to the computer-aided design of regulators using a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1985-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
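
    The formulation above (choose a fixed-structure feedback gain to minimize an integral quadratic objective subject to an integral quadratic constraint) can be sketched as a small nonlinear program. Below, assuming zero-mean initial states with identity covariance so that each quadratic integral reduces to the trace of a Lyapunov-equation solution, a static gain K is tuned with SciPy's SLSQP. The plant, weights, and constraint bound are made up for illustration and are not from the paper.

    ```python
    # Fixed-structure regulator design as a nonlinear program (sketch only).
    # A static gain K minimizes an integral quadratic cost subject to an integral
    # quadratic constraint on control energy; integrals are evaluated through
    # Lyapunov equations assuming unit-covariance initial states.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov
    from scipy.optimize import minimize

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative double-integrator plant
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), 0.1 * np.eye(1)

    def quad_integral(k, W):
        """trace(P) with Acl' P + P Acl = -W, i.e. E[ integral x' W x dt ]."""
        Acl = A - B @ k.reshape(1, 2)
        if np.max(np.real(np.linalg.eigvals(Acl))) >= -1e-6:
            return 1e6                        # penalize destabilizing gains
        P = solve_continuous_lyapunov(Acl.T, -W)
        return float(np.trace(P))

    def cost(k):
        K = k.reshape(1, 2)
        return quad_integral(k, Q + K.T @ R @ K)

    def control_energy(k):
        K = k.reshape(1, 2)
        return quad_integral(k, K.T @ K)      # E[ integral u' u dt ]

    cons = [{"type": "ineq", "fun": lambda k: 5.0 - control_energy(k)}]
    res = minimize(cost, x0=np.array([1.0, 1.0]), method="SLSQP", constraints=cons)
    print("gain K =", res.x, " cost =", res.fun)
    ```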

  3. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    PubMed Central

    2012-01-01

    Background: Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results: As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement and attach dynamically a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions: Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878

  4. Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, Brian James; Yin, Lin; Stark, David James

    One of the long-standing problems in the community is the question of how we can model “next-generation” laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this long-standing problem.

  5. Computing Role Assignments of Proper Interval Graphs in Polynomial Time

    NASA Astrophysics Data System (ADS)

    Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël

    A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.
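
    For intuition about what an R-role assignment is (not the paper's polynomial-time algorithm for proper interval graphs), the brute-force sketch below enumerates all vertex mappings from G to R and checks local surjectivity: every vertex's neighborhood must map exactly onto the neighborhood of its assigned role. It is exponential in |V(G)| and only suitable for tiny graphs; the dictionary graph encoding is a hypothetical convenience.

    ```python
    # Brute-force check for an R-role assignment (locally surjective homomorphism).
    # Exponential in |V(G)|; shown only to make the definition concrete -- the
    # paper's contribution is a polynomial algorithm when G is a proper interval graph.
    from itertools import product

    def has_role_assignment(G, R):
        """G, R: dicts mapping each vertex to its set of neighbors."""
        gv, rv = list(G), list(R)
        for assignment in product(rv, repeat=len(gv)):
            f = dict(zip(gv, assignment))
            ok = all(
                {f[u] for u in G[v]} == R[f[v]]   # image of N_G(v) equals N_R(f(v))
                for v in gv
            )
            if ok:
                return f
        return None

    # R = P3 (a path on three vertices); G = C4 (a 4-cycle).
    R = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(has_role_assignment(G, R))
    ```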

  6. Conformational diversity and computational enzyme design

    PubMed Central

    Lassila, Jonathan K.

    2010-01-01

    The application of computational protein design methods to the design of enzyme active sites offers potential routes to new catalysts and new reaction specificities. Computational design methods have typically treated the protein backbone as a rigid structure for the sake of computational tractability. However, this fixed-backbone approximation introduces its own special challenges for enzyme design and it contrasts with an emerging picture of natural enzymes as dynamic ensembles with multiple conformations and motions throughout a reaction cycle. This review considers the impact of conformational variation and dynamics on computational enzyme design and it highlights new approaches to addressing protein conformational diversity in enzyme design including recent advances in multistate design, backbone flexibility, and computational library design. PMID:20829099

  7. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e. models for High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  8. Quantum simulations with noisy quantum computers

    NASA Astrophysics Data System (ADS)

    Gambetta, Jay

    Quantum computing is a new computational paradigm that is expected to lie beyond the standard model of computation. This implies a quantum computer can solve problems that can't be solved by a conventional computer with tractable overhead. To fully harness this power we need a universal fault-tolerant quantum computer. However the overhead in building such a machine is high and a full solution appears to be many years away. Nevertheless, we believe that we can build machines in the near term that cannot be emulated by a conventional computer. It is then interesting to ask what these can be used for. In this talk we will present our advances in simulating complex quantum systems with noisy quantum computers. We will show experimental implementations of this on some small quantum computers.

  9. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
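
    To make the cost argument concrete, the sketch below shows the adjoint trick for a linear model problem A(p)u = b with output J = c·u: one extra adjoint solve A^T λ = c yields dJ/dp_k = -λ^T (∂A/∂p_k) u for every parameter at once, instead of one perturbed solve per parameter. The toy matrix and its parameter dependence are hypothetical stand-ins, not a CFD adjoint implementation.

    ```python
    # Adjoint sensitivity sketch for a linear model problem A(p) u = b, J = c.u:
    # one forward solve plus one adjoint solve gives dJ/dp for all parameters.
    import numpy as np

    def A_of_p(p):
        """Hypothetical parameter dependence: p[k] stiffens diagonal entry k."""
        return np.array([[2.0 + p[0], -1.0], [-1.0, 2.0 + p[1]]])

    b = np.array([1.0, 0.0])
    c = np.array([0.0, 1.0])
    p = np.array([0.3, 0.1])

    u = np.linalg.solve(A_of_p(p), b)        # forward (analysis) solve
    lam = np.linalg.solve(A_of_p(p).T, c)    # single adjoint solve, independent of len(p)

    # dJ/dp_k = -lam . (dA/dp_k) u ; here dA/dp_k has a single 1 at (k, k).
    grad = np.array([-lam[k] * u[k] for k in range(len(p))])

    # Finite-difference check (the costly alternative: one extra solve per parameter).
    eps, J0 = 1e-6, c @ u
    fd = [(c @ np.linalg.solve(A_of_p(p + eps * np.eye(len(p))[k]), b) - J0) / eps
          for k in range(len(p))]
    print(grad, np.array(fd))
    ```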

  10. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
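
    As a minimal version of the tractable-likelihood point (dense for clarity, whereas the paper exploits sparse matrix computations), the sketch below writes the observed locations as Brownian motion plus independent Gaussian measurement error, so their joint covariance is sigma^2 * min(t_i, t_j) + tau^2 * I, and evaluates the exact Gaussian log-likelihood. This is a one-dimensional toy with generic parameter names, not the BBMM code.

    ```python
    # Exact log-likelihood of Brownian motion observed with Gaussian measurement
    # error (1-D toy): Cov(Y_i, Y_j) = sigma^2 * min(t_i, t_j) + tau^2 * 1{i==j},
    # for a path started at a known value x0. Dense covariance for clarity.
    import numpy as np
    from scipy.stats import multivariate_normal

    def loglik(y, t, x0, sigma, tau):
        cov = sigma**2 * np.minimum.outer(t, t) + tau**2 * np.eye(len(t))
        return multivariate_normal(mean=np.full(len(t), x0), cov=cov).logpdf(y)

    rng = np.random.default_rng(3)
    t = np.cumsum(rng.uniform(0.5, 1.5, size=50))                             # irregular times
    x = 2.0 + np.cumsum(rng.normal(scale=np.sqrt(np.diff(t, prepend=0.0))))   # BM path, sigma = 1
    y = x + rng.normal(scale=0.25, size=t.size)                               # add measurement error
    print(loglik(y, t, x0=2.0, sigma=1.0, tau=0.25))
    ```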

  11. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.

  12. Learning-based stochastic object models for use in optimizing imaging systems

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua

    2017-03-01

    It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer-simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.
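
    A stripped-down version of the "learn attribute distributions, then sample new anatomy" idea (illustrative only, not the authors' GAD models) is sketched below: principal component analysis of landmark-based organ shapes from training cases, followed by random sampling of the component scores within the learned spread to generate new, statistically plausible shapes. The synthetic elliptical "organ outlines" are placeholders for real training contours.

    ```python
    # Minimal learn-then-sample shape model: PCA of landmark coordinates from
    # training shapes, then new shapes drawn by sampling component scores within
    # the learned variation. Synthetic 2-D outlines stand in for organ contours.
    import numpy as np

    rng = np.random.default_rng(7)
    angles = np.linspace(0.0, 2.0 * np.pi, 30, endpoint=False)

    train = []
    for _ in range(40):                                   # synthetic training ensemble
        a, b = rng.uniform(0.8, 1.2), rng.uniform(0.5, 0.9)
        pts = np.c_[a * np.cos(angles), b * np.sin(angles)] + rng.normal(scale=0.02, size=(30, 2))
        train.append(pts.ravel())
    X = np.array(train)

    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    std = s / np.sqrt(X.shape[0] - 1)                     # per-component standard deviations

    def sample_shape(n_modes=3, limit=2.0):
        """Draw a new shape, constraining each score to +/- `limit` std devs."""
        scores = np.clip(rng.normal(size=n_modes), -limit, limit) * std[:n_modes]
        return (mean + scores @ Vt[:n_modes]).reshape(-1, 2)

    print(sample_shape().shape)                           # (30, 2) landmark coordinates
    ```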

  13. Intra-organizational Computation and Complexity

    DTIC Science & Technology

    2003-01-01

    Fragmentary full-text excerpts (no abstract available): "…models. New methodologies, centered on understanding algorithmic complexity, are being developed that may enable us to better handle network data…" "…tractability of data analysis, and enable more precise theorization. A variety of measures of algorithmic complexity, e.g., Kolmogorov-Chaitin, and a…" "…variety of proxies exist (which are often turned to for pragmatic reasons) (Lempel and Ziv, 1976). For the most part, social and organizational…"

  14. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.

  15. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  16. Distributed sensor networks: a cellular nonlinear network perspective.

    PubMed

    Haenggi, Martin

    2003-12-01

    Large-scale networks of integrated wireless sensors become increasingly tractable. Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption, and cost for digital circuitry and wireless communications. Networking, self-organization, and distributed operation are crucial ingredients to harness the sensing, computing, and communication capabilities of the nodes into a complete system. This article shows that those networks can be considered as cellular nonlinear networks (CNNs), and that their analysis and design may greatly benefit from the rich theoretical results available for CNNs.

  17. Economics and computer science of a radio spectrum reallocation.

    PubMed

    Leyton-Brown, Kevin; Milgrom, Paul; Segal, Ilya

    2017-07-11

    The recent "incentive auction" of the US Federal Communications Commission was the first auction to reallocate radio frequencies between two different kinds of uses: from broadcast television to wireless Internet access. The design challenge was not just to choose market rules to govern a fixed set of potential trades but also, to determine the broadcasters' property rights, the goods to be exchanged, the quantities to be traded, the computational procedures, and even some of the performance objectives. An essential and unusual challenge was to make the auction simple enough for human participants while still ensuring that the computations would be tractable and capable of delivering nearly efficient outcomes.

  18. Unified sensor management in unknown dynamic clutter

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2010-04-01

    In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with maximization of a "natural" sensor management objective function called PENT (posterior expected number of targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.

  19. Renewal of the Attentive Sensing Project

    DTIC Science & Technology

    2006-02-07

    Fragmentary full-text excerpts (no abstract available): "…decisions about target presence or absence, is denoted track-before-detect. We have investigated joint tracking and detection in the context of the foveal…" "…computationally tractable bounds." "Task 2: Sensor Configuration for Tracking and Track Before Detect. Task 2 consisted of investigation of attentive…" "…strategy to multiple targets and to track-before-detect sensors. To apply principles developed in the context of foveal sensors to more immediately…"

  20. Learning-based stochastic object models for characterizing anatomical variations

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.

  1. Computational techniques to enable visualizing shapes of objects of extra spatial dimensions

    NASA Astrophysics Data System (ADS)

    Black, Don Vaughn, II

    Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three dimensional observers. Intuition relies on experience gained in a three dimensional environment. Gaining experience with virtual four dimensional objects and virtual three manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. In order to enable such a capability for ourselves, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. A technology is described in this dissertation to convert a representation of higher dimensional models into a format that may be displayed in realtime on graphics cards available on many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four dimensional objects on the desktop computer. The ultimate goal has been to provide the user a tangible and memorable experience with mathematical models of four dimensional objects such that the user can see the model from any user selected vantage point. By use of a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view or Aspect is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position to extract or "pluck" an embedded 3D slice or "aspect" from the embedding four-space. This plucked 3D aspect can be viewed from all angles via a conventional 3D viewer using three multiple POV viewports, and optionally exported to a third party CAD viewer for further manipulation. Plucking and Manipulating the Aspect provides a tangible experience for the end-user in the same manner as any 3D Computer Aided Design viewing and manipulation tool does for the engineer or a 3D video game provides for the nascent student.
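
    One computationally simple piece of the pipeline described above is extracting a 3D cross-section ("aspect") of a 4D object with a cutting 3-flat: intersect each edge of the 4D model with the hyperplane and keep the crossing points. The sketch below does this for an axis-aligned slice w = c of a tesseract; it is a toy, not the dissertation's GUI or rendering pipeline, and the axis-aligned cut is an assumption made for brevity.

    ```python
    # Slice a tesseract (4-D hypercube) with the hyperplane w = c: for every edge
    # whose endpoints straddle the hyperplane, keep the interpolated crossing
    # point. The surviving (x, y, z) points form one 3-D "aspect" of the object.
    import itertools
    import numpy as np

    verts = np.array(list(itertools.product([0.0, 1.0], repeat=4)))      # 16 vertices
    edges = [(i, j) for i, j in itertools.combinations(range(16), 2)
             if np.sum(np.abs(verts[i] - verts[j])) == 1.0]              # 32 edges

    def slice_w(c):
        pts = []
        for i, j in edges:
            w0, w1 = verts[i, 3], verts[j, 3]
            if (w0 - c) * (w1 - c) < 0:                  # edge crosses the hyperplane
                t = (c - w0) / (w1 - w0)
                pts.append(verts[i] + t * (verts[j] - verts[i]))
            elif w0 == c and w1 == c:                    # edge lies in the hyperplane
                pts.extend([verts[i], verts[j]])
        if not pts:
            return np.empty((0, 3))
        return np.unique(np.round(np.array(pts)[:, :3], 9), axis=0)      # drop w, dedupe

    print(slice_w(0.5).shape)    # 8 points: this cross-section is a unit cube
    ```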

  2. Rotorcraft application of advanced computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stanaway, Sharon

    1991-01-01

    The objective was to develop the capability to compute the unsteady viscous flow around rotor-body combinations. In the interest of tractability, the problem was divided into subprograms for: (1) computing the flow around a rotor blade in isolation; (2) computing the flow around a fuselage in isolation, and (3) integrating the pieces. Considerable progress has already been made by others toward computing the rotor in isolation (Srinivasan) and this work focused on the remaining tasks. These tasks required formulating a multi-block strategy for combining rotating blades and nonrotating components (i.e., a fuselage). Then an appropriate configuration was chosen for which suitable rotor body interference test data exists. Next, surface and volume grids were generated and state-of-the-art CFD codes were modified and applied to the problem.

  3. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Sherrill, C. David

    2014-07-01

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  4. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu

    2014-07-28

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  5. Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm

    DOE PAGES

    Yang, Shang-Te; Ling, Hao

    2013-01-01

    An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.

  6. Economics and computer science of a radio spectrum reallocation

    PubMed Central

    Leyton-Brown, Kevin; Segal, Ilya

    2017-01-01

    The recent “incentive auction” of the US Federal Communications Commission was the first auction to reallocate radio frequencies between two different kinds of uses: from broadcast television to wireless Internet access. The design challenge was not just to choose market rules to govern a fixed set of potential trades but also, to determine the broadcasters’ property rights, the goods to be exchanged, the quantities to be traded, the computational procedures, and even some of the performance objectives. An essential and unusual challenge was to make the auction simple enough for human participants while still ensuring that the computations would be tractable and capable of delivering nearly efficient outcomes. PMID:28652335

  7. Fast Decentralized Averaging via Multi-scale Gossip

    NASA Astrophysics Data System (ADS)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
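
    To make the underlying gossip primitive concrete (plain randomized pairwise gossip, not the paper's hierarchical multi-scale scheme), the sketch below builds a small random geometric graph and repeatedly lets a randomly chosen connected pair replace both of their values with the pair average; every node's value converges toward the global mean. Node count, radius, and tolerance are arbitrary illustration choices.

    ```python
    # Plain randomized pairwise gossip on a random geometric graph: at each step
    # a random edge's two endpoints average their values; all values converge to
    # the network-wide mean. (Multi-scale Gossip layers a hierarchical graph
    # decomposition on top of this primitive to reduce communication cost.)
    import numpy as np

    rng = np.random.default_rng(5)
    n, radius = 60, 0.25
    xy = rng.uniform(size=(n, 2))
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    edges = np.argwhere((d < radius) & np.triu(np.ones((n, n), dtype=bool), k=1))

    x = rng.normal(loc=10.0, scale=3.0, size=n)        # initial node measurements
    target = x.mean()

    for step in range(1, 100001):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])              # one pairwise gossip exchange
        if np.max(np.abs(x - target)) < 1e-3:
            break
    print(f"max deviation {np.max(np.abs(x - target)):.2e} after {step} exchanges")
    ```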

  8. Generation of dynamo magnetic fields in protoplanetary and other astrophysical accretion disks

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Levy, E. H.

    1988-01-01

    A computational method for treating the generation of dynamo magnetic fields in astrophysical disks is presented. The numerical difficulty of handling the boundary condition at infinity in the cylindrical disk geometry is overcome by embedding the disk in a spherical computational space and matching the solutions to analytically tractable spherical functions in the surrounding space. The lowest lying dynamo normal modes for a 'thick' astrophysical disk are calculated. The generated modes found are all oscillatory and spatially localized. The potential implications of the results for the properties of dynamo magnetic fields in real astrophysical disks are discussed.

  9. Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs

    NASA Astrophysics Data System (ADS)

    Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes

    We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.

  10. A Tractable Disequilibrium Framework for Integrating Computational Thermodynamics and Geodynamics

    NASA Astrophysics Data System (ADS)

    Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.

    2017-12-01

    The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of de Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔG_r) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return chemical potentials of all components and phases (as well as thermodynamic quantities for each phase, e.g. densities, heat capacity, entropies), but is not constrained to be in thermodynamic equilibrium. Allowing meta-stable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, for closed systems, these problems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well understood Fo-Fa binary phase loop. Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as other rheologies.

  11. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.

  12. Learning for Semantic Parsing with Kernels under Various Forms of Supervision

    DTIC Science & Technology

    2007-08-01

    natural language sentences to their formal executable meaning representations. This is a challenging problem and is critical for developing computing...sentences are semantically tractable. This indicates that Geoquery is a more challenging domain for semantic parsing than ATIS. In the past, there have been a...Combining parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pp. 187-194

  13. Nonequilibrium description of de novo biogenesis and transport through Golgi-like cisternae

    NASA Astrophysics Data System (ADS)

    Sachdeva, Himani; Barma, Mustansir; Rao, Madan

    2016-12-01

    A central issue in cell biology is the physico-chemical basis of organelle biogenesis in intracellular trafficking pathways, its most impressive manifestation being the biogenesis of Golgi cisternae. At a basic level, such morphologically and chemically distinct compartments should arise from an interplay between molecular transport and chemical maturation. Here, we formulate analytically tractable, minimalist models that incorporate this interplay between transport and chemical progression in physical space, and explore the conditions for de novo biogenesis of distinct cisternae. We propose new quantitative measures that can discriminate between the various models of transport in a qualitative manner; these include measures of the dynamics in steady state and of the dynamical response to perturbations of the kind amenable to live-cell imaging.

  14. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

    Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high-sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation-time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. In the past, reducing computation time typically required reducing sample size or scan resolution. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to compute only interactions above a threshold, which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial-domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm, which enhances the performance and accuracy of spatial-domain inversions. We show that the TNT algorithm reduces the execution time of spatial-domain inversions from months to hours and improves inverse solution accuracy, as it naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples, we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial-domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial-domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
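
    The following sketch illustrates a spatial-domain inversion of the kind described above, using SciPy's generic non-negative least squares routine as a stand-in for the TNT algorithm and a random toy forward operator in place of the true dipole-field kernel.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy forward operator G mapping non-negative dipole moments to measured field
# values; a real G would come from the dipole-field kernel of the microscope.
n_obs, n_dipoles = 200, 50
G = np.abs(rng.normal(size=(n_obs, n_dipoles)))
m_true = np.zeros(n_dipoles)
m_true[rng.choice(n_dipoles, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)

b = G @ m_true + 1e-3 * rng.normal(size=n_obs)   # noisy synthetic measurements
m_hat, residual = nnls(G, b)                      # non-negative least squares

print("residual norm:", residual)
print("max abs error in recovered moments:", np.abs(m_hat - m_true).max())
```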

  15. An application of nonlinear programming to the design of regulators of a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a nonlinear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer. One concerns helicopter longitudinal dynamics and the other the flight dynamics of an aerodynamically unstable aircraft.
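
    A minimal sketch of the formulation described above for a scalar system, assuming SciPy's SLSQP solver in place of the original nonlinear programming algorithm; the plant, cost weights, and control-energy budget are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Scalar example: dx/dt = -a*x + u with state feedback u = -k*x and x(0) = 1.
# Closed-loop x(t) = exp(-(a+k) t), so the integral quadratic costs have
# closed forms: J = (1 + r k^2) / (2 (a + k)),  E_u = k^2 / (2 (a + k)).
a, r, u_budget = 1.0, 0.1, 0.4

J   = lambda k: (1.0 + r * k[0] ** 2) / (2.0 * (a + k[0]))
E_u = lambda k: k[0] ** 2 / (2.0 * (a + k[0]))

res = minimize(J, x0=[1.0], method="SLSQP",
               bounds=[(0.0, None)],
               constraints=[{"type": "ineq", "fun": lambda k: u_budget - E_u(k)}])
print("feedback gain k* =", res.x[0], " cost =", res.fun)
```

    With these numbers the control-energy constraint is active, so the constrained gain is noticeably smaller than the unconstrained optimum, which is exactly the kind of trade-off the constrained formulation exposes to the designer.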

  16. Understanding Interpersonal Function in Psychiatric Illness Through Multiplayer Economic Games

    PubMed Central

    2014-01-01

    Interpersonal factors play significant roles in the onset, maintenance, and remission of psychiatric conditions. In the current major diagnostic classification systems for psychiatric disorders, some conditions are defined by the presence of impairments in social interaction or maintaining interpersonal relationships; these include autism, social phobia, and the personality disorders. Other psychopathologies confer significant difficulties in the social domain, including major depression, posttraumatic stress disorder, and psychotic disorders. Still other mental health conditions, including substance abuse and eating disorders, seem to be exacerbated or triggered in part by the influence of social peers. For each of these and other psychiatric conditions, the extent and quality of social support is a strong determinant of outcome such that high social support predicts symptom improvement and remission. Despite the central role of interpersonal factors in psychiatric illness, the neurobiology of social impairments remains largely unexplored, in part due to difficulties eliciting and quantifying interpersonal processes in a parametric manner. Recent advances in functional neuroimaging, combined with multiplayer exchange games drawn from behavioral economics, and computational/quantitative approaches more generally, provide a fitting paradigm within which to study interpersonal function and dysfunction in psychiatric conditions. In this review, we outline the importance of interpersonal factors in psychiatric illness and discuss ways in which neuroeconomics provides a tractable framework within which to examine the neurobiology of social dysfunction. PMID:22579510

  17. Research on the Application of Fuzzy Logic to Systems Analysis and Control

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Research conducted with the support of NASA Grant NCC2-275 has been focused in the main on the development of fuzzy logic and soft computing methodologies and their applications to systems analysis and control, with emphasis on problem areas which are of relevance to NASA's missions. One of the principal results of our research has been the development of a new methodology called Computing with Words (CW). Basically, in CW, words drawn from a natural language are employed in place of numbers for computing and reasoning. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers; and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW.

  18. Hydrodynamic theory of diffusion in two-temperature multicomponent plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramshaw, J.D.; Chang, C.H.

    Detailed numerical simulations of multicomponent plasmas require tractable expressions for species diffusion fluxes, which must be consistent with the given plasma current density J_q to preserve local charge neutrality. The common situation in which J_q = 0 is referred to as ambipolar diffusion. The use of formal kinetic theory in this context leads to results of formidable complexity. We derive simple tractable approximations for the diffusion fluxes in two-temperature multicomponent plasmas by means of a generalization of the hydrodynamical approach used by Maxwell, Stefan, Furry, and Williams. The resulting diffusion fluxes obey generalized Stefan-Maxwell equations that contain driving forces corresponding to ordinary, forced, pressure, and thermal diffusion. The ordinary diffusion fluxes are driven by gradients in pressure fractions rather than mole fractions. Simplifications due to the small electron mass are systematically exploited and lead to a general expression for the ambipolar electric field in the limit of infinite electrical conductivity. We present a self-consistent effective binary diffusion approximation for the diffusion fluxes. This approximation is well suited to numerical implementation and is currently in use in our LAVA computer code for simulating multicomponent thermal plasmas. Applications to date include a successful simulation of demixing effects in an argon-helium plasma jet, for which selected computational results are presented. Generalizations of the diffusion theory to finite electrical conductivity and nonzero magnetic field are currently in progress.

  19. Examining Trust, Forgiveness and Regret as Computational Concepts

    NASA Astrophysics Data System (ADS)

    Marsh, Stephen; Briggs, Pamela

    The study of trust has advanced tremendously in recent years, to the extent that the goal of a more unified formalisation of the concept is becoming feasible. To that end, we have begun to examine the closely related concepts of regret and forgiveness and their relationship to trust and its siblings. The resultant formalisation allows computational tractability in, for instance, artificial agents. Moreover, regret and forgiveness, when allied to trust, are very powerful tools in the Ambient Intelligence (AmI) security area, especially where Human Computer Interaction and concrete human understanding are key. This paper introduces the concepts of regret and forgiveness, exploring them from social psychological as well as a computational viewpoint, and presents an extension to Marsh's original trust formalisation that takes them into account. It discusses and explores work in the AmI environment, and further potential applications.

  20. Module discovery by exhaustive search for densely connected, co-expressed regions in biomolecular interaction networks.

    PubMed

    Colak, Recep; Moser, Flavia; Chu, Jeffrey Shih-Chieh; Schönhuth, Alexander; Chen, Nansheng; Ester, Martin

    2010-10-25

    Computational prediction of functionally related groups of genes (functional modules) from large-scale data is an important issue in computational biology. Gene expression experiments and interaction networks are well studied large-scale data sources, available for many not yet exhaustively annotated organisms. It has been well established that, when analyzing these two data sources jointly, modules are often reflected by highly interconnected (dense) regions in the interaction networks whose participating genes are co-expressed. However, the tractability of the problem had remained unclear, and methods by which to exhaustively search for such constellations had not been presented. We provide an algorithmic framework, referred to as Densely Connected Biclustering (DECOB), by which the aforementioned search problem becomes tractable. To benchmark the predictive power inherent to the approach, we computed all co-expressed, dense regions in physical protein and genetic interaction networks from human and yeast. An automated filtering procedure reduces our output, resulting in smaller collections of modules, comparable to state-of-the-art approaches. Our results performed favorably in a fair benchmarking competition which adheres to standard criteria. We demonstrate the usefulness of an exhaustive module search by using the unreduced output to more quickly perform GO term related function prediction tasks. We point out the advantages of our exhaustive output by predicting functional relationships using two examples. We demonstrate that the computation of all densely connected and co-expressed regions in interaction networks is an approach to module discovery of considerable value. Beyond confirming the well-settled hypothesis that such co-expressed, densely connected interaction network regions reflect functional modules, we open up novel computational ways to comprehensively analyze the modular organization of an organism based on prevalent and largely available large-scale datasets. Software and data sets are available at http://www.sfu.ca/~ester/software/DECOB.zip.
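
    A toy exhaustive search in the spirit of the problem statement above (not the DECOB implementation itself): enumerate small node sets and keep those that are both dense in the interaction network and pairwise co-expressed. All data and thresholds are illustrative.

```python
from itertools import combinations
import numpy as np

def dense_coexpressed_modules(nodes, edges, expr, k, min_density=0.8, min_corr=0.7):
    """Exhaustively list size-k node sets that are dense in the interaction
    network and pairwise co-expressed (toy version; DECOB uses a far more
    efficient enumeration)."""
    edge_set = {frozenset(e) for e in edges}
    corr = np.corrcoef(np.array([expr[n] for n in nodes]))
    idx = {n: i for i, n in enumerate(nodes)}
    modules = []
    for sub in combinations(nodes, k):
        pairs = list(combinations(sub, 2))
        density = sum(frozenset(p) in edge_set for p in pairs) / len(pairs)
        coexpr = min(corr[idx[a], idx[b]] for a, b in pairs)
        if density >= min_density and coexpr >= min_corr:
            modules.append(sub)
    return modules

nodes = ["g1", "g2", "g3", "g4"]
edges = [("g1", "g2"), ("g2", "g3"), ("g1", "g3"), ("g3", "g4")]
expr = {"g1": [1, 2, 3, 4], "g2": [1.1, 2.2, 2.9, 4.1],
        "g3": [0.9, 2.1, 3.2, 3.8], "g4": [4, 3, 2, 1]}
print(dense_coexpressed_modules(nodes, edges, expr, k=3))  # [('g1', 'g2', 'g3')]
```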

  1. Stochastic hybrid systems for studying biochemical processes.

    PubMed

    Singh, Abhyudai; Hespanha, João P

    2010-11-13

    Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.
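
    As a minimal illustration of the moment computations mentioned above (not the SHS machinery itself), the sketch below integrates the exactly closed moment equations of a constitutive-production/first-order-degradation model and recovers the Poisson steady state without Monte Carlo simulation; the rates are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constitutive production (rate k) and first-order degradation (rate g per
# molecule) of a single species.  For these linear propensities the moment
# equations close exactly, so no stochastic simulation is needed:
#   d<x>/dt   = k - g <x>
#   d<x^2>/dt = k (2<x> + 1) + g (<x> - 2<x^2>)
k, g = 10.0, 0.5

def moment_odes(t, m):
    m1, m2 = m
    return [k - g * m1,
            k * (2.0 * m1 + 1.0) + g * (m1 - 2.0 * m2)]

sol = solve_ivp(moment_odes, (0.0, 50.0), [0.0, 0.0], rtol=1e-8)
m1, m2 = sol.y[:, -1]
print("steady-state mean     :", m1, "(exact", k / g, ")")
print("steady-state variance :", m2 - m1 ** 2, "(exact", k / g, ")")
```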

  2. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    NASA Astrophysics Data System (ADS)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.

  3. Biobeam—Multiplexed wave-optical simulations of light-sheet microscopy

    PubMed Central

    Weigert, Martin; Bundschuh, Sebastian T.

    2018-01-01

    Sample-induced image degradation remains an intricate wave-optical problem in light-sheet microscopy. Here we present biobeam, an open-source software package that enables simulation of operational light-sheet microscopes by combining data from 10^5-10^6 multiplexed and GPU-accelerated point-spread-function calculations. The wave-optical nature of these simulations leads to the faithful reproduction of spatially varying aberrations, diffraction artifacts, geometric image distortions, adaptive optics, and emergent wave-optical phenomena, and renders image-formation in light-sheet microscopy computationally tractable. PMID:29652879

  4. An "intelligent" approach based on side-by-side cascade-correlation neural networks for estimating thermophysical properties from photothermal responses

    NASA Astrophysics Data System (ADS)

    Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc

    2015-01-01

    In the present paper, an artificial-intelligence-based approach dealing with the estimation of thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when subjecting materials to a light flux. So, the main objective of the present work was to estimate simultaneously both the thermal diffusivity and conductivity of materials, from front-face or rear-face photothermal responses to pseudo random binary signals. To this end, we used side-by-side feedforward neural networks trained with the cascade-correlation algorithm. In addition, computation time was a key point to consider. That is why the developed algorithms are computationally tractable.

  5. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem's feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem's inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, "bound tightening" and "grid pruning" algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  6. Approximate Bayesian computation for spatial SEIR(S) epidemic models.

    PubMed

    Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A

    2018-02-01

    Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; however, MCMC techniques become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
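
    A minimal ABC rejection sketch in the spirit of the approach described above (not the ABSEIR package, which is written for R and handles spatial SEIR models): a chain-binomial SIR simulator, a uniform prior on the transmission rate, and acceptance based on closeness of the final epidemic size. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sir(beta, gamma=0.2, n=500, i0=5, steps=100):
    """Discrete-time chain-binomial SIR; returns the final epidemic size."""
    s, i, r = n - i0, i0, 0
    for _ in range(steps):
        p_inf = 1.0 - np.exp(-beta * i / n)
        new_i = rng.binomial(s, p_inf)
        new_r = rng.binomial(i, 1.0 - np.exp(-gamma))
        s, i, r = s - new_i, i + new_i - new_r, r + new_r
    return r

observed_final_size = simulate_sir(beta=0.5)          # pretend this is data

# ABC rejection: keep prior draws whose simulated summary is close to the data.
prior_draws = rng.uniform(0.0, 2.0, size=5000)
accepted = [b for b in prior_draws
            if abs(simulate_sir(b) - observed_final_size) <= 20]
print("posterior mean of beta ~", np.mean(accepted), "from", len(accepted), "draws")
```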

  7. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% while reducing the processing time by 64% using comparable computational resources. PMID:26504634
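
    The sketch below illustrates the idea of compressing a depth-variant PSF stack with a principal component basis, using synthetic Gaussian PSFs and a plain SVD; it is not the authors' restoration algorithm, only a demonstration that a few components capture most of the depth variation.

```python
import numpy as np

# Stack of depth-variant PSFs, one per depth, flattened to rows of a matrix.
# The PSFs here are synthetic Gaussians whose width grows with depth; real
# PSFs would come from a wide-field imaging model or measured beads.
depths = np.linspace(0.0, 10.0, 40)
yy, xx = np.mgrid[-16:17, -16:17]
psfs = np.array([np.exp(-(xx**2 + yy**2) / (2.0 * (1.5 + 0.2 * d) ** 2)).ravel()
                 for d in depths])
psfs /= psfs.sum(axis=1, keepdims=True)

# PCA via SVD of the mean-centred stack: a few principal components capture
# most of the depth variation, so restoration only needs those components.
mean = psfs.mean(axis=0)
U, S, Vt = np.linalg.svd(psfs - mean, full_matrices=False)
explained = np.cumsum(S**2) / np.sum(S**2)
print("variance captured by first 3 components:", explained[2])

# Reconstruct one PSF from 3 components and check the approximation error.
k, i = 3, 25
approx = mean + U[i, :k] @ (S[:k, None] * Vt[:k])
print("max abs reconstruction error:", np.abs(approx - psfs[i]).max())
```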

  8. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem's feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem's inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, "bound tightening" and "grid pruning" algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  9. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization.

    PubMed

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-11-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified.
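
    A toy version of the enzyme-cost trade-off described above, assuming a two-step pathway where one enzyme is penalized by a vanishing thermodynamic driving force and the other by low substrate saturation; the rate laws, constants, and the SciPy scalar minimizer are illustrative stand-ins for the full ECM formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy two-step pathway A -> B -> C carrying a fixed flux v.  The enzyme demand
# of step 1 rises as the intermediate B approaches its equilibrium level
# (vanishing driving force), while the demand of step 2 falls as B saturates
# the enzyme.  ECM itself treats general rate laws and proves convexity in
# log-concentrations; this only shows the shape of the trade-off.
v = 1.0                      # required pathway flux (arbitrary units)
kcat1, kcat2 = 100.0, 80.0   # turnover numbers
km2, b_eq = 0.5, 10.0        # Michaelis constant of step 2, equilibrium level of B
w1, w2 = 1.0, 1.5            # per-enzyme costs (e.g. proportional to protein size)

def total_enzyme_cost(log_b):
    b = np.exp(log_b)
    e1 = v / (kcat1 * (1.0 - b / b_eq))      # thermodynamic penalty near b_eq
    e2 = v / (kcat2 * b / (km2 + b))         # saturation penalty at low b
    return w1 * e1 + w2 * e2

res = minimize_scalar(total_enzyme_cost,
                      bounds=(np.log(1e-3), np.log(0.99 * b_eq)), method="bounded")
print("optimal [B] =", np.exp(res.x), " minimal enzyme cost =", res.fun)
```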

  10. Quantum lattice model solver HΦ

    NASA Astrophysics Data System (ADS)

    Kawamura, Mitsuaki; Yoshimi, Kazuyoshi; Misawa, Takahiro; Yamaji, Youhei; Todo, Synge; Kawashima, Naoki

    2017-08-01

    HΦ [aitch-phi] is a program package based on the Lanczos-type eigenvalue solution applicable to a broad range of quantum lattice models, i.e., arbitrary quantum lattice models with two-body interactions, including the Heisenberg model, the Kitaev model, the Hubbard model and the Kondo-lattice model. While it works well on PCs and PC-clusters, HΦ also runs efficiently on massively parallel computers, which considerably extends the tractable range of the system size. In addition, unlike most existing packages, HΦ supports finite-temperature calculations through the method of thermal pure quantum (TPQ) states. In this paper, we explain the theoretical background and user interface of HΦ. We also show benchmark results of HΦ on supercomputers such as the K computer at RIKEN Advanced Institute for Computational Science (AICS) and SGI ICE XA (Sekirei) at the Institute for Solid State Physics (ISSP).
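
    For readers unfamiliar with Lanczos-type ground-state calculations, the sketch below builds a small spin-1/2 Heisenberg chain from sparse Kronecker products and diagonalizes it with SciPy's ARPACK interface; it is a generic illustration and does not use HΦ itself.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh   # ARPACK: Lanczos-type eigensolver

# Spin-1/2 Heisenberg chain H = J sum_i S_i . S_{i+1} (open boundaries).
# Packages like HPhi scale this kind of calculation to far larger systems
# on parallel machines; this is only a stand-alone toy.
sx = sp.csr_matrix([[0, 0.5], [0.5, 0]])
sy = sp.csr_matrix([[0, -0.5j], [0.5j, 0]])
sz = sp.csr_matrix([[0.5, 0], [0, -0.5]])

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    return sp.kron(sp.identity(2**i), sp.kron(op, sp.identity(2**(n - i - 1))))

n, J = 10, 1.0
H = sp.csr_matrix((2**n, 2**n), dtype=complex)
for i in range(n - 1):
    for op in (sx, sy, sz):
        H = H + J * site_op(op, i, n) @ site_op(op, i + 1, n)

e0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
print("ground-state energy per site:", e0.real / n)
```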

  11. Tractable Experiment Design via Mathematical Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.

  12. Immunity to helminths: resistance, regulation, and susceptibility to gastrointestinal nematodes.

    PubMed

    Grencis, Richard K

    2015-01-01

    Helminth parasites are a highly successful group of pathogens that challenge the immune system in a manner distinct from rapidly replicating infectious agents. Of this group, roundworms (nematodes) that dwell in the intestines of humans and other animals are prevalent worldwide. Currently, more than one billion people are infected by at least one species, often for extended periods of time. Thus, host-protective immunity is rarely complete. The reasons for this are complex, but laboratory investigation of tractable model systems in which protective immunity is effective has provided a mechanistic understanding of resistance that is characterized almost universally by a type 2/T helper 2 response. Greater understanding of the mechanisms of susceptibility has also provided the basis for defining host immunoregulation and parasite-evasion strategies, helping place in context the changing patterns of immunological disease observed worldwide.

  13. Shifting the Computational Paradigm

    DTIC Science & Technology

    2004-10-01

    Classifier. SIGART Bulletin 2 (3): 88-92 (1991) 23. F. Donini, M. Lenzerini, D. Nardi, and W. Nutt. 'The Complexity of Concept Languages', KR-91, pp 151...162, 1991. 24. F. Donini, M. Lenzerini, D. Nardi, and W. Nutt. 'Tractable concept languages', IJCAI-91, pp 458-465, 1991. 25. O. Lassila, "Web...UnambiguousProperty, then if P(x, y) and P(z, y) then x=z. aka injective. e.g. if nameOfMonth(m, "Feb") and nameOfMonth(n, "Feb") then m and n are the same

  14. Analytical modeling of the structureborne noise path on a small twin-engine aircraft

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III; Stokes, A. Westagard; Garrelick, J. M.; Martini, K. F.

    1988-01-01

    The structureborne noise path of a six passenger twin-engine aircraft is analyzed. Models of the wing and fuselage structures as well as the interior acoustic space of the cabin are developed and used to evaluate sensitivity to structural and acoustic parameters. Different modeling approaches are used to examine aspects of the structureborne path. These approaches are guided by a number of considerations including the geometry of the structures, the frequency range of interest, and the tractability of the computations. Results of these approaches are compared with experimental data.

  15. Landscape Encodings Enhance Optimization

    PubMed Central

    Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.

    2012-01-01

    Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860

  16. Robust optimization modelling with applications to industry and environmental problems

    NASA Astrophysics Data System (ADS)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

    Robust Optimization (RO) modeling is one of the existing methodologies for handling data uncertainty in optimization problems. The main challenge in the RO methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By definition, the robust counterpart depends strongly on how the uncertainty set is chosen; as a consequence, we can meet this challenge only if this set is chosen in a suitable way. RO has developed rapidly: since 2004, a new approach called Adjustable Robust Optimization (ARO) has been introduced to handle uncertain problems in which some decision variables must be decided as "wait and see" variables, in contrast to classic RO, which models all decision variables as "here and now" decisions. In ARO, the uncertain problem is treated as a multistage decision problem, and the decision variables involved become wait-and-see variables. In this paper we present applications of both RO and ARO. We present all results briefly to emphasize the importance of RO and ARO in many real-life problems.
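
    A minimal sketch of a tractable robust counterpart, assuming box uncertainty in a single constraint row and non-negative decision variables, solved with SciPy's linprog; the data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Nominal problem:  min c^T x  s.t.  a^T x <= b,  x >= 0,
# where the row a is only known to lie in a box  a_nom +/- delta.
# For x >= 0 the worst case is attained at a_nom + delta, so the robust
# counterpart is again an ordinary (tractable) linear program.
c      = np.array([-3.0, -2.0])          # maximise 3 x1 + 2 x2
a_nom  = np.array([1.0, 1.0])
delta  = np.array([0.2, 0.1])            # uncertainty half-widths
b      = 10.0

nominal = linprog(c, A_ub=[a_nom], b_ub=[b], bounds=[(0, None)] * 2)
robust  = linprog(c, A_ub=[a_nom + delta], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal solution:", nominal.x, "objective:", nominal.fun)
print("robust  solution:", robust.x,  "objective:", robust.fun)
```

    The robust solution gives up some nominal objective value in exchange for remaining feasible for every constraint row in the uncertainty box, which is the basic price-of-robustness trade-off the abstract refers to.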

  17. Getting more from accuracy and response time data: methods for fitting the linear ballistic accumulator.

    PubMed

    Donkin, Chris; Averell, Lee; Brown, Scott; Heathcote, Andrew

    2009-11-01

    Cognitive models of the decision process provide greater insight into response time and accuracy than do standard ANOVA techniques. However, such models can be mathematically and computationally difficult to apply. We provide instructions and computer code for three methods for estimating the parameters of the linear ballistic accumulator (LBA), a new and computationally tractable model of decisions between two or more choices. These methods (a Microsoft Excel worksheet, scripts for the statistical program R, and code for implementing the LBA in the Bayesian sampling software WinBUGS) vary in their flexibility and user accessibility. We also provide scripts in R that produce a graphical summary of the data and model predictions. In a simulation study, we explored the effect of sample size on parameter recovery for each method. The materials discussed in this article may be downloaded as a supplement from http://brm.psychonomic-journals.org/content/supplemental.

  18. Toward regional-scale adjoint tomography in the deep earth

    NASA Astrophysics Data System (ADS)

    Masson, Y.; Romanowicz, B. A.

    2013-12-01

    Thanks to the development of efficient numerical computation methods, such as the Spectral Element Method (SEM), and to the increasing power of computer clusters, it is now possible to obtain regional-scale images of the Earth's interior using adjoint tomography (e.g., Tape et al., 2009). For now, these tomographic models are limited to the upper layers of the Earth, i.e., they provide us with high-resolution images of the crust and the upper part of the mantle. Given the gigantic amount of computation it represents, obtaining similar models at the global scale (i.e., images of the entire Earth) seems out of reach at the moment. Furthermore, it is likely that the first generation of such global adjoint tomographic models will have a resolution significantly lower than that of current regional models. In order to image regions of interest in the deep Earth, such as plumes, slabs or large low shear velocity provinces (LLSVPs), while keeping the computation tractable, we are developing new tools that will allow us to perform regional-scale adjoint tomography at arbitrary depths. In a recent study (Masson et al., 2013), we showed that a numerical equivalent of the time-reversal mirrors used in experimental acoustics makes it possible to confine the wave propagation computations (i.e., SEM simulations) to the region to be imaged. With this ability to limit wave propagation modeling to a region of interest, obtaining the adjoint sensitivity kernels needed for tomographic imaging is only two steps further. First, the local wavefield modeling needs to be coupled with field extrapolation techniques in order to obtain synthetic seismograms at the surface of the Earth. These seismograms will account for the 3D structure inside the region of interest in a quasi-exact manner. We will present preliminary results in which the field extrapolation is performed using Green's functions computed in a 1D Earth model with the Direct Solution Method (DSM). Once synthetic seismograms can be obtained, it is possible to evaluate the misfit between observed and computed seismograms. The second step will then be to extrapolate the misfit function back into the SEM region in order to compute local adjoint sensitivity kernels. When available, these kernels will allow us to perform regional-scale adjoint tomography at arbitrary locations inside the Earth. Masson Y., Cupillard P., Capdeville Y., & Romanowicz B., 2013. On the numerical implementation of time-reversal mirrors for tomographic imaging, Journal of Geophysical Research (under review). Tape, C., et al. (2009). "Adjoint tomography of the southern California crust." Science 325(5943): 988-992.

  19. Rebuilding the Injured Lung

    PubMed Central

    2015-01-01

    The 57th annual Thomas L. Petty Aspen Lung Conference, entitled “Rebuilding the Injured Lung,” was held from June 4 to 7, 2014 at the Gant Conference Center in Aspen, Colorado. Investigators from a wide range of disciplines and perspectives convened to discuss the biology of lung injury, how the lung repairs itself, how and why repair fails, and how the repair process can be enhanced. Among the challenges identified in the course of the conference was how to develop more predictive experimental models that capture the multidimensional complexity of lung injury and repair in a tractable manner. From such approaches that successfully fuse the biological and physical sciences, the group envisioned that new therapies for acute and chronic lung injury would emerge. The discussion of experimental therapeutics ranged from pharmaceuticals and cells that interdict fibrosis and enhance repair to a de novo lung derived from stem cells repopulating a decellularized matrix. PMID:25830839

  20. Theoretical studies in interstellar cloud chemistry

    NASA Technical Reports Server (NTRS)

    Chiu, Y. T.; Prasad, S. S.

    1993-01-01

    This final report represents the completion of the three tasks under the purchase order no. SCPDE5620,1,2F. Chemical composition of gravitationally contracting, but otherwise quiescent, interstellar clouds and of interstellar clouds traversed by high velocity shocks, were modeled in a comprehensive manner that represents a significant progress in modeling these objects. The evolutionary chemical modeling, done under this NASA contract, represents a notable advance over the 'classical' fixed condition equilibrium models because the evolutionary models consider not only the chemical processes but also the dynamical processes by which the dark interstellar clouds may have assumed their present state. The shock calculations, being reported here, are important because they extend the limited chemical composition derivable from dynamical calculations for the total density and temperature structures behind the shock front. In order to be tractable, the dynamical calculations must severely simplify the chemistry. The present shock calculations take the shock profiles from the dynamical calculations and derive chemical composition in a comprehensive manner. The results of the present modeling study are still to be analyzed with reference to astronomical observational data and other contemporary model predictions. As far as humanly possible, this analysis will be continued with CRE's (Creative Research Enterprises's) IR&D resources, until a sponsor is found.

  1. Screened hybrid density functionals for solid-state chemistry and physics.

    PubMed

    Janesko, Benjamin G; Henderson, Thomas M; Scuseria, Gustavo E

    2009-01-21

    Density functional theory incorporating hybrid exchange-correlation functionals has been extraordinarily successful in providing accurate, computationally tractable treatments of molecular properties. However, conventional hybrid functionals can be problematic for solids. Their nonlocal, Hartree-Fock-like exchange term decays slowly and incorporates unphysical features in metals and narrow-bandgap semiconductors. This article provides an overview of our group's work on designing hybrid functionals for solids. We focus on the Heyd-Scuseria-Ernzerhof screened hybrid functional [J. Chem. Phys. 2003, 118, 8207], its applications to the chemistry and physics of solids and surfaces, and our efforts to build upon its successes.

  2. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization

    PubMed Central

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-01-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell’s capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified. PMID:27812109

  3. Localization of canine brachycephaly using an across breed mapping approach.

    PubMed

    Bannasch, Danika; Young, Amy; Myers, Jeffrey; Truvé, Katarina; Dickinson, Peter; Gregg, Jeffrey; Davis, Ryan; Bongcam-Rudloff, Eric; Webster, Matthew T; Lindblad-Toh, Kerstin; Pedersen, Niels

    2010-03-10

    The domestic dog, Canis familiaris, exhibits profound phenotypic diversity and is an ideal model organism for the genetic dissection of simple and complex traits. However, some of the most interesting phenotypes are fixed in particular breeds and are therefore less tractable to genetic analysis using classical segregation-based mapping approaches. We implemented an across breed mapping approach using a moderately dense SNP array, a low number of animals and breeds carefully selected for the phenotypes of interest to identify genetic variants responsible for breed-defining characteristics. Using a modest number of affected (10-30) and control (20-60) samples from multiple breeds, the correct chromosomal assignment was identified in a proof of concept experiment using three previously defined loci; hyperuricosuria, white spotting and chondrodysplasia. Genome-wide association was performed in a similar manner for one of the most striking morphological traits in dogs: brachycephalic head type. Although candidate gene approaches based on comparable phenotypes in mice and humans have been utilized for this trait, the causative gene has remained elusive using this method. Samples from nine affected breeds and thirteen control breeds identified strong genome-wide associations for brachycephalic head type on Cfa 1. Two independent datasets identified the same genomic region. Levels of relative heterozygosity in the associated region indicate that it has been subjected to a selective sweep, consistent with it being a breed defining morphological characteristic. Genotyping additional dogs in the region confirmed the association. To date, the genetic structure of dog breeds has primarily been exploited for genome wide association for segregating traits. These results demonstrate that non-segregating traits under strong selection are equally tractable to genetic analysis using small sample numbers.

  4. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
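
    As a small illustration of the number-of-numbers idea (not the ILP-based FPT algorithm from the paper), the sketch below decides Partition by branching only over the distinct input values; the instance is illustrative.

```python
from collections import Counter
from itertools import product

def partition_by_distinct_values(multiset):
    """Decide the Partition problem by choosing, for each *distinct* value,
    how many of its copies go into the first part.  The search space depends
    on the distinct values and their multiplicities rather than on 2^n over
    all elements; the paper instead feeds an equivalent integer program to an
    FPT ILP-feasibility routine."""
    total = sum(multiset)
    if total % 2:
        return None
    values, counts = zip(*Counter(multiset).items())
    for choice in product(*(range(c + 1) for c in counts)):
        if sum(k * v for k, v in zip(choice, values)) == total // 2:
            return {v: k for v, k in zip(values, choice)}
    return None

# 12 numbers but only 3 distinct values -> only 5*5*5 = 125 candidate splits.
print(partition_by_distinct_values([5, 5, 5, 5, 3, 3, 3, 3, 2, 2, 2, 2]))
```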

  5. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
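
    A schematic of the double-kernel summation as it is commonly stated for KEM, E_total ≈ Σ_{a<b} E(ab) − (n−2) Σ_a E(a); if the paper's exact expansion differs, treat this only as a sketch. The fragment_energy callable is hypothetical and would wrap an actual quantum chemistry calculation; the additive mock energy is only a sanity check.

```python
from itertools import combinations

def kem_energy(kernels, fragment_energy):
    """Double-kernel KEM estimate:
        E_total ~ sum over kernel pairs E(ab)  -  (n - 2) * sum over kernels E(a).
    `fragment_energy` is a hypothetical callable wrapping whatever quantum
    chemistry code evaluates a (capped) fragment; it is not defined here."""
    n = len(kernels)
    pair_sum   = sum(fragment_energy(a + b) for a, b in combinations(kernels, 2))
    single_sum = sum(fragment_energy(a) for a in kernels)
    return pair_sum - (n - 2) * single_sum

# Sanity check with a strictly additive mock energy (one unit per residue),
# which the expansion reproduces exactly:
mock = lambda frag: -1.0 * len(frag)
print(kem_energy([["res1"], ["res2"], ["res3"], ["res4"]], mock))  # -4.0
```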

  6. Real-time million-synapse simulation of rat barrel cortex.

    PubMed

    Sharp, Thomas; Petersen, Rasmus; Furber, Steve

    2014-01-01

    Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development.

  7. Empirical analysis of RNA robustness and evolution using high-throughput sequencing of ribozyme reactions.

    PubMed

    Hayden, Eric J

    2016-08-15

    RNA molecules provide a realistic but tractable model of a genotype to phenotype relationship. This relationship has been extensively investigated computationally using secondary structure prediction algorithms. Enzymatic RNA molecules, or ribozymes, offer access to genotypic and phenotypic information in the laboratory. Advancements in high-throughput sequencing technologies have enabled the analysis of sequences in the lab that now rivals what can be accomplished computationally. This has motivated a resurgence of in vitro selection experiments and opened new doors for the analysis of the distribution of RNA functions in genotype space. A body of computational experiments has investigated the persistence of specific RNA structures despite changes in the primary sequence, and how this mutational robustness can promote adaptations. This article summarizes recent approaches that were designed to investigate the role of mutational robustness during the evolution of RNA molecules in the laboratory, and presents theoretical motivations, experimental methods and approaches to data analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. A Methodology for Evaluating the Hygroscopic Behavior of Wood in Adaptive Building Skins using Motion Grammar

    NASA Astrophysics Data System (ADS)

    El-Dabaa, Rana; Abdelmohsen, Sherif

    2018-05-01

    The challenge in designing kinetic architecture lies in the limited application of computational design and human-computer interaction to the design of intelligent and interactive interfaces. The use of 'programmable materials', i.e., specifically fabricated composite materials that afford motion upon stimulation, is promising for low-cost, low-tech kinetic facades in buildings. Despite efforts to develop working prototypes, there has been no clear methodological framework for understanding and controlling the behavior of programmable materials or for using them for such purposes. This paper introduces a methodology for evaluating the motion acquired from programmed material – resulting from the hygroscopic behavior of wood – through 'motion grammar'. Motion grammar typically allows the desired motion control to be described in a computationally tractable way. The paper analyzes and evaluates motion parameters related to the hygroscopic properties and behavior of wood, and introduces a framework for tracking and controlling wood as a programmable material for kinetic architecture.

  9. Optimal structure and parameter learning of Ising models

    DOE PAGES

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...

    2018-03-16

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
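
    A small numerical illustration of local, per-spin reconstruction of an Ising model, using classical pseudo-likelihood (logistic regression) rather than the interaction screening objective itself; scikit-learn and NumPy are assumed, and the couplings and sample size are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from itertools import product

rng = np.random.default_rng(0)

# Small Ising model on 4 spins with a known coupling matrix J (zero fields).
n = 4
J = np.zeros((n, n))
J[0, 1] = J[1, 0] = 0.8
J[1, 2] = J[2, 1] = -0.6
J[2, 3] = J[3, 2] = 0.7

# Exact Boltzmann sampling by enumerating all 2^n configurations.
states = np.array(list(product([-1, 1], repeat=n)))
energies = -0.5 * np.einsum("si,ij,sj->s", states, J, states)
p = np.exp(-energies); p /= p.sum()
samples = states[rng.choice(len(states), size=20000, p=p)]

# Per-spin logistic regression (pseudo-likelihood), a classical local
# estimator related in spirit to interaction screening:
#   P(s_i = +1 | rest) = sigmoid(2 * sum_j J_ij s_j),
# so half of the fitted coefficients estimate row i of J.
J_hat = np.zeros((n, n))
for i in range(n):
    rest = np.delete(np.arange(n), i)
    clf = LogisticRegression(C=10.0).fit(samples[:, rest], samples[:, i] > 0)
    J_hat[i, rest] = 0.5 * clf.coef_[0]

print(np.round(J_hat, 2))   # close to J (and approximately symmetric)
```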

  10. Optimal structure and parameter learning of Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.

  11. PRIMELT3 MEGA.XLSM software for primary magma calculation: Peridotite primary magma MgO contents from the liquidus to the solidus

    NASA Astrophysics Data System (ADS)

    Herzberg, C.; Asimow, P. D.

    2015-02-01

    An upgrade of the PRIMELT algorithm for calculating primary magma composition is given together with its implementation in PRIMELT3 MEGA.xlsm software. It supersedes PRIMELT2.xls in correcting minor mistakes in melt fraction and computed Ni content of olivine, it identifies residuum mineralogy, and it provides a thorough analysis of uncertainties in mantle potential temperature and olivine liquidus temperature. The uncertainty analysis was made tractable by the computation of olivine liquidus temperatures as functions of pressure and partial melt MgO content between the liquidus and solidus. We present a computed anhydrous peridotite solidus in T-P space using relations amongst MgO, T and P along the solidus; it compares well with experiments on the solidus. Results of the application of PRIMELT3 to a wide range of basalts shows that the mantle sources of ocean islands and large igneous provinces were hotter than oceanic spreading centers, consistent with earlier studies and expectations of the mantle plume model.

  12. A Comparison of Solver Performance for Complex Gastric Electrophysiology Models

    PubMed Central

    Sathar, Shameer; Cheng, Leo K.; Trew, Mark L.

    2016-01-01

    Computational techniques for efficiently solving the systems of equations arising in gastric electrophysiology have not been studied systematically. We present a computationally challenging problem of simulating gastric electrophysiology in anatomically realistic stomach geometries with multiple intracellular and extracellular domains. The multiscale nature of the problem and the mesh resolution required to capture geometric and functional features necessitate efficient solution methods if the problem is to be tractable. In this study, we investigated and compared several parallel preconditioners for the linear systems arising from tetrahedral discretisation of electrically isotropic and anisotropic problems, with and without stimuli. The results showed that the isotropic problem was computationally less challenging than the anisotropic problem and that the application of extracellular stimuli increased workload considerably. Preconditioners based on block Jacobi and algebraic multigrid solvers were found to have the best overall solution times and least iteration counts, respectively. The algebraic multigrid preconditioner would be expected to perform better on large problems. PMID:26736543
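
    As a rough illustration of why the choice of preconditioner matters, the sketch below compares unpreconditioned conjugate gradients with a simple Jacobi (diagonal) preconditioner, the simplest relative of the block Jacobi approach mentioned above, on a stand-in sparse symmetric positive definite system; the matrix is a generic Poisson-type operator with heterogeneous coefficients, not the gastric electrophysiology system studied in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Stand-in SPD system: 2D Poisson-type matrix plus a widely varying positive
# diagonal (placeholder for a discretised multi-domain problem).
n = 100
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))
rng = np.random.default_rng(0)
A = A + sp.diags(10.0 ** rng.uniform(-2, 2, A.shape[0]))
b = np.ones(A.shape[0])

# Jacobi (diagonal) preconditioner.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: inv_diag * x)

iterations = {"plain": 0, "jacobi": 0}

def make_counter(key):
    def callback(xk):
        iterations[key] += 1
    return callback

x_plain, info_plain = cg(A, b, callback=make_counter("plain"))
x_jacobi, info_jacobi = cg(A, b, M=M, callback=make_counter("jacobi"))
print(iterations)   # iteration counts with and without preconditioning
```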

  13. Real-time million-synapse simulation of rat barrel cortex

    PubMed Central

    Sharp, Thomas; Petersen, Rasmus; Furber, Steve

    2014-01-01

    Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development. PMID:24910593
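
    For readers unfamiliar with PyNN, the sketch below shows the style of model specification the abstract refers to; the backend import, cell models, and all parameter values are illustrative assumptions rather than the barrel-cortex model itself (on SpiNNaker hardware the backend module would typically be pyNN.spiNNaker).

```python
# Minimal PyNN-style network specification (illustrative cell models and parameters).
import pyNN.nest as sim          # on SpiNNaker this would typically be pyNN.spiNNaker

sim.setup(timestep=1.0)          # ms

# Two populations standing in for thalamic input and one cortical layer.
thalamus = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0), label="thalamus")
cortex = sim.Population(400, sim.IF_curr_exp(tau_m=20.0), label="cortex")

# Sparse feed-forward excitation from thalamus to cortex.
sim.Projection(thalamus, cortex,
               sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0),
               receptor_type="excitatory")

cortex.record("spikes")
sim.run(1000.0)                  # ms of biological time
spike_data = cortex.get_data("spikes")
sim.end()
```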

  14. Assessment of time-dependent density functional theory with the restricted excitation space approximation for excited state calculations of large systems

    NASA Astrophysics Data System (ADS)

    Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.

    2018-06-01

    The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.

  15. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  16. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing", which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia Investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial use: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.

  17. Physiological controls of large‐scale patterning in planarian regeneration: a molecular and computational perspective on growth and form

    PubMed Central

    Durant, Fallon; Lobo, Daniel; Hammelman, Jennifer

    2016-01-01

    Planaria are complex metazoans that repair damage to their bodies and cease remodeling when a correct anatomy has been achieved. This model system offers a unique opportunity to understand how large‐scale anatomical homeostasis emerges from the activities of individual cells. Much progress has been made on the molecular genetics of stem cell activity in planaria. However, recent data also indicate that the global pattern is regulated by physiological circuits composed of ionic and neurotransmitter signaling. Here, we overview the multi‐scale problem of understanding pattern regulation in planaria, with specific focus on bioelectric signaling via ion channels and gap junctions (electrical synapses), and computational efforts to extract explanatory models from functional and molecular data on regeneration. We present a perspective that interprets results in this fascinating field using concepts from dynamical systems theory and computational neuroscience. Serving as a tractable nexus between genetic, physiological, and computational approaches to pattern regulation, planarian pattern homeostasis harbors many deep insights for regenerative medicine, evolutionary biology, and engineering. PMID:27499881

  18. Computational Methodology for Absolute Calibration Curves for Microfluidic Optical Analyses

    PubMed Central

    Chang, Chia-Pin; Nagel, David J.; Zaghloul, Mona E.

    2010-01-01

    Optical fluorescence and absorption are two of the primary techniques used for analytical microfluidics. We provide a thorough yet tractable method for computing the performance of diverse optical micro-analytical systems. Sample sizes range from nano- to many micro-liters and concentrations from nano- to milli-molar. Equations are provided to trace quantitatively the flow of the fundamental entities, namely photons and electrons, and the conversion of energy from the source, through optical components, samples and spectral-selective components, to the detectors and beyond. The equations permit facile computations of calibration curves that relate the concentrations or numbers of molecules measured to the absolute signals from the system. This methodology provides the basis for both detailed understanding and improved design of microfluidic optical analytical systems. It saves prototype turn-around time, and is much simpler and faster to use than ray tracing programs. Over two thousand spreadsheet computations were performed during this study. We found that some design variations produce higher signal levels and, for constant noise levels, lower minimum detection limits. Improvements of more than a factor of 1,000 were realized. PMID:22163573

  19. Strategies for Global Optimization of Temporal Preferences

    NASA Technical Reports Server (NTRS)

    Morris, Paul; Morris, Robert; Khatib, Lina; Ramakrishnan, Sailesh

    2004-01-01

    A temporal reasoning problem can often be naturally characterized as a collection of constraints with associated local preferences for times that make up the admissible values for those constraints. Globally preferred solutions to such problems emerge as a result of well-defined operations that compose and order temporal assignments. The overall objective of this work is a characterization of different notions of global preference, and to identify tractable sub-classes of temporal reasoning problems incorporating these notions. This paper extends previous results by refining the class of useful notions of global temporal preference that are associated with problems that admit of tractable solution techniques. This paper also answers the hitherto open question of whether solutions that are globally preferred according to a utilitarian criterion of global preference can be found tractably.

  20. Sinking bubbles in stout beers

    NASA Astrophysics Data System (ADS)

    Lee, W. T.; Kaar, S.; O'Brien, S. B. G.

    2018-04-01

    A surprising phenomenon witnessed by many is the sinking bubbles seen in a settling pint of stout beer. Bubbles are less dense than the surrounding fluid so how does this happen? Previous work has shown that the explanation lies in a circulation of fluid promoted by the tilted sides of the glass. However, this work has relied heavily on computational fluid dynamics (CFD) simulations. Here, we show that the phenomenon of sinking bubbles can be predicted using a simple analytic model. To make the model analytically tractable, we work in the limit of small bubbles and consider a simplified geometry. The model confirms both the existence of sinking bubbles and the previously proposed mechanism.

  1. Bioinformatics in translational drug discovery.

    PubMed

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  2. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
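
    A heavily simplified sketch of the Monte Carlo idea is given below: sample both objects' positions from their uncertainty distributions, record the separations, and count how often they fall below the combined collision radius. The sketch samples a single epoch rather than propagating trajectories over a time window, and every number in it is an illustrative placeholder.

```python
import numpy as np

def sample_separations(mu1, cov1, mu2, cov2, n_samples=1_000_000, seed=0):
    """Sample the separation between two objects from their position uncertainties.

    Simplification: positions are sampled at a single epoch; the full method
    propagates each sample over a time window and records the closest approach.
    """
    rng = np.random.default_rng(seed)
    r1 = rng.multivariate_normal(mu1, cov1, size=n_samples)
    r2 = rng.multivariate_normal(mu2, cov2, size=n_samples)
    return np.linalg.norm(r1 - r2, axis=1)

# Illustrative numbers (km): position uncertainties of two objects near closest approach.
sep = sample_separations(mu1=[0.0, 0.0, 0.0], mu2=[0.3, 0.1, 0.0],
                         cov1=np.diag([0.04, 0.01, 0.01]),
                         cov2=np.diag([0.04, 0.01, 0.01]))

# Because the separations are stored, the collision probability can be evaluated
# afterwards for any combined collision radius.
for radius_km in (0.01, 0.02, 0.05):
    print(radius_km, np.mean(sep < radius_km))
```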

  3. High-resolution mapping of bifurcations in nonlinear biochemical circuits

    NASA Astrophysics Data System (ADS)

    Genot, A. J.; Baccouche, A.; Sieskind, R.; Aubert-Kato, N.; Bredeche, N.; Bartolo, J. F.; Taly, V.; Fujii, T.; Rondelez, Y.

    2016-08-01

    Analog molecular circuits can exploit the nonlinear nature of biochemical reaction networks to compute low-precision outputs with fewer resources than digital circuits. This analog computation is similar to that employed by gene-regulation networks. Although digital systems have a tractable link between structure and function, the nonlinear and continuous nature of analog circuits yields an intricate functional landscape, which makes their design counter-intuitive, their characterization laborious and their analysis delicate. Here, using droplet-based microfluidics, we map with high resolution and dimensionality the bifurcation diagrams of two synthetic, out-of-equilibrium and nonlinear programs: a bistable DNA switch and a predator-prey DNA oscillator. The diagrams delineate where function is optimal, dynamics bifurcates and models fail. Inverse problem solving on these large-scale data sets indicates interference from enzymatic coupling. Additionally, data mining exposes the presence of rare, stochastically bursting oscillators near deterministic bifurcations.

  4. Efficient estimation of the maximum metabolic productivity of batch systems

    DOE PAGES

    St. John, Peter C.; Crowley, Michael F.; Bomble, Yannick J.

    2017-01-31

    Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable.

  5. Optimization of block-floating-point realizations for digital controllers with finite-word-length considerations.

    PubMed

    Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian

    2003-01-01

    The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation was analyzed resulting from using finite word length (FWL) block-floating-point representation scheme. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.

  6. Ospreys do not teach offspring how to kill prey at the nest.

    PubMed

    Howard, Megan; Hoppitt, Will

    2017-08-01

    There is strong evidence for teaching in only a handful of species, most of which are cooperative breeders, leading some researchers to suggest that teaching may be more likely to evolve in such species. Alternatively, this initial distribution could be an artefact of the popularity and tractability of cooperative breeders as behavioural study systems. Therefore, establishing or refuting this potential evolutionary link requires researchers to assess potential cases of teaching in more non-cooperatively breeding species. We tested for teaching in the osprey (Pandion haliaetus), a non-cooperatively-breeding bird anecdotally reported to teach hunting skills to their offspring. We tested whether parents brought back more live prey to the nest as their offspring got older, allowing the latter to practice killing prey in a manner analogous to the progressive teaching seen in meerkats. We found the opposite trend to that predicted by the teaching hypothesis, indicating that ospreys do not teach their young at the nest. © 2017 The Author(s).

  7. Tractable flux-driven temperature, density, and rotation profile evolution with the quasilinear gyrokinetic transport model QuaLiKiz

    NASA Astrophysics Data System (ADS)

    Citrin, J.; Bourdelle, C.; Casson, F. J.; Angioni, C.; Bonanomi, N.; Camenen, Y.; Garbet, X.; Garzotti, L.; Görler, T.; Gürcan, O.; Koechl, F.; Imbeaux, F.; Linder, O.; van de Plassche, K.; Strand, P.; Szepesi, G.; Contributors, JET

    2017-12-01

    Quasilinear turbulent transport models are a successful tool for prediction of core tokamak plasma profiles in many regimes. Their success hinges on the reproduction of local nonlinear gyrokinetic fluxes. We focus on significant progress in the quasilinear gyrokinetic transport model QuaLiKiz (Bourdelle et al 2016 Plasma Phys. Control. Fusion 58 014036), which employs an approximated solution of the mode structures to significantly speed up computation time compared to full linear gyrokinetic solvers. Optimisation of the dispersion relation solution algorithm within integrated modelling applications leads to flux calculations 10^6-10^7 times faster than local nonlinear simulations. This allows tractable simulation of flux-driven dynamic profile evolution including all transport channels: ion and electron heat, main particles, impurities, and momentum. Furthermore, QuaLiKiz now includes the impact of rotation and temperature anisotropy induced poloidal asymmetry on heavy impurity transport, important for W-transport applications. Application within the JETTO integrated modelling code results in 1 s of JET plasma simulation within 10 h using 10 CPUs. Simultaneous predictions of core density, temperature, and toroidal rotation profiles for both JET hybrid and baseline experiments are presented, covering both ion and electron turbulence scales. The simulations are successfully compared to measured profiles, with agreement mostly in the 5%-25% range according to standard figures of merit. QuaLiKiz is now open source and available at www.qualikiz.com.

  8. Active Ambiguity Reduction: An Experiment Design Approach to Tractable Qualitative Reasoning.

    DTIC Science & Technology

    1987-04-20

    Shankar A. Rajamoney and Gerald F. DeJong, Artificial Intelligence Research Group, Coordinated Science ... [Only fragments of the report's front matter and reference list were recovered, citing "Representations of Knowledge in a Mechanics Problem-Solver" (Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, MA), a 1979 International Joint Conference on Artificial Intelligence paper (Tokyo, Japan), and J. de Kleer and J. S. Brown, "A Qualitative Physics Based on ..."]

  9. Multicomponent density functional theory embedding formulation.

    PubMed

    Culpitt, Tanner; Brorsen, Kurt R; Pak, Michael V; Hammes-Schiffer, Sharon

    2016-07-28

    Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF(-) molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.

  10. Multicomponent density functional theory embedding formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culpitt, Tanner; Brorsen, Kurt R.; Pak, Michael V.

    Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF(-) molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.

  11. Cheese rind communities provide tractable systems for in situ and in vitro studies of microbial diversity

    PubMed Central

    Wolfe, Benjamin E.; Button, Julie E.; Santarelli, Marcela; Dutton, Rachel J.

    2014-01-01

    Tractable microbial communities are needed to bridge the gap between observations of patterns of microbial diversity and mechanisms that can explain these patterns. We developed cheese rinds as model microbial communities by characterizing in situ patterns of diversity and by developing an in vitro system for community reconstruction. Sequencing of 137 different rind communities across 10 countries revealed 24 widely distributed and culturable genera of bacteria and fungi as dominant community members. Reproducible community types formed independent of geographic location of production. Intensive temporal sampling demonstrated that assembly of these communities is highly reproducible. Patterns of community composition and succession observed in situ can be recapitulated in a simple in vitro system. Widespread positive and negative interactions were identified between bacterial and fungal community members. Cheese rind microbial communities represent an experimentally tractable system for defining mechanisms that influence microbial community assembly and function. PMID:25036636

  12. Theoretical study of the hyperfine parameters of OH

    NASA Technical Reports Server (NTRS)

    Chong, Delano P.; Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.

    1991-01-01

    In the present study of the hyperfine parameters of O-17H as a function of the one- and n-particle spaces, all of the parameters except oxygen's spin density, b_F(O), are sufficiently tractable to allow concentration on the computational requirements for an accurate determination of b_F(O). Full configuration-interaction (FCI) calculations in six Gaussian basis sets yield unambiguous results for (1) the effect of uncontracting the O s and p basis sets; (2) that of adding diffuse s and p functions; and (3) that of adding polarization functions to O. The size-extensive modified coupled-pair functional method yields b_F values which are in fair agreement with FCI results.

  13. Low-Thrust Many-Revolution Trajectory Optimization via Differential Dynamic Programming and a Sundman Transformation

    NASA Astrophysics Data System (ADS)

    Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.

    2018-01-01

    Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J2 perturbation and lunar gravity included for a 500 revolution transfer.

  14. Chemical vapor deposition fluid flow simulation modelling tool

    NASA Technical Reports Server (NTRS)

    Bullister, Edward T.

    1992-01-01

    Accurate numerical simulation of chemical vapor deposition (CVD) processes requires a general purpose computational fluid dynamics package combined with specialized capabilities for high temperature chemistry. In this report, we describe the implementation of these specialized capabilities in the spectral element code NEKTON. The thermal expansion of the gases involved is shown to be accurately approximated by the low Mach number perturbation expansion of the incompressible Navier-Stokes equations. The radiative heat transfer between multiple interacting radiating surfaces is shown to be tractable using the method of Gebhart. The disparate rates of reaction and diffusion in CVD processes are calculated via a point-implicit time integration scheme. We demonstrate the use of the above capabilities on prototypical CVD applications.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
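
    The Chebyshev-based step can be illustrated with a one-dimensional sketch: given only the mean and standard deviation of a voltage forecast error, the Cantelli (one-sided Chebyshev) inequality yields a deterministic margin that enforces the chance constraint for any error distribution with those moments. The function names and numbers below are illustrative, not taken from the paper.

```python
import math

def chebyshev_margin(sigma, epsilon):
    """Cantelli/Chebyshev multiplier k*sigma such that P(error > k*sigma) <= epsilon
    for any distribution with the given standard deviation."""
    return sigma * math.sqrt((1.0 - epsilon) / epsilon)

def robust_voltage_limit(v_max, err_mean, err_std, epsilon=0.05):
    """Deterministic limit on the nominal voltage so that the chance constraint
    P(v_nominal + error <= v_max) >= 1 - epsilon holds for any error distribution
    with the given mean and standard deviation."""
    return v_max - err_mean - chebyshev_margin(err_std, epsilon)

# Illustrative numbers (per-unit): 1.05 pu upper limit, forecast-error stats from data.
print(robust_voltage_limit(v_max=1.05, err_mean=0.0, err_std=0.01, epsilon=0.05))
```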

  16. Identifying High Potential Well Targets with 3D Seismic and Mineralogy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellors, R. J.

    2015-10-30

    Seismic reflection is the primary tool used in petroleum exploration and production, but its use in geothermal exploration is less standard, in part due to cost but also due to the challenges in identifying the highly-permeable zones essential for economic hydrothermal systems [e.g. Louie et al., 2011; Majer, 2003]. Newer technology, such as wireless sensors and low-cost high performance computing, has helped reduce the cost and effort needed to conduct 3D surveys. The second difficulty, identifying permeable zones, has been less tractable so far. Here we report on the use of seismic attributes from a 3D seismic survey to identify and map permeable zones in a hydrothermal area.

  17. Pathway Design, Engineering, and Optimization.

    PubMed

    Garcia-Ruiz, Eva; HamediRad, Mohammad; Zhao, Huimin

    The microbial metabolic versatility found in nature has inspired scientists to create microorganisms capable of producing value-added compounds. Many endeavors have been made to transfer and/or combine pathways, and existing or even engineered enzymes with new functions, into tractable microorganisms to generate new metabolic routes for drug, biofuel, and specialty chemical production. However, the success of these pathways can be impeded by complications ranging from an inherent failure of the pathway to cell perturbations. Pursuing ways to overcome these shortcomings, a wide variety of strategies have been developed. This chapter will review the computational algorithms and experimental tools used to design efficient metabolic routes, and to construct and optimize biochemical pathways to produce chemicals of high interest.

  18. Bounded-Degree Approximations of Stochastic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
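
    A minimal sketch of the bounded in-degree idea follows: for every node, greedily grow a parent set up to the chosen in-degree using a user-supplied directed-information estimate. The greedy rule here is a generic stand-in for the paper's submodularity-based selection, and di_estimate is an assumed placeholder for a real directed-information estimator.

```python
def best_parent_sets(nodes, max_in_degree, di_estimate):
    """Greedy bounded in-degree approximation of a directed information graph.

    di_estimate(parents, target) is assumed to return an estimate of the directed
    information from the processes in `parents` to `target` (0.0 for no parents).
    """
    parent_sets = {}
    for target in nodes:
        chosen = []
        candidates = [n for n in nodes if n != target]
        for _ in range(max_in_degree):
            best, best_gain = None, 0.0
            for c in candidates:
                gain = di_estimate(chosen + [c], target) - di_estimate(chosen, target)
                if gain > best_gain:
                    best, best_gain = c, gain
            if best is None:
                break
            chosen.append(best)
            candidates.remove(best)
        parent_sets[target] = chosen
    return parent_sets

# Toy usage with a made-up additive score in place of a real estimator.
if __name__ == "__main__":
    toy_scores = {("x", "y"): 0.8, ("y", "z"): 0.5, ("x", "z"): 0.1}
    di = lambda parents, target: sum(toy_scores.get((p, target), 0.0) for p in parents)
    print(best_parent_sets(["x", "y", "z"], max_in_degree=1, di_estimate=di))
```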

  19. Low-Thrust Many-Revolution Trajectory Optimization via Differential Dynamic Programming and a Sundman Transformation

    NASA Astrophysics Data System (ADS)

    Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.

    2018-06-01

    Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J2 perturbation and lunar gravity included for a 500 revolution transfer.

  20. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.

    PubMed

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  1. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  2. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  3. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ˜15% for an estimated reduction in computational runtime of ˜95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
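
    The surrogate construction can be sketched with off-the-shelf tools: fit a Gaussian process from delamination descriptors to the decomposed waveform parameters, then query it for an unseen configuration. In the sketch below the features, targets, and training data are synthetic placeholders rather than actual chirplet coefficients from simulated B-scans.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Placeholder training data: delamination (depth, lateral position) -> waveform
# parameters (e.g. amplitude, time of arrival, chirp rate). Real data would come
# from chirplet-decomposed simulated B-scans.
rng = np.random.default_rng(1)
X_train = rng.uniform([0.5, -10.0], [4.0, 10.0], size=(50, 2))       # mm
y_train = np.column_stack([
    np.exp(-0.3 * X_train[:, 0]),                                     # amplitude proxy
    2.0 * X_train[:, 0],                                              # arrival-time proxy
    0.1 * X_train[:, 1],                                              # chirp-rate proxy
])

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 5.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Predict waveform parameters (with uncertainty) for an unseen delamination configuration.
X_new = np.array([[2.2, 3.0]])
y_pred, y_std = gp.predict(X_new, return_std=True)
print(y_pred, y_std)
```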

  4. Generation of anisotropy in turbulent flows subjected to rapid distortion

    NASA Astrophysics Data System (ADS)

    Clark, Timothy T.; Kurien, Susan; Rubinstein, Robert

    2018-01-01

    A computational tool for the anisotropic time-evolution of the spectral velocity correlation tensor is presented. We operate in the linear, rapid distortion limit of the mean-field-coupled equations. Each term of the equations is written in the form of an expansion to arbitrary order in the basis of irreducible representations of the SO(3) symmetry group. The computational algorithm for this calculation solves a system of coupled equations for the scalar weights of each generated anisotropic mode. The analysis demonstrates that rapid distortion rapidly but systematically generates higher-order anisotropic modes. To maintain a tractable computation, the maximum number of rotational modes to be used in a given calculation is specified a priori. The computed Reynolds stress converges to the theoretical result derived by Batchelor and Proudman [Quart. J. Mech. Appl. Math. 7, 83 (1954), 10.1093/qjmam/7.1.83] if a sufficiently large maximum number of rotational modes is utilized; more modes are required to recover the solution at later times. The emergence and evolution of the underlying multidimensional space of functions is presented here using a 64-mode calculation. Alternative implications for modeling strategies are discussed.

  5. Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Weaver, Jesse R.

    In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which strike an effective balance between fully representing the complexity of interacting uncertainty claims and satisfying the needs of both computational complexity and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.
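
    The focusing step can be illustrated with a small sketch: collapse a Dempster-Shafer mass assignment onto a single event of interest, yielding the (belief, disbelief, uncertainty) triple familiar from subjective logic. The mass assignment below is a toy example, not data from the web ingest experiment.

```python
def focus_on_event(masses, event, frame):
    """Collapse a Dempster-Shafer mass assignment to a single event.

    masses : dict mapping frozenset(hypotheses) -> mass (should sum to 1)
    event  : frozenset of hypotheses defining the event of interest
    frame  : frozenset of all hypotheses (the frame of discernment)

    Returns (belief, disbelief, uncertainty): mass committed to subsets of the
    event, mass committed to subsets of its complement, and the remainder.
    """
    complement = frame - event
    belief = sum(m for s, m in masses.items() if s and s <= event)
    disbelief = sum(m for s, m in masses.items() if s and s <= complement)
    return belief, disbelief, 1.0 - belief - disbelief

# Toy example: frame {a, b, c}, event of interest {a}.
frame = frozenset("abc")
masses = {frozenset("a"): 0.5, frozenset("bc"): 0.2, frozenset("abc"): 0.3}
print(focus_on_event(masses, frozenset("a"), frame))   # (0.5, 0.2, 0.3)
```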

  6. Swirling Flow Computation at the Trailing Edge of Radial-Axial Hydraulic Turbines

    NASA Astrophysics Data System (ADS)

    Susan-Resiga, Romeo; Muntean, Sebastian; Popescu, Constantin

    2016-11-01

    Modern hydraulic turbines require runners optimized over a range of operating points with respect to minimum weighted-average draft tube losses and/or flow instabilities. Tractable optimization methodologies must include realistic estimations of the swirling flow exiting the runner and further ingested by the draft tube, prior to runner design. The paper presents a new mathematical model and the associated numerical algorithm for computing the swirling flow at the trailing edge of a Francis turbine runner, operated at arbitrary discharge. The general turbomachinery throughflow theory is particularized for an arbitrary hub-to-shroud line in the meridian half-plane and the resulting boundary value problem is solved with the finite element method. The results obtained with the present model are validated against full 3D runner flow computations within a range of discharge values. The mathematical model incorporates the full information for the relative flow direction, as well as the curvatures of the hub-to-shroud line and meridian streamlines, respectively. It is shown that the flow direction can be frozen within a range of operating points in the neighborhood of the best efficiency regime.

  7. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
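
    A minimal sketch of the linking pattern discussed above is shown below: a discrete agent layer and a continuum diffusion field are advanced with operator splitting, several small continuum sub-steps per agent step. The grid size, rates, and update rules are illustrative placeholders, not the tuberculosis model referenced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_substeps = 50, 10
field = np.ones((n, n))                        # continuum concentration field
agents = rng.integers(0, n, size=(200, 2))     # agent grid coordinates

def diffuse(f, d=0.1, dt=0.1):
    """One explicit finite-difference diffusion step (periodic boundaries via np.roll)."""
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    return f + d * dt * lap

for outer_step in range(100):
    # Discrete (agent) update: consume the local field and take a random-walk step.
    for i, (x, y) in enumerate(agents):
        field[x, y] = max(field[x, y] - 0.05, 0.0)
        agents[i] = ((x + rng.integers(-1, 2)) % n, (y + rng.integers(-1, 2)) % n)
    # Continuum update: several smaller diffusion sub-steps per agent step.
    for _ in range(n_substeps):
        field = diffuse(field)
```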

  8. Local rules simulation of the kinetics of virus capsid self-assembly.

    PubMed

    Schwartz, R; Shor, P W; Prevelige, P E; Berger, B

    1998-12-01

    A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.

  9. Turning intractable counting into sampling: Computing the configurational entropy of three-dimensional jammed packings.

    PubMed

    Martiniani, Stefano; Schrenk, K Julian; Stevenson, Jacob D; Wales, David J; Frenkel, Daan

    2016-01-01

    We present a numerical calculation of the total number of disordered jammed configurations Ω of N repulsive, three-dimensional spheres in a fixed volume V. To make these calculations tractable, we increase the computational efficiency of the approach of Xu et al. [Phys. Rev. Lett. 106, 245502 (2011), doi:10.1103/PhysRevLett.106.245502] and Asenjo et al. [Phys. Rev. Lett. 112, 098002 (2014), doi:10.1103/PhysRevLett.112.098002] and we extend the method to allow computation of the configurational entropy as a function of pressure. The approach that we use computes the configurational entropy by sampling the absolute volume of basins of attraction of the stable packings in the potential energy landscape. We find a surprisingly strong correlation between the pressure of a configuration and the volume of its basin of attraction in the potential energy landscape. This relation is well described by a power law. Our methodology to compute the number of minima in the potential energy landscape should be applicable to a wide range of other enumeration problems in statistical physics, string theory, cosmology, and machine learning that aim to find the distribution of the extrema of a scalar cost function that depends on many degrees of freedom.

  10. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

    Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the aerial recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.

  11. VAMPS: a website for visualization and analysis of microbial population structures.

    PubMed

    Huse, Susan M; Mark Welch, David B; Voorhis, Andy; Shipunova, Anna; Morrison, Hilary G; Eren, A Murat; Sogin, Mitchell L

    2014-02-05

    The advent of next-generation DNA sequencing platforms has revolutionized molecular microbial ecology by making the detailed analysis of complex communities over time and space a tractable research pursuit for small research groups. However, the ability to generate 10⁵-10⁸ reads with relative ease brings with it many downstream complications. Beyond the computational resources and skills needed to process and analyze data, it is difficult to compare datasets in an intuitive and interactive manner that leads to hypothesis generation and testing. We developed the free web service VAMPS (Visualization and Analysis of Microbial Population Structures, http://vamps.mbl.edu) to address these challenges and to facilitate research by individuals or collaborating groups working on projects with large-scale sequencing data. Users can upload marker gene sequences and associated metadata; reads are quality filtered and assigned to both taxonomic structures and to taxonomy-independent clusters. A simple point-and-click interface allows users to select for analysis any combination of their own or their collaborators' private data and data from public projects, filter these by their choice of taxonomic and/or abundance criteria, and then explore these data using a wide range of analytic methods and visualizations. Each result is extensively hyperlinked to other analysis and visualization options, promoting data exploration and leading to a greater understanding of data relationships. VAMPS allows researchers using marker gene sequence data to analyze the diversity of microbial communities and the relationships between communities, to explore these analyses in an intuitive visual context, and to download data, results, and images for publication. VAMPS obviates the need for individual research groups to make the considerable investment in computational infrastructure and bioinformatic support otherwise necessary to process, analyze, and interpret massive amounts of next-generation sequence data. Any web-capable device can be used to upload, process, explore, and extract data and results from VAMPS. VAMPS encourages researchers to share sequence and metadata, and fosters collaboration between researchers of disparate biomes who recognize common patterns in shared data.

  12. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering as well as hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.

  13. Membrane-spanning α-helical barrels as tractable protein-design targets.

    PubMed

    Niitsu, Ai; Heal, Jack W; Fauland, Kerstin; Thomson, Andrew R; Woolfson, Derek N

    2017-08-05

    The rational (de novo) design of membrane-spanning proteins lags behind that for water-soluble globular proteins. This is due to gaps in our knowledge of membrane-protein structure, and experimental difficulties in studying such proteins compared to water-soluble counterparts. One limiting factor is the small number of experimentally determined three-dimensional structures for transmembrane proteins. By contrast, many tens of thousands of globular protein structures provide a rich source of 'scaffolds' for protein design, and the means to garner sequence-to-structure relationships to guide the design process. The α-helical coiled coil is a protein-structure element found in both globular and membrane proteins, where it cements a variety of helix-helix interactions and helical bundles. Our deep understanding of coiled coils has enabled a large number of successful de novo designs. For one class, the α-helical barrels (that is, symmetric bundles of five or more helices with central accessible channels), there are both water-soluble and membrane-spanning examples. Recent computational designs of water-soluble α-helical barrels with five to seven helices have advanced the design field considerably. Here we identify and classify analogous and more complicated membrane-spanning α-helical barrels from the Protein Data Bank. These provide tantalizing but tractable targets for protein engineering and de novo protein design. This article is part of the themed issue 'Membrane pores: from structure and assembly, to medicine and technology'. © 2017 The Author(s).

  14. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  15. Design of orbital debris shields for oblique hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1994-01-01

    A new impact debris propagation code was written to link CTH simulations of space debris shield perforation to the Lagrangian finite element code DYNA3D, for space structure wall impact simulations. This software (DC3D) simulates debris cloud evolution using a nonlinear elastic-plastic deformable particle dynamics model, and renders computationally tractable the supercomputer simulation of oblique impacts on Whipple shield protected structures. Comparison of three dimensional, oblique impact simulations with experimental data shows good agreement over a range of velocities of interest in the design of orbital debris shielding. Source code developed during this research is provided on the enclosed floppy disk. An abstract based on the work described was submitted to the 1994 Hypervelocity Impact Symposium.

  16. Solving the quantum many-body problem with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Troyer, Matthias

    2017-02-01

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme we demonstrate is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
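
    The variational form described here can be made concrete with a small sketch of a restricted-Boltzmann-machine ansatz in which the hidden units are traced out analytically. The parameter shapes and the random initialization below are illustrative assumptions rather than the authors' settings.

      import numpy as np

      def rbm_amplitude(spins, a, b, W):
          """Unnormalized wave-function amplitude psi(s) of an RBM-style ansatz.

          spins : array of +/-1 visible spin values, shape (N,)
          a, b  : visible and hidden bias vectors, shapes (N,) and (M,)
          W     : coupling matrix, shape (M, N)

          Tracing out the hidden units analytically gives
          psi(s) = exp(a . s) * prod_i 2*cosh(b_i + W_i . s).
          """
          theta = b + W @ spins
          return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

      rng = np.random.default_rng(1)
      N, M = 10, 20                      # visible spins, hidden units (toy sizes)
      spins = rng.choice([-1.0, 1.0], size=N)
      a = 0.01 * rng.standard_normal(N)
      b = 0.01 * rng.standard_normal(M)
      W = 0.01 * rng.standard_normal((M, N))
      print(rbm_amplitude(spins, a, b, W))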

  17. Answering Schrödinger's question: A free-energy formulation

    NASA Astrophysics Data System (ADS)

    Ramstead, Maxwell James Désormeau; Badcock, Paul Benjamin; Friston, Karl John

    2018-03-01

    The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen's four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery.

  18. Simulating Biomass Fast Pyrolysis at the Single Particle Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciesielski, Peter; Wiggins, Gavin; Daw, C Stuart

    2017-07-01

    Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole-biomass with increased chemical speciation, and are still tractable with present-day computational resources.

  19. Chemically frozen multicomponent boundary layer theory of salt and/or ash deposition rates from combustion gases

    NASA Technical Reports Server (NTRS)

    Rosner, D. E.; Chen, B.-K.; Fryburg, G. C.; Kohl, F. J.

    1979-01-01

    There is increased interest in, and concern about, deposition and corrosion phenomena in combustion systems containing inorganic condensible vapors and particles (salts, ash). To meet the need for a computationally tractable deposition rate theory general enough to embrace multielement/component situations of current and future gas turbine and magnetogasdynamic interest, a multicomponent chemically 'frozen' boundary layer (CFBL) deposition theory is presented and its applicability to the special case of Na2SO4 deposition from seeded laboratory burner combustion products is demonstrated. The coupled effects of Fick (concentration) diffusion and Soret (thermal) diffusion are included, along with explicit corrections for effects of variable properties and free stream turbulence. The present formulation is sufficiently general to include the transport of particles provided they are small enough to be formally treated as heavy molecules. Quantitative criteria developed to delineate the domain of validity of CFBL-rate theory suggest considerable practical promise for the present framework, which is characterized by relatively modest demands for new input information and computer time.
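
    For readers unfamiliar with the coupled transport terms mentioned above, a commonly quoted schematic form of the species mass flux with both Fick (concentration) and Soret (thermal) contributions is, in generic notation that is not necessarily that of the paper,

      \[
        \mathbf{j}_i \;=\; -\,\rho D_i \left( \nabla \omega_i \;+\; \alpha_{T,i}\,\omega_i\,\frac{\nabla T}{T} \right),
      \]

    where \(\omega_i\) is the species mass fraction, \(D_i\) the Fick diffusivity, and \(\alpha_{T,i}\) a thermal diffusion factor.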

  20. Assessing the role of mini-applications in predicting key performance characteristics of scientific and engineering applications

    DOE PAGES

    Barrett, R. F.; Crozier, P. S.; Doerfler, D. W.; ...

    2014-09-28

    Computational science and engineering application programs are typically large, complex, and dynamic, and are often constrained by distribution limitations. As a means of making tractable rapid explorations of scientific and engineering application programs in the context of new, emerging, and future computing architectures, a suite of miniapps has been created to serve as proxies for full scale applications. Each miniapp is designed to represent a key performance characteristic that does or is expected to significantly impact the runtime performance of an application program. In this paper we introduce a methodology for assessing the ability of these miniapps to effectively represent these performance issues. We applied this methodology to four miniapps, examining the linkage between them and an application they are intended to represent. Herein we evaluate the fidelity of that linkage. This work represents the initial steps required to begin to answer the question, ''Under what conditions does a miniapp represent a key performance characteristic in a full app?''

  1. General-purpose abductive algorithm for interpretation

    NASA Astrophysics Data System (ADS)

    Fox, Richard K.; Hartigan, Julie

    1996-11-01

    Abduction, inference to the best explanation, is an information-processing task that is useful for solving interpretation problems such as diagnosis, medical test analysis, legal reasoning, theory evaluation, and perception. The task is a generative one in which an explanation comprising domain hypotheses is assembled and used to account for given findings. The explanation is taken to be an interpretation as to why the findings have arisen within the given situation. Research in abduction has led to the development of a general-purpose computational strategy which has been demonstrated on all of the above types of problems. This abduction strategy can be performed in layers so that different types of knowledge can come together in deriving an explanation at different levels of description. Further, the abduction strategy is tractable and offers a very useful tradeoff between confidence in the explanation and completeness of the explanation. This paper will describe this computational strategy for abduction and demonstrate its usefulness for perceptual problems by examining problem-solving systems in speech recognition and natural language understanding.

  2. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method to detect trace gases in hyperspectral and ultra-spectral data. This new method is based on the wavelet packet transform. It attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular significantly increase spectral resolution while performing daily global collections of the Earth. Application of the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high performance computing. This research seeks two science goals: (1) developing a new spectral imagery detection algorithm; and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
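
    As a hedged sketch of the kind of spectral-axis decomposition described here, a wavelet packet transform of a single pixel's spectrum can be computed as follows; the wavelet choice, decomposition depth, and toy spectrum are assumptions, and the PyWavelets package is assumed to be available.

      import numpy as np
      import pywt  # PyWavelets, assumed installed

      # Toy "spectrum": one pixel's radiance across 256 spectral channels.
      rng = np.random.default_rng(2)
      spectrum = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)

      # Full wavelet packet decomposition along the spectral axis.
      wp = pywt.WaveletPacket(data=spectrum, wavelet='db4', mode='symmetric', maxlevel=3)

      # Collect the level-3 subband coefficients (frequency-ordered); a detection
      # algorithm would operate on these subspaces rather than on raw channels,
      # and different pixels could be processed in parallel.
      subbands = {node.path: node.data for node in wp.get_level(3, order='freq')}
      print(sorted(subbands))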

  3. GLUT3 gene expression is critical for embryonic growth, brain development and survival.

    PubMed

    Carayannopoulos, Mary O; Xiong, Fuxia; Jensen, Penny; Rios-Galdamez, Yesenia; Huang, Haigen; Lin, Shuo; Devaskar, Sherin U

    2014-04-01

    Glucose is the primary energy source for eukaryotic cells and the predominant substrate for the brain. GLUT3 is essential for trans-placental glucose transport and highly expressed in the mammalian brain. To further elucidate the role of GLUT3 in embryonic development, we utilized the vertebrate whole animal model system of Danio rerio as a tractable system for defining the cellular and molecular mechanisms altered by impaired glucose transport and metabolism related to perturbed expression of GLUT3. The comparable orthologue of human GLUT3 was identified and the expression of this gene abrogated during early embryonic development. In a dose-dependent manner embryonic brain development was disrupted resulting in a phenotype of aberrant brain organogenesis, associated with embryonic growth restriction and increased cellular apoptosis. Rescue of the morphant phenotype was achieved by providing exogenous GLUT3 mRNA. We conclude that GLUT3 is critically important for brain organogenesis and embryonic growth. Disruption of GLUT3 is responsible for the phenotypic spectrum of embryonic growth restriction to demise and neural apoptosis with microcephaly. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Rule groupings: An approach towards verification of expert systems

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala

    1991-01-01

    Knowledge-based expert systems are playing an increasingly important role in NASA space and aircraft systems. However, many of NASA's software applications are life- or mission-critical and knowledge-based systems do not lend themselves to the traditional verification and validation techniques for highly reliable software. Rule-based systems lack the control abstractions found in procedural languages. Hence, it is difficult to verify or maintain such systems. Our goal is to automatically structure a rule-based system into a set of rule-groups having a well-defined interface to other rule-groups. Once a rule base is decomposed into such 'firewalled' units, studying the interactions between rules would become more tractable. Verification-aid tools can then be developed to test the behavior of each such rule-group. Furthermore, the interactions between rule-groups can be studied in a manner similar to integration testing. Such efforts will go a long way towards increasing our confidence in the expert-system software. Our research efforts address the feasibility of automating the identification of rule groups, in order to decompose the rule base into a number of meaningful units.

  5. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
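
    A minimal sketch of the linear-combination idea, not the paper's exact kernel, is shown below: each neuron gets its own positive-definite spike-train kernel, and the multineuron kernel is a weighted sum across neurons. The per-neuron exponential kernel, the weights, and the toy spike times are illustrative assumptions.

      import numpy as np

      def single_neuron_kernel(x, y, tau=0.01):
          """A simple positive-definite spike-train kernel:
          k(x, y) = sum_{s in x} sum_{t in y} exp(-|s - t| / tau)."""
          if len(x) == 0 or len(y) == 0:
              return 0.0
          diffs = np.abs(np.subtract.outer(np.asarray(x), np.asarray(y)))
          return float(np.exp(-diffs / tau).sum())

      def linear_combination_kernel(X, Y, weights, tau=0.01):
          """Combine per-neuron kernels across C simultaneously recorded neurons.

          X, Y    : lists of C spike-time arrays (one per neuron)
          weights : length-C nonnegative mixing weights (the tunable parameters)
          """
          return sum(w * single_neuron_kernel(x, y, tau)
                     for w, x, y in zip(weights, X, Y))

      # Two toy multineuron spike trains from C = 3 neurons (times in seconds).
      X = [np.array([0.012, 0.045, 0.110]), np.array([0.030]), np.array([0.020, 0.090])]
      Y = [np.array([0.015, 0.050]), np.array([0.028, 0.095]), np.array([0.088])]
      print(linear_combination_kernel(X, Y, weights=[0.5, 0.3, 0.2]))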

  6. GLUT3 Gene Expression is Critical for Embryonic Growth, Brain Development and Survival

    PubMed Central

    Carayannopoulos, Mary O.; Xiong, Fuxia; Jensen, Penny; Rios-Galdamez, Yesenia; Huang, Haigen; Lin, Shuo; Devaskar, Sherin U.

    2015-01-01

    Glucose is the primary energy source for eukaryotic cells and the predominant substrate for the brain. GLUT3 is essential for trans-placental glucose transport and highly expressed in the mammalian brain. To further elucidate the role of GLUT3 in embryonic development, we utilized the vertebrate whole animal model system of Danio rerio as a tractable system for defining the cellular and molecular mechanisms altered by impaired glucose transport and metabolism related to perturbed expression of GLUT3. The comparable orthologue of human GLUT3 was identified and the expression of this gene abrogated during early embryonic development. In a dose-dependent manner embryonic brain development was disrupted resulting in a phenotype of aberrant brain organogenesis, associated with embryonic growth restriction and increased cellular apoptosis. Rescue of the morphant phenotype was achieved by providing exogenous GLUT3 mRNA. We conclude that GLUT3 is critically important for brain organogenesis and embryonic growth. Disruption of GLUT3 is responsible for the phenotypic spectrum of embryonic growth restriction to demise and neural apoptosis with microcephaly. PMID:24529979

  7. Computational Toxicology

    EPA Science Inventory

    ‘Computational toxicology’ is a broad term that encompasses all manner of computer-facilitated informatics, data-mining, and modeling endeavors in relation to toxicology, including exposure modeling, physiologically based pharmacokinetic (PBPK) modeling, dose-response modeling, ...

  8. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
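
    The multiple-chains strategy mentioned above can be sketched with a toy example: several independent random-walk Metropolis chains targeting a standard normal are run in parallel worker processes and then pooled. The target density, chain length, and burn-in are illustrative assumptions, not the models used in the study.

      import numpy as np
      from multiprocessing import Pool

      def run_chain(seed, n_iter=5000):
          """One independent random-walk Metropolis chain targeting N(0, 1)."""
          rng = np.random.default_rng(seed)
          log_p = lambda v: -0.5 * v * v          # toy log-density (unnormalized)
          x, samples = 0.0, []
          for _ in range(n_iter):
              proposal = x + rng.normal(0.0, 1.0)
              if np.log(rng.random()) < log_p(proposal) - log_p(x):
                  x = proposal
              samples.append(x)
          return np.array(samples)

      if __name__ == "__main__":
          with Pool(processes=4) as pool:          # one worker per chain
              chains = pool.map(run_chain, [1, 2, 3, 4])
          # Pooled posterior summary after discarding burn-in from each chain.
          pooled = np.concatenate([c[1000:] for c in chains])
          print(pooled.mean(), pooled.std())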

  9. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363

  10. Annual Rainfall Forecasting by Using Mamdani Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Fallah-Ghalhary, G.-A.; Habibi Nokhandan, M.; Mousavi Baygi, M.

    2009-04-01

    Long-term rainfall prediction is very important to countries thriving on agro-based economies. In general, climate and rainfall are highly non-linear phenomena in nature, giving rise to what is known as the "butterfly effect". The parameters required to predict rainfall are enormous, even for a short period. Soft computing is an innovative approach to constructing computationally intelligent systems that are supposed to possess humanlike expertise within a specific domain, adapt themselves and learn to do better in changing environments, and explain how they make decisions. Unlike conventional artificial intelligence techniques, the guiding principle of soft computing is to exploit tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and better rapport with reality. In this paper, 33 years of rainfall data from Khorasan, the northeastern part of Iran situated at latitude-longitude pairs (31°-38°N, 74°-80°E), were analyzed, and Fuzzy Inference System (FIS) based prediction models were trained on these data. For performance evaluation, the model-predicted outputs were compared with the actual rainfall data. Simulation results reveal that soft computing techniques are promising and efficient. The test results for the FIS model showed an RMSE of 52 millimeters.
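
    A minimal sketch of a Mamdani-style inference step is given below, assuming toy triangular membership functions and a two-rule base rather than the rule base trained in the study; it uses min implication, max aggregation, and centroid defuzzification.

      import numpy as np

      def trimf(x, a, b, c):
          """Triangular membership function on a numeric universe x."""
          return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                       (c - x) / (c - b + 1e-12)), 0.0)

      # Universe of discourse for the output (annual rainfall, mm) -- illustrative.
      rain = np.linspace(0.0, 600.0, 601)
      rain_light = trimf(rain, 0.0, 150.0, 300.0)
      rain_heavy = trimf(rain, 250.0, 450.0, 600.0)

      def mamdani_forecast(humidity):
          """Two-rule Mamdani inference: min implication, max aggregation,
          centroid defuzzification. Memberships and rules are toy assumptions."""
          # Fuzzify the crisp input (relative humidity, %).
          mu_low = trimf(np.array([humidity]), 0.0, 25.0, 55.0)[0]
          mu_high = trimf(np.array([humidity]), 45.0, 75.0, 100.0)[0]

          # Rule 1: IF humidity is low  THEN rainfall is light.
          # Rule 2: IF humidity is high THEN rainfall is heavy.
          clipped_light = np.minimum(mu_low, rain_light)
          clipped_heavy = np.minimum(mu_high, rain_heavy)

          aggregated = np.maximum(clipped_light, clipped_heavy)
          return float((rain * aggregated).sum() / (aggregated.sum() + 1e-12))

      print(mamdani_forecast(humidity=62.0))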

  11. Guidance and Control System for a Satellite Constellation

    NASA Technical Reports Server (NTRS)

    Bryson, Jonathan Lamar; Cox, James; Mays, Paul Richard; Neidhoefer, James Christian; Ephrain, Richard

    2010-01-01

    A distributed guidance and control algorithm was developed for a constellation of satellites. The system repositions satellites as required, regulates satellites to desired orbits, and prevents collisions.
    1. Optimal methods are used to compute nominal transfers from orbit to orbit.
    2. Satellites are regulated to maintain the desired orbits once the transfers are complete.
    3. A simulator is used to predict potential collisions or near-misses.
    4. Each satellite computes perturbations to its controls so as to increase any unacceptable distances of nearest approach to other objects.
       a. The avoidance problem is recast in a distributed and locally-linear form to arrive at a tractable solution.
       b. Plant matrix values are approximated via simulation at each time step.
       c. The Linear Quadratic Gaussian (LQG) method is used to compute perturbations to the controls that will result in increased miss distances (a simplified gain computation is sketched after this list).
    5. Once all danger has passed, the satellites return to their original orbits, all the while avoiding each other as above.
    6. The delta-Vs are reasonable; the controller begins maneuvers as soon as practical to minimize delta-V.
    7. Despite the inclusion of trajectory simulations within the control loop, the algorithm is sufficiently fast for available satellite computer hardware.
    8. The required measurement accuracies are within the capabilities of modern inertial measurement devices and modern positioning devices.
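
    As a hedged illustration of the locally linear control computation in item 4, the snippet below computes a continuous-time LQR gain for an assumed double-integrator plant with SciPy. The plant matrices and weights are toy assumptions, and the full LQG scheme in this record additionally involves state estimation, which is omitted here.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Assumed (illustrative) locally linearized relative dynamics x' = A x + B u.
      A = np.array([[0.0, 1.0],
                    [0.0, 0.0]])          # double-integrator toy model
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])            # penalize miss distance more than rate
      R = np.array([[1.0]])               # penalize delta-V (control effort)

      # Solve the continuous-time algebraic Riccati equation and form the gain.
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)     # feedback law u = -K x

      x = np.array([0.5, -0.1])           # example state perturbation
      u = -K @ x
      print(K, u)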

  12. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolutions, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost, due to AMR, makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.

  13. Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-04-01

    Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. Unfortunately, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.

  14. Photofragmentation of Gas-Phase Lanthanide Cyclopentadienyl Complexes: Experimental and Time-Dependent Excited-State Molecular Dynamics

    PubMed Central

    2015-01-01

    Unimolecular gas-phase laser-photodissociation reaction mechanisms of open-shell lanthanide cyclopentadienyl complexes, Ln(Cp)3 and Ln(TMCp)3, are analyzed from experimental and computational perspectives. The most probable pathways for the photoreactions are inferred from photoionization time-of-flight mass spectrometry (PI-TOF-MS), which provides the sequence of reaction intermediates and the distribution of final products. Time-dependent excited-state molecular dynamics (TDESMD) calculations provide insight into the electronic mechanisms for the individual steps of the laser-driven photoreactions for Ln(Cp)3. Computational analysis correctly predicts several key reaction products as well as the observed branching between two reaction pathways: (1) ligand ejection and (2) ligand cracking. Simulations support our previous assertion that both reaction pathways are initiated via a ligand-to-metal charge-transfer (LMCT) process. For the more complex chemistry of the tetramethylcyclopentadienyl complexes Ln(TMCp)3, TDESMD is less tractable, but computational geometry optimization reveals the structures of intermediates deduced from PI-TOF-MS, including several classic “tuck-in” structures and products of Cp ring expansion. The results have important implications for metal–organic catalysis and laser-assisted metal–organic chemical vapor deposition (LCVD) of insulators with high dielectric constants. PMID:24910492

  15. Hybrid RANS-LES using high order numerical methods

    NASA Astrophysics Data System (ADS)

    Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael

    2017-11-01

    Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separation flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  16. Task-based data-acquisition optimization for sparse image reconstruction systems

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  17. Covalent Ligand Discovery against Druggable Hotspots Targeted by Anti-cancer Natural Products.

    PubMed

    Grossman, Elizabeth A; Ward, Carl C; Spradlin, Jessica N; Bateman, Leslie A; Huffman, Tucker R; Miyamoto, David K; Kleinman, Jordan I; Nomura, Daniel K

    2017-11-16

    Many natural products that show therapeutic activities are often difficult to synthesize or isolate and have unknown targets, hindering their development as drugs. Identifying druggable hotspots targeted by covalently acting anti-cancer natural products can enable pharmacological interrogation of these sites with more synthetically tractable compounds. Here, we used chemoproteomic platforms to discover that the anti-cancer natural product withaferin A targets C377 on the regulatory subunit PPP2R1A of the tumor-suppressor protein phosphatase 2A (PP2A) complex leading to activation of PP2A activity, inactivation of AKT, and impaired breast cancer cell proliferation. We developed a more synthetically tractable cysteine-reactive covalent ligand, JNS 1-40, that selectively targets C377 of PPP2R1A to impair breast cancer signaling, proliferation, and in vivo tumor growth. Our study highlights the utility of using chemoproteomics to map druggable hotspots targeted by complex natural products and subsequently interrogating these sites with more synthetically tractable covalent ligands for cancer therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming

    NASA Astrophysics Data System (ADS)

    Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders

    The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
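
    For reference, fixed-parameter tractability in the sense used here means the existence of an algorithm whose running time has the form

      \[
        f(|V|) \cdot n^{O(1)},
      \]

    where n is the size of the instance and f is a computable function depending only on the parameter |V|; the reduction to a bounded number of acyclic instances, each solved in polynomial time by dynamic programming, fits this pattern.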

  19. The Nature of Belief-Directed Exploratory Choice in Human Decision-Making

    PubMed Central

    Knox, W. Bradley; Otto, A. Ross; Stone, Peter; Love, Bradley C.

    2011-01-01

    In non-stationary environments, there is a conflict between exploiting currently favored options and gaining information by exploring lesser-known options that in the past have proven less rewarding. Optimal decision-making in such tasks requires considering future states of the environment (i.e., planning) and properly updating beliefs about the state of the environment after observing outcomes associated with choices. Optimal belief-updating is reflective in that beliefs can change without directly observing environmental change. For example, after 10 s elapse, one might correctly believe that a traffic light last observed to be red is now more likely to be green. To understand human decision-making when rewards associated with choice options change over time, we develop a variant of the classic “bandit” task that is both rich enough to encompass relevant phenomena and sufficiently tractable to allow for ideal actor analysis of sequential choice behavior. We evaluate whether people update beliefs about the state of environment in a reflexive (i.e., only in response to observed changes in reward structure) or reflective manner. In contrast to purely “random” accounts of exploratory behavior, model-based analyses of the subjects’ choices and latencies indicate that people are reflective belief updaters. However, unlike the Ideal Actor model, our analyses indicate that people’s choice behavior does not reflect consideration of future environmental states. Thus, although people update beliefs in a reflective manner consistent with the Ideal Actor, they do not engage in optimal long-term planning, but instead myopically choose the option on every trial that is believed to have the highest immediate payoff. PMID:22319503
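
    A minimal sketch of the reflective-versus-reflexive distinction is given below, under the simplifying assumption of a two-state environment with a known hazard (switch) rate rather than the task actually used in the paper; all parameter values are illustrative.

      import numpy as np

      def reflective_update(belief, hazard, outcome=None, p_reward=(0.8, 0.2)):
          """Belief that latent state 0 is active in a two-state, non-stationary
          bandit (toy assumption, not the paper's exact task).

          belief  : P(state 0 is active) before this step
          hazard  : per-step probability that the environment switches state
          outcome : None (nothing observed this step) or 0/1 reward from arm 0
          """
          # Reflective step: beliefs drift toward uncertainty as time elapses,
          # even when nothing is observed (cf. the traffic-light example above).
          belief = belief * (1.0 - hazard) + (1.0 - belief) * hazard

          # Reflexive step: Bayes' rule when an outcome is actually observed.
          if outcome is not None:
              like0 = p_reward[0] if outcome == 1 else 1.0 - p_reward[0]
              like1 = p_reward[1] if outcome == 1 else 1.0 - p_reward[1]
              belief = like0 * belief / (like0 * belief + like1 * (1.0 - belief))
          return belief

      b = 0.9
      for t in range(5):
          b = reflective_update(b, hazard=0.05)      # time passes, no observation
      print("after 5 unobserved steps:", b)
      print("after observing a reward :", reflective_update(b, 0.05, outcome=1))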

  20. Tractable Chemical Models for CVD of Silicon and Carbon

    NASA Technical Reports Server (NTRS)

    Blanquet, E.; Gokoglu, S. A.

    1993-01-01

    Tractable chemical models are validated for the CVD of silicon and carbon. Dilute silane (SiH4) and methane (CH4) in hydrogen are chosen as gaseous precursors. The chemical mechanism for each system, Si and C, is deliberately reduced to three reactions in the models: one in the gas phase and two at the surface. The axial-flow CVD reactor utilized in this study has well-characterized flow and thermal fields and provides variable deposition rates in the axial direction. Comparisons between the experimental and calculated deposition rates are made at different pressures and temperatures.

  1. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    DOE PAGES

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This mismatch makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
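
    Schematically, the minimax criterion selects the estimator

      \[
        \hat{\rho}_{\mathrm{mm}} \;=\; \arg\min_{\hat{\rho}}\ \max_{\rho}\ \mathbb{E}_{\text{data}\,\mid\,\rho}\!\left[ D\!\left(\rho \,\middle\|\, \hat{\rho}(\text{data})\right) \right],
      \]

    where D is the relative entropy; the O(1/√N) versus O(1/N) scalings quoted above refer to this worst-case expected risk.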

  2. Density functional theory for d- and f-electron materials and compounds

    DOE PAGES

    Mattson, Ann E.; Wills, John M.

    2016-02-12

    Here, the fundamental requirements for a computationally tractable Density Functional Theory-based method for relativistic f- and (nonrelativistic) d-electron materials and compounds are presented. The need for basing the Kohn–Sham equations on the Dirac equation is discussed. The full Dirac scheme needs exchange-correlation functionals in terms of four-currents, but ordinary functionals, using charge density and spin-magnetization, can be used in an approximate Dirac treatment. The construction of a functional that includes the additional confinement physics needed for these materials is illustrated using the subsystem-functional scheme. If future studies show that a full Dirac, four-current based, exchange-correlation functional is needed, the subsystem-functional scheme is one of the few schemes that can still be used for constructing functional approximations.

  3. Holography as deep learning

    NASA Astrophysics Data System (ADS)

    Gan, Wen-Cong; Shu, Fu-Wen

    The quantum many-body problem, with its exponentially large number of degrees of freedom, can be reduced to a tractable computational form by neural-network methods [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of deep neural networks (DNNs) based on deep learning is clarified by mapping them to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network which reflects the RG process has intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of a DNN. We find that the entanglement structure of the DNN is of the Ryu-Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.

  4. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
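
    As a hedged sketch of the Chebyshev-based step, in generic notation rather than the paper's: a chance constraint on an affine function of a forecast error e with (online-estimated) mean μ and variance σ² can be conservatively enforced through the one-sided Chebyshev (Cantelli) bound,

      \[
        a^{\top}x + \mu + \sqrt{\tfrac{1-\epsilon}{\epsilon}}\;\sigma \;\le\; b
        \quad\Longrightarrow\quad
        \Pr\!\left[\, a^{\top}x + e \le b \,\right] \;\ge\; 1 - \epsilon,
      \]

    which is convex in the decision variable x and uses only the first two moments of the error, matching the online mean/covariance updates described above.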

  5. Structural modeling of Ge6.25As32.5Se61.25 using a combination of reverse Monte Carlo and Ab initio molecular dynamics.

    PubMed

    Opletal, George; Drumm, Daniel W; Wang, Rong P; Russo, Salvy P

    2014-07-03

    Ternary glass structures are notoriously difficult to model accurately, and yet prevalent in several modern endeavors. Here, a novel combination of Reverse Monte Carlo (RMC) modeling and ab initio molecular dynamics (MD) is presented, rendering these complicated structures computationally tractable. A case study (Ge6.25As32.5Se61.25 glass) illustrates the effects of ab initio MD quench rates and equilibration temperatures, and the combined approach's efficacy over standard RMC or random insertion methods. Submelting point MD quenches achieve the most stable, realistic models, agreeing with both experimental and fully ab initio results. The simple approach of RMC followed by ab initio geometry optimization provides similar quality to the RMC-MD combination, for far fewer resources.

  6. Finding Bounded Rational Equilibria. Part 1; Iterative Focusing

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information-theoretic mathematical structure, known as Probability Collectives (PC), underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  7. A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.

    2012-01-01

    A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.

  8. An Investigation of Primary School Science Teachers' Use of Computer Applications

    ERIC Educational Resources Information Center

    Ocak, Mehmet Akif; Akdemir, Omur

    2008-01-01

    This study investigated the level and frequency of science teachers' use of computer applications as an instructional tool in the classroom. The manner and frequency of science teachers' use of computer, their perceptions about integration of computer applications, and other factors contributed to changes in their computer literacy are…

  9. A Technique for Mapping Characteristic Lengths to Preserve Energy Dissipated via Strain Softening in a Multiscale Analysis

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.

    2014-01-01

    It is often advantageous to account for the microstructure of the material directly using multiscale modeling. For computational tractability, an idealized repeating unit cell (RUC) is used to capture all of the pertinent features of the microstructure. Typically, the RUC is dimensionless and depends only on the relative volume fractions of the different phases in the material. This works well for non-linear and inelastic behavior exhibiting a positive-definite constitutive response. However, once the material exhibits strain softening, or localization, mesh-objective failure theories, such as smeared fracture theories, nodal and element enrichment theories (XFEM), cohesive elements, or the virtual crack closure technique (VCCT), can be utilized at the microscale, but the dimensions of the RUC must then be defined. One major challenge in multiscale progressive damage modeling is relating the characteristic lengths across the scales in order to preserve the energy that is dissipated via localization at the microscale. If there is no effort to relate the size of the macroscale element to the microscale RUC, then the energy that is dissipated will remain mesh dependent at the macroscale, even if it is regularized at the microscale. Here, a technique for mapping characteristic lengths across the scales is proposed. The RUC will be modeled using the generalized method of cells (GMC) micromechanics theory, and local failure in the matrix constituent subcells will be modeled using the crack band theory. The subcell characteristic lengths used in the crack band calculations will be mapped to the macroscale finite element in order to regularize the local energy in a manner consistent with the global length scale. Examples will be provided with and without the regularization, and they will be compared to a baseline case where the size and shape of the element and RUC are coincident (ensuring energy is preserved across the scales).
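
    The energy-consistency requirement can be stated compactly: in crack band-type models, the fracture energy per unit area released by a softening region of characteristic length l_c is

      \[
        G_f \;=\; l_c \int_0^{\varepsilon_f} \sigma \, \mathrm{d}\varepsilon,
      \]

    so keeping G_f fixed while the characteristic length changes between scales requires rescaling the post-peak softening response; mapping the subcell characteristic lengths to the macroscale element, as proposed in this record, is what keeps this product consistent across the scales.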

  10. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number, while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. The new techniques are shown to be an efficient means of performing LES of acoustic combustion instabilities and are shown to accurately predict the occurrence and frequency of the dominant mode of the instability observed in the experiment.

  11. 76 FR 59803 - Children's Online Privacy Protection Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ...,'' covering the ``myriad of computer and telecommunications facilities, including equipment and operating..., Dir. and Professor of Computer Sci. and Pub. Affairs, Princeton Univ. (currently Chief Technologist at... data in the manner of a personal computer. See Electronic Privacy Information Center (``EPIC...

  12. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and thereby obtain a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
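
    Schematically, and not necessarily in the authors' exact notation, the relaxation replaces the mode-n ranks with trace (nuclear) norms of the CP factor matrices A^(n), yielding a problem of the form

      \[
        \min_{A^{(1)},\dots,A^{(N)}} \;\sum_{n=1}^{N} \alpha_n \big\|A^{(n)}\big\|_{*}
        \;+\; \frac{\lambda}{2}\,\Big\| \mathcal{P}_{\Omega}\big( \mathcal{X} - [\![ A^{(1)},\dots,A^{(N)} ]\!] \big) \Big\|_F^2,
      \]

    where [[·]] denotes the CP reconstruction and P_Ω keeps only the observed entries; each nuclear norm now acts on a small factor matrix rather than on a large tensor unfolding, which is what makes the alternating-direction subproblems cheap.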

  13. Universal distribution of component frequencies in biological and technological systems

    PubMed Central

    Pang, Tin Yau; Maslov, Sergei

    2013-01-01

    Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195

  14. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments, allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
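
    The covariance inversion described above leans on the Sherman-Morrison-Woodbury identity, which makes a "diagonal noise plus low-rank signal" matrix cheap to invert. Below is a generic NumPy sketch of that identity on random matrices; it is not the m-mode pipeline itself, and the dimensions and covariances are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      n, k = 500, 10                      # data dimension, number of bright-source modes
      noise_diag = 0.5 + rng.random(n)    # diagonal noise covariance N
      U = rng.standard_normal((n, k))     # low-rank signal/foreground modes
      B = np.diag(1.0 + rng.random(k))    # covariance of those modes

      # Woodbury: (N + U B U^T)^-1 = N^-1 - N^-1 U (B^-1 + U^T N^-1 U)^-1 U^T N^-1
      Ninv_U = U / noise_diag[:, None]
      core = np.linalg.inv(np.linalg.inv(B) + U.T @ Ninv_U)

      def apply_Cinv(x):
          """Apply the inverse covariance using only diagonal and k-by-k solves."""
          return x / noise_diag - Ninv_U @ (core @ (Ninv_U.T @ x))

      # Check against the brute-force inverse on this small example.
      C = np.diag(noise_diag) + U @ B @ U.T
      x = rng.standard_normal(n)
      print(np.allclose(apply_Cinv(x), np.linalg.solve(C, x)))   # True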

  15. Binding-Site Compatible Fragment Growing Applied to the Design of β2-Adrenergic Receptor Ligands.

    PubMed

    Chevillard, Florent; Rimmer, Helena; Betti, Cecilia; Pardon, Els; Ballet, Steven; van Hilten, Niek; Steyaert, Jan; Diederich, Wibke E; Kolb, Peter

    2018-02-08

    Fragment-based drug discovery is intimately linked to fragment extension approaches that can be accelerated using software for de novo design. Although computers allow for the facile generation of millions of suggestions, synthetic feasibility is often neglected. In this study we computationally extended, chemically synthesized, and experimentally assayed new ligands for the β2-adrenergic receptor (β2AR) by growing fragment-sized ligands. In order to address the synthetic tractability issue, our in silico workflow aims at derivatized products based on robust organic reactions. The study started from the predicted binding modes of five fragments. We suggested a total of eight diverse extensions that were easily synthesized, and further assays showed that four products had an improved affinity (up to 40-fold) compared to their respective initial fragment. The described workflow, which we call "growing via merging" and for which the key tools are available online, can improve early fragment-based drug discovery projects, making it a useful creative tool for medicinal chemists during structure-activity relationship (SAR) studies.

  16. PPDB - A tool for investigation of plants physiology based on gene ontology.

    PubMed

    Sharma, Ajay Shiv; Gupta, Hari Om; Prasad, Rajendra

    2014-09-02

    Representing the way forward, from functional genomics and its ontology to functional understanding and physiological models, in a computationally tractable fashion is one of the ongoing challenges faced by computational biology. To address this challenge, we describe the application of contemporary database management to the development of PPDB, a searching and browsing tool for the Plants Physiology Database that is based upon the mining of the large amount of gene ontology data currently available. The working principles and search options associated with the PPDB are publicly available and freely accessible on-line ( http://www.iitr.ernet.in/ajayshiv/ ) through a user-friendly environment generated by means of Drupal-6.24. Because genes are expressed in temporally and spatially characteristic patterns, and because their functionally distinct products often reside in specific cellular compartments and may be part of one or more multi-component complexes, this work is intended to be relevant for investigating the functional relationships of gene products at a system level and thus moves us closer to a full picture of plant physiology.

  17. PPDB: A Tool for Investigation of Plants Physiology Based on Gene Ontology.

    PubMed

    Sharma, Ajay Shiv; Gupta, Hari Om; Prasad, Rajendra

    2015-09-01

    Representing the way forward, from functional genomics and its ontology to functional understanding and physiological models, in a computationally tractable fashion is one of the ongoing challenges faced by computational biology. To address this challenge, we describe the application of contemporary database management to the development of PPDB, a searching and browsing tool for the Plants Physiology Database that is based upon the mining of the large amount of gene ontology data currently available. The working principles and search options associated with the PPDB are publicly available and freely accessible online ( http://www.iitr.ac.in/ajayshiv/ ) through a user-friendly environment generated by means of Drupal-6.24. Because genes are expressed in temporally and spatially characteristic patterns, and because their functionally distinct products often reside in specific cellular compartments and may be part of one or more multicomponent complexes, this work is intended to be relevant for investigating the functional relationships of gene products at a system level and thus moves us closer to a full picture of plant physiology.

  18. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  19. Laminar soot processes

    NASA Technical Reports Server (NTRS)

    Sunderland, P. B.; Lin, K.-C.; Faeth, G. M.

    1995-01-01

    Soot processes within hydrocarbon fueled flames are important because they affect the durability and performance of propulsion systems, the hazards of unwanted fires, the pollutant and particulate emissions from combustion processes, and the potential for developing computational combustion. Motivated by these observations, the present investigation is studying soot processes in laminar diffusion and premixed flames in order to better understand the soot and thermal radiation emissions of luminous flames. Laminar flames are being studied due to their experimental and computational tractability, noting the relevance of such results to practical turbulent flames through the laminar flamelet concept. Weakly-buoyant and nonbuoyant laminar diffusion flames are being considered because buoyancy affects soot processes in flames while most practical flames involve negligible effects of buoyancy. Thus, low-pressure weakly-buoyant flames are being observed during ground-based experiments while near atmospheric pressure nonbuoyant flames will be observed during space flight experiments at microgravity. Finally, premixed laminar flames also are being considered in order to observe some aspects of soot formation for simpler flame conditions than diffusion flames. The main emphasis of current work has been on measurements of soot nucleation and growth in laminar diffusion and premixed flames.

  20. Systems neuroscience in Drosophila: Conceptual and technical advantages.

    PubMed

    Kazama, H

    2015-06-18

    The fruit fly Drosophila melanogaster is ideally suited for investigating the neural circuit basis of behavior. Due to the simplicity and genetic tractability of the fly brain, neurons and circuits are identifiable across animals. Additionally, a large set of transgenic lines has been developed with the aim of specifically labeling small subsets of neurons and manipulating them in sophisticated ways. Electrophysiology and imaging can be applied in behaving individuals to examine the computations performed by each neuron, and even the entire population of relevant neurons in a particular region, because of the small size of the brain. Moreover, a rich repertoire of behaviors that can be studied is expanding to include those requiring cognitive abilities. Thus, the fly brain is an attractive system in which to explore both computations and mechanisms underlying behavior at levels spanning from genes through neurons to circuits. This review summarizes the advantages Drosophila offers in achieving this objective. A recent neurophysiology study on olfactory behavior is also introduced to demonstrate the effectiveness of these advantages. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. The tangled bank of amino acids

    PubMed Central

    Pollock, David D.

    2016-01-01

    Abstract The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. PMID:27028523

  2. On-the-fly scheduling as a manifestation of partial-order planning and dynamic task values.

    PubMed

    Hannah, Samuel D; Neal, Andrew

    2014-09-01

    The aim of this study was to develop a computational account of the spontaneous task ordering that occurs within jobs as work unfolds ("on-the-fly task scheduling"). Air traffic control is an example of work in which operators have to schedule their tasks as a partially predictable work flow emerges. To date, little attention has been paid to such on-the-fly scheduling situations. We present a series of discrete-event models fit to conflict resolution decision data collected from experienced controllers operating in a high-fidelity simulation. Our simulations reveal air traffic controllers' scheduling decisions as examples of the partial-order planning approach of Hayes-Roth and Hayes-Roth. The most successful model uses opportunistic first-come-first-served scheduling to select tasks from a queue. Tasks with short deadlines are executed immediately. Tasks with long deadlines are evaluated to assess whether they need to be executed immediately or deferred. On-the-fly task scheduling is computationally tractable despite its surface complexity and understandable as an example of both the partial-order planning strategy and the dynamic-value approach to prioritization.
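
    The selection policy described above can be caricatured in a few lines: take tasks first-come-first-served, execute immediately those whose deadline is close, and defer the rest unless nothing more pressing is waiting. This is a schematic sketch with an invented urgency threshold and service time, not the authors' fitted discrete-event model.

      from collections import deque, namedtuple

      Task = namedtuple("Task", "name deadline")   # deadline: seconds from "now"

      URGENT = 60.0   # hypothetical cutoff: act immediately if due within a minute

      def schedule(tasks, now=0.0, service_time=10.0):
          """Opportunistic first-come-first-served scheduling with deadline-based
          deferral (illustrative only)."""
          queue, executed = deque(tasks), []
          while queue:
              task = queue.popleft()                        # first come, first served
              if task.deadline - now <= URGENT:
                  executed.append(task.name)                # short deadline: act now
                  now += service_time
              elif not queue or task.deadline <= queue[0].deadline:
                  executed.append(task.name)                # nothing more pressing: act now
                  now += service_time
              else:
                  queue.append(task)                        # long deadline: defer
          return executed

      print(schedule([Task("conflict_A", 30), Task("handoff_B", 600), Task("conflict_C", 90)]))
      # -> ['conflict_A', 'conflict_C', 'handoff_B']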

  3. Dynamic Redox Regulation of IL-4 Signaling.

    PubMed

    Dwivedi, Gaurav; Gran, Margaret A; Bagchi, Pritha; Kemp, Melissa L

    2015-11-01

    Quantifying the magnitude and dynamics of protein oxidation during cell signaling is technically challenging. Computational modeling provides tractable, quantitative methods to test hypotheses of redox mechanisms that may be simultaneously operative during signal transduction. The interleukin-4 (IL-4) pathway, which has previously been reported to induce reactive oxygen species and oxidation of PTP1B, may be controlled by several other putative mechanisms of redox regulation; widespread proteomic thiol oxidation observed via 2D redox differential gel electrophoresis upon IL-4 treatment suggests more than one redox-sensitive protein implicated in this pathway. Through computational modeling and a model selection strategy that relied on characteristic STAT6 phosphorylation dynamics of IL-4 signaling, we identified reversible protein tyrosine phosphatase (PTP) oxidation as the primary redox regulatory mechanism in the pathway. A systems-level model of IL-4 signaling was developed that integrates synchronous pan-PTP oxidation with ROS-independent mechanisms. The model quantitatively predicts the dynamics of IL-4 signaling over a broad range of new redox conditions, offers novel hypotheses about regulation of JAK/STAT signaling, and provides a framework for interrogating putative mechanisms involving receptor-initiated oxidation.

  4. Dynamic Redox Regulation of IL-4 Signaling

    PubMed Central

    Dwivedi, Gaurav; Gran, Margaret A.; Bagchi, Pritha; Kemp, Melissa L.

    2015-01-01

    Quantifying the magnitude and dynamics of protein oxidation during cell signaling is technically challenging. Computational modeling provides tractable, quantitative methods to test hypotheses of redox mechanisms that may be simultaneously operative during signal transduction. The interleukin-4 (IL-4) pathway, which has previously been reported to induce reactive oxygen species and oxidation of PTP1B, may be controlled by several other putative mechanisms of redox regulation; widespread proteomic thiol oxidation observed via 2D redox differential gel electrophoresis upon IL-4 treatment suggests more than one redox-sensitive protein implicated in this pathway. Through computational modeling and a model selection strategy that relied on characteristic STAT6 phosphorylation dynamics of IL-4 signaling, we identified reversible protein tyrosine phosphatase (PTP) oxidation as the primary redox regulatory mechanism in the pathway. A systems-level model of IL-4 signaling was developed that integrates synchronous pan-PTP oxidation with ROS-independent mechanisms. The model quantitatively predicts the dynamics of IL-4 signaling over a broad range of new redox conditions, offers novel hypotheses about regulation of JAK/STAT signaling, and provides a framework for interrogating putative mechanisms involving receptor-initiated oxidation. PMID:26562652

  5. To cut or not to cut? Assessing the modular structure of brain networks.

    PubMed

    Chang, Yu-Teng; Pantazis, Dimitrios; Leahy, Richard M

    2014-05-01

    A wealth of methods has been developed to identify natural divisions of brain networks into groups or modules, with one of the most prominent being modularity. Compared with the popularity of methods to detect community structure, only a few methods exist to statistically control for spurious modules, relying almost exclusively on resampling techniques. It is well known that even random networks can exhibit high modularity because of incidental concentration of edges, even though they have no underlying organizational structure. Consequently, interpretation of community structure is confounded by the lack of principled and computationally tractable approaches to statistically control for spurious modules. In this paper we show that the modularity of random networks follows a transformed version of the Tracy-Widom distribution, providing for the first time a link between module detection and random matrix theory. We compute parametric formulas for the distribution of modularity for random networks as a function of network size and edge variance, and show that we can efficiently control for false positives in brain and other real-world networks. Copyright © 2014 Elsevier Inc. All rights reserved.
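
    The point about incidental modularity is easy to reproduce: the sketch below builds an Erdős-Rényi random graph, splits it in two with Newman's leading-eigenvector heuristic, and evaluates the modularity Q of that split, which typically comes out well above zero despite the absence of any planted communities. The graph size and edge probability are arbitrary, and this is not the Tracy-Widom test proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n, p = 200, 0.05
      A = (rng.random((n, n)) < p).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                                 # symmetric adjacency, no self-loops

      k = A.sum(axis=1)                           # degrees
      two_m = k.sum()                             # 2 * number of edges
      B = A - np.outer(k, k) / two_m              # modularity matrix

      # Two-way split from the leading eigenvector of B (Newman's spectral heuristic).
      eigvals, eigvecs = np.linalg.eigh(B)
      s = np.sign(eigvecs[:, -1])
      s[s == 0] = 1.0

      Q = s @ B @ s / (2.0 * two_m)               # modularity of this split
      print(f"Modularity of a structureless random graph: Q = {Q:.3f}")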

  6. Entropy production rate as a criterion for inconsistency in decision theory

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.

    2018-05-01

    Individual and group decisions are complex, often involving choosing an apt alternative from a multitude of options. Evaluating pairwise comparisons breaks down such complex decision problems into tractable ones. Pairwise comparison matrices (PCMs) are regularly used to solve multiple-criteria decision-making problems, for example, using Saaty’s analytic hierarchy process (AHP) framework. However, there are two significant drawbacks of using PCMs. First, humans evaluate PCMs in an inconsistent manner. Second, not all entries of a large PCM can be reliably filled by human decision makers. We address these two issues by first establishing a novel connection between PCMs and time-irreversible Markov processes. Specifically, we show that every PCM induces a family of dissipative maximum path entropy random walks (MERW) over the set of alternatives. We show that only ‘consistent’ PCMs correspond to detailed balanced MERWs. We identify the non-equilibrium entropy production in the induced MERWs as a metric of inconsistency of the underlying PCMs. Notably, the entropy production satisfies all of the recently laid out criteria for reasonable consistency indices. We also propose an approach to use incompletely filled PCMs in AHP. Potential future avenues are discussed as well.
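
    Read literally, the construction above says that a positive pairwise comparison matrix induces a maximal-entropy random walk through its Perron eigenvectors, and that the stationary entropy production of that walk vanishes exactly when the matrix is consistent. The sketch below checks this numerically on two invented 3 x 3 matrices; it is an illustration of that idea, not the authors' code.

      import numpy as np

      def merw_entropy_production(A):
          """Entropy production rate of the maximal-entropy random walk induced by
          a positive pairwise comparison matrix A (illustrative sketch)."""
          lam_r, right = np.linalg.eig(A)        # right eigenvectors of A
          lam_l, left = np.linalg.eig(A.T)       # left eigenvectors of A
          lam = lam_r[np.argmax(lam_r.real)].real
          psi = np.abs(right[:, np.argmax(lam_r.real)].real)   # Perron vectors are positive
          phi = np.abs(left[:, np.argmax(lam_l.real)].real)
          P = A * psi[None, :] / (lam * psi[:, None])          # walk transition matrix
          pi = psi * phi / (psi @ phi)                         # stationary distribution
          flux = pi[:, None] * P
          return float(np.sum(flux * np.log(flux / flux.T)))

      consistent = np.array([[1.0, 2.0, 4.0], [0.5, 1.0, 2.0], [0.25, 0.5, 1.0]])
      inconsistent = np.array([[1.0, 2.0, 8.0], [0.5, 1.0, 2.0], [0.125, 0.5, 1.0]])
      print(merw_entropy_production(consistent))     # ~0: detailed balance holds
      print(merw_entropy_production(inconsistent))   # > 0: inconsistency is penalized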

  7. Competition between drag and Coulomb interactions in turbulent particle-laden flows using a coupled-fluid-Ewald-summation based approach

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Capecelatro, Jesse

    2018-03-01

    We present a numerical study on inertial electrically charged particles suspended in a turbulent carrier phase. Fluid-particle interactions are accounted for in an Eulerian-Lagrangian (EL) framework and coupled to a Fourier-based Ewald summation method, referred to as the particle-particle-particle-mesh (P3M) method, to accurately capture short- and long-range electrostatic forces in a tractable manner. The EL P3M method is used to assess the competition between drag and Coulomb forces for a range of Stokes numbers and charge densities. Simulations of like- and oppositely charged particles suspended in a two-dimensional Taylor-Green vortex and three-dimensional homogeneous isotropic turbulence are reported. It is found that even in dilute suspensions, the short-range electric potential plays an important role in flows that admit preferential concentration. Suspensions of oppositely charged particles are observed to agglomerate in the form of chains and rings. Comparisons between the particle-mesh method typically employed in fluid-particle calculations and P3M are reported, in addition to one-point and two-point statistics to quantify the level of clustering as a function of Reynolds number, Stokes number, and nondimensional electric settling velocity.

  8. Rapid Genetic Analysis of Epithelial-Mesenchymal Signaling During Hair Regeneration

    PubMed Central

    Zhen, Hanson H.; Oro, Anthony E.

    2013-01-01

    Hair follicle morphogenesis, a complex process requiring interaction between epithelia-derived keratinocytes and the underlying mesenchyme, is an attractive model system to study organ development and tissue-specific signaling. Although hair follicle development is genetically tractable, fast and reproducible analysis of factors essential for this process remains a challenge. Here we describe a procedure to generate targeted overexpression or shRNA-mediated knockdown of factors using lentivirus in a tissue-specific manner. Using a modified version of a hair regeneration model 5, 6, 11, we can achieve robust gain- or loss-of-function analysis in primary mouse keratinocytes or dermal cells to facilitate study of epithelial-mesenchymal signaling pathways that lead to hair follicle morphogenesis. We describe how to isolate fresh primary mouse keratinocytes and dermal cells, which contain dermal papilla cells and their precursors, deliver lentivirus containing either shRNA or cDNA to one of the cell populations, and combine the cells to generate fully formed hair follicles on the backs of nude mice. This approach allows analysis of tissue-specific factors required to generate hair follicles within three weeks and provides a fast and convenient companion to existing genetic models. PMID:23486463

  9. Novel polyglutamine model uncouples proteotoxicity from aging.

    PubMed

    Christie, Nakeirah T M; Lee, Amy L; Fay, Hannah G; Gray, Amelia A; Kikis, Elise A

    2014-01-01

    Polyglutamine expansions in certain proteins are the genetic determinants for nine distinct progressive neurodegenerative disorders and resultant age-related dementia. In these cases, neurodegeneration is due to the aggregation propensity and resultant toxic properties of the polyglutamine-containing proteins. We are interested in elucidating the underlying mechanisms of toxicity of the protein ataxin-3, in which a polyglutamine expansion is the genetic determinant for Machado-Joseph Disease (MJD), also referred to as spinocerebellar ataxia 3 (SCA3). To this end, we have developed a novel model for ataxin-3 protein aggregation, by expressing a disease-related polyglutamine-containing fragment of ataxin-3 in the genetically tractable body wall muscle cells of the model system C. elegans. Here, we demonstrate that this ataxin-3 fragment aggregates in a polyQ length-dependent manner in C. elegans muscle cells and that this aggregation is associated with cellular dysfunction. However, surprisingly, this aggregation and resultant toxicity was not influenced by aging. This is in contrast to polyglutamine peptides alone whose aggregation/toxicity is highly dependent on age. Thus, the data presented here not only describe a new polyglutamine model, but also suggest that protein context likely influences the cellular interactions of the polyglutamine-containing protein and thereby modulates its toxic properties.

  10. Topics in Computational Learning Theory and Graph Algorithms.

    ERIC Educational Resources Information Center

    Board, Raymond Acton

    This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…

  11. 5 CFR 531.245 - Computing locality rates and special rates for GM employees.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel; § 531.245 Computing locality rates and special rates for GM employees. Locality rates and special rates are computed for GM employees in the same manner as locality rates and special rates ...

  12. Finding Bounded Rational Equilibria. Part 2; Alternative Lagrangians and Uncountable Move Spaces

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC), underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  13. Automated Monitoring and Analysis of Social Behavior in Drosophila

    PubMed Central

    Dankert, Heiko; Wang, Liming; Hoopfer, Eric D.; Anderson, David J.; Perona, Pietro

    2009-01-01

    We introduce a method based on machine vision for automatically measuring aggression and courtship in Drosophila melanogaster. The genetic and neural circuit bases of these innate social behaviors are poorly understood. High-throughput behavioral screening in this genetically tractable model organism is a potentially powerful approach, but it is currently very laborious. Our system monitors interacting pairs of flies, and computes their location, orientation and wing posture. These features are used for detecting behaviors exhibited during aggression and courtship. Among these, wing threat, lunging and tussling are specific to aggression; circling, wing extension (courtship “song”) and copulation are specific to courtship; locomotion and chasing are common to both. Ethograms may be constructed automatically from these measurements, saving considerable time and effort. This technology should enable large-scale screens for genes and neural circuits controlling courtship and aggression. PMID:19270697

  14. Ranking Specific Sets of Objects.

    PubMed

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking, jointly satisfying properties such as dominance and independence, on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or might not satisfy some background theory. In this paper, we treat the computational problem of whether an order on a given subset of the power set of elements satisfying different variants of dominance and independence can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.

  15. Cryptic binding sites on proteins: definition, detection, and druggability.

    PubMed

    Vajda, Sandor; Beglov, Dmitri; Wakefield, Amanda E; Egbert, Megan; Whitty, Adrian

    2018-05-22

    Many proteins in their unbound structures lack surface pockets appropriately sized for drug binding. Hence, a variety of experimental and computational tools have been developed for the identification of cryptic sites that are not evident in the unbound protein but form upon ligand binding, and can provide tractable drug target sites. The goal of this review is to discuss the definition, detection, and druggability of such sites, and their potential value for drug discovery. Novel methods based on molecular dynamics simulations are particularly promising and yield a large number of transient pockets, but it has been shown that only a minority of such sites are generally capable of binding ligands with substantial affinity. Based on recent studies, current methodology can be improved by combining molecular dynamics with fragment docking and machine learning approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Analysis of SI models with multiple interacting populations using subpopulations.

    PubMed

    Thomas, Evelyn K; Gurski, Katharine F; Hoffman, Kathleen A

    2015-02-01

    Computing endemic equilibria and basic reproductive numbers for systems of differential equations describing epidemiological systems with multiple connections between subpopulations is often algebraically intractable. We present an alternative method which deconstructs the larger system into smaller subsystems and captures the interactions between the smaller systems as external forces using an approximate model. We bound the basic reproductive numbers of the full system in terms of the basic reproductive numbers of the smaller systems and use the alternate model to provide approximations for the endemic equilibrium. In addition to creating algebraically tractable reproductive numbers and endemic equilibria, we can demonstrate the influence of the interactions between subpopulations on the basic reproductive number of the full system. The focus of this paper is to provide analytical tools to help guide public health decisions with limited intervention resources.
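
    For readers who have not seen how basic reproductive numbers are obtained for multi-group models, the sketch below applies the standard next-generation-matrix recipe to a hypothetical two-population model with cross-infection; the transmission and removal rates are made up, and this shows the textbook calculation rather than the subpopulation decomposition developed in the paper.

      import numpy as np

      # Hypothetical two-group model: at the disease-free equilibrium, new infections
      # in group i arise at rate sum_j beta[i, j] * I_j, and infected individuals in
      # group i are removed at rate gamma[i].
      beta = np.array([[0.30, 0.05],
                       [0.10, 0.20]])       # transmission matrix F
      gamma = np.array([0.25, 0.40])        # removal rates; V = diag(gamma)

      next_gen = beta @ np.linalg.inv(np.diag(gamma))     # K = F V^-1
      R0 = max(abs(np.linalg.eigvals(next_gen)))          # spectral radius of K
      print(f"R0 = {R0:.3f}")

      # Single-group sanity check: the recipe reduces to the familiar beta / gamma.
      print(0.30 / 0.25)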

  17. First-Principles Propagation of Geoelectric Fields from Ionosphere to Ground using LANLGeoRad

    NASA Astrophysics Data System (ADS)

    Jeffery, C. A.; Woodroffe, J. R.; Henderson, M. G.

    2017-12-01

    A notable deficiency in the current space weather (SW) forecasting chain is the propagation of geoelectric fields from ionosphere to ground using Biot-Savart integrals, which ignore the localized complexity of lithospheric electrical conductivity and the relatively high conductivity of ocean water compared to the lithosphere. Three-dimensional models of Earth conductivity with mesoscale spatial resolution are being developed, but a new approach is needed to incorporate this information into the SW forecast chain. We present initial results from a first-principles geoelectric propagation model called LANLGeoRad, which solves Maxwell's equations on an unstructured geodesic grid. Challenges associated with the disparate response times of millisecond electromagnetic propagation and 10-second geomagnetic fluctuations are highlighted, and a novel rescaling of the ionosphere/ground system is presented that renders this geoelectric system computationally tractable.

  18. Development of Three-Dimensional Flow Code Package to Predict Performance and Stability of Aircraft with Leading Edge Ice Contamination

    NASA Technical Reports Server (NTRS)

    Strash, D. J.; Summa, J. M.

    1996-01-01

    In the work reported herein, a simplified, uncoupled, zonal procedure is utilized to assess the capability of numerically simulating icing effects on a Boeing 727-200 aircraft. The computational approach combines potential flow plus boundary layer simulations by VSAERO for the un-iced aircraft forces and moments with Navier-Stokes simulations by NPARC for the incremental forces and moments due to iced components. These are compared with wind tunnel force and moment data, supplied by the Boeing Company, examining longitudinal flight characteristics. Grid refinement improved the local flow features over previously reported work with no appreciable difference in the incremental ice effect. The computed lift curve slope with and without empennage ice matches the experimental value to within 1%, and the zero lift angle agrees to within 0.2 of a degree. The computed slope of the un-iced and iced aircraft longitudinal stability curve is within about 2% of the test data. This work demonstrates the feasibility of a zonal method for the icing analysis of complete aircraft or isolated components within the linear angle of attack range. In fact, this zonal technique has allowed for the viscous analysis of a complete aircraft with ice which is currently not otherwise considered tractable.

  19. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
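
    As a compact illustration of using a non-diagonal covariance as the weighting term, the toy below solves a penalized weighted least-squares problem with a banded (nearest-neighbour) correlation model, in the spirit of limiting the dimensions in which correlations are modelled. The system matrix, covariance, and roughness penalty are all invented; this is not the rebinned-PET reconstruction itself.

      import numpy as np

      rng = np.random.default_rng(3)
      n_meas, n_pix = 120, 40
      A = rng.random((n_meas, n_pix))                    # toy system matrix
      x_true = np.zeros(n_pix)
      x_true[15:25] = 1.0                                # simple 1D "image"

      # Banded data covariance: unit variance, correlation 0.3 between neighbours.
      K = np.eye(n_meas) + 0.3 * (np.eye(n_meas, k=1) + np.eye(n_meas, k=-1))
      y = A @ x_true + 0.05 * (np.linalg.cholesky(K) @ rng.standard_normal(n_meas))

      W = np.linalg.inv(K)                               # weighting = inverse covariance
      D = np.eye(n_pix, k=1)[:-1] - np.eye(n_pix)[:-1]   # first-difference roughness penalty
      beta = 0.1

      # Normal equations of  argmin_x (y - Ax)^T W (y - Ax) + beta * ||Dx||^2
      x_hat = np.linalg.solve(A.T @ W @ A + beta * D.T @ D, A.T @ W @ y)
      print(np.round(x_hat[10:30], 2))                   # roughly recovers the plateau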

  20. A pull-back algorithm to determine the unloaded vascular geometry in anisotropic hyperelastic AAA passive mechanics.

    PubMed

    Riveros, Fabián; Chandra, Santanu; Finol, Ender A; Gasser, T Christian; Rodriguez, Jose F

    2013-04-01

    Biomechanical studies on abdominal aortic aneurysms (AAA) seek to provide better decision criteria for surgical intervention for AAA repair. More accurate results can be obtained by using appropriate material models for the tissues along with accurate geometric models and more realistic boundary conditions for the lesion. However, patient-specific AAA models are generated from gated medical images in which the artery is under pressure. Therefore, identification of the AAA zero-pressure geometry would allow for a more realistic estimate of the aneurysmal wall mechanics. This study proposes a novel iterative algorithm to find the zero-pressure geometry of patient-specific AAA models. The methodology accounts for the anisotropic hyperelastic behavior of the aortic wall and its thickness, and for the presence of the intraluminal thrombus. Results on 12 patient-specific AAA geometric models indicate that the procedure is computationally tractable and efficient, and preserves the global volume of the model. In addition, a comparison of the peak wall stress computed with the zero-pressure and CT-based geometries during systole indicates that computations using CT-based geometric models underestimate the peak wall stress by 59 ± 64 and 47 ± 64 kPa for the isotropic and anisotropic material models of the arterial wall, respectively.
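
    The iterative idea can be conveyed with a one-dimensional caricature: treat the imaged, pressurized radius as the target, inflate the current guess of the unloaded geometry with a forward model, and correct the guess by the residual displacement. The forward model below is a made-up linear-elastic thin-walled tube, not the anisotropic hyperelastic AAA model used in the study, and the numbers are only plausible orders of magnitude.

      # One-dimensional caricature of a zero-pressure (pull-back) geometry search.
      PRESSURE = 16.0e3                # Pa, roughly systolic
      E, THICKNESS = 1.0e6, 1.5e-3     # hypothetical modulus (Pa) and wall thickness (m)

      def inflate(r_unloaded):
          """Toy forward model: radius of a thin-walled linear-elastic tube after
          pressurization (hoop strain = p * r / (E * t))."""
          return r_unloaded * (1.0 + PRESSURE * r_unloaded / (E * THICKNESS))

      r_imaged = 0.025                 # m, the pressurized radius seen in the image

      # Fixed-point iteration: correct the unloaded guess by the residual displacement.
      r_guess = r_imaged
      for iteration in range(1, 51):
          residual = inflate(r_guess) - r_imaged
          r_guess -= residual
          if abs(residual) < 1e-9:
              break

      print(f"unloaded radius ~ {r_guess * 1000:.3f} mm after {iteration} iterations")
      print(f"check: inflate(unloaded) = {inflate(r_guess) * 1000:.3f} mm "
            f"(target {r_imaged * 1000:.3f} mm)")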

  1. Formal Requirements-Based Programming for Complex Systems

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis

    2005-01-01

    Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be a major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.

  2. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
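
    The single-state-variable idea has a familiar linear counterpart: for a Prony-series (generalized Maxwell) material, the hereditary integral can be updated recursively from the previous step alone. The sketch below shows that standard linear recursion with invented moduli and relaxation times; the strain-dependent nonlinearity developed in the paper is deliberately left out.

      import numpy as np

      E_INF = 1.0                          # long-term modulus (arbitrary units)
      PRONY = [(0.8, 0.05), (0.5, 1.0)]    # (E_k, tau_k) pairs, invented values

      def stress_history(t, strain):
          """Recursive convolution for a linear Prony-series viscoelastic model:
          each internal variable h_k is updated from the previous step only."""
          h = np.zeros(len(PRONY))
          stress = np.empty_like(strain)
          stress[0] = E_INF * strain[0]
          for n in range(1, len(t)):
              dt, de = t[n] - t[n - 1], strain[n] - strain[n - 1]
              for k, (E_k, tau_k) in enumerate(PRONY):
                  a = np.exp(-dt / tau_k)
                  # assumes the strain varies linearly over the step
                  h[k] = a * h[k] + E_k * (1.0 - a) * (tau_k / dt) * de
              stress[n] = E_INF * strain[n] + h.sum()
          return stress

      # Step-strain relaxation: the apparent modulus decays from E_INF + sum(E_k) to E_INF.
      t = np.linspace(0.0, 5.0, 2001)
      strain = 0.01 * (t > 0.0)
      sigma = stress_history(t, strain)
      print(sigma[1] / 0.01, sigma[-1] / 0.01)   # ~2.3 just after the step, -> ~1.0 later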

  3. Suppression of Soot Formation and Shapes of Laminar Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Xu, F.; Dai, Z.; Faeth, G. M.

    2001-01-01

    Laminar nonpremixed (diffusion) flames are of interest because they provide model flame systems that are far more tractable for analysis and experiments than practical turbulent flames. In addition, many properties of laminar diffusion flames are directly relevant to turbulent diffusion flames using laminar flamelet concepts. Finally, laminar diffusion flame shapes have been of interest since the classical study of Burke and Schumann because they involve a simple nonintrusive measurement that is convenient for evaluating flame shape predictions. Motivated by these observations, the shapes of round hydrocarbon-fueled laminar jet diffusion flames were considered, emphasizing conditions where effects of buoyancy are small because most practical flames are not buoyant. Earlier studies of shapes of hydrocarbon-fueled nonbuoyant laminar jet diffusion flames considered combustion in still air and have shown that flames at the laminar smoke point are roughly twice as long as corresponding soot-free (blue) flames and have developed simple ways to estimate their shapes. Corresponding studies of hydrocarbon-fueled weakly-buoyant laminar jet diffusion flames in coflowing air have also been reported. These studies were limited to soot-containing flames at laminar smoke point conditions and also developed simple ways to estimate their shapes but the behavior of corresponding soot-free flames has not been addressed. This is unfortunate because ways of selecting flame flow properties to reduce soot concentrations are of great interest; in addition, soot-free flames are fundamentally important because they are much more computationally tractable than corresponding soot-containing flames. Thus, the objectives of the present investigation were to observe the shapes of weakly-buoyant laminar jet diffusion flames at both soot-free and smoke point conditions and to use the results to evaluate simplified flame shape models. The present discussion is brief.

  4. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial intelligence. In the years ahead, this may well become a widely held position.

  5. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
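
    The decision statistic is easy to illustrate: if degradation grows with the time since the last remap and a remap costs a fixed delay, then W(n), the average cost per step when remapping every n steps, has a single minimum. The toy below evaluates W(n) for an assumed linear degradation model and picks the minimizing interval; all constants are invented.

      import numpy as np

      C_REMAP = 200.0     # delay cost of one remapping (arbitrary time units)
      DRIFT = 0.05        # extra cost added to each step as load imbalance grows

      def W(n):
          """Average excess cost per step (including the remap) when remapping every n steps."""
          steps = np.arange(1, n + 1)
          degradation = (DRIFT * steps).sum()     # excess over the ideal per-step cost
          return (degradation + C_REMAP) / n

      ns = np.arange(1, 401)
      w = np.array([W(n) for n in ns])
      n_star = ns[np.argmin(w)]
      print(f"remap roughly every {n_star} steps (W = {w.min():.2f})")
      # Analytic check for this linear model: n* ~ sqrt(2 * C_REMAP / DRIFT) ~ 89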

  6. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms in practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
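
    The basic clustering idea can be demonstrated on a toy system: take a short chain of interacting two-state sites, compute a thermal average exactly by enumerating all microstates, and compare it with the estimate obtained by cutting the chain into independent clusters and dropping the inter-cluster couplings. The Hamiltonian and parameters below are invented and are not tied to protein electrostatics.

      import itertools
      import numpy as np

      N, J, H_FIELD, BETA = 12, 1.0, 0.3, 1.0   # sites, coupling, field, inverse temperature

      def average_magnetization(n_sites, coupling_pairs):
          """Exact Boltzmann average of the mean spin over all 2**n_sites microstates."""
          z, m = 0.0, 0.0
          for state in itertools.product([-1, 1], repeat=n_sites):
              energy = (-H_FIELD * sum(state)
                        - J * sum(state[i] * state[j] for i, j in coupling_pairs))
              weight = np.exp(-BETA * energy)
              z += weight
              m += weight * np.mean(state)
          return m / z

      # Exact: one 12-site chain with all nearest-neighbour couplings (4096 microstates).
      exact = average_magnetization(N, [(i, i + 1) for i in range(N - 1)])

      # Clustered: independent 4-site blocks; couplings crossing block boundaries are
      # ignored, so each block is summed over only 16 microstates.
      BLOCK = 4
      clustered = average_magnetization(BLOCK, [(i, i + 1) for i in range(BLOCK - 1)])

      print(f"exact <m> = {exact:.4f}   clustered approximation <m> = {clustered:.4f}")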

  7. Generalizing Backtrack-Free Search: A Framework for Search-Free Constraint Satisfaction

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy

    2000-01-01

    Tractable classes of constraint satisfaction problems are of great importance in artificial intelligence. Identifying and taking advantage of such classes can significantly speed up constraint problem solving. In addition, tractable classes are utilized in applications where strict worst-case performance guarantees are required, such as constraint-based plan execution. In this work, we present a formal framework for search-free (backtrack-free) constraint satisfaction. The framework is based on general procedures, rather than specific propagation techniques, and thus generalizes existing techniques in this area. We also relate search-free problem solving to the notion of decision sets and use the result to provide a constructive criterion that is sufficient to guarantee search-free problem solving.

  8. Computer Use by School Teachers in Teaching-Learning Process

    ERIC Educational Resources Information Center

    Bhalla, Jyoti

    2013-01-01

    Developing countries have a responsibility not merely to provide computers for schools, but also to foster a habit of infusing a variety of ways in which computers can be integrated into teaching-learning amongst the end users of these tools. Earlier research lacked a systematic study of the manner and the extent of computer use by teachers. The…

  9. Student Teachers' Computer Use during Practicum.

    ERIC Educational Resources Information Center

    Wang, Yu-mei; Holthaus, Patricia

    This study was designed to investigate the use of computers by student teachers in their practicums. Student teachers (n=120) in two public universities in the United States answered a questionnaire that covered: the manner and frequency of computer use, student teachers' perception of their training, their attitudes toward the role of the…

  10. Letter regarding 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics' by Patrizi et al. and research reproducibility.

    PubMed

    2017-04-01

    The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.

  11. Children, computer exposure and musculoskeletal outcomes: the development of pathway models for school and home computer-related musculoskeletal outcomes.

    PubMed

    Harris, Courtenay; Straker, Leon; Pollock, Clare; Smith, Anne

    2015-01-01

    Children's computer use is rapidly growing, together with reports of related musculoskeletal outcomes. Models and theories of adult-related risk factors demonstrate multivariate risk factors associated with computer use. Children's use of computers is different from adults' computer use at work. This study developed and tested a child-specific model demonstrating multivariate relationships between musculoskeletal outcomes, computer exposure and child factors. Using pathway modelling, factors such as gender, age, television exposure, computer anxiety, sustained attention (flow), socio-economic status and somatic complaints (headache and stomach pain) were found to have effects on children's reports of musculoskeletal symptoms. The potential for children's computer exposure to follow a dose-response relationship was also evident. Developing a child-related model can assist in understanding risk factors for children's computer use and support the development of recommendations to encourage children to use this valuable resource in educational, recreational and communication environments in a safe and productive manner. Computer use is an important part of children's school and home life. Application of this developed model, which encapsulates related risk factors, enables practitioners, researchers, teachers and parents to develop strategies that assist young people to use information technology for school, home and leisure in a safe and productive manner.

  12. Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment.

    ERIC Educational Resources Information Center

    Weller, Herman G.; Hartson, H. Rex

    1992-01-01

    Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…

  13. GeneImp: Fast Imputation to Large Reference Panels Using Genotype Likelihoods from Ultralow Coverage Sequencing

    PubMed Central

    Spiliopoulou, Athina; Colombo, Marco; Orchard, Peter; Agakov, Felix; McKeigue, Paul

    2017-01-01

    We address the task of genotype imputation to a dense reference panel given genotype likelihoods computed from ultralow coverage sequencing as inputs. In this setting, the data have a high-level of missingness or uncertainty, and are thus more amenable to a probabilistic representation. Most existing imputation algorithms are not well suited for this situation, as they rely on prephasing for computational efficiency, and, without definite genotype calls, the prephasing task becomes computationally expensive. We describe GeneImp, a program for genotype imputation that does not require prephasing and is computationally tractable for whole-genome imputation. GeneImp does not explicitly model recombination, instead it capitalizes on the existence of large reference panels—comprising thousands of reference haplotypes—and assumes that the reference haplotypes can adequately represent the target haplotypes over short regions unaltered. We validate GeneImp based on data from ultralow coverage sequencing (0.5×), and compare its performance to the most recent version of BEAGLE that can perform this task. We show that GeneImp achieves imputation quality very close to that of BEAGLE, using one to two orders of magnitude less time, without an increase in memory complexity. Therefore, GeneImp is the first practical choice for whole-genome imputation to a dense reference panel when prephasing cannot be applied, for instance, in datasets produced via ultralow coverage sequencing. A related future application for GeneImp is whole-genome imputation based on the off-target reads from deep whole-exome sequencing. PMID:28348060
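
    At its core, imputation from genotype likelihoods is Bayesian bookkeeping: combine the sequencing evidence with a prior derived from the reference panel. The sketch below does this for a single biallelic site using only a panel allele frequency and a Hardy-Weinberg prior; GeneImp's actual prior comes from matching short stretches of reference haplotypes, so treat this purely as an illustration, with made-up numbers.

      import numpy as np

      def posterior_genotype(likelihoods, panel_alt_freq):
          """Posterior P(genotype | reads) for 0/1/2 copies of the ALT allele, combining
          genotype likelihoods with a Hardy-Weinberg prior from the reference panel."""
          f = panel_alt_freq
          prior = np.array([(1 - f) ** 2, 2 * f * (1 - f), f ** 2])
          unnorm = np.asarray(likelihoods) * prior
          return unnorm / unnorm.sum()

      # Hypothetical likelihoods after a single ALT-supporting read at a low-coverage site.
      gl = [0.01, 0.50, 0.99]                    # P(reads | g) for g = 0, 1, 2
      post = posterior_genotype(gl, panel_alt_freq=0.12)
      dosage = float(np.dot([0, 1, 2], post))    # expected ALT dosage used downstream
      print(np.round(post, 3), round(dosage, 3))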

  14. Coarse-grained modeling of RNA 3D structure.

    PubMed

    Dawson, Wayne K; Maciejczyk, Maciej; Jankowska, Elzbieta J; Bujnicki, Janusz M

    2016-07-01

    Functional RNA molecules depend on three-dimensional (3D) structures to carry out their tasks within the cell. Understanding how these molecules interact to carry out their biological roles requires a detailed knowledge of RNA 3D structure and dynamics as well as thermodynamics, which strongly governs the folding of RNA and RNA-RNA interactions as well as a host of other interactions within the cellular environment. Experimental determination of these properties is difficult, and various computational methods have been developed to model the folding of RNA 3D structures and their interactions with other molecules. However, computational methods also have their limitations, especially when the biological effects demand computation of the dynamics beyond a few hundred nanoseconds. For the researcher confronted with such challenges, a more amenable approach is to resort to coarse-grained modeling to reduce the number of data points and computational demand to a more tractable size, while sacrificing as little critical information as possible. This review presents an introduction to the topic of coarse-grained modeling of RNA 3D structures and dynamics, covering both high- and low-resolution strategies. We discuss how physics-based approaches compare with knowledge-based methods that rely on databases of information. In the course of this review, we discuss important aspects of the reasoning process behind building different models and the goals and pitfalls that can result. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Computers and Play in Early Childhood: Affordances and Limitations

    ERIC Educational Resources Information Center

    Verenikina, Irina; Herrington, Jan; Peterson, Rob; Mantei, Jessica

    2010-01-01

    The widespread proliferation of computer games for children as young as six months of age, merits a reexamination of their manner of use and a review of their facility to provide opportunities for developmental play. This article describes a research study conducted to explore the use of computer games by young children, specifically to…

  16. An Empirical Measure of Computer Security Strength for Vulnerability Remediation

    ERIC Educational Resources Information Center

    Villegas, Rafael

    2010-01-01

    Remediating all vulnerabilities on computer systems in a timely and cost-effective manner is difficult given that the window of time between the announcement of a new vulnerability and an automated attack has decreased. Hence, organizations need to prioritize the vulnerability remediation process on their computer systems. The goal of this…

  17. Reconfigurable engineered motile semiconductor microparticles.

    PubMed

    Ohiri, Ugonna; Shields, C Wyatt; Han, Koohee; Tyler, Talmage; Velev, Orlin D; Jokerst, Nan

    2018-05-03

    Locally energized particles form the basis for emerging classes of active matter. The design of active particles has led to their controlled locomotion and assembly. The next generation of particles should demonstrate robust control over their active assembly, disassembly, and reconfiguration. Here we introduce a class of semiconductor microparticles that can be comprehensively designed (in size, shape, electric polarizability, and patterned coatings) using standard microfabrication tools. These custom silicon particles draw energy from external electric fields to actively propel, while interacting hydrodynamically, and sequentially assemble and disassemble on demand. We show that a number of electrokinetic effects, such as dielectrophoresis, induced charge electrophoresis, and diode propulsion, can selectively power the microparticle motions and interactions. The ability to achieve on-demand locomotion, tractable fluid flows, synchronized motility, and reversible assembly using engineered silicon microparticles may enable advanced applications that include remotely powered microsensors, artificial muscles, reconfigurable neural networks and computational systems.

  18. Temperature dependence of long coherence times of oxide charge qubits.

    PubMed

    Dey, A; Yarlagadda, S

    2018-02-22

    The ability to maintain coherence and control in a qubit is a major requirement for quantum computation. We show theoretically that long coherence times can be achieved at easily accessible temperatures (such as the boiling point of liquid helium) in small (i.e., ~10 nanometer) charge qubits of oxide double quantum dots when only optical phonons are the source of decoherence. In the regime of strong electron-phonon coupling and in the non-adiabatic region, we employ a duality transformation to make the problem tractable and analyze the dynamics through a non-Markovian quantum master equation. We find that the system decoheres after a long time, despite the fact that no energy is exchanged with the bath. Detuning the dots to a fraction of the optical phonon energy, increasing the electron-phonon coupling, reducing the adiabaticity, or decreasing the temperature enhances the coherence time.

  19. Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.

    PubMed

    Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan

    2011-11-01

    Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/).
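
    Opera's exact quadratic programming step is not reproduced here; the sketch below only illustrates the flavor of the gap-size subproblem, estimating the gaps along one ordered scaffold by non-negative least squares from paired-end links. The data layout and the numbers are invented for illustration.

      import numpy as np
      from scipy.optimize import nnls

      def estimate_gaps(n_gaps, links):
          # n_gaps : number of gaps between consecutive contigs in the scaffold.
          # links  : (first_gap, last_gap, implied_gap_sum) per paired-end link, where
          #          implied_gap_sum is the observed insert size minus the portions of
          #          the read-to-read span that fall inside contigs.
          A = np.zeros((len(links), n_gaps))
          d = np.zeros(len(links))
          for r, (lo, hi, gap_sum) in enumerate(links):
              A[r, lo:hi + 1] = 1.0        # this link spans gaps lo..hi
              d[r] = gap_sum
          gaps, _ = nnls(A, d)             # gap sizes constrained to be non-negative
          return gaps

      # Example: 3 gaps constrained by 4 links
      print(estimate_gaps(3, [(0, 0, 210.0), (0, 1, 390.0), (1, 2, 480.0), (2, 2, 300.0)]))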

  20. larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.

    PubMed

    Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit

    2018-01-01

    The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS) and 2) spatially mapping expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.

  1. A hybrid agent-based approach for modeling microbiological systems.

    PubMed

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
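
    A minimal sketch of the hybrid idea, with cells as discrete agents biased up a chemical gradient and the molecular species as a continuous field updated by explicit diffusion, is given below. The grid size, rates, and movement rule are illustrative assumptions, not the authors' chemotaxis model.

      import numpy as np

      rng = np.random.default_rng(0)
      L, n_cells, steps = 50, 200, 200
      field = np.zeros((L, L)); field[L // 2, L // 2] = 1000.0   # attractant source
      cells = rng.integers(0, L, size=(n_cells, 2))              # agent positions

      def diffuse(f, d=0.2):
          # explicit diffusion step for the continuous (molecular) quantity
          return f + d * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                          np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

      moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
      for _ in range(steps):
          field = diffuse(field)
          field[L // 2, L // 2] += 10.0                          # replenish the source
          for k in range(n_cells):                               # agent-based rule per cell
              trial = (cells[k] + moves) % L
              conc = field[trial[:, 0], trial[:, 1]]
              p = np.exp(2.0 * conc / (conc.max() + 1e-9))       # bias toward the attractant
              cells[k] = trial[rng.choice(4, p=p / p.sum())]

      print("mean distance to source:", np.mean(np.abs(cells - L // 2).sum(axis=1)))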

  2. Automated adaptive inference of phenomenological dynamical models.

    PubMed

    Daniels, Bryan C; Nemenman, Ilya

    2015-08-21

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
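
    The central idea, letting model complexity adapt to the amount of available data, can be illustrated generically with a complexity penalty such as BIC applied to a family of polynomial fits; this stand-in is not the authors' adaptive model class or algorithm.

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0, 1, 30)
      y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=x.size)   # noisy observations

      def bic(degree):
          coef = np.polyfit(x, y, degree)
          resid = y - np.polyval(coef, x)
          n, k = x.size, degree + 1
          return n * np.log(np.mean(resid ** 2)) + k * np.log(n)  # fit term + complexity penalty

      scores = {d: bic(d) for d in range(1, 10)}
      print("selected polynomial degree:", min(scores, key=scores.get))

    With more data the penalty becomes small relative to the fit term, so richer models can be selected, which is the qualitative behaviour the abstract describes.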

  3. Practical Bayesian tomography

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Combes, Joshua; Cory, D. G.

    2016-03-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  4. Opera: Reconstructing Optimal Genomic Scaffolds with High-Throughput Paired-End Sequences

    PubMed Central

    Gao, Song; Sung, Wing-Kin

    2011-01-01

    Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/). PMID:21929371

  5. The speed-accuracy tradeoff: history, physiology, methodology, and behavior

    PubMed Central

    Heitz, Richard P.

    2014-01-01

    There are few behavioral effects as ubiquitous as the speed-accuracy tradeoff (SAT). From insects to rodents to primates, the tendency for decision speed to covary with decision accuracy seems an inescapable property of choice behavior. Recently, the SAT has received renewed interest, as neuroscience approaches begin to uncover its neural underpinnings and computational models are compelled to incorporate it as a necessary benchmark. The present work provides a comprehensive overview of SAT. First, I trace its history as a tractable behavioral phenomenon and the role it has played in shaping mathematical descriptions of the decision process. Second, I present a “user's guide” to SAT methodology, including a critical review of common experimental manipulations and analysis techniques and a treatment of the typical behavioral patterns that emerge when SAT is manipulated directly. Finally, I review applications of this methodology in several domains. PMID:24966810
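
    A standard drift-diffusion simulation makes the tradeoff concrete: raising the decision boundary produces slower but more accurate choices. The parameters below are illustrative and the model is the generic one, not any specific account reviewed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def ddm(boundary, drift=1.0, noise=1.0, dt=0.002, n_trials=2000):
          # Simulate a generic drift-diffusion decision model.
          # Returns (proportion correct, mean response time in seconds).
          n_correct, rts = 0, []
          for _ in range(n_trials):
              x, t = 0.0, 0.0
              while abs(x) < boundary:
                  x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                  t += dt
              n_correct += x > 0           # upper boundary taken as the correct response
              rts.append(t)
          return n_correct / n_trials, float(np.mean(rts))

      for a in (0.5, 1.0, 2.0):            # higher boundary -> slower but more accurate
          acc, rt = ddm(a)
          print(f"boundary={a:.1f}  accuracy={acc:.3f}  mean RT={rt:.2f} s")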

  6. Stochastic Modeling and Generation of Partially Polarized or Partially Coherent Electromagnetic Waves

    NASA Technical Reports Server (NTRS)

    Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality, a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
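
    A minimal sketch of the signal-generation idea, producing two partially coherent complex Gaussian channels by mixing white noise through a matrix square root of the target coherency matrix, is shown below. It omits the spectral (filter) shaping described in the abstract, and the correlation value is an arbitrary example.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000
      rho = 0.6 * np.exp(1j * 0.3)          # desired complex correlation between channels

      # circular complex white-noise drivers, unit variance per channel
      w = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)

      # 2x2 mixing matrix = Cholesky factor of the target coherency matrix
      C = np.array([[1.0, rho], [np.conj(rho), 1.0]])
      x = np.linalg.cholesky(C) @ w         # two partially coherent channels

      print("target:", rho, " estimated:", np.mean(x[0] * np.conj(x[1])))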

  7. Baldovin-Stella stochastic volatility process and Wiener process mixtures

    NASA Astrophysics Data System (ADS)

    Peirano, P. P.; Challet, D.

    2012-08-01

    Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power-law-truncated Lévy distributions, show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions, and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.

  8. Robust Learning of High-dimensional Biological Networks with Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Nägele, Andreas; Dejori, Mathäus; Stetter, Martin

    Structure learning of Bayesian networks applied to gene expression data has become a potentially useful method to estimate interactions between genes. However, the NP-hardness of Bayesian network structure learning renders the reconstruction of the full genetic network with thousands of genes unfeasible. Consequently, the maximal network size is usually restricted dramatically to a small set of genes (corresponding with variables in the Bayesian network). Although this feature reduction step makes structure learning computationally tractable, on the downside, the learned structure might be adversely affected due to the introduction of missing genes. Additionally, gene expression data are usually very sparse with respect to the number of samples, i.e., the number of genes is much greater than the number of different observations. Given these problems, learning robust network features from microarray data is a challenging task. This chapter presents several approaches tackling the robustness issue in order to obtain a more reliable estimation of learned network features.

  9. A two-fluid model for avalanche and debris flows.

    PubMed

    Pitman, E Bruce; Le, Long

    2005-07-15

    Geophysical mass flows--debris flows, avalanches, landslides--can contain O(10^6-10^10) m^3 or more of material, often a mixture of soil and rocks with a significant quantity of interstitial fluid. These flows can be tens of meters in depth and hundreds of meters in length. The range of scales and the rheology of this mixture present significant modelling and computational challenges. This paper describes a depth-averaged 'thin layer' model of geophysical mass flows containing a mixture of solid material and fluid. The model is derived from a 'two-phase' or 'two-fluid' system of equations commonly used in engineering research. Phenomenological modelling and depth averaging combine to yield a tractable set of equations, a hyperbolic system that describes the motion of the two constituent phases. If the fluid inertia is small, a reduced model system that is easier to solve may be derived.

  10. Resource-aware taxon selection for maximizing phylogenetic diversity.

    PubMed

    Pardi, Fabio; Goldman, Nick

    2007-06-01

    Phylogenetic diversity (PD) is a useful metric for selecting taxa in a range of biological applications, for example, bioconservation and genomics, where the selection is usually constrained by the limited availability of resources. We formalize taxon selection as a conceptually simple optimization problem, aiming to maximize PD subject to resource constraints. This allows us to take into account the different amounts of resources required by the different taxa. Although this is a computationally difficult problem, we present a dynamic programming algorithm that solves it in pseudo-polynomial time. Our algorithm can also solve many instances of the Noah's Ark Problem, a more realistic formulation of taxon selection for biodiversity conservation that allows for taxon-specific extinction risks. These instances extend the set of problems for which solutions are available beyond previously known greedy-tractable cases. Finally, we discuss the relevance of our results to real-life scenarios.

  11. Mathematical modeling of spinning elastic bodies for modal analysis.

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Barbera, F. J.; Baddeley, V.

    1973-01-01

    The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.

  12. Module Extraction for Efficient Object Queries over Ontologies with Large ABoxes

    PubMed Central

    Xu, Jia; Shironoshita, Patrick; Visser, Ubbo; John, Nigel; Kabuka, Mansur

    2015-01-01

    The extraction of logically-independent fragments out of an ontology ABox can be useful for solving the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module, such that it guarantees complete preservation of facts about a given set of individuals, and thus can be reasoned over independently w.r.t. the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such an ABox module, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and the time for ontology reasoning based on ABox modules can be improved significantly. PMID:26848490

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, Ashley; Hunt, Kristopher; Bernstein, Hans C.

    Interest in microbial communities for bioprocessing has surged in recent years based on the potential to optimize multiple tasks simultaneously and to enhance process productivity and stability. The presence and magnitude of these desirable system properties often result from interactions between functionally distinct community members. The importance of interactions, while appreciated by some disciplines for decades, has gained interest recently due to the development of 'omics techniques, polymicrobial culturing approaches, and computational methods which have made the systems-level analysis of interacting components more tractable. This review defines and categorizes natural and engineered system components, interactions, and emergent properties, as well as presents three ecological theories relevant to microbial communities. Case studies are interpreted to illustrate components, interactions, emergent properties and agreement with theoretical concepts. A general foundation is laid to facilitate interpretation of current systems and to aid in future design of microbial systems for the next generation of bioprocesses.

  14. Policy Iteration for $H_\infty$ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. They can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L2-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.

  15. Study of Permanent Magnet Focusing for Astronomical Camera Tubes

    NASA Technical Reports Server (NTRS)

    Long, D. C.; Lowrance, J. L.

    1975-01-01

    A design is developed of a permanent magnet assembly (PMA) useful as the magnetic focusing unit for the 35 and 70 mm (diagonal) format SEC tubes. Detailed PMA designs for both tubes are given, and all data on their magnetic configuration, size, weight, and structure of magnetic shields adequate to screen the camera tube from the earth's magnetic field are presented. A digital computer is used for the PMA design simulations, and the expected operational performance of the PMA is ascertained through the calculation of a series of photoelectron trajectories. A large volume where the magnetic field uniformity is greater than 0.5% appears obtainable, and the point spread function (PSF) and modulation transfer function (MTF) indicate nearly ideal performance. The MTF at 20 cycles per mm exceeds 90%. The weight and volume appear tractable for the large space telescope and ground based application.

  16. Modeling intelligent adversaries for terrorism risk assessment: some necessary conditions for adversary models.

    PubMed

    Guikema, Seth

    2012-07-01

    Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.

  17. Data Access, Interoperability and Sustainability: Key Challenges for the Evolution of Science Capabilities

    NASA Astrophysics Data System (ADS)

    Walton, A. L.

    2015-12-01

    In 2016, the National Science Foundation (NSF) will support a portfolio of activities and investments focused upon challenges in data access, interoperability, and sustainability. These topics are fundamental to science questions of increasing complexity that require multidisciplinary approaches and expertise. Progress has become tractable because of (and sometimes complicated by) unprecedented growth in data (both simulations and observations) and rapid advances in technology (such as instrumentation in all aspects of the discovery process, together with ubiquitous cyberinfrastructure to connect, compute, visualize, store, and discover). The goal is an evolution of capabilities for the research community based on these investments, scientific priorities, technology advances, and policies. Examples from multiple NSF directorates, including investments by the Advanced Cyberinfrastructure Division, are aimed at these challenges and can provide the geosciences research community with models and opportunities for participation. Implications for the future are highlighted, along with the importance of continued community engagement on key issues.

  18. Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction

    NASA Astrophysics Data System (ADS)

    Mach, J. C.; Budrow, C. J.; Pagan, D. C.; Ruff, J. P. C.; Park, J.-S.; Okasinski, J.; Beaudoin, A. J.; Miller, M. P.

    2017-05-01

    Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present work, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. The experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.

  19. Tomographic reconstruction of an aerosol plume using passive multiangle observations from the MISR satellite instrument

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Davis, Anthony B.; Diner, David J.

    2016-12-01

    We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.
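
    The algebraic flavor of such a reconstruction can be sketched with Kaczmarz-style ART iterations on synthetic line-integral data. The toy geometry below uses only two view directions, so the field is not uniquely determined (which is precisely why a multiangle instrument matters); none of it reflects the authors' actual algorithm.

      import numpy as np

      n = 20                                            # n x n grid of extinction cells
      true = np.zeros((n, n)); true[8:12, 6:14] = 1.0   # synthetic plume cross-section

      rays = []                                         # each ray integrates one row or column
      for i in range(n):
          r = np.zeros((n, n)); r[i, :] = 1.0; rays.append(r)
          c = np.zeros((n, n)); c[:, i] = 1.0; rays.append(c)
      A = np.array([r.ravel() for r in rays])
      b = A @ true.ravel()                              # synthetic optical thickness per ray

      x = np.zeros(n * n)                               # Kaczmarz / ART sweeps
      for _ in range(50):
          for ai, bi in zip(A, b):
              x += (bi - ai @ x) / (ai @ ai) * ai
      x = np.clip(x, 0.0, None)
      print("max data misfit:", np.abs(A @ x - b).max())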

  20. Using Betweenness Centrality to Identify Manifold Shortcuts

    PubMed Central

    Cukierski, William J.; Foran, David J.

    2010-01-01

    High-dimensional data presents a challenge to tasks of pattern recognition and machine learning. Dimensionality reduction (DR) methods remove the unwanted variance and make these tasks tractable. Several nonlinear DR methods, such as the well known ISOMAP algorithm, rely on a neighborhood graph to compute geodesic distances between data points. These graphs can contain unwanted edges which connect disparate regions of one or more manifolds. This topological sensitivity is well known [1], [2], [3], yet handling high-dimensional, noisy data in the absence of a priori manifold knowledge remains an open and difficult problem. This work introduces a divisive, edge-removal method based on graph betweenness centrality which can robustly identify manifold-shorting edges. The problem of graph construction in high dimension is discussed and the proposed algorithm is fit into the ISOMAP workflow. ROC analysis is performed and the performance is tested on synthetic and real datasets. PMID:20607142
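
    A hedged sketch of the general approach (build a k-nearest-neighbour graph, score edges by betweenness centrality, remove the top-scoring candidates as shortcuts) is given below using networkx and scikit-learn. The naive percentile threshold stands in for the paper's more careful divisive procedure.

      import numpy as np
      import networkx as nx
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(0)
      # points along an open loop whose ends come close enough to create shortcut edges
      t = np.linspace(0, 2 * np.pi - 0.1, 300)
      X = np.c_[np.cos(t), np.sin(t)] + 0.03 * rng.normal(size=(300, 2))

      dist, idx = NearestNeighbors(n_neighbors=9).fit(X).kneighbors(X)  # self + 8 neighbours
      G = nx.Graph()
      for i in range(len(X)):
          for d, j in zip(dist[i, 1:], idx[i, 1:]):
              G.add_edge(i, int(j), weight=float(d))

      ebc = nx.edge_betweenness_centrality(G, weight='weight')
      cut = np.quantile(list(ebc.values()), 0.99)        # illustrative threshold
      shortcuts = [e for e, v in ebc.items() if v > cut]
      G.remove_edges_from(shortcuts)
      print(f"removed {len(shortcuts)} high-betweenness edges; "
            f"graph still connected: {nx.is_connected(G)}")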

  1. First stage identification of syntactic elements in an extra-terrestrial signal

    NASA Astrophysics Data System (ADS)

    Elliott, John

    2011-02-01

    By investigating the generic attributes of a representative set of terrestrial languages at varying levels of abstraction, it is our endeavour to try and isolate elements of the signal universe, which are computationally tractable for its detection and structural decipherment. Ultimately, our aim is to contribute in some way to the understanding of what 'languageness' actually is. This paper describes algorithms and software developed to characterise and detect generic intelligent language-like features in an input signal, using natural language learning techniques: looking for characteristic statistical "language-signatures" in test corpora. As a first step towards such species-independent language-detection, we present a suite of programs to analyse digital representations of a range of data, and use the results to extrapolate whether or not there are language-like structures which distinguish this data from other sources, such as music, images, and white noise.

  2. Extension of the Viscous Collision Limiting Direct Simulation Monte Carlo Technique to Multiple Species

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Burt, Jonathan M.

    2016-01-01

    There are many flow fields that span a wide range of length scales where regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate effects of physical diffusion into collision limiter calculations to make the low Knudsen number regime normally limited to CFD more tractable for an all-particle technique. This original work had been derived for a single species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients. However, a more rigorous treatment of determining the mixture viscosity is applied. In the original work, consideration was given to internal energy non-equilibrium, and this is also extended in the current work to chemical non-equilibrium.

  3. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps

  4. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  5. Fast Time-Dependent Density Functional Theory Calculations of the X-ray Absorption Spectroscopy of Large Systems.

    PubMed

    Besley, Nicholas A

    2016-10-11

    The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals that involve the core s basis function of the absorbing atom(s) coupled with a reduced quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the absorbing core orbitals excitation space and exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60 and C70.

  6. Laminar and Turbulent Gaseous Diffusion Flames. Appendix C

    NASA Technical Reports Server (NTRS)

    Faeth, G. M.; Urban, D. L. (Technical Monitor); Yuan, Z.-G. (Technical Monitor)

    2001-01-01

    Recent measurements and predictions of the properties of homogeneous (gaseous) laminar and turbulent non-premixed (diffusion) flames are discussed, emphasizing results from both ground- and space-based studies at microgravity conditions. Initial considerations show that effects of buoyancy not only complicate the interpretation of observations of diffusion flames but at times mislead when such results are applied to the non-buoyant diffusion flame conditions of greatest practical interest. This behavior motivates consideration of experiments where effects of buoyancy are minimized; therefore, methods of controlling the intrusion of buoyancy during observations of non-premixed flames are described, considering approaches suitable for both normal laboratory conditions as well as classical microgravity techniques. Studies of laminar flames at low-gravity and microgravity conditions are emphasized in view of the computational tractability of such flames for developing methods of predicting flame structure as well as the relevance of such flames to more practical turbulent flames by exploiting laminar flamelet concepts.

  7. Extended Islands of Tractability for Parsimony Haplotyping

    NASA Astrophysics Data System (ADS)

    Fleischer, Rudolf; Guo, Jiong; Niedermeier, Rolf; Uhlmann, Johannes; Wang, Yihui; Weller, Mathias; Wu, Xi

    Parsimony haplotyping is the problem of finding a smallest size set of haplotypes that can explain a given set of genotypes. The problem is NP-hard, and many heuristic and approximation algorithms as well as polynomial-time solvable special cases have been discovered. We propose improved fixed-parameter tractability results with respect to the parameter "size of the target haplotype set" k by presenting an O*(k^(4k))-time algorithm. This also applies to the practically important constrained case, where we can only use haplotypes from a given set. Furthermore, we show that the problem becomes polynomial-time solvable if the given set of genotypes is complete, i.e., contains all possible genotypes that can be explained by the set of haplotypes.

  8. Design and Elaboration of a Tractable Tricyclic Scaffold To Synthesize Druglike Inhibitors of Dipeptidyl Peptidase-4 (DPP-4), Antagonists of the C-C Chemokine Receptor Type 5 (CCR5), and Highly Potent and Selective Phosphoinositol-3 Kinase δ (PI3Kδ) Inhibitors.

    PubMed

    Schwehm, Carolin; Kellam, Barrie; Garces, Aimie E; Hill, Stephen J; Kindon, Nicholas D; Bradshaw, Tracey D; Li, Jin; Macdonald, Simon J F; Rowedder, James E; Stoddart, Leigh A; Stocks, Michael J

    2017-02-23

    A novel molecular scaffold has been synthesized, and its incorporation into new analogues of biologically active molecules across multiple target classes will be discussed. In these studies, we have shown use of the tricyclic scaffold to synthesize potent inhibitors of the serine peptidase DPP-4, antagonists of the CCR5 receptor, and highly potent and selective PI3K δ isoform inhibitors. We also describe the predicted physicochemical properties of the resulting inhibitors and conclude that the tractable molecular scaffold could have potential application in future drug discovery programs.

  9. Defect Genome of Cubic Perovskites for Fuel Cell Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.

    Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defects (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.

  10. Defect Genome of Cubic Perovskites for Fuel Cell Applications

    DOE PAGES

    Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.; ...

    2017-10-10

    Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defects (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.

  11. The Mayak Worker Dosimetry System (MWDS-2013): Implementation of the Dose Calculations.

    PubMed

    Zhdanov, A; Vostrotin, V; Efimov, A; Birchall, A; Puncher, M

    2016-07-15

    The calculation of internal doses for the Mayak Worker Dosimetry System (MWDS-2013) involved extensive computational resources due to the complexity and sheer number of calculations required. The required output consisted of a set of 1000 hyper-realizations: each hyper-realization consists of a set (1 for each worker) of probability distributions of organ doses. This report describes the hardware components and computational approaches required to make the calculation tractable. Together with the software, this system is referred to here as the 'PANDORA system'. It is based on a commercial SQL server database in a series of six work stations. A complete run of the entire Mayak worker cohort entailed a huge amount of calculations in PANDORA and due to the relatively slow speed of writing the data into the SQL server, each run took about 47 days. Quality control was monitored by comparing doses calculated in PANDORA with those in a specially modified version of the commercial software 'IMBA Professional Plus'. Suggestions are also made for increasing calculation and storage efficiency for future dosimetry calculations using PANDORA. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Bringing numerous methods for expression and promoter analysis to a public cloud computing service.

    PubMed

    Polanski, Krzysztof; Gao, Bo; Mason, Sam A; Brown, Paul; Ott, Sascha; Denby, Katherine J; Wild, David L

    2018-03-01

    Every year, a large number of novel algorithms are introduced to the scientific community for a myriad of applications, but using these across different research groups is often troublesome, due to suboptimal implementations and specific dependency requirements. This does not have to be the case, as public cloud computing services can easily house tractable implementations within self-contained dependency environments, making the methods easily accessible to a wider public. We have taken 14 popular methods, the majority related to expression data or promoter analysis, developed these up to a good implementation standard and housed the tools in isolated Docker containers which we integrated into the CyVerse Discovery Environment, making these easily usable for a wide community as part of the CyVerse UK project. The integrated apps can be found at http://www.cyverse.org/discovery-environment, while the raw code is available at https://github.com/cyversewarwick and the corresponding Docker images are housed at https://hub.docker.com/r/cyversewarwick/. info@cyverse.warwick.ac.uk or D.L.Wild@warwick.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  13. Side-by-side ANFIS as a useful tool for estimating correlated thermophysical properties

    NASA Astrophysics Data System (ADS)

    Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc

    2015-12-01

    In the present paper, an artificial intelligence-based approach dealing with the estimation of correlated thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when homogeneous materials are subjected to a light flux. Commonly, gradient-based algorithms are used as parameter estimation techniques. Unfortunately, such algorithms show instabilities leading to non-convergence in case of correlated properties to be estimated from a rebuilt impulse response. So, the main objective of the present work was to simultaneously estimate both the thermal diffusivity and conductivity of homogeneous materials, from front-face or rear-face photothermal responses to pseudo random binary signals. To this end, we used side-by-side neuro-fuzzy systems (adaptive network-based fuzzy inference systems) trained with a hybrid algorithm. We focused on the impact on generalization of both the examples used during training and the fuzzification process. In addition, computation time was a key point to consider. That is why the developed algorithm is computationally tractable and allows both the thermal diffusivity and conductivity of homogeneous materials to be simultaneously estimated with very good accuracy (the generalization error ranges between 4.6% and 6.2%).

  14. Grid-based Molecular Footprint Comparison Method for Docking and De Novo Design: Application to HIVgp41

    PubMed Central

    Mukherjee, Sudipto; Rizzo, Robert C.

    2014-01-01

    Scoring functions are a critically important component of computer-aided screening methods for the identification of lead compounds during early stages of drug discovery. Here, we present a new multi-grid implementation of the footprint similarity (FPS) scoring function recently developed in our laboratory, which has proven useful for identifying compounds that bind to a protein on a per-residue basis in a way that resembles a known reference. The grid-based FPS method is much faster than its Cartesian-space counterpart, which makes it computationally tractable for on-the-fly docking, virtual screening, or de novo design. In this work, we establish that: (i) relatively few grids can be used to accurately approximate Cartesian space footprint similarity, (ii) the method yields improved success over the standard DOCK energy function for pose identification across a large test set of experimental co-crystal structures, for cross-docking, and for database enrichment, and (iii) grid-based FPS scoring can be used to tailor construction of new molecules to have specific properties, as demonstrated in a series of test cases targeting the viral protein HIVgp41. The method will be made available in the program DOCK6. PMID:23436713
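
    A minimal sketch of the footprint idea, comparing per-residue interaction-energy vectors of candidate poses against a reference footprint, is given below. The Euclidean distance and the toy energies are illustrative and do not reproduce DOCK's FPS implementation.

      import numpy as np

      def footprint_distance(candidate, reference):
          # candidate, reference: per-residue ligand-receptor interaction energies
          # (an illustrative footprint definition); lower distance = closer match.
          return float(np.linalg.norm(np.asarray(candidate) - np.asarray(reference)))

      reference = np.array([-2.1, -0.3, -4.5, 0.0, -1.2])   # known binder's footprint
      poses = {"pose_A": [-2.0, -0.4, -4.1, 0.1, -1.0],
               "pose_B": [-0.2, -3.9, -0.1, -2.5, 0.0]}
      ranked = sorted(poses, key=lambda p: footprint_distance(poses[p], reference))
      print("best footprint match:", ranked[0])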

  15. Quantum simulation of transverse Ising models with Rydberg atoms

    NASA Astrophysics Data System (ADS)

    Schauss, Peter

    2018-04-01

    Quantum Ising models are canonical models for the study of quantum phase transitions (Sachdev 1999 Quantum Phase Transitions (Cambridge: Cambridge University Press)) and are the underlying concept for many analogue quantum computing and quantum annealing ideas (Tanaka et al Quantum Spin Glasses, Annealing and Computation (Cambridge: Cambridge University Press)). Here we focus on the implementation of finite-range interacting Ising spin models, which are barely tractable numerically. Recent experiments with cold atoms have reached the interaction-dominated regime in quantum Ising magnets via optical coupling of trapped neutral atoms to Rydberg states. This approach allows for the tunability of all relevant terms in an Ising spin Hamiltonian with 1/r^6 interactions in transverse and longitudinal fields. This review summarizes the recent progress of these implementations in Rydberg lattices with site-resolved detection. Strong correlations in quantum Ising models have been observed in several experiments, starting from a single excitation in the superatom regime up to the point of crystallization. The rapid progress in this field makes spin systems based on Rydberg atoms a promising platform for quantum simulation because of the unmatched flexibility and strength of interactions combined with high control and good isolation from the environment.
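
    For orientation, the transverse/longitudinal-field Ising Hamiltonian with van der Waals interactions that such Rydberg-lattice experiments typically realize can be written, in a commonly quoted form (Rabi frequency Omega, detuning Delta, Rydberg projector n_i, interaction coefficient C_6, interatomic distance r_ij), as

      H = \frac{\hbar\Omega}{2} \sum_i \sigma_x^{(i)}
          - \hbar\Delta \sum_i n_i
          + \sum_{i<j} \frac{C_6}{r_{ij}^{6}} \, n_i n_j

    The exact conventions (signs, factors of hbar) vary between papers, so this should be read as a generic statement of the model class rather than the specific Hamiltonian of any experiment reviewed here.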

  16. Evaluating the effects of real power losses in optimal power flow based storage integration

    DOE PAGES

    Castillo, Anya; Gayme, Dennice

    2017-03-27

    This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.

  17. The tangled bank of amino acids.

    PubMed

    Goldstein, Richard A; Pollock, David D

    2016-07-01

    The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. © 2016 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
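
    The standard machinery the abstract refers to, substitution probabilities over a branch of length t obtained as the matrix exponential of a rate matrix Q, can be illustrated with a toy two-state example; the states and rates are invented, and the authors' epistatic, site-dependent models go well beyond this.

      import numpy as np
      from scipy.linalg import expm

      # toy 2-state rate matrix (rows sum to zero); stationary distribution is (0.25, 0.75)
      Q = np.array([[-0.3,  0.3],
                    [ 0.1, -0.1]])

      for t in (0.1, 1.0, 10.0):
          P = expm(Q * t)        # P[i, j] = Pr(state j after branch length t | state i)
          print(f"t = {t}:\n{np.round(P, 3)}")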

  18. Potentials of Mean Force With Ab Initio Mixed Hamiltonian Models of Solvation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Michel; Schenter, Gregory K.; Garrett, Bruce C.

    2003-08-01

    We give an account of a computationally tractable and efficient procedure for the calculation of potentials of mean force using mixed Hamiltonian models of electronic structure where quantum subsystems are described with computationally intensive ab initio wavefunctions. The mixed Hamiltonian is mapped into an all-classical Hamiltonian that is amenable to a thermodynamic perturbation treatment for the calculation of free energies. A small number of statistically uncorrelated (solute-solvent) configurations are selected from the Monte Carlo random walk generated with the all-classical Hamiltonian approximation. Those are used in the averaging of the free energy using the mixed quantum/classical Hamiltonian. The methodology is illustrated for the micro-solvated SN2 substitution reaction of methyl chloride by hydroxide. We also compare the potential of mean force calculated with the above protocol with an approximate formalism, one in which the potential of mean force calculated with the all-classical Hamiltonian is simply added to the energy of the isolated (non-solvated) solute along the reaction path. Interestingly the latter approach is found to be in semi-quantitative agreement with the full mixed Hamiltonian approximation.
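
    The thermodynamic perturbation step described above is, in its standard Zwanzig form, a reweighting of configurations sampled with the inexpensive all-classical Hamiltonian H_0 to obtain the free energy difference to the mixed quantum/classical Hamiltonian H_1; this is a generic statement of the identity, not the authors' specific protocol:

      \Delta A_{0 \rightarrow 1}
        = -k_B T \, \ln \left\langle
            \exp\!\left[ -\frac{H_1(\mathbf{x}) - H_0(\mathbf{x})}{k_B T} \right]
          \right\rangle_{0}

    where the angle brackets denote an average over configurations x drawn from the H_0 ensemble, e.g. the Monte Carlo random walk mentioned in the abstract.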

  19. Cooperative inference: Features, objects, and collections.

    PubMed

    Searcy, Sophia Ray; Shafto, Patrick

    2016-10-01

    Cooperation plays a central role in theories of development, learning, cultural evolution, and education. We argue that existing models of learning from cooperative informants have fundamental limitations that prevent them from explaining how cooperation benefits learning. First, existing models are shown to be computationally intractable, suggesting that they cannot apply to realistic learning problems. Second, existing models assume a priori agreement about which concepts are favored in learning, which leads to a conundrum: Learning fails without precise agreement on bias, yet there is no single rational choice. We introduce cooperative inference, a novel framework for cooperation in concept learning, which resolves these limitations. Cooperative inference generalizes the notion of cooperation used in previous models from omission of labeled objects to the omission of values of features, labels for objects, and labels for collections of objects. The result is an approach that is computationally tractable, does not require a priori agreement about biases, applies to both Boolean and first-order concepts, and begins to approximate the richness of real-world concept learning problems. We conclude by discussing relations to and implications for existing theories of cognition, cognitive development, and cultural evolution. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and a mixed type of standby support has been studied. The repair of broken-down machines is carried out on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair only when the pre-specified workload of failed machines has built up: the first (second) repairman turns on only when the workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS), and the validation of the numerical results obtained by the Runge-Kutta approach is facilitated by computational results generated with ANFIS.
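
    As a generic illustration of the numerical approach (not the paper's two-server, threshold-and-vacation model), the forward Chapman-Kolmogorov equations dp/dt = pQ of a simple single-repairman machine-repair chain can be integrated with an explicit Runge-Kutta scheme via scipy:

      import numpy as np
      from scipy.integrate import solve_ivp

      M, lam, mu = 5, 0.4, 1.0            # machines, failure rate per machine, repair rate

      # generator matrix Q over the number of failed machines (single repairman)
      Q = np.zeros((M + 1, M + 1))
      for n in range(M + 1):
          if n < M:
              Q[n, n + 1] = (M - n) * lam  # another machine fails
          if n > 0:
              Q[n, n - 1] = mu             # a repair completes
          Q[n, n] = -Q[n].sum()

      p0 = np.zeros(M + 1); p0[0] = 1.0    # start with all machines up
      sol = solve_ivp(lambda t, p: p @ Q, (0.0, 50.0), p0, method='RK45', rtol=1e-8)
      p_inf = sol.y[:, -1]
      print("long-run distribution of failed machines:", np.round(p_inf, 4))
      print("mean number of failed machines:", float(np.arange(M + 1) @ p_inf))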

  1. A perspective on quantum mechanics calculations in ADMET predictions.

    PubMed

    Bowen, J Phillip; Güner, Osman F

    2013-01-01

    Understanding the molecular basis of drug action has been an important objective for pharmaceutical scientists. With the increasing speed of computers and the implementation of quantum chemistry methodologies, pharmacodynamic and pharmacokinetic problems have become more computationally tractable. Historically, the former has been the focus of drug design, but within the last two decades efforts to understand the latter have increased. It takes about fifteen years and over $1 billion for a drug to go from laboratory hit, through lead optimization, to final approval by the U.S. Food and Drug Administration. While the costs have increased substantially, the overall clinical success rate for a compound to emerge from clinical trials is approximately 10%. Most of the attrition can be traced to ADMET (absorption, distribution, metabolism, excretion, and toxicity) problems, which is a powerful impetus to study these issues at an earlier stage in drug discovery. Quantum mechanics offers pharmaceutical scientists the opportunity to investigate pharmacokinetic problems at the molecular level prior to laboratory preparation and testing. This review will provide a perspective on the use of quantum mechanics, or quantum mechanics coupled with other classical methods, in the pharmacokinetic phase of drug discovery. A brief overview of the essential features of the theory will be given, and a few carefully selected examples will highlight the computational methods.

  2. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such datasets requires the ability to efficiently solve and optimize the nonlinear time domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature for the Richards equation inversion explicitly calculates the sensitivity matrix using finite difference or automatic differentiation, however, for large scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
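    A minimal sketch of the matrix-free idea at the heart of the approach described above: work with Jacobian-vector products instead of an explicit sensitivity matrix. The paper derives an implicit, adjoint-style sensitivity rather than the finite-difference surrogate used below, and the hypothetical `forward` model here only stands in for a Richards-equation simulation:

        import numpy as np

        def jacobian_vector_product(forward, m, v, eps=1e-7):
            """Directional derivative J(m) @ v of a forward model by central
            differences, so the dense Jacobian never has to be formed or stored."""
            return (forward(m + eps * v) - forward(m - eps * v)) / (2.0 * eps)

        # toy check against an analytic Jacobian
        forward = lambda m: np.array([m[0] ** 2 + m[1], np.sin(m[1])])
        m, v = np.array([1.0, 2.0]), np.array([0.3, -0.1])
        J = np.array([[2.0 * m[0], 1.0], [0.0, np.cos(m[1])]])
        assert np.allclose(jacobian_vector_product(forward, m, v), J @ v, atol=1e-5)

    Iterative least-squares solvers can then take Gauss-Newton steps using only such products (plus the corresponding adjoint products), which is what keeps large 3D inversions within modest memory budgets.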

  3. The Social Organization of the Computer Underground

    DTIC Science & Technology

    1989-08-01

    colleagues, with only small groups approaching peer relationships. Subject terms: HACK, Computer Underground. ...between individuals involved in a common activity (pp. 13-14). Assessing the degree and manner in which the...criminalized over the past several years. Hackers, and the "danger" that they present in our computer dependent

  4. Childhood pneumonia and crowding, bed-sharing and nutrition: a case-control study from The Gambia.

    PubMed

    Howie, S R C; Schellenberg, J; Chimah, O; Ideh, R C; Ebruke, B E; Oluwalana, C; Mackenzie, G; Jallow, M; Njie, M; Donkor, S; Dionisio, K L; Goldberg, G; Fornace, K; Bottomley, C; Hill, P C; Grant, C C; Corrah, T; Prentice, A M; Ezzati, M; Greenwood, B M; Smith, P G; Adegbola, R A; Mulholland, K

    2016-10-01

    Greater Banjul and Upper River Regions, The Gambia. To investigate tractable social, environmental and nutritional risk factors for childhood pneumonia. A case-control study examining the association of crowding, household air pollution (HAP) and nutritional factors with pneumonia was undertaken in children aged 2-59 months: 458 children with severe pneumonia, defined according to the modified WHO criteria, were compared with 322 children with non-severe pneumonia, and these groups were compared to 801 neighbourhood controls. Controls were matched by age, sex, area and season. Strong evidence was found of an association between bed-sharing with someone with a cough and severe pneumonia (adjusted OR [aOR] 5.1, 95%CI 3.2-8.2, P < 0.001) and non-severe pneumonia (aOR 7.3, 95%CI 4.1-13.1, P < 0.001), with 18% of severe cases estimated to be attributable to this risk factor. Malnutrition and pneumonia had clear evidence of association, which was strongest between severe malnutrition and severe pneumonia (aOR 8.7, 95%CI 4.2-17.8, P < 0.001). No association was found between pneumonia and individual carbon monoxide exposure as a measure of HAP. Bed-sharing with someone with a cough is an important risk factor for severe pneumonia, and potentially tractable to intervention, while malnutrition remains an important tractable determinant.

  5. Balancing selfishness and norm conformity can explain human behavior in large-scale prisoner's dilemma games and can poise human groups near criticality

    NASA Astrophysics Data System (ADS)

    Realpe-Gómez, John; Andrighetto, Giulia; Nardin, Luis Gustavo; Montoya, Javier Antonio

    2018-04-01

    Cooperation is central to the success of human societies as it is crucial for overcoming some of the most pressing social challenges of our time; still, how human cooperation is achieved and may persist is a main puzzle in the social and biological sciences. Recently, scholars have recognized the importance of social norms as solutions to major local and large-scale collective action problems, from the management of water resources to the reduction of smoking in public places to the change in fertility practices. Yet a well-founded model of the effect of social norms on human cooperation is still lacking. Using statistical-physics techniques and integrating findings from cognitive and behavioral sciences, we present an analytically tractable model in which individuals base their decisions to cooperate both on the economic rewards they obtain and on the degree to which their action complies with social norms. Results from this parsimonious model are in agreement with observations in recent large-scale experiments with humans. We also find the phase diagram of the model and show that the experimental human group is poised near a critical point, a regime where recent work suggests living systems respond to changing external conditions in an efficient and coordinated manner.
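    A minimal sketch of the kind of decision rule the abstract describes, in which the probability of cooperating depends on both material payoff and a normative term. The functional form, weights, and parameter values below are illustrative assumptions, not the authors' calibrated model:

        import numpy as np

        def cooperation_probability(payoff_coop, payoff_defect, norm_compliance,
                                    norm_weight=0.5, beta=2.0):
            """Logit choice between cooperating and defecting. The utility of
            cooperating mixes the economic reward with the degree to which
            cooperation complies with the perceived social norm (e.g. the expected
            fraction of cooperating neighbours); beta sets the decision noise."""
            u_coop = (1.0 - norm_weight) * payoff_coop + norm_weight * norm_compliance
            u_defect = (1.0 - norm_weight) * payoff_defect
            return 1.0 / (1.0 + np.exp(-beta * (u_coop - u_defect)))

        # example: a modest payoff disadvantage can be offset by a strong norm
        print(cooperation_probability(payoff_coop=0.4, payoff_defect=0.6, norm_compliance=0.9))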

  6. A novel mouse model identifies cooperating mutations and therapeutic targets critical for chronic myeloid leukemia progression

    PubMed Central

    Giotopoulos, George; van der Weyden, Louise; Osaki, Hikari; Rust, Alistair G.; Gallipoli, Paolo; Meduri, Eshwar; Horton, Sarah J.; Chan, Wai-In; Foster, Donna; Prinjha, Rab K.; Pimanda, John E.; Tenen, Daniel G.; Vassiliou, George S.; Koschmieder, Steffen; Adams, David J.

    2015-01-01

    The introduction of highly selective ABL-tyrosine kinase inhibitors (TKIs) has revolutionized therapy for chronic myeloid leukemia (CML). However, TKIs are only efficacious in the chronic phase of the disease and effective therapies for TKI-refractory CML, or after progression to blast crisis (BC), are lacking. Whereas the chronic phase of CML is dependent on BCR-ABL, additional mutations are required for progression to BC. However, the identity of these mutations and the pathways they affect are poorly understood, hampering our ability to identify therapeutic targets and improve outcomes. Here, we describe a novel mouse model that allows identification of mechanisms of BC progression in an unbiased and tractable manner, using transposon-based insertional mutagenesis on the background of chronic phase CML. Our BC model is the first to faithfully recapitulate the phenotype, cellular and molecular biology of human CML progression. We report a heterogeneous and unique pattern of insertions identifying known and novel candidate genes and demonstrate that these pathways drive disease progression and provide potential targets for novel therapeutic strategies. Our model greatly informs the biology of CML progression and provides a potent resource for the development of candidate therapies to improve the dismal outcomes in this highly aggressive disease. PMID:26304963

  7. Discovery of Supernumerary B Chromosomes in Drosophila melanogaster

    PubMed Central

    Bauerly, Elisabeth; Hughes, Stacie E.; Vietti, Dana R.; Miller, Danny E.; McDowell, William; Hawley, R. Scott

    2014-01-01

    B chromosomes are small, heterochromatic chromosomes that are transmitted in a non-Mendelian manner. We have identified a stock of Drosophila melanogaster that recently (within the last decade) acquired an average of 10 B chromosomes per fly. These B chromosomes are transmitted by both males and females and can be maintained for multiple generations in a wild-type genetic background despite the fact that they cause high levels of 4th chromosome meiotic nondisjunction in females. Most curiously, these B chromosomes are mitotically unstable, suggesting either the absence of critical chromosomal sites or the inability of the meiotic or mitotic systems to cope with many additional chromosomes. These B chromosomes also contain centromeres and are primarily composed of the heterochromatic AATAT satellite sequence. Although the AATAT sequence comprises the majority of the 4th chromosome heterochromatin, the B chromosomes lack most, if not all, 4th chromosome euchromatin. Presumably as a consequence of their heterochromatic content, these B chromosomes significantly modify position-effect variegation in two separate reporter systems, acting as enhancers of variegation in one case and suppressors in the other. The identification of B chromosomes in a genetically tractable organism like D. melanogaster will facilitate studies of chromosome evolution and the analysis of the mechanisms by which meiotic and mitotic processes cope with additional chromosomes. PMID:24478336

  8. De Novo Deep Transcriptome Analysis of Medicinal Plants for Gene Discovery in Biosynthesis of Plant Natural Products.

    PubMed

    Han, R; Rai, A; Nakamura, M; Suzuki, H; Takahashi, H; Yamazaki, M; Saito, K

    2016-01-01

    Study on transcriptome, the entire pool of transcripts in an organism or single cells at certain physiological or pathological stage, is indispensable in unraveling the connection and regulation between DNA and protein. Before the advent of deep sequencing, microarray was the main approach to handle transcripts. Despite obvious shortcomings, including limited dynamic range and difficulties to compare the results from distinct experiments, microarray was widely applied. During the past decade, next-generation sequencing (NGS) has revolutionized our understanding of genomics in a fast, high-throughput, cost-effective, and tractable manner. By adopting NGS, efficiency and fruitful outcomes concerning the efforts to elucidate genes responsible for producing active compounds in medicinal plants were profoundly enhanced. The whole process involves steps, from the plant material sampling, to cDNA library preparation, to deep sequencing, and then bioinformatics takes over to assemble enormous-yet fragmentary-data from which to comb and extract information. The unprecedentedly rapid development of such technologies provides so many choices to facilitate the task, which can cause confusion when choosing the suitable methodology for specific purposes. Here, we review the general approaches for deep transcriptome analysis and then focus on their application in discovering biosynthetic pathways of medicinal plants that produce important secondary metabolites. © 2016 Elsevier Inc. All rights reserved.

  9. Science and Sentiment: Grinnell's Fact-Based Philosophy of Biodiversity Conservation.

    PubMed

    Shavit, Ayelet; Griesemer, James R

    2018-06-01

    At the beginning of the twentieth century, the biologist Joseph Grinnell made a distinction between science and sentiment for producing fact-based generalizations on how to conserve biodiversity. We are inspired by Grinnellian science, which successfully produced a century-long impact on studying and conserving biodiversity that runs orthogonal to some familiar philosophical distinctions such as fact versus value, emotion versus reason and basic versus applied science. According to Grinnell, unlike sentiment-based generalizations, a fact-based generalization traces its diverse commitments and thus becomes tractable for its audience. We argue that foregrounding tractability better explains Grinnell's practice in the context of his time as well as in the context of current discourse among scientists over the political "biases" of biodiversity research and its problem of "reproducibility."

  10. Identification of Small RNA-Protein Partners in Plant Symbiotic Bacteria.

    PubMed

    Robledo, Marta; Matia-González, Ana M; García-Tomsig, Natalia I; Jiménez-Zurdo, José I

    2018-01-01

    The identification of the protein partners of bacterial small noncoding RNAs (sRNAs) is essential to understand the mechanistic principles and functions of riboregulation in prokaryotic cells. Here, we describe an optimized affinity chromatography protocol that enables purification of in vivo formed sRNA-protein complexes in Sinorhizobium meliloti, a genetically tractable nitrogen-fixing plant symbiotic bacterium. The procedure requires the tagging of the desired sRNA with the MS2 aptamer, which is affinity-captured by the MS2-MBP protein conjugated to an amylose resin. As proof of principle, we show recovery of the RNA chaperone Hfq associated to the strictly Hfq-dependent AbcR2 trans-sRNA. This method can be applied for the investigation of sRNA-protein interactions on a broad range of genetically tractable α-proteobacteria.

  11. Harnessing the hygroscopic and biofluorescent behaviors of genetically tractable microbial cells to design biohybrid wearables.

    PubMed

    Wang, Wen; Yao, Lining; Cheng, Chin-Yi; Zhang, Teng; Atsumi, Hiroshi; Wang, Luda; Wang, Guanyun; Anilionyte, Oksana; Steiner, Helene; Ou, Jifei; Zhou, Kang; Wawrousek, Chris; Petrecca, Katherine; Belcher, Angela M; Karnik, Rohit; Zhao, Xuanhe; Wang, Daniel I C; Ishii, Hiroshi

    2017-05-01

    Cells' biomechanical responses to external stimuli have been intensively studied but rarely implemented into devices that interact with the human body. We demonstrate that the hygroscopic and biofluorescent behaviors of living cells can be engineered to design biohybrid wearables, which give multifunctional responsiveness to human sweat. By depositing genetically tractable microbes on a humidity-inert material to form a heterogeneous multilayered structure, we obtained biohybrid films that can reversibly change shape and biofluorescence intensity within a few seconds in response to environmental humidity gradients. Experimental characterization and mechanical modeling of the film were performed to guide the design of a wearable running suit and a fluorescent shoe prototype with bio-flaps that dynamically modulates ventilation in synergy with the body's need for cooling.

  12. Adaptive voting computer system

    NASA Technical Reports Server (NTRS)

    Koczela, L. J.; Wilgus, D. S. (Inventor)

    1974-01-01

    A computer system is reported that uses adaptive voting to tolerate failures and operates in a fail-operational, fail-safe manner. Each of four computers is individually connected to one of four external input/output (I/O) busses which interface with external subsystems. Each computer is connected to receive input data and commands from the other three computers and to furnish output data commands to the other three computers. An adaptive control apparatus including a voter-comparator-switch (VCS) is provided for each computer to receive signals from each of the computers and permits adaptive voting among the computers to permit the fail-operational, fail-safe operation.

  13. An intelligent multi-media human-computer dialogue system

    NASA Technical Reports Server (NTRS)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  14. Using the TouchMath Program to Teach Mathematical Computation to At-Risk Students and Students with Disabilities

    ERIC Educational Resources Information Center

    Ellingsen, Ryleigh; Clinton, Elias

    2017-01-01

    This manuscript reviews the empirical literature of the TouchMath© instructional program. The TouchMath© program is a commercial mathematics series that uses a dot notation system to provide multisensory instruction of computation skills. Using the program, students are taught to solve computational tasks in a multisensory manner that does not…

  15. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  16. Computing the non-Markovian coarse-grained interactions derived from the Mori-Zwanzig formalism in molecular systems: Application to polymer melts

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em

    2017-01-01

    Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
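    The memory-versus-auxiliary-variable trade-off can be seen in a toy one-particle setting: for an exponential kernel, the history integral obeys a local ODE, so the non-Markovian force can be updated without storing the trajectory. A deterministic sketch (random forces and the pairwise DPD structure omitted), with illustrative parameter values:

        import numpy as np

        def gle_aux_step(x, v, z, dt, force, c=1.0, tau=1.0, mass=1.0):
            """One Euler step of a generalized Langevin equation whose memory kernel
            is K(t) = c * exp(-t / tau). The auxiliary variable z carries the
            convolution z(t) = int_0^t exp(-(t - s) / tau) v(s) ds, so the memory
            force -c * z is available at O(1) cost per step instead of a sum over
            the stored velocity history (the LE-NM style of evaluation)."""
            a = (force(x) - c * z) / mass
            v_new = v + dt * a
            z_new = z + dt * (v - z / tau)
            x_new = x + dt * v_new
            return x_new, v_new, z_new

    A kernel fitted as a sum of several exponentials simply carries one auxiliary variable per term, which is the higher-dimensional Markovian dynamics referred to above.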

  17. CAPRI: Using a Geometric Foundation for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2002-01-01

    CAPRI (Computational Analysis Programming Interface) is a software development tool intended to make computerized design, simulation and analysis faster and more efficient. The computational steps traditionally taken for most engineering analysis (Computational Fluid Dynamics (CFD), structural analysis, etc.) are: Surface Generation, usually by employing a Computer Aided Design (CAD) system; Grid Generation, preparing the volume for the simulation; Flow Solver, producing the results at the specified operational point; Post-processing Visualization, interactively attempting to understand the results. It should be noted that the structures problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go out as IGES files. The outputs from Grid Generators and Solvers do not really have standards, though there are a couple of file formats that can be used for a subset of the gridding data (e.g., the PLOT3D format and the upcoming CGNS). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Instead of the serial approach to analysis, CAPRI takes a geometry-centric approach. CAPRI is a software building tool-kit that refers to two ideas: (1) A simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. The connection to the geometry is made through an Application Programming Interface (API) and not a file system.

  18. Nonlinear Model Predictive Control for Cooperative Control and Estimation

    NASA Astrophysics Data System (ADS)

    Ru, Pengkai

    Recent advances in computational power have made it possible to do expensive online computations for control systems. It is becoming more realistic to perform computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. Being one of the most important optimization based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched and its stability can be guaranteed in the majority of its application scenarios. However, one issue that still remains with linear MPC is that it completely ignores the system's inherent nonlinearities thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC, would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. We tried to strike a balance in this dissertation by employing a state representation technique, namely, the state dependent coefficient (SDC) representation. This new technique would render an improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable. In fact, the computational power required is bounded only by a constant factor of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
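    A closely related, easily coded instance of the state-dependent coefficient idea is the state-dependent Riccati equation (SDRE) feedback: refactor the dynamics as x_dot = A(x)x + B(x)u and solve a fresh LQR problem at the current state. The sketch below illustrates that flavor of controller, not the dissertation's full MPC formulation; the pendulum factorization is an assumed example:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def sdre_control(x, A_of_x, B_of_x, Q, R):
            """Freeze the SDC pair (A(x), B(x)) at the current state, solve the
            continuous-time algebraic Riccati equation, and apply the resulting
            state-dependent LQR feedback u = -K(x) x."""
            A, B = A_of_x(x), B_of_x(x)
            P = solve_continuous_are(A, B, Q, R)
            K = np.linalg.solve(R, B.T @ P)
            return -K @ x

        # example SDC factorization of a pendulum: g * sin(x1) = g * x1 * sinc(x1 / pi)
        A_of_x = lambda x: np.array([[0.0, 1.0], [9.81 * np.sinc(x[0] / np.pi), 0.0]])
        B_of_x = lambda x: np.array([[0.0], [1.0]])
        u = sdre_control(np.array([0.5, 0.0]), A_of_x, B_of_x, np.eye(2), np.eye(1))

    Re-solving such a state-frozen problem over a receding horizon at each step is what keeps the computational cost within a constant factor of the fully linearized MPC.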

  19. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    NASA Astrophysics Data System (ADS)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  20. Identifying research priorities for public health research to address health inequalities: use of Delphi-like survey methods.

    PubMed

    Turner, S; Ollerhead, E; Cook, A

    2017-10-09

    In the funding of health research and public health research it is vital that research questions posed are important and that funded research meets a research need or a gap in evidence. Many methods are used in the identification of research priorities, however, these can be resource intensive, costly and logistically challenging. Identifying such research priorities can be particularly challenging for complex public health problems as there is a need to consult a number of experts across disciplines and with a range of expertise. This study investigated the use of Delphi-like survey methods in identifying important research priorities relating to health inequalities and framing tractable research questions for topic areas identified. The study was conducted in two phases, both using Delphi-like survey methods. Firstly, public health professionals with an interest in health inequalities were asked to identify research priorities. Secondly academic researchers were asked to frame tractable research questions relating to the priorities identified. These research priorities identified using Delphi-like survey methods were subsequently compared to those identified using different methods. A total of 52 public health professionals and 21 academics across the United Kingdom agreed to take part. The response rates were high, from public health professionals across three survey rounds (69%, 50% and 40%) and from academics across one round (52%), indicating that participants were receptive to the method and motivated to respond. The themes identified as encompassing the most important research priorities were mental health, healthy environment and health behaviours. Within these themes, the topic areas that emerged most strongly included community interventions for prevention of mental health problems and the food and alcohol environment. Some responses received from academic researchers were (as requested) in the form of tractable research questions, whereas others contributed further potential topic areas instead. Delphi-like survey methods are practical and productive as a means of obtaining opinions from a wide number of relevant experts identifying potential priority topic areas for research; however, this method is less appropriate for framing tractable research questions.

  1. 32 CFR 701.53 - FOIA fee schedule.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... above 45.00 (e) Audiovisual documentary materials. Search costs are computed as for any other record... the work. Audiovisual materials provided to a requester need not be in reproducible format or quality... shall be computed in the manner described for audiovisual documentary material. (g) Costs for special...

  2. Birth/birth-death processes and their computable transition probabilities with biological applications.

    PubMed

    Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2018-03-01

    Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.

  3. Validation Test Report For The CRWMS Analysis and Logistics Visually Interactive Model Calvin Version 3.0, 10074-Vtr-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: Spent nuclear fuel (SNF) and reactivity calculations; Options for altering reactor life; Adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; Fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); Transportation cask shipping to and storage at an Interim Storage Facility (ISF); Reactor pool allocation options; and Disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  4. Quasi-laminar stability and sensitivity analyses for turbulent flows: Prediction of low-frequency unsteadiness and passive control

    NASA Astrophysics Data System (ADS)

    Mettot, Clément; Sipp, Denis; Bézard, Hervé

    2014-04-01

    This article presents a quasi-laminar stability approach to identify in high-Reynolds number flows the dominant low-frequencies and to design passive control means to shift these frequencies. The approach is based on a global linear stability analysis of mean-flows, which correspond to the time-average of the unsteady flows. Contrary to the previous work by Meliga et al. ["Sensitivity of 2-D turbulent flow past a D-shaped cylinder using global stability," Phys. Fluids 24, 061701 (2012)], we use the linearized Navier-Stokes equations based solely on the molecular viscosity (leaving aside any turbulence model and any eddy viscosity) to extract the least stable direct and adjoint global modes of the flow. Then, we compute the frequency sensitivity maps of these modes, so as to predict before hand where a small control cylinder optimally shifts the frequency of the flow. In the case of the D-shaped cylinder studied by Parezanović and Cadot [J. Fluid Mech. 693, 115 (2012)], we show that the present approach well captures the frequency of the flow and recovers accurately the frequency control maps obtained experimentally. The results are close to those already obtained by Meliga et al., who used a more complex approach in which turbulence models played a central role. The present approach is simpler and may be applied to a broader range of flows since it is tractable as soon as mean-flows — which can be obtained either numerically from simulations (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), unsteady Reynolds-Averaged-Navier-Stokes (RANS), steady RANS) or from experimental measurements (Particle Image Velocimetry - PIV) — are available. We also discuss how the influence of the control cylinder on the mean-flow may be more accurately predicted by determining an eddy-viscosity from numerical simulations or experimental measurements. From a technical point of view, we finally show how an existing compressible numerical simulation code may be used in a black-box manner to extract the global modes and sensitivity maps.

  5. Crime or War: Cyberspace Law and Its Implications for Intelligence

    DTIC Science & Technology

    2011-02-11

    ...protected computer or gaining and using information in a manner exceeding authorized access. Robert Morris, a Cornell University computer science...hacker and not the Iranian government. There are hundreds of hackers conducting computer intrusions each day. The previously cited example of Robert

  6. Just Scan It!-Weapon Reconstruction in Computed Tomography on Historical and Current Swiss Military Guns.

    PubMed

    Franckenberg, Sabine; Binder, Thomas; Bolliger, Stephan; Thali, Michael J; Ross, Steffen G

    2016-09-01

    Cross-sectional imaging, such as computed tomography, has been increasingly implemented in both historic and recent postmortem forensic investigations. It aids in determining cause and manner of death as well as in correlating injuries to possible weapons. This study illuminates the feasibility of reconstructing guns in computed tomography and gives a distinct overview of historic and recent Swiss Army guns.

  7. The Influence of Large-Scale Computing on Aircraft Structural Design.

    DTIC Science & Technology

    1986-04-01

    the customer in the most cost-effective manner. Computer facility organizations became computer resource power brokers. A good data processing...capabilities generated on other processors can be easily used. This approach is easily implementable and provides a good strategy for using existing...assistance to member nations for the purpose of increasing their scientific and technical potential; - Recommending effective ways for the member nations to

  8. Surface Transportation Weather Decision Support Requirements - Executive Summary, Version 1.0

    DOT National Transportation Integrated Search

    1999-12-16

    Weather: it affects the visibility, tractability, maneuverability, vehicle stability, exhaust emissions and structural integrity of the surface transportation system. Thereby weather affects the safety, mobility, productivity and environmental impact...

  9. Mesh and Time-Step Independent Computational Fluid Dynamics (CFD) Solutions

    ERIC Educational Resources Information Center

    Nijdam, Justin J.

    2013-01-01

    A homework assignment is outlined in which students learn Computational Fluid Dynamics (CFD) concepts of discretization, numerical stability and accuracy, and verification in a hands-on manner by solving physically realistic problems of practical interest to engineers. The students solve a transient-diffusion problem numerically using the common…

  10. 32 CFR 518.20 - Collection of fees and fee rates.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Audiovisual documentary materials. Search costs are computed as for any other record. Duplication cost is the.... Audiovisual materials provided to a requester need not be in reproducible format or quality. Army audiovisual... any record not described above shall be computed in the manner described for audiovisual documentary...

  11. Managing the Risks Associated with End-User Computing.

    ERIC Educational Resources Information Center

    Alavi, Maryam; Weiss, Ira R.

    1986-01-01

    Identifies organizational risks of end-user computing (EUC) associated with different stages of the end-user applications life cycle (analysis, design, implementation). Generic controls are identified that address each of the risks enumerated in a manner that allows EUC management to select those most appropriate to their EUC environment. (5…

  12. Teaching Recognition of Normal and Abnormal Heart Sounds Using Computer-Assisted Instruction

    ERIC Educational Resources Information Center

    Musselman, Eugene E.; Grimes, George M.

    1976-01-01

    The computer is being used in an innovative manner to teach the recognition of normal and abnormal canine heart sounds at the University of Chicago. Experience thus far indicates that the PLATO program resources allow the maximum development of the student's proficiency in auscultation. (Editor/LBH)

  13. Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach

    NASA Astrophysics Data System (ADS)

    Demirer, Nazli

    The control of systems with autonomous mobile agents has been a point of interest recently, with many applications like surveillance, coverage, searching over an area with probabilistic target locations or exploring an area. In all of these applications, the main goal of the swarm is to distribute itself over an operational space to achieve mission objectives specified by the density of the swarm. This research focuses on the problem of controlling the distribution of multi-agent systems considering a hierarchical control structure where the whole swarm coordination is achieved at the high level and individual vehicle/agent control is managed at the low level. High-level coordination algorithms use macroscopic models that describe the collective behavior of the whole swarm and specify the agent motion commands, whose execution will lead to the desired swarm behavior. The low-level control laws execute the motion to follow these commands at the agent level. The main objective of this research is to develop high-level decision control policies and algorithms to achieve physically realizable commanding of the agents by imposing mission constraints on the distribution. We also make some connections with decentralized low-level motion control. This dissertation proposes a Markov chain-based method to control the density distribution of the whole system, where the implementation can be achieved in a decentralized manner with no communication between agents, since establishing communication among a large number of agents is highly challenging. The ultimate goal is to guide the overall density distribution of the system to a prescribed steady-state desired distribution while satisfying desired transition and safety constraints. The desired distribution is determined by the mission requirements; in an area-search application, for example, it should closely match the probabilistic target locations. The proposed method is applicable to both single-agent systems and systems with a large number of agents due to its probabilistic nature, where the probability distribution of each agent's state evolves according to a finite-state, discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem takes the form of a Linear Matrix Inequality (LMI) problem, with LMI formulations of the constraints. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on real-time density feedback. Both problems are convex optimization problems that can be solved in a reliable and tractable way using existing tools in the literature. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions, that is, there exists a set of actions for the ON mode and another set for the OFF mode. We formulate a new Markov chain synthesis problem, with additional measurements of the state transitions, in which a policy is designed to ensure desired safety and convergence properties for the underlying Markov chain.
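    The online, QP-based step mentioned above can be sketched as a small convex program: given the current density estimate and a motion-constraint graph, choose a column-stochastic Markov matrix that moves the density toward the desired one. The objective, constraint set, and variable names below are illustrative assumptions rather than the dissertation's exact formulation:

        import cvxpy as cp
        import numpy as np

        def markov_matrix_from_density_feedback(x_now, x_des, adjacency):
            """Solve a QP for a column-stochastic matrix M supported only on the
            allowed transition graph (adjacency includes self-loops), so that the
            one-step density M @ x_now is as close as possible to x_des."""
            n = x_now.size
            M = cp.Variable((n, n), nonneg=True)
            constraints = [cp.sum(M, axis=0) == 1,              # columns sum to one
                           cp.multiply(M, 1 - adjacency) == 0]  # forbid disallowed moves
            problem = cp.Problem(cp.Minimize(cp.sum_squares(M @ x_now - x_des)), constraints)
            problem.solve()
            return M.value

    Each agent can then sample its next bin from the column of M corresponding to its current bin, which is what makes the implementation decentralized.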

  14. A few scenarios still do not fit all

    NASA Astrophysics Data System (ADS)

    Schweizer, Vanessa

    2018-05-01

    For integrated climate change research, the Scenario Matrix Architecture provides a tractable menu of possible emissions trajectories, socio-economic futures and policy environments. However, the future of decision support may lie in searchable databases.

  15. The genetic basis of alcoholism: multiple phenotypes, many genes, complex networks.

    PubMed

    Morozova, Tatiana V; Goldman, David; Mackay, Trudy F C; Anholt, Robert R H

    2012-02-20

    Alcoholism is a significant public health problem. A picture of the genetic architecture underlying alcohol-related phenotypes is emerging from genome-wide association studies and work on genetically tractable model organisms.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
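    For context, the weight-window response that produces these long histories is the standard split/roulette rule sketched below; the CCFE adaptation relaxes the window when the arriving weight deviates too far from it. Parameter names and the survival-weight choice are illustrative, not MCNP's exact defaults:

        import random

        def apply_weight_window(weight, w_low, w_up, max_split=10):
            """Return the list of particle weights that continue after the window
            check: split heavy particles, play Russian roulette with light ones."""
            if weight > w_up:
                n_split = min(int(weight / w_up) + 1, max_split)
                return [weight / n_split] * n_split      # long histories come from huge splits
            if weight < w_low:
                w_survive = 0.5 * (w_low + w_up)
                if random.random() < weight / w_survive:
                    return [w_survive]                    # survives roulette with boosted weight
                return []                                 # killed
            return [weight]                               # inside the window: unchanged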

  17. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GPs modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
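    For comparison, the direct way to evaluate such a GP likelihood scales as the cube of the number of points. The sketch below builds one exponentially damped cosine term of the covariance (one component of the mixture of complex exponentials mentioned above) and evaluates the likelihood by Cholesky factorization; the paper's solver returns the same quantity with linear scaling:

        import numpy as np

        def gp_log_likelihood(t, y, yerr, amp, tau, omega):
            """O(N^3) reference evaluation of a GP log-likelihood for 1-D arrays
            t, y, yerr with kernel k(dt) = amp * exp(-|dt|/tau) * cos(omega*dt)
            plus per-point measurement noise."""
            dt = np.abs(t[:, None] - t[None, :])
            K = amp * np.exp(-dt / tau) * np.cos(omega * dt) + np.diag(yerr ** 2)
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
            return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
                    - 0.5 * y.size * np.log(2.0 * np.pi))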

  18. Protein structure determination by exhaustive search of Protein Data Bank derived databases.

    PubMed

    Stokes-Rees, Ian; Sliz, Piotr

    2010-12-14

    Parallel sequence and structure alignment tools have become ubiquitous and invaluable at all levels in the study of biological systems. We demonstrate the application and utility of this same parallel search paradigm to the process of protein structure determination, benefitting from the large and growing corpus of known structures. Such searches were previously computationally intractable. Through the method of Wide Search Molecular Replacement, developed here, they can be completed in a few hours with the aid of national-scale federated cyberinfrastructure. By dramatically expanding the range of models considered for structure determination, we show that small (less than 12% structural coverage) and low sequence identity (less than 20% identity) template structures can be identified through multidimensional template scoring metrics and used for structure determination. Many new macromolecular complexes can benefit significantly from such a technique due to the lack of known homologous protein folds or sequences. We demonstrate the effectiveness of the method by determining the structure of a full-length p97 homologue from Trichoplusia ni. Example cases with the MHC/T-cell receptor complex and the EmoB protein provide systematic estimates of minimum sequence identity, structure coverage, and structural similarity required for this method to succeed. We describe how this structure-search approach and other novel computationally intensive workflows are made tractable through integration with the US national computational cyberinfrastructure, allowing, for example, rapid processing of the entire Structural Classification of Proteins protein fragment database.

  19. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.

  20. Cutoff size need not strongly influence molecular dynamics results for solvated polypeptides.

    PubMed

    Beck, David A C; Armen, Roger S; Daggett, Valerie

    2005-01-18

    The correct treatment of van der Waals and electrostatic nonbonded interactions in molecular force fields is essential for performing realistic molecular dynamics (MD) simulations of solvated polypeptides. The most computationally tractable treatment of nonbonded interactions in MD utilizes a spherical distance cutoff (typically, 8-12 A) to reduce the number of pairwise interactions. In this work, we assess three spherical atom-based cutoff approaches for use with all-atom explicit solvent MD: abrupt truncation, a CHARMM-style electrostatic shift truncation, and our own force-shifted truncation. The chosen system for this study is an end-capped 17-residue alanine-based alpha-helical peptide, selected because of its use in previous computational and experimental studies. We compare the time-averaged helical content calculated from these MD trajectories with experiment. We also examine the effect of varying the cutoff treatment and distance on energy conservation. We find that the abrupt truncation approach is pathological in its inability to conserve energy. The CHARMM-style shift truncation performs quite well but suffers from energetic instability. On the other hand, the force-shifted spherical cutoff method conserves energy, correctly predicts the experimental helical content, and shows convergence in simulation statistics as the cutoff is increased. This work demonstrates that by using proper and rigorous techniques, it is possible to correctly model polypeptide dynamics in solution with a spherical cutoff. The inherent computational advantage of spherical cutoffs over Ewald summation (and related) techniques is essential in accessing longer MD time scales.
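    A sketch of the generic force-shifted form referred to above, for a single charge pair: both the pair energy and its derivative go to zero continuously at the cutoff, which is what allows a spherical truncation to conserve energy. The Coulomb prefactor assumes kcal/mol, angstrom, and elementary-charge units, and the authors' production implementation may differ in detail:

        import numpy as np

        COULOMB = 332.0636  # kcal*Angstrom/(mol*e^2)

        def force_shifted_coulomb(r, q_i, q_j, r_cut=10.0):
            """Force-shifted electrostatic pair energy with a spherical cutoff:
            V(r) = C*q_i*q_j*(1/r - 2/r_cut + r/r_cut**2) for r < r_cut, else 0,
            so both V and dV/dr vanish at r_cut."""
            r = np.asarray(r, dtype=float)
            v = COULOMB * q_i * q_j * (1.0 / r - 2.0 / r_cut + r / r_cut ** 2)
            return np.where(r < r_cut, v, 0.0)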

  1. Gyrofluid Modeling of Turbulent, Kinetic Physics

    NASA Astrophysics Data System (ADS)

    Despain, Kate Marie

    2011-12-01

    Gyrofluid models to describe plasma turbulence combine the advantages of fluid models, such as lower dimensionality and well-developed intuition, with those of gyrokinetics models, such as finite Larmor radius (FLR) effects. This allows gyrofluid models to be more tractable computationally while still capturing much of the physics related to the FLR of the particles. We present a gyrofluid model derived to capture the behavior of slow solar wind turbulence and describe the computer code developed to implement the model. In addition, we describe the modifications we made to a gyrofluid model and code that simulate plasma turbulence in tokamak geometries. Specifically, we describe a nonlinear phase mixing phenomenon, part of the E x B term, that was previously missing from the model. An inherently FLR effect, it plays an important role in predicting turbulent heat flux and diffusivity levels for the plasma. We demonstrate this importance by comparing results from the updated code to studies done previously by gyrofluid and gyrokinetic codes. We further explain what would be necessary to couple the updated gyrofluid code, gryffin, to a turbulent transport code, thus allowing gryffin to play a role in predicting profiles for fusion devices such as ITER and to explore novel fusion configurations. Such a coupling would require the use of Graphical Processing Units (GPUs) to make the modeling process fast enough to be viable. Consequently, we also describe our experience with GPU computing and demonstrate that we are poised to complete a gryffin port to this innovative architecture.

  2. Force and Stress along Simulated Dissociation Pathways of Cucurbituril-Guest Systems.

    PubMed

    Velez-Vega, Camilo; Gilson, Michael K

    2012-03-13

    The field of host-guest chemistry provides computationally tractable yet informative model systems for biomolecular recognition. We applied molecular dynamics simulations to study the forces and mechanical stresses associated with forced dissociation of aqueous cucurbituril-guest complexes with high binding affinities. First, the unbinding transitions were modeled with constant velocity pulling (steered dynamics) and a soft spring constant, to model atomic force microscopy (AFM) experiments. The computed length-force profiles yield rupture forces in good agreement with available measurements. We also used steered dynamics with high spring constants to generate paths characterized by a tight control over the specified pulling distance; these paths were then equilibrated via umbrella sampling simulations and used to compute time-averaged mechanical stresses along the dissociation pathways. The stress calculations proved to be informative regarding the key interactions determining the length-force profiles and rupture forces. In particular, the unbinding transition of one complex is found to be a stepwise process, which is initially dominated by electrostatic interactions between the guest's ammonium groups and the host's carbonyl groups, and subsequently limited by the extraction of the guest's bulky bicyclooctane moiety; the latter step requires some bond stretching at the cucurbituril's extraction portal. Conversely, the dissociation of a second complex with a more slender guest is mainly driven by successive electrostatic interactions between the guest's different ammonium groups and the host's carbonyl groups. The calculations also provide information on the origins of thermodynamic irreversibilities in these forced dissociation processes.
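
    The constant-velocity pulling protocol mentioned above can be sketched as a moving harmonic restraint; the toy trajectory, spring constant, and pulling speed below are illustrative placeholders rather than values from the simulations:

        import numpy as np

        def pulling_force(x_t, t, k, v, x0):
            """Harmonic bias of constant-velocity steered MD: the restraint centre moves
            as x0 + v*t and the instantaneous spring force on the guest is
            k * (x0 + v*t - x_t)."""
            return k * (x0 + v * t - x_t)

        # Toy trajectory of the pulled coordinate (in practice this comes from the MD engine).
        t = np.linspace(0.0, 10.0, 1001)            # ns
        x = 0.5 * np.tanh(t - 5.0) + 0.05 * t       # nm, sharp transition near t = 5 ns
        f = pulling_force(x, t, k=50.0, v=0.1, x0=x[0])   # kJ/mol/nm
        rupture_force = f.max()                     # crude estimate of the rupture force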

  3. Fault tolerant attitude control for small unmanned aircraft systems equipped with an airflow sensor array.

    PubMed

    Shen, H; Xu, Y; Dickinson, B T

    2014-11-18

    Inspired by sensing strategies observed in birds and bats, a new attitude control concept of directly using real-time pressure and shear stresses has recently been studied. It was shown that with an array of onboard airflow sensors, small unmanned aircraft systems can promptly respond to airflow changes and improve flight performances. In this paper, a mapping function is proposed to compute aerodynamic moments from the real-time pressure and shear data in a practical and computationally tractable formulation. Since many microscale airflow sensors are embedded on the small unmanned aircraft system surface, it is highly possible that certain sensors may fail. Here, an adaptive control system is developed that is robust to sensor failure as well as other numerical mismatches in calculating real-time aerodynamic moments. The advantages of the proposed method are shown in the following simulation cases: (i) feedback pressure and wall shear data from a distributed array of 45 airflow sensors; (ii) 50% failure of the symmetrically distributed airflow sensor array; and (iii) failure of all the airflow sensors on one wing. It is shown that even if 50% of the airflow sensors have failures, the aircraft is still stable and able to track the attitude commands.
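
    The following sketch illustrates, under stated assumptions, how a distributed array of pressure/shear readings can be reduced to body-axis aerodynamic moments while tolerating failed sensors; the influence matrix, patch areas, and reweighting rule are hypothetical stand-ins for the paper's mapping function and adaptive compensation:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sensors = 45

        # Hypothetical geometry: per-sensor moment-arm weights such that, with all
        # sensors healthy, the body-axis moments are approximately W.T @ p, i.e. a
        # discretised surface integral of the local pressure/shear readings p.
        W = rng.normal(size=(n_sensors, 3))
        area = np.full(n_sensors, 1.0 / n_sensors)    # per-sensor patch areas

        def aerodynamic_moments(p, healthy):
            """[roll, pitch, yaw] moments from the healthy sensors only.

            Failed sensors are dropped and the surviving patch areas are rescaled so
            the discretised surface integral still covers the whole wetted area -- a
            crude stand-in for the adaptive compensation discussed in the paper.
            """
            w = area * healthy
            w *= area.sum() / w.sum()                 # reweight surviving patches
            return (W * (w * p)[:, None]).sum(axis=0)

        p = rng.normal(size=n_sensors)                # instantaneous readings
        mask = np.ones(n_sensors, dtype=bool)
        mask[::2] = False                             # emulate 50% sensor failure
        print(aerodynamic_moments(p, mask))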

  4. Geometric Modeling of Inclusions as Ellipsoids

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J.

    2008-01-01

    Nonmetallic inclusions in gas turbine disk alloys can have a significant detrimental impact on fatigue life. Because large inclusions that lead to anomalously low lives occur infrequently, probabilistic approaches can be utilized to avoid the excessively conservative assumption of lifing to a large inclusion in a high stress location. A prerequisite to modeling the impact of inclusions on the fatigue life distribution is a characterization of the inclusion occurrence rate and size distribution. To help facilitate this process, a geometric simulation of the inclusions was devised. To make the simulation problem tractable, the irregularly sized and shaped inclusions were modeled as arbitrarily oriented ellipsoids with three independently dimensioned axes. Random orientation of the ellipsoid is accomplished through a series of three orthogonal rotations of axes. In this report, a set of mathematical models for the following parameters is described: the intercepted area of a randomly sectioned ellipsoid, the dimensions and orientation of the intercepted ellipse, the area of a randomly oriented sectioned ellipse, the depth and width of a randomly oriented sectioned ellipse, and the projected area of a randomly oriented ellipsoid. These parameters are necessary to determine an inclusion's potential to develop a propagating fatigue crack. Without these mathematical models, computationally expensive search algorithms would be required to compute these parameters.
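
    A minimal sketch of two of the quantities listed above, assuming illustrative semi-axes: a random orientation built from three orthogonal axis rotations, and the closed-form projected (shadow) area of the oriented ellipsoid. Note that uniformly sampled rotation angles do not give a uniform distribution over orientations; the sampling scheme here is only schematic.

        import numpy as np

        def rotation_matrix(alpha, beta, gamma):
            """Orientation via three successive rotations about the x, y and z axes."""
            ca, sa = np.cos(alpha), np.sin(alpha)
            cb, sb = np.cos(beta), np.sin(beta)
            cg, sg = np.cos(gamma), np.sin(gamma)
            Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
            Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
            Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def projected_area(a, b, c, R, n=np.array([0.0, 0.0, 1.0])):
            """Shadow area of an ellipsoid (semi-axes a, b, c, orientation R) projected
            along the unit direction n. In the body frame the shadow area is
            pi * sqrt(b^2 c^2 n1^2 + a^2 c^2 n2^2 + a^2 b^2 n3^2)."""
            n_body = R.T @ n          # express the projection axis in the body frame
            return np.pi * np.sqrt((b * c * n_body[0])**2 +
                                   (a * c * n_body[1])**2 +
                                   (a * b * n_body[2])**2)

        rng = np.random.default_rng(1)
        R = rotation_matrix(*rng.uniform(0.0, 2.0 * np.pi, size=3))
        print(projected_area(5.0, 2.0, 1.0, R))   # e.g. semi-axes in micrometres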

  5. Bayesian calibration for electrochemical thermal model of lithium-ion cells

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-07-01

    The pseudo-two-dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to the computational cost and the transient nature of the model. Due to the lack of complete physical understanding, this issue is aggravated at extreme conditions such as low temperature (LT) operation. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K to 263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before incorporation in the model. This capability is demonstrated by introducing temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low-temperature lithium-ion cell behavior.

  6. Molecular dynamics simulations on the inhibition of cyclin-dependent kinases 2 and 5 in the presence of activators.

    PubMed

    Zhang, Bing; Tan, Vincent B C; Lim, Kian Meng; Tay, Tong Earn

    2006-06-01

    Interests in CDK2 and CDK5 have stemmed mainly from their association with cancer and neuronal migration or differentiation related diseases and the need to design selective inhibitors for these kinases. Molecular dynamics (MD) simulations have not only become a viable approach to drug design because of advances in computer technology but are increasingly an integral part of drug discovery processes. It is common in MD simulations of inhibitor/CDK complexes to exclude the activator of the CDKs in the structural models to keep computational time tractable. In this paper, we present simulation results of CDK2 and CDK5 with roscovitine using models with and without their activators (cyclinA and p25). While p25 was found to induce slight changes in CDK5, the calculations support that cyclinA leads to significant conformational changes near the active site of CDK2. This suggests that detailed and structure-based inhibitor design targeted at these CDKs should employ activator-included models of the kinases. Comparisons between P/CDK2/cyclinA/roscovitine and CDK5/p25/roscovitine complexes reveal differences in the conformations of the glutamine around the active sites, which may be exploited to find highly selective inhibitors with respect to CDK2 and CDK5.

  7. Electrical Wave Propagation in a Minimally Realistic Fiber Architecture Model of the Left Ventricle

    NASA Astrophysics Data System (ADS)

    Song, Xianfeng; Setayeshgar, Sima

    2006-03-01

    Experimental results indicate a nested, layered geometry for the fiber surfaces of the left ventricle, where fiber directions are approximately aligned in each surface and gradually rotate through the thickness of the ventricle. Numerical and analytical results have highlighted the importance of this rotating anisotropy and its possible destabilizing role on the dynamics of scroll waves in excitable media with application to the heart. Based on the work of Peskin[1] and Peskin and McQueen[2], we present a minimally realistic model of the left ventricle that adequately captures the geometry and anisotropic properties of the heart as a conducting medium while being easily parallelizable, and computationally more tractable than fully realistic anatomical models. Complementary to fully realistic and anatomically-based computational approaches, studies using such a minimal model with the addition of successively realistic features, such as excitation-contraction coupling, should provide unique insight into the basic mechanisms of formation and obliteration of electrical wave instabilities. We describe our construction, implementation and validation of this model. [1] C. S. Peskin, Communications on Pure and Applied Mathematics 42, 79 (1989). [2] C. S. Peskin and D. M. McQueen, in Case Studies in Mathematical Modeling: Ecology, Physiology, and Cell Biology, 309(1996)

  8. A stochastic multi-scale method for turbulent premixed combustion

    NASA Astrophysics Data System (ADS)

    Cha, Chong M.

    2002-11-01

    The stochastic chemistry algorithm of Bunker et al. and Gillespie is used to perform the chemical reactions in a transported probability density function (PDF) modeling approach of turbulent combustion. Recently, Kraft & Wagner have demonstrated a 100-fold gain in computational speed (for a 100 species mechanism) using the stochastic approach over the conventional, direct integration method of solving for the chemistry. Here, the stochastic chemistry algorithm is applied to develop a new transported PDF model of turbulent premixed combustion. The methodology relies on representing the relevant spatially dependent physical processes as queuing events. The canonical problem of a one-dimensional premixed flame is used for validation. For the laminar case, molecular diffusion is described by a random walk. For the turbulent case, one of two different material transport submodels can provide the necessary closure: Taylor dispersion or Kerstein's one-dimensional turbulence approach. The former exploits ``eddy diffusivity'' and hence would be much more computationally tractable for practical applications. Various validation studies are performed. Results from the Monte Carlo simulations compare well to asymptotic solutions of laminar premixed flames, both with and without high activation temperatures. The correct scaling of the turbulent burning velocity is predicted in both Damköhler's small- and large-scale turbulence limits. The effect of applying the eddy diffusivity concept in the various regimes is discussed.
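
    The stochastic chemistry algorithm referred to above is, at its core, Gillespie's direct method; the sketch below applies it to a toy autocatalytic mechanism rather than to a large combustion mechanism or the transported-PDF coupling discussed in the abstract:

        import numpy as np

        def gillespie_ssa(x0, stoich, rates, t_end, seed=0):
            """Gillespie's direct method for a well-mixed reactor.

            x0     : initial copy numbers of the species
            stoich : (n_reactions, n_species) state-change vectors
            rates  : function mapping the state to the propensity of each reaction
            """
            rng = np.random.default_rng(seed)
            t, x = 0.0, np.array(x0, dtype=float)
            history = [(t, x.copy())]
            while t < t_end:
                a = rates(x)
                a0 = a.sum()
                if a0 <= 0.0:
                    break
                t += rng.exponential(1.0 / a0)        # waiting time to the next reaction
                j = rng.choice(len(a), p=a / a0)      # which reaction fires
                x += stoich[j]
                history.append((t, x.copy()))
            return history

        # Toy "fuel -> product" chemistry: A + B -> 2B (autocatalytic), B -> C.
        stoich = np.array([[-1, 1, 0],
                           [0, -1, 1]])
        rates = lambda x: np.array([1e-3 * x[0] * x[1], 5e-2 * x[1]])
        traj = gillespie_ssa([1000, 10, 0], stoich, rates, t_end=50.0)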

  9. Play and Digital Media

    ERIC Educational Resources Information Center

    Johnson, James E.; Christie, James F.

    2009-01-01

    This article examines how play is affected by computers and digital toys. Research indicates that when computer software targeted at children is problem-solving oriented and open-ended, children tend to engage in creative play and interact with peers in a positive manner. On the other hand, drill-and-practice programs can be quite boring and limit…

  10. A Framework for Collaborative and Convenient Learning on Cloud Computing Platforms

    ERIC Educational Resources Information Center

    Sharma, Deepika; Kumar, Vikas

    2017-01-01

    The depth of learning resides in collaborative work with more engagement and fun. Technology can enhance collaboration with a higher level of convenience and cloud computing can facilitate this in a cost effective and scalable manner. However, to deploy a successful online learning environment, elementary components of learning pedagogy must be…

  11. Science-Technology Coupling: The Case of Mathematical Logic and Computer Science.

    ERIC Educational Resources Information Center

    Wagner-Dobler, Roland

    1997-01-01

    In the history of science, there have often been periods of sudden rapprochements between pure science and technology-oriented branches of science. Mathematical logic as pure science and computer science as technology-oriented science have experienced such a rapprochement, which is studied in this article in a bibliometric manner. (Author)

  12. Securing Emergency State Data in a Tactical Computing Environment

    DTIC Science & Technology

    2010-12-01

    in a Controlled Manner, 19th IEEE Symposium on Computer-Based Medical Systems (CBMS), 847–854. [38] K. Kifayat, D. Llewellyn-Jones, A. Arabo, O...Drew, M. Merabti, Q. Shi, A. Waller, R. Craddock, G. Jones, State-of-the-Art in System-of-Systems Security for Crisis Management, Fourth Annual

  13. Extinction from a Rationalist Perspective

    PubMed Central

    Gallistel, C. R.

    2012-01-01

    The merging of the computational theory of mind and evolutionary thinking leads to a kind of rationalism, in which enduring truths about the world have become implicit in the computations that enable the brain to cope with the experienced world. The dead reckoning computation, for example, is implemented within the brains of animals as one of the mechanisms that enables them to learn where they are (Gallistel, 1990, 1995). It integrates a velocity signal with respect to a time signal. Thus, the manner in which position and velocity relate to one another in the world is reflected in the manner in which signals representing those variables are processed in the brain. I use principles of information theory and Bayesian inference to derive from other simple principles explanations for: 1) the failure of partial reinforcement to increase reinforcements to acquisition; 2) the partial reinforcement extinction effect; 3) spontaneous recovery; 4) renewal; 5) reinstatement; 6) resurgence (aka facilitated reacquisition). Like the principle underlying dead-reckoning, these principles are grounded in analytic considerations. They are the kind of enduring truths about the world that are likely to have shaped the brain's computations. PMID:22391153

  14. AIM: a personal view of where I have been and where we might be going.

    PubMed

    Rector, A

    2001-08-01

    My own career in medical informatics and AI in medicine has oscillated between concerns with medical records and concerns with knowledge representation with decision support as a pivotal integrating issue. It has focused on using AI to organise information and reduce 'muddle' and improve the user interfaces to produce 'useful and usable systems' to help doctors with a 'humanly impossible task'. Increasingly knowledge representation and ontologies have become the fulcrum for orchestrating re-use of information and integration of systems. Encouragingly, the dilemma between computational tractability and expressiveness is lessening, and ontologies and description logics are joining the mainstream both in AI in Medicine and in Intelligent Information Management generally. It has been shown possible to scale up ontologies to meet medical needs, and increasingly ontologies are playing a key role in meeting the requirements to scale up the complexity of clinical systems to meet the ever increasing demands brought about by new emphasis on reduction of errors, clinical accountability, and the explosion of knowledge on the Web.

  15. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

    This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.

  16. The Quality of the Embedding Potential Is Decisive for Minimal Quantum Region Size in Embedding Calculations: The Case of the Green Fluorescent Protein.

    PubMed

    Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob

    2017-12-12

    The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties-here exemplified for calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that this enables the use of much smaller QM regions compared to fixed charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size and inclusion of effective external field effects in the MM region through polarizabilities is therefore very important. Thus, this embedding scheme enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.

  17. Quantum Dynamics in Biological Systems

    NASA Astrophysics Data System (ADS)

    Shim, Sangwoo

    In the first part of this dissertation, recent efforts to understand quantum mechanical effects in biological systems are discussed. In particular, long-lived quantum coherences observed during the electronic energy transfer process in the Fenna-Matthews-Olson complex under physiological conditions are studied extensively using theories of open quantum systems. In addition to the usual master equation based approaches, the effect of the protein structure is investigated in atomistic detail through the combined application of quantum chemistry and molecular dynamics simulations. To evaluate the thermalized reduced density matrix, a path-integral Monte Carlo method with a novel importance sampling approach is developed for excitons coupled to an arbitrary phonon bath at a finite temperature. In the second part of the thesis, simulations of molecular systems and applications to vibrational spectra are discussed. First, the quantum dynamics of a molecule is simulated by combining semiclassical initial value representation and density functional theory with analytic derivatives. A computationally-tractable approximation to the sum-of-states formalism of Raman spectra is subsequently discussed.

  18. Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.

  19. Dynamics of history-dependent epidemics in temporal networks

    NASA Astrophysics Data System (ADS)

    Sunny, Albert; Kotnis, Bhushan; Kuri, Joy

    2015-08-01

    The structural properties of temporal networks often influence the dynamical processes that occur on these networks, e.g., bursty interaction patterns have been shown to slow down epidemics. In this paper, we investigate the effect of link lifetimes on the spread of history-dependent epidemics. We formulate an analytically tractable activity-driven temporal network model that explicitly incorporates link lifetimes. For Markovian link lifetimes, we use mean-field analysis for computing the epidemic threshold, while the effect of non-Markovian link lifetimes is studied using simulations. Furthermore, we also study the effect of negative correlation between the number of links spawned by an individual and the lifetimes of those links. Such negative correlations may arise due to the finite cognitive capacity of the individuals. Our investigations reveal that heavy-tailed link lifetimes slow down the epidemic, while negative correlations can reduce epidemic prevalence. We believe that our results help shed light on the role of link lifetimes in modulating diffusion processes on temporal networks.
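
    A minimal simulation sketch, under assumed parameters, of an activity-driven temporal network whose links persist for exponentially distributed lifetimes, with SIS-type spreading on top; this is an illustration of the ingredients described above, not the paper's analytically tractable model:

        import numpy as np

        rng = np.random.default_rng(2)
        N, steps = 500, 200
        activity = rng.uniform(0.01, 0.1, N)   # per-step probability a node spawns links
        m = 3                                   # links spawned per activation
        mean_life = 5.0                         # mean link lifetime (in time steps)
        beta, mu = 0.15, 0.05                   # per-contact infection / recovery probabilities

        infected = rng.random(N) < 0.02         # initial seeds
        links = []                              # list of (i, j, expiry_time)

        for t in range(steps):
            # Active nodes spawn m links to random partners; each link lives for an
            # exponentially distributed number of steps before being removed.
            for i in np.where(rng.random(N) < activity)[0]:
                for j in rng.choice(N, size=m, replace=False):
                    if i != j:
                        links.append((i, j, t + rng.exponential(mean_life)))
            links = [(i, j, e) for (i, j, e) in links if e > t]   # drop expired links

            # SIS dynamics over the currently alive links.
            new_inf = infected.copy()
            for i, j, _ in links:
                if infected[i] and not infected[j] and rng.random() < beta:
                    new_inf[j] = True
                if infected[j] and not infected[i] and rng.random() < beta:
                    new_inf[i] = True
            new_inf &= ~(infected & (rng.random(N) < mu))   # recoveries
            infected = new_inf

        print("final prevalence:", infected.mean())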

  20. Plant Comparative and Functional Genomics

    DOE PAGES

    Yang, Xiaohan; Leebens-Mack, Jim; Chen, Feng; ...

    2015-01-01

    Plants form the foundation for our global ecosystem and are essential for environmental and human health. With an increasing number of available plant genomes and tractable experimental systems, comparative and functional plant genomics research is greatly expanding our knowledge of the molecular basis of economically and nutritionally important traits in crop plants. Inferences drawn from comparative genomics are motivating experimental investigations of gene function and gene interactions. This special issue aims to highlight recent advances made in comparative and functional genomics research in plants. Nine original research articles in this special issue cover five important topics: (1) transcription factor gene families relevant to abiotic stress tolerance; (2) plant secondary metabolism; (3) transcriptome-based markers for quantitative trait loci; (4) epigenetic modifications in plant-microbe interactions; and (5) computational prediction of protein-protein interactions. Finally, the plant species studied in these articles include model species as well as nonmodel plant species of economic importance (e.g., food crops and medicinal plants).

  1. Plant Comparative and Functional Genomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaohan; Leebens-Mack, Jim; Chen, Feng

    Plants form the foundation for our global ecosystem and are essential for environmental and human health. With an increasing number of available plant genomes and tractable experimental systems, comparative and functional plant genomics research is greatly expanding our knowledge of the molecular basis of economically and nutritionally important traits in crop plants. Inferences drawn from comparative genomics are motivating experimental investigations of gene function and gene interactions. This special issue aims to highlight recent advances made in comparative and functional genomics research in plants. Nine original research articles in this special issue cover five important topics: (1) transcription factor gene families relevant to abiotic stress tolerance; (2) plant secondary metabolism; (3) transcriptome-based markers for quantitative trait loci; (4) epigenetic modifications in plant-microbe interactions; and (5) computational prediction of protein-protein interactions. Finally, the plant species studied in these articles include model species as well as nonmodel plant species of economic importance (e.g., food crops and medicinal plants).

  2. A Global Covariance Descriptor for Nuclear Atypia Scoring in Breast Histopathology Images.

    PubMed

    Khan, Adnan Mujahid; Sirinukunwattana, Korsuk; Rajpoot, Nasir

    2015-09-01

    Nuclear atypia scoring is a diagnostic measure commonly used to assess tumor grade of various cancers, including breast cancer. It provides a quantitative measure of deviation in visual appearance of cell nuclei from those in normal epithelial cells. In this paper, we present a novel image-level descriptor for nuclear atypia scoring in breast cancer histopathology images. The method is based on the region covariance descriptor that has recently become a popular method in various computer vision applications. The descriptor in its original form is not suitable for classification of histopathology images as cancerous histopathology images tend to possess diversely heterogeneous regions in a single field of view. Our proposed image-level descriptor, which we term as the geodesic mean of region covariance descriptors, possesses all the attractive properties of covariance descriptors lending itself to tractable geodesic-distance-based k-nearest neighbor classification using efficient kernels. The experimental results suggest that the proposed image descriptor yields high classification accuracy compared to a variety of widely used image-level descriptors.
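
    A sketch of the two building blocks named above, with an assumed five-dimensional pixel feature set: the region covariance descriptor and the affine-invariant geodesic distance between covariance matrices.

        import numpy as np
        from scipy.linalg import eigvalsh

        def region_covariance(patch):
            """Covariance descriptor of an image region.

            Each pixel contributes a feature vector (x, y, intensity, |dI/dx|, |dI/dy|);
            the region is summarised by the covariance matrix of these features.
            """
            h, w = patch.shape
            ys, xs = np.mgrid[0:h, 0:w]
            gy, gx = np.gradient(patch.astype(float))
            feats = np.stack([xs, ys, patch, np.abs(gx), np.abs(gy)], axis=-1).reshape(-1, 5)
            return np.cov(feats, rowvar=False) + 1e-6 * np.eye(5)   # regularise

        def geodesic_distance(A, B):
            """Affine-invariant Riemannian distance between two SPD matrices:
            sqrt(sum_i log^2 lambda_i), with lambda_i the generalised eigenvalues of (A, B)."""
            lam = eigvalsh(A, B)
            return np.sqrt(np.sum(np.log(lam) ** 2))

        rng = np.random.default_rng(0)
        p1, p2 = rng.random((32, 32)), rng.random((32, 32))
        print(geodesic_distance(region_covariance(p1), region_covariance(p2)))

    In a k-nearest-neighbor classifier, this geodesic distance (or a kernel built from it) replaces the Euclidean distance between descriptors.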

  3. Hybrid regulatory models: a statistically tractable approach to model regulatory network dynamics.

    PubMed

    Ocone, Andrea; Millar, Andrew J; Sanguinetti, Guido

    2013-04-01

    Computational modelling of the dynamics of gene regulatory networks is a central task of systems biology. For networks of small/medium scale, the dominant paradigm is represented by systems of coupled non-linear ordinary differential equations (ODEs). ODEs afford great mechanistic detail and flexibility, but calibrating these models to data is often an extremely difficult statistical problem. Here, we develop a general statistical inference framework for stochastic transcription-translation networks. We use a coarse-grained approach, which represents the system as a network of stochastic (binary) promoter and (continuous) protein variables. We derive an exact inference algorithm and an efficient variational approximation that allows scalable inference and learning of the model parameters. We demonstrate the power of the approach on two biological case studies, showing that the method allows a high degree of flexibility and is capable of testable novel biological predictions. http://homepages.inf.ed.ac.uk/gsanguin/software.html. Supplementary data are available at Bioinformatics online.

  4. Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction

    DOE PAGES

    Mach, J. C.; Budrow, C. J.; Pagan, D. C.; ...

    2017-03-15

    Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present paper, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. Finally, the experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.

  5. An approximate method for calculating three-dimensional inviscid hypersonic flow fields

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Dejarnette, Fred R.

    1990-01-01

    An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification to Maslen's asymmetric method. The present method presents a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, paraboloid, and elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and profiles of pressure are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.

  6. Wrinkle-free design of thin membrane structures using stress-based topology optimization

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-05-01

    Thin membrane structures experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and a linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.
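
    The taut/wrinkled/slack criterion underlying the stress constraints above can be written directly in terms of principal stresses; the following sketch (with made-up stress values) classifies a plane-stress state and shows that the wrinkle-free requirement amounts to keeping the minor principal stress positive:

        import numpy as np

        def membrane_state(sx, sy, txy):
            """Classify a plane-stress state by its principal stresses:
            'taut' if sigma_2 > 0, 'wrinkled' if sigma_1 > 0 >= sigma_2, else 'slack'.
            A wrinkle-free design constraint amounts to requiring sigma_2 > 0 everywhere."""
            mean = 0.5 * (sx + sy)
            radius = np.sqrt((0.5 * (sx - sy))**2 + txy**2)
            s1, s2 = mean + radius, mean - radius          # principal stresses
            if s2 > 0.0:
                return "taut"
            return "wrinkled" if s1 > 0.0 else "slack"

        print(membrane_state(12.0, 3.0, 1.5))    # taut
        print(membrane_state(12.0, -4.0, 1.5))   # wrinkled (compressive minor stress)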

  7. Electron-impact ionization of atomic hydrogen

    NASA Astrophysics Data System (ADS)

    Baertschy, Mark David

    2000-10-01

    Since the invention of quantum mechanics, even the simplest example of collisional breakup in a system of charged particles, e - + H --> H+ + e- + e-, has stood as one of the last unsolved fundamental problems in atomic physics. A complete solution requires calculating the energies and directions for a final state in which three charged particles are moving apart. Advances in the formal description of three-body breakup have yet to lead to a viable computational method. Traditional approaches, based on two-body formalisms, have been unable to produce differential cross sections for the three-body final state. Now, by using a mathematical transformation of the Schrödinger equation that makes the final state tractable, a complete solution has finally been achieved. Under this transformation, the scattering wave function can be calculated without imposing explicit scattering boundary conditions. This approach has produced the first triple differential cross sections that agree on an absolute scale with experiment as well as the first ab initio calculations of the single differential cross section [29].

  8. Study of dynamics of X-14B VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Loscutoff, W. V.; Mitchiner, J. L.; Roesener, R. A.; Seevers, J. A.

    1973-01-01

    Research was initiated to investigate certain facets of modern control theory and their integration with a digital computer to provide a tractable flight control system for a VTOL aircraft. Since the hover mode is the most demanding phase in the operation of a VTOL aircraft, the research efforts were concentrated in this mode of aircraft operation. Research work on three different aspects of the operation of the X-14B VTOL aircraft is discussed. A general theory for optimal, prespecified, closed-loop control is developed. The ultimate goal was optimal decoupling of the modes of the VTOL aircraft to simplify the pilot's task of handling the aircraft. Modern control theory is used to design deterministic state estimators which provide state variables not measured directly, but which are needed for state variable feedback control. The effect of atmospheric turbulence on the X-14B is investigated. A maximum magnitude gust envelope within which the aircraft could operate stably with the available control power is determined.

  9. Jump state estimation with multiple sensors with packet dropping and delaying channels

    NASA Astrophysics Data System (ADS)

    Dolz, Daniel; Peñarrocha, Ignacio; Sanchis, Roberto

    2016-03-01

    This work addresses the design of a state observer for systems whose outputs are measured through a communication network. The measurements from each sensor node are assumed to arrive randomly, scarcely and with a time-varying delay. The proposed model of the plant and the network measurement scenarios cover the cases of multiple sensors, out-of-sequence measurements, buffered measurements on a single packet and multirate sensor measurements. A jump observer is proposed that selects a different gain depending on the number of periods elapsed between successfully received measurements and on the available data. A finite set of gains is pre-calculated offline with a tractable optimisation problem, where the complexity of the observer implementation is a design parameter. The computational cost of the observer implementation is much lower than in the Kalman filter, whilst the performance is similar. Several examples illustrate the observer design for different measurement scenarios and observer complexity and show the achievable performance.

  10. Autonomous Guidance of Agile Small-scale Rotorcraft

    NASA Technical Reports Server (NTRS)

    Mettler, Bernard; Feron, Eric

    2004-01-01

    This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear-time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic. The former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. The trajectory generation was formulated as an optimization problem using mixed-integer linear programming. The optimization is solved in a receding horizon fashion. Several techniques to improve the computational tractability were investigated. Simulation experiments using NASA Ames' R-50 model show that this approach fully exploits the vehicle's agility.

  11. Effects of van der Waals Force and Thermal Stresses on Pull-in Instability of Clamped Rectangular Microplates

    PubMed Central

    Batra, Romesh C.; Porfiri, Maurizio; Spinello, Davide

    2008-01-01

    We study the influence of von Kármán nonlinearity, van der Waals force, and thermal stresses on pull-in instability and small vibrations of electrostatically actuated microplates. We use the Galerkin method to develop a tractable reduced-order model for electrostatically actuated clamped rectangular microplates in the presence of van der Waals forces and thermal stresses. More specifically, we reduce the governing two-dimensional nonlinear transient boundary-value problem to a single nonlinear ordinary differential equation. For the static problem, the pull-in voltage and the pull-in displacement are determined by solving a pair of nonlinear algebraic equations. The fundamental vibration frequency corresponding to a deflected configuration of the microplate is determined by solving a linear algebraic equation. The proposed reduced-order model allows for accurately estimating the combined effects of van der Waals force and thermal stresses on the pull-in voltage and the pull-in deflection profile with an extremely limited computational effort. PMID:27879752

  12. Partitioning heritability by functional annotation using genome-wide association summary statistics.

    PubMed

    Finucane, Hilary K; Bulik-Sullivan, Brendan; Gusev, Alexander; Trynka, Gosia; Reshef, Yakir; Loh, Po-Ru; Anttila, Verneri; Xu, Han; Zang, Chongzhi; Farh, Kyle; Ripke, Stephan; Day, Felix R; Purcell, Shaun; Stahl, Eli; Lindstrom, Sara; Perry, John R B; Okada, Yukinori; Raychaudhuri, Soumya; Daly, Mark J; Patterson, Nick; Neale, Benjamin M; Price, Alkes L

    2015-11-01

    Recent work has demonstrated that some functional categories of the genome contribute disproportionately to the heritability of complex diseases. Here we analyze a broad set of functional elements, including cell type-specific elements, to estimate their polygenic contributions to heritability in genome-wide association studies (GWAS) of 17 complex diseases and traits with an average sample size of 73,599. To enable this analysis, we introduce a new method, stratified LD score regression, for partitioning heritability from GWAS summary statistics while accounting for linked markers. This new method is computationally tractable at very large sample sizes and leverages genome-wide information. Our findings include a large enrichment of heritability in conserved regions across many traits, a very large immunological disease-specific enrichment of heritability in FANTOM5 enhancers and many cell type-specific enrichments, including significant enrichment of central nervous system cell types in the heritability of body mass index, age at menarche, educational attainment and smoking behavior.
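
    The regression at the heart of stratified LD score regression can be sketched on synthetic data as an ordinary least-squares fit of chi-square statistics to category-specific LD scores; the annotation structure, coefficients, and noise model below are invented for illustration, and this is not the published ldsc implementation:

        import numpy as np

        rng = np.random.default_rng(0)
        n_snps, n_cat, N = 20000, 4, 73599       # SNPs, annotation categories, GWAS sample size

        ld_scores = rng.gamma(shape=2.0, scale=5.0, size=(n_snps, n_cat))
        tau_true = np.array([2e-7, 0.0, 5e-7, 1e-7])   # per-SNP heritability per category

        # Under the model E[chi^2_j] = N * sum_c tau_c * l(j, c) + 1 (ignoring the
        # intercept-inflation term), simulate noisy association statistics.
        chi2 = N * ld_scores @ tau_true + 1.0 + rng.normal(scale=0.5, size=n_snps)

        # Stratified LD score regression, schematically: regress (chi^2 - 1) on the
        # category LD scores to recover the per-category coefficients tau_c.
        tau_hat, *_ = np.linalg.lstsq(N * ld_scores, chi2 - 1.0, rcond=None)
        # Per-category heritability follows by scaling tau_c by the number of SNPs
        # carrying annotation c (here continuous annotations are used for simplicity).
        print(tau_hat)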

  13. Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mach, J. C.; Budrow, C. J.; Pagan, D. C.

    Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present paper, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. Finally, the experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.

  14. Similarity-transformed perturbation theory on top of truncated local coupled cluster solutions: Theory and applications to intermolecular interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azar, Richard Julian, E-mail: julianazar2323@berkeley.edu; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu

    2015-05-28

    Your correspondents develop and apply fully nonorthogonal, local-reference perturbation theories describing non-covalent interactions. Our formulations are based on a Löwdin partitioning of the similarity-transformed Hamiltonian into a zeroth-order intramonomer piece (taking local CCSD solutions as its zeroth-order eigenfunction) plus a first-order piece coupling the fragments. If considerations are limited to a single molecule, the proposed intermolecular similarity-transformed perturbation theory represents a frozen-orbital variant of the “(2)”-type theories shown to be competitive with CCSD(T) and of similar cost if all terms are retained. Different restrictions on the zeroth- and first-order amplitudes are explored in the context of large-computation tractability and elucidation of non-local effects in the space of singles and doubles. To accurately approximate CCSD intermolecular interaction energies, a quadratically growing number of variables must be included at zeroth-order.

  15. Electron-impact ionization of atomic hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baertschy, Mark D.

    2000-02-01

    Since the invention of quantum mechanics, even the simplest example of collisional breakup in a system of charged particles, e- + H → H+ + e- + e-, has stood as one of the last unsolved fundamental problems in atomic physics. A complete solution requires calculating the energies and directions for a final state in which three charged particles are moving apart. Advances in the formal description of three-body breakup have yet to lead to a viable computational method. Traditional approaches, based on two-body formalisms, have been unable to produce differential cross sections for the three-body final state. Now, by using a mathematical transformation of the Schrödinger equation that makes the final state tractable, a complete solution has finally been achieved. Under this transformation, the scattering wave function can be calculated without imposing explicit scattering boundary conditions. This approach has produced the first triple differential cross sections that agree on an absolute scale with experiment as well as the first ab initio calculations of the single differential cross section.

  16. Effects of van der Waals Force and Thermal Stresses on Pull-in Instability of Clamped Rectangular Microplates.

    PubMed

    Batra, Romesh C; Porfiri, Maurizio; Spinello, Davide

    2008-02-15

    We study the influence of von Kármán nonlinearity, van der Waals force, and thermal stresses on pull-in instability and small vibrations of electrostatically actuated microplates. We use the Galerkin method to develop a tractable reduced-order model for electrostatically actuated clamped rectangular microplates in the presence of van der Waals forces and thermal stresses. More specifically, we reduce the governing two-dimensional nonlinear transient boundary-value problem to a single nonlinear ordinary differential equation. For the static problem, the pull-in voltage and the pull-in displacement are determined by solving a pair of nonlinear algebraic equations. The fundamental vibration frequency corresponding to a deflected configuration of the microplate is determined by solving a linear algebraic equation. The proposed reduced-order model allows for accurately estimating the combined effects of van der Waals force and thermal stresses on the pull-in voltage and the pull-in deflection profile with an extremely limited computational effort.

  17. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    PubMed

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows to adjust transient operating conditions stage-wise, and provides capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only allow to significantly reduce both duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
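
    A sketch of the single-drop version of this fit, using the first-order-in-gradient solution of x'' = g0 + gamma*x and synthetic data; the full procedure described above additionally combines all drops and the system-response terms in one sparse least-squares problem, which is omitted here:

        import numpy as np
        from scipy.optimize import curve_fit

        def drop_model(t, x0, v0, g0, gamma):
            """Free-fall position including the vertical gravity gradient gamma,
            i.e. the first-order-in-gamma solution of  x'' = g0 + gamma * x."""
            zeroth = x0 + v0 * t + 0.5 * g0 * t**2
            first = x0 * t**2 / 2.0 + v0 * t**3 / 6.0 + g0 * t**4 / 24.0
            return zeroth + gamma * first

        # Synthetic drop: positions in metres, times in seconds.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 0.2, 700)
        truth = (0.0, 0.1, 9.8061, 3.086e-6)      # x0, v0, g0, free-air gradient (1/s^2)
        x = drop_model(t, *truth) + rng.normal(scale=1e-9, size=t.size)

        popt, pcov = curve_fit(drop_model, t, x, p0=(0.0, 0.0, 9.8, 0.0))
        print(popt)   # gamma is poorly constrained by a single short drop,
                      # which is why many drops must be combined in practice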

  19. Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms

    DOE PAGES

    Roald, Line Alnaes; Andersson, Goran

    2017-08-29

    Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
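
    For a single Gaussian-distributed line flow, an analytical reformulation of the kind mentioned above reduces the chance constraint to a deterministic tightening of the flow limit; the numbers below are illustrative:

        from scipy.stats import norm

        def tightened_limit(f_max, sigma_f, eps):
            """Deterministic tightening of a Gaussian chance constraint:
            P(f <= f_max) >= 1 - eps  <=>  f_mean <= f_max - norm.ppf(1 - eps) * sigma_f."""
            return f_max - norm.ppf(1.0 - eps) * sigma_f

        f_max, sigma_f, eps = 100.0, 8.0, 0.05      # MW limit, forecast-error std, violation prob.
        print(tightened_limit(f_max, sigma_f, eps))  # about 86.8 MW usable deterministic limit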

  20. Communication: The description of strong correlation within self-consistent Green's function second-order perturbation theory

    NASA Astrophysics Data System (ADS)

    Phillips, Jordan J.; Zgid, Dominika

    2014-06-01

    We report an implementation of self-consistent Green's function many-body theory within a second-order approximation (GF2) for application with molecular systems. This is done by iterative solution of the Dyson equation expressed in matrix form in an atomic orbital basis, where the Green's function and self-energy are built on the imaginary frequency and imaginary time domain, respectively, and fast Fourier transform is used to efficiently transform these quantities as needed. We apply this method to several archetypical examples of strong correlation, such as a H32 finite lattice that displays a highly multireference electronic ground state even at equilibrium lattice spacing. In all cases, GF2 gives a physically meaningful description of the metal to insulator transition in these systems, without resorting to spin-symmetry breaking. Our results show that self-consistent Green's function many-body theory offers a viable route to describing strong correlations while remaining within a computationally tractable single-particle formalism.

  1. Comparing nonlinear MHD simulations of low-aspect-ratio RFPs to RELAX experiments

    NASA Astrophysics Data System (ADS)

    McCollam, K. J.; den Hartog, D. J.; Jacobson, C. M.; Sovinec, C. R.; Masamune, S.; Sanpei, A.

    2016-10-01

    Standard reversed-field pinch (RFP) plasmas provide a nonlinear dynamical system as a validation domain for numerical MHD simulation codes, with applications in general toroidal confinement scenarios including tokamaks. Using the NIMROD code, we simulate the nonlinear evolution of RFP plasmas similar to those in the RELAX experiment. The experiment's modest Lundquist numbers S (as low as a few times 10^4) make closely matching MHD simulations tractable given present computing resources. Its low aspect ratio (about 2) motivates a comparison study using cylindrical and toroidal geometries in NIMROD. We present initial results from nonlinear single-fluid runs at S = 10^4 for both geometries and a range of equilibrium parameters, which preliminarily show that the magnetic fluctuations are roughly similar between the two geometries and between simulation and experiment, though there appear to be some qualitative differences in their temporal evolution. Runs at higher S are planned. This work is supported by the U.S. DOE and by the Japan Society for the Promotion of Science.

  2. The genetic basis of alcoholism: multiple phenotypes, many genes, complex networks

    PubMed Central

    2012-01-01

    Alcoholism is a significant public health problem. A picture of the genetic architecture underlying alcohol-related phenotypes is emerging from genome-wide association studies and work on genetically tractable model organisms. PMID:22348705

  3. Co-evolution for Problem Simplification

    NASA Technical Reports Server (NTRS)

    Haith, Gary L.; Lohn, Jason D.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    1999-01-01

    This paper explores a co-evolutionary approach applicable to difficult problems with limited failure/success performance feedback. Like familiar "predator-prey" frameworks, this algorithm evolves two populations of individuals: the solutions (predators) and the problems (prey). The approach extends previous work by rewarding only the problems that match their difficulty to the level of solution competence. In complex problem domains with limited feedback, this "tractability constraint" helps provide an adaptive fitness gradient that effectively differentiates the candidate solutions. The algorithm generates selective pressure toward the evolution of increasingly competent solutions by rewarding solution generality and uniqueness and problem tractability and difficulty. Relative (inverse-fitness) and absolute (static objective function) approaches to evaluating problem difficulty are explored and discussed. On a simple control task, this co-evolutionary algorithm was found to have significant advantages over a genetic algorithm with either a static fitness function or a fitness function that changes on a hand-tuned schedule.

  4. On dependency properties of the ISIs generated by a two-compartmental neuronal model.

    PubMed

    Benedetto, Elisa; Sacerdote, Laura

    2013-02-01

    One-dimensional leaky integrate and fire neuronal models describe the interspike intervals (ISIs) of a neuron as a renewal process and disregard the neuron's geometry. Many multi-compartment models account for the geometrical features of the neuron but are too complex to remain mathematically tractable. Leaky integrate and fire two-compartment models seem a good compromise between mathematical tractability and improved realism. They indeed allow one to relax the renewal hypothesis, typical of one-dimensional models, without introducing overly strong mathematical difficulties. Here, we pursue the analysis of the two-compartment model studied by Lansky and Rodriguez (Phys D 132:267-286, 1999), aiming at introducing some specific mathematical results used together with simulation techniques. With the aid of these methods, we investigate dependency properties of ISIs for different values of the model parameters. We show that increasing the input increases the strength of the dependence between successive ISIs.
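
    A minimal Euler-scheme sketch of a two-compartment leaky integrate-and-fire neuron and the lag-1 serial correlation of its ISIs; the parameters are illustrative and the model is only schematically related to the Lansky and Rodriguez formulation analysed above:

        import numpy as np

        rng = np.random.default_rng(0)
        dt, T = 0.1, 50000.0                      # ms
        tau, g_c = 10.0, 0.5                      # membrane time constant, coupling strength
        mu, sigma = 1.2, 1.0                      # mean and std of the dendritic input
        v_th, v_reset = 1.0, 0.0

        v_d = v_s = 0.0
        last_spike, isis = None, []
        for step in range(int(T / dt)):
            noise = sigma * np.sqrt(dt) * rng.normal()
            # Dendritic compartment: leaky integration of the noisy input.
            v_d += dt * (-v_d / tau + g_c * (v_s - v_d) + mu) + noise
            # Somatic compartment: coupled to the dendrite; only this one spikes.
            v_s += dt * (-v_s / tau + g_c * (v_d - v_s))
            if v_s >= v_th:
                t_now = step * dt
                if last_spike is not None:
                    isis.append(t_now - last_spike)
                last_spike = t_now
                v_s = v_reset                      # reset the soma only

        isis = np.array(isis)
        rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]   # lag-1 ISI serial correlation
        print(len(isis), rho1)

    Because only the soma is reset while the dendrite retains its state across spikes, successive ISIs are no longer independent, which is the departure from the renewal assumption discussed above.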

  5. Tractable Pareto Optimization of Temporal Preferences

    NASA Technical Reports Server (NTRS)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.

  6. Tackling the Achilles' heel of genetic testing.

    PubMed

    Watkins, Hugh

    2015-01-14

    Assigning pathogenicity to rare genetic variants is at its hardest with the enormous titin gene, but comprehensive genomic analysis makes the task more tractable (Roberts et al., this issue). Copyright © 2015, American Association for the Advancement of Science.

  7. Effects of Disturbance on Populations of Marine Mammals

    DTIC Science & Technology

    2014-09-30

    relation between foraging success of mothers and pup production (reproductive rate). Second, we used mark-recapture models to quantify the...responses to disturbance are not necessarily surrogate measures of population-level responses is widely understood. However, without tractable

  8. Towards a Computational Framework for Modeling the Impact of Aortic Coarctations Upon Left Ventricular Load

    PubMed Central

    Karabelas, Elias; Gsell, Matthias A. F.; Augustin, Christoph M.; Marx, Laura; Neic, Aurel; Prassl, Anton J.; Goubergrits, Leonid; Kuehne, Titus; Plank, Gernot

    2018-01-01

    Computational fluid dynamics (CFD) models of blood flow in the left ventricle (LV) and aorta are important tools for analyzing the mechanistic links between myocardial deformation and flow patterns. Typically, the use of image-based kinematic CFD models prevails in applications such as predicting the acute response to interventions which alter LV afterload conditions. However, such models are limited in their ability to analyze any impacts upon LV load or key biomarkers known to be implicated in driving remodeling processes as LV function is not accounted for in a mechanistic sense. This study addresses these limitations by reporting on progress made toward a novel electro-mechano-fluidic (EMF) model that represents the entire physics of LV electromechanics (EM) based on first principles. A biophysically detailed finite element (FE) model of LV EM was coupled with a FE-based CFD solver for moving domains using an arbitrary Eulerian-Lagrangian (ALE) formulation. Two clinical cases of patients suffering from aortic coarctations (CoA) were built and parameterized based on clinical data under pre-treatment conditions. For one patient case, simulations under post-treatment conditions, after geometric repair of the CoA by a virtual stenting procedure, were compared against pre-treatment results. Numerical stability of the approach was demonstrated by analyzing mesh quality and solver performance under the significantly large deformations of the LV blood pool. Further, computational tractability and compatibility with clinical time scales were investigated by performing strong scaling benchmarks up to 1536 compute cores. The overall cost of the entire workflow for building, fitting and executing EMF simulations was comparable to those reported for image-based kinematic models, suggesting that EMF models show potential for evolving into a viable clinical research tool. PMID:29892227

  9. Anharmonic interatomic force constants and thermal conductivity from Grüneisen parameters: An application to graphene

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua; Gan, Chee Kwan

    2017-07-01

    Phonon-mediated thermal conductivity, which is of great technological relevance, arises due fundamentally to anharmonic scattering from interatomic potentials. Despite its prevalence, accurate first-principles calculations of thermal conductivity remain challenging, primarily due to the high computational cost of anharmonic interatomic force constant (IFC) calculations. Meanwhile, the related anharmonic phenomenon of thermal expansion is much more tractable, being computable from the Grüneisen parameters associated with phonon frequency shifts due to crystal deformations. In this work, we propose an approach for computing the largest cubic IFCs from the Grüneisen parameter data. This allows an approximate determination of the thermal conductivity via a much less expensive route. The key insight is that although the Grüneisen parameters cannot possibly contain all the information on the cubic IFCs, being derivable from spatially uniform deformations, they can still unambiguously and accurately determine the largest and most physically relevant ones. By fitting the anisotropic Grüneisen parameter data along judiciously designed deformations, we can deduce (i.e., reverse-engineer) the dominant cubic IFCs and estimate three-phonon scattering amplitudes. We illustrate our approach by explicitly computing the largest cubic IFCs and thermal conductivity of graphene, especially for its out-of-plane (flexural) modes that exhibit anomalously large anharmonic shifts and thermal conductivity contributions. Our calculations on graphene not only exhibit reasonable agreement with established density-functional theory results, but they also present a pedagogical opportunity for introducing an elegant analytic treatment of the Grüneisen parameters of generic two-band models. Our approach can be readily extended to more complicated crystalline materials with nontrivial anharmonic lattice effects.

  10. Tackling the conformational sampling of larger flexible compounds and macrocycles in pharmacology and drug discovery.

    PubMed

    Chen, I-Jen; Foloppe, Nicolas

    2013-12-15

    Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in detail the sampling of larger, more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we extensively investigate mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. The potential for fast van der Waals computations for layered materials using a Lifshitz model

    NASA Astrophysics Data System (ADS)

    Zhou, Yao; Pellouchoud, Lenson A.; Reed, Evan J.

    2017-06-01

    Computation of the van der Waals (vdW) interactions plays a crucial role in the study of layered materials. The adiabatic-connection fluctuation-dissipation theorem within random phase approximation (ACFDT-RPA) has been empirically reported to be the most accurate of commonly used methods, but it is limited to small systems due to its computational complexity. Without a computationally tractable vdW correction, fictitious strains are often introduced in the study of multilayer heterostructures, which, we find, can change the vdW binding energy by as much as 15%. In this work, we employed for the first time a defined Lifshitz model to provide the vdW potentials for a spectrum of layered materials orders of magnitude faster than the ACFDT-RPA for representative layered material structures. We find that a suitably defined Lifshitz model gives the correlation component of the binding energy to within 8-20% of the ACFDT-RPA calculations for a variety of layered heterostructures. Using this fast Lifshitz model, we studied the vdW binding properties of 210 three-layered heterostructures. Our results demonstrate that the three-body vdW effects are generally small (10% of the binding energy) in layered materials for most cases, and that non-negligible second-nearest neighbor layer interaction and three-body effects are observed for only those cases in which the middle layer is atomically thin (e.g. BN or graphene). We find that there is potential for particular combinations of stacked layers to exhibit repulsive three-body van der Waals effects, although these effects are likely to be much smaller than two-body effects.

  12. Gait control in a soft robot by sensing interactions with the environment using self-deformation.

    PubMed

    Umedachi, Takuya; Kano, Takeshi; Ishiguro, Akio; Trimmer, Barry A

    2016-12-01

    All animals use mechanosensors to help them move in complex and changing environments. With few exceptions, these sensors are embedded in soft tissues that deform in normal use such that sensory feedback results from the interaction of an animal with its environment. Useful information about the environment is expected to be embedded in the mechanical responses of the tissues during movements. To explore how such sensory information can be used to control movements, we have developed a soft-bodied crawling robot inspired by a highly tractable animal model, the tobacco hornworm Manduca sexta . This robot uses deformations of its body to detect changes in friction force on a substrate. This information is used to provide local sensory feedback for coupled oscillators that control the robot's locomotion. The validity of the control strategy is demonstrated with both simulation and a highly deformable three-dimensionally printed soft robot. The results show that very simple oscillators are able to generate propagating waves and crawling/inching locomotion through the interplay of deformation in different body parts in a fully decentralized manner. Additionally, we confirmed numerically and experimentally that the gait pattern can switch depending on the surface contact points. These results are expected to help in the design of adaptable, robust locomotion control systems for soft robots and also suggest testable hypotheses about how soft animals use sensory feedback.

  13. Gait control in a soft robot by sensing interactions with the environment using self-deformation

    PubMed Central

    Ishiguro, Akio; Trimmer, Barry A.

    2016-01-01

    All animals use mechanosensors to help them move in complex and changing environments. With few exceptions, these sensors are embedded in soft tissues that deform in normal use such that sensory feedback results from the interaction of an animal with its environment. Useful information about the environment is expected to be embedded in the mechanical responses of the tissues during movements. To explore how such sensory information can be used to control movements, we have developed a soft-bodied crawling robot inspired by a highly tractable animal model, the tobacco hornworm Manduca sexta. This robot uses deformations of its body to detect changes in friction force on a substrate. This information is used to provide local sensory feedback for coupled oscillators that control the robot's locomotion. The validity of the control strategy is demonstrated with both simulation and a highly deformable three-dimensionally printed soft robot. The results show that very simple oscillators are able to generate propagating waves and crawling/inching locomotion through the interplay of deformation in different body parts in a fully decentralized manner. Additionally, we confirmed numerically and experimentally that the gait pattern can switch depending on the surface contact points. These results are expected to help in the design of adaptable, robust locomotion control systems for soft robots and also suggest testable hypotheses about how soft animals use sensory feedback. PMID:28083114

  14. Sexual response of male Drosophila to honey bee queen mandibular pheromone: implications for genetic studies of social insects.

    PubMed

    Croft, Justin R; Liu, Tom; Camiletti, Alison L; Simon, Anne F; Thompson, Graham J

    2017-02-01

    Honey bees secrete a queen mandibular pheromone that renders workers reproductively altruistic and drones sexually attentive. This sex-specific function of QMP may have evolved from a sexually dimorphic signaling mechanism derived from pre-social ancestors. If so, there is potential for pre-social insects to respond to QMP, and in a manner that is comparable to its normal effect on workers and drones. Remarkably, QMP applied to female Drosophila does induce worker-like qualities [Camiletti et al. (Entomol Exp Appl 147:262, 2013)], and we here extend this comparison to examine the effects of bee pheromone on male fruit flies. We find that male Drosophila melanogaster consistently orient towards a source of queen pheromone in a T-maze, suggesting a recruitment response comparable to the pheromone's normal effect on drones. Moreover, exposure to QMP renders male flies more sexually attentive; they display intensified pre-copulatory behavior towards conspecific females. We can inhibit this sexual effect through a loss-of-olfactory-function mutation, which suggests that the pheromone-responsive behavioral mechanism is olfactory-driven. These pheromone-induced changes to male Drosophila behavior suggest that aspects of sexual signaling are conserved between these two distantly related taxa. Our results highlight a role for Drosophila as a genetically tractable pre-social model for studies of social insect biology.

  15. Corticotropin-Releasing Factor Critical for Zebrafish Camouflage Behavior Is Regulated by Light and Sensitive to Ethanol

    PubMed Central

    Wagle, Mahendra; Mathur, Priya; Guo, Su

    2011-01-01

    The zebrafish camouflage response is an innate “hard-wired” behavior that offers an excellent opportunity to explore neural circuit assembly and function. Moreover, the camouflage response is sensitive to ethanol, making it a tractable system for understanding how ethanol influences neural circuit development and function. Here we report the identification of corticotropin releasing factor (CRF) as a critical component of the camouflage response pathway. We further show that ethanol, having no direct effect on the visual sensory system or the melanocytes, acts downstream of retinal ganglion cells and requires the CRF-proopiomelanocortin (POMC) pathway to exert its effect on camouflage. Treatment with ethanol, as well as alteration of light exposure that changes sensory input into the camouflage circuit, robustly modifies CRF expression in subsets of neurons. Activity of both Adenylyl Cyclase 5 and Extracellular signal Regulated Kinase (ERK) is required for such ethanol- or light- induced plasticity of crf expression. These results reveal an essential role of a peptidergic pathway in camouflage that is regulated by light and influenced by ethanol at concentrations relevant to abuse and anxiolysis, in a cAMP- and ERK- dependent manner. We conclude that this ethanol-modulated camouflage response represents a novel and relevant system for molecular genetic dissection of a neural circuit that is regulated by light and sensitive to ethanol. PMID:21209207

  16. Corticotropin-releasing factor critical for zebrafish camouflage behavior is regulated by light and sensitive to ethanol.

    PubMed

    Wagle, Mahendra; Mathur, Priya; Guo, Su

    2011-01-05

    The zebrafish camouflage response is an innate "hard-wired" behavior that offers an excellent opportunity to explore neural circuit assembly and function. Moreover, the camouflage response is sensitive to ethanol, making it a tractable system for understanding how ethanol influences neural circuit development and function. Here we report the identification of corticotropin-releasing factor (CRF) as a critical component of the camouflage response pathway. We further show that ethanol, having no direct effect on the visual sensory system or the melanocytes, acts downstream of retinal ganglion cells and requires the CRF-proopiomelanocortin pathway to exert its effect on camouflage. Treatment with ethanol, as well as alteration of light exposure that changes sensory input into the camouflage circuit, robustly modifies CRF expression in subsets of neurons. Activity of both adenylyl cyclase 5 and extracellular signal-regulated kinase (ERK) is required for such ethanol-induced or light-induced plasticity of crf expression. These results reveal an essential role of a peptidergic pathway in camouflage that is regulated by light and influenced by ethanol at concentrations relevant to abuse and anxiolysis, in a cAMP-dependent and ERK-dependent manner. We conclude that this ethanol-modulated camouflage response represents a novel and relevant system for molecular genetic dissection of a neural circuit that is regulated by light and sensitive to ethanol.

  17. “Glowing Head” Mice: A Genetic Tool Enabling Reliable Preclinical Image-Based Evaluation of Cancers in Immunocompetent Allografts

    PubMed Central

    Day, Chi-Ping; Carter, John; Ohler, Zoe Weaver; Bonomi, Carrie; El Meskini, Rajaa; Martin, Philip; Graff-Cherry, Cari; Feigenbaum, Lionel; Tüting, Thomas; Van Dyke, Terry; Hollingshead, Melinda; Merlino, Glenn

    2014-01-01

    Preclinical therapeutic assessment currently relies on the growth response of established human cell lines xenografted into immunocompromised mice, a strategy that is generally not predictive of clinical outcomes. Immunocompetent genetically engineered mouse (GEM)-derived tumor allograft models offer highly tractable preclinical alternatives and facilitate analysis of clinically promising immunomodulatory agents. Imageable reporters are essential for accurately tracking tumor growth and response, particularly for metastases. Unfortunately, reporters such as luciferase and GFP are foreign antigens in immunocompetent mice, potentially hindering tumor growth and confounding therapeutic responses. Here we assessed the value of reporter-tolerized GEMs as allograft recipients by targeting minimal expression of a luciferase-GFP fusion reporter to the anterior pituitary gland (dubbed the “Glowing Head” or GH mouse). The luciferase-GFP reporter expressed in tumor cells induced adverse immune responses in wild-type mice, but not in GH mice, as transplantation hosts. The antigenicity of optical reporters resulted in a decrease in both the growth and metastatic potential of the labeled tumor in wild-type mice as compared to the GH mice. Moreover, reporter expression can also alter the tumor response to chemotherapy or targeted therapy in a context-dependent manner. Thus, the GH mice and experimental approaches vetted herein provide concept validation and a strategy for effective, reproducible preclinical evaluation of growth and response kinetics for traceable tumors. PMID:25369133

  18. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    ERIC Educational Resources Information Center

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  19. Improving Teacher-Prepared Computer Software for Better Language Teaching/Learning.

    ERIC Educational Resources Information Center

    Rhodes, Frances Gates

    A study is reported that examined the relative effectiveness of four computer-assisted-instruction (CAI) manners of presentation and response for teaching irregular verbs to English/Spanish bilingual students in South Texas. Each of 4 types of CAI presentation gave the same 46 selected irregular verbs in context to fifth-graders in 4 subject…

  20. Integrating Mathematical Modeling for Undergraduate Pre-Service Science Education Learning and Instruction in Middle School Classrooms

    ERIC Educational Resources Information Center

    Carrejo, David; Robertson, William H.

    2011-01-01

    Computer-based mathematical modeling in physics is a process of constructing models of concepts and the relationships between them in the scientific characteristics of work. In this manner, computer-based modeling integrates the interactions of natural phenomenon through the use of models, which provide structure for theories and a base for…

  1. 26 CFR 1.179-5 - Time and manner of making election.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... desktop computer costing $1,500. On Taxpayer's 2003 Federal tax return filed on April 15, 2004, Taxpayer elected to expense under section 179 the full cost of the laptop computer and the full cost of the desktop... provided by the Internal Revenue Code, the regulations under the Code, or other guidance published in the...

  2. Climate Change Discourse in Mass Media: Application of Computer-Assisted Content Analysis

    ERIC Educational Resources Information Center

    Kirilenko, Andrei P.; Stepchenkova, Svetlana O.

    2012-01-01

    Content analysis of mass media publications has become a major scientific method used to analyze public discourse on climate change. We propose a computer-assisted content analysis method to extract prevalent themes and analyze discourse changes over an extended period in an objective and quantifiable manner. The method includes the following: (1)…

  3. Brain Activity Associated with Emoticons: An fMRI Study

    NASA Astrophysics Data System (ADS)

    Yuasa, Masahide; Saito, Keiichi; Mukawa, Naoki

    In this paper, we describe brain activities associated with emoticons by using fMRI. In communication over a computer network, we use abstract faces such as computer graphics (CG) avatars and emoticons. These faces convey users' emotions and enrich their communications. However, the manner in which these faces influence the mental process is as yet unknown. The human brain may perceive the abstract face in an entirely different manner, depending on its level of reality. We conducted an experiment using fMRI in order to investigate the effects of emoticons. The results show that the right inferior frontal gyrus, which is associated with nonverbal communication, is activated by emoticons. Since the emoticons were created to reflect real human facial expressions as accurately as possible, we believed that they would activate the right fusiform gyrus. However, this region was not found to be activated during the experiment. This finding is useful in understanding how abstract faces affect our behaviors and decision-making in communication over a computer network.

  4. Cellular computational platform and neurally inspired elements thereof

    DOEpatents

    Okandan, Murat

    2016-11-22

    A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.

  5. Load Forecasting Based Distribution System Network Reconfiguration -- A Distributed Data-Driven Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    In this paper, a network reconfiguration approach based on short-term load forecasting is proposed and solved in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
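
    The forecasting component alone can be sketched as follows, using scikit-learn's SVR on synthetic lagged load data; the SOCP relaxation and ADMM power-flow steps of the paper are not reproduced here, and the data, lag length, and hyperparameters are arbitrary assumptions.

      import numpy as np
      from sklearn.svm import SVR

      # Synthetic daily-periodic load; features are the previous 24 hourly values.
      rng = np.random.default_rng(0)
      hours = np.arange(24 * 60)
      load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

      LAGS = 24
      X = np.array([load[t - LAGS:t] for t in range(LAGS, len(load))])
      y = load[LAGS:]

      split = len(y) - 24                       # hold out the last day
      model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X[:split], y[:split])
      pred = model.predict(X[split:])           # hour-ahead forecasts for the held-out day
      print(f"hold-out MAE: {np.mean(np.abs(pred - y[split:])):.2f}")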

  6. Approximation methods for control of structural acoustics models with piezoceramic actuators

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.

    1993-01-01

    The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of linear quadratic regulator (LQR) state space control formulation are discussed and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
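
    For readers unfamiliar with the LQR machinery referred to here, the short Python/SciPy sketch below computes a standard finite-dimensional LQR feedback gain for an illustrative damped oscillator; it does not reproduce the structural-acoustics approximation or the unbounded input operator discussed in the paper.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Illustrative system: a lightly damped oscillator x' = A x + B u,
      # quadratic cost integral of x'Qx + u'Ru (values chosen arbitrarily).
      A = np.array([[0.0, 1.0],
                    [-4.0, -0.2]])
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])
      R = np.array([[0.1]])

      P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^{-1}B'P + Q = 0
      K = np.linalg.solve(R, B.T @ P)        # optimal state feedback: u = -K x
      print("LQR gain K =", K)
      print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))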

  7. Spectrally-Temporally Adapted Spectrally Modulated Spectrally Encoded (SMSE) Waveform Design for Coexistent CR-Based SDR Applications

    DTIC Science & Technology

    2010-03-01

    uses all available resources in some optimized manner. By further exploiting the design flexibility and computational efficiency of Orthogonal Frequency...in the following sections. 3.2.1 Estimation of PU Signal Statistics. The Estimate PU Signal Statistics function of Fig 3.4 is used to compute the...consecutive PU transmissions, and 4) the probability of transitioning from one transmission state to another. These statistics are then used to compute the

  8. Understanding and Improving Blind Students' Access to Visual Information in Computer Science Education

    ERIC Educational Resources Information Center

    Baker, Catherine M.

    2017-01-01

    Teaching people with disabilities tech skills empowers them to create solutions to problems they encounter and prepares them for careers. However, computer science is typically taught in a highly visual manner which can present barriers for people who are blind. The goal of this dissertation is to understand and decrease those barriers. The first…

  9. Computer Conferencing in Mathematics Classrooms: Distance Education--The Long and the Short of It.

    ERIC Educational Resources Information Center

    Lamb, Charles E.; Klemm, William R.

    One of the major goals of mathematics education reform efforts is for students to become more confident in their abilities. This paper suggests that computer conferencing provides a way to change classroom practice so that students can work together in a self-paced manner that builds self-esteem and confidence in mathematics. A pedagogical…

  10. An Alternative Method for Computing Unit Costs and Productivity Ratios. AIR 1984 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Winstead, Wayland H.; And Others

    An alternative measure for evaluating the performance of academic departments was studied. A comparison was made with the traditional manner for computing unit costs and productivity ratios: prorating the salary and effort of each faculty member to each course level based on the personal mix of courses taught. The alternative method used averaging…

  11. Computer-Assisted Instruction in the Basic Public Speaking Course: Issues of Development and Implementation.

    ERIC Educational Resources Information Center

    Vest, David; Tajchman, Ron

    A study explained the manner in which a computer-assisted tutorial was built and assessed the utility of the courseware. The tutorial was designed to demonstrate the efficacy of good organization in informing the audience about a topic and provide appropriate models for the presentation of the well-organized informative speech. The topic of the…

  12. Transportation planning and ITS : putting the pieces together

    DOT National Transportation Integrated Search

    1998-01-01

    Intelligent Transportation Systems (ITS) include the application of computer, electronics, and communications technologies and management strategies -- in an integrated manner -- providing traveler information to increase the safety and efficiency of...

  13. Contextual classification of multispectral image data: Approximate algorithm

    NASA Technical Reports Server (NTRS)

    Tilton, J. C. (Principal Investigator)

    1980-01-01

    An approximation to a classification algorithm incorporating spatial context information in a general, statistical manner is presented which is computationally less intensive. Classifications that are nearly as accurate are produced.

  14. Extinction from a rationalist perspective.

    PubMed

    Gallistel, C R

    2012-05-01

    The merging of the computational theory of mind and evolutionary thinking leads to a kind of rationalism, in which enduring truths about the world have become implicit in the computations that enable the brain to cope with the experienced world. The dead reckoning computation, for example, is implemented within the brains of animals as one of the mechanisms that enables them to learn where they are (Gallistel, 1990, 1995). It integrates a velocity signal with respect to a time signal. Thus, the manner in which position and velocity relate to one another in the world is reflected in the manner in which signals representing those variables are processed in the brain. I use principles of information theory and Bayesian inference to derive from other simple principles explanations for: (1) the failure of partial reinforcement to increase reinforcements to acquisition; (2) the partial reinforcement extinction effect; (3) spontaneous recovery; (4) renewal; (5) reinstatement; (6) resurgence (aka facilitated reacquisition). Like the principle underlying dead-reckoning, these principles are grounded in analytic considerations. They are the kind of enduring truths about the world that are likely to have shaped the brain's computations. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Fast-SNP: a fast matrix pre-processing algorithm for efficient loopless flux optimization of metabolic models

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast sparse null-space pursuit (SNP)—inspired by recent results on SNP. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
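
    The 'loop-law' ingredient itself is easy to illustrate: internal loops correspond to null-space vectors of the internal-reaction stoichiometric matrix. The Python sketch below computes such a basis for a toy network with SciPy; Fast-SNP's sparsification and directionality pruning are not reproduced.

      import numpy as np
      from scipy.linalg import null_space

      # Toy network: 3 metabolites (rows), 4 internal reactions (columns);
      # reactions R1, R2, R3 form a cycle A -> B -> C -> A.
      S = np.array([
          [-1,  0,  1, -1],   # A
          [ 1, -1,  0,  0],   # B
          [ 0,  1, -1,  0],   # C
      ])

      N = null_space(S)       # orthonormal basis of the loop (null) space
      print("loop-space dimension:", N.shape[1])
      print(np.round(N / np.max(np.abs(N)), 3))   # rescaled for readability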

  16. Solving Hard Computational Problems Efficiently: Asymptotic Parametric Complexity 3-Coloring Algorithm

    PubMed Central

    Martín H., José Antonio

    2013-01-01

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about a certain property of an object that can be present if and only if it adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to “efficiently” solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases: present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), albeit parametric. The only requirement is sufficient computational power, which is controlled by a parameter. Nevertheless, here it is proved that the probability of requiring a large value of this parameter to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretically expected results. PMID:23349711
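
    For contrast with the parametric method described above, a plain backtracking test for 3-colorability (exponential in the worst case, and not the paper's algorithm) looks like the Python sketch below.

      # Plain backtracking 3-colorability test (exponential worst case).
      def three_colorable(adj):
          n = len(adj)
          colors = [-1] * n

          def assign(v):
              if v == n:
                  return True
              for c in range(3):
                  if all(colors[u] != c for u in adj[v]):
                      colors[v] = c
                      if assign(v + 1):
                          return True
              colors[v] = -1
              return False

          return assign(0), colors

      # A 5-cycle: 3-colorable but not 2-colorable.
      adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
      ok, coloring = three_colorable(adj)
      print(ok, coloring)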

  17. Value-based decision making via sequential sampling with hierarchical competition and attentional modulation

    PubMed Central

    2017-01-01

    In principle, formal dynamical models of decision making hold the potential to represent fundamental computations underpinning value-based (i.e., preferential) decisions in addition to perceptual decisions. Sequential-sampling models such as the race model and the drift-diffusion model that are grounded in simplicity, analytical tractability, and optimality remain popular, but some of their more recent counterparts have instead been designed with an aim for more feasibility as architectures to be implemented by actual neural systems. Connectionist models are proposed herein at an intermediate level of analysis that bridges mental phenomena and underlying neurophysiological mechanisms. Several such models drawing elements from the established race, drift-diffusion, feedforward-inhibition, divisive-normalization, and competing-accumulator models were tested with respect to fitting empirical data from human participants making choices between foods on the basis of hedonic value rather than a traditional perceptual attribute. Even when considering performance at emulating behavior alone, more neurally plausible models were set apart from more normative race or drift-diffusion models both quantitatively and qualitatively despite remaining parsimonious. To best capture the paradigm, a novel six-parameter computational model was formulated with features including hierarchical levels of competition via mutual inhibition as well as a static approximation of attentional modulation, which promotes “winner-take-all” processing. Moreover, a meta-analysis encompassing several related experiments validated the robustness of model-predicted trends in humans’ value-based choices and concomitant reaction times. These findings have yet further implications for analysis of neurophysiological data in accordance with computational modeling, which is also discussed in this new light. PMID:29077746
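
    A stripped-down example of the model family discussed here is a two-accumulator race with mutual inhibition. The Python sketch below (illustrative parameter values, not the paper's six-parameter model) shows how a difference in hedonic value maps onto choice probabilities and reaction times.

      import random

      def simulate_trial(v_left, v_right, leak=0.1, inhibition=0.2, noise=0.3,
                         threshold=1.0, dt=0.01, max_t=5.0):
          a = b = 0.0
          t = 0.0
          while t < max_t:
              da = (v_left - leak * a - inhibition * b) * dt + random.gauss(0, noise) * dt ** 0.5
              db = (v_right - leak * b - inhibition * a) * dt + random.gauss(0, noise) * dt ** 0.5
              a, b = max(a + da, 0.0), max(b + db, 0.0)   # activations stay non-negative
              t += dt
              if a >= threshold or b >= threshold:
                  return ("left" if a >= b else "right"), t
          return ("left" if a >= b else "right"), max_t    # time-out guess

      # The left item has the higher hedonic value (0.9 vs 0.7).
      trials = [simulate_trial(0.9, 0.7) for _ in range(2000)]
      p_best = sum(choice == "left" for choice, _ in trials) / len(trials)
      mean_rt = sum(rt for _, rt in trials) / len(trials)
      print(f"P(choose higher-valued item) = {p_best:.2f}, mean RT = {mean_rt:.2f} s")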

  18. Fully Coupled Nonlinear Fluid Flow and Poroelasticity in Arbitrarily Fractured Porous Media: A Hybrid-Dimensional Computational Model

    NASA Astrophysics Data System (ADS)

    Jin, L.; Zoback, M. D.

    2017-10-01

    We formulate the problem of fully coupled transient fluid flow and quasi-static poroelasticity in arbitrarily fractured, deformable porous media saturated with a single-phase compressible fluid. The fractures we consider are hydraulically highly conductive, allowing discontinuous fluid flux across them; mechanically, they act as finite-thickness shear deformation zones prior to failure (i.e., nonslipping and nonpropagating), leading to "apparent discontinuity" in strain and stress across them. Local nonlinearity arising from pressure-dependent permeability of fractures is also included. Taking advantage of typically high aspect ratio of a fracture, we do not resolve transversal variations and instead assume uniform flow velocity and simple shear strain within each fracture, rendering the coupled problem numerically more tractable. Fractures are discretized as lower dimensional zero-thickness elements tangentially conforming to unstructured matrix elements. A hybrid-dimensional, equal-low-order, two-field mixed finite element method is developed, which is free from stability issues for a drained coupled system. The fully implicit backward Euler scheme is employed for advancing the fully coupled solution in time, and the Newton-Raphson scheme is implemented for linearization. We show that the fully discretized system retains a canonical form of a fracture-free poromechanical problem; the effect of fractures is translated to the modification of some existing terms as well as the addition of several terms to the capacity, conductivity, and stiffness matrices therefore allowing the development of independent subroutines for treating fractures within a standard computational framework. Our computational model provides more realistic inputs for some fracture-dominated poromechanical problems like fluid-induced seismicity.

  19. Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework.

    PubMed

    Gershman, Samuel J; Daw, Nathaniel D

    2017-01-03

    We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, theories of RL have largely involved procedural and semantic memory, the way in which knowledge about action values or world models extracted gradually from many experiences can drive choice. This focus on semantic memory leaves out many aspects of memory, such as episodic memory, related to the traces of individual events. We suggest that these two challenges are related. The computational challenge can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (a) efficiently approximate value functions over complex state spaces, (b) learn with very little data, and (c) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system.
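
    One simple way to make the proposal concrete is a nonparametric, episodic read-out of action values: store individual episodes and estimate the value of a new state-action pair as a similarity-weighted average of stored returns. The Python sketch below is a minimal instantiation of that idea; the class name and Gaussian kernel are made up for illustration, and it is not the authors' integrative framework.

      import numpy as np

      class EpisodicValueMemory:
          """Similarity-weighted read-out of returns from stored episodes."""

          def __init__(self, bandwidth=0.5):
              self.episodes = []            # list of (state_vector, action, return)
              self.bandwidth = bandwidth

          def store(self, state, action, ret):
              self.episodes.append((np.asarray(state, float), action, float(ret)))

          def value(self, state, action):
              state = np.asarray(state, float)
              num = den = 0.0
              for s, a, r in self.episodes:
                  if a != action:
                      continue
                  w = np.exp(-np.sum((s - state) ** 2) / (2 * self.bandwidth ** 2))
                  num += w * r
                  den += w
              return num / den if den > 0 else 0.0

      mem = EpisodicValueMemory()
      mem.store([0.0, 0.0], "left", 1.0)
      mem.store([1.0, 1.0], "left", 0.0)
      mem.store([0.0, 0.1], "right", 0.2)
      print(mem.value([0.1, 0.0], "left"))   # close to 1.0: the nearby episode dominates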

  20. Value-based decision making via sequential sampling with hierarchical competition and attentional modulation.

    PubMed

    Colas, Jaron T

    2017-01-01

    In principle, formal dynamical models of decision making hold the potential to represent fundamental computations underpinning value-based (i.e., preferential) decisions in addition to perceptual decisions. Sequential-sampling models such as the race model and the drift-diffusion model that are grounded in simplicity, analytical tractability, and optimality remain popular, but some of their more recent counterparts have instead been designed with an aim for more feasibility as architectures to be implemented by actual neural systems. Connectionist models are proposed herein at an intermediate level of analysis that bridges mental phenomena and underlying neurophysiological mechanisms. Several such models drawing elements from the established race, drift-diffusion, feedforward-inhibition, divisive-normalization, and competing-accumulator models were tested with respect to fitting empirical data from human participants making choices between foods on the basis of hedonic value rather than a traditional perceptual attribute. Even when considering performance at emulating behavior alone, more neurally plausible models were set apart from more normative race or drift-diffusion models both quantitatively and qualitatively despite remaining parsimonious. To best capture the paradigm, a novel six-parameter computational model was formulated with features including hierarchical levels of competition via mutual inhibition as well as a static approximation of attentional modulation, which promotes "winner-take-all" processing. Moreover, a meta-analysis encompassing several related experiments validated the robustness of model-predicted trends in humans' value-based choices and concomitant reaction times. These findings have yet further implications for analysis of neurophysiological data in accordance with computational modeling, which is also discussed in this new light.

  1. Geomagnetic Cutoff Rigidity Computer Program: Theory, Software Description and Example

    NASA Technical Reports Server (NTRS)

    Smart, D. F.; Shea, M. A.

    2001-01-01

    The access of charged particles to the earth from space through the geomagnetic field has been of interest since the discovery of the cosmic radiation. The early cosmic ray measurements found that cosmic ray intensity was ordered by the magnetic latitude and the concept of cutoff rigidity was developed. The pioneering work of Stoermer resulted in the theory of particle motion in the geomagnetic field, but the fundamental mathematical equations developed have 'no solution in closed form'. This difficulty has forced researchers to use the 'brute force' technique of numerical integration of individual trajectories to ascertain the behavior of trajectory families or groups. This requires that many of the trajectories must be traced in order to determine what energy (or rigidity) a charged particle must have to penetrate the magnetic field and arrive at a specified position. It turned out the cutoff rigidity was not a simple quantity but had many unanticipated complexities that required many hundreds if not thousands of individual trajectory calculations to solve. The accurate calculation of particle trajectories in the earth's magnetic field is a fundamental problem that limited the efficient utilization of cosmic ray measurements during the early years of cosmic ray research. As the power of computers has improved over the decades, the numerical integration procedure has grown more tractable, and magnetic field models of increasing accuracy and complexity have been utilized. This report is documentation of a general FORTRAN computer program to trace the trajectory of a charged particle of a specified rigidity from a specified position and direction through a model of the geomagnetic field.
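
    The brute-force procedure described here amounts to numerically integrating the Lorentz-force equation of motion through a geomagnetic field model. The Python sketch below traces a proton through a simple dipole approximation with a fixed-step RK4 integrator; it is not the documented FORTRAN program, and the constants, particle energy, and step size are rounded illustrative choices.

      import numpy as np

      RE = 6.371e6                      # Earth radius, m
      M = 8.0e15                        # dipole moment strength, T*m^3 (approximate)
      Q, MASS = 1.602e-19, 1.673e-27    # proton charge (C) and mass (kg)

      def b_dipole(r):
          # Cartesian field of a z-aligned dipole: B = M/r^5 * (3xz, 3yz, 3z^2 - r^2).
          x, y, z = r
          rr = np.linalg.norm(r)
          return M / rr**5 * np.array([3*x*z, 3*y*z, 3*z*z - rr*rr])

      def lorentz(state, qm):
          r, v = state[:3], state[3:]
          return np.concatenate([v, qm * np.cross(v, b_dipole(r))])

      def rk4_step(state, dt, qm):
          k1 = lorentz(state, qm)
          k2 = lorentz(state + 0.5 * dt * k1, qm)
          k3 = lorentz(state + 0.5 * dt * k2, qm)
          k4 = lorentz(state + dt * k3, qm)
          return state + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

      # Launch a 100 keV proton (non-relativistic for simplicity) at 2 Earth radii.
      speed = np.sqrt(2 * 100e3 * Q / MASS)
      state = np.concatenate([[2 * RE, 0.0, 0.0], speed * np.array([0.0, 0.5, 0.866])])
      qm, dt = Q / MASS, 1e-4
      for _ in range(20000):
          state = rk4_step(state, dt, qm)
      print("final position (Earth radii):", state[:3] / RE)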

  2. Clinical and mathematical introduction to computer processing of scintigraphic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goris, M.L.; Briandet, P.A.

    The authors state in their preface: "...we believe that there is no book yet available in which computing in nuclear medicine has been approached in a reasonable manner. This book is our attempt to correct the situation." The book is divided into four sections: (1) Clinical Applications of Quantitative Scintigraphic Analysis; (2) Mathematical Derivations; (3) Processing Methods of Scintigraphic Images; and (4) The (Computer) System. Section 1 has chapters on quantitative approaches to congenital and acquired heart diseases, nephrology and urology, and pulmonary medicine.

  3. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
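
    The global sensitivity analysis step can be illustrated independently of the ice-sheet model: the Python sketch below estimates first-order and total-effect Sobol indices for a toy quantity of interest with a Monte Carlo pick-freeze scheme (Saltelli and Jansen estimators). The QoI, parameter ranges, and sample size are arbitrary assumptions, and the authors' polynomial-chaos acceleration is not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)
      N, D = 100_000, 3

      def qoi(x):
          # Toy QoI: strongly sensitive to x0, mildly to x1, barely to x2.
          return 4.0 * x[:, 0] + x[:, 1] ** 2 + 0.1 * np.sin(x[:, 2])

      A, B = rng.uniform(-1, 1, (N, D)), rng.uniform(-1, 1, (N, D))
      fA, fB = qoi(A), qoi(B)
      var = np.concatenate([fA, fB]).var()

      for i in range(D):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                 # matrix A with column i taken from B
          fABi = qoi(ABi)
          S1 = np.mean(fB * (fABi - fA)) / var        # Saltelli first-order estimator
          ST = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total-effect estimator
          print(f"parameter {i}: first-order ~ {S1:.2f}, total-effect ~ {ST:.2f}")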

  4. Improved Hybrid Modeling of Spent Fuel Storage Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bibber, Karl van

    This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The “gold standard” method for radiation transport is Monte Carlo (MC) as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies. This is a benefit to the public by increasing accuracy for lower computational effort for many problems that have energy, security, and economic importance.

  5. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

    Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, the ability to operate in areas of high levels of source signal spatial complexity, and non-stationarity. This goal would not be obtainable if one were to adopt the pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps should be finer than that required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome to solve for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computations of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6x10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.

  6. A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Cioaca, Alexandru

    A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.

  7. The impact of Docker containers on the performance of genomic pipelines

    PubMed Central

    Palumbo, Emilio; Chatzou, Maria; Prieto, Pablo; Heuer, Michael L.; Notredame, Cedric

    2015-01-01

    Genomic pipelines consist of several pieces of third party software and, because of their experimental nature, frequent changes and updates are commonly necessary thus raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This makes it easy to distribute and execute pipelines in a portable manner across a wide range of computing platforms. Thus, the question that arises is to what extent the use of Docker containers might affect the performance of these pipelines. Here we address this question and conclude that Docker containers have only a minor impact on the performance of common genomic pipelines, which is negligible when the executed jobs are long in terms of computational time. PMID:26421241

  8. Target Trailing With Safe Navigation With Colregs for Maritime Autonomous Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki (Inventor); Aghazarian, Hrand (Inventor); Huntsberger, Terrance L. (Inventor); Howard, Andrew B. (Inventor); Wolf, Michael T. (Inventor); Zarzhitsky, Dimitri V. (Inventor)

    2014-01-01

    Systems and methods for operating autonomous waterborne vessels in a safe manner. The systems include hardware for identifying the locations and motions of other vessels, as well as the locations of stationary objects that represent navigation hazards. By applying a computational method that uses a maritime navigation algorithm for avoiding hazards and obeying COLREGS using Velocity Obstacles to the data obtained, the autonomous vessel computes a safe and effective path to be followed in order to accomplish a desired navigational end result, while operating in a manner so as to avoid hazards and to maintain compliance with standard navigational procedures defined by international agreement. The systems and methods have been successfully demonstrated on water with radar and stereo cameras as the perception sensors, and integrated with a higher level planner for trailing a maneuvering target.
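
    The velocity-obstacle idea at the heart of the method can be sketched compactly: a candidate own-ship velocity is unsafe if the relative velocity points into the cone of directions that pass within a safety radius of the obstacle. The Python function below is a minimal 2-D check under that definition; it is not the patented system and contains none of its COLREGS logic, and the function name, safety radius, and time horizon are illustrative assumptions.

      import math

      def violates_velocity_obstacle(own_pos, own_vel, obs_pos, obs_vel,
                                     radius, horizon=600.0):
          # Relative geometry: obstacle position and own-ship velocity relative to
          # the (assumed constant-velocity) obstacle.
          rel_p = (obs_pos[0] - own_pos[0], obs_pos[1] - own_pos[1])
          rel_v = (own_vel[0] - obs_vel[0], own_vel[1] - obs_vel[1])
          dist = math.hypot(*rel_p)
          if dist <= radius:
              return True                   # already inside the safety circle
          # Half-angle of the collision cone as seen from the own ship.
          half_angle = math.asin(radius / dist)
          bearing = math.atan2(rel_p[1], rel_p[0])
          heading = math.atan2(rel_v[1], rel_v[0])
          off_cone = abs((heading - bearing + math.pi) % (2 * math.pi) - math.pi)
          reaches_in_time = math.hypot(*rel_v) * horizon > dist - radius
          return off_cone < half_angle and reaches_in_time

      # Head-on: obstacle 1 km ahead, the two vessels closing at a combined 8 m/s.
      print(violates_velocity_obstacle((0, 0), (0, 4), (0, 1000), (0, -4), radius=50))  # True
      # A course alteration takes the relative velocity out of the collision cone.
      print(violates_velocity_obstacle((0, 0), (4, 4), (0, 1000), (0, -4), radius=50))  # False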

  9. The impact of Docker containers on the performance of genomic pipelines.

    PubMed

    Di Tommaso, Paolo; Palumbo, Emilio; Chatzou, Maria; Prieto, Pablo; Heuer, Michael L; Notredame, Cedric

    2015-01-01

    Genomic pipelines consist of several pieces of third party software and, because of their experimental nature, frequent changes and updates are commonly necessary thus raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This makes it easy to distribute and execute pipelines in a portable manner across a wide range of computing platforms. Thus, the question that arises is to what extent the use of Docker containers might affect the performance of these pipelines. Here we address this question and conclude that Docker containers have only a minor impact on the performance of common genomic pipelines, which is negligible when the executed jobs are long in terms of computational time.

  10. Modelling solid solutions with cluster expansion, special quasirandom structures, and thermodynamic approaches

    NASA Astrophysics Data System (ADS)

    Saltas, V.; Horlait, D.; Sgourou, E. N.; Vallianatos, F.; Chroneos, A.

    2017-12-01

    Modelling solid solutions is fundamental to understanding the properties of numerous materials important for applications in fields ranging from nanoelectronics to energy materials such as fuel cells, nuclear materials, and batteries, since systematic experimental characterization across the full composition range of a solid solution, and for a range of conditions, can be challenging. The main motivation of this review is to contribute to the community discussion of the applicability of methods that make the investigation of solid solutions computationally tractable. This is important because computational modelling is required to calculate numerous defect properties and to act synergistically with experiment in understanding these materials. The review examines two examples in detail: silicon germanium alloys and MAX phase solid solutions. Silicon germanium alloys are technologically important in nanoelectronic devices and are also relevant in light of recent advances in ternary and quaternary group IV and III-V semiconductor alloys. MAX phase solid solutions display a palette of ceramic and metallic properties, and it is anticipated that tuning them will enable applications ranging from the nuclear to the aerospace industries, as well as their use as precursors for particular MXenes. In the final part, a brief summary assesses the limitations and possibilities of the methodologies discussed, followed by a discussion of future directions and examples of solid solution systems that should prove fruitful to consider.

  11. Investigation of Coal-biomass Catalytic Gasification using Experiments, Reaction Kinetics and Computational Fluid Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaglia, Francine; Agblevor, Foster; Klein, Michael

    A collaborative effort involving experiments, kinetic modeling, and computational fluid dynamics (CFD) was used to understand co-gasification of coal-biomass mixtures. The overall goal of the work was to determine the key reactive properties for coal-biomass mixed fuels. Sub-bituminous coal was mixed with biomass feedstocks to determine the fluidization and gasification characteristics of hybrid poplar wood, switchgrass and corn stover. It was found that corn stover and poplar wood were the best feedstocks to use with coal. The novel approach of this project was the use of a red mud catalyst to improve gasification and lower gasification temperatures. An important result was the reduction of agglomeration of the biomass using the catalyst. An outcome of this work was the characterization of the chemical kinetics and reaction mechanisms of the co-gasification fuels, and the development of a set of models that can be integrated into other modeling environments. The multiphase flow code, MFIX, was used to simulate and predict the hydrodynamics and co-gasification, and results were validated with the experiments. The reaction kinetics modeling was used to develop a smaller set of reactions for tractable CFD calculations that represented the experiments. Finally, an efficient tool was developed, MCHARS, and coupled with MFIX to efficiently simulate the complex reaction kinetics.

  12. Extending the Stabilized Supralinear Network model for binocular image processing.

    PubMed

    Selby, Ben; Tripp, Bryan

    2017-06-01

    The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability. First, whereas the SSN receives abstract orientation signals as input, we extended it to receive images (through a linear-nonlinear stage), and found that the extended version behaved similarly. Secondly, whereas the SSN had previously been studied in a monocular context, we found that it could also reproduce data on interocular transfer of surround suppression. Finally, we reformulated the SSN as a convolutional neural network, and found that it scaled well on parallel hardware. These results provide additional support for the plausibility of the SSN as a model of lateral interactions in V1, and suggest that the SSN is well suited as a component of complex vision models. Future work will use the SSN to explore relationships between local network interactions and sophisticated vision processes in large networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging systems employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
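
    In its naive form, the statistical interpolation described above reduces to standard Gaussian process regression with a per-pixel noise term; the paper's contribution is an exact algorithm that exploits the sensor's grid structure to avoid the cubic cost. A minimal Python sketch of the naive O(N^3) version follows (illustrative only; the kernel choice and parameters are assumptions, not the authors' settings). The linear solve is the step the grid-structured algorithm accelerates.

      import numpy as np

      def gp_interpolate(x_obs, y_obs, noise_var, x_query, length_scale=1.5, signal_var=1.0):
          """Naive O(N^3) Gaussian-process interpolation of scattered pixel values.
          x_obs: (N, 2) observed pixel coordinates; y_obs: (N,) observed intensities;
          noise_var: (N,) per-pixel noise variance estimates; x_query: (M, 2) targets."""
          x_obs = np.asarray(x_obs, float)
          x_query = np.asarray(x_query, float)

          def rbf(a, b):
              d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
              return signal_var * np.exp(-0.5 * d2 / length_scale**2)

          K = rbf(x_obs, x_obs) + np.diag(noise_var)   # noisy covariance of observations
          Ks = rbf(x_query, x_obs)                     # cross-covariance query/observations
          alpha = np.linalg.solve(K, np.asarray(y_obs, float))  # the O(N^3) step
          return Ks @ alpha                            # posterior mean at the query pixels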

  14. Linear Models for Systematics and Nuisances

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.

    2017-12-01

    The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (i.e., stellar time-series, or high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data---and often the science data themselves---can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
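
    The marginalization mentioned above has a compact closed form: for a linear nuisance model y = A w + n with Gaussian prior w ~ N(0, Lambda) and noise covariance C, the weights integrate out to give y ~ N(0, A Lambda A^T + C). The Python sketch below evaluates the resulting marginal log-likelihood in a naive dense way (the Note discusses how this operation can often be made fast); the function and variable names are placeholders, not the authors' code.

      import numpy as np

      def marginal_loglike(y, A, Lam, C):
          """Log-likelihood of data y under y = A w + noise, with Gaussian prior
          w ~ N(0, Lam) on the linear weights and noise covariance C, after
          marginalizing the weights analytically."""
          K = A @ Lam @ A.T + C                        # marginal covariance of the data
          _, logdet = np.linalg.slogdet(K)
          return -0.5 * (y @ np.linalg.solve(K, y) + logdet
                         + y.size * np.log(2.0 * np.pi))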

  15. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753

  16. Soot Formation in Freely-Propagating Laminar Premixed Flames

    NASA Technical Reports Server (NTRS)

    Lin, K.-C.; Hassan, M. I.; Faeth, G. M.

    1997-01-01

    Soot formation within hydrocarbon-fueled flames is an important unresolved problem of combustion science. Thus, the present study is considering soot formation in freely-propagating laminar premixed flames, exploiting the microgravity environment to simplify measurements at the high-pressure conditions of interest for many practical applications. The findings of the investigation are relevant to reducing emissions of soot and continuum radiation from combustion processes, to improving terrestrial and spacecraft fire safety, and to developing methods of computational combustion, among others. Laminar premixed flames are attractive for studying soot formation because they are simple one-dimensional flows that are computationally tractable for detailed numerical simulations. Nevertheless, studying soot-containing burner-stabilized laminar premixed flames is problematical: spatial resolution and residence times are limited at the pressures of interest for practical applications, flame structure is sensitive to minor burner construction details so that experimental reproducibility is not very good, consistent burner behavior over the lengthy test programs needed to measure soot formation properties is hard to achieve, and burners have poor durability. Fortunately, many of these problems are mitigated for soot-containing, freely-propagating laminar premixed flames. The present investigation seeks to extend work in this laboratory for various soot processes in flames by observing soot formation in freely-propagating laminar premixed flames. Measurements are being made at both Normal Gravity (NG) and MicroGravity (MG), using a short-drop free-fall facility to provide MG conditions.

  17. Robust Multivariable Optimization and Performance Simulation for ASIC Design

    NASA Technical Reports Server (NTRS)

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application-specific-integrated-circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system due to the time required for testing and qualification severely limiting opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and to analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.

  18. Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.

    PubMed

    Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua

    2016-11-01

    This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.

  19. Modeling Terminal Velocity

    ERIC Educational Resources Information Center

    Brand, Neal; Quintanilla, John A.

    2013-01-01

    Using a simultaneously falling softball as a stopwatch, the terminal velocity of a whiffle ball can be obtained to surprisingly high accuracy with only common household equipment. This classroom activity engages students in an apparently daunting task that nevertheless is tractable, using a simple model and mathematical techniques at their…

  20. Yeast: An Experimental Organism for Modern Biology.

    ERIC Educational Resources Information Center

    Botstein, David; Fink, Gerald R.

    1988-01-01

    Discusses the applicability and advantages of using yeasts as popular and ideal model systems for studying and understanding eukaryotic biology at the cellular and molecular levels. Cites experimental tractability and the cooperative tradition of the research community of yeast biologists as reasons for this success. (RT)

  1. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, the new architectures must be capable of running the massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”

  2. Quantifying, Visualizing, and Monitoring Lead Optimization.

    PubMed

    Maynard, Andrew T; Roberts, Christopher D

    2016-05-12

    Although lead optimization (LO) is by definition a process, process-centric analysis and visualization of this important phase of pharmaceutical R&D has been lacking. Here we describe a simple statistical framework to quantify and visualize the progression of LO projects so that the vital signs of LO convergence can be monitored. We refer to the resulting visualizations generated by our methodology as the "LO telemetry" of a project. These visualizations can be automated to provide objective, holistic, and instantaneous analysis and communication of LO progression. This enhances the ability of project teams to more effectively drive LO process, while enabling management to better coordinate and prioritize LO projects. We present the telemetry of five LO projects comprising different biological targets and different project outcomes, including clinical compound selection, termination due to preclinical safety/tox, and termination due to lack of tractability. We demonstrate that LO progression is accurately captured by the telemetry. We also present metrics to quantify LO efficiency and tractability.

  3. Learning planar Ising models

    DOE PAGES

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...

    2016-12-01

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.

  4. Learning planar Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.

  5. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
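
    To make the two criteria concrete, the Python sketch below computes both measures for a reconstructed signal. Note that the PRD appears in the literature both with and without mean removal in the denominator, so the exact convention here is an assumption rather than the paper's definition.

      import numpy as np

      def rmse(x, x_rec):
          """Root-mean-square error, in the same physical units as the signal (e.g. microvolts)."""
          x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
          return np.sqrt(np.mean((x - x_rec) ** 2))

      def prd(x, x_rec, remove_mean=True):
          """Percentage root-mean-square difference, a relative (dimensionless) measure.
          Definitions differ on whether the signal mean is subtracted from the reference."""
          x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
          ref = x - x.mean() if remove_mean else x
          return 100.0 * np.linalg.norm(x - x_rec) / np.linalg.norm(ref)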

  6. Framework for cascade size calculations on random networks

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Schweitzer, Frank

    2018-04-01

    We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations, and, in case of threshold models, for arbitrary threshold distribution. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capture cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) Constant load models that cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not tractable by branching process approximations before. Our derivations cover the whole cascade dynamics, not only their steady state. This allows us to include interventions in time or further model complexity in the analysis.
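
    For contrast with the branching-process viewpoint that this framework generalizes, the Python sketch below iterates the standard tree approximation for a simple threshold cascade on a configuration-model network (degree distribution pk, response function F(m, k) giving the activation probability with m active neighbors). It is a conventional textbook-style calculation, not the paper's method, and the degree distribution and response function in the example are arbitrary assumptions.

      from math import comb

      def cascade_size(pk, response, iters=200):
          """Final fraction of active nodes for a threshold cascade on a
          configuration-model random network (standard tree approximation).
          pk[k]: probability that a node has degree k;
          response(m, k): probability that a degree-k node activates given m
          active neighbors (response(0, k) plays the role of the seed fraction)."""
          n_deg = len(pk)
          mean_k = sum(k * pk[k] for k in range(n_deg))
          q = 0.0                                  # P(neighbor active | parent inactive)
          for _ in range(iters):
              q = sum(
                  (k * pk[k] / mean_k) *           # degree distribution along a random edge
                  sum(comb(k - 1, m) * q**m * (1.0 - q)**(k - 1 - m) * response(m, k)
                      for m in range(k))
                  for k in range(1, n_deg))
          # expected fraction of active nodes given the converged edge probability q
          return sum(pk[k] *
                     sum(comb(k, m) * q**m * (1.0 - q)**(k - m) * response(m, k)
                         for m in range(k + 1))
                     for k in range(n_deg))

      # Illustrative example: 1% seeds, activation once >= 30% of neighbors are active.
      pk = [0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.1]
      resp = lambda m, k: 1.0 if (k > 0 and m / k >= 0.3) else 0.01
      print(cascade_size(pk, resp))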

  7. Methods, systems, and computer program products for network firewall policy optimization

    DOEpatents

    Fulp, Errin W [Winston-Salem, NC; Tarsa, Stephen J [Duxbury, MA

    2011-10-18

    Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.
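
    One simple way to realize such a policy-preserving reorder is a constrained bubble sort that only swaps adjacent rules when they match disjoint sets of packets, so the relative order of every pair of intersecting rules, and hence the policy, is unchanged. The Python sketch below illustrates this general idea; it is not the specific method claimed in the patent, and the rule representation and intersection test are placeholders.

      def reorder_rules(rules, intersects):
          """Sort firewall rules toward non-increasing match probability while
          preserving the policy: two rules are swapped past each other only when
          they do not intersect (match no common packets).
          rules: list of (rule_id, probability); intersects(a, b) -> bool."""
          rules = list(rules)
          changed = True
          while changed:                           # bubble sort with a precedence constraint
              changed = False
              for i in range(len(rules) - 1):
                  a, b = rules[i], rules[i + 1]
                  if b[1] > a[1] and not intersects(a[0], b[0]):
                      rules[i], rules[i + 1] = b, a
                      changed = True
          return rules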

  8. Nozzles for Focusing Aerosol Particles

    DTIC Science & Technology

    2009-10-01

    Fabrication of the nozzle with the desired shape was accomplished using EDM technology. First, a copper tungsten electrode was turned on a CNC lathe. The ... small (0.9-mm diameter). The external portions of the nozzles were machined in a more conventional manner using computer numerical control (CNC) lathes and milling machines running programs written by computer aided machining (CAM) software. The close tolerance of concentricity of the two

  9. Non-Genomic Origins of Proteins and Metabolism

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2003-01-01

    It is proposed that evolution of inanimate matter to cells endowed with a nucleic acid-based coding of genetic information was preceded by an evolutionary phase, in which peptides not coded by nucleic acids were able to self-organize into networks capable of evolution towards increasing metabolic complexity. Recent findings that truly different, simple peptides (Keefe and Szostak, 2001) can perform the same function (such as ATP binding) provide experimental support for this mechanism of early protobiological evolution. The central concept underlying this mechanism is that the reproduction of cellular functions alone was sufficient for self-maintenance of protocells, and that self-replication of macromolecules was not required at this stage of evolution. The precise transfer of information between successive generations of the earliest protocells was unnecessary and, possibly, undesirable. The key requirement in the initial stage of protocellular evolution was an ability to rapidly explore a large number of protein sequences in order to discover a set of molecules capable of supporting self-maintenance and growth of protocells. Undoubtedly, the essential protocellular functions were carried out by molecules not nearly as efficient or as specific as contemporary proteins. Many, potentially unrelated sequences could have performed each of these functions at an evolutionarily acceptable level. As evolution progressed, however, proteins must have performed their functions with increasing efficiency and specificity. This, in turn, put additional constraints on protein sequences and the fraction of proteins capable of performing their functions at the required level decreased. At some point, the likelihood of generating a sufficiently efficient set of proteins through a non-coded synthesis was so small that further evolution was not possible without storing information about the sequences of these proteins. Beyond this point, further evolution required coupling between proteins and informational polymers that is characteristic to all known forms of life. The emergence of such coupling must be postulated in any scenario of the origin of life, no matter whether it starts with RNA or proteins. To examine the evolutionary potential of non-genomic systems, a simple, computationally tractable model, which is still capable of capturing the essential features of the real system, has been studied computationally. Both constructive and destructive processes have been introduced into the model in a stochastic manner. Instead of assuming random reaction sets, only a suite of protobiologically plausible reactions has been considered. Peptides have been explicitly considered as protoenzymes and their catalytic efficiencies have been assigned on the basis of biochemical principles and experimental estimates. Simulations have been carried out using a novel approach (The Next Reaction Method) that is appropriate even for very low concentrations of reactants. Studies have focused on global autocatalytic processes and their diversity.

  10. Oscillatory threshold logic.

    PubMed

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.

  11. Parallelisation study of a three-dimensional environmental flow model

    NASA Astrophysics Data System (ADS)

    O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank

    2014-03-01

    There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of conjugate gradient posed particular challenges due to implicit non-local communication posing a hindrance to standard domain partitioning schemes; a number of techniques are discussed to address this in a feasible, computationally efficient manner. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
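
    As an illustration of the load-balancing issue raised above (and only an illustration: the paper's actual partitioning scheme is its own), a column-wise decomposition can be weighted by the number of wet cells rather than by the number of columns, so that ranks covering mostly land do not sit idle. A hypothetical Python sketch:

      import numpy as np

      def partition_columns(water_mask, n_ranks):
          """Split the columns of a water/land mask into n_ranks contiguous blocks
          holding roughly equal numbers of active (water) cells, instead of equal
          numbers of columns.
          water_mask: 2D boolean array, True = water.
          Returns a list of (start, stop) column ranges, one per rank."""
          per_col = water_mask.sum(axis=0)          # wet cells in each column
          cum = np.cumsum(per_col)
          total = cum[-1]
          bounds, start = [], 0
          for r in range(1, n_ranks):
              target = total * r / n_ranks          # cumulative wet-cell quota for rank r
              stop = max(int(np.searchsorted(cum, target)) + 1, start + 1)
              bounds.append((start, stop))
              start = stop
          bounds.append((start, water_mask.shape[1]))
          return bounds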

  12. Hi-Tech Unrevealed.

    ERIC Educational Resources Information Center

    Vernooy, D. Andrew; Alter, Kevin

    2001-01-01

    Presents design features of the University of Texas' Applied Computational Engineering and Sciences Building and discusses how institutions can guide the character of their architecture without subverting the architects' responsibility to confront their contemporary culture in a critical manner. (GR)

  13. Transformation of Galilean satellite parameters to J2000

    NASA Astrophysics Data System (ADS)

    Lieske, J. H.

    1998-09-01

    The so-called galsat software has the capability of computing Earth-equatorial coordinates of Jupiter's Galilean satellites in an arbitrary reference frame, not just that of B1950. The 50 parameters which define the theory of motion of the Galilean satellites (Lieske 1977, Astron. Astrophys. 56, 333--352) could also be transformed in a manner such that the same galsat computer program can be employed to compute rectangular coordinates with their values being in the J2000 system. One of the input parameters (ε_27) is related to the obliquity of the ecliptic and its value is normally zero in the B1950 frame. If that parameter is changed from 0 to -0.0002771, and if other input parameters are changed in a prescribed manner, then the same galsat software can be employed to produce ephemerides on the J2000 system for any of the ephemerides which employ the galsat parameters, such as those of Arlot (1982), Vasundhara (1994) and Lieske. In this paper we present the parameters whose values must be altered in order for the software to produce coordinates directly in the J2000 system.

  14. Algal Biofuels | Bioenergy | NREL

    Science.gov Websites

    growth conditions in a laboratory setting, particularly when strains are maintained under constant other products during phototrophic growth. NREL bioethylene research received a 2015 R&D 100 Award and winter crops, growth on either salt water or fresh water, and genetic tractability. For more

  15. An Empirical Polarizable Force Field Based on the Classical Drude Oscillator Model: Development History and Recent Applications

    PubMed Central

    2016-01-01

    Molecular mechanics force fields that explicitly account for induced polarization represent the next generation of physical models for molecular dynamics simulations. Several methods exist for modeling induced polarization, and here we review the classical Drude oscillator model, in which electronic degrees of freedom are modeled by charged particles attached to the nuclei of their core atoms by harmonic springs. We describe the latest developments in Drude force field parametrization and application, primarily in the last 15 years. Emphasis is placed on the Drude-2013 polarizable force field for proteins, DNA, lipids, and carbohydrates. We discuss its parametrization protocol, development history, and recent simulations of biologically interesting systems, highlighting specific studies in which induced polarization plays a critical role in reproducing experimental observables and understanding physical behavior. As the Drude oscillator model is computationally tractable and available in a wide range of simulation packages, it is anticipated that use of these more complex physical models will lead to new and important discoveries of the physical forces driving a range of chemical and biological phenomena. PMID:26815602

  16. A mathematical model for simulating noise suppression of lined ejectors

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.

    1994-01-01

    A mathematical model containing the essential features embodied in the noise suppression of lined ejectors is presented. Although some simplification of the physics is necessary to render the model mathematically tractable, the current model is the most versatile and technologically advanced at the current time. A system of linearized equations and the boundary conditions governing the sound field are derived starting from the equations of fluid dynamics. A nonreflecting boundary condition is developed. In view of the complex nature of the equations, a parametric study requires the use of numerical techniques and modern computers. A finite element algorithm that solves the differential equations coupled with the boundary condition is then introduced. The numerical method results in a matrix equation with several hundred thousand degrees of freedom that is solved efficiently on a supercomputer. The model is validated by comparing results either with exact solutions or with approximate solutions from other works. In each case, excellent correlations are obtained. The usefulness of the model as an optimization tool and the importance of variable impedance liners as a mechanism for achieving broadband suppression within a lined ejector are demonstrated.

  17. Transient probabilities for queues with applications to hospital waiting list management.

    PubMed

    Joy, Mark; Jones, Simon

    2005-08-01

    In this paper we study queuing systems within the NHS. Recently imposed government performance targets lead NHS executives to investigate and instigate alternative management strategies, thereby imposing structural changes on the queues. Under such circumstances, it is most unlikely that such systems are in equilibrium. It is crucial, in our opinion, to recognise this state of affairs in order to make a balanced assessment of the role of queue management in the modern NHS. From a mathematical perspective it should be emphasised that measures of the state of a queue based upon the assumption of statistical equilibrium (a pervasive methodology in the study of queues) are simply wrong in the above scenario. To base strategic decisions around such ideas is therefore highly questionable, and it is one of the purposes of this paper to offer alternatives: we present some (recent) research whose results generate performance measures and measures of risk, for example, of waiting times growing unacceptably large. We emphasise that these results concern the transient behaviour of the queueing model; there is no assumption of statistical equilibrium. We also demonstrate that our results are computationally tractable.
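
    To make the distinction concrete, transient state probabilities of a simple queue can be obtained directly by integrating the birth-death master equation forward in time, with no appeal to a steady state. The Python sketch below does this for a hypothetical M/M/1/K queue; the paper's models and methods are more general, and the rates, horizon, and risk threshold here are illustrative assumptions.

      import numpy as np

      def transient_queue_probs(lam, mu, K, p0, t, dt=1e-3):
          """Transient state probabilities of an M/M/1/K queue (arrival rate lam,
          service rate mu, at most K customers) by forward-Euler integration of
          the birth-death master equation dp/dt = Q^T p, starting from p0."""
          Q = np.zeros((K + 1, K + 1))
          for n in range(K + 1):
              if n < K:
                  Q[n, n + 1] = lam                # arrival: n -> n+1
              if n > 0:
                  Q[n, n - 1] = mu                 # service completion: n -> n-1
              Q[n, n] = -Q[n].sum()
          p = np.asarray(p0, float)
          for _ in range(int(t / dt)):
              p = p + dt * (Q.T @ p)
          return p

      # Risk measure: probability the queue length exceeds 30 at time t = 20,
      # starting from an empty queue.
      p_t = transient_queue_probs(lam=0.9, mu=1.0, K=50, p0=np.eye(51)[0], t=20.0)
      print(p_t[31:].sum())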

  18. A new strategy for genome assembly using short sequence reads and reduced representation libraries.

    PubMed

    Young, Andrew L; Abaan, Hatice Ozel; Zerbino, Daniel; Mullikin, James C; Birney, Ewan; Margulies, Elliott H

    2010-02-01

    We have developed a novel approach for using massively parallel short-read sequencing to generate fast and inexpensive de novo genomic assemblies comparable to those generated by capillary-based methods. The ultrashort (<100 base) sequences generated by this technology pose specific biological and computational challenges for de novo assembly of large genomes. To account for this, we devised a method for experimentally partitioning the genome using reduced representation (RR) libraries prior to assembly. We use two restriction enzymes independently to create a series of overlapping fragment libraries, each containing a tractable subset of the genome. Together, these libraries allow us to reassemble the entire genome without the need of a reference sequence. As proof of concept, we applied this approach to sequence and assemble the majority of the 125-Mb Drosophila melanogaster genome. We subsequently demonstrate the accuracy of our assembly method with meaningful comparisons against the currently available D. melanogaster reference genome (dm3). The ease of assembly and accuracy for comparative genomics suggest that our approach will scale to future mammalian genome-sequencing efforts, saving both time and money without sacrificing quality.

  19. Electric Field Fluctuations in Water

    NASA Astrophysics Data System (ADS)

    Thorpe, Dayton; Limmer, David; Chandler, David

    2013-03-01

    Charge transfer in solution, such as autoionization and ion pair dissociation in water, is governed by rare electric field fluctuations of the solvent. Knowing the statistics of such fluctuations can help explain the dynamics of these rare events. Trajectories short enough to be tractable by computer simulation are virtually certain not to sample the large fluctuations that promote rare events. Here, we employ importance sampling techniques with classical molecular dynamics simulations of liquid water to study statistics of electric field fluctuations far from their means. We find that the distributions of electric fields located on individual water molecules are not in general Gaussian. Near the mean this non-Gaussianity is due to the internal charge distribution of the water molecule. Further from the mean, however, there is a previously unreported Bjerrum-like defect that stabilizes certain large fluctuations out of equilibrium. As expected, differences in electric fields acting between molecules are Gaussian to a remarkable degree. By studying these differences, though, we are able to determine what configurations result not only in large electric fields, but also in electric fields with long spatial correlations that may be needed to promote charge separation.

  20. Authentication Without Secrets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierson, Lyndon G.; Robertson, Perry J.

    This work examines a new approach to authentication, which is the most fundamental security primitive that underpins all cyber security protections. Current Internet authentication techniques require the protection of one or more secret keys along with the integrity protection of the algorithms/computations designed to prove possession of the secret without actually revealing it. Protecting a secret requires physical barriers or encryption with yet another secret key. The reason to strive for "Authentication without Secret Keys" is that protecting secrets (even small ones only kept in a small corner of a component or device) is much harder than protecting the integrity of information that is not secret. Promising methods are examined for authentication of components, data, programs, network transactions, and/or individuals. The successful development of authentication without secret keys will enable far more tractable system security engineering for high exposure, high consequence systems by eliminating the need for brittle protection mechanisms to protect secret keys (such as are now protected in smart cards, etc.). This paper is a re-release of SAND2009-7032 with new figures and numerous edits.

  1. Assessing the system value of optimal load shifting

    DOE PAGES

    Merrick, James; Ye, Yinyu; Entriken, Bob

    2017-04-30

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.

  2. Assessing the system value of optimal load shifting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrick, James; Ye, Yinyu; Entriken, Bob

    We analyze a competitive electricity market, where consumers exhibit optimal load shifting behavior to maximize utility and producers/suppliers maximize their profit under supply capacity constraints. The associated computationally tractable formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Through analytic and numeric assessment of the model, we assess the equilibrium value of optimal electricity load shifting, including how the value changes as more electricity consumers adopt associated technologies. For our illustrative numerical case, derived from the Current Trends scenario of the ERCOT Long Term System Assessment, the average energy arbitrage value per ERCOT customer of optimal load shifting technologies is estimated to be $3 for the 2031 scenario year. We assess the sensitivity of this result to the flexibility of load, along with its relationship to the deployment of renewables. Finally, the model presented can also be a starting point for designing system operation infrastructure that communicates with the devices that schedule loads in response to price signals.

  3. Active Subspaces for Wind Plant Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Ryan N; Quick, Julian; Dykes, Katherine L

    Understanding the uncertainty in wind plant performance is crucial to cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
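
    For readers unfamiliar with the technique, an active subspace is typically estimated from sampled gradients of the output with respect to the uncertain inputs: the dominant eigenvectors of the average outer product of gradients define the reduced coordinates on which a surrogate is then fit. A generic Python sketch follows (not the authors' code; the gradient samples and the subsequent quadratic fit are assumed to come from elsewhere).

      import numpy as np

      def active_subspace(grads, n_keep=1):
          """Estimate an active subspace from sampled gradients of the quantity of
          interest (e.g., plant power) with respect to the uncertain parameters.
          grads: (n_samples, n_params) array of gradient samples.
          Returns the eigenvalues (descending) and the leading n_keep eigenvectors."""
          C = grads.T @ grads / grads.shape[0]     # average outer product of gradients
          eigvals, eigvecs = np.linalg.eigh(C)
          order = np.argsort(eigvals)[::-1]        # sort eigenvalues in descending order
          return eigvals[order], eigvecs[:, order[:n_keep]]

      # Reduced coordinate for surrogate modeling: project inputs onto the subspace,
      # e.g. y = X @ W1, then fit a quadratic polynomial of y to the power output.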

  4. Simulation-Guided 3D Nanomanufacturing via Focused Electron Beam Induced Deposition

    DOE PAGES

    Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.; ...

    2016-06-10

    Focused electron beam induced deposition (FEBID) is one of the few techniques that enables direct-write synthesis of free-standing 3D nanostructures. While the fabrication of simple architectures such as vertical or curving nanowires has been achieved by simple trial and error, processing complex 3D structures is not tractable with this approach. This is due, in part, to the dynamic interplay between electron–solid interactions and the transient spatial distribution of absorbed precursor molecules on the solid surface. Here, we demonstrate the ability to controllably deposit 3D lattice structures at the micro/nanoscale, which have received recent interest owing to superior mechanical and optical properties. Moreover, a hybrid Monte Carlo–continuum simulation is briefly overviewed, and subsequently FEBID experiments and simulations are directly compared. Finally, a 3D computer-aided design (CAD) program is introduced, which generates the beam parameters necessary for FEBID by both simulation and experiment. In using this approach, we demonstrate the fabrication of various 3D lattice structures using Pt-, Au-, and W-based precursors.

  5. Generalized activity equations for spiking neural network dynamics.

    PubMed

    Buice, Michael A; Chow, Carson C

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales: the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

  6. Hierarchical Model for the Analysis of Scattering Data of Complex Materials

    DOE PAGES

    Oyedele, Akinola; Mcnutt, Nicholas W.; Rios, Orlando; ...

    2016-05-16

    Interpreting the results of scattering data for complex materials with a hierarchical structure in which at least one phase is amorphous presents a significant challenge. Often the interpretation relies on the use of large-scale molecular dynamics (MD) simulations, in which a structure is hypothesized and from which a radial distribution function (RDF) can be extracted and directly compared against an experimental RDF. This computationally intensive approach presents a bottleneck in the efficient characterization of the atomic structure of new materials. Here, we propose and demonstrate an approach for a hierarchical decomposition of the RDF in which MD simulations are replaced by a combination of tractable models and theory at the atomic scale and the mesoscale, which when combined yield the RDF. We apply the procedure to a carbon composite, in which graphitic nanocrystallites are distributed in an amorphous domain. We compare the model with the RDF from both MD simulation and neutron scattering data. Ultimately, this procedure is applicable for understanding the fundamental processing-structure-property relationships in complex magnetic materials.

  7. Adaptive control using neural networks and approximate models.

    PubMed

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
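
    Schematically, in the NARMA-L2-style notation commonly associated with this line of work (the exact definitions are in the paper), the approximation makes the model affine in the current control, which yields the controller directly:

      \hat{y}(k+d) = f_0\!\left[y(k),\dots,y(k-n+1),\,u(k-1),\dots,u(k-n+1)\right]
                   + g_0\!\left[y(k),\dots,y(k-n+1),\,u(k-1),\dots,u(k-n+1)\right] u(k),
      \qquad
      u(k) = \frac{y^{*}(k+d) - f_0[\,\cdot\,]}{g_0[\,\cdot\,]}

    where f_0 and g_0 are realized by neural networks and y^{*} is the reference signal to be tracked.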

  8. Enhancing Privacy in Participatory Sensing Applications with Multidimensional Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrest, Stephanie; He, Wenbo; Groat, Michael

    2013-01-01

    Participatory sensing applications rely on individuals to share personal data to produce aggregated models and knowledge. In this setting, privacy concerns can discourage widespread adoption of new applications. We present a privacy-preserving participatory sensing scheme based on negative surveys for both continuous and multivariate categorical data. Without relying on encryption, our algorithms enhance the privacy of sensed data in an energy and computation efficient manner. Simulations and implementation on Android smart phones illustrate how multidimensional data can be aggregated in a useful and privacy-enhancing manner.
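
    The basic mechanism behind negative surveys, for a single categorical attribute (the paper extends the idea to continuous and multidimensional data), is that each participant reports one category that does not apply to them, chosen uniformly at random, and the aggregator inverts the expected counts. A minimal Python sketch with illustrative numbers:

      import numpy as np

      def negative_survey_estimate(neg_counts):
          """Estimate true category counts from a negative survey, in which each
          participant reports one category, chosen uniformly at random, that does
          NOT apply to them.  neg_counts[i] = number of reports naming category i.
          Unbiased estimator: t_i = n - (c - 1) * r_i."""
          r = np.asarray(neg_counts, float)
          n, c = r.sum(), len(r)
          return n - (c - 1) * r

      # Example: 1000 participants, 4 categories.
      print(negative_survey_estimate([300, 260, 240, 200]))   # -> [100. 220. 280. 400.]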

  9. Novel schemes for measurement-based quantum computation.

    PubMed

    Gross, D; Eisert, J

    2007-06-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.

  10. Computationally-Predicted AOPs and Systems Toxicology

    EPA Science Inventory

    The Adverse Outcome Pathway has emerged as an internationally harmonized mechanism for organizing biological information in a chemical agnostic manner. This construct is valuable for interpreting the results from high-throughput toxicity (HTT) assessment by providing a mechanisti...

  11. Structural design and stress analysis program for advanced composite filament-wound axisymmetric pressure vessels (COMTANK)

    NASA Technical Reports Server (NTRS)

    Knoell, A. C.

    1972-01-01

    A computer program has been specifically developed to handle, in an efficient and cost-effective manner, planar wound pressure vessels fabricated of either boron-epoxy or graphite-epoxy advanced composite materials.

  12. Lumped Parameter Model (LPM) for Light-Duty Vehicles

    EPA Pesticide Factsheets

    EPA’s Lumped Parameter Model (LPM) is a free, desktop computer application that estimates the effectiveness (CO2 Reduction) of various technology combinations or “packages,” in a manner that accounts for synergies between technologies.

  13. Analysis of Compton continuum measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gold, R.; Olson, I. K.

    1970-01-01

    Five computer programs: COMPSCAT, FEND, GABCO, DOSE, and COMPLOT, have been developed and used for the analysis and subsequent reduction of measured energy distributions of Compton recoil electrons to continuous gamma spectra. In addition to detailed descriptions of these computer programs, the relationship amongst these codes is stressed. The manner in which these programs function is illustrated by tracing a sample measurement through a complete cycle of the data-reduction process.

  14. Asymptotic Normality of Poly-T Densities with Bayesian Applications.

    DTIC Science & Technology

    1987-10-01

    be extended to the case of many t-like factors in a straightforward manner. Obviously, the computational complexity will increase rapidly as the number ...

  15. The Development of a Novel High Throughput Computational Tool for Studying Individual and Collective Cellular Migration

    PubMed Central

    Chapnick, Douglas A.; Jacobsen, Jeremy; Liu, Xuedong

    2013-01-01

    Understanding how cells migrate individually and collectively during development and cancer metastasis can be significantly aided by a computational tool to accurately measure not only cellular migration speed, but also migration direction and changes in migration direction in a temporal and spatial manner. We have developed such a tool for cell migration researchers, named Pathfinder, which is capable of simultaneously measuring the migration speed, migration direction, and changes in migration directions of thousands of cells both instantaneously and over long periods of time from fluorescence microscopy data. Additionally, we demonstrate how the Pathfinder software can be used to quantify collective cell migration. The novel capability of the Pathfinder software to measure the changes in migration direction of large populations of cells in a spatiotemporal manner will aid cellular migration research by providing a robust method for determining the mechanisms of cellular guidance during individual and collective cell migration. PMID:24386097
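
    As a rough illustration of the kind of per-cell statistics such a tool reports, the sketch below computes instantaneous speed, migration direction, and frame-to-frame change of direction from a single tracked (x, y) trajectory sampled at a fixed interval. It is a generic calculation on assumed inputs, not the Pathfinder implementation.

```python
import numpy as np

def migration_metrics(xy, dt):
    """xy: (T, 2) array of one cell's tracked positions; dt: frame interval.
    Returns per-step speed, heading (radians), and change of heading (radians)."""
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)                 # displacement per frame
    speed = np.linalg.norm(steps, axis=1) / dt  # instantaneous speed
    heading = np.arctan2(steps[:, 1], steps[:, 0])
    # Wrap heading differences into (-pi, pi] so turns are measured correctly.
    dheading = np.angle(np.exp(1j * np.diff(heading)))
    return speed, heading, dheading

# Toy usage: a synthetic cell drifting outward while turning slightly each frame.
t = np.arange(30)
track = np.column_stack([np.cos(0.05 * t) * t, np.sin(0.05 * t) * t])
speed, heading, turns = migration_metrics(track, dt=1.0)
print("mean speed:", speed.mean(), "mean |turn| per frame:", np.abs(turns).mean())
```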

  16. A brief perspective on computational electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nachman, A.

    1996-06-01

    There is a growing interest in many quarters in acquiring the ability to predict all manner of electromagnetic (EM) effects. These effects include radar scattering attributes of objects (airplanes, missiles, tanks, ships, etc.); the mutual interference of a multitude of antennas on board a single aircraft or ship; the performance of integrated circuits (IC); the propagation of waves (radio and radar) over long distances with the help or hindrance of complicated topography and ionospheric/atmospheric ducting; and the propagation of pulses through dispersive media (soil, treetops, or concrete) to detect pollutants or hidden targets, or to assess the health of runways. All of the above require extensive computation and, despite the fact that Maxwell's equations are linear in all these cases, codes do not exist which will do the job in a timely and error-controlled manner. This report briefly discusses how this can be rectified. 16 refs.

  17. Control of joint motion simulators for biomechanical research

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1992-01-01

    The authors present a hierarchical adaptive algorithm for controlling upper extremity human joint motion simulators. A joint motion simulator is a computer-controlled, electromechanical system which permits the application of forces to the tendons of a human cadaver specimen in such a way that the cadaver joint under study achieves a desired motion in a physiologic manner. The proposed control scheme does not require knowledge of the cadaver specimen dynamic model, and solves on-line the indeterminate problem which arises because human joints typically possess more actuators than degrees of freedom. Computer simulation results are given for an elbow/forearm system and wrist/hand system under hierarchical control. The results demonstrate that any desired normal joint motion can be accurately tracked with the proposed algorithm. These simulation results indicate that the controller resolved the redundancy of the indeterminate problem in a physiologic manner, and show that the control scheme was robust to parameter uncertainty and to sensor noise.
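
    The indeterminacy mentioned here (more tendon actuators than joint degrees of freedom) is commonly resolved by picking, from the infinitely many tendon-force vectors that produce the required joint torques, one that is physiologically plausible, for example non-negative and of small norm. The sketch below shows that generic idea with a non-negative least-squares allocation under a hypothetical moment-arm matrix; it is not the hierarchical adaptive controller described in this record.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical moment-arm matrix: 2 joint DOF driven by 4 tendons (meters).
R = np.array([[0.020, -0.015,  0.010, -0.012],
              [0.005,  0.004, -0.018,  0.016]])
tau_desired = np.array([1.5, -0.8])  # desired joint torques (N*m)

# Among all non-negative tendon forces with R @ f ~= tau, pick the least-squares one
# (tendons can only pull, hence the non-negativity constraint).
f, residual = nnls(R, tau_desired)

print("tendon forces (N):", np.round(f, 2))
print("achieved torques :", np.round(R @ f, 3), "residual:", round(residual, 6))
```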

  18. Computer program CDCID: an automated quality control program using CDC update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, G.L.; Aguilar, F.

    1984-04-01

    A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.

  19. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.

  20. Computers in health care for the 21st century.

    PubMed

    O'Desky, R I; Ball, M J; Ball, E E

    1990-03-01

    As the world enters the last decade of the 20th Century, there is a great deal of speculation about the effect of computers on the future delivery of health care. In this article, the authors attempt to identify some of the evolving computer technologies and anticipate what effect they will have by the year 2000. Rather than listing potential accomplishments, each of the affected areas (hardware, software, health care systems, and communications) is presented in an evolutionary manner so the reader can better appreciate where we have been and where we are going.

  1. A meteorologically driven maize stress indicator model

    NASA Technical Reports Server (NTRS)

    Taylor, T. W.; Ravet, F. W. (Principal Investigator)

    1981-01-01

    A maize soil moisture and temperature stress model is described which was developed to serve as a meteorological data filter to alert commodity analysts to potential stress conditions in the major maize-producing areas of the world. The model also identifies optimum climatic conditions and planting/harvest problems associated with poor tractability.

  2. Brachypodium as a model for the grasses: today and the future

    USDA-ARS?s Scientific Manuscript database

    Over the past several years, Brachypodium distachyon (Brachypodium) has emerged as a tractable model system to study biological questions relevant to the grasses. To place its relevance in the larger context of plant biology, we outline here the expanding adoption of Brachypodium as a model grass an...

  3. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  4. Contextual Fear Conditioning in Zebrafish

    ERIC Educational Resources Information Center

    Kenney, Justin W.; Scott, Ian C.; Josselyn, Sheena A.; Frankland, Paul W.

    2017-01-01

    Zebrafish are a genetically tractable vertebrate that hold considerable promise for elucidating the molecular basis of behavior. Although numerous recent advances have been made in the ability to precisely manipulate the zebrafish genome, much less is known about many aspects of learning and memory in adult fish. Here, we describe the development…

  5. Different Parameters Support Generalization and Discrimination Learning in "Drosophila" at the Flight Simulator

    ERIC Educational Resources Information Center

    Brembs, Bjorn; de Ibarra, Natalie Hempel

    2006-01-01

    We have used a genetically tractable model system, the fruit fly "Drosophila melanogaster" to study the interdependence between sensory processing and associative processing on learning performance. We investigated the influence of variations in the physical and predictive properties of color stimuli in several different operant-conditioning…

  6. A Vector Representation for Thermodynamic Relationships

    ERIC Educational Resources Information Center

    Pogliani, Lionello

    2006-01-01

    The existing vector formalism method for thermodynamic relationship maintains tractability and uses accessible mathematics, which can be seen as a diverting and entertaining step into the mathematical formalism of thermodynamics and as an elementary application of matrix algebra. The method is based on ideas and operations apt to improve the…

  7. Career Opportunities for Physicists in the Micro Electronics Industry

    NASA Astrophysics Data System (ADS)

    Bourianoff, George

    1997-10-01

    The US micro electronics industry anticipates growth of 20 to 30 percent per year for the next five years. The need for engineers and scientists poses a critical problem for the industry but conversely presents great opportunities for those in closely related fields such as physics where career opportunities may be more limited. There is no shortage of important and challenging problems on the Semiconductor Industry Association (SIA) roadmap which will require solution in the next 10 years and which require expertise in the physical sciences. However, significant cultural differences exist between the physics community and the engineering oriented semiconductor community which must be understood and addressed in order for a physicist to successfully contribute in this environment. This talk will identify some of those cultural differences and describe some of the critical physics related problems which must be solved. Critical roadblocks include lithographic patterning below 0.18 µm and design of Very Large Scale Integrated (VLSI) circuits in the deep submicron regime. The former will require developing radiation sources and optical elements for the EUV or X-ray part of the spectrum. The latter will require incorporating electromagnetic field equations with traditional lumped element circuit design methods. The cultural barriers alluded to earlier involve the manner in which engineering detail is approached. A physicist's basic instinct is to strip off the detail in order to make a problem mathematically tractable. This enables understanding of the underlying physical relationships but does not yield the quantitative detail necessary in semiconductor production.

  8. Marking individual ants for behavioral sampling in a laboratory colony.

    PubMed

    Holbrook, C Tate

    2009-07-01

    Ant societies are tractable and malleable, two features that make them ideal models for probing the organization of complex biological systems. The ability to identify specific individuals while they function as part of a colony permits an integrative analysis of social complexity, including self-organizational processes (i.e., how individual-level properties and social interactions give rise to emergent, colony-level attributes such as division of labor and collective decision making). Effects of genotype, nutrition, and physiology on individual behavior and the organization of work also can be investigated in this manner, through correlative and manipulative approaches. Moreover, aspects of colony demography (e.g., colony size, and age and size distributions of workers) can be altered experimentally to examine colony development and regulatory mechanisms underlying colony homeostasis and resiliency. This protocol describes how to sample the behavior of ants living in a colony under laboratory conditions. Specifically, it outlines how to identify and observe individuals within a colony, an approach that can be used to quantify individual- and colony-level patterns of behavior. When a lower-resolution measure of overall group behavior is desired, individual identities might not be required. Given the diversity of ants and their study, this protocol provides a very general methodology; the details can be modified according to the body size, colony size, and ecology of the focal species, as well as to specific research aims. These basic techniques can also be extended to more advanced experimental designs such as manipulation of colony demography and hormone treatment.

  9. Blind topological measurement-based quantum computation.

    PubMed

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-01-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3 × 10^-3, which is comparable to that (7.5 × 10^-3) of non-blind topological quantum computation. As the error per gate of the order 10^-3 was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  10. Blind topological measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-09-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3 × 10^-3, which is comparable to that (7.5 × 10^-3) of non-blind topological quantum computation. As the error per gate of the order 10^-3 was already achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  11. Oscillatory Threshold Logic

    PubMed Central

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034
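
    The oscillator dynamics themselves are beyond a few lines, but the threshold-logic abstraction that such coupled oscillators are designed to realize is easy to state: a unit fires when the weighted sum of its binary inputs reaches a threshold. The sketch below shows conventional (non-oscillatory) threshold gates only, as a point of reference; it is not the coupled-oscillator model of this record.

```python
def threshold_gate(inputs, weights, threshold):
    """McCulloch-Pitts style threshold unit: fire (1) if the weighted sum
    of binary inputs reaches the threshold, otherwise stay quiet (0)."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Basic binary logic expressed as threshold conditions.
AND = lambda a, b: threshold_gate([a, b], [1, 1], 2)
OR  = lambda a, b: threshold_gate([a, b], [1, 1], 1)
NOT = lambda a:    threshold_gate([a],    [-1],   0)
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))   # needs two layers of gates

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND", AND(a, b), "OR", OR(a, b), "XOR", XOR(a, b))
```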

  12. Computational prediction of formulation strategies for beyond-rule-of-5 compounds.

    PubMed

    Bergström, Christel A S; Charman, William N; Porter, Christopher J H

    2016-06-01

    The physicochemical properties of some contemporary drug candidates are moving towards higher molecular weight, and coincidentally also higher lipophilicity in the quest for biological selectivity and specificity. These physicochemical properties move the compounds towards beyond rule-of-5 (B-r-o-5) chemical space and often result in lower water solubility. For such B-r-o-5 compounds non-traditional delivery strategies (i.e. those other than conventional tablet and capsule formulations) typically are required to achieve adequate exposure after oral administration. In this review, we present the current status of computational tools for prediction of intestinal drug absorption, models for prediction of the most suitable formulation strategies for B-r-o-5 compounds and models to obtain an enhanced understanding of the interplay between drug, formulation and physiological environment. In silico models are able to identify the likely molecular basis for low solubility in physiologically relevant fluids such as gastric and intestinal fluids. With this baseline information, a formulation scientist can, at an early stage, evaluate different orally administered, enabling formulation strategies. Recent computational models have emerged that predict glass-forming ability and crystallisation tendency and therefore the potential utility of amorphous solid dispersion formulations. Further, computational models of loading capacity in lipids, and therefore the potential for formulation as a lipid-based formulation, are now available. Whilst such tools are useful for rapid identification of suitable formulation strategies, they do not reveal drug localisation and molecular interaction patterns between drug and excipients. For the latter, Molecular Dynamics simulations provide an insight into the interplay between drug, formulation and intestinal fluid. These different computational approaches are reviewed. Additionally, we analyse the molecular requirements of different targets, since these can provide an early signal that enabling formulation strategies will be required. Based on the analysis we conclude that computational biopharmaceutical profiling can be used to identify where non-conventional gateways, such as prediction of 'formulate-ability' during lead optimisation and early development stages, are important and may ultimately increase the number of orally tractable contemporary targets. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Updating Parameters for Volcanic Hazard Assessment Using Multi-parameter Monitoring Data Streams And Bayesian Belief Networks

    NASA Astrophysics Data System (ADS)

    Odbert, Henry; Aspinall, Willy

    2014-05-01

    Evidence-based hazard assessment at volcanoes assimilates knowledge about the physical processes of hazardous phenomena and observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measureable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We discuss the uncertainty of inferences, and how our method provides a route to formal propagation of uncertainties in hazard models. Such approaches provide an attractive route to developing an interface between volcano monitoring analyses and probabilistic hazard scenario analysis. We discuss the use of BBNs in hazard analysis as a tractable and traceable tool for fast, rational assimilation of complex, multi-parameter data sets in the context of timely volcanic crisis decision support.
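
    A very small worked example of the kind of evidential updating a BBN performs: one hidden state ("unrest") with two conditionally independent monitoring observations, combined by Bayes' rule. All probabilities and variable names below are invented for illustration; they are not parameters from the Soufriere Hills analysis.

```python
# Minimal two-observation Bayesian update (a tiny belief network:
# unrest -> elevated_seismicity, unrest -> inflation). All numbers are invented.
p_unrest = 0.10                                   # prior P(unrest)
p_obs_given = {                                   # P(observation | unrest state)
    "elevated_seismicity": {True: 0.80, False: 0.15},
    "inflation":           {True: 0.60, False: 0.05},
}

def posterior_unrest(observed):
    """observed: dict mapping observation name -> bool (seen / not seen)."""
    like_u, like_not = p_unrest, 1.0 - p_unrest
    for name, seen in observed.items():
        pu, pn = p_obs_given[name][True], p_obs_given[name][False]
        like_u   *= pu if seen else (1.0 - pu)
        like_not *= pn if seen else (1.0 - pn)
    return like_u / (like_u + like_not)

print(posterior_unrest({"elevated_seismicity": True, "inflation": False}))   # ~0.20
print(posterior_unrest({"elevated_seismicity": True, "inflation": True}))    # ~0.88
```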

  14. Combining Volcano Monitoring Timeseries Analyses with Bayesian Belief Networks to Update Hazard Forecast Estimates

    NASA Astrophysics Data System (ADS)

    Odbert, Henry; Hincks, Thea; Aspinall, Willy

    2015-04-01

    Volcanic hazard assessments must combine information about the physical processes of hazardous phenomena with observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measureable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We show how our method provides a route to formal propagation of uncertainties in hazard models. Such approaches provide an attractive route to developing an interface between volcano monitoring analyses and probabilistic hazard scenario analysis. We discuss the use of BBNs in hazard analysis as a tractable and traceable tool for fast, rational assimilation of complex, multi-parameter data sets in the context of timely volcanic crisis decision support.

  15. 14 CFR 25.527 - Hull and main float load factors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... float load factors. (a) Water reaction load factors n_W must be computed in the following manner: (1) ... following values are used: (1) n_W = water reaction load factor (that is, the water reaction divided by ...

  16. LIVING SHORES GALLERY MX964015

    EPA Science Inventory

    An interactive computer kiosk will allow the Texas State Aquarium to deliver a considerable amount of information in an efficient and highly effective manner. Touch screen interactives have proven to be excellent teaching tools in the Aquarium's Jellies: Floating Phantoms galler...

  17. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth interior at different scales ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be applied in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.

  18. Large-scale symmetry-adapted perturbation theory computations via density fitting and Laplace transformation techniques: investigating the fundamental forces of DNA-intercalator interactions.

    PubMed

    Hohenstein, Edward G; Parrish, Robert M; Sherrill, C David; Turney, Justin M; Schaefer, Henry F

    2011-11-07

    Symmetry-adapted perturbation theory (SAPT) provides a means of probing the fundamental nature of intermolecular interactions. Low orders of SAPT (here, SAPT0) are especially attractive since they provide qualitative (sometimes quantitative) results while remaining tractable for large systems. The application of density fitting and Laplace transformation techniques to SAPT0 can significantly reduce the expense associated with these computations and make even larger systems accessible. We present new factorizations of the SAPT0 equations with density-fitted two-electron integrals and the first application of Laplace transformations of energy denominators to SAPT. The improved scalability of the DF-SAPT0 implementation allows it to be applied to systems with more than 200 atoms and 2800 basis functions. The Laplace-transformed energy denominators are compared to analogous partial Cholesky decompositions of the energy denominator tensor. Application of our new DF-SAPT0 program to the intercalation of DNA by proflavine has allowed us to determine the nature of the proflavine-DNA interaction. Overall, the proflavine-DNA interaction contains important contributions from both electrostatics and dispersion. The energetics of the intercalator interaction are dominated by the stacking interactions (two-thirds of the total), but contain important contributions from the intercalator-backbone interactions. It is hypothesized that the geometry of the complex will be determined by the interactions of the intercalator with the backbone, because by shifting toward one side of the backbone, the intercalator can form two long hydrogen-bonding type interactions. The long-range interactions between the intercalator and the next-nearest base pairs appear to be negligible, justifying the use of truncated DNA models in computational studies of intercalation interaction energies.
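
    The key trick referred to here, replacing an orbital-energy denominator by a short sum of exponentials so that the summation indices decouple, can be demonstrated in a few lines. The toy below uses Gauss-Laguerre quadrature for 1/D = integral of exp(-D t) dt over t from 0 to infinity, with random "occupied" and "virtual" energies; it only illustrates the denominator factorization, not the DF-SAPT0 machinery, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
eps_occ = -rng.uniform(0.2, 0.8, size=20)   # pretend occupied orbital energies (hartree)
eps_vir = rng.uniform(0.2, 1.0, size=40)    # pretend virtual orbital energies

# Target: S = sum_{i,a} 1 / (eps_a - eps_i), with all denominators positive.
direct = (1.0 / (eps_vir[None, :] - eps_occ[:, None])).sum()

# Laplace form: 1/x = int_0^inf exp(-x t) dt, approximated with Gauss-Laguerre
# points, so the double (i, a) sum factorizes into two independent single sums.
t, w = np.polynomial.laguerre.laggauss(20)       # nodes/weights for int e^{-t} f(t) dt
laplace = sum(
    wk * np.exp(tk)                              # undo the built-in e^{-t} weight
    * np.exp(-eps_vir * tk).sum()                # sum over virtuals only
    * np.exp(+eps_occ * tk).sum()                # sum over occupieds only
    for tk, wk in zip(t, w)
)
print(direct, laplace)   # the two printed values agree closely
```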

  19. Large-scale symmetry-adapted perturbation theory computations via density fitting and Laplace transformation techniques: Investigating the fundamental forces of DNA-intercalator interactions

    NASA Astrophysics Data System (ADS)

    Hohenstein, Edward G.; Parrish, Robert M.; Sherrill, C. David; Turney, Justin M.; Schaefer, Henry F.

    2011-11-01

    Symmetry-adapted perturbation theory (SAPT) provides a means of probing the fundamental nature of intermolecular interactions. Low orders of SAPT (here, SAPT0) are especially attractive since they provide qualitative (sometimes quantitative) results while remaining tractable for large systems. The application of density fitting and Laplace transformation techniques to SAPT0 can significantly reduce the expense associated with these computations and make even larger systems accessible. We present new factorizations of the SAPT0 equations with density-fitted two-electron integrals and the first application of Laplace transformations of energy denominators to SAPT. The improved scalability of the DF-SAPT0 implementation allows it to be applied to systems with more than 200 atoms and 2800 basis functions. The Laplace-transformed energy denominators are compared to analogous partial Cholesky decompositions of the energy denominator tensor. Application of our new DF-SAPT0 program to the intercalation of DNA by proflavine has allowed us to determine the nature of the proflavine-DNA interaction. Overall, the proflavine-DNA interaction contains important contributions from both electrostatics and dispersion. The energetics of the intercalator interaction are dominated by the stacking interactions (two-thirds of the total), but contain important contributions from the intercalator-backbone interactions. It is hypothesized that the geometry of the complex will be determined by the interactions of the intercalator with the backbone, because by shifting toward one side of the backbone, the intercalator can form two long hydrogen-bonding type interactions. The long-range interactions between the intercalator and the next-nearest base pairs appear to be negligible, justifying the use of truncated DNA models in computational studies of intercalation interaction energies.

  20. A Computational Model Predicting Disruption of Blood Vessel Development

    PubMed Central

    Kleinstreuer, Nicole; Dix, David; Rountree, Michael; Baker, Nancy; Sipes, Nisha; Reif, David; Spencer, Richard; Knudsen, Thomas

    2013-01-01

    Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a morphogenetic series of events and for the first time demonstrate the applicability of these models for predictive toxicology. PMID:23592958

  1. Extension of a Kinetic Approach to Chemical Reactions to Electronic Energy Levels and Reactions Involving Charged Species with Application to DSMC Simulations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634-second data point was chosen so that comparisons could be made with a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion.

  2. Turbulence modeling for compressible flows

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.

    1977-01-01

    Material prepared for a course on Applications and Fundamentals of Turbulence given at the University of Tennessee Space Institute, January 10 and 11, 1977, is presented. A complete concept of turbulence modeling is described, and examples of progress for its use in computational aerodynamics are given. Modeling concepts, experiments, and computations using the concepts are reviewed in a manner that provides an up-to-date statement on the status of this problem for compressible flows.

  3. Transient Simulation of Ram Accelerator Flowfields

    DTIC Science & Technology

    1993-01-01

    Propulsive flows with combustion chemistry and advanced turbulence models. ... coupled, implicit manner. Near-wall effects have been dealt with via the low Reynolds number formulation of Chien and the recent model of Rodi. ... Reference: Dash, S.M., "Advanced Computational Models for Analyzing High-Speed Propulsive Flowfields," JANNAF Propulsion Meeting, CPIA Pub. 550.

  4. Implementation of a fully-balanced periodic tridiagonal solver on a parallel distributed memory architecture

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Erlebacher, G.

    1994-01-01

    While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem - a periodic tridiagonal solver - are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamic simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand-sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation and the more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy attempts to have the algorithm pass the data across processor boundaries in a chained manner. This usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm will be shown to directly transform a sequential Gaussian elimination type algorithm into the parallel chained, load-balanced algorithm.
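
    The "many independent right-hand sides" structure the authors exploit is easy to picture with a standard (non-periodic) tridiagonal Thomas solve vectorized over RHS columns; each column could equally well be assigned to a different processor. This is a generic sketch, not the paper's transpose or chained distributed implementation, and the periodic coupling (usually handled with a Sherman-Morrison correction) is omitted.

```python
import numpy as np

def thomas_multi_rhs(a, b, c, d):
    """Solve a tridiagonal system for many right-hand sides at once.
    a: sub-diagonal (n-1,), b: diagonal (n,), c: super-diagonal (n-1,),
    d: right-hand sides (n, m). Returns x of shape (n, m)."""
    n, m = d.shape
    bp = np.array(b, dtype=float, copy=True)
    dp = np.array(d, dtype=float, copy=True)
    # Forward elimination: identical pivots for every RHS column.
    for i in range(1, n):
        w = a[i - 1] / bp[i - 1]
        bp[i] -= w * c[i - 1]
        dp[i] -= w * dp[i - 1]
    # Back substitution, vectorized across the m columns.
    x = np.empty_like(dp)
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - c[i] * x[i + 1]) / bp[i]
    return x

# Toy check against a dense solve.
n, m = 8, 5
rng = np.random.default_rng(0)
a, c = rng.uniform(-1, 0, n - 1), rng.uniform(-1, 0, n - 1)
b = 4.0 + rng.uniform(0, 1, n)            # diagonally dominant, so no pivoting needed
d = rng.standard_normal((n, m))
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(thomas_multi_rhs(a, b, c, d), np.linalg.solve(A, d)))  # True
```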

  5. Numerical computation of space shuttle orbiter flow field

    NASA Technical Reports Server (NTRS)

    Tannehill, John C.

    1988-01-01

    A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.

  6. Computational method for determining n and k for a thin film from the measured reflectance, transmittance, and film thickness.

    PubMed

    Bennett, J M; Booty, M J

    1966-01-01

    A computational method of determining n and k for an evaporated film from the measured reflectance, transmittance, and film thickness has been programmed for an IBM 7094 computer. The method consists of modifications to the NOTS multilayer film program. The basic program computes normal incidence reflectance, transmittance, phase change on reflection, and other parameters from the optical constants and thicknesses of all materials. In the modification, n and k for the film are varied in a prescribed manner, and the computer picks from among these values one n and one k which yield reflectance and transmittance values almost equalling the measured values. Results are given for films of silicon and aluminum.
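
    The search such a modified program performs can be sketched as follows: compute normal-incidence R and T for a single absorbing film on a known substrate with the standard characteristic-matrix formulas, then scan (n, k) for the pair that best reproduces the measured values. The optics below are the textbook single-film case with invented measurement numbers and substrate index; it is not the NOTS multilayer program.

```python
import numpy as np

def film_RT(n_film, k_film, d_nm, wavelength_nm, n_substrate=1.52, n_ambient=1.0):
    """Normal-incidence reflectance/transmittance of one absorbing film on a
    thick transparent substrate (single-layer characteristic-matrix method)."""
    N1 = n_film - 1j * k_film
    delta = 2.0 * np.pi * N1 * d_nm / wavelength_nm
    m11 = m22 = np.cos(delta)
    m12 = 1j * np.sin(delta) / N1
    m21 = 1j * N1 * np.sin(delta)
    n0, ns = n_ambient, n_substrate
    B = m11 + m12 * ns
    C = m21 + m22 * ns
    r = (n0 * B - C) / (n0 * B + C)
    t = 2.0 * n0 / (n0 * B + C)
    return abs(r) ** 2, (ns / n0) * abs(t) ** 2

# Invented "measurement", then a brute-force scan over (n, k).
R_meas, T_meas = film_RT(2.1, 0.4, d_nm=80.0, wavelength_nm=550.0)
grid_n = np.arange(1.2, 3.5, 0.01)
grid_k = np.arange(0.0, 1.5, 0.01)
best = min(((n, k) for n in grid_n for k in grid_k),
           key=lambda nk: sum(abs(x - y) for x, y in
                              zip(film_RT(nk[0], nk[1], 80.0, 550.0), (R_meas, T_meas))))
# Typically lands near (2.1, 0.4); as in the real problem, the inversion can have
# multiple branches that reproduce R and T almost equally well.
print("recovered n, k:", best)
```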

  7. Tractable Analysis for Large Social Networks

    ERIC Educational Resources Information Center

    Zhang, Bin

    2012-01-01

    Social scientists usually are more interested in consumers' dichotomous choices, such as whether to purchase a product or adopt a technology. However, to date, there is hardly any model that can help solve the problem of comparing multi-network effects with a dichotomous dependent variable. Furthermore, the study of multi-network…

  8. Sharp Truncation of an Electric Field: An Idealized Model That Warrants Caution

    ERIC Educational Resources Information Center

    Tu, Hong; Zhu, Jiongming

    2016-01-01

    In physics, idealized models are often used to simplify complex situations. The motivation of the idealization is to make the real complex system tractable by adopting certain simplifications. In this treatment some unnecessary, negligible aspects are stripped away (so-called Aristotelian idealization), or some deliberate distortions are involved…

  9. Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.

    ERIC Educational Resources Information Center

    Deane, Paul; Sheehan, Kathleen

    This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…

  10. Functional Imaging and Optogenetics in Drosophila

    PubMed Central

    Simpson, Julie H.; Looger, Loren L.

    2018-01-01

    Understanding how activity patterns in specific neural circuits coordinate an animal’s behavior remains a key area of neuroscience research. Genetic tools and a brain of tractable complexity make Drosophila a premier model organism for these studies. Here, we review the wealth of reagents available to map and manipulate neuronal activity with light. PMID:29618589

  11. What the Student Does: Teaching for Enhanced Learning

    ERIC Educational Resources Information Center

    Biggs, John

    2012-01-01

    Many teachers see major difficulties in maintaining academic standards in today's larger and more diversified classes. The problem becomes more tractable if learning outcomes are seen as more a function of students' activities than of their fixed characteristics. The teacher's job is then to organise the teaching/learning context so that all…

  12. Design and Functionality of the Graphical Interactive Narrative (Gin) System Version 0.2

    DTIC Science & Technology

    2012-08-01

    The purpose of the Gin system is to increase the interactivity and sense of agency for human subjects in virtual environments (VEs) used for... tractability of scenario development while providing the user with an increased sense of agency by allowing them to control their own navigation.

  13. Integrating Model-Based Verification into Software Design Education

    ERIC Educational Resources Information Center

    Yilmaz, Levent; Wang, Shuo

    2005-01-01

    Proper design analysis is indispensable to assure quality and reduce emergent costs due to faulty software. Teaching proper design verification skills early during pedagogical development is crucial, as such analysis is the only tractable way of resolving software problems early when they are easy to fix. The premise of the presented strategy is…

  14. A BAC-based physical map of the Hessian fly (Mayetiola destructor) genome anchored to polytene chromosomes

    USDA-ARS?s Scientific Manuscript database

    The Hessian fly (Mayetiola destructor) is an important insect pest of wheat and an experimental organism for studies of plant-insect interactions. It has tractable genetics, polytene chromosomes, a relatively small genome (158 Mb), and shares a gene-for-gene relationship with wheat. To improve its...

  15. Daddy issues: paternal effects on phenotype

    PubMed Central

    Rando, Oliver J.

    2012-01-01

    The once-popular, then heretical, idea that ancestral environment can affect the phenotype of future generations is coming back into vogue, due to advances in the field of epigenetic inheritance. How paternal environmental conditions influence the phenotype of progeny is now a tractable question, and researchers are exploring potential mechanisms underlying such effects. PMID:23141533

  16. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
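
    For orientation, the classic solution-size parameterization admits a simple O(3^k) search: as long as the graph contains a conflict triple (an induced path u-v-w), at least one of the three vertex pairs must be edited, so branch on all three. The sketch below implements that textbook branching on the parameter k; it is not one of the alternative parameterizations studied in this entry.

```python
from itertools import combinations

def find_conflict(adj):
    """Return a conflict triple (u, v, w) with uv, vw edges and uw a non-edge,
    or None if the graph is already a disjoint union of cliques."""
    for v in adj:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:
                return u, v, w
    return None

def cluster_editing(adj, k):
    """Decide whether <= k edge insertions/deletions make adj a cluster graph."""
    conflict = find_conflict(adj)
    if conflict is None:
        return True
    if k == 0:
        return False
    u, v, w = conflict
    def toggle(x, y):                       # insert or delete the edge xy
        if y in adj[x]:
            adj[x].remove(y); adj[y].remove(x)
        else:
            adj[x].add(y); adj[y].add(x)
    for x, y in ((u, v), (v, w), (u, w)):   # branch: edit one of the three pairs
        toggle(x, y)
        if cluster_editing(adj, k - 1):
            toggle(x, y)                    # restore before returning
            return True
        toggle(x, y)
    return False

def cycle(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

print(cluster_editing(cycle(5), 2))  # False: a 5-cycle needs 3 modifications
print(cluster_editing(cycle(5), 3))  # True: e.g. delete two edges, add one chord
```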

  17. Rendering the Intractable More Tractable: Tools from Caenorhabditis elegans Ripe for Import into Parasitic Nematodes

    PubMed Central

    Ward, Jordan D.

    2015-01-01

    Recent and rapid advances in genetic and molecular tools have brought spectacular tractability to Caenorhabditis elegans, a model that was initially prized because of its simple design and ease of imaging. C. elegans has long been a powerful model in biomedical research, and tools such as RNAi and the CRISPR/Cas9 system allow facile knockdown of genes and genome editing, respectively. These developments have created an additional opportunity to tackle one of the most debilitating burdens on global health and food security: parasitic nematodes. I review how development of nonparasitic nematodes as genetic models informs efforts to import tools into parasitic nematodes. Current tools in three commonly studied parasites (Strongyloides spp., Brugia malayi, and Ascaris suum) are described, as are tools from C. elegans that are ripe for adaptation and the benefits and barriers to doing so. These tools will enable dissection of a huge array of questions that have been all but completely impenetrable to date, allowing investigation into host–parasite and parasite–vector interactions, and the genetic basis of parasitism. PMID:26644478

  18. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth. This means that for an FDTD simulation, the smallest time step must be finer than that required to represent the highest frequency, while the number of time steps must also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented a code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. From our results, we found that using even a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  19. Comparison of DSMC and CFD Solutions of Fire II Including Radiative Heating

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Johnston, Christopher O.; Lewis, Mark J.

    2011-01-01

    The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. These flows may also contain significant radiative heating. To prepare for these missions, NASA is developing the capability to simulate rarefied, ionized flows and to then calculate the resulting radiative heating to the vehicle's surface. In this study, the DSMC codes DAC and DS2V are used to obtain charge-neutral ionization solutions. NASA's direct simulation Monte Carlo code DAC is currently being updated to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced Quantum-Kinetic chemistry model, and to include electronic energy levels as an additional internal energy mode. The Fire II flight test is used in this study to assess these new capabilities. The 1634-second data point was chosen so that comparisons could be made with computational fluid dynamics (CFD) solutions. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid. It is shown that there can be quite a bit of variability in the vibrational temperature inferred from DSMC solutions and that, given how radiative heating is computed, the electronic temperature is much better suited for radiative calculations. To include the radiative portion of heating, the flow-field solutions are post-processed by the non-equilibrium radiation code HARA. Acceptable agreement between CFD and DSMC flow field solutions is demonstrated, and the progress of the updates to DAC, along with an appropriate radiative heating solution, is discussed. In addition, future plans to generate more high fidelity radiative heat transfer solutions are discussed.

  20. Knowledge-Based Environmental Context Modeling

    NASA Astrophysics Data System (ADS)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient use of our renewable natural resources. [1] C2M2L (Component, Context, and Manufacturing Model Library) Final Report, https://doi.org/10.13140/RG.2.1.4956.3604

  1. Emission Computed Tomography: A New Technique for the Quantitative Physiologic Study of Brain and Heart in Vivo

    DOE R&D Accomplishments Database

    Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Schelbert, H. R.; Kuhl, D. E.

    1978-01-01

    Emission computed tomography can provide a quantitative in vivo measurement of regional tissue radionuclide tracer concentrations. This facility, when combined with physiologic models and radioactively labeled physiologic tracers that behave in a predictable manner, allows measurement of a wide variety of physiologic variables. This integrated technique has been referred to as Physiologic Tomography (PT). PT requires labeled compounds which trace physiologic processes in a known and predictable manner, and physiologic models which are appropriately formulated and validated to derive physiologic variables from ECT data. In order to effectively achieve this goal, PT requires an ECT system that is capable of performing truly quantitative or analytical measurements of tissue tracer concentrations and which has been well characterized in terms of spatial resolution, sensitivity and signal to noise ratios in the tomographic image. This paper illustrates the capabilities of emission computed tomography and provides examples of physiologic tomography for the regional measurement of cerebral and myocardial metabolic rate for glucose, regional measurement of cerebral blood volume, gated cardiac blood pools and capillary perfusion in brain and heart. Studies on patients with stroke and myocardial ischemia are also presented.

  2. Radiatively driven winds from magnetic, fast-rotating stars

    NASA Technical Reports Server (NTRS)

    Nerney, S.

    1986-01-01

    An analytical procedure is developed to solve the magnetohydrodynamic equations for the stellar wind problem in the strong-magnetic-field, optically thick limit for hot stars. The slow-mode, Alfven, and fast-mode critical points are modified by the radiation terms in the force equation but in a manner that can be treated relatively easily. Once the velocities at the critical points and the distances to the points are known, the streamline constants are determined in a straightforward manner. This allows the structure of the wind to be elucidated without recourse to complicated computational schemes.

  3. Magnetic Photon Splitting: The S-Matrix Formulation in the Landau Representation

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.

    1999-01-01

    Calculations of reaction rates for the third-order QED process of photon splitting (gamma -> gamma gamma) in strong magnetic fields traditionally have employed either the effective Lagrangian method or variants of Schwinger's proper-time technique. Recently, Mentzel, Berg and Wunner [1] presented an alternative derivation via an S-matrix formulation in the Landau representation. Advantages of such a formulation include the ability to compute rates near pair resonances above pair threshold. This paper presents new developments of the Landau representation formalism as applied to photon splitting, providing significant advances beyond the work of [1] by summing over the spin quantum numbers of the electron propagators, and analytically integrating over the component of momentum of the intermediate states that is parallel to the field. The ensuing tractable expressions for the scattering amplitudes are satisfyingly compact, and of an appearance familiar to S-matrix theory applications. Such developments can facilitate numerical computations of splitting considerably, both below and above pair threshold. Specializations to two regimes of interest are obtained, namely the limit of highly supercritical fields and the domain where photon energies are far inferior to that for the threshold of single-photon pair creation. In particular, for the first time the low-frequency amplitudes are simply expressed in terms of the Gamma function, its integral and its derivatives. In addition, the equivalence of the asymptotic forms in these two domains to extant results from effective Lagrangian/proper-time formulations is demonstrated.

  4. Rheological Models in the Time-Domain Modeling of Seismic Motion

    NASA Astrophysics Data System (ADS)

    Moczo, P.; Kristek, J.

    2004-12-01

    The time-domain stress-strain relation in a viscoelastic medium has the form of a convolution integral, which is numerically intractable. This was the reason for the oversimplified models of attenuation used in time-domain seismic wave propagation and earthquake motion modeling. In their pioneering work, Day and Minster (1984) showed how to convert the integral into a numerically tractable differential form in the case of a general viscoelastic modulus. In response to the work by Day and Minster, Emmerich and Korn (1987) suggested using the rheology of their generalized Maxwell body (GMB), while Carcione et al. (1988) suggested using the generalized Zener body (GZB). The viscoelastic moduli of both rheological models have the form of a rational function, and thus the differential form of the stress-strain relation is rather easy to obtain. After the papers by Emmerich and Korn and by Carcione et al., numerical modelers opted for either the GMB or the GZB rheology and developed 'non-communicating' algorithms. In the many papers that followed, the authors using the GMB never commented on the GZB rheology and the corresponding algorithms, and the authors using the GZB never related their methods to the GMB rheology and algorithms. We analyze and compare both rheologies and the corresponding ways of incorporating realistic attenuation into time-domain computations. We then focus on the most recent staggered-grid finite-difference modeling, mainly on accounting for material heterogeneity in viscoelastic media, and on the computational efficiency of the finite-difference algorithms.
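
    To make the rational-function form of these rheologies concrete, the sketch below evaluates a generalized Maxwell body complex modulus of the common form M(omega) = M_U * (1 - sum_l Y_l * omega_l / (omega_l + i*omega)) and the implied quality factor Q = Re(M)/Im(M) at a few frequencies. The parameter values and the particular sign convention are illustrative assumptions, not taken from the paper summarized above.

        # Minimal sketch: complex modulus and quality factor of a generalized
        # Maxwell body (GMB) with a handful of relaxation mechanisms.
        # Parameter values are hypothetical.
        import numpy as np

        M_U = 3.0e10                                                 # unrelaxed modulus (Pa)
        relax_freqs = 2 * np.pi * np.array([0.05, 0.5, 5.0, 50.0])   # relaxation frequencies (rad/s)
        Y = np.array([0.02, 0.02, 0.02, 0.02])                       # anelastic coefficients

        def gmb_modulus(omega):
            # Rational-function form of the GMB viscoelastic modulus M(omega).
            terms = Y * relax_freqs / (relax_freqs + 1j * omega)
            return M_U * (1.0 - np.sum(terms))

        for f in (0.1, 1.0, 10.0):
            M = gmb_modulus(2 * np.pi * f)
            print(f"f = {f:5.1f} Hz  Q ~ {M.real / M.imag:7.1f}")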

  5. HIV-TRACE (Transmission Cluster Engine): a tool for large scale molecular epidemiology of HIV-1 and other rapidly evolving pathogens.

    PubMed

    Kosakovsky Pond, Sergei L; Weaver, Steven; Leigh Brown, Andrew J; Wertheim, Joel O

    2018-01-31

    In modern applications of molecular epidemiology, genetic sequence data are routinely used to identify clusters of transmission in rapidly evolving pathogens, most notably HIV-1. Traditional 'shoeleather' epidemiology infers transmission clusters by tracing chains of partners sharing epidemiological connections (e.g., sexual contact). Here, we present a computational tool for identifying a molecular transmission analog of such clusters: HIV-TRACE (TRAnsmission Cluster Engine). HIV-TRACE implements an approach inspired by traditional epidemiology, by identifying chains of partners whose viral genetic relatedness implies direct or indirect epidemiological connections. Molecular transmission clusters are constructed using codon-aware pairwise alignment to a reference sequence, followed by pairwise genetic distance estimation among all sequences. This approach is computationally tractable and is capable of identifying HIV-1 transmission clusters in large surveillance databases comprising tens or hundreds of thousands of sequences in near real time, i.e., on the order of minutes to hours. HIV-TRACE is available at www.hivtrace.org and from github.com/veg/hivtrace, along with the accompanying result visualization module from github.com/veg/hivtrace-viz. Importantly, the approach underlying HIV-TRACE is not limited to the study of HIV-1 and can be applied to study outbreaks and epidemics of other rapidly evolving pathogens. © The Author 2018. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
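
    The clustering step described above can be illustrated with a toy example: link every pair of sequences whose pairwise genetic distance falls below a threshold and report the connected components. The distance values and threshold below are invented, and the snippet is not the HIV-TRACE implementation, which performs codon-aware alignment and TN93 distance estimation at surveillance scale.

        # Toy sketch of threshold-based transmission clustering (made-up distances).
        from itertools import combinations

        ids = ["seq1", "seq2", "seq3", "seq4", "seq5"]
        dist = {                      # hypothetical pairwise distances (substitutions/site)
            ("seq1", "seq2"): 0.008, ("seq1", "seq3"): 0.021, ("seq1", "seq4"): 0.050,
            ("seq1", "seq5"): 0.047, ("seq2", "seq3"): 0.019, ("seq2", "seq4"): 0.052,
            ("seq2", "seq5"): 0.049, ("seq3", "seq4"): 0.055, ("seq3", "seq5"): 0.051,
            ("seq4", "seq5"): 0.009,
        }
        THRESHOLD = 0.015             # illustrative linkage threshold

        parent = {i: i for i in ids}  # union-find over sequence identifiers
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(a, b):
            parent[find(a)] = find(b)

        for a, b in combinations(ids, 2):
            if dist[(a, b)] <= THRESHOLD:
                union(a, b)

        clusters = {}
        for i in ids:
            clusters.setdefault(find(i), []).append(i)
        print([c for c in clusters.values() if len(c) > 1])  # putative transmission clusters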

  6. Heuristics as Bayesian inference under extreme priors.

    PubMed

    Parpart, Paula; Jones, Matt; Love, Bradley C

    2018-05-01

    Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Our analyses suggest that heuristics perform well not because of their simplicity, but because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
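
    The continuum described above can be loosely illustrated by shrinking regression weights toward a unit-weight ("tallying-like") prior mean: as the prior strength lambda grows from zero to infinity, the estimate moves from ordinary least squares toward pure tallying. This is a sketch on synthetic data, not the authors' exact Bayesian formalism.

        # Loose illustration of a prior-strength continuum between regression and tallying.
        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 40, 5
        X = rng.standard_normal((n, p))
        true_w = np.array([1.2, 0.9, 0.7, 0.4, 0.2])
        y = X @ true_w + rng.standard_normal(n) * 0.8

        w_tally = np.ones(p)                      # equal-weight "tallying" prior mean

        def shrunk_weights(lam):
            # Ridge-style estimate shrunk toward w_tally instead of toward zero.
            A = X.T @ X + lam * np.eye(p)
            return np.linalg.solve(A, X.T @ y + lam * w_tally)

        for lam in (0.0, 1.0, 10.0, 1e6):
            w = shrunk_weights(lam)
            print(f"lambda = {lam:>9}: weights = {np.round(w, 2)}")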

  7. Spectral functions of strongly correlated extended systems via an exact quantum embedding

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Chan, Garnet Kin-Lic

    2015-04-01

    Density matrix embedding theory (DMET) [Phys. Rev. Lett. 109, 186404 (2012), 10.1103/PhysRevLett.109.186404] introduced an approach to quantum cluster embedding methods whereby the mapping of strongly correlated bulk problems to an impurity with a finite set of bath states was rigorously formulated to exactly reproduce the entanglement of the ground state. The formalism provided similar physics to dynamical mean-field theory at a tiny fraction of the cost, but was inherently limited by the construction of a bath designed to reproduce ground-state, static properties. Here, we generalize the concept of quantum embedding to dynamic properties and demonstrate accurate bulk spectral functions at similarly small computational cost. The proposed spectral DMET utilizes the Schmidt decomposition of a response vector, mapping the bulk dynamic correlation functions to those of a quantum impurity cluster coupled to a set of frequency-dependent bath states. The resultant spectral functions are obtained on the real-frequency axis, without bath discretization error, and allow for the construction of arbitrary dynamic correlation functions. We demonstrate the method on the one- (1D) and two-dimensional (2D) Hubbard model, where we obtain zero-temperature and thermodynamic-limit spectral functions, and show the trivial extension to two-particle Green's functions. This advance therefore extends the scope and applicability of DMET in condensed-matter problems as a computationally tractable route to correlated spectral functions of extended systems, and provides a competitive alternative to dynamical mean-field theory for dynamic quantities.

  8. Scaled effective on-site Coulomb interaction in the DFT+U method for correlated materials

    NASA Astrophysics Data System (ADS)

    Nawa, Kenji; Akiyama, Toru; Ito, Tomonori; Nakamura, Kohji; Oguchi, Tamio; Weinert, M.

    2018-01-01

    The first-principles calculation of correlated materials within density functional theory remains challenging, but the inclusion of a Hubbard-type effective on-site Coulomb term (Ueff) often provides a computationally tractable and physically reasonable approach. However, the reported values of Ueff vary widely, even for the same ionic state and the same material. Since the final physical results can depend critically on the choice of parameter and the computational details, there is a need to have a consistent procedure to choose an appropriate one. We revisit this issue from constraint density functional theory, using the full-potential linearized augmented plane wave method. The calculated Ueff parameters for the prototypical transition-metal monoxides—MnO, FeO, CoO, and NiO—are found to depend significantly on the muffin-tin radius RMT, with variations of more than 2-3 eV as RMT changes from 2.0 to 2.7 aB. Despite this large variation in Ueff, the calculated valence bands differ only slightly. Moreover, we find an approximately linear relationship between Ueff(RMT) and the number of occupied localized electrons within the sphere, and give a simple scaling argument for Ueff; these results provide a rationalization for the large variation in reported values. Although our results imply that Ueff values are not directly transferable among different calculation methods (or even the same one with different input parameters such as RMT), use of this scaling relationship should help simplify the choice of Ueff.

  9. Privacy preserving interactive record linkage (PPIRL).

    PubMed

    Kum, Hye-Chung; Krishnamurthy, Ashok; Machanavajjhala, Ashwin; Reiter, Michael K; Ahalt, Stanley

    2014-01-01

    Record linkage to integrate uncoordinated databases is critical in biomedical research using Big Data. Balancing privacy protection against the need for high-quality record linkage requires a human-machine hybrid system to safely manage uncertainty in the ever-changing streams of chaotic Big Data. In the computer science literature, private record linkage is the most published area. It investigates how to apply a known linkage function safely when linking two tables. However, in practice, the linkage function is rarely known. Thus, there are many data linkage centers whose main role is to be the trusted third party that determines the linkage function manually and links data for research via a master population list for a designated region. Recently, a more flexible computerized third-party linkage platform, Secure Decoupled Linkage (SDLink), has been proposed based on: (1) decoupling data via encryption; (2) obfuscation via chaffing (adding fake data) and universe manipulation; and (3) minimum information disclosure via recoding. We synthesize this literature to formalize a new framework for privacy preserving interactive record linkage (PPIRL) with tractable privacy and utility properties and then analyze the literature using this framework. Human-based third-party linkage centers for privacy preserving record linkage are the accepted norm internationally. We find that a computer-based third-party platform that can precisely control the information disclosed at the micro level and allows frequent human interaction during the linkage process is an effective human-machine hybrid system that significantly improves on the linkage center model in terms of both privacy and utility.

  10. A Primer on Metagenomics

    PubMed Central

    Wooley, John C.; Godzik, Adam; Friedberg, Iddo

    2010-01-01

    Metagenomics is a discipline that enables the genomic study of uncultured microorganisms. Faster, cheaper sequencing technologies and the ability to sequence uncultured microbes sampled directly from their habitats are expanding and transforming our view of the microbial world. Distilling meaningful information from the millions of new genomic sequences presents a serious challenge to bioinformaticians. In cultured microbes, the genomic data come from a single clone, making sequence assembly and annotation tractable. In metagenomics, the data come from heterogeneous microbial communities, sometimes containing more than 10,000 species, with the sequence data being noisy and partial. From sampling, to assembly, to gene calling and function prediction, bioinformatics faces new demands in interpreting voluminous, noisy, and often partial sequence data. Although metagenomics is a relative newcomer to science, the past few years have seen an explosion in computational methods applied to metagenomic-based research. It is therefore not within the scope of this article to provide an exhaustive review. Rather, we provide here a concise yet comprehensive introduction to the current computational requirements presented by metagenomics, and review the recent progress made. We also note whether there is software that implements any of the methods presented here, and briefly review its utility. Nevertheless, it would be useful if readers of this article would avail themselves of the comment section provided by this journal, and relate their own experiences. Finally, the last section of this article provides a few representative studies illustrating different facets of recent scientific discoveries made using metagenomics. PMID:20195499

  11. Reinforcement learning and episodic memory in humans and animals: an integrative framework

    PubMed Central

    Gershman, Samuel J.; Daw, Nathaniel D.

    2018-01-01

    We review the psychology and neuroscience of reinforcement learning (RL), which has witnessed significant progress in the last two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, the simplicity of these tasks misses important aspects of reinforcement learning in the real world: (i) State spaces are high-dimensional, continuous, and partially observable; this implies that (ii) data are relatively sparse: indeed precisely the same situation may never be encountered twice; and also that (iii) rewards depend on long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, these theories have largely connected with procedural and semantic memory: how knowledge about action values or world models extracted gradually from many experiences can drive choice. This misses many aspects of memory related to traces of individual events, such as episodic memory. We suggest that these two gaps are related. In particular, the computational challenges can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (i) efficiently approximate value functions over complex state spaces, (ii) learn with very little data, and (iii) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system. PMID:27618944

  12. Signature detection and matching for document image retrieval.

    PubMed

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from a cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation-, scale-, and rotation-invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error, and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as queries in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
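
    As a toy illustration of the multiscale curvature computation mentioned above (not the authors' implementation), the sketch below smooths a noisy closed contour at several Gaussian scales and evaluates the discrete curvature kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2). The contour itself is synthetic.

        # Toy sketch: multiscale curvature of a synthetic closed contour.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
        rng = np.random.default_rng(2)
        # Hypothetical wiggly closed contour with a little noise.
        x = np.cos(t) + 0.15 * np.cos(7 * t) + 0.01 * rng.standard_normal(t.size)
        y = np.sin(t) + 0.15 * np.sin(9 * t) + 0.01 * rng.standard_normal(t.size)

        def curvature(x, y, sigma):
            # Smooth the contour at scale sigma, then compute discrete curvature.
            xs = gaussian_filter1d(x, sigma, mode="wrap")
            ys = gaussian_filter1d(y, sigma, mode="wrap")
            dx, dy = np.gradient(xs), np.gradient(ys)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

        for sigma in (2, 5, 10):                  # coarse-to-fine scales
            k = curvature(x, y, sigma)
            print(f"sigma = {sigma:2d}: max |curvature| = {np.abs(k).max():.2f}")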

  13. Stochastic simulation of spatially correlated geo-processes

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

    In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable in a reduced-dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This considerably strengthens the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at the origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques for generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.
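
    The reduction of a spatial simulation to unidimensional simulations can be sketched with a simplified turning-bands construction: independent 1D correlated processes are generated along randomly oriented lines and summed at each grid point. The line covariance used below (smoothed white noise) is an illustrative stand-in for a properly derived one-dimensional covariance, so this is a caricature of the method rather than the paper's formulation.

        # Simplified turning-bands sketch: 2D field from averaged 1D line simulations.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        rng = np.random.default_rng(0)
        nx, ny, n_lines = 64, 64, 64
        xx, yy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")

        field = np.zeros((nx, ny))
        for _ in range(n_lines):
            theta = rng.uniform(0, np.pi)                 # random line direction
            u = np.array([np.cos(theta), np.sin(theta)])
            proj = xx * u[0] + yy * u[1]                  # project grid points onto the line
            idx = np.round(proj - proj.min()).astype(int) # integer bins along the line
            line = gaussian_filter1d(rng.standard_normal(idx.max() + 1), sigma=5.0)
            field += line[idx]                            # 1D correlated process lifted to 2D

        field /= np.sqrt(n_lines)                         # average into one 2D realization
        print("simulated field:", field.shape, "std ~", round(float(field.std()), 3))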

  14. Mobile healthcare information management utilizing Cloud Computing and Android OS.

    PubMed

    Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias

    2010-01-01

    Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner, supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting the DICOM format and JPEG2000 coding). The developed system has been evaluated using Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.

  15. Reverse logistics system planning for recycling computers hardware: A case study

    NASA Astrophysics Data System (ADS)

    Januri, Siti Sarah; Zulkipli, Faridah; Zahari, Siti Meriam; Shamsuri, Siti Hajar

    2014-09-01

    This paper describes the modeling and simulation of reverse logistics networks for the collection of used computers at a company in Selangor. The study focuses on the design of a reverse logistics network for a used-computer recycling operation. Simulation modeling, presented in this work, allows the user to analyze the future performance of the network and to understand the complex relationships between the parties involved. The findings from the simulation suggest that the model calculates processing time and resource utilization in a predictable manner. In this study, the simulation model was developed using the Arena simulation package.

  16. GADRAS-DRF 18.6 User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, Steve M.; Thoreson, Greg G.; Theisen, Lisa A.

    2016-05-01

    The Gamma Detector Response and Analysis Software–Detector Response Function (GADRAS-DRF) application computes the response of gamma-ray and neutron detectors to incoming radiation. This manual provides step-by-step procedures to acquaint new users with the use of the application. The capabilities include characterization of detector response parameters, plotting and viewing measured and computed spectra, analyzing spectra to identify isotopes, and estimating source energy distributions from measured spectra. GADRAS-DRF can compute and provide detector responses quickly and accurately, giving users the ability to obtain usable results in a timely manner (a matter of seconds or minutes).

  17. Composite panel development at JPL

    NASA Technical Reports Server (NTRS)

    Mcelroy, Paul; Helms, Rich

    1988-01-01

    Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high-precision reflector panels for LDR. The material properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.

  18. Computer program for prediction of the deposition of material released from fixed and rotary wing aircraft

    NASA Technical Reports Server (NTRS)

    Teske, M. E.

    1984-01-01

    This is a user manual for the computer code "AGDISP" (AGricultural DISPersal), which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.
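
    As a caricature of the "mean trajectory plus variance" formulation noted above (not the AGDISP equations), the sketch below advects a released droplet's mean position with the wind while it settles under gravity, and grows the variance about the mean with an assumed turbulent diffusivity. All parameter values are hypothetical.

        # Toy sketch: mean trajectory plus variance growth for a released droplet.
        import numpy as np

        wind = 3.0           # m/s horizontal wind (assumed)
        v_settle = 0.5       # m/s droplet settling velocity (assumed)
        K = 0.2              # m^2/s turbulent diffusivity (assumed)
        dt, z0 = 0.05, 10.0  # time step (s) and release height (m)

        x, z, var = 0.0, z0, 0.0
        while z > 0.0:
            x += wind * dt                 # mean downwind drift
            z -= v_settle * dt             # mean descent
            var += 2.0 * K * dt            # spread about the mean from turbulence
        print(f"mean deposition at x = {x:.1f} m, spread sigma = {np.sqrt(var):.2f} m")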

  19. VASCOMP 2. The V/STOL aircraft sizing and performance computer program. Volume 6: User's manual, revision 3

    NASA Technical Reports Server (NTRS)

    Schoen, A. H.; Rosenstein, H.; Stanzione, K.; Wisniewski, J. S.

    1980-01-01

    This report describes the use of the V/STOL Aircraft Sizing and Performance Computer Program (VASCOMP II). The program is useful for performing aircraft parametric studies in a quick and cost-efficient manner. Problem formulation and data development were performed by the Boeing Vertol Company and reflect present preliminary design technology. The computer program, written in FORTRAN IV, has a broad range of input parameters to enable investigation of a wide variety of aircraft. User-oriented features of the program include minimized input requirements, diagnostic capabilities, and various options for program flexibility.

  20. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high-performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.
