Sample records for compute regularization paths

  1. Interprocedural Analysis and the Verification of Concurrent Programs

    DTIC Science & Technology

    2009-01-01

    SSPE) problem is to compute a regular expression that represents paths(s, v) for all vertices v in the graph. The syntax of regular expressions is as...follows: r ::= ∅ | ε | e | r₁ ∪ r₂ | r₁.r₂ | r*, where e stands for an edge in G. We can use any algorithm for SSPE to compute regular expressions for...a closed representation of loops provides an exponential speedup. Tarjan's path-expression algorithm solves the SSPE problem efficiently. It uses...
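
    The grammar quoted above maps directly onto code. The following sketch is an illustration under stated assumptions, not the report's implementation: it uses Kleene's classic cubic dynamic program to build path expressions, whereas the record credits Tarjan's faster path-expression algorithm; all names are invented.

    # Kleene-style computation of regular path expressions R[i][j] denoting
    # the set of paths from vertex i to vertex j (simplifications are minimal).
    def union(a, b):
        if a == "∅": return b
        if b == "∅": return a
        if a == b: return a
        return f"({a} ∪ {b})"

    def concat(a, b):
        if a == "∅" or b == "∅": return "∅"
        if a == "ε": return b
        if b == "ε": return a
        return f"{a}.{b}"

    def star(a):
        return "ε" if a in ("∅", "ε") else f"({a})*"

    def path_expressions(n, edges):
        R = [["∅"] * n for _ in range(n)]
        for i, j, label in edges:
            R[i][j] = union(R[i][j], label)
        for k in range(n):                      # allow k as an intermediate vertex
            Rk = [row[:] for row in R]          # snapshot of the previous stage
            for i in range(n):
                for j in range(n):
                    via_k = concat(Rk[i][k], concat(star(Rk[k][k]), Rk[k][j]))
                    R[i][j] = union(Rk[i][j], via_k)
        return R

    # Three vertices, edge labels a, b, c, with a loop labeled b on vertex 1.
    R = path_expressions(3, [(0, 1, "a"), (1, 1, "b"), (1, 2, "c")])
    print(R[0][2])   # prints: a.(b)*.c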

  2. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
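
    To make "the ℓ1 regularization path as a model space" concrete, the hedged sketch below computes a lasso path with scikit-learn and records each distinct active set visited along it; every such support is one candidate model that an averaging scheme could weight. The data are synthetic, and the paper's MC³ machinery is not reproduced.

    # Treat each distinct support along the lasso path as a candidate model.
    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(0)
    n, p = 50, 200                        # many more variables than samples
    X = rng.standard_normal((n, p))
    beta = np.zeros(p); beta[:5] = 2.0    # five true signals
    y = X @ beta + rng.standard_normal(n)

    alphas, coefs, _ = lasso_path(X, y, n_alphas=100)   # coefs: (p, n_alphas)

    models = []
    for k in range(coefs.shape[1]):       # walk down the path
        support = tuple(np.flatnonzero(coefs[:, k]))
        if support and (not models or models[-1] != support):
            models.append(support)
    print(len(models), "distinct models along the path; first:", models[0])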

  3. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
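
    The exact solution-surface construction of CV-SES is beyond a snippet, but its target object is easy to picture. The sketch below approximates the two-dimensional CV error surface on a coarse grid, exposing the two cost-sensitive regularization parameters through scikit-learn's per-class weights; this grid search is precisely the baseline the paper improves on, and all parameter choices are illustrative.

    # Coarse grid approximation of the CV error surface over (C+, C-).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)
    C_pos = np.logspace(-2, 2, 9)   # regularization parameter for class 1
    C_neg = np.logspace(-2, 2, 9)   # regularization parameter for class 0

    cv_error = np.empty((len(C_pos), len(C_neg)))
    for i, cp in enumerate(C_pos):
        for j, cn in enumerate(C_neg):
            clf = SVC(kernel="linear", class_weight={1: cp, 0: cn})
            cv_error[i, j] = 1.0 - cross_val_score(clf, X, y, cv=5).mean()

    i, j = np.unravel_index(cv_error.argmin(), cv_error.shape)
    print(f"grid minimum CV error {cv_error[i, j]:.3f} "
          f"at C+={C_pos[i]:.2g}, C-={C_neg[j]:.2g}")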

  4. Regular paths in SparQL: querying the NCI Thesaurus.

    PubMed

    Detwiler, Landon T; Suciu, Dan; Brinkley, James F

    2008-11-06

    OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, this has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
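
    Regular path expressions of the kind GLEEN provided were later standardized as SPARQL 1.1 property paths. The sketch below runs a transitive path query with the rdflib Python library over a tiny invented ontology; the URIs and class names are made up for illustration and are not drawn from the NCI Thesaurus.

    # A regular path query ('one or more subClassOf steps') in SPARQL 1.1.
    import rdflib

    g = rdflib.Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:Melanoma     ex:subClassOf ex:SkinNeoplasm .
    ex:SkinNeoplasm ex:subClassOf ex:Neoplasm .
    ex:Neoplasm     ex:subClassOf ex:Disease .
    """, format="turtle")

    query = """
    PREFIX ex: <http://example.org/>
    SELECT ?ancestor WHERE { ex:Melanoma ex:subClassOf+ ?ancestor . }
    """
    for row in g.query(query):
        print(row.ancestor)   # SkinNeoplasm, Neoplasm, Disease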

  5. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  6. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
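
    The core Soft-Impute iteration is short enough to sketch in numpy: repeatedly refill the missing entries from a soft-thresholded SVD of the current completion. This is a toy version under stated assumptions; the paper's warm starts across the regularization path and its low-rank SVD tricks are omitted.

    # Soft-Impute, toy scale: missing entries <- soft-thresholded SVD values.
    import numpy as np

    def soft_impute(X, mask, lam, n_iters=100):
        """X: matrix with arbitrary values where missing; mask: True = observed."""
        Z = np.where(mask, X, 0.0)                 # initial fill: zeros
        for _ in range(n_iters):
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            s = np.maximum(s - lam, 0.0)           # soft-threshold singular values
            Z_low = (U * s) @ Vt
            Z = np.where(mask, X, Z_low)           # keep observed, impute the rest
        return Z_low

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 40))  # rank-8 truth
    mask = rng.random(A.shape) < 0.5                                 # 50% observed
    A_hat = soft_impute(A, mask, lam=1.0)
    print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))

    Running soft_impute over a decreasing grid of lam values while reusing the previous completion as the starting point gives the warm-started regularization path the abstract describes.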

  7. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  8. Consistent Partial Least Squares Path Modeling via Regularization.

    PubMed

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  9. Consistent Partial Least Squares Path Modeling via Regularization

    PubMed Central

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491

  10. Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies

    NASA Astrophysics Data System (ADS)

    Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura

    2018-05-01

    Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they allow one to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides an appreciable gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first-quantized description of a Dirac fermion. As an application, we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.

  11. The Container Problem in Bubble-Sort Graphs

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasuto; Kaneko, Keiichi

    Bubble-sort graphs are variants of Cayley graphs. A bubble-sort graph is suitable as a topology for massively parallel systems because of its simple and regular structure. Therefore, in this study, we focus on n-bubble-sort graphs and propose an algorithm to obtain n-1 disjoint paths between two arbitrary nodes in time bounded by a polynomial in n, where n is the degree of the graph plus one. We estimate the time complexity of the algorithm and the sum of the path lengths after proving the correctness of the algorithm. In addition, we report the results of computer experiments evaluating the average performance of the algorithm.
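
    For intuition, the object itself is easy to build: the n-bubble-sort graph has the n! permutations as nodes and an edge wherever two permutations differ by one adjacent transposition, so every node has degree n-1, matching the n-1 disjoint paths sought above. A small generator (illustrative, not the authors' code):

    # Build the n-bubble-sort graph as an adjacency dictionary.
    from itertools import permutations

    def bubble_sort_graph(n):
        nodes = list(permutations(range(n)))
        adj = {v: [] for v in nodes}
        for v in nodes:
            for i in range(n - 1):          # the n-1 adjacent transpositions
                w = list(v)
                w[i], w[i + 1] = w[i + 1], w[i]
                adj[v].append(tuple(w))
        return adj

    adj = bubble_sort_graph(4)
    print(len(adj), "nodes, degree", len(next(iter(adj.values()))))  # 24 nodes, degree 3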

  12. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P-Velocity Model

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.

    2012-12-01

    Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high-fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GᵀG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GᵀG)⁻¹ by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of GᵀG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GᵀG)⁻¹ and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
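
    A dense, toy-scale analogue of the pipeline just described (regularized normal matrix, Cholesky factorization, inversion, then summing the model covariance along ray paths) is sketched below; the sizes, the assumed unit data covariance, and the random ray sensitivities stand in for the paper's out-of-core machinery.

    # Toy analogue: travel-time variance/covariance from the model covariance.
    import numpy as np

    rng = np.random.default_rng(0)
    n_rays, n_nodes = 200, 50
    G = rng.random((n_rays, n_nodes))        # tomography matrix (ray sensitivities)
    lam = 0.1                                # Tikhonov regularization weight

    GtG = G.T @ G + lam * np.eye(n_nodes)    # regularized normal matrix
    L = np.linalg.cholesky(GtG)              # Cholesky factor: GtG = L @ L.T
    GtG_inv = np.linalg.inv(L).T @ np.linalg.inv(L)

    cov_model = GtG_inv                      # assuming unit data covariance
    path_a = rng.random(n_nodes)             # sensitivity of ray path a to each node
    path_b = rng.random(n_nodes)
    tt_cov = path_a @ cov_model @ path_b     # travel-time covariance of two paths
    tt_var = path_a @ cov_model @ path_a     # equal paths give the variance
    print("covariance:", tt_cov, " uncertainty:", np.sqrt(tt_var))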

  13. RPT: A Low Overhead Single-End Probing Tool for Detecting Network Congestion Positions

    DTIC Science & Technology

    2003-12-20

    complete evaluation on the Internet, we need to know the real available bandwidth on all the links of a network path. But that information is hard to...Detecting the points of network congestion is an intriguing...research problem, because this information can benefit both regular network users and Internet Service Providers. This is also a highly challenging...

  14. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    PubMed

    Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ₁ and ℓ₂ penalties (elastic net). Our algorithm fits via cyclical coordinate descent and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find that it is considerably faster than competing methods.
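
    The pathwise pattern the abstract describes, cyclic coordinate updates warm-started down a decreasing penalty grid, is sketched below for squared-error loss (a plain lasso) rather than the Cox partial likelihood, purely to keep the example self-contained; grid choices are illustrative.

    # Pathwise cyclic coordinate descent with warm starts (lasso analogue).
    import numpy as np

    def lasso_cd_path(X, y, n_lambdas=50, n_sweeps=20):
        n, p = X.shape
        col_sq = (X ** 2).sum(axis=0) / n
        lam_max = np.abs(X.T @ y).max() / n          # smallest all-zero penalty
        lambdas = lam_max * np.logspace(0, -3, n_lambdas)
        beta = np.zeros(p)                           # warm start carried along
        r = y - X @ beta                             # running residual
        path = []
        for lam in lambdas:
            for _ in range(n_sweeps):                # fixed sweeps, for brevity
                for j in range(p):
                    r += X[:, j] * beta[j]           # remove j's contribution
                    z = X[:, j] @ r / n
                    beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
                    r -= X[:, j] * beta[j]           # restore with new beta_j
            path.append(beta.copy())
        return lambdas, np.array(path)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(100)
    lambdas, path = lasso_cd_path(X, y)
    print("active set at smallest lambda:", np.flatnonzero(path[-1]).tolist())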

  15. Frequency guided methods for demodulation of a single fringe pattern.

    PubMed

    Wang, Haixia; Kemao, Qian

    2009-08-17

    Phase demodulation from a single fringe pattern is a challenging but important task. A frequency-guided regularized phase tracker and a frequency-guided sequential demodulation method with Levenberg-Marquardt optimization are proposed to demodulate a single fringe pattern. In both methods, the demodulation path is guided by the local frequency, from the highest to the lowest. Since critical points have low local frequency values, they are processed last, so that the spurious sign problem caused by these points is avoided. These two methods can be considered as alternatives to the effective fringe follower regularized phase tracker. Demodulation results from one computer-simulated and two experimental fringe patterns using the proposed methods will be demonstrated. (c) 2009 Optical Society of America

  16. Subjective randomness as statistical inference.

    PubMed

    Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B

    2018-06-01

    Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Condition Number Regularized Covariance Estimation

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
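
    The structure of the estimator can be mimicked in a few lines: keep the sample eigenvectors, clip the sample eigenvalues into an interval [τ, κτ] so the condition number is at most κ, and choose τ by likelihood. The numeric 1-D search below is a hedged stand-in for the closed-form solution the paper derives; κ and the data are illustrative.

    # Condition-number-constrained covariance: clip eigenvalues into [tau, kappa*tau].
    import numpy as np
    from scipy.optimize import minimize_scalar

    def cond_reg_cov(S, kappa):
        evals, V = np.linalg.eigh(S)                 # sample eigen-decomposition
        def neg_loglik(tau):
            d = np.clip(evals, tau, kappa * tau)     # condition number <= kappa
            return np.sum(np.log(d) + evals / d)     # Gaussian negative log-likelihood
        tau = minimize_scalar(neg_loglik, bounds=(1e-8, evals.max()),
                              method="bounded").x
        d = np.clip(evals, tau, kappa * tau)
        return (V * d) @ V.T                         # V @ diag(d) @ V.T

    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 100))               # "large p small n": n=30, p=100
    Sigma = cond_reg_cov(np.cov(X, rowvar=False), kappa=50.0)
    d = np.linalg.eigvalsh(Sigma)
    print("condition number:", d.max() / d.min())    # bounded by kappa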

  18. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.

  19. Collective dynamics of 'small-world' networks.

    PubMed

    Watts, D J; Strogatz, S H

    1998-06-04

    Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
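
    The tunable middle ground described above is easy to reproduce with the networkx implementation of this rewiring model: as the rewiring probability p grows, clustering stays lattice-like long after the characteristic path length has collapsed to random-graph levels. Parameter values below are illustrative, not those of the paper.

    # Watts-Strogatz sweep: high clustering with short paths at small p.
    import networkx as nx

    n, k = 1000, 10                    # 1000 nodes, each wired to 10 ring neighbors
    for p in [0.0, 0.01, 0.1, 1.0]:    # rewiring probability: regular -> random
        G = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
        print(f"p={p:<5} clustering={nx.average_clustering(G):.3f} "
              f"path length={nx.average_shortest_path_length(G):.2f}")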

  20. Distributional and regularized radiation fields of non-uniformly moving straight dislocations, and elastodynamic Tamm problem

    NASA Astrophysics Data System (ADS)

    Lazar, Markus; Pellegrini, Yves-Patrick

    2016-11-01

    This work introduces original explicit solutions for the elastic fields radiated by non-uniformly moving, straight, screw or edge dislocations in an isotropic medium, in the form of time-integral representations in which acceleration-dependent contributions are explicitly separated out. These solutions are obtained by applying an isotropic regularization procedure to distributional expressions of the elastodynamic fields built on the Green tensor of the Navier equation. The obtained regularized field expressions are singularity-free, and depend on the dislocation density rather than on the plastic eigenstrain. They cover non-uniform motion at arbitrary speeds, including faster-than-wave ones. A numerical method of computation is discussed, that rests on discretizing motion along an arbitrary path in the plane transverse to the dislocation, into a succession of time intervals of constant velocity vector over which time-integrated contributions can be obtained in closed form. As a simple illustration, it is applied to the elastodynamic equivalent of the Tamm problem, where fields induced by a dislocation accelerated from rest beyond the longitudinal wave speed, and thereafter put to rest again, are computed. As expected, the proposed expressions produce Mach cones, the dynamic build-up and decay of which is illustrated by means of full-field calculations.

  1. Formal language constrained path problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context-free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language, they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
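
    Contribution (1) rests on the standard product construction: pair each graph vertex with a state of an automaton for the language and run an ordinary shortest-path algorithm on the product. A minimal sketch with a made-up intermodal example (labels, weights and the DFA are invented):

    # Dijkstra on the product of a labeled graph with a DFA.
    import heapq

    def constrained_shortest_path(edges, dfa, start, goal, q0, accept):
        """edges: {u: [(v, label, weight)]}; dfa: {(state, label): state}."""
        pq = [(0.0, start, q0)]
        settled = set()
        while pq:
            d, u, q = heapq.heappop(pq)
            if (u, q) in settled:
                continue
            settled.add((u, q))
            if u == goal and q in accept:
                return d                       # first accepting pop is optimal
            for v, a, w in edges.get(u, []):
                q2 = dfa.get((q, a))
                if q2 is not None and (v, q2) not in settled:
                    heapq.heappush(pq, (d + w, v, q2))
        return None

    # Mode-choice language 'road+ rail*': some road edges, then rail only.
    edges = {"s": [("m", "road", 1.0)],
             "m": [("t", "rail", 2.0), ("t", "road", 0.5)]}
    dfa = {(0, "road"): 1, (1, "road"): 1, (1, "rail"): 2, (2, "rail"): 2}
    print(constrained_shortest_path(edges, dfa, "s", "t", 0, accept={1, 2}))  # 1.5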

  2. Traveling in the dark: the legibility of a regular and predictable structure of the environment extends beyond its borders.

    PubMed

    Yaski, Osnat; Portugali, Juval; Eilam, David

    2012-04-01

    The physical structure of the surrounding environment shapes the paths of progression, which in turn reflect the structure of the environment and the way that it shapes behavior. A regular and coherent physical structure results in paths that extend over the entire environment. In contrast, irregular structure results in traveling over a confined sector of the area. In this study, rats were tested in a dark arena in which half the area contained eight objects in a regular grid layout, and the other half contained eight objects in an irregular layout. In subsequent trials, a salient landmark was placed first within the irregular half, and then within the grid. We hypothesized that rats would favor travel in the area with regular order, but found that activity in the area with irregular object layout did not differ from activity in the area with grid layout, even when the irregular half included a salient landmark. Thus, the grid impact in one arena half extended to the other half and overshadowed the presumed impact of the salient landmark. This could be explained by mechanisms that control spatial behavior, such as grid cells and odometry. However, when objects were spaced irregularly over the entire arena, the salient landmark became dominant and the paths converged upon it, especially from objects with direct access to the salient landmark. Altogether, three environmental properties: (i) regular and predictable structure; (ii) salience of landmarks; and (iii) accessibility, hierarchically shape the paths of progression in a dark environment. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Book Review:

    NASA Astrophysics Data System (ADS)

    Louko, Jorma

    2007-04-01

    Bastianelli and van Nieuwenhuizen's monograph `Path Integrals and Anomalies in Curved Space' collects in one volume the results of the authors' 15-year research programme on anomalies that arise in Feynman diagrams of quantum field theories on curved manifolds. The programme was spurred by the path-integral techniques introduced in Alvarez-Gaumé and Witten's renowned 1983 paper on gravitational anomalies which, together with the anomaly cancellation paper by Green and Schwarz, led to the string theory explosion of the 1980s. The authors have produced a tour de force, giving a comprehensive and pedagogical exposition of material that is central to current research. The first part of the book develops from scratch a formalism for defining and evaluating quantum mechanical path integrals in nonlinear sigma models, using time slicing regularization, mode regularization and dimensional regularization. The second part applies this formalism to quantum fields of spin 0, 1/2, 1 and 3/2 and to self-dual antisymmetric tensor fields. The book concludes with a discussion of gravitational anomalies in 10-dimensional supergravities, for both classical and exceptional gauge groups. The target audience is researchers and graduate students in curved spacetime quantum field theory and string theory, and the aims, style and pedagogical level have been chosen with this audience in mind. Path integrals are treated as calculational tools, and the notation and terminology are throughout tailored to calculational convenience, rather than to mathematical rigour. The style is closer to that of an exceedingly thorough and self-contained review article than to that of a textbook. As the authors mention, the first part of the book can be used as an introduction to path integrals in quantum mechanics, although in a classroom setting perhaps more likely as supplementary reading than a primary class text. Readers outside the core audience, including this reviewer, will gain from the book a heightened appreciation of the central role of regularization as a defining ingredient of a quantum field theory and will be impressed by the agreement of results arising from different regularization schemes. The readers may in particular enjoy the authors' `brief history of anomalies' in quantum field theory, as well as a similar historical discussion of path integrals in quantum mechanics.

  4. Gauge fixing and BFV quantization

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-01-01

    Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.

  5. Nonlinear refraction and reflection travel time tomography

    USGS Publications Warehouse

    Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.

    1998-01-01

    We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
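
    The regularized inversion step has a compact linear analogue: stack the travel-time system with a scaled Laplacian and solve the damped least-squares problem. The 1-D toy below (random ray matrix, illustrative λ) mirrors only the structure; the paper minimizes a nonlinear objective with conjugate gradients on a 2-D grid with reflector unknowns.

    # Laplacian-regularized least-squares slowness inversion, 1-D toy.
    import numpy as np

    rng = np.random.default_rng(0)
    n_rays, n_cells = 120, 40
    G = rng.random((n_rays, n_cells)) * 0.1       # ray lengths through cells
    m_true = 1.0 + 0.2 * np.sin(np.linspace(0, 3, n_cells))   # true slowness
    d = G @ m_true + 0.01 * rng.standard_normal(n_rays)       # noisy travel times

    L = (np.diag(-2.0 * np.ones(n_cells))         # 1-D Laplacian (second differences)
         + np.diag(np.ones(n_cells - 1), 1)
         + np.diag(np.ones(n_cells - 1), -1))
    lam = 1.0                                     # regularization weight
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(n_cells)])
    m_hat, *_ = np.linalg.lstsq(A, b, rcond=None) # min ||Gm-d||^2 + lam^2 ||Lm||^2
    print("relative model error:",
          np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))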

  6. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.

  7. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  8. Accelerated observers and the notion of singular spacetime

    NASA Astrophysics Data System (ADS)

    Olmo, Gonzalo J.; Rubiera-Garcia, Diego; Sanchez-Puente, Antonio

    2018-03-01

    Geodesic completeness is typically regarded as a basic criterion to determine whether a given spacetime is regular or singular. However, the principle of general covariance does not privilege any family of observers over the others and, therefore, observers with arbitrary motions should be able to provide a complete physical description of the world. This suggests that in a regular spacetime, all physically acceptable observers should have complete paths. In this work we explore this idea by studying the motion of accelerated observers in spherically symmetric spacetimes and illustrate it by considering two geodesically complete black hole spacetimes recently described in the literature. We show that for bound and locally unbound accelerations, the paths of accelerated test particles are complete, providing further support to the regularity of such spacetimes.

  9. Computational study of the melting-freezing transition in the quantum hard-sphere system for intermediate densities. II. Structural features.

    PubMed

    Sesé, Luis M; Bailey, Lorna E

    2007-04-28

    The structural features of the quantum hard-sphere system in the region of the fluid-face-centered-cubic-solid transition, for reduced number densities 0.45...

  10. Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.

    PubMed

    Hu, Yue; Allen, Genevera I

    2015-12-01

    Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We will demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.

  11. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    PubMed

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomprehensive data from different databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through a web service by local programs or through a web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath.

  12. Three layers multi-granularity OCDM switching system based on learning-stateful PCE

    NASA Astrophysics Data System (ADS)

    Wang, Yubao; Liu, Yanfei; Sun, Hao

    2017-10-01

    In the existing three-layer multi-granularity OCDM switching system (TLMG-OCDMSS), F-LSP, L-LSP and OC-LSP can be bundled as switching granularities. In a CPU-intensive network, a node not only needs to compute paths but also needs to bundle switching granularities, so the load on a single node is heavy. A node will be paralyzed when its traffic is too heavy, which seriously impacts the performance of the whole network. The introduction of a stateful PCE (S-PCE) effectively solves these problems. A PCE is composed of two parts, namely the path computation element and the databases (TED and LSPDB), and returns the result of path computation to a PCC (path computation client) after the PCC sends a path computation request to it. In this way, the pressure of distributed path computation on each node is reduced. In this paper, we propose the concept of a Learning PCE (L-PCE), which uses the existing LSPDB as the data source for the PCE's learning. By this means, we can simplify path computation and reduce network delay, thereby improving the performance of the network.

  13. Path-Following Solutions Of Nonlinear Equations

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Walters, Robert W.

    1989-01-01

    Report describes some path-following techniques for solution of nonlinear equations and compares with other methods. Use of multipurpose techniques applicable at more than one stage of path-following computation results in system relatively simple to understand, program, and use. Comparison of techniques with method of parametric differentiation (MPD) reveals definite advantages for path-following methods. Emphasis in investigation on multiuse techniques being applied at more than one stage of path-following computation. Incorporation of multipurpose techniques results in concise computer code relatively simple to use.

  14. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time-consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.

  15. Planning paths to multiple targets: memory involvement and planning heuristics in spatial problem solving.

    PubMed

    Wiener, J M; Ehbauer, N N; Mallot, H A

    2009-09-01

    For large numbers of targets, path planning is a complex and computationally expensive task. Humans, however, usually solve such tasks quickly and efficiently. We present experiments studying human path planning performance and the cognitive processes and heuristics involved. Twenty-five places were arranged on a regular grid in a large room. Participants were repeatedly asked to solve traveling salesman problems (TSP), i.e., to find the shortest closed loop connecting a start location with multiple target locations. In Experiment 1, we tested whether humans employed the nearest neighbor (NN) strategy when solving the TSP. Results showed that subjects outperform the NN strategy, suggesting that it is not sufficient to explain human route planning behavior. As a second possible strategy we tested a hierarchical planning heuristic in Experiment 2, demonstrating that participants first plan a coarse route on the region level that is refined during navigation. To test for the relevance of spatial working memory (SWM) and spatial long-term memory (LTM) for planning performance and the planning heuristics applied, we varied the memory demands between conditions in Experiment 2. In one condition the target locations were directly marked, such that no memory was required; a second condition required participants to memorize the target locations during path planning (SWM); in a third condition, additionally, the locations of targets had to be retrieved from LTM (SWM and LTM). Results showed that navigation performance decreased with increasing memory demands, while the dependence on the hierarchical planning heuristic increased.
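
    The nearest neighbor baseline from Experiment 1 takes only a few lines: from the current place, always move to the closest unvisited target, then close the loop. The 5 × 5 grid below echoes the 25-place layout of the study, though the coordinates and start are invented.

    # Nearest-neighbor heuristic for a closed TSP tour.
    import numpy as np

    def nearest_neighbor_tour(points, start=0):
        unvisited = set(range(len(points))) - {start}
        tour, cur = [start], start
        while unvisited:
            cur = min(unvisited,
                      key=lambda j: np.linalg.norm(points[cur] - points[j]))
            tour.append(cur)
            unvisited.remove(cur)
        return tour + [start]                    # closed loop back to the start

    pts = np.array([(i % 5, i // 5) for i in range(25)], dtype=float)  # 5x5 grid
    tour = nearest_neighbor_tour(pts)
    length = sum(np.linalg.norm(pts[a] - pts[b]) for a, b in zip(tour, tour[1:]))
    print("NN tour length:", round(length, 2))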

  16. A Path Algorithm for Constrained Estimation

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2013-01-01

    Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382

  17. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first-P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GᵀG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GᵀG)⁻¹ by assigning blocks to individual processing nodes for matrix decomposition, update and scaling operations. We first find the Cholesky decomposition of GᵀG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GᵀG)⁻¹ and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths. Setting the paths equal gives the variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. Assessment of critical path analyses of the relationship between permeability and electrical conductivity of pore networks

    NASA Astrophysics Data System (ADS)

    Skaggs, Todd H.

    2011-10-01

    Critical path analysis (CPA) is a method for estimating macroscopic transport coefficients of heterogeneous materials that are highly disordered at the micro-scale. Developed originally to model conduction in semiconductors, numerous researchers have noted that CPA might also have relevance to flow and transport processes in porous media. However, the results of several numerical investigations of critical path analysis on pore network models raise questions about the applicability of CPA to porous media. Among other things, these studies found that (i) in well-connected 3D networks, CPA predictions were inaccurate and became worse when heterogeneity was increased; and (ii) CPA could not fully explain the transport properties of 2D networks. To better understand the applicability of CPA to porous media, we made numerical computations of permeability and electrical conductivity on 2D and 3D networks with differing pore-size distributions and geometries. A new CPA model for the relationship between the permeability and electrical conductivity was found to be in good agreement with numerical data, and to be a significant improvement over a classical CPA model. In sufficiently disordered 3D networks, the new CPA prediction was within ±20% of the true value, and was nearly optimal in terms of minimizing the squared prediction errors across differing network configurations. The agreement of CPA predictions with 2D network computations was similarly good, although 2D networks are in general not well-suited for evaluating CPA. Numerical transport coefficients derived for regular 3D networks of slit-shaped pores were found to be in better agreement with experimental data from rock samples than were coefficients derived for networks of cylindrical pores.

  19. Stretching of passive tracers and implications for mantle mixing

    NASA Astrophysics Data System (ADS)

    Conjeepuram, N.; Kellogg, L. H.

    2007-12-01

    Mid-ocean ridge basalts (MORB) and ocean island basalts (OIB) have fundamentally different geochemical signatures. Understanding this difference requires a fundamental knowledge of the mixing processes that led to their formation. Quantitative methods used to assess mixing include examining the distribution of passive tracers, attaching time-evolution information to simulate decay of radioactive isotopes, and, for chaotic flows, calculating the Lyapunov exponent, which characterizes whether two nearby particles diverge at an exponential rate. Although effective, these methods are indirect measures of the two fundamental processes associated with mixing, namely stretching and folding. Building on work done by Kellogg and Turcotte, we present a method to compute the stretching and thinning of a passive, ellipsoidal tracer in three orthogonal directions in isoviscous, incompressible three-dimensional flows. We also compute the Lyapunov exponents associated with the given system based on the quantitative measures of stretching and thinning. We test our method with two analytical and three numerical flow fields which exhibit Lagrangian turbulence. The ABC and STF classes of analytical flows are three- and two-parameter families of flows, respectively, and have been well studied for fast dynamo action. Since they generate both periodic and chaotic particle paths, depending either on the starting point or on the choice of the parameters, they provide a good foundation for understanding mixing. The numerical flow fields are similar to the geometries used by Ferrachat and Ricard (1998) and emulate a ridge-transform system. We also compute the stable and unstable manifolds associated with the numerical flow fields to illustrate the directions of rapid and slow mixing. We find that stretching in chaotic flow fields is significantly more effective than in regular or periodic flow fields. Consequently, chaotic mixing is far more efficient than regular mixing. We also find that in the numerical flow field there is a fundamental topological difference in the regions exhibiting slow or regular mixing for different model geometries.
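
    A crude version of the divergence measure mentioned above: integrate two nearby tracers through the ABC flow and estimate the largest Lyapunov exponent from their separation growth. Parameter values and tolerances are illustrative, and the renormalization used in careful Lyapunov computations is omitted.

    # Two-tracer Lyapunov estimate in the ABC flow.
    import numpy as np
    from scipy.integrate import solve_ivp

    A, B, C = 1.0, np.sqrt(2.0 / 3.0), np.sqrt(1.0 / 3.0)  # a common parameter choice

    def abc_flow(t, x):
        return [A * np.sin(x[2]) + C * np.cos(x[1]),
                B * np.sin(x[0]) + A * np.cos(x[2]),
                C * np.sin(x[1]) + B * np.cos(x[0])]

    x0 = np.array([0.1, 0.2, 0.3])
    d0 = 1e-6                                   # initial tracer separation
    T = 100.0
    s0 = solve_ivp(abc_flow, (0, T), x0, rtol=1e-10, atol=1e-12, dense_output=True)
    s1 = solve_ivp(abc_flow, (0, T), x0 + [d0, 0, 0],
                   rtol=1e-10, atol=1e-12, dense_output=True)
    dT = np.linalg.norm(s0.sol(T) - s1.sol(T))
    print("Lyapunov estimate:", np.log(dT / d0) / T)  # > 0 indicates chaotic stretching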

  20. Identification of Vehicle Axle Loads from Bridge Dynamic Responses

    NASA Astrophysics Data System (ADS)

    ZHU, X. Q.; LAW, S. S.

    2000-09-01

    A method is presented to identify moving loads on a bridge deck modelled as an orthotropic rectangular plate. The dynamic behavior of the bridge deck under moving loads is analyzed using orthotropic plate theory and the modal superposition principle, and a Tikhonov regularization procedure is applied to provide bounds on the identified forces in the time domain. The identified results using a beam model and a plate model of the bridge deck are compared, and the conditions under which the bridge deck can be simplified to an equivalent beam model are discussed. Computer simulations and laboratory tests show the effectiveness and validity of the proposed method in identifying forces travelling along the central line or along an eccentric path on the bridge deck.
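
    A minimal sketch of the Tikhonov step described above, assuming a hypothetical smoothing kernel in place of the bridge's modal response matrix: deconvolving such a kernel is ill-posed, and the ridge term bounds the identified force history.

      import numpy as np

      def tikhonov(A, b, lam):
          # Minimize ||A f - b||^2 + lam ||f||^2 via the regularized
          # normal equations (A^T A + lam I) f = A^T b.
          return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

      rng = np.random.default_rng(1)
      n = 100
      t = np.linspace(0.0, 1.0, n)

      # Hypothetical smoothing kernel standing in for the deck's modal
      # response; inverting it without regularization amplifies noise.
      A = np.exp(-((t[:, None] - t[None, :]) / 0.05) ** 2)
      A /= A.sum(axis=1, keepdims=True)
      f_true = np.sin(2 * np.pi * t)
      b = A @ f_true + 0.01 * rng.standard_normal(n)

      f_naive = np.linalg.lstsq(A, b, rcond=None)[0]
      f_reg = tikhonov(A, b, lam=1e-4)
      print("naive error      :", np.linalg.norm(f_naive - f_true))
      print("regularized error:", np.linalg.norm(f_reg - f_true))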

  1. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens

    PubMed Central

    2012-01-01

    Background Pathway data are important for understanding the relationships between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete coverage across databases. Results In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure that no data are deleted and no noise is introduced in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene-pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by the average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene-pair relationships neatly recorded in a simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through a web service by local programs or through a web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. Conclusions We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. The methodology and programs described in this work can easily be applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath. PMID:23282057

  2. ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.

    PubMed

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
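
    The record's ODE-driven path algorithm is not available in common libraries, but the elastic-net path it targets can be sketched for the least-squares case with scikit-learn's enet_path; survival-specific tooling would be needed to reproduce the Cox version. The data below are synthetic.

      import numpy as np
      from sklearn.linear_model import enet_path

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 20))
      beta = np.zeros(20)
      beta[:3] = [3.0, -2.0, 1.5]               # three true signals
      y = X @ beta + 0.5 * rng.standard_normal(100)

      # Coefficient paths over a decreasing penalty grid at fixed l1_ratio;
      # each column of `coefs` is the elastic-net fit at one penalty value.
      alphas, coefs, _ = enet_path(X, y, l1_ratio=0.5, n_alphas=50)
      print(alphas.shape, coefs.shape)          # (50,), (20, 50)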

  3. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type, comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary, is disclosed. Communication between the working nodes is via one communications network, while communication between the working nodes and the watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises a plurality of first computing nodes and a first network of message-conducting paths interconnecting the first computing nodes as a hypercube; the first network provides a path for message transfer between the first computing nodes. A second network of message-conducting paths connects the first computing nodes to a first watch dog node independently of the first network, providing an independent path for test-message and reconfiguration-related transfers between the first computing nodes and the first watch dog node. There is additionally a plurality of second computing nodes and a third network of message-conducting paths interconnecting the second computing nodes as a hypercube; the third network provides a path for message transfer between the second computing nodes. A fourth network of message-conducting paths connects the second computing nodes to the first watch dog node independently of the third network, providing an independent path for test-message and reconfiguration-related transfers between the second computing nodes and the first watch dog node. A first multiplexer is disposed between the first watch dog node and the second and fourth networks, allowing the first watch dog node to selectively communicate with individual computing nodes through those networks; a second watch dog node is operably connected to the first multiplexer so that it too can selectively communicate with individual computing nodes through the second and fourth networks. The branch is completed by a first load balancing node and a second multiplexer connected between the first load balancing node and the first and second watch dog nodes, allowing the first load balancing node to selectively communicate with the first and second watch dog nodes.
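
    The hypercube wiring referred to throughout is easy to state in code: label the nodes with d-bit integers and link labels that differ in exactly one bit. A minimal sketch:

      def hypercube_neighbors(node, dim):
          # Nodes are dim-bit labels; two nodes are linked iff their labels
          # differ in exactly one bit, hence XOR with each unit bit.
          return [node ^ (1 << i) for i in range(dim)]

      print(hypercube_neighbors(0b0101, 4))   # [4, 7, 1, 13]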

  4. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. Travel time calculation in a 1D velocity model is the most efficient: for a given source depth, receiver depth and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differing local and regional structures, and whenever possible the travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method is more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented - a simple and very efficient tool for calculating travel times from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, in either Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre, Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
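
    pySeismicFMM's own API is not reproduced here, but the same Fast Marching computation can be sketched with the scikit-fmm package (an independent FMM implementation); the grid, the velocities, and the source placement below are illustrative only.

      import numpy as np
      import skfmm   # scikit-fmm, pip install scikit-fmm

      ny, nx = 200, 200
      speed = np.full((ny, nx), 2.0)            # slow upper half-space
      speed[100:, :] = 4.0                      # faster lower half-space

      # The zero contour of phi marks the source; one negative cell suffices.
      phi = np.ones((ny, nx))
      phi[10, 10] = -1.0

      t = skfmm.travel_time(phi, speed, dx=1.0)
      print("travel time at a receiver cell:", t[150, 180])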

  5. Computational path planner for product assembly in complex environments

    NASA Astrophysics Data System (ADS)

    Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi

    2013-03-01

    Assembly path planning is a crucial problem in assembly-related design and manufacturing processes. Sampling-based motion planning algorithms are used for computational assembly path planning. However, the performance of such algorithms may degrade considerably in environments with complex product structure, narrow passages or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed and utilized in each phase to solve the challenging issues. A novel ray-test-based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids the false collisions of conventional methods and relaxes the geometric constraint when a part has to be removed while in surface contact with other parts. A refined history-based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree based on its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for path planning problems with challenging scenarios and uncertain environments. With extending values assigned to each tree node and extending schemes applied, the tree can adapt its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out, and comparisons are made between conventional path planning algorithms and the presented ones. The results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides a reference for the study of computational assembly path planning in complex environments.
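
    A bare-bones RRT, stripped of the history-based and adaptive refinements the paper proposes, shows the underlying sampling loop; the obstacle, step size, and tolerances below are invented for the sketch.

      import numpy as np

      rng = np.random.default_rng(0)

      def collides(p):
          # Hypothetical obstacle: a disk blocking the middle of the unit square.
          return np.linalg.norm(p - np.array([0.5, 0.5])) < 0.2

      def rrt(start, goal, step=0.05, iters=5000, goal_tol=0.05):
          nodes = [np.asarray(start, float)]
          parent = {0: None}
          for _ in range(iters):
              sample = rng.random(2)
              # Nearest tree node, then one fixed-length step toward the sample.
              i = min(range(len(nodes)),
                      key=lambda k: np.linalg.norm(nodes[k] - sample))
              d = np.linalg.norm(sample - nodes[i])
              if d < 1e-12:
                  continue
              new = nodes[i] + step * (sample - nodes[i]) / d
              if collides(new):
                  continue
              nodes.append(new)
              parent[len(nodes) - 1] = i
              if np.linalg.norm(new - np.asarray(goal)) < goal_tol:
                  path, j = [], len(nodes) - 1
                  while j is not None:
                      path.append(nodes[j])
                      j = parent[j]
                  return path[::-1]
          return None

      path = rrt([0.1, 0.1], [0.9, 0.9])
      print("waypoints:", len(path) if path else "no path found")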

  6. Weak-noise limit of a piecewise-smooth stochastic differential equation.

    PubMed

    Chen, Yaming; Baule, Adrian; Touchette, Hugo; Just, Wolfram

    2013-11-01

    We investigate the validity and accuracy of weak-noise (saddle-point or instanton) approximations for piecewise-smooth stochastic differential equations (SDEs), taking as an illustrative example a piecewise-constant SDE, which serves as a simple model of Brownian motion with solid friction. For this model, we show that the weak-noise approximation of the path integral correctly reproduces the known propagator of the SDE at lowest order in the noise power, as well as the main features of the exact propagator with higher-order corrections, provided the singularity of the path integral associated with the nonsmooth SDE is treated with some heuristics. We also show that, as in the case of smooth SDEs, the deterministic paths of the noiseless system correctly describe the behavior of the nonsmooth SDE in the low-noise limit. Finally, we consider a smooth regularization of the piecewise-constant SDE and study to what extent this regularization can rectify some of the problems encountered when dealing with discontinuous drifts and singularities in SDEs.
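
    The piecewise-constant SDE studied here, dX = -mu sign(X) dt + sqrt(2D) dW, is simple to simulate directly; a minimal Euler-Maruyama sketch (all parameter values invented) gives a numerical baseline against which weak-noise approximations can be checked.

      import numpy as np

      rng = np.random.default_rng(0)

      def euler_maruyama(mu=1.0, D=0.05, dt=1e-3, T=10.0, x0=1.0):
          # Sample path of dX = -mu * sign(X) dt + sqrt(2 D) dW, a minimal
          # model of Brownian motion with solid (dry) friction.
          n = int(T / dt)
          x = np.empty(n + 1)
          x[0] = x0
          for k in range(n):
              x[k + 1] = (x[k] - mu * np.sign(x[k]) * dt
                          + np.sqrt(2.0 * D * dt) * rng.standard_normal())
          return x

      finals = np.array([euler_maruyama()[-1] for _ in range(2000)])
      print("spread of X(T) over 2000 paths:", finals.std())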

  7. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  8. Offdiagonal complexity: A computationally quick complexity measure for graphs and networks

    NASA Astrophysics Data System (ADS)

    Claussen, Jens Christian

    2007-02-01

    A vast variety of biological, social, and economical networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated from the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
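
    A sketch of one plausible reading of the OdC construction (the exact normalization should be taken from the paper): build the degree-degree cross-distribution of linked nodes, sum it along the offdiagonals, and take the entropy of the resulting profile. A random regular graph then scores zero, consistent with the statement above.

      import numpy as np
      import networkx as nx

      def offdiagonal_complexity(G):
          # Cross-distribution c[k, l]: number of edges joining a degree-k
          # node to a degree-l node (with k <= l).
          kmax = max(dict(G.degree()).values())
          c = np.zeros((kmax + 1, kmax + 1))
          for u, v in G.edges():
              k, l = sorted((G.degree(u), G.degree(v)))
              c[k, l] += 1
          # Sum along offdiagonals and take the entropy of the profile.
          a = np.array([np.trace(c, offset=m) for m in range(kmax + 1)])
          p = a[a > 0] / a.sum()
          return float(-(p * np.log(p)).sum())

      print(offdiagonal_complexity(nx.random_regular_graph(4, 100, seed=0)))   # 0.0
      print(offdiagonal_complexity(nx.barabasi_albert_graph(100, 3, seed=0)))  # > 0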

  9. Investigation of the spinfoam path integral with quantum cuboid intertwiners

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin; Steinhaus, Sebastian

    2016-05-01

    In this work, we investigate the 4d path integral for Euclidean quantum gravity on a hypercubic lattice, as given by the spinfoam model by Engle, Pereira, Rovelli, Livine, Freidel and Krasnov. To tackle the problem, we restrict to a set of quantum geometries that reflects the large amount of lattice symmetries. In particular, the sum over intertwiners is restricted to quantum cuboids, i.e. coherent intertwiners which describe a cuboidal geometry in the large-j limit. Using asymptotic expressions for the vertex amplitude, we find several interesting properties of the state sum. First of all, the value of coupling constants in the amplitude functions determines whether geometric or nongeometric configurations dominate the path integral. Secondly, there is a critical value of the coupling constant α, which separates two phases. In both phases, the diffeomorphism symmetry appears to be broken. In one, the dominant contribution comes from highly irregular, in the other from highly regular configurations, both describing flat Euclidean space with small quantum fluctuations around them, viewed in different coordinate systems. On the critical point diffeomorphism symmetry is nearly restored, however. Thirdly, we use the state sum to compute the physical norm of kinematical states, i.e. their norm in the physical Hilbert space. We find that states which describe boundary geometry with high torsion have an exponentially suppressed physical norm. We argue that this allows one to exclude them from the state sum in calculations.

  10. Using GPS RO L1 data for calibration of the atmospheric path delay model for data reduction of the satellite altimetery observations.

    NASA Astrophysics Data System (ADS)

    Petrov, L.

    2017-12-01

    Processing satellite altimetry data requires the computation of the path delay in the neutral atmosphere that is used for correcting ranges. The path delay is computed using numerical weather models, and the accuracy of its computation depends on the accuracy of those models. The accuracy of numerical weather models over Antarctica and Greenland, where the network of ground stations is very sparse, is not well known. I used a dataset of GPS RO L1 data, computed the predicted path delay for RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These profiles were used for computing corrections to the a priori zenith path delay. The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterizing the systematic errors caused by mismodeling the atmosphere.

  11. 20 CFR 226.35 - Deductions from regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  12. Processor Would Find Best Paths On Map

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1990-01-01

    Proposed very-large-scale integrated (VLSI) circuit image-data processor finds path of least cost from specified origin to any destination on map. Cost of traversal assigned to each picture element of map. Path of least cost from originating picture element to every other picture element computed as path that preserves as much as possible of signal transmitted by originating picture element. Dedicated microprocessor at each picture element stores cost of traversal and performs its share of computations of paths of least cost. Least-cost-path problem occurs in research, military maneuvers, and in planning routes of vehicles.
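
    The per-element computation can be emulated serially with Dijkstra's algorithm over a grid of traversal costs; the toy terrain below is invented and stands in for the map, not for the VLSI architecture itself.

      import heapq

      def least_cost_map(cost, origin):
          # Dijkstra over per-cell traversal costs: minimum accumulated cost
          # of reaching every cell from the origin (the serial analogue of
          # the per-pixel processors described above).
          n, m = len(cost), len(cost[0])
          dist = [[float("inf")] * m for _ in range(n)]
          i0, j0 = origin
          dist[i0][j0] = cost[i0][j0]
          heap = [(dist[i0][j0], origin)]
          while heap:
              d, (i, j) = heapq.heappop(heap)
              if d > dist[i][j]:
                  continue
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  a, b = i + di, j + dj
                  if 0 <= a < n and 0 <= b < m and d + cost[a][b] < dist[a][b]:
                      dist[a][b] = d + cost[a][b]
                      heapq.heappush(heap, (dist[a][b], (a, b)))
          return dist

      terrain = [[1, 1, 9, 1],
                 [1, 9, 9, 1],
                 [1, 1, 1, 1]]
      print(least_cost_map(terrain, (0, 0))[0][3])   # 8: cheaper to go around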

  13. Understanding health care communication preferences of veteran primary care users.

    PubMed

    LaVela, Sherri L; Schectman, Gordon; Gering, Jeffrey; Locatelli, Sara M; Gawron, Andrew; Weaver, Frances M

    2012-09-01

    To assess veterans' health communication preferences (in-person, telephone, or electronic) for primary care needs and the impact of computer use on preferences. Structured patient interviews (n=448). Bivariate analyses examined preferences for primary care by 'infrequent' vs. 'regular' computer users. Only 54% were regular computer users, nearly all of whom had used the internet. 'Telephone' was preferred for 6 of 10 reasons (general medical questions, medication questions and refills, preventive care reminders, scheduling, and test results), although telephone was preferred by markedly fewer regular computer users. 'In-person' was preferred for new/ongoing conditions/symptoms, treatment instructions, and next care steps; these preferences were unaffected by computer use frequency. Among regular computer users, 1/3 preferred 'electronic' for preventive reminders (37%), test results (34%), and refills (32%). For most primary care needs, telephone communication was preferred, although by a greater proportion of infrequent vs. regular computer users. In-person communication was preferred for reasons that may require an exam or visual instructions. About 1/3 of regular computer users prefer electronic communication for routine needs, e.g., preventive reminders, test results, and refills. These findings can be used to plan patient-centered care that is aligned with veterans' preferred health communication methods. Published by Elsevier Ireland Ltd.

  14. ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM

    PubMed Central

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932

  15. 20 CFR 226.14 - Employee regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  16. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  17. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning

    PubMed Central

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely population size, differential weight, crossover rate, and generation number. These tuning parameters are required, together with a user-defined weighting between path quality and computational cost. However, the optimum settings of these tuning parameters vary with the application. Instead of trial and error, this paper presents a method for optimizing the tuning parameters of differential evolution for UAV path planning. The parameters this research focuses on are population size, differential weight, crossover rate, and generation number. The developed algorithm enables the user to simply define the desired weighting between path quality and computational cost and to converge with the minimum number of generations required. In conclusion, the proposed optimization of the differential evolution tuning parameters for UAV path planning expedites convergence and improves the final path and computational cost. PMID:26943630
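
    SciPy's differential_evolution exposes exactly the knobs discussed: population size (`popsize`), differential weight (`mutation`), crossover (`recombination`), and generations (`maxiter`). The toy two-waypoint problem and the weighting constants below are invented for illustration, and the clearance penalty is measured crudely at the waypoints only.

      import numpy as np
      from scipy.optimize import differential_evolution

      start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
      obstacle, radius = np.array([0.5, 0.5]), 0.25

      def cost(w, w_len=1.0, w_clear=10.0):
          # Path through two free waypoints; penalize length plus a crude
          # clearance violation measured at the waypoints.
          pts = np.vstack([start, w[:2], w[2:], goal])
          length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
          hit = sum(max(0.0, radius - np.linalg.norm(p - obstacle)) for p in pts)
          return w_len * length + w_clear * hit

      res = differential_evolution(cost, bounds=[(0.0, 1.0)] * 4, popsize=20,
                                   mutation=0.7, recombination=0.8,
                                   maxiter=200, seed=1)
      print(res.x.reshape(2, 2), res.fun)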

  18. Solving a Hamiltonian Path Problem with a bacterial computer

    PubMed Central

    Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T

    2009-01-01

    Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP-complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP-complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node directed graph. This proof-of-concept experiment demonstrates that bacterial computing is a new way to address NP-complete problems using the inherent advantages of genetic systems. The results of our experiments also validate synthetic biology as a valuable approach to biological engineering. We designed and constructed basic parts, devices, and systems using synthetic biology principles of standardization and abstraction. PMID:19630940
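
    For a three-node instance, the search the bacteria perform in parallel can be checked by brute force in a few lines; the node and edge labels below are illustrative, not the graph encoded in the study.

      from itertools import permutations

      def hamiltonian_paths(nodes, edges):
          # Brute force: each bacterium effectively tests one random ordering;
          # here we simply test them all.
          return [p for p in permutations(nodes)
                  if all((p[i], p[i + 1]) in edges for i in range(len(p) - 1))]

      nodes = [1, 2, 3]                        # illustrative three-node graph
      edges = {(1, 2), (2, 3), (1, 3)}
      print(hamiltonian_paths(nodes, edges))   # [(1, 2, 3)]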

  19. Path Not Found: Disparities in Access to Computer Science Courses in California High Schools

    ERIC Educational Resources Information Center

    Martin, Alexis; McAlear, Frieda; Scott, Allison

    2015-01-01

    "Path Not Found: Disparities in Access to Computer Science Courses in California High Schools" exposes one of the foundational causes of underrepresentation in computing: disparities in access to computer science courses in California's public high schools. This report provides new, detailed data on these disparities by student body…

  20. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods.

  1. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
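
    The acceptance rule at the heart of simulated annealing fits in a few lines; the bumpy objective below is an invented stand-in for the energy-like error measure of a candidate robot path, as are all the constants.

      import math
      import random

      random.seed(0)

      def energy(x, y):
          # Smooth bowl plus oscillations that create many local minima.
          return (x - 2.0) ** 2 + (y + 1.0) ** 2 \
                 + 3.0 * math.sin(5.0 * x) * math.sin(5.0 * y)

      def anneal(steps=20000, t0=5.0, t_end=1e-3):
          x, y = random.uniform(-5, 5), random.uniform(-5, 5)
          e = energy(x, y)
          for k in range(steps):
              t = t0 * (t_end / t0) ** (k / steps)      # geometric cooling
              xn, yn = x + random.gauss(0, 0.3), y + random.gauss(0, 0.3)
              en = energy(xn, yn)
              # Always accept downhill moves; accept uphill moves with
              # Boltzmann probability, which allows escape from local minima.
              if en < e or random.random() < math.exp((e - en) / t):
                  x, y, e = xn, yn, en
          return x, y, e

      print(anneal())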

  2. 29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Inclusion and exclusion of bonuses in computing the... Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...

  3. 29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Inclusion and exclusion of bonuses in computing the... Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...

  4. 29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Inclusion and exclusion of bonuses in computing the... Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...

  5. 29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Inclusion and exclusion of bonuses in computing the... Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except seven specified types of payments...

  6. 29 CFR 778.208 - Inclusion and exclusion of bonuses in computing the “regular rate.”

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Inclusion and exclusion of bonuses in computing the... Inclusion and exclusion of bonuses in computing the “regular rate.” Section 7(e) of the Act requires the inclusion in the regular rate of all remuneration for employment except eight specified types of payments...

  7. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    PubMed

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to handle the constraint relations among task nodes. The strategy assigns a priority value to every task node based on its scheduling order as affected by the constraint relations, and the task node list is generated according to these priority values. To address the scheduling order of task nodes that share the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduled task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node with the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of the task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high quality performance objective.
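
    A static version of the essential-path computation (the paper recomputes it dynamically as scheduling proceeds) is a longest-path recursion over the task DAG; the computation and communication costs below are invented.

      from functools import lru_cache

      # Task DAG: computation cost per node, successors with communication costs.
      comp = {"a": 2, "b": 3, "c": 1, "d": 4}
      succ = {"a": {"b": 1, "c": 2}, "b": {"d": 1}, "c": {"d": 3}, "d": {}}

      @lru_cache(maxsize=None)
      def essential_path(node):
          # Longest computation+communication chain starting at `node`; the
          # node with the largest value is scheduled first among ties.
          tails = [comm + essential_path(s) for s, comm in succ[node].items()]
          return comp[node] + (max(tails) if tails else 0)

      for n in comp:
          print(n, essential_path(n))   # a: 12, b: 8, c: 8, d: 4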

  8. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path

    PubMed Central

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to handle the constraint relations among task nodes. The strategy assigns a priority value to every task node based on its scheduling order as affected by the constraint relations, and the task node list is generated according to these priority values. To address the scheduling order of task nodes that share the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduled task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node with the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of the task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high quality performance objective. PMID:27490901

  9. Regularized Chapman-Enskog expansion for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Schochet, Steven; Tadmor, Eitan

    1990-01-01

    Rosenau has recently proposed a regularized version of the Chapman-Enskog expansion of hydrodynamics. This regularized expansion resembles the usual Navier-Stokes viscosity terms at low wave-numbers, but unlike the latter, it has the advantage of being a bounded macroscopic approximation to the linearized collision operator. The behavior of Rosenau's regularization of the Chapman-Enskog expansion (RCE) is studied in the context of scalar conservation laws. It is shown that the RCE model retains the essential properties of the usual viscosity approximation, e.g., existence of traveling waves, monotonicity, upper-Lipschitz continuity, etc., and at the same time it sharpens the standard viscous shock layers. It is proved that the regularized RCE approximation converges to the underlying inviscid entropy solution as its mean free path epsilon approaches 0, and the convergence rate is estimated.

  10. Zero-Slack, Noncritical Paths

    ERIC Educational Resources Information Center

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique method of project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…
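
    The slack computation the article builds on is the standard forward/backward pass; a minimal sketch on an invented four-activity network, where zero-slack activities form the critical path:

      # Forward/backward pass of the critical path method;
      # slack = latest start - earliest start.
      dur = {"A": 3, "B": 2, "C": 4, "D": 2}
      succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
      pred = {n: [m for m in dur if n in succ[m]] for n in dur}

      es = {}
      for n in ["A", "B", "C", "D"]:                    # topological order
          es[n] = max((es[p] + dur[p] for p in pred[n]), default=0)
      finish = max(es[n] + dur[n] for n in dur)

      ls = {}
      for n in ["D", "C", "B", "A"]:                    # reverse order
          ls[n] = min((ls[s] - dur[n] for s in succ[n]), default=finish - dur[n])

      for n in dur:
          print(n, "slack =", ls[n] - es[n])            # zero slack: critical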

  11. Computing Diffeomorphic Paths for Large Motion Interpolation.

    PubMed

    Seo, Dohyung; Ho, Jeffrey; Vemuri, Baba C

    2013-06-01

    In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(M) is difficult, mainly because of the infinite dimensionality of Diff(M). Our proposed framework, to some degree, bypasses this difficulty using the quotient map from Diff(M) to the quotient space Diff(M)/Diff(M)_μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)_μ. This quotient space was recently identified in the mathematics literature as the unit sphere in a Hilbert space, a space with well-known geometric properties. Our framework leverages this result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between the projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms by solving a quadratic programming problem with bilinear constraints, using the augmented Lagrangian technique with penalty terms. In this way, we can estimate a path of diffeomorphisms that, first, stays in the space of diffeomorphisms and, second, preserves shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-subsampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework.
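
    The geodesic step on the unit sphere that the framework relies on is ordinary spherical linear interpolation; a sketch in a finite-dimensional stand-in for the Hilbert-sphere setting:

      import numpy as np

      def slerp(p, q, t):
          # Great-circle (geodesic) interpolation between unit vectors.
          p, q = p / np.linalg.norm(p), q / np.linalg.norm(q)
          theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
          if theta < 1e-12:
              return p
          return (np.sin((1.0 - t) * theta) * p
                  + np.sin(t * theta) * q) / np.sin(theta)

      p, q = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
      mid = slerp(p, q, 0.5)
      print(mid, np.linalg.norm(mid))   # midpoint stays on the unit sphere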

  12. Admission Path, Family Structure and Outcomes in Ghana's Public Universities: Evidence from KNUST Students Enrolled in the Social Sciences

    ERIC Educational Resources Information Center

    Yusif, Hadrat; Ofori-Abebrese, Grace

    2017-01-01

    At the Kwame Nkrumah University of Science and Technology (KNUST) in Ghana, first year enrolment increased by 1466.81% from 708 in 1961/1962 to 11,093 in 2011. In the 2013/2014 academic year, the total student population was 45,897. There are now five main admission paths, comprising regular, mature, fee paying, less endowed, and protocol/staff…

  13. Distributed multiple path routing in complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Wang, San-Xiu; Wu, Ling-Wei; Mei, Pan; Yang, Xu-Hua; Wen, Guang-Hui

    2016-12-01

    Routing in complex transmission networks is an important problem that has garnered extensive research interest in recent years. In this paper, we propose a novel routing strategy called the distributed multiple path (DMP) routing strategy. For each of the origin-destination (O-D) node pairs in a given network, the DMP routing strategy computes and stores, in advance, multiple short paths that overlap little with one another. During the transmission stage, it rapidly selects from the pre-computed paths, according to real-time network transmission status information, an actual routing path with low transmission cost for each transmission task. Computer simulation results obtained for lattice, ER random, and scale-free networks indicate that the strategy can significantly improve the anti-congestion ability of transmission networks, as well as provide favorable routing robustness against partial network failures.
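
    Pre-computing a few short candidate paths per O-D pair can be sketched with networkx's shortest_simple_paths (a Yen-style enumeration); the lattice and the overlap measure below are illustrative, and the DMP step of selecting among candidates by real-time load is not shown.

      import itertools
      import networkx as nx

      G = nx.grid_2d_graph(6, 6)                  # toy lattice network
      src, dst = (0, 0), (5, 5)

      # A few shortest simple paths per O-D pair, enumerated shortest-first.
      candidates = list(itertools.islice(
          nx.shortest_simple_paths(G, src, dst), 4))

      def shared_edges(p, q):
          edges = lambda path: {frozenset(e) for e in zip(path, path[1:])}
          return len(edges(p) & edges(q))

      for p, q in itertools.combinations(candidates, 2):
          print(len(p) - 1, len(q) - 1, "shared edges:", shared_edges(p, q))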

  14. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods. PMID:25506389

  15. Path Expressions

    DTIC Science & Technology

    1975-06-01

    Carnegie-Mellon University, Computer Science Dept., Pittsburgh, PA 15213. ...programmer. Example 1. A communication between two processes is initiated by declaring a buffer which can hold a message whose interpretation is known... (in other words, the functions named in a path are automatically embedded in a critical region specific for that path). The computation of the next state in...

  16. Spectral determinants for twist field correlators

    NASA Astrophysics Data System (ADS)

    Belitsky, A. V.

    2018-04-01

    Twist fields were introduced a few decades ago as a quantum counterpart to classical kink configurations and disorder variables in low dimensional field theories. In recent years they received a new incarnation within the framework of geometric entropy and strong coupling limit of four-dimensional scattering amplitudes. In this paper, we study their two-point correlation functions in a free massless scalar theory, namely, twist-twist and twist-antitwist correlators. In spite of the simplicity of the model in question, the properties of the latter are far from being trivial. The problem is reduced, within the formalism of the path integral, to the study of spectral determinants on surfaces with conical points, which are then computed exactly making use of the zeta function regularization. We also provide an insight into twist correlators for a massive complex scalar by means of the Lifshitz-Krein trace formula.

  17. The Hidden Job Requirements for a Software Engineer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinovici, Maria C.; Kirkham, Harold; Glass, Kevin A.

    In a world increasingly operated by computers, where innovation depends on software, the software engineer's role is changing continuously and gaining new dimensions. In commercial software development as well as scientific research environments, the way software developers are perceived is changing, because they are more important to the business than ever before. Nowadays, their job requires skills extending beyond the regular job description posted by HR, and more is expected. To advance and thrive in their new roles, software engineers must embrace change and practice the themes of the new era (integration, collaboration and optimization). The challenges may be somewhat intimidating for freshly graduated software engineers. Through this paper the authors hope to set them on a path for success, by helping them relinquish their fear of the unknown.

  18. Distribution of shortest cycle lengths in random networks

    NASA Astrophysics Data System (ADS)

    Bonneau, Haggai; Hassid, Aviv; Biham, Ofer; Kühn, Reimer; Katzav, Eytan

    2017-12-01

    We present analytical results for the distribution of shortest cycle lengths (DSCL) in random networks. The approach is based on the relation between the DSCL and the distribution of shortest path lengths (DSPL). We apply this approach to configuration model networks, for which analytical results for the DSPL were obtained before. We first calculate the fraction of nodes in the network which reside on at least one cycle. Conditioning on being on a cycle, we provide the DSCL over ensembles of configuration model networks with degree distributions which follow a Poisson distribution (Erdős-Rényi network), degenerate distribution (random regular graph), and a power-law distribution (scale-free network). The mean and variance of the DSCL are calculated. The analytical results are found to be in very good agreement with the results of computer simulations.
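
    Conditioning on cycles can be implemented directly: the shortest cycle through a node closes through one of its edges, found by removing that edge and asking for the shortest surviving path. A sketch on an Erdős-Rényi graph (sizes and seed invented):

      import networkx as nx

      def shortest_cycle_through(G, v):
          # Drop each edge (v, u) in turn and ask for the shortest surviving
          # v-u path; +1 closes the cycle through the dropped edge.
          best = None
          for u in list(G.neighbors(v)):
              G.remove_edge(v, u)
              try:
                  alt = nx.shortest_path_length(G, v, u) + 1
                  best = alt if best is None else min(best, alt)
              except nx.NetworkXNoPath:
                  pass
              G.add_edge(v, u)
          return best

      G = nx.gnp_random_graph(500, 6.0 / 499.0, seed=2)   # mean degree ~ 6
      lengths = [c for v in G if (c := shortest_cycle_through(G, v)) is not None]
      print("fraction of nodes on a cycle:", len(lengths) / G.number_of_nodes())
      print("smallest cycle lengths seen :", sorted(set(lengths))[:5])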

  19. Instant-Form and Light-Front Quantization of Field Theories

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James

    2018-05-01

    In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences between the instant-form and light-front quantization at appropriate places.

  20. Graph Coarsening for Path Finding in Cybersecurity Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Johnson, John R.; Halappanavar, Mahantesh

    2013-01-01

    In the pass-the-hash attack, hackers repeatedly steal password hashes and move through a computer network with the goal of reaching a computer with high-level administrative privileges. In this paper we apply graph coarsening to network graphs for the purpose of detecting hackers using this attack or assessing the risk level of the network's current state. We repeatedly take graph minors, which preserve the existence of paths in the graph, and take powers of the adjacency matrix to count the paths. This allows us to detect the existence of paths as well as find paths that have high risk of being used by adversaries.
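
    The path-counting device mentioned above is just matrix powers: entry (i, j) of A^k counts length-k walks from i to j. A toy sketch with an invented five-host access graph:

      import numpy as np

      # Hypothetical host-access graph: edge i -> j means credentials on
      # host i unlock host j; host 4 is the high-privilege target.
      A = np.array([[0, 1, 1, 0, 0],
                    [0, 0, 0, 1, 0],
                    [0, 0, 0, 1, 0],
                    [0, 0, 0, 0, 1],
                    [0, 0, 0, 0, 0]])

      # Summing powers of A flags host pairs joined by a short escalation route.
      reach = sum(np.linalg.matrix_power(A, k) for k in range(1, 5))
      print(reach[0, 4], "walks of length <= 4 from host 0 to host 4")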

  1. Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.

    PubMed

    Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron

    2017-10-21

    The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
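
    The Mueller (Mueller-Brown) potential named above is a standard two-dimensional test surface; a sketch of the steepest-descent segment such methods recover, using the standard parameter set and an invented starting point near a saddle:

      import numpy as np

      # Standard Mueller-Brown parameters.
      A = np.array([-200.0, -100.0, -170.0, 15.0])
      a = np.array([-1.0, -1.0, -6.5, 0.7])
      b = np.array([0.0, 0.0, 11.0, 0.6])
      c = np.array([-10.0, -10.0, -6.5, 0.7])
      xc = np.array([1.0, 0.0, -0.5, -1.0])
      yc = np.array([0.0, 0.5, 1.5, 1.0])

      def grad(p):
          x, y = p
          e = A * np.exp(a * (x - xc) ** 2 + b * (x - xc) * (y - yc)
                         + c * (y - yc) ** 2)
          return np.array([np.sum(e * (2 * a * (x - xc) + b * (y - yc))),
                           np.sum(e * (b * (x - xc) + 2 * c * (y - yc)))])

      # Follow -grad from a point just off a saddle; the trajectory traces
      # the steepest-descent portion of the minimum energy path.
      p = np.array([0.3, 0.2])
      for _ in range(20000):
          p = p - 1e-5 * grad(p)
      print("descent terminates near a minimum:", p)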

  2. Feature Clustering for Accelerating Parallel Coordinate Descent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherrer, Chad; Tewari, Ambuj; Halappanavar, Mahantesh

    2012-12-06

    We demonstrate an approach for accelerating calculation of the regularization path for L1 sparse logistic regression problems. We show the benefit of feature clustering as a preconditioning step for parallel block-greedy coordinate descent algorithms.
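
    A sketch of the object being preconditioned: the L1-regularized logistic regression path, here traced crudely by sweeping scikit-learn's inverse penalty C on synthetic data (the feature-clustering preconditioner itself is not shown).

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                                 random_state=0)

      # The active (nonzero) feature set grows as the L1 penalty weakens,
      # tracing the regularization path.
      for C in [0.01, 0.05, 0.2, 1.0]:
          clf = LogisticRegression(penalty="l1", C=C, solver="liblinear").fit(X, y)
          print(f"C={C:<5} nonzero coefficients: {int(np.sum(clf.coef_ != 0))}")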

  3. Nonintrusive performance measurement of a gas turbine engine in real time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul P.; Claussen, Heiko

    Performance of a gas turbine engine is monitored by computing a mass flow rate through the engine. Acoustic time-of-flight measurements are taken between acoustic transmitters and receivers in the flow path of the engine. The measurements are processed to determine average speeds of sound and gas flow velocities along those lines-of-sound. A volumetric flow rate in the flow path is computed using the gas flow velocities together with a representation of the flow path geometry. A gas density in the flow path is computed using the speeds of sound and a measured static pressure. The mass flow rate is calculated from the gas density and the volumetric flow rate.
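
    The transit-time relations behind such measurements are standard for ultrasonic flow metering and may or may not match the patented processing in detail; the geometry and times below are invented.

      import math

      def sound_and_flow_speed(L, t_down, t_up, theta_deg):
          # Downstream sound rides the flow, upstream fights it:
          #   t_down = L / (c + v cos(theta)),  t_up = L / (c - v cos(theta)),
          # so the sum of inverse times gives c and the difference gives v.
          cos_th = math.cos(math.radians(theta_deg))
          c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)
          v = 0.5 * L * (1.0 / t_down - 1.0 / t_up) / cos_th
          return c, v

      # Invented 0.5 m line-of-sound at 45 degrees in hot gas.
      c, v = sound_and_flow_speed(0.5, t_down=6.488e-4, t_up=7.946e-4,
                                  theta_deg=45.0)
      print(f"speed of sound ~ {c:.0f} m/s, axial flow velocity ~ {v:.0f} m/s")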

  4. Diffractive paths for weak localization in quantum billiards

    NASA Astrophysics Data System (ADS)

    Březinová, Iva; Stampfer, Christoph; Wirtz, Ludger; Rotter, Stefan; Burgdörfer, Joachim

    2008-04-01

    We study the weak-localization effect in quantum transport through a clean ballistic cavity with regular classical dynamics. We address the question which paths account for the suppression of conductance through a system where disorder and chaos are absent. By exploiting both quantum and semiclassical methods, we unambiguously identify paths that are diffractively backscattered into the cavity (when approaching the lead mouths from the cavity interior) to play a key role. Diffractive scattering couples transmitted and reflected paths and is thus essential to reproduce the weak-localization peak in reflection and the corresponding antipeak in transmission. A comparison of semiclassical calculations featuring these diffractive paths yields good agreement with full quantum calculations and experimental data. Our theory provides system-specific predictions for the quantum regime of few open lead modes and can be expected to be relevant also for mixed as well as chaotic systems.

  5. Elastic strain relaxation in interfacial dislocation patterns: I. A parametric energy-based framework

    NASA Astrophysics Data System (ADS)

    Vattré, A.

    2017-08-01

    A parametric energy-based framework is developed to describe the elastic strain relaxation of interface dislocations. By means of the Stroh sextic formalism with a Fourier series technique, the proposed approach couples the classical anisotropic elasticity theory with surface/interface stress and elasticity properties in heterogeneous interface-dominated materials. For any semicoherent interface of interest, the strain energy landscape is computed using the persistent elastic fields produced by infinitely periodic hexagonal-shaped dislocation configurations with planar three-fold nodes. A finite element based procedure combined with the conjugate gradient and nudged elastic band methods is applied to determine the minimum-energy paths for which the pre-computed energy landscapes lead to elastically favorable dislocation reactions. Several applications to the Au/Cu heterosystems are given. The simple and limiting case of a single set of infinitely periodic dislocations is introduced to determine exact closed-form expressions for stresses. The second limiting case of the pure (010) Au/Cu heterophase interfaces containing two crossing sets of straight dislocations investigates the effects due to the non-classical boundary conditions on the stress distributions, including separate and appropriate constitutive relations at semicoherent interfaces and free surfaces. Using the quantized Frank-Bilby equation, it is shown that the elastic strain landscape exhibits intrinsic dislocation configurations for which the junction formation is energetically unfavorable. On the other hand, the mismatched (111) Au/Cu system gives rise to the existence of a minimum-energy path where the fully strain-relaxed equilibrium and non-regular intrinsic hexagonal-shaped dislocation rearrangement is accompanied by a significant removal of the short-range elastic energy.

  6. Harmonic Fourier beads method for studying rare events on rugged energy surfaces.

    PubMed

    Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L

    2006-11-07

    We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points (beads) to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in the gas phase.
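
    The Fourier representation of a path is easy to make concrete: a straight segment between the end points plus sine modes that vanish at both ends. The amplitudes are the degrees of freedom an HFB-style iteration would evolve; everything below is an illustrative stand-in, not the published algorithm.

      import numpy as np

      def fourier_path(A, B, amps, n_beads=32):
          # Straight segment from A to B plus sine modes that vanish at the
          # end points; `amps` holds the per-mode amplitude vectors.
          t = np.linspace(0.0, 1.0, n_beads)[:, None]
          path = (1.0 - t) * A + t * B
          for k, a_k in enumerate(amps, start=1):
              path = path + np.sin(k * np.pi * t) * a_k
          return path

      A, B = np.array([0.0, 0.0]), np.array([1.0, 0.0])
      amps = [np.array([0.0, 0.3]), np.array([0.0, -0.1])]   # two transverse modes
      beads = fourier_path(A, B, amps)
      print(beads[0], beads[-1])    # end points are (numerically) unchanged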

  7. Visual environment recognition for robot path planning using template matched filters

    NASA Astrophysics Data System (ADS)

    Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto

    2017-08-01

    A visual approach to environment recognition for robot navigation is proposed. This work includes a template-matched filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path among multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of the accuracy of environment recognition and the efficiency of path planning computation.
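
    The potential-field core of such planners (without the pseudo-bacterial evolution of the field parameters) can be sketched in a few lines; the gains, obstacle, and influence radius are invented, and symmetric scenes can trap plain gradient descent in local minima, which is the sort of weakness the evolutionary layer is meant to mitigate.

      import numpy as np

      goal = np.array([1.0, 1.0])
      obstacle, r_infl = np.array([0.5, 0.55]), 0.3

      def grad_potential(p, k_att=1.0, k_rep=0.005):
          # Quadratic attraction toward the goal plus a repulsion that is
          # active only inside the obstacle's influence radius.
          g = k_att * (p - goal)
          d = np.linalg.norm(p - obstacle)
          if d < r_infl:
              g += k_rep * (1.0 / r_infl - 1.0 / d) / d ** 3 * (p - obstacle)
          return g

      p = np.array([0.0, 0.0])
      for _ in range(2000):
          p = p - 0.01 * grad_potential(p)    # descend the combined field
      print("final position:", p)             # near the goal, skirting the disk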

  8. Sensitivity of rough differential equations: An approach through the Omega lemma

    NASA Astrophysics Data System (ADS)

    Coutin, Laure; Lejay, Antoine

    2018-03-01

    The Itô map gives the solution of a Rough Differential Equation, a generalization of an Ordinary Differential Equation driven by an irregular path, when existence and uniqueness hold. By studying how a path is transformed through the vector field which is integrated, we prove that the Itô map is Hölder or Lipschitz continuous with respect to all its parameters. This result unifies and weakens the hypotheses of the regularity results already established in the literature.

  9. Effect of non-classical current paths in networks of 1-dimensional wires

    NASA Astrophysics Data System (ADS)

    Echternach, P. M.; Mikhalchuk, A. G.; Bozler, H. M.; Gershenson, M. E.; Bogdanov, A. L.; Nilsson, B.

    1996-04-01

    At low temperatures, the quantum corrections to the resistance due to weak localization and electron-electron interaction are affected by the shape and topology of samples. We observed these effects in the resistance of 2D percolation networks made from 1D wires and in a series of long 1D wires with regularly spaced side branches. Branches outside the classical current path strongly reduce the quantum corrections to the resistance and these reductions become a measure of the quantum lengths.

  10. A path-integral approach to the problem of time

    NASA Astrophysics Data System (ADS)

    Amaral, M. M.; Bojowald, Martin

    2018-01-01

    Quantum transition amplitudes are formulated for model systems with local internal time, using intuition from path integrals. The amplitudes are shown to be more regular near a turning point of internal time than could be expected based on existing canonical treatments. In particular, a successful transition through a turning point is provided in the model systems, together with a new definition of such a transition in general terms. Some of the results rely on a fruitful relation between the problem of time and general Gribov problems.

  11. 29 CFR 548.100 - Introductory statement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... simplify bookkeeping and computation of overtime pay. 1 The regular rate is the average hourly earnings of... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Introduction § 548.100... requirements of computing overtime pay at the regular rate, 1 and to allow, under specific conditions, the use...

  12. 20 CFR 226.33 - Spouse regular annuity rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Spouse regular annuity rate. 226.33 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.33 Spouse regular annuity rate. The final tier I and tier II rates, from §§ 226.30 and 226.32, are...

  13. Simulating Fragmentation and Fluid-Induced Fracture in Disordered Media Using Random Finite-Element Meshes

    DOE PAGES

    Bishop, Joseph E.; Martinez, Mario J.; Newell, Pania

    2016-11-08

    Fracture and fragmentation are extremely nonlinear multiscale processes in which microscale damage mechanisms emerge at the macroscale as new fracture surfaces. Numerous numerical methods have been developed for simulating fracture initiation, propagation, and coalescence. In this paper, we present a computational approach for modeling pervasive fracture in quasi-brittle materials based on random close-packed Voronoi tessellations. Each Voronoi cell is formulated as a polyhedral finite element containing an arbitrary number of vertices and faces. Fracture surfaces are allowed to nucleate only at the intercell faces. Cohesive softening tractions are applied to new fracture surfaces in order to model the energy dissipated during fracture growth. The randomly seeded Voronoi cells provide a regularized discrete random network for representing fracture surfaces. The potential crack paths within the random network are viewed as instances of realizable crack paths within the continuum material. Mesh convergence of fracture simulations is viewed in a weak, or distributional, sense. The explicit facet representation of fractures within this approach is advantageous for modeling contact on new fracture surfaces and fluid flow within the evolving fracture network. Finally, applications of interest include fracture and fragmentation in quasi-brittle materials and geomechanical applications such as hydraulic fracturing, engineered geothermal systems, compressed-air energy storage, and carbon sequestration.
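
    The geometric setup can be roughed out with scipy (a sketch under simplifying assumptions: uniform rather than close-packed seeding, and no finite-element machinery); the inter-cell Voronoi ridges play the role of the candidate fracture facets:

        import numpy as np
        from scipy.spatial import Voronoi

        rng = np.random.default_rng(0)
        seeds = rng.random((200, 3))               # uniform seeds (not close-packed)
        vor = Voronoi(seeds)

        # Each ridge separates two cells; these inter-cell faces are the only
        # places where fracture surfaces would be allowed to nucleate.
        candidate_facets = [
            (tuple(pair), verts)
            for pair, verts in zip(vor.ridge_points, vor.ridge_vertices)
            if -1 not in verts                     # keep bounded facets only
        ]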

  14. A Tissue Relevance and Meshing Method for Computing Patient-Specific Anatomical Models in Endoscopic Sinus Surgery Simulation

    NASA Astrophysics Data System (ADS)

    Audette, M. A.; Hertel, I.; Burgert, O.; Strauss, G.

    This paper presents on-going work on a method for determining which subvolumes of a patient-specific tissue map, extracted from CT data of the head, are relevant to simulating endoscopic sinus surgery of that individual, and for decomposing these relevant tissues into triangles and tetrahedra whose mesh size is well controlled. The overall goal is to limit the complexity of the real-time biomechanical interaction while ensuring the clinical relevance of the simulation. Relevant tissues are determined as the union of the pathology present in the patient, of critical tissues deemed to be near the intended surgical path or pathology, and of bone and soft tissue near the intended path, pathology or critical tissues. The processing of tissues, prior to meshing, is based on the Fast Marching method applied under various guises, in a conditional manner that is related to tissue classes. The meshing is based on an adaptation of a meshing method of ours, which combines the Marching Tetrahedra method and the discrete Simplex mesh surface model to produce a topologically faithful surface mesh with well controlled edge and face size as a first stage, and Almost-regular Tetrahedralization of the same prescribed mesh size as a last stage.
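
    The Fast Marching ingredient can be illustrated with the scikit-fmm package (an assumed stand-in; the paper does not name its implementation): a distance map from the intended surgical path identifies the nearby tissue to retain at fine resolution.

        import numpy as np
        import skfmm

        # Hypothetical 3D label volume: 1 on the intended surgical path, 0 elsewhere.
        labels = np.zeros((64, 64, 64))
        labels[32, 32, 10:54] = 1

        # Signed field whose zero contour sits on the path; skfmm.distance
        # returns the grid distance to that contour.
        phi = np.where(labels > 0, -1.0, 1.0)
        dist = skfmm.distance(phi, dx=1.0)

        relevant = dist < 5.0    # tissue within 5 voxels of the path (illustrative)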

  16. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
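
    One common way such a multiplicative strategy can be realized is sketched below in numpy (an illustration only; the paper's exact functional may differ): the effective Tikhonov weight is rebuilt at each iteration from the current residual-to-penalty ratio, so no parameter is fixed beforehand.

        import numpy as np

        def multiplicative_reg(G, y, n_iter=20, eps=1e-12):
            """Roughly minimize ||G f - y||^2 * ||f||^2 by alternating:
            alpha_k = residual / penalty, then a Tikhonov solve with alpha_k."""
            f = np.linalg.lstsq(G, y, rcond=None)[0]
            for _ in range(n_iter):
                alpha = np.sum((G @ f - y) ** 2) / (np.sum(f ** 2) + eps)
                f = np.linalg.solve(G.T @ G + alpha * np.eye(G.shape[1]), G.T @ y)
            return f

        # usage with a random ill-conditioned system
        rng = np.random.default_rng(1)
        G = rng.standard_normal((40, 30)) @ np.diag(1.0 / np.arange(1, 31))
        f_true = rng.standard_normal(30)
        y = G @ f_true + 0.01 * rng.standard_normal(40)
        f_rec = multiplicative_reg(G, y)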

  17. Development of Anthropometric Analogous Headforms. Phase 1.

    DTIC Science & Technology

    1994-10-31

    shown in figure 5. This surface mesh can then be transformed into polygon faces that are able to be rendered by the AutoCAD rendering tools. Rendering of... computer-generated surfaces. The material removal techniques require the programming of the tool path of the cutter and in some cases requires specialized... tooling. Tool path programs are available to transfer the computer-generated surface into actual paths of the cutting tool. In cases where the

  18. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend adding a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  19. Tomographic reconstruction of an aerosol plume using passive multiangle observations from the MISR satellite instrument

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Davis, Anthony B.; Diner, David J.

    2016-12-01

    We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.
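
    The reconstruction step can be conveyed with a bare-bones algebraic reconstruction (Kaczmarz) sketch (a stand-in for the paper's unspecified tomographic algorithm; the ray matrix here is random rather than MISR's camera geometry):

        import numpy as np

        def art(A, b, n_sweeps=50, relax=0.5):
            """Kaczmarz / ART: cycle through rays, projecting the current image
            onto the hyperplane defined by each ray-sum equation."""
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    a = A[i]
                    denom = a @ a
                    if denom > 0:
                        x += relax * (b[i] - a @ x) / denom * a
            return np.clip(x, 0.0, None)   # extinction is non-negative

        # toy problem standing in for the camera-ray geometry
        rng = np.random.default_rng(2)
        A = rng.random((90, 64)) * (rng.random((90, 64)) < 0.2)
        x_true = rng.random(64)
        x_rec = art(A, A @ x_true)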

  20. Computer Game Play as an Imaginary Stage for Reading: Implicit Spatial Effects of Computer Games Embedded in Hard Copy Books

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon

    2012-01-01

    This study compared books with embedded computer games (via pentop computers with microdot paper and audio feedback) with regular books with maps, in terms of fifth graders' comprehension and retention of spatial details from stories. One group read a story in hard copy with embedded computer games, the other group read it in regular book format…

  1. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the search space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  2. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also demonstrate the efficiency of our method for evaluating inbreeding coefficients, as compared to previous methods, through experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
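
    Although the compact encoding is the paper's own contribution, the genealogical quantity it accelerates can be illustrated with the classic recursive kinship computation (a standard textbook method, not the authors' path-based algorithm); the inbreeding coefficient of an individual is the kinship of its parents:

        from functools import lru_cache

        # Hypothetical pedigree: child -> (father, mother), None for founders.
        # Founders are listed before their descendants.
        parents = {
            "A": (None, None), "B": (None, None),
            "C": ("A", "B"), "D": ("A", "B"),
            "E": ("C", "D"),              # child of full siblings
        }
        order = {name: i for i, name in enumerate(parents)}

        @lru_cache(maxsize=None)
        def kinship(x, y):
            """Probability that random alleles drawn from x and y are IBD."""
            if x is None or y is None:
                return 0.0
            if x == y:
                f, m = parents[x]
                return 0.5 * (1.0 + kinship(f, m))
            if order[x] < order[y]:       # recurse through the later-born individual
                x, y = y, x
            f, m = parents[x]
            return 0.5 * (kinship(f, y) + kinship(m, y))

        inbreeding_E = kinship(*parents["E"])   # kinship of E's parents = 0.25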

  3. Transient Heat Conduction Simulation around Microprocessor Die

    NASA Astrophysics Data System (ADS)

    Nishi, Koji

    This paper explains the fundamental formulas for calculating the power consumption of CMOS (Complementary Metal-Oxide-Semiconductor) devices, including their voltage and temperature dependence, and then introduces an equation for estimating the power consumption of a notebook PC (Personal Computer) microprocessor. The equation is applied to a heat conduction simulation with a simplified thermal model and evaluated with sub-millisecond time steps. The microprocessor has two major heat conduction paths: one from the top of the silicon die via the thermal solution, and the other from the package substrate and pins via the PGA (Pin Grid Array) socket. Even though the former path dominates heat conduction, the latter (from the package substrate and pins) plays an important role in transient heat conduction behavior. This paper therefore focuses on the path from the package substrate and pins and investigates a more accurate method of estimating the heat conduction paths of the microprocessor. The cooling performance model of the heatsink fan is also a key point in assuring results of practical accuracy, although a finer model requires more computational resources and hence longer computation time. The paper discusses a model that minimizes the computational workload while keeping the result practically accurate.
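
    The standard dynamic-plus-leakage decomposition alluded to above can be written down directly (textbook formulas with illustrative constants, not the paper's calibrated model):

        import math

        def cmos_power(c_eff, v_dd, f_clk, activity, i_leak0,
                       temp_c, temp_ref_c=25.0, k=0.03):
            """Dynamic power: alpha * C * V^2 * f.
            Leakage: a reference leakage current scaled exponentially with
            temperature (illustrative coefficient k), times supply voltage."""
            p_dyn = activity * c_eff * v_dd ** 2 * f_clk
            i_leak = i_leak0 * math.exp(k * (temp_c - temp_ref_c))
            return p_dyn + i_leak * v_dd

        # e.g. 1 nF effective capacitance, 1.2 V, 1 GHz, 20% activity,
        # 0.5 A leakage at the 25 C reference, die at 80 C
        print(cmos_power(1e-9, 1.2, 1e9, 0.2, 0.5, temp_c=80.0))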

  4. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is presented in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied under different computational constraints to path-based problems, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and its derivation can be extended to general shortest-path problems.

  5. User's guide to Monte Carlo methods for evaluating path integrals

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
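
    A compact version of the kind of setup the guide describes (standard lattice conventions; all parameters illustrative): Metropolis updates of a discretized Euclidean path for the harmonic oscillator, from which <x^2> can be estimated.

        import numpy as np

        # Euclidean lattice action for the harmonic oscillator:
        # S = sum_i [ m (x_{i+1}-x_i)^2 / (2 a) + a m w^2 x_i^2 / 2 ], periodic b.c.
        m, w, a, N = 1.0, 1.0, 0.1, 200
        rng = np.random.default_rng(3)
        x = np.zeros(N)

        def local_action(x, i):
            ip, im = (i + 1) % N, (i - 1) % N
            kin = m * ((x[ip] - x[i]) ** 2 + (x[i] - x[im]) ** 2) / (2 * a)
            return kin + 0.5 * a * m * w ** 2 * x[i] ** 2

        samples = []
        for sweep in range(5000):
            for i in range(N):
                old, s_old = x[i], local_action(x, i)
                x[i] = old + rng.uniform(-0.5, 0.5)
                if rng.random() >= np.exp(min(0.0, s_old - local_action(x, i))):
                    x[i] = old                      # reject the trial move
            if sweep > 500 and sweep % 10 == 0:     # thermalization, then thinning
                samples.append(np.mean(x ** 2))

        print(np.mean(samples))   # compare with the continuum value <x^2> = 1/(2 m w)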

  6. Path Integral Computation of Quantum Free Energy Differences Due to Alchemical Transformations Involving Mass and Potential.

    PubMed

    Pérez, Alejandro; von Lilienfeld, O Anatole

    2011-08-09

    Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.

  7. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.

  8. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method, which uses Lanczos bidiagonalization, is known to be computationally efficient for performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method thus overcomes the inherent limitation of the computationally expensive MRM-based automated choice of the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
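
    A loose sketch of the idea with scipy (assumed stand-ins: scipy's LSQR damping plays the role of the regularization parameter, Nelder-Mead is the simplex search, and the discrepancy with an assumed noise level is the selection criterion; this is not the authors' exact pipeline):

        import numpy as np
        from scipy.sparse.linalg import lsqr
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        J = rng.standard_normal((120, 80)) @ np.diag(1.0 / np.arange(1, 81))  # ill-posed
        x_true = rng.standard_normal(80)
        y = J @ x_true + 0.01 * rng.standard_normal(120)

        def criterion(log_alpha):
            """Score a damping value by the gap between the residual norm
            and the (assumed known) noise level."""
            x = lsqr(J, y, damp=10.0 ** log_alpha)[0]
            return abs(np.linalg.norm(J @ x - y) - 0.01 * np.sqrt(len(y)))

        res = minimize(lambda p: criterion(p[0]), x0=[-3.0], method="Nelder-Mead")
        x_rec = lsqr(J, y, damp=10.0 ** res.x[0])[0]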

  9. Development of FB-MultiPier dynamic vessel-collision analysis models, phase 2.

    DOT National Transportation Integrated Search

    2014-07-01

    Massive waterway vessels such as barges regularly transit navigable waterways in the U.S. During passages that fall within the vicinity of bridge structures, vessels may (under extreme circumstances) deviate from the intended vessel transit path. A...

  10. Using Social Media in Exercises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaldson, Jeff

    This presentation discusses the use of social media as a tool during the full-scale exercise Tremor-14 in Las Vegas, and examines Lessons Learned as a path forward in using social media to disseminate Emergency Public Information (EPI) on a regular basis.

  11. Investigation of progressive failure robustness and alternate load paths for damage tolerant structures

    NASA Astrophysics Data System (ADS)

    Marhadi, Kun Saptohartyadi

    Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant. The alternate load paths must have a required minimum load capability. Robustness analysis of damage tolerant optimum designs indicates that designs are tailored to specified damage. A design optimized under one damage specification can be sensitive to other damage scenarios not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.

  12. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by employing bandwidth shells at areas of overutilization

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-04-27

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. An automated routing strategy routes packets through one or more intermediate nodes of the network to reach a final destination. The default routing strategy is altered responsive to detection of overutilization of a particular path of one or more links, and at least some traffic is re-routed by distributing the traffic among multiple paths (which may include the default path). An alternative path may require a greater number of link traversals to reach the destination node.

  13. Laser production of articles from powders

    DOEpatents

    Lewis, Gary K.; Milewski, John O.; Cremers, David A.; Nemec, Ronald B.; Barbe, Michael R.

    1998-01-01

    Method and apparatus for forming articles from materials in particulate form in which the materials are melted by a laser beam and deposited at points along a tool path to form an article of the desired shape and dimensions. Preferably the tool path and other parameters of the deposition process are established using computer-aided design and manufacturing techniques. A controller comprised of a digital computer directs movement of a deposition zone along the tool path and provides control signals to adjust apparatus functions, such as the speed at which a deposition head which delivers the laser beam and powder to the deposition zone moves along the tool path.

  14. Laser production of articles from powders

    DOEpatents

    Lewis, G.K.; Milewski, J.O.; Cremers, D.A.; Nemec, R.B.; Barbe, M.R.

    1998-11-17

    Method and apparatus for forming articles from materials in particulate form in which the materials are melted by a laser beam and deposited at points along a tool path to form an article of the desired shape and dimensions. Preferably the tool path and other parameters of the deposition process are established using computer-aided design and manufacturing techniques. A controller comprised of a digital computer directs movement of a deposition zone along the tool path and provides control signals to adjust apparatus functions, such as the speed at which a deposition head which delivers the laser beam and powder to the deposition zone moves along the tool path. 20 figs.

  15. On edge-aware path-based color spatial sampling for Retinex: from Termite Retinex to Light Energy-driven Termite Retinex

    NASA Astrophysics Data System (ADS)

    Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela

    2017-05-01

    Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We revise two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). As the original Retinex implementation, TR and ETR scan the neighborhood of any image pixel by paths and rescale their chromatic intensities by intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique, content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been shown to be particularly effective: its paths are local minima of an energy functional, designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR hardly practicable, here we present a light version of it, named Light Energy-driven TR, obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.

  16. The elastic ratio: introducing curvature into ratio-based image segmentation.

    PubMed

    Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel

    2011-09-01

    We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa so as to allow penalty functions that take into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio [Formula: see text] that depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.
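
    The minimal-ratio-cycle subproblem can be sketched with the classic binary-search reduction (a standard technique in the same family as, but not identical to, the authors' solver): a cycle with ratio below lambda exists exactly when the graph with edge weights cost - lambda * length contains a negative cycle.

        import networkx as nx

        def min_ratio(G, lo=0.0, hi=10.0, tol=1e-6):
            """Binary search for the minimum of sum(cost)/sum(length) over cycles.
            Edges need 'cost' and 'length' attributes, with positive lengths."""
            while hi - lo > tol:
                lam = 0.5 * (lo + hi)
                for u, v, d in G.edges(data=True):
                    d["w"] = d["cost"] - lam * d["length"]
                if nx.negative_edge_cycle(G, weight="w"):
                    hi = lam      # a cycle with ratio < lam exists
                else:
                    lo = lam
            return hi

        # toy graph standing in for the line-segment graph of the paper
        G = nx.DiGraph()
        G.add_edge(0, 1, cost=1.0, length=2.0)
        G.add_edge(1, 2, cost=2.0, length=1.0)
        G.add_edge(2, 0, cost=1.0, length=2.0)
        print(min_ratio(G))   # about 0.8 = (1+2+1)/(2+1+2)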

  17. SIMULATION STUDY FOR GASEOUS FLUXES FROM AN AREA SOURCE USING COMPUTED TOMOGRAPHY AND OPTICAL REMOTE SENSING

    EPA Science Inventory

    The paper presents a new approach to quantifying emissions from fugitive gaseous air pollution sources. Computed tomography (CT) and path-integrated optical remote sensing (PI-ORS) concentration data are combined in a new field beam geometry. Path-integrated concentrations are ...

  18. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
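
    The structured-optimization idea can be conveyed by a generic l1 regularized dual averaging step in numpy (a schematic Xiao-style RDA on a toy least-squares problem, not the wave-equation USCT solver): gradients of the data term are averaged, and the update is a closed-form soft-threshold, so the penalty is never differentiated.

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.standard_normal((100, 50))
        x_true = np.zeros(50); x_true[:5] = 1.0
        b = A @ x_true + 0.01 * rng.standard_normal(100)

        lam, gamma = 0.1, 50.0          # l1 weight and RDA step scale (illustrative)
        x = np.zeros(50)
        g_bar = np.zeros(50)
        for t in range(1, 501):
            g = A.T @ (A @ x - b) / len(b)     # gradient of the data term only
            g_bar += (g - g_bar) / t           # running average of gradients
            # closed-form proximal update: soft-threshold the averaged gradient
            x = -(np.sqrt(t) / gamma) * np.sign(g_bar) \
                * np.maximum(np.abs(g_bar) - lam, 0.0)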

  19. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of discretized operator matrix, evaluation of the double surface integrals is reduced to calculate the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.

  20. Research Productivity: Some Paths Less Travelled

    ERIC Educational Resources Information Center

    Martin, Brian

    2009-01-01

    Conventional approaches for fostering research productivity, such as recruitment and incentives, do relatively little to develop latent capacities in researchers. Six promising unorthodox approaches are the promotion of regular writing, tools for creativity, good luck, happiness, good health and crowd wisdom. These options challenge conventional…

  1. Semianalytical computation of path lines for finite-difference models

    USGS Publications Warehouse

    Pollock, D.W.

    1988-01-01

    A semianalytical particle tracking method was developed for use with velocities generated from block-centered finite-difference ground-water flow models. Based on the assumption that each directional velocity component varies linearly within a grid cell in its own coordinate directions, the method allows an analytical expression to be obtained describing the flow path within an individual grid cell. Given the initial position of a particle anywhere in a cell, the coordinates of any other point along its path line within the cell, and the time of travel between them, can be computed directly. For steady-state systems, the exit point for a particle entering a cell at any arbitrary location can be computed in a single step. By following the particle as it moves from cell to cell, this method can be used to trace the path of a particle through any multidimensional flow field generated from a block-centered finite-difference flow model. -Author
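
    A one-dimensional slice of the scheme (a simplified sketch of the semianalytical idea, not the full multidimensional implementation): with velocity varying linearly across the cell, the exit time follows from integrating dx/dt = v1 + A(x - x1) in closed form.

        import math

        def exit_time_1d(x0, x1, x2, v1, v2):
            """Time for a particle at x0 to reach the cell face at x2, given
            face velocities v1 at x1 and v2 at x2 (both positive, flow in +x).
            Velocity varies linearly inside the cell."""
            A = (v2 - v1) / (x2 - x1)            # velocity gradient in the cell
            v0 = v1 + A * (x0 - x1)              # velocity at the particle
            if abs(A) < 1e-14:
                return (x2 - x0) / v0            # uniform-velocity limit
            return math.log(v2 / v0) / A         # closed-form travel time

        # particle starting mid-cell in an accelerating flow
        print(exit_time_1d(x0=0.5, x1=0.0, x2=1.0, v1=1.0, v2=2.0))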

  2. Evaluation of Computer Based Testing in lieu of Regular Examinations in Computer Literacy

    NASA Astrophysics Data System (ADS)

    Murayama, Koichi

    Because computer-based testing (CBT) has many advantages over the conventional paper-and-pencil testing (PPT) examination method, CBT has begun to be used in various situations in Japan, such as in qualifying examinations and in the TOEFL. This paper describes the usefulness and the problems of CBT applied to a regular college examination. The regular computer literacy examinations for first-year students were held using CBT, and the results were analyzed. Responses to a questionnaire indicated that many students accepted CBT with no unpleasantness and considered CBT a positive factor that improved their motivation to study. CBT also decreased the work of faculty in terms of marking tests and reducing data.

  3. Renormalized stress-energy tensor for stationary black holes

    NASA Astrophysics Data System (ADS)

    Levi, Adam

    2017-01-01

    We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t-splitting variant of the method, which was first presented for ⟨ϕ²⟩_ren, to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example of regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on a Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.

  4. From Constructive Field Theory to Fractional Stochastic Calculus. (II) Constructive Proof of Convergence for the Lévy Area of Fractional Brownian Motion with Hurst Index α ∈ (1/8, 1/4)

    NASA Astrophysics Data System (ADS)

    Magnen, Jacques; Unterberger, Jérémie

    2012-03-01

    Let B = (B_1(t), ..., B_d(t)) be a d-dimensional fractional Brownian motion with Hurst index α < 1/4, or more generally a Gaussian process whose paths have the same local regularity. Defining properly the iterated integrals of B is a difficult task because of the low Hölder regularity index of its paths. Yet rough path theory shows it is the key to the construction of a stochastic calculus with respect to B, or to solving differential equations driven by B. We intend to show in a series of papers how to desingularize iterated integrals by a weak, singular non-Gaussian perturbation of the Gaussian measure defined by a limit in law procedure. Convergence is proved by using "standard" tools of constructive field theory, in particular cluster expansions and renormalization. These powerful tools allow optimal estimates and call for an extension of Gaussian tools such as, for instance, the Malliavin calculus. After a first introductory paper [MagUnt1], this one concentrates on the details of the constructive proof of convergence for second-order iterated integrals, also known as the Lévy area.

  5. The semantic distance task: Quantifying semantic distance with semantic network path length.

    PubMed

    Kenett, Yoed N; Levi, Effi; Anaki, David; Faust, Miriam

    2017-09-01

    Semantic distance is a determining factor in cognitive processes, such as semantic priming, operating upon semantic memory. The main computational approach to compute semantic distance is through latent semantic analysis (LSA). However, objections have been raised against this approach, mainly for its failure at predicting semantic priming. We propose a novel approach to computing semantic distance, based on network science methodology. Path length in a semantic network represents the number of steps needed to traverse from one word in the network to another. We examine whether path length can be used as a measure of semantic distance by investigating how path length affects performance in a semantic relatedness judgment task and recall from memory. Our results show a differential effect on performance: for word pairs separated by up to 4 steps, participants exhibit an increase in reaction time (RT) and a decrease in the percentage of word pairs judged as related. From 4 steps onward, participants exhibit a significant decrease in RT and the word pairs are dominantly judged as unrelated. Furthermore, we show that as the path length between word pairs increases, success in free and cued recall decreases. Finally, we demonstrate how our measure outperforms computational methods measuring semantic distance (LSA and positive pointwise mutual information) in predicting participants' RT and subjective judgments of semantic strength. Thus, we provide a computational alternative for computing semantic distance. Furthermore, this approach addresses key issues in cognitive theory, namely the breadth of the spreading activation process and the effect of semantic distance on memory retrieval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
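
    The measure itself is straightforward to reproduce on a toy network (illustrative word pairs; the paper uses a large empirical association network):

        import networkx as nx

        # Toy undirected semantic network; edges join associatively related words.
        G = nx.Graph()
        G.add_edges_from([
            ("dog", "cat"), ("cat", "mouse"), ("mouse", "cheese"),
            ("cheese", "wine"), ("wine", "glass"),
        ])

        # Path length = number of steps between two words in the network.
        print(nx.shortest_path_length(G, "dog", "cheese"))   # 3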

  6. A Numerical Model of Exchange Chromatography Through 3D Lattice Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher; Robinson, David B.

    Rapid progress in the development of additive manufacturing technologies is opening new opportunities to fabricate structures that control mass transport in three dimensions across a broad range of length scales. We describe a structure that can be fabricated by newly available commercial 3D printers. It contains an array of regular three-dimensional flow paths that are in intimate contact with a solid phase, and thoroughly shuffle material among the paths. We implement a chemically reacting flow model to study its behavior as an exchange chromatography column, and compare it to an array of one-dimensional flow paths that resemble more traditional honeycomb monoliths. A reaction front moves through the columns and then elutes. Here, the front is sharper at all flow rates for the structure with three-dimensional flow paths, and this structure is more robust to channel width defects than the one-dimensional array.

  8. Solar energy incident at the receiver of a solar tower plant, derived from remote sensing: Computation of both DNI and slant path transmittance

    NASA Astrophysics Data System (ADS)

    Elias, Thierry; Ramon, Didier; Garnero, Marie-Agnès; Dubus, Laurent; Bourdil, Charles

    2017-06-01

    By scattering and absorbing solar radiation, aerosols generate production losses in solar plants. Due to the specific design of solar tower plants, solar radiation is attenuated not only in the atmospheric column but also in the slant path between the heliostats and the receiver. Broadband attenuation by aerosols is estimated in both the column and the slant path for Ouarzazate, Morocco, using spectral measurements of aerosol optical thickness (AOT) collected by AERONET. The proportion of AOT below the tower's height is computed assuming a single uniform aerosol layer of height equal to the boundary layer height computed by ECMWF for the Operational Analysis. The monthly average of the broadband attenuation by aerosols in the slant path was 6.9±3.0% in August 2012 at Ouarzazate, for 1-km distance between the heliostat and the receiver. The slant path attenuation should be added to almost 40% attenuation along the atmospheric column, with aerosols in an approximate 4.7-km aerosol layer. Also, around 1.5% attenuation is caused by Rayleigh and water vapour in the slant path. The monochromatic-broadband extrapolation is validated by comparing computed and observed direct normal irradiance (DNI). DNI observed around noon varied from more than 1000 W/m2 to around 400 W/m2 at Ouarzazate in 2012 because of desert dust plumes transported from North African desert areas.

  9. When helping helps: exploring health benefits of cancer survivors participating in for-cause physical activity events.

    PubMed

    Umstattd Meyer, M Renée; Meyer, Andrew R; Wu, Cindy; Bernhart, John

    2018-05-29

    Over 15.5 million Americans live with cancer and 5-year survival rates have risen to 69%. Evidence supports important health benefits of regular physical activity for cancer survivors, including increased strength and quality of life, and reduced fatigue, recurrence, and mortality. However, physical activity participation among cancer survivors remains low. Cancer organizations provide various resources and support for cancer survivors, including emotional, instrumental, informational, and appraisal support. Many cancer organizations, like the LIVESTRONG Foundation, support the cancer community by sponsoring and hosting for-cause physical activity events, providing opportunities for anyone (including cancer survivors) to "help"/support those living with cancer. The concept of helping others has been positively related with wellbeing, physical activity, and multiple health behaviors for those helping. However, the role of helping others has not been examined in the context of being physically active to help others or its relationship with overall physical activity and quality of life among those helping. Therefore, we developed a path model to examine relationships between cancer survivors' (1) desire to help others with cancer, (2) physically active LIVESTRONG participation to help others, (3) regular physical activity engagement, and (4) quality of life. In 2010, 3257 cancer survivors responded to an online survey sent to all people involved with the LIVESTRONG organization at any level. The hypothesized path model was tested using path analysis (Mplus 8). After list-wise deletion of missing responses, our final sample size was 3122 (61.8% female, mean age: 48.2 years [SD = 12.7]). Results indicated that the model yielded perfect fit indexes. Controlling for age, sex, income, and survivorship length, desire to help was positively related with physically active LIVESTRONG participation (β = .11, p < .001), which was positively related with regular physical activity (β = .30, p < .001), and regular physical activity was positively related with quality of life (β = .194, p < .001). Results suggest that cancer survivors can benefit from participating in for-cause physical activity events, including more regular physical activity. Researchers need to further investigate the role of helping others when examining health behaviors and outcomes, and cancer organizations should continue encouraging cancer survivors to help others by participating in physical activity events.

  10. Brain-computer interface control along instructed paths

    NASA Astrophysics Data System (ADS)

    Sadtler, P. T.; Ryu, S. I.; Tyler-Kabara, E. C.; Yu, B. M.; Batista, A. P.

    2015-02-01

    Objective. Brain-computer interfaces (BCIs) are being developed to assist paralyzed people and amputees by translating neural activity into movements of a computer cursor or prosthetic limb. Here we introduce a novel BCI task paradigm, intended to help accelerate improvements to BCI systems. Through this task, we can push the performance limits of BCI systems, we can quantify more accurately how well a BCI system captures the user’s intent, and we can increase the richness of the BCI movement repertoire. Approach. We have implemented an instructed path task, wherein the user must drive a cursor along a visible path. The instructed path task provides a versatile framework to increase the difficulty of the task and thereby push the limits of performance. Relative to traditional point-to-point tasks, the instructed path task allows more thorough analysis of decoding performance and greater richness of movement kinematics. Main results. We demonstrate that monkeys are able to perform the instructed path task in a closed-loop BCI setting. We further investigate how the performance under BCI control compares to native arm control, whether users can decrease their movement variability in the face of a more demanding task, and how the kinematic richness is enhanced in this task. Significance. The use of the instructed path task has the potential to accelerate the development of BCI systems and their clinical translation.

  11. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  12. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exist. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
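
    A bare-bones minimum action method on a gradient toy system shows the numerical core (a sketch under stated assumptions: Freidlin-Wentzell action, one-dimensional double-well drift, fixed endpoints; the quasi-geostrophic setting of the paper is far richer):

        import numpy as np
        from scipy.optimize import minimize

        # Drift of a 1D double-well system: b(x) = -V'(x), V(x) = (x^2 - 1)^2 / 4
        def drift(x):
            return -x * (x ** 2 - 1)

        N, T = 50, 10.0
        dt = T / N
        a, b_end = -1.0, 1.0            # transition between the two attractors

        def action(interior):
            """Discretized Freidlin-Wentzell action S = (1/4) sum |xdot - b(x)|^2 dt."""
            x = np.concatenate(([a], interior, [b_end]))
            xdot = np.diff(x) / dt
            mid = 0.5 * (x[:-1] + x[1:])
            return 0.25 * np.sum((xdot - drift(mid)) ** 2) * dt

        x0 = np.linspace(a, b_end, N + 1)[1:-1]        # straight-line initial path
        res = minimize(action, x0, method="L-BFGS-B")
        path = np.concatenate(([a], res.x, [b_end]))   # most probable transition path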

  13. SPILC: An expert student advisor

    NASA Technical Reports Server (NTRS)

    Read, D. R.

    1990-01-01

    The Lamar University Computer Science Department serves about 350 undergraduate C.S. majors, and 70 graduate majors. B.S. degrees are offered in Computer Science and Computer and Information Science, and an M.S. degree is offered in Computer Science. In addition, the Computer Science Department plays a strong service role, offering approximately sixteen service course sections per long semester. The department has eight regular full-time faculty members, including the Department Chairman and the Undergraduate Advisor, and from three to seven part-time faculty members. Due to the small number of regular faculty members and the resulting very heavy teaching loads, undergraduate advising has become a difficult problem for the department. There is a one week early registration period and a three-day regular registration period once each semester. The Undergraduate Advisor's regular teaching load of two classes, 6 - 8 semester hours, per semester, together with the large number of majors and small number of regular faculty, cause long queues and short tempers during these advising periods. The situation is aggravated by the fact that entering freshmen are rarely accompanied by adequate documentation containing the facts necessary for proper counselling. There has been no good method of obtaining necessary facts and documenting both the information provided by the student and the resulting advice offered by the counsellors.

  14. A Study of Moisture Induced Material Loss of Hot Mix Asphalt (HMA)

    DOT National Transportation Integrated Search

    2017-10-31

    Maine Department of Transportation has noticed the partial or complete loss of material within 2-3 years of construction in the traffic wheel path in the presence of moisture in few of their mixes. Regularly used moisture susceptibility tests are una...

  15. Path scanning for the detection of anomalous subgraphs and use of DNS requests and host agents for anomaly/change detection and network situational awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neil, Joshua Charles; Fisk, Michael Edward; Brugh, Alexander William

    A system, apparatus, computer-readable medium, and computer-implemented method are provided for detecting anomalous behavior in a network. Historical parameters of the network are determined in order to determine normal activity levels. A plurality of paths in the network are enumerated as part of a graph representing the network, where each computing system in the network may be a node in the graph and the sequence of connections between two computing systems may be a directed edge in the graph. A statistical model is applied to the plurality of paths in the graph on a sliding window basis to detect anomalous behavior. Data collected by a Unified Host Collection Agent ("UHCA") may also be used to detect anomalous behavior.

  16. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.

    PubMed

    Reid, Stephen; Tibshirani, Rob

    2014-07-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by.

  18. Some path-following techniques for solution of nonlinear equations and comparison with parametric differentiation

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Walters, R. W.

    1986-01-01

    Some path-following techniques are described and compared with other methods. Use of multipurpose techniques that can be used at more than one stage of the path-following computation results in a system that is relatively simple to understand, program, and use. Comparison of path-following methods with the method of parametric differentiation reveals definite advantages for the path-following methods. The fact that parametric differentiation has found a broader range of applications indicates that path-following methods have been underutilized.

  19. Biologically important conformational features of DNA as interpreted by quantum mechanics and molecular mechanics computations of its simple fragments.

    PubMed

    Poltev, V; Anisimov, V M; Dominguez, V; Gonzalez, E; Deriabina, A; Garcia, D; Rivas, F; Polteva, N A

    2018-02-01

    Deciphering the mechanism of functioning of DNA as the carrier of genetic information requires identifying inherent factors determining its structure and function. Following this path, our previous DFT studies attributed the origin of unique conformational characteristics of right-handed Watson-Crick duplexes (WCDs) to the conformational profile of deoxydinucleoside monophosphates (dDMPs) serving as the minimal repeating units of DNA strand. According to those findings, the directionality of the sugar-phosphate chain and the characteristic ranges of dihedral angles of energy minima combined with the geometric differences between purines and pyrimidines determine the dependence on base sequence of the three-dimensional (3D) structure of WCDs. This work extends our computational study to complementary deoxydinucleotide-monophosphates (cdDMPs) of non-standard conformation, including those of Z-family, Hoogsteen duplexes, parallel-stranded structures, and duplexes with mispaired bases. For most of these systems, except Z-conformation, computations closely reproduce experimental data within the tolerance of characteristic limits of dihedral parameters for each conformation family. Computation of cdDMPs with Z-conformation reveals that their experimental structures do not correspond to the internal energy minimum. This finding establishes the leading role of external factors in formation of the Z-conformation. Energy minima of cdDMPs of non-Watson-Crick duplexes demonstrate different sequence-dependence features than those known for WCDs. The obtained results provide evidence that the biologically important regularities of 3D structure distinguish WCDs from duplexes having non-Watson-Crick nucleotide pairing.

  20. Statistical Analysis of the First Passage Path Ensemble of Jump Processes

    NASA Astrophysics Data System (ADS)

    von Kleist, Max; Schütte, Christof; Zhang, Wei

    2018-02-01

    The transition mechanism of jump processes between two different subsets in state space reveals important dynamical information of the processes and therefore has attracted considerable attention in the past years. In this paper, we study the first passage path ensemble of both discrete-time and continuous-time jump processes on a finite state space. The main approach is to divide each first passage path into nonreactive and reactive segments and to study them separately. The analysis can be applied to jump processes which are non-ergodic, as well as continuous-time jump processes where the waiting time distributions are non-exponential. In the particular case that the jump processes are both Markovian and ergodic, our analysis elucidates the relations between the study of the first passage paths and the study of the transition paths in transition path theory. We provide algorithms to numerically compute statistics of the first passage path ensemble. The computational complexity of these algorithms scales with the complexity of solving a linear system, for which efficient methods are available. Several examples demonstrate the wide applicability of the derived results across research areas.
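
    The linear-system computation alluded to at the end of this abstract can be illustrated on a toy chain: mean first-passage times into a target set B satisfy (I - P_AA) m = 1 on the complement A. The chain below is an invented example, not one from the paper.

```python
import numpy as np

# Hypothetical 4-state discrete-time chain; target set B = {3} (absorbing).
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.00, 1.00],
])
B = [3]
A = [i for i in range(P.shape[0]) if i not in B]

# Mean first-passage times m into B solve (I - P_AA) m = 1 on states A.
m = np.linalg.solve(np.eye(len(A)) - P[np.ix_(A, A)], np.ones(len(A)))
print(dict(zip(A, np.round(m, 2))))
```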

  1. Recurrent Otitis Media and Attachment Security: A Path Model.

    ERIC Educational Resources Information Center

    McCallum, Michelle S.; McKim, Margaret K.

    1999-01-01

    Used regular telephone interviews over six months to examine processes through which recurrent episodes of otitis media influence children's attachment security. Found that recurrent otitis media negatively affected attachment security by increasing mothers' perceptions of their children as behaving more negatively. Parenting stress was not…

  2. Generic effective source for scalar self-force calculations

    NASA Astrophysics Data System (ADS)

    Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter

    2012-05-01

    A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.

  3. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    NASA Astrophysics Data System (ADS)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.

  4. Quality of service routing in wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Sane, Sachin J.; Patcha, Animesh; Mishra, Amitabh

    2003-08-01

    An efficient routing protocol is essential to guarantee application level quality of service running on wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints such as path-life, availability of sufficient energy as well as buffer space in each of the nodes on the path between the source and destination. The algorithm chooses the best path from among the multiples paths that it computes between two endpoints. We consider the use of control packets that run at a priority higher than the data packets in determining the multiple paths. The paper also examines the impact of different schedulers such as weighted fair queuing, and weighted random early detection among others in preserving the QoS level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for real-time voice application.

  5. Broadcasting a message in a parallel computer

    DOEpatents

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
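
    A boustrophedon ("snake") traversal is one easy way to realize a Hamiltonian path in one plane of a grid-shaped network. The sketch below is only an illustration of the broadcast pattern the patent describes, with a dictionary standing in for point-to-point sends.

```python
def snake_path(rows, cols):
    """Boustrophedon (snake) Hamiltonian path visiting every node of a grid plane."""
    path = []
    for r in range(rows):
        cols_iter = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cols_iter)
    return path

# The logical root injects the message at the head of the path; each node then
# forwards to its successor, so one sweep reaches every compute node.
path = snake_path(3, 4)
inbox = {}
for node in path:
    inbox[node] = "payload"   # stands in for a point-to-point send to `node`
print(path)
print(len(inbox), "nodes reached")
```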

  6. Real-time path planning in dynamic virtual environments using multiagent navigation graphs.

    PubMed

    Sud, Avneesh; Andersen, Erik; Curtis, Sean; Lin, Ming C; Manocha, Dinesh

    2008-01-01

    We present a novel approach for efficient path planning and navigation of multiple virtual agents in complex dynamic scenes. We introduce a new data structure, Multi-agent Navigation Graph (MaNG), which is constructed using first- and second-order Voronoi diagrams. The MaNG is used to perform route planning and proximity computations for each agent in real time. Moreover, we use the path information and proximity relationships for local dynamics computation of each agent by extending a social force model [Helbing05]. We compute the MaNG using graphics hardware and present culling techniques to accelerate the computation. We also address undersampling issues and present techniques to improve the accuracy of our algorithm. Our algorithm is used for real-time multi-agent planning in pursuit-evasion, terrain exploration and crowd simulation scenarios consisting of hundreds of moving agents, each with a distinct goal.

  7. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography, with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.

  8. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE PAGES

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.; ...

    2016-10-11

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography, with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.
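
    For a ray discretized into segments through model cells, the uncertainty computation described in the two records above reduces to a quadratic form: the travel-time variance is lᵀCl, where l holds the path lengths through each cell and C is the slowness covariance. The toy numbers below are invented to show why the off-diagonal terms of C matter.

```python
import numpy as np

# Hypothetical ray crossing 3 model cells: segment lengths l (km) and a
# 3x3 slowness covariance matrix C (s^2/km^2) from a tomography solution.
l = np.array([120.0, 300.0, 80.0])
C = np.array([
    [4.0e-6, 1.0e-6, 0.0],
    [1.0e-6, 9.0e-6, 2.0e-6],
    [0.0,    2.0e-6, 2.5e-5],
])

var_full = l @ C @ l                          # includes off-diagonal covariance
var_diag = l @ np.diag(np.diag(C)) @ l        # diagonal-only approximation
print("sigma_tt (full):", np.sqrt(var_full))
print("sigma_tt (diag):", np.sqrt(var_diag))
```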

  9. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
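
    The computational saving comes from the fact that kriging weights depend only on the geometry of the path and its neighbourhoods, never on the simulated values. A minimal 1D simple-kriging sketch (unit sill, exponential covariance, and all previously simulated nodes used as neighbours; all of these are illustrative simplifications) makes the reuse explicit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
cov = lambda h: np.exp(-np.abs(h) / 5.0)     # unit-sill exponential covariance

path = rng.permutation(n)                    # constant path shared by all realizations

# Precompute simple-kriging weights and variances once; they depend only on
# the path geometry, not on the simulated values.
weights, variances = [], []
for step, node in enumerate(path):
    prev = path[:step].astype(float)
    if step == 0:
        weights.append(None); variances.append(1.0)
    else:
        K = cov(prev[:, None] - prev[None, :])
        k = cov(prev - node)
        w = np.linalg.solve(K, k)
        weights.append(w); variances.append(1.0 - w @ k)

def realization():
    z = np.empty(n)
    for step, node in enumerate(path):
        mu = 0.0 if step == 0 else weights[step] @ z[path[:step]]
        z[node] = mu + np.sqrt(max(variances[step], 0.0)) * rng.standard_normal()
    return z

reals = [realization() for _ in range(100)]  # weights computed once, reused 100x
print(np.mean(reals), np.std(reals))
```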

  10. Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop: August 4-5, 2015, Washington, D.C.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Boldyrev, Stanislav; Fischer, Paul

    This report details the impact that exascale computing will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the DOE applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.

  11. Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol

    PubMed Central

    2015-01-01

    Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726

  12. Functional integration of vertical flight path and speed control using energy principles

    NASA Technical Reports Server (NTRS)

    Lambregts, A. A.

    1984-01-01

    A generalized automatic flight control system was developed which integrates all longitudinal flight path and speed control functions previously provided by a pitch autopilot and autothrottle. In this design, a net thrust command is computed based on total energy demand arising from both flight path and speed targets. The elevator command is computed based on the energy distribution error between flight path and speed. The engine control is configured to produce the commanded net thrust. The design incorporates control strategies and hierarchy to deal systematically and effectively with all aircraft operational requirements, control nonlinearities, and performance limits. Consistent decoupled maneuver control is achieved for all modes and flight conditions without outer loop gain schedules, control law submodes, or control function duplication.
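
    The split described here, with the total-energy error driving thrust and the energy-distribution error driving the elevator, can be sketched per unit aircraft weight. Everything below (gains, the speed-error-to-acceleration conversion, signs) is a placeholder illustration of the structure, not the actual control law.

```python
def energy_commands(V, gamma, V_cmd, gamma_cmd, k_thrust=0.5, k_elev=0.5):
    """Schematic energy-based longitudinal control (per unit weight).

    Specific-energy rate ~ gamma + Vdot/g; flight-path and speed errors are
    summed for the thrust channel and differenced for the elevator channel.
    """
    accel_demand = 0.1 * (V_cmd - V) / max(V, 1.0)   # crude Vdot/g demand
    gamma_err = gamma_cmd - gamma
    thrust_cmd = k_thrust * (gamma_err + accel_demand)    # total energy rate
    elevator_cmd = k_elev * (gamma_err - accel_demand)    # energy distribution
    return thrust_cmd, elevator_cmd

print(energy_commands(V=120.0, gamma=0.00, V_cmd=125.0, gamma_cmd=0.03))
```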

  13. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The issues to efficiently perform such analyses on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
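
    A Hamiltonian grid path based on a mixed-radix Gray code can be generated by the classic reflected construction: consecutive grid points differ by ±1 in exactly one coordinate, so each model evaluation starts from a neighbouring, already-solved point. The sketch below shows only this standard construction; the paper's quasi-spiral heuristic is not reproduced.

```python
def gray_path(radices):
    """Mixed-radix reflected Gray order: consecutive tuples differ by +-1 in
    exactly one coordinate, i.e. a Hamiltonian path on the grid graph."""
    if not radices:
        return [()]
    tail = gray_path(radices[1:])
    path = []
    for i in range(radices[0]):
        block = tail if i % 2 == 0 else tail[::-1]   # reflect alternate blocks
        path.extend((i,) + t for t in block)
    return path

# Scan a 3 x 2 x 2 parameter grid with single-coordinate steps throughout.
for point in gray_path([3, 2, 2]):
    print(point)
```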

  14. PathCase-SB architecture and database design

    PubMed Central

    2011-01-01

    Background Integration of metabolic pathways resources and regulatory metabolic network models, and deploying new tools on the integrated platform can help perform more effective and more efficient systems biology research on understanding the regulation in metabolic networks. Therefore, the tasks of (a) integrating under a single database environment regulatory metabolic networks and existing models, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889

  15. A Lens on Learning: Early Vision Screening Can Set Children on the Path to Achievement.

    ERIC Educational Resources Information Center

    Black, Susan

    2002-01-01

    Discusses student learning difficulties linked to visual disorders such as dyslexia and amblyopia, problems associated with current school vision-screening procedures, and recommendations to improve preschool and in-school vision-screening practices with an emphasis on early, regular, and comprehensive eye examinations. (PKP)

  16. Minimizing Wide-Area Performance Disruptions in Inter-Domain Routing

    DTIC Science & Technology

    2011-09-01

    As another example, we saw the average round-trip time double for an ISP in Malaysia. The RTT increase was caused by a traffic shift to different... censorship, conduct wiretapping, or offer poor performance. This is achieved by applying regular expressions to the AS-PATH to assign lower preference

  17. Outline of a novel architecture for cortical computation.

    PubMed

    Majumdar, Kaushik

    2008-03-01

    In this paper a novel architecture for cortical computation is proposed. This architecture is composed of computing paths consisting of neurons and synapses. These paths are decomposed into lateral, longitudinal and vertical components, and cortical computation is correspondingly decomposed into lateral computation (LaC), longitudinal computation (LoC) and vertical computation (VeC). It is shown that various loop structures in the cortical circuit play important roles in cortical computation as well as in memory storage and retrieval, consistent with the molecular basis of short- and long-term memory. A new learning scheme for the brain is also proposed, and its implementation within the proposed architecture is explained. A few mathematical results about the architecture are presented, some of them without proof.

  18. Excitation of nucleobases from a computational perspective I: reaction paths.

    PubMed

    Giussani, Angelo; Segarra-Martí, Javier; Roca-Sanjuán, Daniel; Merchán, Manuela

    2015-01-01

    The main intrinsic photochemical events in nucleobases can be described on theoretical grounds within the realm of non-adiabatic computational photochemistry. From a static standpoint, the photochemical reaction path approach (PRPA), through the computation of the respective minimum energy path (MEP), can be regarded as the most suitable strategy in order to explore the electronically excited isolated nucleobases. Unfortunately, the PRPA does not appear widely in the studies reported in the last decade. The main ultrafast decay observed experimentally for the gas-phase excited nucleobases is related to the computed barrierless MEPs from the bright excited state connecting the initial Franck-Condon region and a conical intersection involving the ground state. At the highest level of theory currently available (CASPT2//CASPT2), the lowest excited (1)(ππ*) hypersurface for cytosine has a shallow minimum along the MEP deactivation pathway. In any case, the internal conversion processes in all the natural nucleobases are attained by means of interstate crossings, a self-protection mechanism that prevents the occurrence of photoinduced damage of nucleobases by ultraviolet radiation. Many alternative and secondary paths have been proposed in the literature, which ultimately provide a rich and constructive interplay between experimentally and theoretically oriented research.

  19. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  20. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
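
    The alternating direction method of multipliers used in this work can be illustrated on the simplest sparse-regularization instance, with the identity standing in for the curvelet transform, so the problem reduces to ℓ1-penalized least squares. The real reconstruction uses the CT system matrix and a curvelet frame; everything below is an invented small example.

```python
import numpy as np

def soft(v, g):
    return np.sign(v) * np.maximum(np.abs(v) - g, 0.0)

def admm_l1(A, b, lam, rho=1.0, n_iter=100):
    """ADMM for (1/2)||Ax - b||^2 + lam * ||x||_1 (identity sparsifying transform)."""
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse each iteration
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)     # proximal step: soft-thresholding
        u = u + x - z                  # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
x_true = np.zeros(40); x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.05 * rng.standard_normal(60)
print(np.nonzero(np.round(admm_l1(A, b, lam=2.0), 2))[0])
```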

  1. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of the filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
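
    The core computation being regularized is the standard Tikhonov normal-equations solve. The sketch below traces residual and solution norms over a parameter grid, the raw material of an L-curve; the GRACE-specific degree-and-order-dependent regularization matrix and the Lanczos bidiagonalization approximation are not reproduced, and the test problem is invented.

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve min ||Ax - b||^2 + lam^2 ||Lx||^2 via the normal equations."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

# Ill-conditioned synthetic problem; scan lambda and record both L-curve axes.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(1.0 / np.arange(1, 21) ** 2)
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-3 * rng.standard_normal(50)
for lam in np.logspace(-6, 0, 7):
    x = tikhonov(A, b, lam)
    print(f"{lam:.0e}  residual={np.linalg.norm(A @ x - b):.2e}  "
          f"norm={np.linalg.norm(x):.2e}")
# The corner of the log-log residual-vs-norm curve suggests a good lambda.
```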

  2. Shortest path problem on a grid network with unordered intermediate points

    NASA Astrophysics Data System (ADS)

    Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen

    2017-10-01

    We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against a brute-force search show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
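
    On an obstacle-free grid the shortest path between two points has Manhattan length, so the heart of such a two-stage heuristic is ordering the unordered intermediate points. Below is a sketch of a greedy (nearest-neighbour) first stage, with brute force over orderings as the optimality reference; the start, goal, and intermediate points are invented, and the paper's actual algorithm is not reproduced.

```python
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nn_order(start, points):
    """Stage 1: greedily order the unordered intermediate points."""
    order, cur, left = [], start, list(points)
    while left:
        nxt = min(left, key=lambda p: manhattan(cur, p))
        left.remove(nxt)
        order.append(nxt)
        cur = nxt
    return order

start, goal = (0, 0), (9, 9)
pts = [(7, 2), (3, 8), (5, 5)]
route = [start] + nn_order(start, pts) + [goal]
greedy_len = sum(manhattan(a, b) for a, b in zip(route, route[1:]))
best_len = min(
    sum(manhattan(a, b) for a, b in zip([start, *perm, goal], [*perm, goal]))
    for perm in permutations(pts)
)
print(route, greedy_len, best_len)
```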

  3. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles

    PubMed Central

    2017-01-01

    Real-time path planning for autonomous underwater vehicles (AUVs) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: no learning process is needed and realization is easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational load when the environment is very large and repeated paths when obstacles are larger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors. The BINN then moves with the AUV, and the computational load is reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently. PMID:28255297

  4. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    PubMed

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for autonomous underwater vehicles (AUVs) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: no learning process is needed and realization is easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational load when the environment is very large and repeated paths when obstacles are larger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors. The BINN then moves with the AUV, and the computational load is reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently.

  5. Impedance computed tomography using an adaptive smoothing coefficient algorithm.

    PubMed

    Suzuki, A; Uchiyama, A

    2001-01-01

    In impedance computed tomography, a fixed-coefficient regularization algorithm has frequently been used to mitigate the ill-conditioning of the Newton-Raphson algorithm. However, a large amount of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it must be chosen manually from a number of candidates and is held constant across iterations. Thus, the fixed-coefficient regularization algorithm sometimes distorts the information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalues of the ill-conditioned matrix, so effective images can be obtained within a short computation time. The smoothing coefficient is also adjusted automatically using information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth the computation time of the fixed-coefficient regularization algorithm. Images are thus obtained more rapidly, making the method applicable to real-time monitoring of blood vessels.

  6. Portable low-coherence interferometry for quantitatively imaging fast dynamics with extended field of view

    NASA Astrophysics Data System (ADS)

    Shaked, Natan T.; Girshovitz, Pinhas; Frenklach, Irena

    2014-06-01

    We present our recent advances in the development of compact, highly portable and inexpensive wide-field interferometric modules. By a smart design of the interferometric system, including the usage of low-coherence illumination sources and common-path off-axis geometry of the interferometers, spatial and temporal noise levels of the resulting quantitative thickness profile can be sub-nanometric, while processing the phase profile in real time. In addition, due to novel experimentally-implemented multiplexing methods, we can capture low-coherence off-axis interferograms with significantly extended field of view and in faster acquisition rates. Using these techniques, we quantitatively imaged rapid dynamics of live biological cells including sperm cells and unicellular microorganisms. Then, we demonstrated dynamic profiling during lithography processes of microscopic elements, with thicknesses that may vary from several nanometers to hundreds of microns. Finally, we present new algorithms for fast reconstruction (including digital phase unwrapping) of off-axis interferograms, which allow real-time processing in more than video rate on regular single-core computers.

  7. Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.

    PubMed

    Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan

    2018-06-05

    Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels which will be used for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.

  8. OSI Network-layer Abstraction: Analysis of Simulation Dynamics and Performance Indicators

    NASA Astrophysics Data System (ADS)

    Lawniczak, Anna T.; Gerisch, Alf; Di Stefano, Bruno

    2005-06-01

    The Open Systems Interconnection (OSI) reference model provides a conceptual framework for communication among computers in a data communication network. The Network Layer of this model is responsible for the routing and forwarding of packets of data. We investigate the OSI Network Layer and develop an abstraction suitable for the study of various network performance indicators, e.g. throughput, average packet delay, average packet speed, average packet path-length, etc. We investigate how the network dynamics and the network performance indicators are affected by various routing algorithms and by the addition of randomly generated links into a regular network connection topology of fixed size. We observe that the network dynamics is not simply the sum of effects resulting from adding individual links to the connection topology but rather is governed nonlinearly by the complex interactions caused by the existence of all randomly added and already existing links in the network. Data for our study was gathered using Netzwerk-1, a C++ simulation tool that we developed for our abstraction.

  9. Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol

    DOE PAGES

    Kale, Seyit; Sode, Olaseni; Weare, Jonathan; ...

    2014-11-07

    Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled.

  10. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
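
    Complete enumeration, the baseline the factoring approach improves on, is easy to state on a toy network: enumerate every combination of discrete edge lengths, take the shortest s-t path length in each, and accumulate probabilities. The network and distributions below are made up for illustration.

```python
from itertools import product

# Tiny stochastic network with two s-t paths, s-a-t and s-b-t.
# Each edge has a discrete length distribution: list of (length, probability).
edges = {
    "sa": [(1, 0.5), (3, 0.5)],
    "at": [(1, 0.5), (2, 0.5)],
    "sb": [(2, 1.0)],
    "bt": [(1, 0.5), (4, 0.5)],
}

names = list(edges)
dist = {}
for combo in product(*(edges[e] for e in names)):
    length = {e: v for e, (v, _) in zip(names, combo)}
    prob = 1.0
    for _, p in combo:
        prob *= p
    sp = min(length["sa"] + length["at"], length["sb"] + length["bt"])
    dist[sp] = dist.get(sp, 0.0) + prob

print(dict(sorted(dist.items())))   # exact shortest-path length distribution
```

    The cost of this enumeration grows exponentially with the number of stochastic edges, which is exactly what conditional factoring is designed to avoid.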

  11. Path description of coordinate-space amplitudes

    NASA Astrophysics Data System (ADS)

    Erdoǧan, Ozan; Sterman, George

    2017-06-01

    We develop a coordinate version of light-cone-ordered perturbation theory, for general time-ordered products of fields, by carrying out integrals over one light-cone coordinate for each interaction vertex. The resulting expressions depend on the lengths of paths, measured in the same light-cone coordinate. Each path is associated with a denominator equal to a "light-cone deficit," analogous to the "energy deficits" of momentum-space time- or light-cone-ordered perturbation theory. In effect, the role played by intermediate states in momentum space is played by paths between external fields in coordinate space. We derive a class of identities satisfied by coordinate diagrams, from which their imaginary parts can be derived. Using scalar QED as an example, we show how the eikonal approximation arises naturally when the external points in a Green function approach the light cone, and we give applications to products of Wilson lines. Although much of our discussion is directed at massless fields in four dimensions, we extend the formalism to massive fields and dimensional regularization.

  12. Langevin Dynamics, Large Deviations and Instantons for the Quasi-Geostrophic Model and Two-Dimensional Euler Equations

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2014-09-01

    We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.

  13. Path optimization with limited sensing ability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Sung Ha, E-mail: kang@math.gatech.edu; Kim, Seong Jun, E-mail: skim396@math.gatech.edu; Zhou, Haomin, E-mail: hmzhou@math.gatech.edu

    2015-10-15

    We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain the full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape the local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.

  14. Developmental dysplasia of the hip: A computational biomechanical model of the path of least energy for closed reduction.

    PubMed

    Zwawi, Mohammed A; Moslehy, Faissal A; Rose, Christopher; Huayamave, Victor; Kassab, Alain J; Divo, Eduardo; Jones, Brendan J; Price, Charles T

    2017-08-01

    This study utilized a computational biomechanical model and applied the least energy path principle to investigate two pathways for closed reduction of high grade infantile hip dislocation. The principle of least energy when applied to moving the femoral head from an initial to a final position considers all possible paths that connect them and identifies the path of least resistance. Clinical reports of severe hip dysplasia have concluded that reduction of the femoral head into the acetabulum may occur by a direct pathway over the posterior rim of the acetabulum when using the Pavlik harness, or by an indirect pathway with reduction through the acetabular notch when using the modified Hoffman-Daimler method. This computational study also compared the energy requirements for both pathways. The anatomical and muscular aspects of the model were derived using a combination of MRI and OpenSim data. Results of this study indicate that the path of least energy closely approximates the indirect pathway of the modified Hoffman-Daimler method. The direct pathway over the posterior rim of the acetabulum required more energy for reduction. This biomechanical analysis confirms the clinical observations of the two pathways for closed reduction of severe hip dysplasia. The path of least energy closely approximated the modified Hoffman-Daimler method. Further study of the modified Hoffman-Daimler method for reduction of severe hip dysplasia may be warranted based on this computational biomechanical analysis. © 2016 The Authors. Journal of Orthopaedic Research published by Wiley Periodicals, Inc. on behalf of Orthopaedic Research Society. J Orthop Res 35:1799-1805, 2017.

  15. Computer calculation of Witten's 3-manifold invariant

    NASA Astrophysics Data System (ADS)

    Freed, Daniel S.; Gompf, Robert E.

    1991-10-01

    Witten's 2+1 dimensional Chern-Simons theory is exactly solvable. We compute the partition function, a topological invariant of 3-manifolds, on generalized Seifert spaces. Thus we test the path integral using the theory of 3-manifolds. In particular, we compare the exact solution with the asymptotic formula predicted by perturbation theory. We conclude that this path integral works as advertised and gives an effective topological invariant.

  16. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-06-28

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.

  17. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.

    1994-01-01

    An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.

  18. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
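
    Spectral regularization penalizes the singular values of the coefficient matrix, and its basic computational kernel is singular-value soft-thresholding, the proximal operator of the nuclear norm. The sketch below applies it to a synthetic low-rank denoising problem; the paper's estimation algorithm and degrees-of-freedom formula are not reproduced.

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * ||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A noisy low-rank matrix is shrunk toward a low-rank estimate as tau grows.
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))   # rank 3 signal
noisy = B + 0.5 * rng.standard_normal(B.shape)
for tau in [0.0, 2.0, 8.0]:
    est = svt(noisy, tau)
    print(tau, np.linalg.matrix_rank(est, tol=1e-8),
          f"err={np.linalg.norm(est - B):.2f}")
```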

  19. High-speed manufacturing of highly regular femtosecond laser-induced periodic surface structures: physical origin of regularity.

    PubMed

    Gnilitskyi, Iaroslav; Derrien, Thibault J-Y; Levy, Yoann; Bulgakova, Nadezhda M; Mocek, Tomáš; Orazi, Leonardo

    2017-08-16

    Highly regular laser-induced periodic surface structures (HR-LIPSS) have been fabricated on surfaces of Mo, steel alloy and Ti at a record processing speed on large areas and with a record regularity in the obtained sub-wavelength structures. The physical mechanisms governing LIPSS regularity are identified and linked with the decay length (i.e. the mean free path) of the excited surface electromagnetic waves (SEWs). The dispersion of the LIPSS orientation angle correlates well with the SEW decay length: the shorter this length, the more regular are the LIPSS. A material-dependent criterion for obtaining HR-LIPSS is proposed for a large variety of metallic materials. It has been found that decreasing the spot size close to the SEW decay length is key for covering several cm² of material surface with HR-LIPSS in a few seconds. Theoretical predictions suggest that reducing the laser wavelength can provide the possibility of HR-LIPSS production on principally any metal. This unprecedented level of control over laser-induced periodic structure formation makes this laser-writing technology flexible, robust and, hence, highly competitive for advanced industrial applications based on surface nanostructuring.

  20. 5 CFR 532.257 - Regular nonappropriated fund wage schedules in foreign areas.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... These schedules will provide rates of pay for nonsupervisory, leader, and supervisory employees. (b) Schedules will be— (1) Computed on the basis of a simple average of all regular nonappropriated fund wage... each nonsupervisory grade will be derived by computing a simple average of each step 2 rate for each of...

  1. 5 CFR 532.257 - Regular nonappropriated fund wage schedules in foreign areas.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... These schedules will provide rates of pay for nonsupervisory, leader, and supervisory employees. (b) Schedules will be— (1) Computed on the basis of a simple average of all regular nonappropriated fund wage... each nonsupervisory grade will be derived by computing a simple average of each step 2 rate for each of...

  2. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
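    To make the paired-path construction concrete, the sketch below couples a coarse and a fine tau-leap path for a single decay reaction X -> 0 via shared Poisson variables, in the spirit of the Anderson-Higham coupling; the reaction, rate constant, and fixed (non-adaptive) step size are simplifying assumptions, whereas the paper's contribution is precisely to choose tau adaptively per path.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_pair(x0, k, T, tau_c):
    """One (coarse, fine) pair of tau-leap paths for X -> 0 with a(x) = k*x.
    Shared Poisson variables correlate the two paths and shrink the variance
    of the correction estimator E[X_fine - X_coarse]."""
    tau_f = tau_c / 2.0
    xc = xf = x0
    t = 0.0
    while t < T:
        ac = k * xc                    # coarse propensity frozen over the coarse step
        for _ in range(2):             # two fine sub-steps per coarse step
            af = k * xf
            m = min(ac, af)
            shared = rng.poisson(m * tau_f)          # common randomness
            xc = max(xc - shared - rng.poisson((ac - m) * tau_f), 0)
            xf = max(xf - shared - rng.poisson((af - m) * tau_f), 0)
        t += tau_c
    return xc, xf

pairs = [coupled_pair(1000, 0.5, T=2.0, tau_c=0.25) for _ in range(2000)]
correction = np.mean([f - c for c, f in pairs])      # one multi-level correction term
```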

  3. Fast incorporation of optical flow into active polygons.

    PubMed

    Unal, Gozde; Krim, Hamid; Yezzi, Anthony

    2005-06-01

    In this paper, we first reconsider, in a different light, the addition of an optical-flow-based prediction step to active contour-based visual tracking, and clarify the local computation of optical flow along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need to add ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to the reduced number of parameters are additional appealing features of this technique.

  4. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool to solve radiotherapy and medical imaging problems. Low computational efficiency hinders its wide applications. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Take cone beam CT (CBCT) projection simulation as an example: a significant amount of computation is wasted on transporting photons that do not reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. A Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package gMMC on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
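    At its core the scheme is a Metropolis-Hastings walk on a space of paths. The toy sketch below samples short 2-D "photon paths" under a made-up contribution function that penalizes path length and rewards ending near a detector plane; both the target and the Gaussian proposal are illustrative stand-ins, not the transport physics implemented in gMMC.

```python
import numpy as np

rng = np.random.default_rng(1)

def contribution(path):
    """Toy unnormalized path probability: an attenuation-like length penalty
    plus a reward for ending near the detector plane x = 1."""
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return np.exp(-length - 20.0 * (path[-1, 0] - 1.0) ** 2)

def sample_paths(n_samples, n_vertices=4, sigma=0.1):
    path = 0.1 * rng.normal(size=(n_vertices, 2))
    samples = []
    for _ in range(n_samples):
        prop = path + rng.normal(scale=sigma, size=path.shape)  # symmetric proposal
        # Metropolis acceptance preserves the relative probabilities among paths
        if rng.random() < contribution(prop) / contribution(path):
            path = prop
        samples.append(path.copy())
    return samples
```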

  5. INNOVATIVE APPROACH FOR MEASURING AMMONIA AND METHANE FLUXES FROM A HOG FARM USING OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROSCOPY

    EPA Science Inventory

    The paper describes a new approach to quantify emissions from area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) technique and computed tomography (CT) technique. In this study, an...

  6. Adaptive Dynamics, Control, and Extinction in Networked Populations

    DTIC Science & Technology

    2015-07-09

    network geometries. From the pre-history of paths that go extinct, a density function is created, and a clear local...density plots of Fig. 3b. Using the IAMM to compute the most probable path and comparing it to the prehistory of extinction events on stochastic networks

  7. Autonomous Navigation by a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand

    2005-01-01

    ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
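    The summary does not spell out ROAMAN's planners; as a generic stand-in for the local, occupancy-grid-based leg planning it describes, here is a minimal A* search on a binary occupancy grid (note that A*, unlike ROAMAN, does guarantee a shortest grid path). The grid encoding and 4-connectivity are assumptions.

```python
import heapq

def astar(grid, start, goal):
    """Shortest obstacle-avoiding path on an occupancy grid.
    grid[r][c] == 1 marks an occupied (untraversable) cell."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                       # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:         # trace parents back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None                            # no safe path between the waypoints
```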

  8. Dissociable cognitive mechanisms underlying human path integration.

    PubMed

    Wiener, Jan M; Berthoz, Alain; Wolbers, Thomas

    2011-01-01

    Path integration is a fundamental mechanism of spatial navigation. In non-human species, it is assumed to be an online process in which a homing vector is updated continuously during an outward journey. In contrast, human path integration has been conceptualized as a configural process in which travelers store working memory representations of path segments, with the computation of a homing vector only occurring when required. To resolve this apparent discrepancy, we tested whether humans can employ different path integration strategies in the same task. Using a triangle completion paradigm, participants were instructed either to continuously update the start position during locomotion (continuous strategy) or to remember the shape of the outbound path and to calculate home vectors on the basis of this representation (configural strategy). While overall homing accuracy was superior in the configural condition, participants were quicker to respond during continuous updating, strongly suggesting that homing vectors were computed online. Corroborating these findings, we observed reliable differences in head orientation during the outbound path: when participants applied the continuous updating strategy, the head deviated significantly from straight ahead in the direction of the start place, which can be interpreted as a continuous motor expression of the homing vector. Head orientation, a novel online measure for path integration, can thus inform about the underlying updating mechanism already during locomotion. In addition to demonstrating that humans can employ different cognitive strategies during path integration, our two-systems view helps to resolve recent controversies regarding the role of the medial temporal lobe in human path integration.
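    The two strategies yield the same homing vector and differ only in when it is computed, which is what the response-time and head-orientation measures exploit. A minimal sketch, assuming path legs expressed in a common allocentric frame:

```python
import numpy as np

def continuous_homing(segments):
    """Continuous strategy: update the homing vector after every segment."""
    home = np.zeros(2)
    for seg in segments:
        home -= seg            # online update; available at any time mid-journey
    return home

def configural_homing(segments):
    """Configural strategy: store the outbound shape, compute only when needed."""
    return -np.sum(np.asarray(segments), axis=0)

legs = [np.array([2.0, 0.0]), np.array([0.0, 1.0])]   # two legs of a triangle
assert np.allclose(continuous_homing(legs), configural_homing(legs))
```

    The study's behavioral signatures (faster responses, head deviation toward the start) correspond to the continuous variant, in which the running value of the homing vector is available throughout the outbound path.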

  9. A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement

    NASA Astrophysics Data System (ADS)

    Koner, P.; Battaglia, A.; Simmer, C.

    2009-04-01

    The retrieval of rain rate from attenuated radar measurements (e.g. the Cloud Profiling Radar on board CloudSAT, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (such as path-integrated attenuation (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. Within optimal estimation theory it is generally argued that, in cases of appreciable attenuation, there is no solution without constraining the problem, because there is not enough information content to solve it. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the spontaneous question: is all the information enclosed in this additional measurement? This also seems to contradict information theory, because one measurement can introduce only one degree of freedom into the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be explained using the estimation and information theories of OEM. On the other hand, Koner and Drummond [2] argued that OEM is basically a regularization method, where the a priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a priori and error covariance matrices. Regularization is required to reduce the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces into the state space in an ill-posed inversion. In this work, the above-mentioned question is discussed on the basis of regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation-based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.
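    A small numerical illustration of the condition-number argument (not the radar retrieval itself): adding a Tikhonov-type term lam * I to the normal matrix of a nearly rank-deficient Jacobian sharply reduces its condition number, and with it the noise injected from measurement space into state space.

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(50, 2)) @ np.diag([1.0, 1e-4])   # nearly rank-deficient Jacobian
JtJ = J.T @ J
for lam in (0.0, 1e-3, 1.0):
    A = JtJ + lam * np.eye(2)          # a priori term acting as a stabilizer
    print(f"lam={lam:g}  cond={np.linalg.cond(A):.3g}")
```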

  10. Cutting tool form compensation system and method

    DOEpatents

    Barkman, W.E.; Babelay, E.F. Jr.; Klages, E.J.

    1993-10-19

    A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed. 9 figures.

  11. Cutting tool form compensation system and method

    DOEpatents

    Barkman, William E.; Babelay, Jr., Edwin F.; Klages, Edward J.

    1993-01-01

    A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed.
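    Both records describe locating edge points from contrasting light intensities and then computing a tool center from them. A crude sketch of that pipeline, assuming a back-lit image with a dark tool on a bright background; the patent's actual vision processing is more elaborate.

```python
import numpy as np

def edge_points(image, threshold=0.5):
    """Boundary pixels of a dark tool against a bright background: tool pixels
    with at least one non-tool 4-neighbor (image borders ignored for brevity)."""
    tool = image < threshold
    interior = (np.roll(tool, 1, 0) & np.roll(tool, -1, 0) &
                np.roll(tool, 1, 1) & np.roll(tool, -1, 1))
    return np.argwhere(tool & ~interior)

def tool_center(points):
    """A simple tool-center estimate: the centroid of the detected edge points."""
    return points.mean(axis=0)
```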

  12. Control of the transition between regular and mach reflection of shock waves

    NASA Astrophysics Data System (ADS)

    Alekseev, A. K.

    2012-06-01

    A control problem was considered that makes it possible to switch the flow between stationary Mach and regular reflection of shock waves within the dual-solution domain. The sensitivity of the flow was computed by solving adjoint equations. A control disturbance was sought by applying gradient optimization methods. According to the computational results, the transition from regular to Mach reflection can be executed by raising the temperature. The transition from Mach to regular reflection can be achieved by lowering the temperature at moderate Mach numbers and is impossible at large Mach numbers. The reliability of the numerical results was confirmed by verifying them with the help of a posteriori analysis.

  13. Speed and path control for conflict-free flight in high air traffic demand in terminal airspace

    NASA Astrophysics Data System (ADS)

    Rezaei, Ali

    To accommodate the growing air traffic demand, flights will need to be planned and navigated with a much higher level of precision than today's aircraft flight paths. The Next Generation Air Transportation System (NextGen) stands to benefit significantly in safety and efficiency from such movement of aircraft along precisely defined paths. Air Traffic Operations (ATO) relying on such precision, the Precision Air Traffic Operations or PATO, are the foundation of the high throughput capacity envisioned for future airports. In PATO, the preferred method is to manage the air traffic by assigning a speed profile to each aircraft in a given fleet in a given airspace (known in practice as speed control). In this research, an algorithm has been developed, set in the context of a Hybrid Control System (HCS) model, that determines whether a speed control solution exists for a given fleet of aircraft in a given airspace and, if so, computes this solution as a collective speed profile that assures separation if executed without deviation. Uncertainties such as weather are not considered, but the algorithm can be modified to include them. The algorithm first computes all feasible sequences (i.e., all sequences that allow the given fleet of aircraft to reach destinations without violating the FAA's separation requirement) by examining all pairs of aircraft. Then, the most likely sequence is determined and the speed control solution is constructed by backward trajectory generation, starting with the last aircraft out and proceeding to the first out. This computation can be done for different sequences in parallel, which helps to reduce the computation time. If such a solution does not exist, then the algorithm calculates a minimal path modification (known as path control) that will allow separation-compliant speed control. We also prove that the algorithm modifies the path without creating a new separation violation. The new path is generated by adding new waypoints in the airspace. As a byproduct, instead of minimal path modification, one can use the aircraft arrival time schedule to generate the sequence in which the aircraft reach their destinations.

  14. Computer-implemented remote sensing techniques for measuring coastal productivity and nutrient transport systems

    NASA Technical Reports Server (NTRS)

    Butera, M. K.

    1981-01-01

    An automatic technique has been developed to measure marsh plant production by inference from a species classification derived from Landsat MSS data. A separate computer technique has been developed to calculate the transport path length of detritus and nutrients from their point of origin in the marsh to the shoreline from Landsat data. A nutrient availability indicator, the ratio of production to transport path length, was derived for each marsh-identified Landsat cell. The use of a data base compatible with the Landsat format facilitated data handling and computations.

  15. Administrative Support and Its Mediating Effect on US Public School Teachers

    ERIC Educational Resources Information Center

    Tickle, Benjamin R.; Chang, Mido; Kim, Sunha

    2011-01-01

    This study examined the effect of administrative support on teachers' job satisfaction and intent to stay in teaching. The study employed a path analysis to the data of regular, full-time, public school teachers from the Schools and Staffing Survey teacher questionnaire. Administrative support was the most significant predictor of teachers' job…

  16. Ecological Consultancy

    ERIC Educational Resources Information Center

    Wilson, Scott McG.; Tattersfield, Peter

    2004-01-01

    This is the first of a new regular feature on careers, designed to provide those who teach biology with some inspiration when advising their students. In this issue, two consultant ecologists explain how their career paths developed. It is a misconception that there are few jobs in ecology. Over the past 20 or 30 years ecological consultancy has…

  17. Exploring the Special Education versus Regular Education Decisions of Future Teachers in the Rural Midwest

    ERIC Educational Resources Information Center

    DeSutter, Keri L.; Lemire, Steven Dale

    2016-01-01

    Persistent shortages of special education teachers, particularly in rural areas, exist across the country. This study assessed the openness of teacher candidates enrolled in an introductory education course at two rural Midwest universities to a special education career path. Survey findings confirmed that work or volunteer experience involving…

  18. Teaching World History: One Path through the Forest

    ERIC Educational Resources Information Center

    Fisher, Eve

    2012-01-01

    Teaching world history presents any number of challenges. World history requires constantly shifting perspectives in order to keep students oriented in time and space while providing contemporary relevance, emphasizing themes with regularity, having a certain amount of fun, and moving at the warp speed that covering 10,000 years in 31 weeks…

  19. Delayed Acquisition of Non-Adjacent Vocalic Distributional Regularities

    ERIC Educational Resources Information Center

    Gonzalez-Gomez, Nayeli; Nazzi, Thierry

    2016-01-01

    The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two…

  20. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. The usage of Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
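    For orientation, the sketch below states the 1D TV denoising problem and solves it by projected gradient ascent on its dual; this is easy to verify but much slower than Condat's direct algorithm or the taut string method discussed in the paper. The step size 1/4 comes from bounding the spectral norm of D D^T for the 1D difference operator D.

```python
import numpy as np

def tv_denoise_1d(y, lam, iters=2000):
    """min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|, via projected
    gradient ascent on the dual (u clipped to [-lam, lam] componentwise)."""
    dt = lambda u: np.append(0.0, u) - np.append(u, 0.0)   # D^T for D = np.diff
    u = np.zeros(len(y) - 1)
    step = 0.25                    # 1 / ||D D^T|| for the 1D difference operator
    for _ in range(iters):
        x = y - dt(u)              # primal iterate for the current dual variables
        u = np.clip(u + step * np.diff(x), -lam, lam)
    return y - dt(u)
```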

  1. Computation of repetitions and regularities of biologically weighted sequences.

    PubMed

    Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K

    2006-01-01

    Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.
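    A weighted sequence stores, at each position, a probability for every symbol of the alphabet. As a minimal illustration of the pattern-matching problem (ii), the sketch below reports positions where the occurrence probability of a pattern clears a threshold; the profile layout and threshold value are assumptions.

```python
import numpy as np

def weighted_matches(profile, pattern, alphabet="ACGT", thresh=0.1):
    """Occurrences of `pattern` in a weighted (profile) sequence: position i
    matches if the product of per-position symbol probabilities >= thresh.
    `profile` is a list of probability vectors, one per sequence position."""
    idx = {c: k for k, c in enumerate(alphabet)}
    n, m = len(profile), len(pattern)
    hits = []
    for i in range(n - m + 1):
        p = np.prod([profile[i + j][idx[c]] for j, c in enumerate(pattern)])
        if p >= thresh:
            hits.append((i, p))
    return hits
```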

  2. Bayesian feature selection for high-dimensional linear regression via the Ising approximation with applications to genomics.

    PubMed

    Fisher, Charles K; Mehta, Pankaj

    2015-06-01

    Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach, the Bayesian Ising Approximation (BIA), to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30 000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, is freely available at http://physics.bu.edu/~pankajm/BIACode.
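    The BIA itself computes Ising magnetizations under a mean-field approximation, which is not reproduced here. As a simpler stand-in for the notion of a feature-relevance path indexed by the L2 penalty, this sketch computes ridge coefficients along a penalty grid and ranks features by coefficient magnitude.

```python
import numpy as np

def ridge_path(X, y, lambdas):
    """Ridge coefficients along a grid of L2 penalties; the trajectory of each
    coefficient is a crude proxy for the BIA's analytic posterior-relevance
    path, which it computes via mean-field Ising magnetizations instead."""
    p = X.shape[1]
    return np.array([np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
                     for lam in lambdas])

lambdas = np.logspace(3, -3, 25)   # strongly to weakly regularized, as in the BIA regime
```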

  3. Path statistics, memory, and coarse-graining of continuous-time random walks on networks

    PubMed Central

    Kion-Crosby, Willow; Morozov, Alexandre V.

    2015-01-01

    Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
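    PathMAN evaluates such moments by recursion rather than by sampling; as a cross-check that is practical on small networks, the sketch below estimates the first two first-passage-time moments of a CTRW with exponential waiting times by direct simulation. The three-state chain is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(3)

def first_passage_moments(rates, start, target, n_paths=20000):
    """First two moments of the first-passage time of a CTRW by simulation.
    rates[i][j] > 0 is the i -> j transition rate; waiting times in state i
    are exponential with the total outflow rate."""
    times = np.empty(n_paths)
    for k in range(n_paths):
        state, t = start, 0.0
        while state != target:
            out = rates[state]
            total = sum(out.values())
            t += rng.exponential(1.0 / total)          # holding time in `state`
            nbrs, w = zip(*out.items())
            state = rng.choice(nbrs, p=np.array(w) / total)
        times[k] = t
    return times.mean(), np.mean(times ** 2)

# three-state chain 0 <-> 1 -> 2: moments of the 0 -> 2 first-passage time
rates = {0: {1: 1.0}, 1: {0: 0.5, 2: 1.0}}
print(first_passage_moments(rates, start=0, target=2))
```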

  4. Analysis of bubble plume spacing produced by regular breaking waves

    NASA Astrophysics Data System (ADS)

    Phaksopa, J.; Haller, M. C.

    2012-12-01

    The breaking wave process in the ocean is a significant mechanism for energy dissipation, splash, and entrainment of air. The relationship between breaking waves and bubble plume characteristics is still poorly understood because of the complexity of the breaking wave mechanism. This study takes a unique approach to quantitatively analyzing bubble plumes produced by regular breaking waves. Various previous studies have investigated the formation and characteristics of bubble plumes using field observations, laboratory experiments, or numerical modeling. However, in most observational work the plume characteristics have been studied from underneath the water surface. In addition, though numerical simulations are able to include much of the important physics, the computational costs are high and bubble plume events are only simulated for short times. Hence, simulating bubble plume evolution and generation throughout the surf zone is not yet computationally feasible. In the present work we analyze bubble plumes remotely; these data may be of use for model/data comparisons as numerical simulations become more tractable. The remotely sensed video data from freshwater breaking waves in the OSU Large Wave Flume (Catalan and Haller, 2008) are analyzed. The data set contains six different regular wave conditions, and the video intensity data are used to estimate the spacing of plume events (wavenumber spectrum), to calculate the spectral width (i.e., the range of plume spacings), and to relate these to the wave conditions. The video intensity data capture the evolution of the wave passage over a fixed bed arranged in a bar-trough morphology. Bright regions represent the moving path, or trajectory, coincident with the bubble plume of each wave. The data also show that bubble foam was generated and released from the wave crest, appearing as bubble tails with almost regular spacing for each wave. The bubble tails show that most bubbles did not move along with the wave. In the estimated wavenumber spectrum, the density is high at low wavenumbers and decreases toward high wavenumbers. The average spectral bandwidth was estimated and taken to represent the bubble event spacing for each run; its magnitude varies with wave conditions, ranging from 8.81 to 11.82, and is related to the wave height. Additionally, the wavenumbers calculated from the power density function vary in the range of 0.80-1.58 m⁻¹. The bubble wavenumbers are mostly higher than the wavenumbers calculated from linear wave theory, between 0.2L and 0.7L. In other words, the bubble plume length does not exceed the progressive wavelength.

  5. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  6. Room Use for Group Instruction in Regularly Scheduled Classes.

    ERIC Educational Resources Information Center

    Phay, John E.; McCary, Arthur D.

    A method by which accurate accounting by computer might be made of space and room use by regularly scheduled classes in institutions of higher learning is furnished. Based on well-defined terms, a master room schedule and a master course schedule are prepared on computer cards. This information is then compared with the reported individual room…

  7. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  8. Mobile robot dynamic path planning based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Zhou, Heng; Wang, Ying

    2017-08-01

    In a dynamic unknown environment, the dynamic path planning of mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed; a reward value model is designed to estimate the probability of dynamic obstacles on the path, and the reward value function is applied within the genetic algorithm. Unique coding techniques reduce the computational complexity of the algorithm. The fitness function of the genetic algorithm fully considers three factors: the safety of the path, the shortest distance of the path and the reward value of the path. The simulation results show that the proposed genetic algorithm is efficient in all kinds of complex dynamic environments.
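    A minimal sketch of the fitness design described in the abstract, combining path safety, path length, and a reward term for dynamic-obstacle risk; the specific functional forms, the tournament selection, and the Gaussian waypoint mutation are assumptions, since the paper's exact coding scheme is not given here.

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(path, obstacles, reward):
    """Combine the abstract's three factors: safety (obstacle clearance),
    total path length, and the reward value for dynamic-obstacle risk."""
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    clearance = min(np.linalg.norm(path - ob, axis=1).min() for ob in obstacles)
    safety = 1.0 if clearance > 0.5 else clearance / 0.5  # penalize close approaches
    return safety * reward(path) / (1.0 + length)

def mutate(path, sigma):
    """Gaussian jitter on interior waypoints; the endpoints stay fixed."""
    noise = rng.normal(0.0, sigma, path.shape)
    noise[0] = noise[-1] = 0.0
    return path + noise

def evolve(pop, obstacles, reward, gens=100, sigma=0.2):
    """Minimal GA loop: size-2 tournament selection plus waypoint mutation."""
    for _ in range(gens):
        scores = [fitness(p, obstacles, reward) for p in pop]
        pop = [mutate(pop[max(rng.integers(len(pop), size=2),
                              key=lambda i: scores[i])], sigma)
               for _ in range(len(pop))]
    return max(pop, key=lambda p: fitness(p, obstacles, reward))

# straight-line seeds between fixed endpoints; a constant stand-in reward model
pop = [np.linspace([0.0, 0.0], [10.0, 10.0], 8) for _ in range(40)]
best = evolve(pop, obstacles=[np.array([5.0, 5.0])], reward=lambda p: 1.0)
```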

  9. Solving the Curriculum Sequencing Problem with DNA Computing Approach

    ERIC Educational Resources Information Center

    Debbah, Amina; Ben Ali, Yamina Mohamed

    2014-01-01

    In the e-learning systems, a learning path is known as a sequence of learning materials linked to each others to help learners achieving their learning goals. As it is impossible to have the same learning path that suits different learners, the Curriculum Sequencing problem (CS) consists of the generation of a personalized learning path for each…

  10. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
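    For the CSP estimate, a curved path honoring the measured entry and exit positions and directions can be produced with a cubic Hermite spline, as in this one-lateral-dimension sketch (the study's GEANT4 setup and ART solver are not reproduced):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def cubic_spline_path(z_in, z_out, y_in, y_out, slope_in, slope_out, n=100):
    """Cubic spline path (CSP) estimate of a proton's lateral position y(z),
    matching the measured entry/exit positions and direction tangents."""
    spline = CubicHermiteSpline([z_in, z_out], [y_in, y_out],
                                [slope_in, slope_out])
    z = np.linspace(z_in, z_out, n)
    return z, spline(z)

# entry on-axis heading slightly upward; exit displaced by Coulomb scattering
z, y = cubic_spline_path(0.0, 20.0, 0.0, 0.4, slope_in=0.01, slope_out=-0.02)
```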

  11. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
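    The FFT device referenced in all three records computes a Green's-function convolution over a regular grid in O(N log N). A minimal scalar sketch with zero padding to keep the convolution linear rather than circular; the blunt treatment of the r = 0 self term is an assumption, since proper singularity handling is part of the method's detail.

```python
import numpy as np

def greens_convolution(contrast, k, h):
    """Field scattered by a gridded contrast: convolution with the 3-D
    free-space Green's function exp(ikr)/(4*pi*r) via zero-padded FFTs."""
    n = contrast.shape[0]                   # assume a cubic n**3 grid, spacing h
    idx = np.arange(2 * n)
    idx[n:] -= 2 * n                        # signed cell offsets on the padded grid
    x, y, z = np.meshgrid(idx, idx, idx, indexing="ij")
    r = h * np.sqrt(x**2 + y**2 + z**2)
    g = np.where(r > 0, np.exp(1j * k * r) / (4 * np.pi * np.maximum(r, h)), 0.0)
    G = np.fft.fftn(g) * h**3               # kernel spectrum, volume-weighted
    pad = np.zeros((2 * n,) * 3, dtype=complex)
    pad[:n, :n, :n] = contrast
    return np.fft.ifftn(G * np.fft.fftn(pad))[:n, :n, :n]
```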

  12. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  13. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  14. A study of partial coherence for identifying interior noise sources and paths on general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.

    1979-01-01

    The partial coherence analysis method for noise source/path determination is summarized and the application to a two input, single output system with coherence between the inputs is illustrated. The augmentation of the calculations on a digital computer interfaced with a two channel, real time analyzer is also discussed. The results indicate possible sources of error in the computations and suggest procedures for avoiding these errors.
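    For reference, a two-input partial coherence can be estimated from Welch cross-spectral densities by conditioning both the candidate input and the output on the second input (in the style of Bendat and Piersol). A sketch using scipy.signal.csd; the segment length is an assumed analysis choice.

```python
import numpy as np
from scipy.signal import csd

def partial_coherence(x1, x2, y, fs, nperseg=1024):
    """Coherence of input x1 with output y after linearly removing, from both,
    the part coherent with the second input x2 (conditioned spectra)."""
    f, g11 = csd(x1, x1, fs, nperseg=nperseg)
    _, g22 = csd(x2, x2, fs, nperseg=nperseg)
    _, gyy = csd(y, y, fs, nperseg=nperseg)
    _, g12 = csd(x1, x2, fs, nperseg=nperseg)
    _, g1y = csd(x1, y, fs, nperseg=nperseg)
    _, g2y = csd(x2, y, fs, nperseg=nperseg)
    g11_2 = g11 - np.abs(g12) ** 2 / g22      # input spectrum conditioned on x2
    gyy_2 = gyy - np.abs(g2y) ** 2 / g22      # output spectrum conditioned on x2
    g1y_2 = g1y - g12 * g2y / g22             # conditioned cross spectrum
    return f, np.abs(g1y_2) ** 2 / (np.real(g11_2) * np.real(gyy_2))
```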

  15. Investigating walking environments in and around assisted living facilities: a facility visit study.

    PubMed

    Lu, Zhipeng

    2010-01-01

    This study explores assisted living residents' walking behaviors, locations where residents prefer to walk, and walking environments in and around assisted living facilities. Regular walking is beneficial to older adults' physical and psychological health. Yet frail older residents in assisted living are usually too sedentary to achieve these benefits. The physical environment plays an important role in promoting physical activity. However, there is little research exploring this relationship in assisted living settings. The researcher visited 34 assisted living facilities in a major Texas city. Methods included walk-through observation with the Assisted Living Facility Walking Environment Checklist, and interviews with administrators by open- and close-ended questions. The data from 26 facilities were analyzed using descriptive statistics (for quantitative data) and content analysis (for qualitative data). The results indicate that (a) residents were walking both indoors and outdoors for exercise or other purposes (e.g., going to destinations); (b) assisted living facility planning and design details-such as neighborhood sidewalk conditions, facility site selection, availability of seating, walking path configuration (e.g., looped/nonlooped path), amount of shading along the path, presence of handrails, existence of signage, etc.-may influence residents' walking behaviors; and (c) current assisted living facilities need improvement in all aspects to make their environments more walkable for residents. Findings of the study provide recommendations for assisted living facilities to improve the walkability of environments and to create environmental interventions to promote regular walking among their residents. This study also implies several directions for future research.

  16. System and method for measuring residual stress

    DOEpatents

    Prime, Michael B.

    2002-01-01

    The present invention is a method and system for determining the residual stress within an elastic object. In the method, an elastic object is cut along a path having a known configuration. The cut creates a portion of the object having a new free surface. The free surface then deforms to a contour which is different from the path. Next, the contour is measured to determine how much deformation has occurred across the new free surface. Points defining the contour are collected in an empirical data set. The portion of the object is then modeled in a computer simulator. The points in the empirical data set are entered into the computer simulator. The computer simulator then calculates the residual stress along the path which caused the points within the object to move to the positions measured in the empirical data set. The calculated residual stress is then presented in a useful format to an analyst.

  17. Computer-implemented method and apparatus for autonomous position determination using magnetic field data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A. (Inventor)

    2000-01-01

    A computer-implemented method and apparatus for determining position of a vehicle within 100 km autonomously from magnetic field measurements and attitude data without a priori knowledge of position. An inverted dipole solution of two possible position solutions for each measurement of magnetic field data is deterministically calculated by a program-controlled processor solving the inverted first-order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and a vehicle distance from the center of the earth. Correction schemes such as successive substitutions and the Newton-Raphson method are applied to each dipole. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.

  18. Introducing Adaptivity Features to a Regular Learning Management System to Support Creation of Advanced eLessons

    ERIC Educational Resources Information Center

    Komlenov, Zivana; Budimac, Zoran; Ivanovic, Mirjana

    2010-01-01

    In order to improve the learning process for students with different pre-knowledge, personal characteristics and preferred learning styles, a certain degree of adaptability must be introduced to online courses. In learning environments that support such kind of functionalities students can explicitly choose different paths through course contents…

  19. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    ERIC Educational Resources Information Center

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement related to academic performance. Participants were (671) 9th and 10th graders students who completed two scales of "parental involvement" and "school engagement" in their regular classrooms. Results of the path analysis suggested that the…

  20. In Search of the Optimal Path: How Learners at Task Use an Online Dictionary

    ERIC Educational Resources Information Center

    Hamel, Marie-Josee

    2012-01-01

    We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…

  1. An Extended Trajectory Mechanics Approach for Calculating the Path of a Pressure Transient: Derivation and Illustration

    NASA Astrophysics Data System (ADS)

    Vasco, D. W.

    2018-04-01

    Following an approach used in quantum dynamics, an exponential representation of the hydraulic head transforms the diffusion equation governing pressure propagation into an equivalent set of ordinary differential equations. Using a reservoir simulator to determine one set of dependent variables leaves a reduced set of equations for the path of a pressure transient. Unlike the current approach for computing the path of a transient, based on a high-frequency asymptotic solution, the trajectories resulting from this new formulation are valid for arbitrary spatial variations in aquifer properties. For a medium containing interfaces and layers with sharp boundaries, the trajectory mechanics approach produces paths that are compatible with travel time fields produced by a numerical simulator, while the asymptotic solution produces paths that bend too strongly into high permeability regions. The breakdown of the conventional asymptotic solution, due to the presence of sharp boundaries, has implications for model parameter sensitivity calculations and the solution of the inverse problem. For example, near an abrupt boundary, trajectories based on the asymptotic approach deviate significantly from regions of high sensitivity observed in numerical computations. In contrast, paths based on the new trajectory mechanics approach coincide with regions of maximum sensitivity to permeability changes.

  2. TOOTHPASTE V6.11.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sankel, David J.; Clair, Aaron B. St.; Langsfield, Joshua D.

    2006-11-01

    Toothpaste is a graphical user interface and Computer Aided Drafting/Manufacturing (CAD/CAM) software package used to plan tool paths for Galil Motion Control hardware. The software is a tool for computer controlled dispensing of materials. The software may be used for solid freeform fabrication of components or the precision printing of inks. Mathematical calculations are used to produce a set of segments and arcs that when coupled together will fill space. The paths of the segments and arcs are then translated into a machine language that controls the motion of motors and translational stages to produce tool paths in three dimensions. As motion begins, material(s) are dispensed or printed along the three-dimensional pathway.

  3. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Multiple altitude TM thermal infrared images were analyzed and the observed radiance values were computed. The data obtained represent an experimental relation between perceived radiance and altitude. A LOWTRAN approach was tested which incorporates a modification to the path radiance model. This modification assumes that the scattering out of the optical path is equal in magnitude and direction to the scattering into the path. The radiance observed at altitude by an aircraft sensor was used as input to the model. Expected radiance as a function of altitude was then computed down to the ground. The results were not very satisfactory because of somewhat large errors in temperature and because of the difference in the shape of the modeled and experimental curves.

  4. Knock-Outs, Stick-Outs, Cut-Outs: Clipping Paths Separate Objects from Background.

    ERIC Educational Resources Information Center

    Wilson, Bradley

    1998-01-01

    Outlines a six-step process that allows computer operators, using Photoshop software, to create "knock-outs" to precisely define the path that will serve to separate the object from the background. (SR)

  5. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x ∈ (-1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of some comments of Prof. James Bremer of the UC Davis Mathematics Department, who discovered that there were errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
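    For fixed order m, the regular functions satisfy the three-term recurrence (l - m) P_l^m(x) = (2l - 1) x P_{l-1}^m(x) - (l + m - 1) P_{l-2}^m(x). A small unnormalized Python sketch on the cut; the revised Fortran code works with normalized functions precisely because this naive form overflows at the large degrees and orders the revision addresses.

```python
import numpy as np

def legendre_p(l_max, m, x):
    """Regular associated Legendre P_l^m(x) on the cut (-1 < x < 1), fixed
    order m, by upward recurrence in degree l (unnormalized, Condon-Shortley)."""
    # P_m^m = (-1)^m (2m-1)!! (1 - x^2)^(m/2); empty product gives 1 for m = 0
    pmm = (-1.0) ** m * np.prod(np.arange(1, 2 * m, 2)) * (1 - x * x) ** (m / 2)
    if l_max == m:
        return pmm
    plm, plm1 = x * (2 * m + 1) * pmm, pmm     # P_{m+1}^m and P_m^m
    for l in range(m + 2, l_max + 1):
        plm, plm1 = ((2 * l - 1) * x * plm - (l + m - 1) * plm1) / (l - m), plm
    return plm
```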

  6. Digital Parallel Processor Array for Optimum Path Planning

    NASA Technical Reports Server (NTRS)

    Kremeny, Sabrina E. (Inventor); Fossum, Eric R. (Inventor); Nixon, Robert H. (Inventor)

    1996-01-01

    The invention computes the optimum path across a terrain or topology represented by an array of parallel processor cells interconnected between neighboring cells by links extending along different directions to the neighboring cells. Such an array is preferably implemented as a high-speed integrated circuit. The computation of the optimum path is accomplished by, in each cell, receiving stimulus signals from neighboring cells along corresponding directions, determining and storing the identity of a direction along which the first stimulus signal is received, broadcasting a subsequent stimulus signal to the neighboring cells after a predetermined delay time, whereby stimulus signals propagate throughout the array from a starting one of the cells. After propagation of the stimulus signal throughout the array, a master processor traces back from a selected destination cell to the starting cell along an optimum path of the cells in accordance with the identity of the directions stored in each of the cells.
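    In software, the cell array's behavior corresponds to a breadth-first wavefront that stores in each cell the direction from which the stimulus first arrived, followed by a backtrace from the destination; the patent realizes this in parallel hardware with per-cell broadcast delays. A sequential sketch of that analogue:

```python
from collections import deque

def wavefront_path(free, start, goal):
    """Grid analogue of the processor array: a stimulus spreads from `start`,
    each cell stores the direction it was first reached from, and the path is
    recovered by tracing the stored directions back from `goal`."""
    rows, cols = len(free), len(free[0])
    first_dir = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and free[nxt[0]][nxt[1]] and nxt not in first_dir):
                first_dir[nxt] = (-dr, -dc)     # direction back toward the source
                queue.append(nxt)
    if goal not in first_dir:
        return None                             # stimulus never reached the goal
    path, cur = [goal], goal
    while first_dir[cur] is not None:           # trace back the stored directions
        dr, dc = first_dir[cur]
        cur = (cur[0] + dr, cur[1] + dc)
        path.append(cur)
    return path[::-1]
```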

  7. Quantifying Traversability of Terrain for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Seraji, Homayoun; Werger, Barry

    2005-01-01

    A document presents an updated discussion on a method of autonomous navigation for a robotic vehicle navigating across rough terrain. The method involves, among other things, the use of a measure of traversability, denoted the fuzzy traversability index, which embodies the information about the slope and roughness of terrain obtained from analysis of images acquired by cameras mounted on the robot. The improvements presented in the report focus on the use of the fuzzy traversability index to generate a traversability map and a grid map for planning the safest path for the robot. Once grid traversability values have been computed, they are utilized for rejecting unsafe path segments and for computing a traversal-cost function for ranking candidate paths, selected by a search algorithm, from a specified initial position to a specified final position. The output of the algorithm is a set of waypoints designating a path having a minimal traversal cost.

  8. Descent graphs in pedigree analysis: applications to haplotyping, location scores, and marker-sharing statistics.

    PubMed Central

    Sobel, E.; Lange, K.

    1996-01-01

    The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310

  9. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  10. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  11. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  12. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.701 Customarily and regularly. The phrase “customarily and...

  13. A computer method for schedule processing and quick-time updating.

    NASA Technical Reports Server (NTRS)

    Mccoy, W. H.

    1972-01-01

    A schedule analysis program is presented which can be used to process any schedule with continuous flow and with no loops. Although generally thought of as a management tool, it has applicability to such extremes as music composition and computer program efficiency analysis. Other possibilities for its use include the determination of electrical power usage during some operation such as spacecraft checkout, and the determination of impact envelopes for the purpose of scheduling payloads in launch processing. At the core of the described computer method is an algorithm which computes the position of each activity bar on the output waterfall chart. The algorithm is basically a maximal-path computation which gives to each node in the schedule network the maximal path from the initial node to the given node.
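
    The core of the method, a maximal-path computation on a loop-free activity network, amounts to a longest-path pass over a topological order. A minimal sketch under that reading, with illustrative names (`duration` maps each activity to its length, `succs` to its successors):

        from graphlib import TopologicalSorter  # Python 3.9+

        def earliest_starts(duration, succs):
            """Earliest start of every activity = maximal path from the
            initial node(s); this fixes each bar's position on the chart."""
            preds = {n: set() for n in duration}
            for n, outs in succs.items():
                for m in outs:
                    preds[m].add(n)
            start = {}
            for n in TopologicalSorter(preds).static_order():
                # maximal path: the latest-finishing predecessor wins
                start[n] = max((start[p] + duration[p] for p in preds[n]),
                               default=0)
            return start

        # e.g. activity A (3 units) precedes B and C:
        print(earliest_starts({'A': 3, 'B': 2, 'C': 4},
                              {'A': ['B', 'C'], 'B': [], 'C': []}))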

  14. Black Swans and the Effectiveness of Remediating Groundwater Contamination

    NASA Astrophysics Data System (ADS)

    Siegel, D. I.; Otz, M. H.; Otz, I.

    2013-12-01

    Black swans, outliers, dominate science far more than do predictable outcomes. Predictable success constitutes the Black Swan in groundwater remediation. Even the National Research Council concluded that remediating groundwater to drinking water standards has failed in typically complex hydrogeologic settings where heterogeneities and preferential flow paths deflect flow paths obliquely to hydraulic gradients. Natural systems, be they biological or physical, build upon a combination of large-scale regularity coupled to chaos at smaller scales. We show through a review of over 25 case studies that groundwater remediation efforts are best served by coupling parsimonious site characterization to natural and induced geochemical tracer tests to at least know where contamination advects with groundwater in the subsurface. In the majority of our case studies, actual flow paths diverge tens of degrees from anticipated flow paths because of unrecognized heterogeneities in the horizontal direction of transport, let alone the vertical direction. Consequently, regulatory agencies would better serve both the public and the environment by recognizing that long-term groundwater cleanup probably is futile in most hydrogeologic settings except to relaxed standards similar to brownfielding. A Black Swan

  15. Worldline approach for numerical computation of electromagnetic Casimir energies: Scalar field coupled to magnetodielectric media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackrory, Jonathan B.; Bhattacharya, Tanmoy; Steck, Daniel A.

    Here, we present a worldline method for the calculation of Casimir energies for scalar fields coupled to magnetodielectric media. The scalar model we consider may be applied in arbitrary geometries, and it corresponds exactly to one polarization of the electromagnetic field in planar layered media. Starting from the field theory for electromagnetism, we work with the two decoupled polarizations in planar media and develop worldline path integrals, which represent the two polarizations separately, for computing both Casimir and Casimir-Polder potentials. We then show analytically that the path integrals for the transverse-electric polarization coupled to a dielectric medium converge to the proper solutions in certain special cases, including the Casimir-Polder potential of an atom near a planar interface, and the Casimir energy due to two planar interfaces. We also evaluate the path integrals numerically via Monte Carlo path-averaging for these cases, studying the convergence and performance of the resulting computational techniques. Lastly, while these scalar methods are only exact in particular geometries, they may serve as an approximation for Casimir energies for the vector electromagnetic field in other geometries.

  16. Worldline approach for numerical computation of electromagnetic Casimir energies: Scalar field coupled to magnetodielectric media

    DOE PAGES

    Mackrory, Jonathan B.; Bhattacharya, Tanmoy; Steck, Daniel A.

    2016-10-12

    Here, we present a worldline method for the calculation of Casimir energies for scalar fields coupled to magnetodielectric media. The scalar model we consider may be applied in arbitrary geometries, and it corresponds exactly to one polarization of the electromagnetic field in planar layered media. Starting from the field theory for electromagnetism, we work with the two decoupled polarizations in planar media and develop worldline path integrals, which represent the two polarizations separately, for computing both Casimir and Casimir-Polder potentials. We then show analytically that the path integrals for the transverse-electric polarization coupled to a dielectric medium converge to the proper solutions in certain special cases, including the Casimir-Polder potential of an atom near a planar interface, and the Casimir energy due to two planar interfaces. We also evaluate the path integrals numerically via Monte Carlo path-averaging for these cases, studying the convergence and performance of the resulting computational techniques. Lastly, while these scalar methods are only exact in particular geometries, they may serve as an approximation for Casimir energies for the vector electromagnetic field in other geometries.
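
    The geometric kernel of such a worldline computation is simple to sketch: generate closed Brownian-bridge "worldline" loops, rescale and center them, and test whether a loop pierces the bodies (here, Dirichlet planes at z = 0 and z = a). The sketch below shows only this loop-generation and intersection test; the full method integrates such indicators over proper time and loop ensembles, which is omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        def unit_loop(n_points):
            """Closed Brownian bridge in the coordinate normal to the plates:
            a Gaussian random walk pinned back to its starting point."""
            steps = rng.normal(size=n_points) / np.sqrt(n_points)
            walk = np.cumsum(steps)
            t = np.arange(1, n_points + 1) / n_points
            return walk - t * walk[-1]          # starts and ends at 0

        def pierces_both(center_z, scale, a, loop):
            """Does the scaled loop centered at center_z cross both planes?"""
            z = center_z + scale * loop
            return z.min() <= 0.0 and z.max() >= a

        loops = [unit_loop(512) for _ in range(2000)]
        frac = np.mean([pierces_both(0.5, 1.0, 1.0, lp) for lp in loops])
        print(f"fraction of loops seeing both plates: {frac:.3f}")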

  17. Traffic engineering and regenerator placement in GMPLS networks with restoration

    NASA Astrophysics Data System (ADS)

    Yetginer, Emre; Karasan, Ezhan

    2002-07-01

    In this paper we study regenerator placement and traffic engineering of restorable paths in Generalized Multiprotocol Label Switching (GMPLS) networks. Regenerators are necessary in optical networks due to transmission impairments. We study a network architecture where there are regenerators at selected nodes and we propose two heuristic algorithms for the regenerator placement problem. The performance of these algorithms, in terms of the required number of regenerators and computational complexity, is evaluated. In this network architecture with sparse regeneration, offline computation of working and restoration paths is studied with bandwidth reservation and path rerouting as the restoration scheme. We study two approaches for selecting working and restoration paths from a set of candidate paths and formulate each method as an Integer Linear Programming (ILP) problem. A traffic uncertainty model is developed in order to compare these methods based on their robustness with respect to changing traffic patterns. Traffic engineering methods are compared based on the number of additional demands due to traffic uncertainty that can be carried. Regenerator placement algorithms are also evaluated from a traffic engineering point of view.

  18. Generalized causal mediation and path analysis: Extensions and practical considerations.

    PubMed

    Albert, Jeffrey M; Cho, Jang Ik; Liu, Yiying; Nelson, Suchitra

    2018-01-01

    Causal mediation analysis seeks to decompose the effect of a treatment or exposure among multiple possible paths and provide causally interpretable path-specific effect estimates. Recent advances have extended causal mediation analysis to situations with a sequence of mediators or multiple contemporaneous mediators. However, available methods still have limitations, and computational and other challenges remain. The present paper provides an extended causal mediation and path analysis methodology. The new method, implemented in the new R package, gmediation (described in a companion paper), accommodates both a sequence (two stages) of mediators and multiple mediators at each stage, and allows for multiple types of outcomes following generalized linear models. The methodology can also handle unsaturated models and clustered data. Addressing other practical issues, we provide new guidelines for the choice of a decomposition, and for the choice of a reference group multiplier for the reduction of Monte Carlo error in mediation formula computations. The new method is applied to data from a cohort study to illuminate the contribution of alternative biological and behavioral paths in the effect of socioeconomic status on dental caries in adolescence.

  19. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations, one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical, centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply, PI-FEP) method along with bisection sampling is summarized; it provides an accurate and fast-convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications, to highlight the computational precision and accuracy, the rule of geometrical mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the effect of protein dynamics on the temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.

  20. Geologic, hydrologic, and geochemical identification of flow paths in the Edwards Aquifer, northeastern Bexar and southern Comal Counties, Texas

    USGS Publications Warehouse

    Otero, Cassi L.

    2007-01-01

    The U.S. Geological Survey, in cooperation with the San Antonio Water System, conducted a 4-year study during 2002–06 to identify major flow paths in the Edwards aquifer in northeastern Bexar and southern Comal Counties (study area). In the study area, faulting directs ground water into three hypothesized flow paths that move water, generally, from the southwest to the northeast. These flow paths are identified as the southern Comal flow path, the central Comal flow path, and the northern Comal flow path. Statistical correlations between water levels for six observation wells and between the water levels and discharges from Comal Springs and Hueco Springs yielded evidence for the hypothesized flow paths. Strong linear correlations were evident between the datasets from wells and springs within the same flow path and the datasets from wells in areas where flow between flow paths was suspected. Geochemical data (major ions, stable isotopes, sulfur hexafluoride, and tritium and helium) were used in graphical analyses to obtain evidence of the flow path from which wells or springs derive water. Major-ion geochemistry in samples from selected wells and springs showed relatively little variation. Samples from the southern Comal flow path were characterized by relatively high sulfate and chloride concentrations, possibly indicating that the water in the flow path was mixing with small amounts of saline water from the freshwater/saline-water transition zone. Samples from the central Comal flow path yielded the most varied major-ion geochemistry of the three hypothesized flow paths. Central Comal flow path samples were characterized, in general, by high calcium concentrations and low magnesium concentrations. Samples from the northern Comal flow path were characterized by relatively low sulfate and chloride concentrations and high magnesium concentrations. The high magnesium concentrations characteristic of northern Comal flow path samples from the recharge zone in Comal County might indicate that water from the Trinity aquifer is entering the Edwards aquifer in the subsurface. A graph of the relation between the stable isotopes deuterium and delta-18 oxygen showed that, except for samples collected following an unusually intense rain storm, there was not much variation in stable isotope values among the flow paths. In the study area deuterium ranged from -36.00 to -20.89 per mil and delta-18 oxygen ranged from -6.03 to -3.70 per mil. Excluding samples collected following the intense rain storm, the deuterium range in the study area was -33.00 to -20.89 per mil and the delta-18 oxygen range was -4.60 to -3.70 per mil. Two ground-water age-dating techniques, sulfur hexafluoride concentrations and tritium/helium-3 isotope ratios, were used to compute apparent ages (time since recharge occurred) of water samples collected in the study area. In general, the apparent ages computed by the two methods do not seem to indicate direction of flow. Apparent ages computed for water samples in northeastern Bexar and southern Comal Counties do not vary greatly except for some very young water in the recharge zone in central Comal County.

  1. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    PubMed Central

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472
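
    The cubic spline path is ordinary cubic Hermite interpolation between the measured boundary conditions. A sketch assuming each proton's entry/exit positions and unit direction vectors are known from the trackers (names illustrative); the straight-line path is the degenerate case that ignores the direction information.

        import numpy as np

        def cubic_spline_path(p_in, d_in, p_out, d_out, n=100):
            """Cubic Hermite curve from entry point/direction to exit
            point/direction; returns n sample points along the estimate."""
            p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
            p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
            L = np.linalg.norm(p_out - p_in)    # scale tangents by chord length
            t = np.linspace(0.0, 1.0, n)[:, None]
            h00 = 2*t**3 - 3*t**2 + 1           # Hermite basis functions
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            return h00*p_in + h10*L*d_in + h01*p_out + h11*L*d_out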

  2. Parameter optimization on the convergence surface of path simulations

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, Srinivas Niranj

    Computational treatments of protein conformational changes tend to focus on the trajectories themselves, despite the fact that it is the transition state structures that contain information about the barriers that impose multi-state behavior. PATH is an algorithm that computes a transition pathway between two protein crystal structures, along with the transition state structure, by minimizing the Onsager-Machlup action functional. It is rapid but depends on several unknown input parameters whose range of different values can potentially generate different transition-state structures. Transition-state structures arising from different input parameters cannot be uniquely compared with those generated by other methods. I outline modifications to the PATH algorithm that estimate these input parameters in a manner that circumvents these difficulties, and describe two complementary tests that validate the transition-state structures found by the PATH algorithm. First, I show that although the PATH algorithm and two other approaches to computing transition pathways produce different low-energy structures connecting the initial and final ground-states with the transition state, all three methods agree closely on the configurations of their transition states. Second, I show that the PATH transition states are close to the saddle points of free-energy surfaces connecting initial and final states generated by replica-exchange Discrete Molecular Dynamics simulations. I show that aromatic side-chain rearrangements create similar potential energy barriers in the transition-state structures identified by PATH for a signaling protein, a contractile protein, and an enzyme. Finally, I observed, but cannot account for, the fact that trajectories obtained for all-atom and Cα-only simulations identify transition state structures in which the Cα atoms are in essentially the same positions. The consistency between transition-state structures derived by different algorithms for unrelated protein systems argues that although functionally important protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. In the end, I outline the strategies that could enhance the efficiency and applicability of PATH.

  3. A binary-decision-diagram-based two-bit arithmetic logic unit on a GaAs-based regular nanowire network with hexagonal topology.

    PubMed

    Zhao, Hong-Quan; Kasai, Seiya; Shiratori, Yuta; Hashizume, Tamotsu

    2009-06-17

    A two-bit arithmetic logic unit (ALU) was successfully fabricated on a GaAs-based regular nanowire network with hexagonal topology. This fundamental building block of central processing units can be implemented on a regular nanowire network structure with simple circuit architecture based on graphical representation of logic functions using a binary decision diagram and topology control of the graph. The four-instruction ALU was designed by integrating subgraphs representing each instruction, and the circuitry was implemented by transferring the logical graph structure to a GaAs-based nanowire network formed by electron beam lithography and wet chemical etching. A path switching function was implemented in nodes by Schottky wrap gate control of nanowires. The fabricated circuit integrating 32 node devices exhibits the correct output waveforms at room temperature, allowing for threshold voltage variation.

  4. Two arm robot path planning in a static environment using polytopes and string stretching. Thesis

    NASA Technical Reports Server (NTRS)

    Schima, Francis J., III

    1990-01-01

    The two arm robot path planning problem has been analyzed and reduced into components to be simplified. This thesis examines one component in which two Puma-560 robot arms are simultaneously holding a single object. The problem is to find a path between two points around obstacles which is relatively fast and minimizes the distance. The thesis involves creating a structure on which to form an advanced path planning algorithm which could ideally find the optimum path. An actual path planning method is implemented which is simple though effective in most common situations. Given the limits of computer technology, a 'good' path is currently found. Objects in the workspace are modeled with polytopes. These are used because they can be used for rapid collision detection and still provide a representation which is adequate for path planning.

  5. Neural correlates of lexicon and grammar: evidence from the production, reading, and judgment of inflection in aphasia.

    PubMed

    Ullman, Michael T; Pancheva, Roumyana; Love, Tracy; Yee, Eiling; Swinney, David; Hickok, Gregory

    2005-05-01

    Are the linguistic forms that are memorized in the mental lexicon and those that are specified by the rules of grammar subserved by distinct neurocognitive systems or by a single computational system with relatively broad anatomic distribution? On a dual-system view, the productive -ed-suffixation of English regular past tense forms (e.g., look-looked) depends upon the mental grammar, whereas irregular forms (e.g., dig-dug) are retrieved from lexical memory. On a single-mechanism view, the computation of both past tense types depends on associative memory. Neurological double dissociations between regulars and irregulars strengthen the dual-system view. The computation of real and novel, regular and irregular past tense forms was investigated in 20 aphasic subjects. Aphasics with non-fluent agrammatic speech and left frontal lesions were consistently more impaired at the production, reading, and judgment of regular than irregular past tenses. Aphasics with fluent speech and word-finding difficulties, and with left temporal/temporo-parietal lesions, showed the opposite pattern. These patterns held even when measures of frequency, phonological complexity, articulatory difficulty, and other factors were held constant. The data support the view that the memorized words of the mental lexicon are subserved by a brain system involving left temporal/temporo-parietal structures, whereas aspects of the mental grammar, in particular the computation of regular morphological forms, are subserved by a distinct system involving left frontal structures.

  6. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
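
    A loose sketch of the two regularizations combined here, under these assumptions: the secant pair is modified by a small delta so the curvature estimate stays safely positive, and a small identity multiple gamma·I is added to the inverse-Hessian estimate before stepping. Constants and details are illustrative, not the paper's exact construction; H would typically start from the identity.

        import numpy as np

        def res_step(x, H, grad_fn, eps=0.05, delta=1e-3, gamma=1e-2):
            """One regularized stochastic BFGS iteration; grad_fn(x) returns
            a stochastic gradient, H approximates the inverse Hessian."""
            g = grad_fn(x)
            d = -(H + gamma * np.eye(len(x))) @ g     # regularized direction
            x_new = x + eps * d
            s = x_new - x
            y = grad_fn(x_new) - g - delta * s        # modified secant pair
            sy = s @ y
            if sy > 1e-12:                            # standard inverse-BFGS update
                rho = 1.0 / sy
                V = np.eye(len(x)) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            return x_new, H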

  7. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere on which the analytical solutions are available and on a series of proteins with various sizes.

  8. 5 CFR 550.1305 - Treatment as basic pay.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... within the regular tour of duty, but outside the basic 40-hour workweek, is basic pay only for the... hours that are part of a firefighter's regular tour of duty (as computed under § 550.1303) and the... overtime pay for hours in a firefighter's regular tour of duty is derived by multiplying the applicable...

  9. Robot path planning algorithm based on symbolic tags in dynamic environment

    NASA Astrophysics Data System (ADS)

    Vokhmintsev, A.; Timchenko, M.; Melnikov, A.; Kozko, A.; Makovetskii, A.

    2017-09-01

    The present work proposes new heuristic algorithms for path planning of a mobile robot in an unknown dynamic environment; the algorithms have theoretically proven estimates of computational complexity and have been validated on specific applied problems.

  10. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

  11. Polarization Considerations for the Laser Interferometer Space Antenna

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Pedersen, Tracy R.; McNamara, Paul

    2005-01-01

    A polarization ray trace model of the Laser Interferometer Space Antenna's (LISA) optical path is being created. The model will be able to assess the effects of various polarizing elements and the optical coatings on the required, very long path length, picometer level dynamic interferometry. The computational steps are described. This should eliminate any ambiguities associated with polarization ray tracing of interferometers and provide a basis for determining the computer model's limitations and serve as a clearly defined starting point for future work.

  12. Computer-Aided Remote Driving

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    1994-01-01

    System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.

  13. Lizard movement tracks: variation in path re-use behaviour is consistent with a scent-marking function

    PubMed Central

    Jackson, Grant; Roddick, John F.; Bull, C. Michael

    2016-01-01

    Individual movement influences the spatial and social structuring of a population. Animals regularly use the same paths to move efficiently to familiar places, or to patrol and mark home ranges. We found that Australian sleepy lizards (Tiliqua rugosa), a monogamous species with stable pair-bonds, repeatedly used the same paths within their home ranges and investigated whether path re-use functions as a scent-marking behaviour, or whether it is influenced by site familiarity. Lizards can leave scent trails on the substrate when moving through the environment and have a well-developed vomeronasal system to detect and respond to those scents. Path re-use would allow sleepy lizards to concentrate scent marks along these well-used trails, advertising their presence. Hypotheses of mate attraction and mating competition predict that sleepy lizard males, which experience greater intra-sexual competition, mark more strongly. Consistent with those hypotheses, males re-used their paths more than females, and lizards that showed pairing behaviour with individuals of the opposite sex re-used paths more than unpaired lizards, particularly among females. Hinterland marking is most economic when home ranges are large and mobility is low, as is the case in the sleepy lizard. Consistent with this strategy, re-used paths were predominantly located in the inner 50% home range areas. Together, our detailed movement analyses suggest that path re-use is a scent marking behaviour in the sleepy lizard. We also investigated but found less support for alternative explanations of path re-use behaviour, such as site familiarity and spatial knowledge. Lizards established the same number of paths, and used them as often, whether they had occupied their home ranges for one or for more years. We discuss our findings in relation to maintenance of the monogamous mating system of this species, and the spatial and social structuring of the population. PMID:27019790

  14. Lizard movement tracks: variation in path re-use behaviour is consistent with a scent-marking function.

    PubMed

    Leu, Stephan T; Jackson, Grant; Roddick, John F; Bull, C Michael

    2016-01-01

    Individual movement influences the spatial and social structuring of a population. Animals regularly use the same paths to move efficiently to familiar places, or to patrol and mark home ranges. We found that Australian sleepy lizards (Tiliqua rugosa), a monogamous species with stable pair-bonds, repeatedly used the same paths within their home ranges and investigated whether path re-use functions as a scent-marking behaviour, or whether it is influenced by site familiarity. Lizards can leave scent trails on the substrate when moving through the environment and have a well-developed vomeronasal system to detect and respond to those scents. Path re-use would allow sleepy lizards to concentrate scent marks along these well-used trails, advertising their presence. Hypotheses of mate attraction and mating competition predict that sleepy lizard males, which experience greater intra-sexual competition, mark more strongly. Consistent with those hypotheses, males re-used their paths more than females, and lizards that showed pairing behaviour with individuals of the opposite sex re-used paths more than unpaired lizards, particularly among females. Hinterland marking is most economic when home ranges are large and mobility is low, as is the case in the sleepy lizard. Consistent with this strategy, re-used paths were predominantly located in the inner 50% home range areas. Together, our detailed movement analyses suggest that path re-use is a scent marking behaviour in the sleepy lizard. We also investigated but found less support for alternative explanations of path re-use behaviour, such as site familiarity and spatial knowledge. Lizards established the same number of paths, and used them as often, whether they had occupied their home ranges for one or for more years. We discuss our findings in relation to maintenance of the monogamous mating system of this species, and the spatial and social structuring of the population.

  15. Effectiveness of a Computer-Tailored Print-Based Physical Activity Intervention among French Canadians with Type 2 Diabetes in a Real-Life Setting

    ERIC Educational Resources Information Center

    Boudreau, Francois; Godin, Gaston; Poirier, Paul

    2011-01-01

    The promotion of regular physical activity for people with type 2 diabetes poses a challenge for public health authorities. The purpose of this study was to evaluate the efficiency of a computer-tailored print-based intervention to promote the adoption of regular physical activity among people with type 2 diabetes. An experimental design was…

  16. Sampling-Based Coverage Path Planning for Complex 3D Structures

    DTIC Science & Technology

    2012-09-01

    one such task, in which a single robot must sweep its end effector over the entirety of a known workspace. For two-dimensional environments, optimal...structures. First, we introduce a new algorithm for planning feasible coverage paths. It is more computationally efficient in problems of complex geometry...iteratively shortens and smooths a feasible coverage path; robot configurations are adjusted without violating any coverage constraints. Third, we propose

  17. Singularity of the time-energy uncertainty in adiabatic perturbation and cycloids on a Bloch sphere

    PubMed Central

    Oh, Sangchul; Hu, Xuedong; Nori, Franco; Kais, Sabre

    2016-01-01

    Adiabatic perturbation is shown to be singular from the exact solution of a spin-1/2 particle in a uniformly rotating magnetic field. Due to a non-adiabatic effect, its quantum trajectory on a Bloch sphere is a cycloid traced by a circle rolling along an adiabatic path. As the magnetic field rotates more and more slowly, the time-energy uncertainty, proportional to the length of the quantum trajectory, calculated by the exact solution is entirely different from the one obtained by the adiabatic path traced by the instantaneous eigenstate. However, the non-adiabatic Aharonov–Anandan geometric phase, measured by the area enclosed by the exact path, approaches smoothly the adiabatic Berry phase, proportional to the area enclosed by the adiabatic path. The singular limit of the time-energy uncertainty and the regular limit of the geometric phase are associated with the arc length and arc area of the cycloid on a Bloch sphere, respectively. Prolate and curtate cycloids are also traced by different initial states outside and inside of the rolling circle, respectively. The axis trajectory of the rolling circle, parallel to the adiabatic path, is shown to be an example of transitionless driving. The non-adiabatic resonance is visualized by the number of cycloid arcs. PMID:26916031

  18. Functional Itô versus Banach space stochastic calculus and strict solutions of semilinear path-dependent equations

    NASA Astrophysics Data System (ADS)

    Cosso, Andrea; Russo, Francesco

    2016-11-01

    Functional Itô calculus was introduced in order to expand a functional F(t, X_{·+t}, X_t) depending on time t and on the past and present values of the process X. Another possibility to expand F(t, X_{·+t}, X_t) consists in considering the path X_{·+t} = {X_{x+t} : x ∈ [−T, 0]} as an element of the Banach space C([−T, 0]) of continuous functions on [−T, 0] and to use Banach space stochastic calculus. The aim of this paper is threefold. (1) To reformulate functional Itô calculus, separating time and past, making use of the regularization procedures which match more naturally the notion of horizontal derivative, which is one of the tools of that calculus. (2) To exploit this reformulation in order to discuss the (not obvious) relation between the functional and the Banach space approaches. (3) To study existence and uniqueness of smooth solutions to path-dependent partial differential equations which naturally arise in the study of functional Itô calculus. More precisely, we study a path-dependent equation of Kolmogorov type which is related to the window process of the solution to an Itô stochastic differential equation with path-dependent coefficients. We also study a semilinear version of that equation.

  19. A novel representation for planning 3-D collision-free paths

    NASA Technical Reports Server (NTRS)

    Bonner, Susan; Kelley, Robert B.

    1990-01-01

    A new scheme for the representation of objects, the successive spherical approximation (SSA), facilitates the rapid planning of collision-free paths in a dynamic three-dimensional environment. The hierarchical nature of the SSA allows collisions to be determined efficiently while still providing an exact representation of objects. The rapidity with which collisions can be detected, less than 1 sec per environment object per path, makes it possible to use a generate-and-test path-planning strategy driven by human conceptual knowledge to determine collision-free paths in a matter of seconds on a Sun 3/180 computer. A hierarchy of rules, based on the concept of a free space cell, is used to find heuristically satisfying collision-free paths in a structured environment.
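
    A sketch of why the hierarchy makes collision checks fast, assuming each object is stored as a tree of bounding spheres whose children refine their parent: two objects can only collide where spheres overlap, so most branches are pruned after a single distance test. Structure and names are illustrative.

        from math import dist

        class SphereNode:
            def __init__(self, center, radius, children=()):
                self.center, self.radius, self.children = center, radius, children

        def collide(a, b):
            """Recursive sphere-hierarchy test: prune on the first
            disjoint pair, refine the coarser node otherwise."""
            if dist(a.center, b.center) > a.radius + b.radius:
                return False                   # bounding spheres disjoint
            if not a.children and not b.children:
                return True                    # two leaf spheres overlap
            if a.children and (not b.children or a.radius >= b.radius):
                return any(collide(c, b) for c in a.children)
            return any(collide(a, c) for c in b.children)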

  20. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

    PubMed Central

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872

  1. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.

    PubMed

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.
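
    Outside the neural implementation, the path-integration core is dead reckoning: accumulate each step's displacement from compass and odometric cues, and negate the total to get the home vector. A minimal sketch (the model encodes the same quantity as activity patterns in circular neural arrays rather than as two floats):

        import numpy as np

        class PathIntegrator:
            """Home vector from compass (heading) and odometric (distance) cues."""
            def __init__(self):
                self.position = np.zeros(2)

            def step(self, heading_rad, distance):
                self.position += distance * np.array([np.cos(heading_rad),
                                                      np.sin(heading_rad)])

            def home_vector(self):
                """Vector pointing from the current location back to the nest."""
                return -self.position

        agent = PathIntegrator()
        for heading, d in [(0.0, 2.0), (np.pi / 2, 1.0)]:  # outbound trip
            agent.step(heading, d)
        print(agent.home_vector())                         # -> [-2. -1.]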

  2. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system. Each router needs to recompute a new SPT rooted from itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one using static algorithms such as the Dijkstra algorithm at the beginning. Such recomputation of an entire SPT is inefficient, which may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
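
    For reference, the static recomputation the dynamic algorithms try to avoid is a full Dijkstra pass that rebuilds the tree from the root after every link-state change; the dynamic variants instead reuse most of the previous tree. A minimal SPT sketch with illustrative names:

        import heapq

        def shortest_path_tree(adj, root):
            """Dijkstra SPT. adj[u] = list of (v, weight); returns the
            parent map encoding the tree rooted at `root`, plus distances."""
            dist = {root: 0.0}
            parent = {root: None}
            heap = [(0.0, root)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float('inf')):
                    continue                   # stale heap entry
                for v, w in adj[u]:
                    nd = d + w
                    if nd < dist.get(v, float('inf')):
                        dist[v], parent[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            return parent, dist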

  3. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.

  4. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE PAGES

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    2016-03-24

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
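
    For contrast with the multi-source, tree-grafting parallel algorithm above, the classic serial baseline grows one alternating search tree per unmatched left vertex and augments when it reaches a free right vertex (Kuhn's algorithm). A sketch of that baseline, not of the paper's method:

        def max_bipartite_matching(adj, n_left):
            """adj[u] lists right-side neighbours of left vertex u.
            Returns (match_r, size): right vertex -> matched left vertex."""
            match_r = {}

            def augment(u, seen):
                for v in adj[u]:
                    if v in seen:
                        continue
                    seen.add(v)
                    # v is free, or its current partner can be re-matched:
                    if v not in match_r or augment(match_r[v], seen):
                        match_r[v] = u        # flip edges along the path
                        return True
                return False

            size = sum(augment(u, set()) for u in range(n_left))
            return match_r, size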

  5. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
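
    Stripped of the spherical-harmonic machinery, the underlying operation is ridge-regularized least squares: damping the poorly constrained combinations of coefficients that otherwise appear as stripes. A minimal numerical sketch, with lam standing in for the regularization weight:

        import numpy as np

        def tikhonov_solve(A, b, lam):
            """Solve min ||Ax - b||^2 + lam^2 ||x||^2, equivalently
            (A^T A + lam^2 I) x = A^T b, by stacking damping rows."""
            n = A.shape[1]
            A_aug = np.vstack([A, lam * np.eye(n)])
            b_aug = np.concatenate([b, np.zeros(n)])
            x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
            return x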

  6. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.

  7. A Behavioral Study of Regularity, Irregularity and Rules in the English Past Tense

    ERIC Educational Resources Information Center

    Magen, Harriet S.

    2014-01-01

    Opposing views of storage and processing of morphologically complex words (e.g., past tense) have been suggested: the dual system, whereby regular forms are not in the lexicon but are generated by rule, while irregular forms are explicitly represented; the single system, whereby regular and irregular forms are computed by a single system, using…

  8. Directed polymers on a disordered tree with a defect subtree

    NASA Astrophysics Data System (ADS)

    Madras, Neal; Yıldırım, Gökhan

    2018-04-01

    We study the question of how the competition between bulk disorder and a localized microscopic defect affects the macroscopic behavior of a system in the directed polymer context at the free energy level. We consider the directed polymer model on a disordered d-ary tree and represent the localized microscopic defect by modifying the disorder distribution at each vertex in a single path (branch), or in a subtree, of the tree. The polymer must choose between following the microscopic defect and finding the best branches through the bulk disorder. We describe three possible phases, called the fully pinned, partially pinned and depinned phases. When the microscopic defect is associated only with a single branch, we compute the free energy and the critical curve of the model, and show that the partially pinned phase does not occur. When the localized microscopic defect is associated with a non-disordered regular subtree of the disordered tree, the picture is more complicated. We prove that all three phases are non-empty below a critical temperature, and that the partially pinned phase disappears above the critical temperature.

  9. Methods for Generating Complex Networks with Selected Structural Properties for Simulations: A Review and Tutorial for Neuroscientists

    PubMed Central

    Prettejohn, Brenton J.; Berryman, Matthew J.; McDonnell, Mark D.

    2011-01-01

    Many simulations of networks in computational neuroscience assume completely homogenous random networks of the Erdös–Rényi type, or regular networks, despite it being recognized for some time that anatomical brain networks are more complex in their connectivity and can, for example, exhibit the “scale-free” and “small-world” properties. We review the most well known algorithms for constructing networks with given non-homogeneous statistical properties and provide simple pseudo-code for reproducing such networks in software simulations. We also review some useful mathematical results and approximations associated with the statistics that describe these network models, including degree distribution, average path length, and clustering coefficient. We demonstrate how such results can be used as partial verification and validation of implementations. Finally, we discuss a sometimes overlooked modeling choice that can be crucially important for the properties of simulated networks: that of network directedness. The most well known network algorithms produce undirected networks, and we emphasize this point by highlighting how simple adaptations can instead produce directed networks. PMID:21441986
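
    As one concrete example of the generators reviewed, the Watts-Strogatz construction starts from a regular ring lattice and rewires each edge with probability p, interpolating between regular (p = 0) and random (p = 1) networks through the small-world regime. A short sketch assuming the networkx library, which also computes the verification statistics mentioned above:

        import networkx as nx

        # ring lattice of 1000 nodes, 10 neighbours each, 10% rewiring;
        # the 'connected' variant retries until the graph is connected
        G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)

        # statistics usable for partial verification of an implementation
        print("average path length:", nx.average_shortest_path_length(G))
        print("clustering coefficient:", nx.average_clustering(G))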

  10. Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael; Boldyrev, Stanislav; Chang, Choong-Seock

    This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015, and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.

  11. Regularity of center-of-pressure trajectories depends on the amount of attention invested in postural control

    PubMed Central

    Donker, Stella F.; Roerdink, Melvyn; Greven, An J.

    2007-01-01

    The influence of attention on the dynamical structure of postural sway was examined in 30 healthy young adults by manipulating the focus of attention. In line with the proposed direct relation between the amount of attention invested in postural control and regularity of center-of-pressure (COP) time series, we hypothesized that: (1) increasing cognitive involvement in postural control (i.e., creating an internal focus by increasing task difficulty through visual deprivation) increases COP regularity, and (2) withdrawing attention from postural control (i.e., creating an external focus by performing a cognitive dual task) decreases COP regularity. We quantified COP dynamics in terms of sample entropy (regularity), standard deviation (variability), sway-path length of the normalized posturogram (curviness), largest Lyapunov exponent (local stability), correlation dimension (dimensionality) and scaling exponent (scaling behavior). Consistent with hypothesis 1, standing with eyes closed significantly increased COP regularity. Furthermore, variability increased and local stability decreased, implying ineffective postural control. Conversely, and in line with hypothesis 2, performing a cognitive dual task while standing with eyes closed led to greater irregularity and smaller variability, suggesting an increase in the “efficiency” or “automaticity” of postural control. In conclusion, these findings not only indicate that regularity of COP trajectories is positively related to the amount of attention invested in postural control, but also substantiate that in certain situations an increased internal focus may in fact be detrimental to postural control. PMID:17401553
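
    Sample entropy, the regularity measure used above, can be computed directly: count template matches of length m and of length m+1 within tolerance r, and take the negative log of their ratio, so lower values mean more regular sway. A plain sketch (conventions vary slightly across implementations):

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """SampEn(m, r) = -log(A / B); B = matched templates of length m,
            A = matched templates of length m+1; r = r_factor * std(x)."""
            x = np.asarray(x, float)
            r = r_factor * x.std()
            def matches(mm):
                templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                c = 0
                for i in range(len(templ) - 1):
                    # Chebyshev distance to all later templates (i < j pairs)
                    d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
                    c += int((d <= r).sum())
                return c
            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        rng = np.random.default_rng(1)
        print(sample_entropy(np.sin(np.linspace(0, 20, 500))))  # regular: low
        print(sample_entropy(rng.normal(size=500)))             # irregular: high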

  12. Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.

    PubMed

    López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth

    2010-08-01

    In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.

  13. Arctic curves in path models from the tangent method

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  14. Computer Tomography 3-D Imaging of the Metal Deformation Flow Path in Friction Stir Welding

    NASA Technical Reports Server (NTRS)

    Schneider, Judy; Beshears, Ronald; Nunes, Arthur C., Jr.

    2004-01-01

    In friction stir welding, a rotating threaded pin tool is inserted into a weld seam and literally stirs the edges of the seam together. This solid-state technique has been successfully used in the joining of materials that are difficult to fusion weld such as aluminum alloys. To determine optimal processing parameters for producing a defect free weld, a better understanding of the resulting metal deformation flow path is required. Marker studies are the principal method of studying the metal deformation flow path around the FSW pin tool. In our study, we have used computed tomography (CT) scans to reveal the flow pattern of a lead wire embedded in a FSW weld seam. At the welding temperature of aluminum, the lead becomes molten and thus tracks the aluminum deformation flow paths in a unique 3-dimensional manner. CT scanning is a convenient and comprehensive way of collecting and displaying tracer data. It marks an advance over previous more tedious and ambiguous radiographic/metallographic data collection methods.

  15. Amalgamation of East Eurasia Since Late Paleozoic: Constraints from the Apparent Polar Wander Paths of the Major China Blocks

    NASA Astrophysics Data System (ADS)

    Wu, L.; Kravchinsky, V. A.; Potter, D. K.

    2014-12-01

    Quantitatively reconstructing the paleogeographic evolution of East Eurasia has been a longstanding challenge over the last few decades because of its great tectonic complexity. As the core region, the major China cratons, including the North China Block, South China Block and Tarim Block, hold the key clues for understanding the amalgamation history, tectonic activity and biological affinity among the component blocks and terranes of East Eurasia. Compared with the major Gondwana and Laurentia plates, however, the apparent polar wander paths of China are not well constrained, owing to an outdated paleomagnetic database and a relatively loose pole selection process. By incorporating the new high-fidelity poles published in the last decade, rejecting low-quality data, and strictly implementing Van der Voo's grading scheme, we build an updated paleomagnetic database for the three blocks, from which three types of apparent polar wander paths (APWP) are computed. Version 1 running mean paths are constructed during pole selection and compared with those from previous publications. Version 2 running mean and spline paths with different sliding time windows are computed from the thoroughly examined poles to find optimal paths with a steady trend and reasonable polar-drift and plate-rotation speeds. The spline paths are recommended for plate reconstructions, although the poor data coverage during certain periods should be borne in mind. Our new China APWPs, together with the latest European reference path and the geological, geochronological and biological evidence from the studied Asian plates, allow us to reevaluate the paleogeographic and tectonic history of East Eurasia.
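    A hedged sketch of the running-mean construction (with hypothetical poles; the actual database, windowing choices and spline fitting are the paper's): average the unit vectors of all poles falling in a sliding age window.

    ```python
    import numpy as np

    def running_mean_poles(ages, lats, lons, window=20.0, step=10.0):
        """Sliding-window running mean of paleomagnetic poles (vector mean).

        ages in Myr, lats/lons in degrees; all inputs here are hypothetical."""
        def to_xyz(lat, lon):
            return np.array([np.cos(lat) * np.cos(lon),
                             np.cos(lat) * np.sin(lon),
                             np.sin(lat)])
        path = []
        for t in np.arange(min(ages), max(ages) + step, step):
            sel = [i for i, a in enumerate(ages) if abs(a - t) <= window / 2]
            if not sel:
                continue                       # gap in coverage: no mean pole
            v = sum(to_xyz(np.radians(lats[i]), np.radians(lons[i])) for i in sel)
            v /= np.linalg.norm(v)             # Fisher (vector) mean direction
            path.append((t, np.degrees(np.arcsin(v[2])),
                         np.degrees(np.arctan2(v[1], v[0]))))
        return path

    # toy poles (age, lat, lon), purely illustrative
    ages, lats, lons = [100, 110, 120, 130], [75, 72, 70, 66], [160, 170, 175, 185]
    for t, plat, plon in running_mean_poles(ages, lats, lons):
        print(f"{t:6.1f} Myr  mean pole at ({plat:5.1f}N, {plon:6.1f}E)")
    ```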

  16. Subgroups of adolescents differing in physical and social environmental preferences towards cycling for transport: A latent class analysis.

    PubMed

    Verhoeven, Hannah; Ghekiere, Ariane; Van Cauwenberg, Jelle; Van Dyck, Delfien; De Bourdeaudhuij, Ilse; Clarys, Peter; Deforche, Benedicte

    2018-07-01

    In order to be able to tailor environmental interventions to adolescents at risk for low levels of physical activity, the aim of the present study is to identify subgroups of adolescents with different physical and social environmental preferences towards cycling for transport and to determine differences in individual characteristics between these subgroups. In this experimental study, 882 adolescents (12-16 years) completed 15 choice tasks with manipulated photographs. Participants chose between two possible routes to cycle to a friend's house which differed in seven physical micro-environmental factors, cycling distance and co-participation in cycling (i.e. cycling alone or with a friend). Latent class analysis was performed. Data were collected from March till October 2016 across Flanders (Belgium). Three subgroups could be identified. Subgroup 1 attached most importance to separation of the cycle path and safety-related aspects. Subgroup 2 attached most importance to being able to cycle together with a friend and had the highest percentage of regular cyclists. In subgroup 3, the importance of cycling distance clearly stood out. This subgroup included the lowest percentage of regular cyclists. Results showed that in order to stimulate the least regular cyclists, and thus also the subgroup most at risk for low levels of active transport, cycling distances should be as short as possible. In general, results showed that providing well-separated cycle paths which enable adolescents to cycle side by side and introducing shortcuts for cyclists may encourage different subgroups of adolescents to cycle for transport without discouraging other subgroups. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Constructing a logical, regular axis topology from an irregular topology

    DOEpatents

    Faraj, Daniel A.

    2014-07-22

    Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.

  18. Constructing a logical, regular axis topology from an irregular topology

    DOEpatents

    Faraj, Daniel A.

    2014-07-01

    Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.
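    The recursive construction in the two records above can be paraphrased in a few lines. The sketch below is a toy reading, assuming the subcommunicator's irregular topology is given as an adjacency (nearest-neighbor) list for one axial dimension; it is not the patented implementation.

    ```python
    # Hedged sketch: grow a logical line by recursively visiting neighbors.
    def build_logical_line(start, neighbors):
        line = [start]

        def visit(node):
            for nbr in neighbors[node]:
                if nbr not in line:      # neighbor available to add
                    line.append(nbr)     # add it to the logical line
                    visit(nbr)           # "call" the added compute node
            # no available neighbor: return to the calling compute node

        visit(start)
        return line

    # Irregular topology for one axial dimension (toy example).
    neighbors = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
    print(build_logical_line(0, neighbors))   # -> [0, 1, 2, 3, 4]
    ```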

  19. Georges Lemaître: The Priest Who Invented the Big Bang

    NASA Astrophysics Data System (ADS)

    Lambert, Dominique

    This contribution gives a concise survey of Georges Lemaître's life and work, shedding some light on less-known aspects. Lemaître was a Belgian Catholic priest who in 1927 gave the first explanation of the Hubble law and who proposed in 1931 the "Primeval Atom Hypothesis", considered the first step towards Big Bang cosmology. But Lemaître's scientific work goes far beyond physical cosmology. He also contributed to the theory of cosmic rays, to spinor theory, to analytical mechanics (regularization of the three-body problem), to numerical analysis (Fast Fourier Transform), and to computer science (he introduced and programmed the first computer at Louvain). Lemaître took part in the "Science and Faith" debate. He defended a position that has some analogy with the NOMA principle, making a sharp distinction between what he called the "two paths to Truth" (a scientific one and a theological one). In particular, he never confused the theological concept of "creation" with the scientific notion of a "natural beginning" (initial singularity). Lemaître was deeply rooted in his faith and sacerdotal vocation. Remaining a secular priest, he belonged to a community of priests called "The Friends of Jesus", characterized by deep spirituality and special vows (for example, the vow of poverty). He also carried out apostolic activity amongst Chinese students.

  20. Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization

    DTIC Science & Technology

    2016-11-22

    structure of the graph, we replace the ℓ1-norm by the nonconvex Capped-ℓ1 norm, and obtain the Generalized Capped-ℓ1 regularized logistic regression... X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301... better approximations of the ℓ0-norm theoretically and computationally beyond the ℓ1-norm, for example, the compressive sensing (Xiao et al., 2011). The
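    To make the snippet's method concrete, here is a minimal sketch of linearized ADMM for the convex prototype min_x 0.5||Ax-b||^2 + λ||Fx||_1, with F encoding graph structure; the record's nonconvex Capped-ℓ1 variant would replace the soft-threshold below with the capped-ℓ1 proximal map. The step-size rule, the test data and the identity F are illustrative assumptions.

    ```python
    import numpy as np

    def soft(v, t):
        """Soft-thresholding, the proximal map of the l1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def linearized_admm(A, b, F, lam=0.1, rho=1.0, tau=None, iters=500):
        """Sketch: min_x 0.5||Ax-b||^2 + lam*||Fx||_1 by linearized ADMM."""
        n = A.shape[1]
        if tau is None:  # small step keeps the linearized x-update stable
            tau = 0.9 / (np.linalg.norm(A, 2) ** 2
                         + rho * np.linalg.norm(F, 2) ** 2)
        x = np.zeros(n)
        z = np.zeros(F.shape[0])
        u = np.zeros(F.shape[0])
        for _ in range(iters):
            grad = A.T @ (A @ x - b) + rho * F.T @ (F @ x - z + u)
            x = x - tau * grad                 # linearized (gradient) x-update
            z = soft(F @ x + u, lam / rho)     # prox step on the l1 term
            u = u + F @ x - z                  # scaled dual ascent
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 20))
    b = rng.standard_normal(40)
    F = np.eye(20)                             # a graph difference matrix in general
    print(np.round(linearized_admm(A, b, F, lam=2.0), 3))
    ```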

  1. Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes

    NASA Astrophysics Data System (ADS)

    Bachlechner, Martina E.

    2009-03-01

    Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.

  2. Computer Games as Instructional Tools.

    ERIC Educational Resources Information Center

    Bright, George W.; Harvey, John G.

    1984-01-01

    Defines games, instructional games, and computer instructional games; discusses several unique capabilities that facilitate game playing and may make computer games more attractive to students than noncomputer alternatives; and examines the potential disadvantages of using instructional computer games on a regular basis. (MBR)

  3. Structure-guided Protein Transition Modeling with a Probabilistic Roadmap Algorithm.

    PubMed

    Maximova, Tatiana; Plaku, Erion; Shehu, Amarda

    2016-07-07

    Proteins are macromolecules in perpetual motion, switching between structural states to modulate their function. A detailed characterization of the precise yet complex relationship between protein structure, dynamics, and function requires elucidating transitions between functionally-relevant states. Doing so challenges both wet and dry laboratories, as protein dynamics involves disparate temporal scales. In this paper we present a novel, sampling-based algorithm to compute transition paths. The algorithm exploits two main ideas. First, it leverages known structures to initialize its search and define a reduced conformation space for rapid sampling. This is key to address the insufficient sampling issue suffered by sampling-based algorithms. Second, the algorithm embeds samples in a nearest-neighbor graph where transition paths can be efficiently computed via queries. The algorithm adapts the probabilistic roadmap framework that is popular in robot motion planning. In addition to efficiently computing lowest-cost paths between any given structures, the algorithm allows investigating hypotheses regarding the order of experimentally-known structures in a transition event. This novel contribution is likely to open up new avenues of research. Detailed analysis is presented on multiple-basin proteins of relevance to human disease. Multiscaling and the AMBER ff14SB force field are used to obtain energetically-credible paths at atomistic detail.
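    The roadmap idea can be illustrated in a few lines: embed samples in a k-nearest-neighbor graph with energy-weighted edges and answer transition-path queries with Dijkstra. Everything below (random points standing in for reduced-coordinate conformations, the toy energy surface, the edge-cost rule) is an illustrative assumption, not the authors' implementation.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    samples = rng.uniform(size=(200, 3))          # hypothetical reduced coords
    energy = ((samples - 0.5) ** 2).sum(axis=1)   # toy energy surface

    k = 8
    G = nx.Graph()
    for i, p in enumerate(samples):
        d = np.linalg.norm(samples - p, axis=1)
        for j in np.argsort(d)[1:k + 1]:          # k nearest neighbors (skip self)
            # edge cost: distance penalized by the higher endpoint energy
            w = d[j] * (1.0 + max(energy[i], energy[j]))
            G.add_edge(i, int(j), weight=w)

    start, goal = 0, 199                          # two "known structures"
    if nx.has_path(G, start, goal):               # kNN graphs are usually connected
        path = nx.shortest_path(G, start, goal, weight="weight")  # Dijkstra query
        print("lowest-cost transition path through samples:", path)
    ```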

  4. Sparsely-synchronized brain rhythm in a small-world neural network

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Yoon; Lim, Woochang

    2013-07-01

    Sparsely-synchronized cortical rhythms, associated with diverse cognitive functions, have been observed in electric recordings of brain activity. At the population level, cortical rhythms exhibit small-amplitude fast oscillations while at the cellular level, individual neurons show stochastic firings sparsely at a much lower rate than the population rate. We study the effect of network architecture on sparse synchronization in an inhibitory population of subthreshold Morris-Lecar neurons (which cannot fire spontaneously without noise). Previously, sparse synchronization was found to occur for cases of both global coupling (i.e., regular all-to-all coupling) and random coupling. However, a real neural network is known to be non-regular and non-random. Here, we consider sparse Watts-Strogatz small-world networks which interpolate between a regular lattice and a random graph via rewiring. We start from a regular lattice with only short-range connections and then investigate the emergence of sparse synchronization by increasing the rewiring probability p for the short-range connections. For p = 0, the average synaptic path length between pairs of neurons becomes long; hence, only an unsynchronized population state exists because the global efficiency of information transfer is low. However, as p is increased, long-range connections begin to appear, and global effective communication between distant neurons may be available via shorter synaptic paths. Consequently, as p passes a threshold p_th (≈ 0.044), sparsely-synchronized population rhythms emerge. However, with increasing p, longer axon wirings become expensive because of their material and energy costs. At an optimal value p*_DE (≈ 0.24) of the rewiring probability, the ratio of the synchrony degree to the wiring cost is found to become maximal. In this way, an optimal sparse synchronization is found to occur at a minimal wiring cost in an economic small-world network through trade-off between synchrony and wiring cost.
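    The network-construction side of this setup is easy to reproduce with standard NetworkX calls; in the sketch below, n, k and the sampled p values are chosen only to echo the abstract's thresholds, and the point is the drop in average path length as rewiring introduces long-range links.

    ```python
    import networkx as nx

    n, k = 1000, 10                    # nodes (neurons) and ring-lattice degree
    for p in [0.0, 0.01, 0.044, 0.24, 1.0]:
        G = nx.watts_strogatz_graph(n, k, p, seed=42)
        if nx.is_connected(G):         # rewiring can, rarely, disconnect the graph
            L = nx.average_shortest_path_length(G)
            print(f"p = {p:5.3f}  average path length = {L:6.2f}")
    ```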

  5. Optimal guidance with obstacle avoidance for nap-of-the-earth flight

    NASA Technical Reports Server (NTRS)

    Pekelsma, Nicholas J.

    1988-01-01

    The development of automatic guidance is discussed for helicopter Nap-of-the-Earth (NOE) and near-NOE flight. It deals with algorithm refinements relating to automated real-time flight path planning and to mission planning. With regard to path planning, it relates rotorcraft trajectory characteristics to the NOE computation scheme and addresses real-time computing issues as well as ride-quality and pilot-vehicle interface considerations. The automated mission planning algorithm refinements include route optimization, automatic waypoint generation, interactive applications, and provisions for integrating the results into the real-time path planning software. A microcomputer-based mission planning workstation was developed and is described. Further, the application of Defense Mapping Agency (DMA) digital terrain data to both the mission planning workstation and automatic guidance is discussed and illustrated.

  6. Mach stem formation in outdoor measurements of acoustic shocks.

    PubMed

    Leete, Kevin M; Gee, Kent L; Neilsen, Tracianne B; Truscott, Tadd T

    2015-12-01

    Mach stem formation during outdoor acoustic shock propagation is investigated using spherical oxyacetylene balloons exploded above pavement. The location of the transition point from regular to irregular reflection and the path of the triple point are experimentally resolved using microphone arrays and a high-speed camera. The transition point falls between recent analytical work for weak irregular reflections and an empirical relationship derived from large explosions.

  7. Obliging Games

    NASA Astrophysics Data System (ADS)

    Chatterjee, Krishnendu; Horn, Florian; Löding, Christof

    Graph games of infinite length provide a natural model for open reactive systems: one player (Eve) represents the controller and the other player (Adam) represents the environment. The evolution of the system depends on the decisions of both players. The specification for the system is usually given as an ω-regular language L over paths and Eve's goal is to ensure that the play belongs to L irrespective of Adam's behaviour.

  8. Energetic and electronic computation of the two-hydrogen atom donation process in catecholic and non-catecholic anthocyanidins.

    PubMed

    Ali, Hussein M; Ali, Isra H

    2018-03-15

    Antioxidant activity of anthocyanidins is greatly affected by the 3-hydroxyl group and/or a catecholic moiety. The two-hydrogen atom donation process is frequently used to explain the high antioxidant activity of polyphenolic compounds leading to the formation of stable diketones e.g. 1,2-quinones. Thermodynamic parameters, HOMO and spin density were computed to identify the favoured path, either through the 3-hydroxyl group or through the catecholic moiety in a series of catecholic and non-catecholic 3-oxy- (and deoxy)-anthocyanidins. DFT calculations showed that the donation process in non-catecholic anthocyanidins depended on the substituents on ring B. Anthocyanidins with 3',5'-diOMe groups showed donation through 3,4'-OH or, otherwise, through 3,5-OH groups. Catecholic 3-oxyanthocyanidins, on the other hand, showed donation through the 3,4'-OH path rather than the catecholic path (4',3'-path). The 3,4'-path was favoured by the formation of planar 3-radicals in the first step and the stabilization of 4'-radicals in the second step by H-bonding with the 3'-OH group. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Importance sampling studies of helium using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Datta, S.; Rejcek, J. M.

    2018-05-01

    In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using the Wiener measure, which is based on Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories will escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies of He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging used in the past. The best previous calculations from variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.
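    For orientation, a naive Feynman-Kac estimator under plain Wiener measure, without the paper's importance sampling, can be written directly; its slow convergence in higher dimensions is exactly what the GFK reweighting addresses. The sketch below targets the 3D harmonic oscillator (exact ground energy 1.5 in natural units), not He, and all parameters are illustrative.

    ```python
    import numpy as np

    # For H = -(1/2)Lap + V, Feynman-Kac gives
    #   E[exp(-int_0^T V(B_t) dt)] ~ C * exp(-E0 * T),
    # so E0 is the large-T slope of -log of the expectation; taking the slope
    # between two checkpoints cancels the prefactor C.
    rng = np.random.default_rng(0)
    dt, walkers = 0.01, 20000
    x = np.zeros((walkers, 3))        # Brownian paths started at the origin
    S = np.zeros(walkers)             # accumulated action  int V(B_t) dt
    logmean = {}
    for step in range(1, 801):
        x += rng.standard_normal(x.shape) * np.sqrt(dt)
        S += 0.5 * np.sum(x ** 2, axis=1) * dt        # V(x) = |x|^2 / 2
        if step in (400, 800):                        # checkpoints T = 4, 8
            logmean[step * dt] = np.log(np.mean(np.exp(-S)))

    E0 = -(logmean[8.0] - logmean[4.0]) / (8.0 - 4.0)
    print("estimated ground-state energy:", E0)       # close to 1.5
    ```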

  10. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2017-12-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  11. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2018-02-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  12. A modified PATH algorithm rapidly generates transition states comparable to those found by other well established algorithms

    PubMed Central

    Chandrasekaran, Srinivas Niranj; Das, Jhuma; Dokholyan, Nikolay V.; Carter, Charles W.

    2016-01-01

    PATH rapidly computes a path and a transition state between crystal structures by minimizing the Onsager-Machlup action. It requires input parameters whose range of values can generate different transition-state structures that cannot be uniquely compared with those generated by other methods. We outline modifications to estimate these input parameters to circumvent these difficulties and validate the PATH transition states by showing consistency between transition states derived by different algorithms for unrelated protein systems. Although functional protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. PMID:26958584

  13. Pheromone Static Routing Strategy for Complex Networks

    NASA Astrophysics Data System (ADS)

    Hu, Mao-Bin; Henry, Y. K. Lau; Ling, Xiang; Jiang, Rui

    2012-12-01

    We adopt the concept of using pheromones to generate a set of static paths that can reach the performance of the global dynamic routing strategy [Phys. Rev. E 81 (2010) 016113]. The path generation method consists of two stages. In the first stage, pheromone is dropped on the nodes by packets forwarded according to the global dynamic routing strategy. In the second stage, pheromone static paths are generated according to the pheromone density. The output paths can greatly improve the overall capacity of traffic systems on different network structures, including scale-free networks, small-world networks and random graphs. Because the paths are static, the system needs far fewer computational resources than the global dynamic routing strategy.
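    A loose sketch of the two-stage idea follows; all details are invented for illustration: degree-penalized shortest paths stand in for the global dynamic routing strategy, and static paths are regenerated as pheromone-biased shortest paths rather than by the authors' procedure.

    ```python
    import random
    import networkx as nx

    random.seed(0)
    G = nx.barabasi_albert_graph(200, 3, seed=0)     # scale-free test network
    pheromone = {v: 0.0 for v in G}

    # Stage 1: route packets with a dynamic-like strategy (here: shortest
    # paths penalizing high-degree hubs) and drop pheromone on visited nodes.
    for _ in range(2000):
        s, t = random.sample(list(G), 2)
        route = nx.shortest_path(G, s, t,
                                 weight=lambda u, v, d: 1.0 + G.degree(v))
        for v in route:
            pheromone[v] += 1.0

    # Stage 2: freeze static paths biased toward pheromone-rich nodes.
    def static_path(s, t):
        return nx.shortest_path(G, s, t,
                                weight=lambda u, v, d: 1.0 / (1.0 + pheromone[v]))

    print(static_path(0, 150))
    ```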

  14. Review of computer simulations of isotope effects on biochemical reactions: From the Bigeleisen equation to Feynman's path integral.

    PubMed

    Wong, Kin-Yiu; Xu, Yuqing; Xu, Liang

    2015-11-01

    Enzymatic reactions are integral components in many biological functions and malfunctions. The iconic structure of each reaction path for elucidating the reaction mechanism in detail is the molecular structure of the rate-limiting transition state (RLTS). But the RLTS is very hard to capture or to visualize experimentally. In spite of the lack of an explicit molecular structure of the RLTS in experiment, we can still trace out the RLTS's unique "fingerprints" by measuring the isotope effects on the reaction rate. This set of "fingerprints" is considered a most direct probe of the RLTS. By contrast, in computer simulations, molecular structures of a number of TS can often be precisely visualized on a computer screen; however, theoreticians are not sure which TS is the actual rate-limiting one. As a result, this is an excellent stage setting for a perfect "marriage" between experiment and theory for determining the structure of the RLTS, along with the reaction mechanism, i.e., experimentalists are responsible for "fingerprinting", whereas theoreticians are responsible for providing candidates that match the "fingerprints". In this Review, the origin of isotope effects on a chemical reaction is discussed from the perspectives of the classical and quantum worlds, respectively (e.g., the origins of the inverse kinetic isotope effects and of all the equilibrium isotope effects are purely quantum). The conventional Bigeleisen equation for isotope effect calculations, as well as its refined version in the framework of Feynman's path integral and Kleinert's variational perturbation (KP) theory for systematically incorporating anharmonicity and (non-parabolic) quantum tunneling, are also presented. In addition, the outstanding interplay between theory and experiment for successfully deducing the RLTS structures and the reaction mechanisms is demonstrated by applications to biochemical reactions, namely models of bacterial squalene-to-hopene polycyclization and RNA 2'-O-transphosphorylation. For all these applications, we used our recently-developed path-integral method based on the KP theory, called the automated integration-free path-integral (AIF-PI) method, to perform ab initio path-integral calculations of isotope effects. As opposed to the conventional path-integral molecular dynamics (PIMD) and Monte Carlo (PIMC) simulations, values calculated from our AIF-PI path-integral method can be as precise as (not as accurate as) the numerical precision of the computing machine. Lastly, comments are made on the general challenges in theoretical modeling of candidates matching the experimental "fingerprints" of the RLTS. This article is part of a Special Issue entitled: Enzyme Transition States from Theory and Experiment. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. The Volume of the Regular Octahedron

    ERIC Educational Resources Information Center

    Trigg, Charles W.

    1974-01-01

    Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
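    For reference, one classical route (splitting the octahedron of edge length a into two square pyramids glued base-to-base; whether this is among the article's five methods is not known from the record) gives:

    ```latex
    % Square pyramid: base area a^2, apex above the base center.
    % The slant edge has length a, the base half-diagonal is a/sqrt(2).
    V = 2\cdot\frac{1}{3}a^{2}h,
    \qquad h=\sqrt{a^{2}-\Bigl(\tfrac{a}{\sqrt{2}}\Bigr)^{2}}=\frac{a}{\sqrt{2}},
    \qquad\Longrightarrow\qquad
    V=\frac{\sqrt{2}}{3}\,a^{3}\approx 0.4714\,a^{3}.
    ```

    So a unit-edge octahedron has volume √2/3 ≈ 0.471.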

  16. Minimum-fuel, three-dimensional flight paths for jet transports

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Kreindler, E.

    1985-01-01

    A number of studies dealing with fuel minimization are concerned with three-dimensional flight. However, only Neuman and Kreindler (1982) consider cases involving commercial jet transports. In the latter study, only the climb-out and descent portions of complete long-range flight paths below 10,000 ft altitude have been investigated. The present investigation is concerned with the computation of minimum-fuel nonturning and turning flight paths for climb-outs from 2000 to 10,000 ft for long-range flights (greater than 50 n mi), and for complete flight paths of lengths between 5 and 50 n mi.

  17. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC-NASTRAN, Version 67.5.

  18. Using intervention mapping to promote the receipt of clinical preventive services among women with physical disabilities.

    PubMed

    Suzuki, Rie; Peterson, Jana J; Weatherby, Amanda V; Buckley, David I; Walsh, Emily S; Kailes, June Isaacson; Krahn, Gloria L

    2012-01-01

    This article describes the development of Promoting Access to Health Services (PATHS), an intervention to promote regular use of clinical preventive services by women with physical disabilities. The intervention was developed using intervention mapping (IM), a theory-based logical process that incorporates the six steps of assessment of need, preparation of matrices, selection of theoretical methods and strategies, program design, program implementation, and evaluation. The development process used methods and strategies aligned with the social cognitive theory and the health belief model. PATHS was adapted from the workbook Making Preventive Health Care Work for You, developed by a disability advocate, and was informed by participant input at five points: at inception through consultation by the workbook author, in conceptualization through a town hall meeting, in pilot testing with feedback, in revision of the curriculum through an advisory group, and in implementation by trainers with disabilities. The resulting PATHS program is a 90-min participatory small-group workshop, followed by structured telephone support for 6 months.

  19. Numerical run-out modelling used for reassessment of existing permanent avalanche paths in the Krkonose Mts., Czechia

    NASA Astrophysics Data System (ADS)

    Blahut, Jan; Klimes, Jan; Balek, Jan; Taborik, Petr; Juras, Roman; Pavlasek, Jiri

    2015-04-01

    Run-out modelling of snow avalanches is widely applied in high mountain areas worldwide. This study presents an application of snow avalanche run-out calculation to mid-mountain ranges - the Krkonose, Jeseniky and Kralicky Sneznik Mountains. All of these mountain ranges lie in the northern part of Czechia, close to the border with Poland. Their highest peak reaches only 1602 m a.s.l. However, climatic conditions and the regular presence of snowpack are the reason why these mountain ranges experience considerable snow avalanche activity every year, sometimes resulting in injuries or even fatalities. As part of an applied project dealing with snow avalanche hazard prediction, a re-assessment of permanent snow avalanche paths has been performed based on extensive statistics covering the period from 1961/62 to the present. On each avalanche path, avalanches with different return periods were modelled using the RAMMS code. As a result, an up-to-date snow avalanche hazard map was prepared.

  20. 29 CFR 548.500 - Methods of computation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Computation of Overtime Pay § 548.500 Methods of computation. The methods of computing overtime pay on the basic rates for piece... pay at the regular rate. Example 1. Under an employment agreement the basic rate to be used in...

  1. Equilibrium paths analysis of materials with rheological properties by using the chaos theory

    NASA Astrophysics Data System (ADS)

    Bednarek, Paweł; Rządkowski, Jan

    2018-01-01

    Numerical equilibrium-path analysis of a material with random rheological properties using standard procedures and specialist computer programs was not successful. A proper solution for the analysed heuristic model of the material was obtained using elements of chaos theory and neural networks. The paper discusses the mathematical basis of the computer programs used and elaborates the properties of the attractor employed in the analysis. Results of the numerical analysis are presented in both numerical and graphical form for the procedures used.

  2. A path model for Whittaker vectors

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Kedem, Rinat; Turmunkh, Bolor

    2017-06-01

    In this paper we construct weighted path models to compute Whittaker vectors in the completion of Verma modules, as well as Whittaker functions of fundamental type, for all finite-dimensional simple Lie algebras, affine Lie algebras, and the quantum algebra U_q(sl_{r+1}). This leads to series expressions for the Whittaker functions. We show how this construction leads directly to the quantum Toda equations satisfied by these functions, and to the q-difference equations in the quantum case. We investigate the critical limit of affine Whittaker functions computed in this way.

  3. Feasible Path Generation Using Bezier Curves for Car-Like Vehicle

    NASA Astrophysics Data System (ADS)

    Latip, Nor Badariyah Abdul; Omar, Rosli

    2017-08-01

    When planning a collision-free path for an autonomous vehicle, the main criteria that have to be considered are the shortest distance, low computation time and completeness, i.e., that a path can be found if one exists. Besides that, a feasible path for the autonomous vehicle is also crucial to guarantee that the vehicle can reach the target destination given its kinematic constraints, such as the non-holonomic constraint and the minimum turning radius. In order to address these constraints, Bezier curves are applied. In this paper, Bezier curves are modeled and simulated using Matlab software, and the feasibility of the resulting path is analyzed. The Bezier curve is derived from a piece-wise linear pre-planned path. It is found that Bezier curves have the capability of making the planned path feasible and could be embedded in a path planning algorithm for an autonomous vehicle with kinematic constraints. It is concluded that the lengths of the segments of the pre-planned path have to be greater than a nominal value, derived from the vehicle wheelbase, maximum steering angle and maximum speed, to ensure the path for the autonomous car is feasible.
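    A small sketch of the curve-evaluation ingredient (waypoints and control-point placement are illustrative, and the paper's feasibility check against the minimum turning radius is omitted): a cubic Bezier, evaluated by de Casteljau's algorithm, smooths the corner between two straight segments of a pre-planned path.

    ```python
    import numpy as np

    def de_casteljau(control, t):
        """Evaluate a Bezier curve at parameter t by repeated interpolation."""
        pts = np.array(control, dtype=float)
        while len(pts) > 1:
            pts = (1 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    # Corner of a pre-planned path: waypoints A -> B -> C, smoothed near B.
    A, B, C = np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([5.0, 5.0])
    control = [B + 0.6 * (A - B), B + 0.2 * (A - B),
               B + 0.2 * (C - B), B + 0.6 * (C - B)]

    curve = np.array([de_casteljau(control, t) for t in np.linspace(0, 1, 20)])
    print(np.round(curve, 2))
    ```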

  4. Grid Visualization Tool

    NASA Technical Reports Server (NTRS)

    Chouinard, Caroline; Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steven

    2005-01-01

    The Grid Visualization Tool (GVT) is a computer program for displaying the path of a mobile robotic explorer (rover) on a terrain map. The GVT reads a map-data file in either portable graymap (PGM) or portable pixmap (PPM) format, representing a gray-scale or color map image, respectively. The GVT also accepts input from path-planning and activity-planning software. From these inputs, the GVT generates a map overlaid with one or more rover path(s), waypoints, locations of targets to be explored, and/or target-status information (indicating success or failure in exploring each target). The display can also indicate different types of paths or path segments, such as the path actually traveled versus a planned path or the path traveled to the present position versus planned future movement along a path. The program provides for updating of the display in real time to facilitate visualization of progress. The size of the display and the map scale can be changed as desired by the user. The GVT was written in the C++ language using the Open Graphics Library (OpenGL) software. It has been compiled for both Sun Solaris and Linux operating systems.
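    The input/overlay idea is easy to emulate outside the tool. This sketch (not GVT itself; the file name and path data are hypothetical) reads a PGM terrain map with Pillow and overlays a rover path and waypoints with matplotlib.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    terrain = np.asarray(Image.open("terrain.pgm"))  # hypothetical gray-scale map
    path = np.array([[10, 12], [40, 30], [80, 55], [120, 90]])  # (x, y) waypoints

    plt.imshow(terrain, cmap="gray")
    plt.plot(path[:, 0], path[:, 1], "r-", label="planned path")
    plt.scatter(path[:, 0], path[:, 1], c="yellow", label="waypoints")
    plt.legend()
    plt.savefig("overlay.png")       # re-render as the rover progresses
    ```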

  5. Innovative Science Experiments Using Phoenix

    ERIC Educational Resources Information Center

    Kumar, B. P. Ajith; Satyanarayana, V. V. V.; Singh, Kundan; Singh, Parmanand

    2009-01-01

    A simple, flexible and very low cost hardware plus software framework for developing computer-interfaced science experiments is presented. It can be used for developing computer-interfaced science experiments without getting into the details of electronics or computer programming. For developing experiments this is a middle path between…

  6. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
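    As a rough illustration of the quantity involved (the paper computes it exactly; the sketch below only estimates a simplified version by Monte Carlo, with an invented path layout, capacity-state distribution and demand): the probability that d units of data fit through two disjoint series paths of multi-state branches.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    states = np.array([0, 1, 2, 3])                 # discrete branch capacities
    probs = np.array([0.05, 0.15, 0.3, 0.5])        # their state probabilities

    def sample_path_capacity(n_branches):
        # a path's capacity is the minimum over its series branches
        return rng.choice(states, size=n_branches, p=probs).min()

    d, trials, hits = 4, 100_000, 0
    for _ in range(trials):
        cap = sample_path_capacity(3) + sample_path_capacity(4)  # 2 disjoint paths
        hits += (cap >= d)
    print("estimated reliability:", hits / trials)
    ```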

  7. Search Path Mapping: A Versatile Approach for Visualizing Problem-Solving Behavior.

    ERIC Educational Resources Information Center

    Stevens, Ronald H.

    1991-01-01

    Computer-based problem-solving examinations in immunology generate graphic representations of students' search paths, allowing evaluation of how organized and focused their knowledge is, how well their organization relates to critical concepts in immunology, where major misconceptions exist, and whether proper knowledge links exist between content…

  8. Real-time Feynman path integral with Picard–Lefschetz theory and its applications to quantum tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanizaki, Yuya, E-mail: yuya.tanizaki@riken.jp; Theoretical Research Division, Nishina Center, RIKEN, Wako 351-0198; Koike, Takayuki, E-mail: tkoike@ms.u-tokyo.ac.jp

    Picard–Lefschetz theory is applied to path integrals of quantum mechanics, in order to compute real-time dynamics directly. After discussing basic properties of real-time path integrals on Lefschetz thimbles, we demonstrate its computational method in a concrete way by solving three simple examples of quantum mechanics. It is applied to quantum mechanics of a double-well potential, and quantum tunneling is discussed. We identify all of the complex saddle points of the classical action, and their properties are discussed in detail. However, a big theoretical difficulty turns out to appear in rewriting the original path integral into a sum of path integrals on Lefschetz thimbles. We discuss the generality of that problem and mention its importance. Real-time tunneling processes are shown to be described by those complex saddle points, and thus a semi-classical description of real-time quantum tunneling becomes possible on solid ground if we could solve that problem. - Highlights: • Real-time path integral is studied based on Picard–Lefschetz theory. • Lucid demonstration is given through simple examples of quantum mechanics. • This technique is applied to quantum mechanics of the double-well potential. • Difficulty for practical applications is revealed, and we discuss its generality. • Quantum tunneling is shown to be closely related to complex classical solutions.

  9. MinePath: Mining for Phenotype Differential Sub-paths in Molecular Pathways

    PubMed Central

    Koumakis, Lefteris; Kartsaki, Evgenia; Chatzimina, Maria; Zervakis, Michalis; Vassou, Despoina; Marias, Kostas; Moustakis, Vassilis; Potamias, George

    2016-01-01

    Pathway analysis methodologies couple traditional gene expression analysis with knowledge encoded in established molecular pathway networks, offering a promising approach towards the biological interpretation of phenotype differentiating genes. Early pathway analysis methodologies, known as gene set analysis (GSA), view pathways just as plain lists of genes without taking into account either the underlying pathway network topology or the involved gene regulatory relations. These approaches, even if they achieve computational efficiency and simplicity, consider pathways that involve the same genes as equivalent in terms of their gene enrichment characteristics. More recent pathway analysis approaches take into account the underlying gene regulatory relations by examining their consistency with gene expression profiles and computing a score for each profile. Even with this approach, assessing and scoring single relations limits the ability to reveal key gene regulation mechanisms hidden in longer pathway sub-paths. We introduce MinePath, a pathway analysis methodology that addresses and overcomes the aforementioned problems. MinePath facilitates the decomposition of pathways into their constituent sub-paths. Decomposition leads to the transformation of single relations into complex regulation sub-paths. Regulation sub-paths are then matched with gene expression sample profiles in order to evaluate their functional status and to assess phenotype differential power. Assessment of differential power supports the identification of the most discriminant profiles. In addition, MinePath assesses the significance of the pathways as a whole, ranking them by their p-values. Comparison results with state-of-the-art pathway analysis systems are indicative of the soundness and reliability of the MinePath approach. In contrast with many pathway analysis tools, MinePath is a web-based system (www.minepath.org) offering dynamic and rich pathway visualization functionality, with the unique characteristic of coloring regulatory relations between genes to reveal their phenotype inclination. This unique characteristic makes MinePath a valuable tool for in silico molecular biology experimentation, as it serves the biomedical researchers’ exploratory needs to reveal and interpret the regulatory mechanisms that underlie and putatively govern the expression of target phenotypes. PMID:27832067

  10. MinePath: Mining for Phenotype Differential Sub-paths in Molecular Pathways.

    PubMed

    Koumakis, Lefteris; Kanterakis, Alexandros; Kartsaki, Evgenia; Chatzimina, Maria; Zervakis, Michalis; Tsiknakis, Manolis; Vassou, Despoina; Kafetzopoulos, Dimitris; Marias, Kostas; Moustakis, Vassilis; Potamias, George

    2016-11-01

    Pathway analysis methodologies couple traditional gene expression analysis with knowledge encoded in established molecular pathway networks, offering a promising approach towards the biological interpretation of phenotype differentiating genes. Early pathway analysis methodologies, known as gene set analysis (GSA), view pathways just as plain lists of genes without taking into account either the underlying pathway network topology or the involved gene regulatory relations. These approaches, even if they achieve computational efficiency and simplicity, consider pathways that involve the same genes as equivalent in terms of their gene enrichment characteristics. More recent pathway analysis approaches take into account the underlying gene regulatory relations by examining their consistency with gene expression profiles and computing a score for each profile. Even with this approach, assessing and scoring single relations limits the ability to reveal key gene regulation mechanisms hidden in longer pathway sub-paths. We introduce MinePath, a pathway analysis methodology that addresses and overcomes the aforementioned problems. MinePath facilitates the decomposition of pathways into their constituent sub-paths. Decomposition leads to the transformation of single relations into complex regulation sub-paths. Regulation sub-paths are then matched with gene expression sample profiles in order to evaluate their functional status and to assess phenotype differential power. Assessment of differential power supports the identification of the most discriminant profiles. In addition, MinePath assesses the significance of the pathways as a whole, ranking them by their p-values. Comparison results with state-of-the-art pathway analysis systems are indicative of the soundness and reliability of the MinePath approach. In contrast with many pathway analysis tools, MinePath is a web-based system (www.minepath.org) offering dynamic and rich pathway visualization functionality, with the unique characteristic of coloring regulatory relations between genes to reveal their phenotype inclination. This unique characteristic makes MinePath a valuable tool for in silico molecular biology experimentation, as it serves the biomedical researchers' exploratory needs to reveal and interpret the regulatory mechanisms that underlie and putatively govern the expression of target phenotypes.

  11. Mathematical Model and Simulation of Particle Flow around Choanoflagellates Using the Method of Regularized Stokeslets

    NASA Astrophysics Data System (ADS)

    Nararidh, Niti

    2013-11-01

    Choanoflagellates are unicellular organisms whose intriguing morphology includes a set of collars/microvilli emanating from the cell body, surrounding the beating flagellum. We investigated the role of the microvilli in the feeding and swimming behavior of the organism using a three-dimensional model based on the method of regularized Stokeslets. This model allows us to examine the velocity generated around the feeding organism tethered in place, as well as to predict the paths of surrounding free flowing particles. In particular, we can depict the effective capture of nutritional particles and bacteria in the fluid, showing the hydrodynamic cooperation between the cell, flagellum, and microvilli of the organism. Funding Source: Murchison Undergraduate Research Fellowship.
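    The building block of such a model is the velocity field of a single regularized Stokeslet. The sketch below implements the standard 3D formula for one common blob choice, φ_ε(r) = 15ε⁴/(8π(r²+ε²)^(7/2)); discretizing the cell body, flagellum and microvilli into many such point forces and superposing their fields is then straightforward. All parameters are illustrative.

    ```python
    import numpy as np

    def regularized_stokeslet(x, x0, f, eps=0.05, mu=1.0):
        """Velocity at points x (N,3) due to a force f at x0, blob width eps.

        u = [ f*(r^2 + 2*eps^2) + r_vec*(f . r_vec) ] / (8*pi*mu*(r^2+eps^2)^1.5)
        """
        r = x - x0
        r2 = np.sum(r * r, axis=1, keepdims=True)
        denom = 8.0 * np.pi * mu * (r2 + eps ** 2) ** 1.5
        return (f * (r2 + 2.0 * eps ** 2)
                + r * np.sum(r * f, axis=1, keepdims=True)) / denom

    # Flow sampled at a few points around a force pointing along +z at the origin.
    pts = np.array([[0.2, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.2]])
    print(regularized_stokeslet(pts, np.zeros(3), np.array([0.0, 0.0, 1.0])))
    ```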

  12. Regularity of random attractors for fractional stochastic reaction-diffusion equations on R^n

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L^2(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  13. Distributed processor allocation for launching applications in a massively connected processors complex

    DOEpatents

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  14. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    ERIC Educational Resources Information Center

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  15. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918

  16. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
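    The core substitution the two records describe, guided filtering in place of Gaussian smoothing of the displacement field, follows the standard guided-filter recipe of He et al.; the sketch below applies it to one displacement component with the fixed image as guidance (radius, eps and the random test data are illustrative assumptions, not the paper's settings).

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(I, p, radius=7, eps=1e-3):
        """Edge-preserving smoothing of p guided by image I (both 2-D float).

        Fits a local linear model q = a*I + b in each window, so smoothing
        adapts to structure in I (e.g. organ boundaries, sliding interfaces).
        """
        size = 2 * radius + 1
        mean_I = uniform_filter(I, size)
        mean_p = uniform_filter(p, size)
        corr_Ip = uniform_filter(I * p, size)
        corr_II = uniform_filter(I * I, size)
        var_I = corr_II - mean_I * mean_I
        cov_Ip = corr_Ip - mean_I * mean_p
        a = cov_Ip / (var_I + eps)
        b = mean_p - a * mean_I
        return uniform_filter(a, size) * I + uniform_filter(b, size)

    rng = np.random.default_rng(0)
    fixed = rng.random((64, 64))          # guidance image (e.g. fixed CT slice)
    ux = rng.random((64, 64))             # one displacement-field component
    ux_smooth = guided_filter(fixed, ux)  # structure-preserving regularization
    print(ux_smooth.shape)
    ```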

  17. Computational study on the aminolysis of beta-hydroxy-alpha,beta-unsaturated ester via the favorable path including the formation of alpha-oxo ketene intermediate.

    PubMed

    Jin, Lu; Xue, Ying; Zhang, Hui; Kim, Chan Kyung; Xie, Dai Qian; Yan, Guo Sen

    2008-05-15

    The possible mechanisms of the aminolysis of N-methyl-3-(methoxycarbonyl)-4-hydroxy-2-pyridone (a beta-hydroxy-alpha,beta-unsaturated ester) with dimethylamine are investigated at the hybrid density functional theory B3LYP/6-31G(d,p) level in the gas phase. Single-point computations at the B3LYP/6-311++G(d,p) and the Becke88-Becke95 1-parameter model BB1K/6-311++G(d,p) levels are performed for more precise energy predictions. Solvent effects are also assessed by single-point calculations at the integral equation formalism polarized continuum model IEFPCM-B3LYP/6-311++G(d,p) and IEFPCM-BB1K/6-311++G(d,p) levels on the gas-phase optimized geometries. Three possible pathways, the concerted pathway (path A), the stepwise pathway involving tetrahedral intermediates (path B), and the stepwise pathway via an alpha-oxo ketene intermediate due to the participation of the beta-hydroxy group (path C), are taken into account for the title reaction. Moreover, path C includes two sequential processes. The first process generates the alpha-oxo ketene intermediate via the decomposition of N-methyl-3-(methoxycarbonyl)-4-hydroxy-2-pyridone; the second process is the addition of dimethylamine to the alpha-oxo ketene intermediate. Our results indicate that path C is more favorable than paths A and B both in the gas phase and in solvent (heptane). In path C, the first process is the rate-determining step, and the second process is revealed to be a [4+2] pseudopericyclic reaction without an energy barrier. Being independent of the concentration of amine, the first process obeys a first-order rate law.

  18. Computers in My Curriculum? 18 Lesson Plans for Teaching Computer Awareness without a Computer. Adaptable Grades 3-12.

    ERIC Educational Resources Information Center

    Bailey, Suzanne Powers; Jeffers, Marcia

    Eighteen interrelated, sequential lesson plans and supporting materials for teaching computer literacy at the elementary and secondary levels are presented. The activities, intended to be infused into the regular curriculum, do not require the use of a computer. The introduction presents background information on computer literacy, suggests a…

  19. Entanglement by Path Identity.

    PubMed

    Krenn, Mario; Hochrainer, Armin; Lahiri, Mayukh; Zeilinger, Anton

    2017-02-24

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional systems. The two ingredients are (i) superposition of photon pairs with different origins and (ii) aligning photons such that their paths are identical. We explain the experimentally feasible creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces-starting only from nonentangled photon pairs. For two photons, arbitrary high-dimensional entanglement can be created. The idea of generating entanglement by path identity could also apply to quantum entities other than photons. We discovered the technique by analyzing the output of a computer algorithm. This shows that computer designed quantum experiments can be inspirations for new techniques.

  20. Entanglement by Path Identity

    NASA Astrophysics Data System (ADS)

    Krenn, Mario; Hochrainer, Armin; Lahiri, Mayukh; Zeilinger, Anton

    2017-02-01

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional systems. The two ingredients are (i) superposition of photon pairs with different origins and (ii) aligning photons such that their paths are identical. We explain the experimentally feasible creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces—starting only from nonentangled photon pairs. For two photons, arbitrary high-dimensional entanglement can be created. The idea of generating entanglement by path identity could also apply to quantum entities other than photons. We discovered the technique by analyzing the output of a computer algorithm. This shows that computer designed quantum experiments can be inspirations for new techniques.

  1. Path to Market for Compact Modular Fusion Power Cores

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Baerny, Jennifer K.; Mattor, Nathan; Stoulil, Don; Miller, Ronald; Marston, Theodore

    2012-08-01

The benefits of an energy source whose reactants are plentiful and whose products are benign are hard to measure, but at no time in history has such an energy source been more needed. Nuclear fusion continues to promise to be this energy source. However, the path to market for fusion systems is still regularly a matter for long-term (20+ year) plans. This white paper is intended to stimulate discussion of faster commercialization paths, distilling guidance from investors, utilities, and the wider energy research community (including from ARPA-E). There is great interest in a small modular fusion system that can be developed quickly and inexpensively. A simple model shows how compact modular fusion can produce a low-cost development path by optimizing traditional systems that burn deuterium and tritium, operating not only at high magnetic field strength but also omitting some components so that the core becomes more compact and easier to maintain. The dominant hurdles to the development of low-cost, practical fusion systems are discussed, primarily in terms of the constraints placed on the cost of development stages in the private sector. The main finding presented here is that the bridge from the DOE Office of Science to the energy market can come at the Proof of Principle development stage, provided the concept is sufficiently compact and inexpensive that its development allows for a normal technology commercialization path.

  2. Proton and electron mean free paths: The Palmer consensus revisited

    NASA Technical Reports Server (NTRS)

    Bieber, John W.; Matthaeus, William H.; Smith, Charles W.; Wanner, Wolfgang; Kallenrode, May-Britt; Wibberenz, Gerd

    1994-01-01

We present experimental and theoretical evidence suggesting that the mean free path of cosmic-ray electrons and protons may be fundamentally different at low to intermediate (less than 50 MV) rigidities. The experimental evidence is from Helios observations of solar energetic particles, which show that the mean free path of 1.4 MV electrons is often similar to that of 187 MV protons, even though proton mean free paths continue to decrease comparatively rapidly with decreasing rigidity down to the lowest channels (about 100 MV) observed. The theoretical evidence is from computations of particle scattering in dynamical magnetic turbulence, which predict that electrons will have a larger mean free path than protons of the same rigidity. In the light of these new results, 'consensus' ideas about cosmic-ray mean free paths may require drastic revision.

  3. Influence of Career Information on Choice of Degree Programme among Regular and Self-Sponsored Students in Public Universities, Kenya

    ERIC Educational Resources Information Center

    Gacohi, Jane Njeri; Sindabi, Aggrey M.; Chepchieng, Micah C.

    2017-01-01

Choosing a degree programme to study at university is a critical career task and a major turning point in a student's life: it not only marks the start of workplace readiness but also establishes the student on a career path that opens as well as closes life opportunities. Failure to achieve this task may cause dissatisfaction within the…

  4. Energy Index For Aircraft Maneuvers

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R. (Inventor); Lynch, Robert E. (Inventor); Lawrence, Robert E. (Inventor); Amidan, Brett G. (Inventor); Ferryman, Thomas A. (Inventor); Drew, Douglas A. (Inventor); Ainsworth, Robert J. (Inventor); Prothero, Gary L. (Inventor); Romanowski, Tomothy P. (Inventor); Bloch, Laurent (Inventor)

    2006-01-01

Method and system for analyzing, separately or in combination, kinetic energy and potential energy and/or their time derivatives, measured or estimated or computed, for an aircraft in approach phase or in takeoff phase, to determine if the aircraft is or will be put in an anomalous configuration in order to join a stable approach path or takeoff path. A reference value of kinetic energy and/or potential energy (or time derivatives thereof) is provided, and a comparison index for the estimated energy and reference energy is computed and compared with a normal range of index values for a corresponding aircraft maneuver. If the computed energy index lies outside the normal index range, this phase of the flight is identified as anomalous, non-normal or potentially unstable.
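
    The comparison step reduces to a few lines of logic. Below is a minimal Python sketch of that step; the index definition, reference values, and normal range are illustrative placeholders, not the patented formulas.

```python
def energy_index(ke_est, pe_est, ke_ref, pe_ref):
    """Hypothetical comparison index: relative deviation of total
    estimated energy from the reference energy for this maneuver."""
    return (ke_est + pe_est - ke_ref - pe_ref) / (ke_ref + pe_ref)

NORMAL_RANGE = (-0.05, 0.05)   # placeholder normal index range

idx = energy_index(ke_est=3.2e7, pe_est=1.1e8, ke_ref=3.0e7, pe_ref=1.15e8)
lo, hi = NORMAL_RANGE
status = "normal" if lo <= idx <= hi else "anomalous"
print(f"energy index {idx:+.3f}: phase classified as {status}")
```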

  5. STS-41 mission charts, computer-generated and artist concept drawings, photos

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-41 related charts, computer-generated and artist concept drawings, and photos of the Ulysses spacecraft and mission flight path provided by the European Space Agency (ESA). Charts show the Ulysses mission flight path and encounter with Jupiter (45980, 45981) and sun (illustrating cosmic dust, gamma ray burst, magnetic field, x-rays, solar energetic particles, visible corona, interstellar gas, plasma wave, cosmic rays, solar radio noise, and solar wind) (45988). Computer-generated view shows the Ulysses spacecraft (45983). Artist concept illustrates Ulysses spacecraft deploy from the space shuttle payload bay (PLB) with the inertial upper stage (IUS) and payload assist module (PAM-S) visible (45984). Ulysses spacecraft is also shown undergoing preflight testing in the manufacturing facility (45985, 45986, 45987).

  6. Kudi: A free open-source python library for the analysis of properties along reaction paths.

    PubMed

    Vogt-Geisse, Stefan

    2016-05-01

With increasing computational capabilities, an ever-growing amount of data is generated in computational chemistry that contains a vast amount of chemically relevant information. It is therefore imperative to create new computational tools to process and extract this data in a sensible way. Kudi is an open-source library that aids in the extraction of chemical properties from reaction paths. The straightforward structure of Kudi makes it easy to use, allows effortless implementation of new capabilities, and permits extension to any quantum chemistry package. A use case for Kudi is shown for the tautomerization reaction of formic acid. Kudi is available free of charge at www.github.com/stvogt/kudi.
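
    The kind of analysis such a library automates can be pictured with a generic sketch: locate the transition state and barrier height along an energy profile. This is not Kudi's API; the path data and unit conversion below are purely illustrative.

```python
# Generic reaction-path analysis (NOT Kudi's API; made-up data).
path_points = [  # (reaction coordinate, energy in hartree)
    (-2.0, -189.762), (-1.0, -189.755), (0.0, -189.741),
    (1.0, -189.758), (2.0, -189.768),
]

reactant_energy = path_points[0][1]
ts_coord, ts_energy = max(path_points, key=lambda p: p[1])
barrier = (ts_energy - reactant_energy) * 627.509   # hartree -> kcal/mol
print(f"transition state near s = {ts_coord}, barrier = {barrier:.1f} kcal/mol")
```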

  7. Robotic Online Path Planning on Point Cloud.

    PubMed

    Liu, Ming

    2016-05-01

This paper deals with the path-planning problem for wheeled or tracked mobile robots that drive in 2.5-D environments, where the traversable surface is usually considered a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using raw point clouds as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally have high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on graphics processing units. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show experimentally that geodesics in the 3-D tensor space lead to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuver while considering the minimum travel distance.
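
    The overall pattern, shortest paths over a point graph under a locally weighted metric, can be sketched compactly. In the sketch below the per-point "saliency" weights and the radius graph are stand-ins; the paper's actual metric comes from 3-D tensor voting, which is not reproduced here.

```python
import heapq, math

# Toy stand-in for a saliency-weighted metric: each point carries a
# saliency in [0, 1] (1 = flat, traversable); edge cost is Euclidean
# length inflated where saliency is low.
points = [(0, 0, 0), (1, 0, 0), (2, 0, 0.2), (1, 1, 0.1), (2, 1, 0.3)]
saliency = [1.0, 0.9, 0.4, 0.8, 0.7]

def edge_cost(i, j):
    d = math.dist(points[i], points[j])
    w = 2.0 - 0.5 * (saliency[i] + saliency[j])   # penalize low saliency
    return d * w

# Radius graph as a simple neighborhood structure.
nbrs = {i: [j for j in range(len(points))
            if j != i and math.dist(points[i], points[j]) < 1.5]
        for i in range(len(points))}

def dijkstra(src, dst):
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                                # stale queue entry
        for v in nbrs[u]:
            nd = d + edge_cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

print(dijkstra(0, 4))
```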

  8. Kinematics, controls, and path planning results for a redundant manipulator

    NASA Technical Reports Server (NTRS)

    Gretz, Bruce; Tilley, Scott W.

    1989-01-01

    The inverse kinematics solution, a modal position control algorithm, and path planning results for a 7 degree of freedom manipulator are presented. The redundant arm consists of two links with shoulder and elbow joints and a spherical wrist. The inverse kinematics problem for tip position is solved and the redundant joint is identified. It is also shown that a locus of tip positions exists in which there are kinematic limitations on self-motion. A computationally simple modal position control algorithm has been developed which guarantees a nearly constant closed-loop dynamic response throughout the workspace. If all closed-loop poles are assigned to the same location, the algorithm can be implemented with very little computation. To further reduce the required computation, the modal gains are updated only at discrete time intervals. Criteria are developed for the frequency of these updates. For commanding manipulator movements, a 5th-order spline which minimizes jerk provides a smooth tip-space path. Schemes for deriving a corresponding joint-space trajectory are discussed. Modifying the trajectory to avoid joint torque saturation when a tip payload is added is also considered. Simulation results are presented.
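
    The minimum-jerk segment can be written down in closed form. The sketch below shows the standard rest-to-rest quintic for a single tip coordinate; zero velocity and acceleration at both endpoints are the boundary conditions that make the 5th-order polynomial minimize jerk. The move parameters are illustrative.

```python
def min_jerk(q0, qf, T, t):
    """Standard rest-to-rest minimum-jerk quintic: position at time t,
    with zero velocity and acceleration at both endpoints."""
    s = t / T
    return q0 + (qf - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

# Sample a 2-second move from 0.0 to 0.5 (e.g., one tip coordinate).
pts = [min_jerk(0.0, 0.5, 2.0, k * 0.25) for k in range(9)]
print([round(p, 4) for p in pts])
```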

  9. Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-06-01

The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive, in a simple way, estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.

  10. Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-04-01

The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive, in a simple way, estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.

  11. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.

    PubMed

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.
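
    The Auto-Switch trigger is simple to sketch: evolve with the indirect encoding until best fitness stagnates, then switch to the direct encoding. In the toy example below, the stagnation window, tolerance, and fitness dynamics are illustrative assumptions, not the paper's settings.

```python
import random

STAGNATION_WINDOW = 20   # generations (illustrative choice)
TOLERANCE = 1e-6         # minimum improvement counted as progress

def should_switch(best_history):
    """True when best fitness has not improved over the last window."""
    if len(best_history) < STAGNATION_WINDOW:
        return False
    return best_history[-1] - best_history[-STAGNATION_WINDOW] < TOLERANCE

def evolve_one_generation(encoding, best):
    # Toy fitness dynamics: indirect encoding climbs fast but plateaus;
    # direct encoding keeps making small gains on the irregular parts.
    if encoding == "indirect":
        gain = random.uniform(0, 0.1) if best < 0.8 else 0.0
    else:
        gain = random.uniform(0, 0.005)
    return min(1.0, best + gain)

encoding, best, history = "indirect", 0.0, []
for gen in range(500):
    best = evolve_one_generation(encoding, best)
    history.append(best)
    if encoding == "indirect" and should_switch(history):
        print(f"switching to direct encoding at generation {gen}")
        encoding = "direct"
```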

  12. Optical Interconnections for VLSI Computational Systems Using Computer-Generated Holography.

    NASA Astrophysics Data System (ADS)

    Feldman, Michael Robert

    Optical interconnects for VLSI computational systems using computer generated holograms are evaluated in theory and experiment. It is shown that by replacing particular electronic connections with free-space optical communication paths, connection of devices on a single chip or wafer and between chips or modules can be improved. Optical and electrical interconnects are compared in terms of power dissipation, communication bandwidth, and connection density. Conditions are determined for which optical interconnects are advantageous. Based on this analysis, it is shown that by applying computer generated holographic optical interconnects to wafer scale fine grain parallel processing systems, dramatic increases in system performance can be expected. Some new interconnection networks, designed to take full advantage of optical interconnect technology, have been developed. Experimental Computer Generated Holograms (CGH's) have been designed, fabricated and subsequently tested in prototype optical interconnected computational systems. Several new CGH encoding methods have been developed to provide efficient high performance CGH's. One CGH was used to decrease the access time of a 1 kilobit CMOS RAM chip. Another was produced to implement the inter-processor communication paths in a shared memory SIMD parallel processor array.

  13. Converging Towards the Optimal Path to Extinction

    DTIC Science & Technology

    2011-01-01

    the reproductive rate R0 should be greater than but very close to 1. However, most real diseases have R0 larger than 1.5, which translates into a...can analytically find an expression for the action along the optimal path. The expression for the action is a function of k and the reproductive number...the optimal path for a range of values of the reproductive number R0. In contrast to the prior two examples, here the action must be computed

  14. Understanding the symptoms of schizophrenia using visual scan paths.

    PubMed

    Phillips, M L; David, A S

    1994-11-01

This paper highlights the role of the visual scan path as a physiological marker of information processing, while investigating positive symptomatology in schizophrenia. The current literature is reviewed using computer search facilities (Medline). Schizophrenics either scan or stare extensively, the latter being related to negative symptoms. Schizophrenics particularly scan when viewing human faces. Scan paths in schizophrenics are important when viewing meaningful stimuli such as human faces, because of the relationship between abnormal perception of stimuli and symptomatology in these subjects.

  15. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
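
    The core difficulty, error accumulation under velocity integration, can be illustrated in a few lines: a toy 1-D integrator with Gaussian velocity noise (not the attractor network itself; all parameters are arbitrary) shows drift growing roughly as the square root of time.

```python
import random

# Toy illustration of drift in velocity integration: integrate a noisy
# 1-D velocity signal and measure the accumulated position error.
dt, speed, noise_sd = 0.01, 0.2, 0.05   # s, m/s, m/s (illustrative)

def drift_after(seconds):
    true_x = est_x = 0.0
    for _ in range(int(seconds / dt)):
        true_x += speed * dt
        est_x += (speed + random.gauss(0, noise_sd)) * dt
    return abs(est_x - true_x)

for t in (10, 100, 1000):
    print(f"t = {t:5d} s  drift = {drift_after(t):.3f} m")
```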

  16. A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding.

    ERIC Educational Resources Information Center

    Zelek, John S.; Bromley, Sam; Asmar, Daniel; Thompson, David

    2003-01-01

    A device that relays navigational information using a portable tactile glove and a wearable computer and camera system was tested with nine adults with visual impairments. Paths traversed by subjects negotiating an obstacle course were not qualitatively different from paths produced with existing wayfinding devices and hitting probabilities were…

  17. Earth-Space Link Attenuation Estimation via Ground Radar Kdp

    NASA Technical Reports Server (NTRS)

    Bolen, Steven M.; Benjamin, Andrew L.; Chandrasekar, V.

    2003-01-01

    A method of predicting attenuation on microwave Earth/spacecraft communication links, over wide areas and under various atmospheric conditions, has been developed. In the area around the ground station locations, a nearly horizontally aimed polarimetric S-band ground radar measures the specific differential phase (Kdp) along the Earth-space path. The specific attenuation along a path of interest is then computed by use of a theoretical model of the relationship between the measured S-band specific differential phase and the specific attenuation at the frequency to be used on the communication link. The model includes effects of rain, wet ice, and other forms of precipitation. The attenuation on the path of interest is then computed by integrating the specific attenuation over the length of the path. This method can be used to determine statistics of signal degradation on Earth/spacecraft communication links. It can also be used to obtain real-time estimates of attenuation along multiple Earth/spacecraft links that are parts of a communication network operating within the radar coverage area, thereby enabling better management of the network through appropriate dynamic routing along the best combination of links.
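
    The integration step is straightforward to sketch. The power-law mapping from Kdp to specific attenuation below is a common functional form, but its coefficients are placeholders, not the article's fitted model.

```python
# Sketch of path attenuation from radar Kdp: map measured specific
# differential phase to specific attenuation via A = a * Kdp**b
# (coefficients depend on link frequency and precipitation assumptions;
# values here are placeholders), then integrate along the range gates.
a, b = 0.30, 1.05            # placeholder power-law coefficients
gate_km = 0.25               # range-gate spacing along the Earth-space path

kdp_profile = [0.1, 0.4, 1.2, 2.0, 1.5, 0.6, 0.2]   # deg/km per gate

specific_atten = [a * k**b for k in kdp_profile]      # dB/km
path_atten_db = sum(sa * gate_km for sa in specific_atten)
print(f"predicted link attenuation: {path_atten_db:.2f} dB")
```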

  18. Ensuring critical event sequences in high consequence computer based systems as inspired by path expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidd, M.E.C.

    1997-02-01

    The goal of our work is to provide a high level of confidence that critical software driven event sequences are maintained in the face of hardware failures, malevolent attacks and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path expression usage and implementation.
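
    The underlying idea, constraining admissible event orderings to a regular language, can be illustrated with an ordinary regular expression checked against a logged event sequence. The event alphabet and expression below are invented for illustration and are not from the report.

```python
import re

# Illustrative only: encode an admissible critical sequence as a regular
# expression over single-letter event codes (a=arm, v=verify, f=fire,
# s=safe) and reject any logged history that deviates from it.
ADMISSIBLE = re.compile(r"(av+fs)+")

def sequence_ok(events):
    return ADMISSIBLE.fullmatch("".join(events)) is not None

print(sequence_ok(["a", "v", "f", "s"]))   # True
print(sequence_ok(["a", "f", "s"]))        # False: fired without verify
```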

  19. Limited-path-length entanglement percolation in quantum complex networks

    NASA Astrophysics Data System (ADS)

    Cuquet, Martí; Calsamiglia, John

    2011-03-01

We study entanglement distribution in quantum complex networks where nodes are connected by bipartite entangled states. These networks are characterized by a complex structure, which dramatically affects how information is transmitted through them. For pure quantum state links, quantum networks exhibit a remarkable feature absent in classical networks: it is possible to effectively rewire the network by performing local operations on the nodes. We propose a family of such quantum operations that decrease the entanglement percolation threshold of the network and increase the size of the giant connected component. We provide analytic results for complex networks with an arbitrary (uncorrelated) degree distribution. These results are in good agreement with numerical simulations, which also show enhancement in correlated and real-world networks. The proposed quantum preprocessing strategies are not robust in the presence of noise. However, even when the links consist of (noisy) mixed states, one can send quantum information through a connecting path with a fidelity that decreases with the path length. In this noisy scenario, complex networks offer a clear advantage over regular lattices, namely, the fact that two arbitrary nodes can be connected through a relatively small number of steps, known as the small-world effect. We calculate the probability that two arbitrary nodes in the network can successfully communicate with a fidelity above a given threshold. This amounts to working out the classical problem of percolation with a limited path length. We find that this probability can be significant even for paths limited to few connections and that the results for standard (unlimited) percolation are soon recovered if the path length exceeds by a finite amount the average path length, which in complex networks generally scales logarithmically with the size of the network.
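
    The limited-path-length percolation quantity has a direct Monte Carlo reading: on a random graph whose links each "work" with some probability, estimate the chance that two random nodes are joined by a working path of at most L hops. The sketch below uses an Erdos-Renyi graph and illustrative parameters, not the paper's network ensembles.

```python
import random
from collections import deque

def trial(n=100, avg_deg=4.0, p=0.7, L=6):
    """One Monte Carlo sample: graph + breadth-first search to depth L."""
    edge_p = avg_deg / (n - 1)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < edge_p and random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    s, t = random.sample(range(n), 2)
    frontier, seen = deque([(s, 0)]), {s}
    while frontier:
        u, d = frontier.popleft()
        if u == t:
            return True
        if d == L:
            continue                      # hop budget exhausted
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return False

hits = sum(trial() for _ in range(200))
print(f"P(connected within 6 hops) = {hits / 200:.2f}")
```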

  20. Metabolic PathFinding: inferring relevant pathways in biochemical networks.

    PubMed

    Croes, Didier; Couche, Fabian; Wodak, Shoshana J; van Helden, Jacques

    2005-07-01

    Our knowledge of metabolism can be represented as a network comprising several thousands of nodes (compounds and reactions). Several groups applied graph theory to analyse the topological properties of this network and to infer metabolic pathways by path finding. This is, however, not straightforward, with a major problem caused by traversing irrelevant shortcuts through highly connected nodes, which correspond to pool metabolites and co-factors (e.g. H2O, NADP and H+). In this study, we present a web server implementing two simple approaches, which circumvent this problem, thereby improving the relevance of the inferred pathways. In the simplest approach, the shortest path is computed, while filtering out the selection of highly connected compounds. In the second approach, the shortest path is computed on the weighted metabolic graph where each compound is assigned a weight equal to its connectivity in the network. This approach significantly increases the accuracy of the inferred pathways, enabling the correct inference of relatively long pathways (e.g. with as many as eight intermediate reactions). Available options include the calculation of the k-shortest paths between two specified seed nodes (either compounds or reactions). Multiple requests can be submitted in a queue. Results are returned by email, in textual as well as graphical formats (available in http://www.scmbb.ulb.ac.be/pathfinding/).
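
    The second, weighted approach fits in a few lines: run a shortest-path search in which entering a compound costs its connectivity, so hub metabolites such as H2O are avoided unless no alternative exists. The toy graph below is invented; real queries run against a full metabolic network.

```python
import heapq

# Toy metabolic graph: H2O is a highly connected pool metabolite.
graph = {
    "glucose": ["G6P", "H2O"],
    "G6P": ["glucose", "F6P", "H2O"],
    "F6P": ["G6P", "target", "H2O"],
    "H2O": ["glucose", "G6P", "F6P", "target", "ATP", "NADP", "Pi"],
    "ATP": ["H2O"],
    "NADP": ["H2O"],
    "Pi": ["H2O"],
    "target": ["F6P", "H2O"],
}

def weighted_shortest_path(src, dst):
    """Dijkstra where traversing a node costs its degree."""
    pq, best = [(0, src, [src])], {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt in graph[node]:
            heapq.heappush(pq, (cost + len(graph[nxt]), nxt, path + [nxt]))
    return None

# The hub shortcut glucose-H2O-target costs 9; the biochemically
# meaningful route wins with cost 8.
print(weighted_shortest_path("glucose", "target"))
```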

  1. Energetically optimal travel across terrain: visualizations and a new metric of geographic distance with anthropological applications

    NASA Astrophysics Data System (ADS)

    Wood, Brian M.; Wood, Zoë J.

    2006-01-01

    We present a visualization and computation tool for modeling the caloric cost of pedestrian travel across three dimensional terrains. This tool is being used in ongoing archaeological research that analyzes how costs of locomotion affect the spatial distribution of trails and artifacts across archaeological landscapes. Throughout human history, traveling by foot has been the most common form of transportation, and therefore analyses of pedestrian travel costs are important for understanding prehistoric patterns of resource acquisition, migration, trade, and political interaction. Traditionally, archaeologists have measured geographic proximity based on "as the crow flies" distance. We propose new methods for terrain visualization and analysis based on measuring paths of least caloric expense, calculated using well established metabolic equations. Our approach provides a human centered metric of geographic closeness, and overcomes significant limitations of available Geographic Information System (GIS) software. We demonstrate such path computations and visualizations applied to archaeological research questions. Our system includes tools to visualize: energetic cost surfaces, comparisons of the elevation profiles of shortest paths versus least cost paths, and the display of paths of least caloric effort on Digital Elevation Models (DEMs). These analysis tools can be applied to calculate and visualize 1) likely locations of prehistoric trails and 2) expected ratios of raw material types to be recovered at archaeological sites.
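
    Summing a caloric cost along a candidate path is the basic primitive. The sketch below uses a simplified slope-dependent walking-cost model in the spirit of the metabolic equations the authors cite; the coefficients and waypoints are illustrative, not their published formulas.

```python
import math

# Each waypoint is (x, y, elevation) in meters.
path = [(0, 0, 100), (50, 0, 104), (100, 0, 115), (150, 0, 112)]

def segment_kcal(p, q, mass_kg=70.0):
    """Illustrative per-segment cost: base walking cost plus an
    asymmetric slope penalty (uphill costs more than gentle downhill)."""
    dx = math.dist(p[:2], q[:2])
    dz = q[2] - p[2]
    grade = dz / dx if dx else 0.0
    kcal_per_m = 0.05 + (0.3 * grade if grade > 0 else -0.05 * grade)
    return kcal_per_m * math.hypot(dx, dz) * (mass_kg / 70.0)

total = sum(segment_kcal(p, q) for p, q in zip(path, path[1:]))
print(f"estimated cost: {total:.1f} kcal")
```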

  2. Multi-hop path tracing of mobile robot with multi-range image

    NASA Astrophysics Data System (ADS)

    Choudhury, Ramakanta; Samal, Chandrakanta; Choudhury, Umakanta

    2010-02-01

It is well known that image processing depends heavily on the image representation technique. This paper intends to find the optimal path of a mobile robot in a specified area where obstacles are predefined and may be modified. The optimal path is represented using the quadtree method: the image is successively subdivided into quadrants, from which the quadtree is developed. In the quadtree, obstacle-free areas and partially filled areas are represented with different notations. After the quadtree is developed, a neighbor-finding technique is employed to find the optimal path that moves the robot from source to destination. The algorithm traverses the entire tree and locates common ancestors for computation, easing the robot's ability to trace the optimal path using adjacencies between neighboring nodes and determining such adjacencies in the horizontal, vertical, and diagonal directions. Efforts have been made to determine the movement between adjacent blocks in the quadtree, to detect transitions between blocks of equal size, and finally to generate the result.
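
    The subdivision itself is compact to sketch: quadrants that are uniformly free or full become leaves, and mixed quadrants are subdivided further. The grid below is a toy occupancy map, not the paper's environment.

```python
# Minimal quadtree subdivision of a binary occupancy grid (1 = obstacle).
def build(grid, x, y, size):
    cells = [grid[y + j][x + i] for j in range(size) for i in range(size)]
    if all(c == 0 for c in cells):
        return ("FREE", x, y, size)
    if all(c == 1 for c in cells):
        return ("FULL", x, y, size)
    h = size // 2
    return ("MIXED", x, y, size,
            [build(grid, x, y, h), build(grid, x + h, y, h),
             build(grid, x, y + h, h), build(grid, x + h, y + h, h)])

grid = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 0],
        [0, 1, 0, 0]]
print(build(grid, 0, 0, 4))
```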

  3. Integrated Flight Path Planning System and Flight Control System for Unmanned Helicopters

    PubMed Central

    Jan, Shau Shiun; Lin, Yu Hsiang

    2011-01-01

This paper focuses on the design of an integrated navigation and guidance system for unmanned helicopters. The integrated navigation system comprises two systems: the Flight Path Planning System (FPPS) and the Flight Control System (FCS). The FPPS finds the shortest flight path by the A-Star (A*) algorithm in an adaptive manner for different flight conditions, and the FPPS can add a forbidden zone to stop the unmanned helicopter from crossing over into dangerous areas. In this paper, the FPPS computation time is reduced by the multi-resolution scheme, and the flight path quality is improved by the path smoothing methods. Meanwhile, the FCS includes fuzzy inference systems (FISs) based on fuzzy logic. By using expert knowledge and experience to train the FIS, the controller can operate the unmanned helicopter without dynamic models. The integrated system of the FPPS and the FCS is aimed at providing navigation and guidance to the mission destination and it is implemented by coupling the flight simulation software, X-Plane, and the computing software, MATLAB. Simulations are performed and shown in real-time three-dimensional animations. Finally, the integrated system is demonstrated to work successfully in controlling the unmanned helicopter to operate in various terrains of a digital elevation model (DEM). PMID:22164029

  4. Integrated flight path planning system and flight control system for unmanned helicopters.

    PubMed

    Jan, Shau Shiun; Lin, Yu Hsiang

    2011-01-01

This paper focuses on the design of an integrated navigation and guidance system for unmanned helicopters. The integrated navigation system comprises two systems: the Flight Path Planning System (FPPS) and the Flight Control System (FCS). The FPPS finds the shortest flight path by the A-Star (A*) algorithm in an adaptive manner for different flight conditions, and the FPPS can add a forbidden zone to stop the unmanned helicopter from crossing over into dangerous areas. In this paper, the FPPS computation time is reduced by the multi-resolution scheme, and the flight path quality is improved by the path smoothing methods. Meanwhile, the FCS includes fuzzy inference systems (FISs) based on fuzzy logic. By using expert knowledge and experience to train the FIS, the controller can operate the unmanned helicopter without dynamic models. The integrated system of the FPPS and the FCS is aimed at providing navigation and guidance to the mission destination and it is implemented by coupling the flight simulation software, X-Plane, and the computing software, MATLAB. Simulations are performed and shown in real-time three-dimensional animations. Finally, the integrated system is demonstrated to work successfully in controlling the unmanned helicopter to operate in various terrains of a digital elevation model (DEM).
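
    The FPPS search step can be shown in miniature: A* on a grid where forbidden cells model no-fly zones. The grid, unit step costs, and Manhattan heuristic below are illustrative; the paper's system runs on terrain data with multi-resolution and path-smoothing stages on top.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected grid; grid[r][c] == 1 marks a forbidden cell."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    pq, g, came = [(h(start), 0, start)], {start: 0}, {}
    while pq:
        _, cost, cur = heapq.heappop(pq)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            r, c = nxt
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and not grid[r][c]:
                if cost + 1 < g.get(nxt, float("inf")):
                    g[nxt], came[nxt] = cost + 1, cur
                    heapq.heappush(pq, (cost + 1 + h(nxt), cost + 1, nxt))
    return None

forbidden = [[0, 0, 0, 0],
             [1, 1, 1, 0],
             [0, 0, 0, 0],
             [0, 1, 1, 0]]
print(a_star(forbidden, (0, 0), (3, 0)))
```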

  5. Exploring Issues of Quality of Service in a Next Generation Internet Testbed: A Case Study Using PathMaster

    PubMed Central

    Shifman, Mark A.; Sayward, Frederick G.; Mattie, Mark E.; Miller, Perry L.

    2002-01-01

    This case study describes a project that explores issues of quality of service (QoS) relevant to the next-generation Internet (NGI), using the PathMaster application in a testbed environment. PathMaster is a prototype computer system that analyzes digitized cell images from cytology specimens and compares those images against an image database, returning a ranked set of “similar” cell images from the database. To perform NGI testbed evaluations, we used a cluster of nine parallel computation workstations configured as three subclusters using Cisco routers. This architecture provides a local “simulated Internet” in which we explored the following QoS strategies: (1) first-in-first-out queuing, (2) priority queuing, (3) weighted fair queuing, (4) weighted random early detection, and (5) traffic shaping. The study describes the results of using these strategies with a distributed version of the PathMaster system in the presence of different amounts of competing network traffic and discusses certain of the issues that arise. The goal of the study is to help introduce NGI QoS issues to the Medical Informatics community and to use the PathMaster NGI testbed to illustrate concretely certain of the QoS issues that arise. PMID:12223501
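
    Of the five strategies tested, traffic shaping is the easiest to sketch: a token bucket admits packets while tokens last and delays the rest to smooth bursts. The rate and bucket depth below are illustrative, not the testbed's Cisco configuration.

```python
class TokenBucket:
    """Minimal token-bucket shaper: tokens accrue at a fixed rate up to
    a maximum depth; a packet is admitted only if enough tokens remain."""
    def __init__(self, rate_bps, depth_bits):
        self.rate, self.depth = rate_bps, depth_bits
        self.tokens, self.t = depth_bits, 0.0

    def admit(self, now, packet_bits):
        self.tokens = min(self.depth, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False            # shaped traffic waits (or is dropped)

bucket = TokenBucket(rate_bps=1e6, depth_bits=8e3)
for i, t in enumerate([0.000, 0.001, 0.002, 0.003]):
    print(i, bucket.admit(t, packet_bits=4000))   # True, True, False, False
```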

  6. Direct simulation of high-vorticity gas flows

    NASA Technical Reports Server (NTRS)

    Bird, G. A.

    1987-01-01

    The computational limitations associated with the molecular dynamics (MD) method and the direct simulation Monte Carlo (DSMC) method are reviewed in the context of the computation of dilute gas flows with high vorticity. It is concluded that the MD method is generally limited to the dense gas case in which the molecular diameter is one-tenth or more of the mean free path. It is shown that the cell size in DSMC calculations should be small in comparison with the mean free path, and that this may be facilitated by a new subcell procedure for the selection of collision partners.

  7. Calculations of atmospheric refraction for spacecraft remote-sensing applications

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1983-01-01

    Analytical solutions to the refraction integrals appropriate for ray trajectories along slant paths through the atmosphere are derived in this paper. This type of geometry is commonly encountered in remote-sensing applications utilizing an occultation technique. The solutions are obtained by evaluating higher-order terms from expansion of the refraction integral and are dependent on the vertical temperature distributions. Refraction parameters such as total refraction angles, air masses, and path lengths can be accurately computed. It is also shown that the method can be used for computing refraction parameters in astronomical refraction geometry for large zenith angles.

  8. Data acquisition and path selection decision making for an autonomous roving vehicle

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Shen, C. N.; Yerazunis, S. W.

    1976-01-01

Problems related to the guidance of an autonomous rover for unmanned planetary exploration were investigated. Topics included in these studies were: simulation on an interactive graphics computer system of the Rapid Estimation Technique for detection of discrete obstacles; incorporation of a simultaneous Bayesian estimate of states and inputs in the Rapid Estimation Scheme; development of methods for estimating actual laser rangefinder errors and their application to data provided by the Jet Propulsion Laboratory; and modification of a path selection system simulation computer code for evaluation of a hazard detection system based on laser rangefinder data.

  9. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple- instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  10. Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms

    NASA Astrophysics Data System (ADS)

    Samanta, A.; Todd, L. A.

A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms for use in the field, using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A comprehensive evaluation of algorithms for the environmental application of tomography requires the use of a battery of test concentration data, before field implementation, that models reality and tests the limits of the algorithms.
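
    The core of the ART family is a Kaczmarz-style row update: cycle over the ray-sum equations Ax = b, projecting the current image estimate onto each ray's hyperplane. The 2x2-pixel system below is a toy, not the chamber data from the study.

```python
import numpy as np

A = np.array([[1., 1., 0., 0.],    # each row: one open-path ray sum
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
b = np.array([3., 7., 4., 6.])     # measured path-integrated concentrations

x = np.zeros(4)                    # 2x2 concentration image, flattened
for sweep in range(50):
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i   # project onto ray equation
        x = np.clip(x, 0, None)   # concentrations cannot be negative

print(x.reshape(2, 2))
```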

  11. Computer-aided design/computer-aided manufacturing skull base drill.

    PubMed

    Couldwell, William T; MacDonald, Joel D; Thomas, Charles L; Hansen, Bradley C; Lapalikar, Aniruddha; Thakkar, Bharat; Balaji, Alagar K

    2017-05-01

    The authors have developed a simple device for computer-aided design/computer-aided manufacturing (CAD-CAM) that uses an image-guided system to define a cutting tool path that is shared with a surgical machining system for drilling bone. Information from 2D images (obtained via CT and MRI) is transmitted to a processor that produces a 3D image. The processor generates code defining an optimized cutting tool path, which is sent to a surgical machining system that can drill the desired portion of bone. This tool has applications for bone removal in both cranial and spine neurosurgical approaches. Such applications have the potential to reduce surgical time and associated complications such as infection or blood loss. The device enables rapid removal of bone within 1 mm of vital structures. The validity of such a machining tool is exemplified in the rapid (< 3 minutes machining time) and accurate removal of bone for transtemporal (for example, translabyrinthine) approaches.

  12. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.

  13. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171
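
    Stepping-stone estimation itself is easy to check on a toy conjugate model where the exact log marginal likelihood is known in closed form. The sketch below (normal likelihood with a normal prior, illustrative temperature schedule and sample counts) demonstrates the estimator only; it is not the phylogenetic implementation discussed in the papers.

```python
import numpy as np

# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 1). The power
# posterior at inverse temperature beta is conjugate, so we can sample
# it exactly and compare the stepping-stone estimate with the truth.
rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=20)
n, K, M = len(y), 32, 4000

def log_lik(theta):
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.sum((y[None, :] - theta[:, None])**2, axis=1))

betas = (np.arange(K + 1) / K) ** 3          # schedule clustered near 0
log_ml = 0.0
for k in range(K):
    prec = 1.0 + betas[k] * n                # power-posterior precision
    mean = betas[k] * y.sum() / prec
    theta = rng.normal(mean, 1 / np.sqrt(prec), size=M)
    ll = log_lik(theta) * (betas[k + 1] - betas[k])
    log_ml += np.logaddexp.reduce(ll) - np.log(M)   # log mean of ratios

Sigma = np.eye(n) + np.ones((n, n))          # exact marginal: y ~ N(0, I + J)
_, logdet = np.linalg.slogdet(Sigma)
exact = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(Sigma, y))
print(f"stepping-stone: {log_ml:.3f}   exact: {exact:.3f}")
```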

  14. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm was computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
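
    The two quantities the parameter-choice rule works with, the residual norm and the regularized solution norm, can be tabulated for a Tikhonov-regularized toy system in a few lines. This is only the diagnostic the model-function approach iterates on (here shown L-curve style over a fixed grid of parameters); the BLT forward model and the model-function update itself are not reproduced.

```python
import numpy as np

# Toy ill-conditioned least-squares problem: min ||Ax - b||^2 + lam ||x||^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 40))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=40)   # near-collinear columns
x_true = rng.normal(size=40)
b = A @ x_true + 0.01 * rng.normal(size=40)

for lam in (1e-6, 1e-4, 1e-2, 1.0):
    x = np.linalg.solve(A.T @ A + lam * np.eye(40), A.T @ b)
    print(f"lam={lam:8.0e}  residual={np.linalg.norm(A @ x - b):8.4f}"
          f"  ||x||={np.linalg.norm(x):8.2f}")
```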

  15. On Channel-Discontinuity-Constraint Routing in Wireless Networks

    PubMed Central

    Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.

    2011-01-01

Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, termed as “Channel-Discontinuity-Constraint” (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties; a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-paths increase in cost by at most a factor t = (1 − 2 sin(θ/2))^(−2). We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n^2) fixed-size messages, by developing an extension of Edmonds’ algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n^2) time, improving the previous best algorithm which requires O(n^3) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. PMID:24443646
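
    The constraint itself is easy to picture with a state-space search: states are (node, channel of the incoming link), and a link may only be taken on a channel different from the incoming one. The sketch below illustrates only the constraint on a toy topology; the paper's algorithm is a more efficient matching-based method with better worst-case complexity.

```python
import heapq, itertools

# edges: u -> list of (v, channel, cost)
edges = {
    "a": [("b", 1, 1.0), ("b", 2, 1.5)],
    "b": [("c", 1, 1.0), ("c", 2, 1.2)],
    "c": [],
}

def cdc_shortest(src, dst):
    tie = itertools.count()   # tie-breaker so heap never compares channels
    pq, best = [(0.0, next(tie), src, None, [src])], {}
    while pq:
        cost, _, u, ch, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if best.get((u, ch), float("inf")) <= cost:
            continue
        best[(u, ch)] = cost
        for v, c, w in edges[u]:
            if c != ch:       # channel-discontinuity constraint
                heapq.heappush(pq, (cost + w, next(tie), v, c, path + [v]))
    return None

# a-b on channel 1 (1.0) then b-c must change channel: 1.2, total 2.2.
print(cdc_shortest("a", "c"))
```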

  16. From the physics of interacting polymers to optimizing routes on the London Underground

    PubMed Central

    Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael

    2013-01-01

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198

  17. From the physics of interacting polymers to optimizing routes on the London Underground.

    PubMed

    Yeung, Chi Ho; Saad, David; Wong, K Y Michael

    2013-08-20

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise.

  18. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.

    2012-06-05

An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with a rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path, and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
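
    The comparison step can be sketched simply: average the force sampled at the left and right extremes of the weave; an asymmetry indicates the tool is off the joint line. The force values and the gain mapping asymmetry to millimeters below are assumed placeholders, not the patent's calibration.

```python
# Forces sampled at the two oscillation extremes (N, illustrative data).
left_forces = [812.0, 808.5, 815.2, 810.1]
right_forces = [790.3, 793.8, 788.9, 791.5]

asym = (sum(left_forces) / len(left_forces)
        - sum(right_forces) / len(right_forces))
GAIN_MM_PER_N = 0.02          # placeholder calibration gain (assumption)
offset_mm = GAIN_MM_PER_N * asym
print(f"asymmetry {asym:.1f} N -> steer {offset_mm:+.2f} mm toward the joint")
```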

  19. Predictor laws for pictorial flight displays

    NASA Technical Reports Server (NTRS)

    Grunwald, A. J.

    1985-01-01

    Two predictor laws are formulated and analyzed: (1) a circular path law based on constant accelerations perpendicular to the path and (2) a predictor law based on state transition matrix computations. It is shown that for both methods the predictor provides the essential lead zeros for the path-following task. However, in contrast to the circular path law, the state transition matrix law furnishes the system with additional zeros that entirely cancel out the higher-frequency poles of the vehicle dynamics. On the other hand, the circular path law yields a zero steady-state error in following a curved trajectory with a constant radius. A combined predictor law is suggested that utilizes the advantages of both methods. A simple analysis shows that the optimal prediction time mainly depends on the level of precision required in the path-following task, and guidelines for determining the optimal prediction time are given.
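
    Both laws can be shown in miniature for a single lateral channel (all values illustrative). The circular-path law extrapolates position under constant lateral acceleration; the transition-matrix law propagates a linear state with x(t+T) = e^{AT} x(t), which for the double integrator used here has the closed form [[1, T], [0, 1]].

```python
import numpy as np

y, v, a, T = 0.0, 2.0, 0.5, 1.5   # lateral pos (m), vel (m/s), accel (m/s^2), horizon (s)

# (1) circular-path (constant lateral acceleration) prediction:
y_circ = y + v * T + 0.5 * a * T**2

# (2) state-transition-matrix prediction for x' = A x with A = [[0, 1], [0, 0]]
#     (no acceleration state in this simplified model):
Phi = np.array([[1.0, T], [0.0, 1.0]])
y_stm, _ = Phi @ np.array([y, v])

print(f"circular-path: {y_circ:.2f} m   transition-matrix: {y_stm:.2f} m")
```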

  20. Trajectory generation for an on-road autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Horst, John; Barbera, Anthony

    2006-05-01

    We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally, warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle in lane safely and efficiently within speed and acceleration maximums. The algorithm functions in the context of other autonomous driving functions within a carefully designed vehicle control hierarchy.

  1. Recommended number of strides for automatic assessment of gait symmetry and regularity in above-knee amputees by means of accelerometry and autocorrelation analysis

    PubMed Central

    2012-01-01

    Background: Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm that automatically computes symmetry and regularity indices, and to assess the minimum number of strides needed for an appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods: Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of the step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding the initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results: All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding the initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions: Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
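
    The core of such a regularity computation is an unbiased, normalized autocorrelation of the acceleration signal evaluated at the average step and stride lags. A minimal Python sketch follows; the lag arguments are assumed to be known in samples, which is a simplification of the paper's procedure:

        import numpy as np

        def regularity_indices(acc, step_lag, stride_lag):
            """Ad1/Ad2 from the unbiased autocorrelation of an acceleration signal."""
            acc = acc - acc.mean()
            n = len(acc)
            ac = np.correlate(acc, acc, mode="full")[n - 1:]  # lags 0..n-1
            ac = ac / (n - np.arange(n))                      # unbiased estimate
            ac = ac / ac[0]                                   # 1.0 at zero lag
            return ac[step_lag], ac[stride_lag]               # Ad1, Ad2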

  2. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  3. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  4. Evaluation of a New Backtrack Free Path Planning Algorithm for Manipulators

    NASA Astrophysics Data System (ADS)

    Islam, Md. Nazrul; Tamura, Shinsuke; Murata, Tomonari; Yanase, Tatsuro

    This paper evaluates a newly proposed backtrack free path planning algorithm (BFA) for manipulators. BFA is an exact algorithm, i.e. it is resolution complete. Different from existing resolution complete algorithms, its computation time and memory space are proportional to the number of arms. Therefore, paths can be calculated within a practical, predetermined time even for manipulators with many arms, and it becomes possible to plan complicated motions of multi-arm manipulators in fully automated environments. The performance of BFA is evaluated for 2-dimensional environments while changing the number of arms and obstacle placements. Its performance under locus and attitude constraints is also evaluated. Evaluation results show that the computation volume of the algorithm is almost the same as the theoretical one, i.e. it increases linearly with the number of arms even in complicated environments. Moreover, BFA achieves constant performance independent of the environment.

  5. Flight in low-level wind shear

    NASA Technical Reports Server (NTRS)

    Frost, W.

    1983-01-01

    Results of studies of wind shear hazard to aircraft operation are summarized. Existing wind shear profiles currently used in computer and flight simulator studies are reviewed. The governing equations of motion for an aircraft are derived incorporating the variable wind effects. Quantitative discussions of the effects of wind shear on aircraft performance are presented. These are followed by a review of mathematical solutions to both the linear and nonlinear forms of the governing equations. Solutions with and without control laws are presented. The application of detailed analysis to develop warning and detection systems based on Doppler radar measuring wind speed along the flight path is given. A number of flight path deterioration parameters are defined and evaluated. Comparison of computer-predicted flight paths with those measured in a manned flight simulator is made. Some proposed airborne and ground-based wind shear hazard warning and detection systems are reviewed. The advantages and disadvantages of both types of systems are discussed.

  6. Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Knudson, Matthew D.; Colby, Mitchell; Tumer, Kagan

    2014-01-01

    Dynamic flight environments in which objectives and environmental features change with respect to time pose a difficult problem with regard to planning optimal flight paths. Path planning methods are typically computationally expensive, and are often difficult to implement in real time if system objectives are changed. This computational problem is compounded when multiple agents are present in the system, as the state and action space grows exponentially. In this work, we use cooperative coevolutionary algorithms in order to develop policies which control agent motion in a dynamic multiagent unmanned aerial system environment such that goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy which can map the new environmental features to a trajectory for the agent while ensuring safe and reliable operation, while providing 92% of the theoretically optimal performance.

  7. Computational fluid dynamics analysis of SSME phase 2 and phase 2+ preburner injector element hydrogen flow paths

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    1992-01-01

    Phase 2+ Space Shuttle Main Engine powerheads E0209 and E0215 degraded their main combustion chamber (MCC) liners at a faster rate than is normal for phase 2 powerheads. One possible cause of the accelerated degradation was a reduction of coolant flow through the MCC. Hardware changes were made to the preburner fuel leg which may have reduced the resistance and, therefore, pulled some of the hydrogen from the MCC coolant leg. A computational fluid dynamics (CFD) analysis was performed to determine hydrogen flow path resistances of the phase 2+ fuel preburner injector elements relative to the phase 2 element. FDNS was implemented on axisymmetric grids with the hydrogen assumed to be incompressible. The analysis was performed in two steps: the first isolated the effect of the different inlet areas and the second modeled the entire injector element hydrogen flow path.

  8. Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Colby, Mitchell; Knudson, Matthew D.; Tumer, Kagan

    2014-01-01

    Dynamic environments in which objectives and environmental features change with respect to time pose a difficult problem with regards to planning optimal paths through these environments. Path planning methods are typically computationally expensive, and are often difficult to implement in real time if system objectives are changed. This computational problem is compounded when multiple agents are present in the system, as the state and action space grows exponentially with the number of agents in the system. In this work, we use cooperative coevolutionary algorithms in order to develop policies which control agent motion in a dynamic multiagent unmanned aerial system environment such that goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy which can map the new environmental features to a trajectory for the agent while ensuring safe and reliable operation, while providing 92% of the theoretically optimal performance.

  9. A DNA-based molecular motor that can navigate a network of tracks

    NASA Astrophysics Data System (ADS)

    Wickham, Shelley F. J.; Bath, Jonathan; Katsuda, Yousuke; Endo, Masayuki; Hidaka, Kumi; Sugiyama, Hiroshi; Turberfield, Andrew J.

    2012-03-01

    Synthetic molecular motors can be fuelled by the hydrolysis or hybridization of DNA. Such motors can move autonomously and programmably, and long-range transport has been observed on linear tracks. It has also been shown that DNA systems can compute. Here, we report a synthetic DNA-based system that integrates long-range transport and information processing. We show that the path of a motor through a network of tracks containing four possible routes can be programmed using instructions that are added externally or carried by the motor itself. When external control is used we find that 87% of the motors follow the correct path, and when internal control is used 71% of the motors follow the correct path. Programmable motion will allow the development of computing networks, molecular systems that can sort and process cargoes according to instructions that they carry, and assembly lines that can be reconfigured dynamically in response to changing demands.

  10. 29 CFR 541.701 - Customarily and regularly.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 29 (Labor), Section 541.701 (2010-07-01): Customarily and regularly. Regulations Relating to Labor (Continued); Wage and Hour Division, Department of Labor; Regulations Defining and Delimiting the Exemptions for Executive, Administrative, Professional, Computer and Outside Sales Employees...

  11. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    PubMed

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test; hence, reducing the radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving edges and boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational cost of the algorithm is two fast Fourier transforms, two matrix-vector multiplications, and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient, and competitive with existing algorithms for solving TV regularization problems.
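
    In such ADAL/TV iterations, the quadratic subproblem is typically diagonalized by FFTs, while the l1-type TV term is handled by the linear-time shrinkage step mentioned above. A minimal Python sketch of that operator (illustrative, not the paper's code):

        import numpy as np

        def shrink(x, tau):
            """Soft-thresholding (shrinkage): the linear-time proximal step
            for the l1 part of an anisotropic TV term in each iteration."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)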

  12. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    When combined with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

  13. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    When combined with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.
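
    For intuition, the recursive least-squares machinery underlying such algorithms can be written down for plain linear features; the kernel and sparsification layers of the paper are omitted here. A sketch of L2-regularized recursive LSTD(0) in Python, with hypothetical names:

        import numpy as np

        class RLSTD:
            """Recursive least-squares TD(0) with L2 regularization
            (linear features; a simplified, non-kernel sketch)."""
            def __init__(self, n_features, gamma=0.99, reg=1.0):
                self.theta = np.zeros(n_features)
                self.P = np.eye(n_features) / reg   # inverse of (reg * I), updated recursively
                self.gamma = gamma

            def update(self, phi, phi_next, reward):
                d = phi - self.gamma * phi_next     # TD feature difference
                k = self.P @ phi
                denom = 1.0 + d @ k
                self.theta += k * (reward - d @ self.theta) / denom
                self.P -= np.outer(k, d @ self.P) / denom  # Sherman-Morrison update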

  14. Introduction of a computer-based method for automated planning of reduction paths under consideration of simulated muscular forces.

    PubMed

    Buschbaum, Jan; Fremd, Rainer; Pohlemann, Tim; Kristen, Alexander

    2017-08-01

    Reduction is a crucial step in the surgical treatment of bone fractures. Finding an optimal path for restoring anatomical alignment is considered technically demanding because collisions as well as high forces caused by surrounding soft tissues can impede the desired reduction movements. The repetition of reduction movements leads to a trial-and-error process, which prolongs the duration of surgery. By planning an appropriate reduction path (an optimal sequence of target-directed movements), these problems should be overcome. For this purpose, a computer-based method has been developed. Using the example of simple femoral shaft fractures, 3D models are generated from CT images. A reposition algorithm aligns both fragments by reconstructing their broken edges. According to the criteria of a deduced planning strategy, a modified A*-algorithm searches for a collision-free route of minimal force from the dislocated into the computed target position. Muscular forces are considered using a musculoskeletal reduction model (OpenSim model), and bone collisions are detected by an appropriate method. Five femoral SYNBONE models were broken into different fracture classification types and were automatically reduced from ten randomly selected displaced positions. The highest mean translational and rotational errors for achieving target alignment are [Formula: see text] and [Formula: see text]. The mean value and standard deviation of the occurring forces are [Formula: see text] for M. tensor fasciae latae and [Formula: see text] for M. semitendinosus over all trials. These pathways are precise and collision-free, the required forces are minimized, and they are thus regarded as optimal paths. A novel method for planning reduction paths under consideration of collisions and muscular forces is introduced. The results deliver additional knowledge for an appropriate tactical reduction procedure and can provide a basis for further navigated or robotic-assisted developments.
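
    The search step can be pictured as a standard A* skeleton in which the step cost is weighted by simulated muscle forces and successors are pruned by collision detection. The Python sketch below shows only the generic skeleton, with hypothetical callbacks; states are assumed hashable:

        import heapq, itertools

        def a_star(start, goal, neighbors, cost, heuristic):
            """Generic A*: neighbors(n) yields successor states, cost(a, b) is the
            (e.g., force-weighted) step cost, heuristic(n) a lower bound to goal."""
            tie = itertools.count()            # tie-breaker so states are never compared
            frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
            best_g = {}
            while frontier:
                _, _, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path
                if node in best_g and best_g[node] <= g:
                    continue                   # already expanded with a cheaper cost
                best_g[node] = g
                for nxt in neighbors(node):
                    g2 = g + cost(node, nxt)
                    heapq.heappush(frontier,
                                   (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]))
            return None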

  15. Estimation of the interference coupling into cables within electrically large multiroom structures

    NASA Astrophysics Data System (ADS)

    Keghie, J.; Kanyou Nana, R.; Schetelig, B.; Potthast, S.; Dickmann, S.

    2010-10-01

    Communication cables are used to transfer data between components of a system. As a part of the EMC analysis of complex systems, it is necessary to determine which level of interference can be expected at the input of connected devices due to the coupling into the irradiated cable. For electrically large systems consisting of several rooms with cables connecting components located in different rooms, an estimation of the coupled disturbances inside cables using commercial field computation software is often not feasible without several restrictions. In many cases, this is related to the non-availability of the computing memory and processing power needed for the computation. In this paper, we show that, starting from a topological analysis of the entire system, weak coupling paths within the system can be identified. By neglecting these coupling paths and using the transmission line approach, the original system can be simplified so that a simpler estimation is possible. Using the example of a system which is composed of two rooms, multiple apertures, and a network cable located in both chambers, it is shown that an estimation of the coupled disturbances due to external electromagnetic sources is feasible with this approach. Starting from an incident electromagnetic field, we determine transfer functions describing the coupling means (apertures, cables). Using these transfer functions and the knowledge of the weak coupling paths above, a decision is taken as to which coupling paths can be neglected during the estimation. The estimation of the coupling into the cable is then made taking only paths with strong coupling into account. The remaining part of the wiring harness in areas with weak coupling is represented by its input impedance. A comparison with the original network shows good agreement.

  16. The navigation system of the JPL robot

    NASA Technical Reports Server (NTRS)

    Thompson, A. M.

    1977-01-01

    The control structure of the JPL research robot and the operations of the navigation subsystem are discussed. The robot functions as a network of interacting concurrent processes distributed among several computers and coordinated by a central executive. The results of scene analysis are used to create a segmented terrain model in which surface regions are classified by traversability. The model is used by a path planning algorithm, PATH, which uses tree search methods to find the optimal path to a goal. In PATH, the search space is defined dynamically as a consequence of node testing. Maze-solving and the use of an associative data base for context-dependent node generation are also discussed. Execution of a planned path is accomplished by a feedback guidance process with automatic error recovery.

  17. Intelligent path loss prediction engine design using machine learning in the urban outdoor environment

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Lu, Jingyang; Xu, Yiran; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2018-05-01

    Due to the progressive expansion of public mobile networks and the dramatic growth in the number of wireless users in recent years, researchers are motivated to study radio propagation in urban environments and develop reliable and fast path loss prediction models. Over the last decades, different types of propagation models have been developed for urban path loss prediction, such as the Hata model and the COST 231 model. In this paper, the path loss prediction model is thoroughly investigated using machine learning approaches. Different non-linear feature selection methods are deployed and investigated to reduce the computational complexity. The simulation results are provided to demonstrate the validity of the machine-learning-based path loss prediction engine, which can correctly determine the signal propagation in a wireless urban setting.
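
    A machine-learning path loss engine of this kind reduces to regression on propagation features. The toy Python sketch below fits a random forest to synthetic data; the feature set and the synthetic path loss model are assumptions made purely for illustration:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Hypothetical features: log10(distance), frequency, antenna height, clutter index
        X = rng.random((1000, 4))
        y = 120.0 + 35.0 * X[:, 0] + 5.0 * rng.standard_normal(1000)  # synthetic loss in dB
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out R^2:", model.score(X_te, y_te))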

  18. Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.

    PubMed

    Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C

    2017-10-01

    The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (Palabos).
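
    The essence of a regularized collision is to replace the raw non-equilibrium populations by their projection onto low-order Hermite moments. Below is a single-node D2Q9 sketch in Python of the second-order projection; the paper's operator is more elaborate, so this is only a minimal illustration:

        import numpy as np

        # D2Q9 lattice velocities and weights; lattice sound speed squared is 1/3
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)
        cs2 = 1.0 / 3.0

        def regularize(f, feq):
            """Project f - feq onto the second-order Hermite basis."""
            fneq = f - feq
            Pi = np.einsum("i,ia,ib->ab", fneq, c, c)             # non-equilibrium stress
            H2 = np.einsum("ia,ib->iab", c, c) - cs2 * np.eye(2)  # second Hermite tensor
            return feq + w / (2 * cs2**2) * np.einsum("iab,ab->i", H2, Pi)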

  19. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572

  20. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, including Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- and medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust, both in single impact force reconstruction and in consecutive impact force reconstruction.
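
    For readers who want a runnable baseline, the same l1-regularized deconvolution can be attacked with plain iterative soft-thresholding (ISTA) instead of the paper's PDIPM; the Python sketch below makes that swap explicit and is illustrative only:

        import numpy as np

        def ista(A, y, lam, n_iter=500):
            """Minimize 0.5*||A f - y||^2 + lam*||f||_1 by iterative
            soft-thresholding (a simple stand-in for PDIPM)."""
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            f = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = f - A.T @ (A @ f - y) / L    # gradient step
                f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
            return f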

  1. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  2. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesi, Andrew; Lee, Byeongdu

    Herein, a general method to calculate the scattering functions of polyhedra, including both regular and semi-regular polyhedra, is presented. These calculations may be achieved by breaking a polyhedron into sets of congruent pieces, thereby reducing computation time by taking advantage of Fourier transforms and inversion symmetry. Each piece belonging to a set or subunit can be generated by either rotation or translation. Further, general strategies to compute truncated, concave and stellated polyhedra are provided. Using this method, the asymptotic behaviors of the polyhedral scattering functions are compared with that of a sphere. It is shown that, for a regular polyhedron, the form factor oscillation at high q is correlated with the face-to-face distance. In addition, polydispersity affects the Porod constant. The ideas presented herein will be important for the characterization of nanomaterials using small-angle scattering.
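
    The high-q comparison baseline is the sphere form factor, which can be evaluated directly; a small Python sketch (q > 0 assumed):

        import numpy as np

        def sphere_form_factor(q, R):
            """Normalized sphere form factor P(q) = |F(q)|^2, the reference
            against which the polyhedral scattering functions are compared."""
            x = q * R
            F = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
            return F**2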

  4. E&V (Evaluation and Validation) Reference Manual, Version 1.0.

    DTIC Science & Technology

    1988-07-01

    [Figure: general reference information extracted from indexes and cross references featured in the Reference Manual.] The manual allows a reader to arrive at E&V techniques through many different paths, and provides a means to extract useful information along the way. Comments may be sent electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski, AFWAL/AAAF, Wright-Patterson AFB, OH 45433-6543.

  5. Relevancy in Problem Solving: A Computational Framework

    ERIC Educational Resources Information Center

    Kwisthout, Johan

    2012-01-01

    When computer scientists discuss the computational complexity of, for example, finding the shortest path from building A to building B in some town or city, their starting point typically is a formal description of the problem at hand, e.g., a graph with weights on every edge where buildings correspond to vertices, routes between buildings to…

  6. Computer-Assisted Diagnostic Decision Support: History, Challenges, and Possible Paths Forward

    ERIC Educational Resources Information Center

    Miller, Randolph A.

    2009-01-01

    This paper presents a brief history of computer-assisted diagnosis, including challenges and future directions. Some ideas presented in this article on computer-assisted diagnostic decision support systems (CDDSS) derive from prior work by the author and his colleagues (see list in Acknowledgments) on the INTERNIST-1 and QMR projects. References…

  7. Design, Development and Implementation of a Middle School Computer Applications Curriculum.

    ERIC Educational Resources Information Center

    Pina, Anthony A.

    This report documents the design, development, and implementation of computer applications curricula in a pilot program augmenting the regular curriculum for eighth graders at a private middle school. In assessing the needs of the school, a shift in focus was made from computer programming to computer application. The basic objectives of the…

  8. On zero variance Monte Carlo path-stretching schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lux, I.

    1983-08-01

    A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.

  9. Quantum robots and environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.

    1998-08-01

    Quantum robots and their interactions with environments of quantum systems are described, and their study justified. A quantum robot is a mobile quantum system that includes an on-board quantum computer and needed ancillary systems. Quantum robots carry out tasks whose goals include specified changes in the state of the environment, or carrying out measurements on the environment. Each task is a sequence of alternating computation and action phases. Computation phase activities include determination of the action to be carried out in the next phase, and recording of information on neighborhood environmental system states. Action phase activities include motion of the quantum robot and changes in the neighborhood environment system states. Models of quantum robots and their interactions with environments are described using discrete space and time. A unitary step operator T that gives the single time step dynamics is associated with each task. T = T_a + T_c is a sum of action phase and computation phase step operators. Conditions that T_a and T_c should satisfy are given, along with a description of the evolution as a sum over paths of completed phase input and output states. A simple example of a task, carrying out a measurement on a very simple environment, is analyzed in detail. A decision tree for the task is presented and discussed in terms of the sums over phase paths. It is seen that no definite times or durations are associated with the phase steps in the tree, and that the tree describes the successive phase steps in each path in the sum over phase paths.

  10. Prospective Optimization with Limited Resources.

    PubMed

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-09-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their "depth of computation") and how often they attempted to incorporate new information about the future rewards (their "recalculation period"). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation.
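
    The "brute-force exploration up to a finite depth" that matched the human data can be phrased as a depth-limited exhaustive search. A toy recursive sketch in Python, with hypothetical callbacks for the disk values and grid successors:

        def best_path_value(value, successors, node, depth):
            """Total reward of the best path starting at `node`, looking
            `depth` steps ahead (the 'depth of computation')."""
            if depth == 0:
                return value(node)
            future = [best_path_value(value, successors, nxt, depth - 1)
                      for nxt in successors(node)]
            return value(node) + (max(future) if future else 0.0)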

  11. Regularization method for large eddy simulations of shock-turbulence interactions

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2018-05-01

    The rapid change in scales over a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k^(-5/3), and is computed locally in physical space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.

  12. The generative power of weighted one-sided and regular sticker systems

    NASA Astrophysics Data System (ADS)

    Siang, Gan Yee; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2014-06-01

    Sticker systems were introduced in 1998 as a DNA computing model that uses the recombination behavior of DNA molecules. The Watson-Crick complementarity principle of DNA molecules is used abstractly in sticker systems to perform their computation. In this paper, the generative power of weighted one-sided sticker systems and weighted regular sticker systems is investigated. Moreover, the relationship of the families of languages generated by these two variants of sticker systems to the Chomsky hierarchy is also presented.

  13. In Silico Enhancing M. tuberculosis Protein Interaction Networks in STRING To Predict Drug-Resistance Pathways and Pharmacological Risks.

    PubMed

    Mei, Suyu

    2018-05-04

    Bacterial protein-protein interaction (PPI) networks are significant for revealing the machinery of signal transduction and drug resistance within bacterial cells. The database STRING has collected a large number of bacterial pathogen PPI networks, but most of the data are of low quality, having been neither experimentally nor computationally validated, which restricts further biomedical applications. We exploit the experimental data via four solutions to enhance the quality of M. tuberculosis H37Rv (MTB) PPI networks in STRING. Computational results show that the experimental data derived jointly by two-hybrid and copurification approaches are the most reliable for training an L2-regularized logistic regression model for MTB PPI network validation. On the basis of the validated MTB PPI networks, we further study three problems via a breadth-first graph search algorithm: (1) discovery of MTB drug-resistance pathways by searching for paths between known drug-target genes and drug-resistance genes, (2) choosing potential cotarget genes by searching for critical genes located on multiple pathways, and (3) choosing essential drug-target genes via analysis of the network degree distribution. In addition, we combine the validated MTB PPI networks with human PPI networks to analyze the potential pharmacological risks of known and candidate drug-target genes from the point of view of systems pharmacology. The evidence from protein structure alignment demonstrates that drugs that act on MTB target genes could also adversely act on human signaling pathways.
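
    Problem (1) above is, computationally, a shortest-path query on the validated PPI graph. A minimal breadth-first search sketch in Python (adjacency given as a dict of neighbor sets; illustrative only):

        from collections import deque

        def bfs_path(graph, source, target):
            """Shortest path (fewest interactions) between a drug-target gene
            and a drug-resistance gene in a PPI network."""
            queue, visited = deque([[source]]), {source}
            while queue:
                path = queue.popleft()
                if path[-1] == target:
                    return path
                for nxt in graph.get(path[-1], ()):
                    if nxt not in visited:
                        visited.add(nxt)
                        queue.append(path + [nxt])
            return None                      # no connecting pathway found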

  14. Intrinsic viscosity and the electrical polarizability of arbitrarily shaped objects

    NASA Astrophysics Data System (ADS)

    Mansfield, Marc L.; Douglas, Jack F.; Garboczi, Edward J.

    2001-12-01

    The problem of calculating the electric polarizability tensor αe of objects of arbitrary shape has been reformulated in terms of path integration and implemented computationally. The method simultaneously yields the electrostatic capacity C and the equilibrium charge density. These functionals of particle shape are important in many materials science applications, including the conductivity and viscosity of filled materials and suspensions. The method has been validated through comparison with exact results (for the sphere, the circular disk, touching spheres, and tori); it has been found that 10^6 trajectories yield an accuracy of about four and three significant figures for C and αe, respectively. The method is fast: for simple objects, 10^6 trajectories require about 1 min on a PC. It is also versatile: switching from one object to another is easy. Predictions have also been made for regular polygons, polyhedra, and right circular cylinders, since these shapes are important in applications and since numerical calculations of high stated accuracy are available. Finally, the path-integration method has been applied to estimate transport properties of both linear flexible polymers (random walk chains of spheres) and lattice model dendrimer molecules. This requires probing of an ensemble of objects. For linear chains, the distribution functions of C and of the trace of αe are found to be universal in a size coordinate reduced by the chain radius of gyration. For dendrimers, these distribution functions become increasingly sharp with generation number. It has been found that C and αe provide important information about the distribution of molecular size and shape and that they are important for estimating the Stokes friction and intrinsic viscosity of macromolecules.
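
    The path-integration idea can be made concrete for the capacity: launch Brownian walkers from an enclosing sphere of radius R and count the fraction that hit the object before escaping to infinity, so that C is approximately R times that fraction. The Python sketch below treats a spherical target and uses a simplified radial re-entry rule (production codes use the exact return distribution), so it is an approximation:

        import numpy as np

        rng = np.random.default_rng(0)

        def capacity_sphere(a=1.0, R=5.0, n_walkers=20000, eps=1e-4):
            """Walk-on-spheres estimate of the capacity of a sphere of
            radius a (exact answer: C = a)."""
            hits = 0
            for _ in range(n_walkers):
                p = rng.normal(size=3)
                p *= R / np.linalg.norm(p)        # launch point on radius-R sphere
                while True:
                    d = np.linalg.norm(p) - a     # distance to the target surface
                    if d < eps:
                        hits += 1
                        break
                    step = rng.normal(size=3)
                    p = p + d * step / np.linalg.norm(step)  # walk-on-spheres jump
                    r = np.linalg.norm(p)
                    if r > R:
                        if rng.random() > R / r:  # walker escapes to infinity
                            break
                        p *= R / r                # simplified re-entry (approximate)
            return R * hits / n_walkers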

  15. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2013-01-01

    Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories at great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.
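
    The tree-growing core of such robotics-inspired methods is RRT-like. A toy Euclidean sketch in Python conveys the mechanics (tree rooted at the start, samples biased toward the goal); real conformational-space moves, fragment replacement, and energy checks are omitted:

        import numpy as np

        rng = np.random.default_rng(0)

        def rrt(start, goal, step=0.1, goal_tol=0.2, n_iter=5000):
            """Grow a tree from `start` toward `goal` in the unit box."""
            start, goal = np.asarray(start, float), np.asarray(goal, float)
            nodes, parents = [start], [-1]
            for _ in range(n_iter):
                target = goal if rng.random() < 0.1 else rng.random(len(start))
                i = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))
                d = target - nodes[i]
                new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
                nodes.append(new)
                parents.append(i)
                if np.linalg.norm(new - goal) < goal_tol:   # goal region reached
                    path, j = [], len(nodes) - 1
                    while j != -1:
                        path.append(nodes[j])
                        j = parents[j]
                    return path[::-1]
            return None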

  16. Analytical modeling of the structureborne noise path on a small twin-engine aircraft

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III; Stokes, A. Westagard; Garrelick, J. M.; Martini, K. F.

    1988-01-01

    The structureborne noise path of a six passenger twin-engine aircraft is analyzed. Models of the wing and fuselage structures as well as the interior acoustic space of the cabin are developed and used to evaluate sensitivity to structural and acoustic parameters. Different modeling approaches are used to examine aspects of the structureborne path. These approaches are guided by a number of considerations including the geometry of the structures, the frequency range of interest, and the tractability of the computations. Results of these approaches are compared with experimental data.

  17. On static triplet structures in fluids with quantum behavior.

    PubMed

    Sesé, Luis M

    2018-03-14

    The problem of the equilibrium triplet structures in fluids with quantum behavior is discussed. Theoretical questions of interest to the real space structures are addressed by studying the three types of structures that can be determined via path integrals (instantaneous, centroid, and total thermalized-continuous linear response). The cases of liquid para-H2 and liquid neon on their crystallization lines are examined with path-integral Monte Carlo simulations, the focus being on the instantaneous and the centroid triplet functions (equilateral and isosceles configurations). To analyze the results further, two standard closures, Kirkwood superposition and Jackson-Feenberg convolution, are utilized. In addition, some pilot calculations with path integrals and closures of the instantaneous triplet structure factor of liquid para-H2 are also carried out for the equilateral components. Triplet structural regularities connected to the pair radial structures are identified, a remarkable usefulness of the closures employed is observed (e.g., triplet spatial functions for medium-long distances, triplet structure factors for medium k wave numbers), and physical insight into the role of pair correlations near quantum crystallization is gained.

  18. On static triplet structures in fluids with quantum behavior

    NASA Astrophysics Data System (ADS)

    Sesé, Luis M.

    2018-03-01

    The problem of the equilibrium triplet structures in fluids with quantum behavior is discussed. Theoretical questions of interest to the real space structures are addressed by studying the three types of structures that can be determined via path integrals (instantaneous, centroid, and total thermalized-continuous linear response). The cases of liquid para-H2 and liquid neon on their crystallization lines are examined with path-integral Monte Carlo simulations, the focus being on the instantaneous and the centroid triplet functions (equilateral and isosceles configurations). To analyze the results further, two standard closures, Kirkwood superposition and Jackson-Feenberg convolution, are utilized. In addition, some pilot calculations with path integrals and closures of the instantaneous triplet structure factor of liquid para-H2 are also carried out for the equilateral components. Triplet structural regularities connected to the pair radial structures are identified, a remarkable usefulness of the closures employed is observed (e.g., triplet spatial functions for medium-long distances, triplet structure factors for medium k wave numbers), and physical insight into the role of pair correlations near quantum crystallization is gained.

  19. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms

    PubMed Central

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002
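
    The Auto-Switch rule described above amounts to a stagnation test on the best-fitness history. A minimal Python sketch (the window and tolerance are hypothetical parameters):

        def should_switch(best_fitness_history, window=50, tol=1e-6):
            """Switch from indirect to direct encoding once the best fitness
            has stagnated over the last `window` generations."""
            if len(best_fitness_history) < window:
                return False
            recent = best_fitness_history[-window:]
            return max(recent) - min(recent) < tol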

  20. Processing SPARQL queries with regular expressions in RDF databases

    PubMed Central

    2011-01-01

    Background: As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results: In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions: Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225

  1. Processing SPARQL queries with regular expressions in RDF databases.

    PubMed

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
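
    The flavor of such queries is easy to reproduce with rdflib in Python: a SPARQL query whose FILTER applies a regular expression to literal values. The tiny graph below is fabricated purely for illustration:

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.p53, EX.label, Literal("tumor protein p53")))
        g.add((EX.brca1, EX.label, Literal("breast cancer type 1 protein")))

        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?s WHERE {
            ?s ex:label ?name .
            FILTER regex(?name, "^tumor", "i")
        }
        """
        for row in g.query(q):
            print(row.s)   # -> http://example.org/p53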

  2. Dispersion in Spherical Water Drops.

    ERIC Educational Resources Information Center

    Eliason, John C., Jr.

    1989-01-01

    Discusses a laboratory exercise simulating the paths of light rays through spherical water drops by applying principles of ray optics and geometry. Describes four parts: determining the output angles, computer simulation, explorations, and model testing and solutions. Provides a computer program and some diagrams. (YP)

  3. Exploring the spectrum of regularized bosonic string theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambjørn, J., E-mail: ambjorn@nbi.dk; Makeenko, Y., E-mail: makeenko@nbi.dk

    2015-03-15

    We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.

  4. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest-path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution shape description of large macromolecule surfaces possible. Experimental results show that compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
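    The core of the surface-based idea can be sketched as a multi-source Dijkstra search restricted to mesh vertices, seeded from vertices lying on the convex hull. The toy graph below stands in for a real SES triangulation; the mesh and hull here are hypothetical, not the paper's implementation:

        # Sketch: Travel Depth as an on-surface shortest-path problem.
        import networkx as nx

        def travel_depth(mesh_graph, hull_vertices):
            # Multi-source Dijkstra: depth of a vertex is its shortest
            # on-surface distance to any convex-hull vertex.
            return nx.multi_source_dijkstra_path_length(
                mesh_graph, sources=set(hull_vertices), weight="length")

        G = nx.Graph()
        G.add_weighted_edges_from(
            [("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 4.0)], weight="length")
        print(travel_depth(G, hull_vertices=["a"]))  # {'a': 0.0, 'b': 1.0, 'c': 3.0}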

  5. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by selecting a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. The method is therefore gradient-free, can encode various similarity metrics (through simple changes to the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.

  6. A Lightweight Radio Propagation Model for Vehicular Communication in Road Tunnels.

    PubMed

    Qureshi, Muhammad Ahsan; Noor, Rafidah Md; Shamim, Azra; Shamshirband, Shahaboddin; Raymond Choo, Kim-Kwang

    2016-01-01

    Radio propagation models (RPMs) are generally employed in Vehicular Ad Hoc Networks (VANETs) to predict path loss in multiple operating environments (e.g. modern road infrastructure such as flyovers, underpasses and road tunnels). For example, different RPMs have been developed to predict propagation behaviour in road tunnels. However, most existing RPMs for road tunnels are computationally complex and are based on field measurements in frequency bands not suitable for VANET deployment. Furthermore, in tunnel applications, the consequences of moving radio obstacles, such as large buses and delivery trucks, are generally not considered in existing RPMs. This paper proposes a computationally inexpensive RPM with a minimal set of parameters to predict path loss within an acceptable range for road tunnels. The proposed RPM utilizes geometric properties of the tunnel, such as height and width along with the distance between sender and receiver, to predict the path loss. The proposed RPM also considers the additional attenuation caused by the moving radio obstacles in road tunnels, while requiring a negligible overhead in terms of computational complexity. To demonstrate the utility of our proposed RPM, we conduct a comparative summary and evaluate its performance. Specifically, an extensive data gathering campaign is carried out in order to evaluate the proposed RPM. The field measurements use the 5 GHz frequency band, which is suitable for vehicular communication. The results demonstrate that a close match exists between the predicted values and measured values of path loss. In particular, an average accuracy of 94% is found with R2 = 0.86.

  7. Protection of Medical Equipment against Electromagnetic Pulse (EMP): phase I

    DTIC Science & Technology

    1986-06-12

    case condition. The current in the power cord path due to the EMP pin threat is computed from: V_EMP = I(R_S + R_B1 + R_B2 + R_L + R_B3 + Z_C + R_B4 + R + R_B5 + R_B6 + R_B7) … base-emitter, causing damage. [Figure 7.5 ESA: Patient monitor path to ground; component values (1N4154, 2N4401) from the circuit diagram omitted.] 7.1.3 Foot Switch … computed from: V_EMP = I(R_S + R_B) + V_BD (7.5); 1800 = I(100 + 25) + 50 (7.6). The resulting threat current is 14 A. The rectifier diode threshold is 3.7 A at f

  8. String tightening as a self-organizing phenomenon.

    PubMed

    Banerjee, Bonny

    2007-09-01

    The phenomenon of self-organization has been of special interest to the neural network community throughout the last couple of decades. In this paper, we study a variant of the self-organizing map (SOM) that models the phenomenon of self-organization of the particles forming a string when the string is tightened from one or both of its ends. The proposed variant, called the string tightening self-organizing neural network (STON), can be used to solve certain practical problems, such as computation of shortest homotopic paths, smoothing paths to avoid sharp turns, computation of convex hull, etc. These problems are of considerable interest in computational geometry, robotics path-planning, artificial intelligence (AI) (diagrammatic reasoning), very large scale integration (VLSI) routing, and geographical information systems. Given a set of obstacles and a string with two fixed terminal points in a 2-D space, the STON model continuously tightens the given string until the unique shortest configuration in terms of the Euclidean metric is reached. The STON minimizes the total length of a string on convergence by dynamically creating and selecting feature vectors in a competitive manner. Proof of correctness of this anytime algorithm and experimental results obtained by its deployment have been presented in the paper.

  9. Spectroscopic fingerprints of toroidal nuclear quantum delocalization via ab initio path integral simulations.

    PubMed

    Schütt, Ole; Sebastiani, Daniel

    2013-04-05

    We investigate the quantum-mechanical delocalization of hydrogen in rotational symmetric molecular systems. To this purpose, we perform ab initio path integral molecular dynamics simulations of a methanol molecule to characterize the quantum properties of hydrogen atoms in a representative system by means of their real-space and momentum-space densities. In particular, we compute the spherically averaged momentum distribution n(k) and the pseudoangular momentum distribution n(kθ). We interpret our results by comparing them to path integral samplings of a bare proton in an ideal torus potential. We find that the hydroxyl hydrogen exhibits a toroidal delocalization, which leads to characteristic fingerprints in the line shapes of the momentum distributions. We can describe these specific spectroscopic patterns quantitatively and compute their onset as a function of temperature and potential energy landscape. The delocalization patterns in the projected momentum distribution provide a promising computational tool to address the intriguing phenomenon of quantum delocalization in condensed matter and its spectroscopic characterization. As the momentum distribution n(k) is also accessible through Nuclear Compton Scattering experiments, our results will help to interpret and understand future measurements more thoroughly. Copyright © 2012 Wiley Periodicals, Inc.

  10. Ab initio computational study of reaction mechanism of peptide bond formation on HF/6-31G(d,p) level

    NASA Astrophysics Data System (ADS)

    Siahaan, P.; Lalita, M. N. T.; Cahyono, B.; Laksitorini, M. D.; Hildayani, S. Z.

    2017-02-01

    Peptides play an important role in the modulation of various cell functions. Therefore, the peptide bond formation reaction is an important chemical reaction. One way to probe the reaction of peptide synthesis is a computational method. The purpose of this research is to determine the reaction mechanism of peptide bond formation in Ac-PV-NH2 and Ac-VP-NH2 synthesis from the amino acids proline and valine by an ab initio computational approach. The calculations were carried out at the HF/6-31G(d,p) level of theory for the four mechanisms (paths 1 to 4) proposed in this research. The results show that the highest rate-determining-step barriers between reactant and transition state (TS) for paths 1, 2, 3, and 4 are 163.06, 1868, 5685, and 1837 kJ.mol-1, respectively. The calculation shows that the most preferred reaction for Ac-PV-NH2 and Ac-VP-NH2 synthesis from proline and valine is path 1 (initiated by the termination of H+ in the proline amino acid), which produces Ac-PV-NH2.

  11. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. Using the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively via a soft-thresholding operation. Choosing the soft-thresholding parameter …
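    A minimal sketch of the soft-thresholding iteration mentioned above (an ISTA-style loop, written as if the unknowns were the wavelet coefficients themselves; the operator A, step size, and parameter values are illustrative assumptions, not the paper's setup):

        import numpy as np

        def soft_threshold(v, t):
            # proximal operator of the l1 penalty
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, b, mu, step, n_iter=300):
            # minimize ||A x - b||^2 / 2 + mu * ||x||_1 by iterative soft-thresholding
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = soft_threshold(x - step * grad, step * mu)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 50))
        x_true = np.zeros(50); x_true[3] = 2.0
        b = A @ x_true
        x = ista(A, b, mu=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
        print(np.flatnonzero(np.abs(x) > 0.1))  # sparse support is recovered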

  12. Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kohen, Hamid

    1997-01-01

    This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.

  13. Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing

    DTIC Science & Technology

    2006-11-01

    in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more … complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been "free" with increased … the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and

  14. An analysis of running skyline load path.

    Treesearch

    Ward W. Carson; Charles N. Mann

    1971-01-01

    This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...

  15. FACTOR - FACTOR II. Departmental Program and Model Documentation 71-3.

    ERIC Educational Resources Information Center

    Wilson, Stanley; Billingsley, Ray

    This computer program is designed to optimize a Cobb-Douglas type of production function. The user of this program may choose isoquants and/or the expansion path for a Cobb-Douglas type of production function with up to nine resources. An expansion path is the combination of quantities of each resource that minimizes the cost at each production…

  16. ABLEPathPlanner library for Umbra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oppel III, Fred J; Xavier, Patrick G.; Gottlieb, Eric Joseph

    Umbra contains a flexible, modular path planner that is used to simulate complex entity behaviors moving within 3D terrain environments that include buildings, barriers, roads, bridges, fences, and a variety of other terrain features (water, vegetation, slope, etc.). The path-planning algorithm is a critical component in executing these tactical behaviors: it provides realistic entity movement while maintaining efficient system computing performance.

  17. User-Customizing HIS Interface by Light Programming Tool: the Case of Redesigning the Nursing Kardex with InfoPath 2003

    PubMed Central

    Chang, Tsuei-Rung; Chang, Polun

    2006-01-01

    Due to lack of IT resources, the End-User Computing strategy seems useful for the front-end users to develop and customize their own information application. We taught the nurses to use the InfoPath 2003 to design their own card-filing Kardex system and observed promising results. PMID:17238497

  18. Finding the way with a noisy brain.

    PubMed

    Cheung, Allen; Vickerstaff, Robert

    2010-11-11

    Successful navigation is fundamental to the survival of nearly every animal on earth, and achieved by nervous systems of vastly different sizes and characteristics. Yet surprisingly little is known of the detailed neural circuitry from any species which can accurately represent space for navigation. Path integration is one of the oldest and most ubiquitous navigation strategies in the animal kingdom. Despite a plethora of computational models, from equational to neural network form, there is currently no consensus, even in principle, of how this important phenomenon occurs neurally. Recently, all path integration models were examined according to a novel, unifying classification system. Here we combine this theoretical framework with recent insights from directed walk theory, and develop an intuitive yet mathematically rigorous proof that only one class of neural representation of space can tolerate noise during path integration. This result suggests many existing models of path integration are not biologically plausible due to their intolerance to noise. This surprising result imposes significant computational limitations on the neurobiological spatial representation of all successfully navigating animals, irrespective of species. Indeed, noise-tolerance may be an important functional constraint on the evolution of neuroarchitectural plans in the animal kingdom.

  19. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.

  20. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

    Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.

  1. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  2. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca

    2015-04-15

    Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are of 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.

  3. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  4. Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Asharabi, R. M.

    2018-01-01

    In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work continues and improves the results of Chadan and other related studies remarkably. Several worked examples are given with illustrations and comparisons with existing methods.

  5. Sinus CT scan

    MedlinePlus

    CAT scan - sinus; Computed axial tomography scan - sinus; Computed tomography scan - sinus; CT scan - sinus … Risks for a CT scan include: being exposed to radiation; allergic reaction to contrast dye. CT scans expose you to more radiation than regular …

  6. Dynamical structure of center-of-pressure trajectories with and without functional taping in children with cerebral palsy level I and II of GMFCS.

    PubMed

    Pavão, Silvia Leticia; Ledebt, Annick; Savelsbergh, Geert J P; Rocha, Nelci Adriana C F

    2017-08-01

    Postural control during quiet standing was examined in typical children (TD) and children with cerebral palsy (CP) level I and II of GMFCS. The immediate effect on postural control of functional taping on the thighs was analyzed. We evaluated 43 TD, 17 CP children level I, and 10 CP children level II. Participants were evaluated in two conditions (with and without taping). The trajectories of the center of pressure (COP) were analyzed by means of conventional posturography (sway amplitude, sway-path-length) and dynamic posturography (degree of twisting-and-turning, sway regularity). Both CP groups showed larger sway amplitude than the TD while only the CP level II showed more regular COP trajectories with less twisting-and-turning. Functional taping didn't affect sway amplitude or sway-path-length. TD children exhibited more twisting-and-turning with functional taping, whereas no effects on postural sway dynamics were observed in CP children. Functional taping doesn't result in immediate changes in quiet stance in CP children, whereas in TD it resulted in faster sway corrections. Children level II invest more attention in postural control than level I, and TD. While quiet standing was more automatized in children level I than in level II, both CP groups showed a less stable balance than TD. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A Foothold for Handhelds.

    ERIC Educational Resources Information Center

    Joyner, Amy

    2003-01-01

    Handheld computers provide students with tremendous computing and learning power at about a tenth of the cost of a regular computer. Describes the evolution of handhelds; provides some examples of their uses; and cites research indicating they are effective classroom tools that can improve efficiency and instruction. A sidebar lists handheld resources.…

  8. 5 CFR 550.707 - Computation of severance pay fund.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... pay for standby duty regularly varies throughout the year, compute the average standby duty premium...), compute the weekly average percentage, and multiply that percentage by the weekly scheduled rate of pay in... hours in a pay status (excluding overtime hours) and multiply that average by the hourly rate of basic...
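    The averaging rule described in this excerpt lends itself to simple arithmetic; a hypothetical illustration (the figures below are invented, not taken from the regulation):

        # Hypothetical example of averaging a standby-duty premium that varies
        # throughout the year, then applying it to a weekly scheduled rate.
        standby_percentages = [10.0, 15.0, 10.0, 25.0]  # % in effect each quarter
        weekly_scheduled_rate = 1200.00                 # assumed weekly rate of pay

        avg_pct = sum(standby_percentages) / len(standby_percentages)  # 15.0 %
        weekly_premium = weekly_scheduled_rate * avg_pct / 100.0       # $180.00
        print(f"average standby percentage: {avg_pct:.1f}%")
        print(f"weekly standby premium:     ${weekly_premium:.2f}")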

  9. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417
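    The Hausdorff comparison at the heart of PSA is easy to sketch for toy 2-D paths (real PSA operates on 3N-dimensional configuration-space trajectories; scipy's directed_hausdorff is used here as a stand-in for the paper's implementation):

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        path_a = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.0]])
        path_b = np.array([[0.0, 0.2], [0.5, 0.3], [1.0, 0.2]])

        # Symmetric Hausdorff distance: the larger of the two directed distances.
        d = max(directed_hausdorff(path_a, path_b)[0],
                directed_hausdorff(path_b, path_a)[0])
        print(f"Hausdorff distance between paths: {d:.3f}")  # 0.200 for this toy pair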

  10. North Atlantic teleconnection patterns signature on sea level from satellite altimetry

    NASA Astrophysics Data System (ADS)

    Iglesias, Isabel; Lázaro, Clara; Joana Fernandes, M.; Bastos, Luísa

    2015-04-01

    Presently, the satellite altimetry record is long enough to appropriately study inter-annual signals in sea level anomaly and ocean surface circulation, allowing the association of teleconnection patterns of low-frequency variability with the response of sea level. The variability of the Atlantic Ocean at basin scale is known to be complex in space and time, with the dominant mode occurring on annual timescales. However, interannual and decadal variability have already been documented in sea surface temperature. Both modes are believed to be linked and are known to influence sea level along coastal regions. The analysis of multiannual sea level variability is thus essential to understand the present climate and its long-term variability. While in the open ocean sea level anomaly from satellite altimetry currently possesses centimetre-level accuracy, satellite altimetry measurements become invalid or of lower accuracy along the coast due to the invalidity of the wet tropospheric correction (WTC) derived from on-board microwave radiometers. In order to adequately analyse long-term changes in sea level in coastal regions, satellite altimetry measurements can be recovered by using an improved WTC computed from recent algorithms that combine wet path delays from all available observations (remote sensing scanning imaging radiometers, GNSS stations, microwave radiometers on board satellite altimetry missions, and numerical weather models). In this study, a 20-year (1993-2013) time series of multi-mission satellite altimetry (TOPEX/Poseidon, Jason-1, OSTM/Jason-2, ERS-1/2, ENVISAT, CryoSat-2 and SARAL) is used to characterize the North Atlantic (NA) long-term variability in sea level at basin scale and to analyse its response to several atmospheric teleconnections known to operate on the NA. The altimetry record was generated using an improved coastal WTC computed from either the GNSS-derived Path Delay or the Data Combination methodologies developed by the University of Porto (Fernandes et al., 2010; Fernandes et al., 2013). Regular 0.25°x0.25° latitude-longitude grids were generated at a 10-day interval for the NA Ocean (60°W-5°W, 5°N-60°N) using optimal interpolation with a realistic space-time correlation function (Lázaro et al., 2013). These grids are used to inspect the response of sea level anomalies to several teleconnection patterns as well as the NA variability on annual and longer timescales. The teleconnection patterns selected are the ones that influence the NA basin: North Atlantic Oscillation, East Atlantic pattern, East Atlantic/Western Russia pattern, Scandinavia pattern, Western Mediterranean Oscillation index, El Niño Southern Oscillation, Tropical North Atlantic Index, and Atlantic Multidecadal Oscillation. Acknowledgments: RAIA tec (0688-RAIATEC-1-P) project. The RAIA Coastal Observatory has been funded by the Programa Operativo de Cooperación Transfronteriza España-Portugal (POCTEP 2007-2013). References: Fernandes M.J., C. Lázaro, A.L. Nunes, N. Pires, L. Bastos, V.B. Mendes (2010). GNSS-derived Path Delay: an approach to compute the wet tropospheric correction for coastal altimetry. IEEE Geosci. Rem. Sens. Lett., Vol. 7, No. 3, 596-600, doi:10.1109/LGRS.2010.2042425. Lázaro, C., M. J. Juliano, M. J. Fernandes (2013): Semi-automatic determination of the Azores Current axis using satellite altimetry: application to the study of the current variability during 1995-2006. Advances in Space Research, Vol. 51(11), pp. 2155-2170, doi:10.1016/j.asr.2012.12.021.
Fernandes, M. J., A.L. Nunes, C. Lázaro (2013). Analysis and Inter-Calibration of Wet Path Delay Datasets to Compute the Wet Tropospheric Correction for CryoSat-2 over Ocean. Remote Sensing, 5, 4977-5005.

  11. Computer code for predicting coolant flow and heat transfer in turbomachinery

    NASA Technical Reports Server (NTRS)

    Meitner, Peter L.

    1990-01-01

    A computer code was developed to analyze any turbomachinery coolant flow path geometry that consists of a single flow passage with a unique inlet and exit. Flow can be bled off for tip-cap impingement cooling, and a flow bypass can be specified in which coolant flow is taken off at one point in the flow channel and reintroduced at a point farther downstream in the same channel. The user may either choose the coolant flow rate or let the program determine the flow rate from specified inlet and exit conditions. The computer code integrates the 1-D momentum and energy equations along a defined flow path and calculates the coolant's flow rate, temperature, pressure, and velocity and the heat transfer coefficients along the passage. The equations account for area change, mass addition or subtraction, pumping, friction, and heat transfer.

  12. Enhanced Contact Graph Routing (ECGR) MACHETE Simulation Model

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Jennings, Esther H.; Clare, Loren P.

    2013-01-01

    Contact Graph Routing (CGR) for Delay/Disruption Tolerant Networking (DTN) space-based networks makes use of the predictable nature of node contacts to make real-time routing decisions given unpredictable traffic patterns. The contact graph will have been disseminated to all nodes before the start of route computation. CGR was designed for space-based networking environments where future contact plans are known or are independently computable (e.g., using known orbital dynamics). For each data item (known as a bundle in DTN), a node independently performs route selection by examining possible paths to the destination. Route computation could conceivably run thousands of times a second, so computational load is important. This work refers to the simulation software model of Enhanced Contact Graph Routing (ECGR) for DTN Bundle Protocol in JPL's MACHETE simulation tool. The simulation model was used for performance analysis of CGR and led to several performance enhancements. The simulation model was used to demonstrate the improvements of ECGR over CGR as well as other routing methods in space network scenarios. ECGR moved to using earliest arrival time because it is a global monotonically increasing metric that guarantees the safety properties needed for the solution's correctness since route re-computation occurs at each node to accommodate unpredicted changes (e.g., traffic pattern, link quality). Furthermore, using earliest arrival time enabled the use of the standard Dijkstra algorithm for path selection. The Dijkstra algorithm for path selection has a well-known inexpensive computational cost. These enhancements have been integrated into the open source CGR implementation. The ECGR model is also useful for route metric experimentation and comparisons with other DTN routing protocols particularly when combined with MACHETE's space networking models and Delay Tolerant Link State Routing (DTLSR) model.
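    The earliest-arrival-time metric mentioned above is monotone along a route, which is what permits a standard Dijkstra search; a minimal sketch over a made-up contact plan (transmission and propagation delays are omitted for brevity, unlike in real CGR):

        import heapq

        def earliest_arrival(contacts, src, dst, t0=0.0):
            # contacts: list of (sender, receiver, start_time, end_time)
            best = {src: t0}
            heap = [(t0, src)]
            while heap:
                t, node = heapq.heappop(heap)
                if node == dst:
                    return t
                for u, v, start, end in contacts:
                    if u != node or t > end:
                        continue              # contact already closed at time t
                    arrive = max(t, start)    # wait for the contact to open
                    if arrive < best.get(v, float("inf")):
                        best[v] = arrive
                        heapq.heappush(heap, (arrive, v))
            return None

        plan = [("A", "B", 10, 20), ("B", "C", 15, 30), ("A", "C", 40, 50)]
        print(earliest_arrival(plan, "A", "C"))  # 15 via B beats the direct contact at 40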

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiefer, H., E-mail: johann.schiefer@kssg.ch; Peters, S.; Plasswilm, L.

    Purpose: For stereotactic radiosurgery, the AAPM Report No. 54 [AAPM Task Group 42 (AAPM, 1995)] requires the overall stability of the isocenter (couch, gantry, and collimator) to be within a 1 mm radius. In reality, a rotating system has no rigid axis and thus no isocenter point which is fixed in space. As a consequence, the isocenter concept is reviewed here. It is the aim to develop a measurement method following the revised definitions. Methods: The mechanical isocenter is defined here by the point which rotates on the shortest path in the room coordinate system. The path is labeled as “isocenter path.” Its center of gravity is assumed to be the mechanical isocenter. Following this definition, an image-based and radiation-free measurement method was developed. Multiple marker pairs in a plane perpendicular to the assumed gantry rotation axis of a linear accelerator are imaged with a smartphone application from several rotation angles. Each marker pair represents an independent measuring system. The room coordinates of the isocenter path and the mechanical isocenter are calculated based on the marker coordinates. The presented measurement method is by this means strictly focused on the mechanical isocenter. Results: The measurement result is available virtually immediately following completion of measurement. When 12 independent measurement systems are evaluated, the standard deviations of the isocenter path points and mechanical isocenter coordinates are 0.02 and 0.002 mm, respectively. Conclusions: The measurement is highly accurate, time efficient, and simple to adapt. It is therefore suitable for regular checks of the mechanical isocenter characteristics of the gantry and collimator rotation axis. When the isocenter path is reproducible and its extent is in the range of the needed geometrical accuracy, it should be taken into account in the planning process. This is especially true for stereotactic treatments and radiosurgery.

  14. The Regularity of Optimal Irrigation Patterns

    NASA Astrophysics Data System (ADS)

    Morel, Jean-Michel; Santambrogio, Filippo

    2010-02-01

    A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. The aim of this paper is to prove the regularity of paths (rivers, branches, ...) when the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.

  15. Six-dimensional regularization of chiral gauge theories

    NASA Astrophysics Data System (ADS)

    Fukaya, Hidenori; Onogi, Tetsuya; Yamamoto, Shota; Yamamura, Ryo

    2017-03-01

    We propose a regularization of four-dimensional chiral gauge theories using six-dimensional Dirac fermions. In our formulation, we consider two different mass terms having domain-wall profiles in the fifth and the sixth directions, respectively. A Weyl fermion appears as a localized mode at the junction of two different domain walls. One domain wall naturally exhibits the Stora-Zumino chain of the anomaly descent equations, starting from the axial U(1) anomaly in six dimensions to the gauge anomaly in four dimensions. Another domain wall implies a similar inflow of the global anomalies. The anomaly-free condition is equivalent to requiring that the axial U(1) anomaly and the parity anomaly are canceled among the six-dimensional Dirac fermions. Since our formulation is based on a massive vector-like fermion determinant, a nonperturbative regularization will be possible on a lattice. Putting the gauge field at the four-dimensional junction and extending it to the bulk using the Yang-Mills gradient flow, as recently proposed by Grabowska and Kaplan, we define the four-dimensional path integral of the target chiral gauge theory.

  16. An Experiment of GMPLS-Based Dispersion Compensation Control over In-Field Fibers

    NASA Astrophysics Data System (ADS)

    Seno, Shoichiro; Horiuchi, Eiichi; Yoshida, Sota; Sugihara, Takashi; Onohara, Kiyoshi; Kamei, Misato; Baba, Yoshimasa; Kubo, Kazuo; Mizuochi, Takashi

    As ROADMs (Reconfigurable Optical Add/Drop Multiplexers) are becoming widely used in metro/core networks, distributed control of wavelength paths by extended GMPLS (Generalized MultiProtocol Label Switching) protocols has attracted much attention. For the automatic establishment of an arbitrary wavelength path satisfying dynamic traffic demands over a ROADM or WXC (Wavelength Cross Connect)-based network, precise determination of chromatic dispersion over the path and optimized assignment of dispersion compensation capabilities at related nodes are essential. This paper reports an experiment over in-field fibers where GMPLS-based control was applied for the automatic discovery of chromatic dispersion, path computation, and wavelength path establishment with dynamic adjustment of variable dispersion compensation. The GMPLS-based control scheme, which the authors called GMPLS-Plus, extended GMPLS's distributed control architecture with attributes for automatic discovery, advertisement, and signaling of chromatic dispersion. In this experiment, wavelength paths with distances of 24km and 360km were successfully established and error-free data transmission was verified. The experiment also confirmed path restoration with dynamic compensation adjustment upon fiber failure.

  17. Two betweenness centrality measures based on Randomized Shortest Paths

    PubMed Central

    Kivimäki, Ilkka; Lebichot, Bertrand; Saramäki, Jari; Saerens, Marco

    2016-01-01

    This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSPs have previously proven to be useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice. PMID:26838176
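    The Boltzmann-over-paths idea can be illustrated by brute-force enumeration on a tiny hypothetical graph: as the inverse temperature theta grows, probability mass concentrates on the shortest path, while theta near zero spreads it across all paths (the paper's actual RSP betweenness uses closed-form matrix computations rather than enumeration):

        import math
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from(
            [("s", "a", 1), ("a", "t", 1), ("s", "b", 2), ("b", "t", 2)])

        def path_probabilities(G, s, t, theta):
            # Boltzmann distribution over simple s-t paths: P(p) ~ exp(-theta * cost(p))
            paths = list(nx.all_simple_paths(G, s, t))
            costs = [nx.path_weight(G, p, weight="weight") for p in paths]
            w = [math.exp(-theta * c) for c in costs]
            Z = sum(w)
            return {tuple(p): wi / Z for p, wi in zip(paths, w)}

        for theta in (0.01, 1.0, 10.0):
            print(theta, path_probabilities(G, "s", "t", theta))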

  18. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outerloop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
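    The decoupling rests on two error signals built from flight path angle and normalized longitudinal acceleration; a minimal sketch under assumed gains and sign conventions (illustrative only, not the patented control laws):

        G = 9.81  # gravitational acceleration, m/s^2

        def tecs_errors(gamma_cmd, gamma, vdot_cmd, vdot):
            # Specific total-energy rate ~ V * (gamma + vdot/g); after normalizing
            # by speed, the sum tracks total energy and the difference tracks
            # how that energy is distributed between path and speed.
            energy_rate_err = (gamma_cmd - gamma) + (vdot_cmd - vdot) / G
            distribution_err = (gamma_cmd - gamma) - (vdot_cmd - vdot) / G
            return energy_rate_err, distribution_err

        def tecs_commands(energy_rate_err, distribution_err, k_thrust=0.5, k_elev=0.3):
            thrust_cmd = k_thrust * energy_rate_err    # throttle drives total energy
            elevator_cmd = k_elev * distribution_err   # elevator trades speed vs. path
            return thrust_cmd, elevator_cmd

        errs = tecs_errors(gamma_cmd=0.05, gamma=0.02, vdot_cmd=0.0, vdot=0.2)
        print(tecs_commands(*errs))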

  19. A taxonomy of integral reaction path analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grcar, Joseph F.; Day, Marcus S.; Bell, John B.

    2004-12-23

    W. C. Gardiner observed that achieving understanding through combustion modeling is limited by the ability to recognize the implications of what has been computed and to draw conclusions about the elementary steps underlying the reaction mechanism. This difficulty can be overcome in part by making better use of reaction path analysis in the context of multidimensional flame simulations. Following a survey of current practice, an integral reaction flux is formulated in terms of conserved scalars that can be calculated in a fully automated way. Conditional analyses are then introduced, and a taxonomy for bidirectional path analysis is explored. Many examples illustrate the resulting path analysis and uncover some new results about nonpremixed methane-air laminar jets.

  20. A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization

    PubMed Central

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO upon a large set of randomly generated graphs and graphs for real-world problems. PMID:25143977

  1. A DAG scheduling scheme on heterogeneous computing systems using tuple-based chemical reaction optimization.

    PubMed

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO upon a large set of randomly generated graphs and graphs for real-world problems.

  2. Analysis Report for Exascale Storage Requirements for Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, Thomas M.

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, causing data storage, networking, and infrastructure requirements to increase by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  3. Numerical simulation of a shear-thinning fluid through packed spheres

    NASA Astrophysics Data System (ADS)

    Liu, Hai Long; Moon, Jong Sin; Hwang, Wook Ryol

    2012-12-01

    Flow behaviors of a non-Newtonian fluid in spherical microstructures have been studied by direct numerical simulation. A shear-thinning (power-law) fluid through both regular and randomly packed spheres has been numerically investigated in a representative unit cell with the tri-periodic boundary condition, employing a rigorous three-dimensional finite-element scheme combined with fictitious-domain mortar-element methods. The present scheme has been validated against literature results for classical sphere-packing problems. The flow mobility of regular packing structures, including simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC), as well as of randomly packed spheres, has been investigated quantitatively by considering the amount of shear-thinning, the pressure gradient and the porosity as parameters. Furthermore, the mechanism leading to the main flow path in a highly shear-thinning fluid through randomly packed spheres has been discussed.

  4. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained ℓ1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  5. Predictability in Pathological Gambling? Applying the Duplication of Purchase Law to the Understanding of Cross-Purchases Between Regular and Pathological Gamblers.

    PubMed

    Lam, Desmond; Mizerski, Richard

    2017-06-01

    The objective of this study is to explore the gambling participation and game purchase duplication of light regular, heavy regular and pathological gamblers by applying the Duplication of Purchase Law. The current study uses data collected by the Australian Productivity Commission for eight different types of games. Key behavioral statistics on light regular, heavy regular, and pathological gamblers were computed and compared. The key finding is that pathological gambling, just like regular gambling, follows the Duplication of Purchase Law, which states that the dominant factor of purchase duplication between two brands is their market shares. This means that gambling between any two games at the pathological level, like any regular consumer purchases, exhibits "law-like" regularity based on the pathological gamblers' participation rate of each game. Additionally, pathological gamblers tend to gamble more frequently across all games except lotteries and instant, and to make greater cross-purchases, compared to heavy regular gamblers. A better understanding of the behavioral traits of regular (particularly heavy regular) and pathological gamblers can be useful to public policy makers and social marketers in order to more accurately identify such gamblers and better manage the negative impacts of gambling.
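    A toy numeric check of the law's prediction (the penetration figures and duplication coefficient below are invented for illustration, not the Productivity Commission data):

        # Duplication of Purchase Law: the share of players of game X who also
        # play game Y is roughly D * penetration(Y), regardless of which X is chosen.
        penetration = {"lotteries": 0.60, "pokies": 0.30, "racing": 0.15}
        D = 1.2  # duplication coefficient, fitted from data in practice

        def expected_duplication(y):
            return D * penetration[y]

        for x in penetration:
            for y in penetration:
                if x != y:
                    print(f"P(plays {y} | plays {x}) ~ {expected_duplication(y):.2f}")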

  6. 5 CFR 532.255 - Regular appropriated fund wage schedules in foreign areas.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... schedules shall provide rates of pay for nonsupervisory, leader, supervisory, and production facilitating employees. (b) Schedules shall be— (1) Computed on the basis of a simple average of all regular appropriated fund wage area schedules in effect on December 31; and (2) Effective on the first day of the first pay...

  7. 5 CFR 532.255 - Regular appropriated fund wage schedules in foreign areas.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... schedules shall provide rates of pay for nonsupervisory, leader, supervisory, and production facilitating employees. (b) Schedules shall be— (1) Computed on the basis of a simple average of all regular appropriated fund wage area schedules in effect on December 31; and (2) Effective on the first day of the first pay...

  8. Student, Teacher, Professor: Three Perspectives on Online Education

    ERIC Educational Resources Information Center

    Pearcy, Mark

    2014-01-01

    Today, a third of American children regularly use computer tablets, while over 40% use smartphones and 53% regularly use laptops in their homes. While this is encouraging, there is still considerable debate about the shape and direction technology should take in schools, particularly online education, making it necessary for educators to change in…

  9. LOKI WIND CORRECTION COMPUTER AND WIND STUDIES FOR LOKI

    DTIC Science & Technology

    which relates burnout deviation of flight path with the distributed wind along the boost trajectory. The wind influence function was applied to...electrical outputs. A complete wind correction computer system based on the influence function and the results of wind studies was designed.

  10. A computational investigation of the finite-time blow-up of the 3D incompressible Euler equations based on the Voigt regularization

    DOE PAGES

    Larios, Adam; Petersen, Mark R.; Titi, Edriss S.; ...

    2017-04-29

    We report the results of a computational investigation of two blow-up criteria for the 3D incompressible Euler equations. One criterion was proven in a previous work, and a related criterion is proved here. These criteria are based on an inviscid regularization of the Euler equations known as the 3D Euler-Voigt equations, which are known to be globally well-posed. Moreover, simulations of the 3D Euler-Voigt equations also require less resolution than simulations of the 3D Euler equations for fixed values of the regularization parameter α > 0. Therefore, the new blow-up criteria allow one to gain information about possible singularity formation in the 3D Euler equations indirectly; namely, by simulating the better-behaved 3D Euler-Voigt equations. The new criteria are only known to be sufficient for blow-up. Therefore, to test the robustness of the inviscid-regularization approach, we also investigate analogous criteria for blow-up of the 1D Burgers equation, where blow-up is well-known to occur.

  11. A computational investigation of the finite-time blow-up of the 3D incompressible Euler equations based on the Voigt regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larios, Adam; Petersen, Mark R.; Titi, Edriss S.

    We report the results of a computational investigation of two blow-up criteria for the 3D incompressible Euler equations. One criterion was proven in a previous work, and a related criterion is proved here. These criteria are based on an inviscid regularization of the Euler equations known as the 3D Euler-Voigt equations, which are known to be globally well-posed. Moreover, simulations of the 3D Euler-Voigt equations also require less resolution than simulations of the 3D Euler equations for fixed values of the regularization parameter α > 0. Therefore, the new blow-up criteria allow one to gain information about possible singularity formation in the 3D Euler equations indirectly; namely, by simulating the better-behaved 3D Euler-Voigt equations. The new criteria are only known to be sufficient for blow-up. Therefore, to test the robustness of the inviscid-regularization approach, we also investigate analogous criteria for blow-up of the 1D Burgers equation, where blow-up is well-known to occur.

  12. Computational Methods to Assess the Production Potential of Bio-Based Chemicals.

    PubMed

    Campodonico, Miguel A; Sukumara, Sumesh; Feist, Adam M; Herrgård, Markus J

    2018-01-01

    Elevated costs and long implementation times of bio-based processes for producing chemicals represent a bottleneck for moving to a bio-based economy. A prospective analysis able to elucidate economically and technically feasible product targets at early research phases is mandatory. Computational tools can be implemented to explore the biological and technical spectrum of feasibility, while constraining the operational space for desired chemicals. In this chapter, two different computational tools for assessing the potential for bio-based production of chemicals from different perspectives are described in detail. The first tool is GEM-Path: an algorithm to compute all structurally possible pathways from one target molecule to the host metabolome. The second tool is a framework for Modeling Sustainable Industrial Chemicals production (MuSIC), which integrates modeling approaches for cellular metabolism, bioreactor design, upstream/downstream processes, and economic impact assessment. Integrating GEM-Path and MuSIC will play a vital role in supporting early phases of research efforts and in guiding policy makers' decisions as we progress toward planning a sustainable chemical industry.

  13. Analysis and elimination of a bias in targeted molecular dynamics simulations of conformational transitions: application to calmodulin.

    PubMed

    Ovchinnikov, Victor; Karplus, Martin

    2012-07-26

    The popular targeted molecular dynamics (TMD) method for generating transition paths in complex biomolecular systems is revisited. In a typical TMD transition path, the large-scale changes occur early and the small-scale changes tend to occur later. As a result, the order of events in the computed paths depends on the direction in which the simulations are performed. To identify the origin of this bias, and to propose a method in which the bias is absent, variants of TMD in the restraint formulation are introduced and applied to the complex open ↔ closed transition in the protein calmodulin. Due to the global best-fit rotation that is typically part of the TMD method, the simulated system is guided implicitly along the lowest-frequency normal modes, until the large spatial scales associated with these modes are near the target conformation. The remaining portion of the transition is described progressively by higher-frequency modes, which correspond to smaller-scale rearrangements. A straightforward modification of TMD that avoids the global best-fit rotation is the locally restrained TMD (LRTMD) method, in which the biasing potential is constructed from a number of TMD potentials, each acting on a small connected portion of the protein sequence. With a uniform distribution of these elements, transition paths that lack the length-scale bias are obtained. Trajectories generated by steered MD in dihedral angle space (DSMD), a method that avoids best-fit rotations altogether, also lack the length-scale bias. To examine the importance of the paths generated by TMD, LRTMD, and DSMD in the actual transition, we use the finite-temperature string method to compute the free energy profile associated with a transition tube around a path generated by each algorithm. The free energy barriers associated with the paths are comparable, suggesting that transitions can occur along each route with similar probabilities. This result indicates that a broad ensemble of paths needs to be calculated to obtain a full description of conformational changes in biomolecules. The breadth of the contributing ensemble suggests that energetic barriers for conformational transitions in proteins are offset by entropic contributions that arise from a large number of possible paths.
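
    As a rough illustration of the restraint formulation discussed here, the sketch below computes a best-fit (Kabsch) RMSD and a harmonic TMD-style bias energy k/2 (RMSD - rho)^2 in NumPy. It is a generic stand-in, not the authors' implementation; the function names and force constant are invented.

        import numpy as np

        def kabsch_rmsd(P, Q):
            """Best-fit RMSD between two (N, 3) coordinate sets."""
            Pc, Qc = P - P.mean(0), Q - Q.mean(0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation only
            diff = Pc @ R.T - Qc
            return np.sqrt((diff ** 2).sum() / len(P))

        def tmd_bias_energy(coords, target, rho, k=100.0):
            """Harmonic restraint driving the best-fit RMSD toward rho."""
            return 0.5 * k * (kabsch_rmsd(coords, target) - rho) ** 2

    In a TMD run, rho is ramped from the initial RMSD toward zero; the global best-fit rotation inside kabsch_rmsd is exactly the step that the LRTMD and DSMD variants avoid.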

  14. Exact finite volume expectation values of \overline{Ψ}Ψ in the massive Thirring model from light-cone lattice correlators

    NASA Astrophysics Data System (ADS)

    Hegedűs, Árpád

    2018-03-01

    In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator \overline{Ψ}Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators located at neighboring sites of the lattice. The operator \overline{Ψ}Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can also be computed from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.

  15. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error into the mean-velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions about equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
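
    The quoted 2% error for a one-degree angle mistake can be checked from the standard reciprocal travel-time relation; a back-of-the-envelope sketch assuming a 45-degree path angle:

        import numpy as np

        # Path velocity from reciprocal travel times t_ab, t_ba over a path of
        # length L crossing the flow at angle theta:
        #     V = L * (t_ba - t_ab) / (2 * np.cos(theta) * t_ab * t_ba)
        # Using the wrong angle scales V by cos(theta) / cos(theta + d_theta).
        theta, d_theta = np.radians(45.0), np.radians(1.0)
        bias = np.cos(theta) / np.cos(theta + d_theta) - 1.0
        print(f"relative velocity error: {bias:.1%}")   # ~1.8%, i.e. about 2%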

  16. Self-consistent clustering analysis: an efficient multiscale scheme for inelastic heterogeneous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Z.; Bessa, M. A.; Liu, W.K.

    A predictive computational theory is shown for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced-order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, "Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials," Comput. Methods Appl. Mech. Engrg. 306 (2016) 319-341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters and the interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed-up estimated at about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids, showing that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.

  17. Fourier power spectra of the geomagnetic field for circular paths on the Earth's surface.

    USGS Publications Warehouse

    Alldredge, L.R.; Benton, E.R.

    1986-01-01

    The Fourier power spectra of geomagnetic component values, synthesized from spherical harmonic models, have been computed for circular paths on the Earth's surface. They are not found to be more useful than is the spectrum of magnetic energy outside the Earth for the purpose of separating core and crustal sources of the geomagnetic field. The Fourier power spectra of N and E geomagnetic components along nearly polar great circle paths exhibit some unusual characteristics that are explained by the geometric perspective of Fourier series on spheres developed by Yee. -Authors

  18. Optimization of magnet end-winding geometry

    NASA Astrophysics Data System (ADS)

    Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.

    1994-03-01

    A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.

  19. Heterogeneous path ensembles for conformational transitions in semi-atomistic models of adenylate kinase

    PubMed Central

    Bhatt, Divesh; Zuckerman, Daniel M.

    2010-01-01

    We performed "weighted ensemble" path-sampling simulations of adenylate kinase, using several semi-atomistic protein models. The models have an all-atom backbone with various levels of residue interactions. The primary result is that full statistically rigorous path sampling required only a few weeks of single-processor computing time with these models, indicating the addition of further chemical detail should be readily feasible. Our semi-atomistic path ensembles are consistent with previous biophysical findings: the presence of two distinct pathways, identification of intermediates, and symmetry of forward and reverse pathways. PMID:21660120

  20. Smisc - A collection of miscellaneous functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landon Sego, PNNL

    2015-08-31

    A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, specialized plotting, etc. Selected contents of the package index:
    Smisc-package: A collection of miscellaneous functions
    allMissing: Identifies missing rows or columns in a data frame or matrix
    as.numericSilent: Silent wrapper for coercing a vector to numeric
    comboList: Produces all possible combinations of a set of linear model predictors
    cumMax: Computes the maximum of the vector up to the current index
    cumsumNA: Computes the cumulative sum of a vector without propagating NAs
    d2binom, p2binom: Probability functions for the sum of two independent binomials
    dataIn: A flexible way to import data into R
    dbb, pbb, qbb, rbb: The Beta-Binomial Distribution
    df2list: Row-wise conversion of a data frame to a list
    dfplapply: Parallelized single row processing of a data frame
    dframeEquiv: Examines the equivalence of two dataframes or matrices
    dkbinom, pkbinom: Probability functions for the sum of k independent binomials
    factor2character: Converts all factor variables in a dataframe to character variables
    findDepMat: Identify linearly dependent rows or columns in a matrix
    formatDT: Converts date or datetime strings into alternate formats
    getExtension, getPath, grabLast: Filename manipulations: remove or extract the extension or path
    ifelse1: Non-vectorized version of ifelse
    integ: Simple numerical integration routine
    interactionPlot: Two-way interaction plot with error bars
    linearMap: Linear mapping of a numerical vector or scalar
    list2df: Convert a list to a data frame
    loadObject: Loads and returns the object(s) in an ".Rdata" file
    more: Display the contents of a file to the R terminal
    movAvg2: Calculate the moving average using a 2-sided window
    openDevice: Opens a graphics device based on the filename extension
    padZero: Pad a vector of numbers with zeros
    parseJob: Parses a collection of elements into (almost) equal sized groups
    pcbinom: A continuous version of the binomial cdf
    plapply: Simple parallelization of lapply
    plotFun: Plot one or more functions on a single plot
    PowerData: An example of power data
    pvar: Prints the name and value of one or more objects
    ...and numerous others (space limits reporting).

  1. Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong

    2018-12-01

    The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with the conventional CT in which energy integrating detectors are used to acquire polychromatic projections of an object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain a better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic-based regularizations are not often satisfactory as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.
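
    Of the regularizers compared, total variation is the simplest to sketch; below is a minimal smoothed-TV gradient descent in NumPy. It is an illustrative denoiser under our own parameter choices, not the reconstruction code evaluated in the paper.

        import numpy as np

        def tv_denoise(y, lam=0.15, step=0.2, iters=200, eps=1e-3):
            """Minimize 0.5*||x - y||^2 + lam * TV_eps(x) by gradient descent.

            TV_eps is the Charbonnier-smoothed total variation, so its
            gradient -div(grad x / |grad x|_eps) is defined in flat regions.
            """
            x = y.astype(float).copy()
            for _ in range(iters):
                dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
                dy = np.diff(x, axis=0, append=x[-1:, :])
                mag = np.sqrt(dx**2 + dy**2 + eps**2)
                px, py = dx / mag, dy / mag
                div = (np.diff(px, axis=1, prepend=0.0)     # backward-difference divergence
                       + np.diff(py, axis=0, prepend=0.0))
                x -= step * ((x - y) - lam * div)
            return x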

  2. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.

  3. Method for wiring allocation and switch configuration in a multiprocessor environment

    DOEpatents

    Aridor, Yariv [Zichron Ya'akov, IL; Domany, Tamar [Kiryat Tivon, IL; Frachtenberg, Eitan [Jerusalem, IL; Gal, Yoav [Haifa, IL; Shmueli, Edi [Haifa, IL; Stockmeyer, legal representative, Robert E.; Stockmeyer, Larry Joseph [San Jose, CA

    2008-07-15

    A method for wiring allocation and switch configuration in a multiprocessor computer, the method including employing depth-first tree traversal to determine a plurality of paths among a plurality of processing elements allocated to a job along a plurality of switches and wires in a plurality of D-lines, and selecting one of the paths in accordance with at least one selection criterion.

  4. Forging Paths through Hostile Territory: Intersections of Women's Identities Pursuing Post-Secondary Computing Education

    ERIC Educational Resources Information Center

    Ratnabalasuriar, Sheruni

    2012-01-01

    This study explores experiences of women as they pursue post-secondary computing education in various contexts. Using in-depth interviews, the current study employs qualitative methods and draws from an intersectional approach to focus on how the various barriers emerge for women in different types of computing cultures. In-depth interviews with…

  5. Computer-assisted diagnostic decision support: history, challenges, and possible paths forward.

    PubMed

    Miller, Randolph A

    2009-09-01

    This paper presents a brief history of computer-assisted diagnosis, including challenges and future directions. Some ideas presented in this article on computer-assisted diagnostic decision support systems (CDDSS) derive from prior work by the author and his colleagues (see list in Acknowledgments) on the INTERNIST-1 and QMR projects. References indicate the original sources of many of these ideas.

  6. Influence of visual path information on human heading perception during rotation.

    PubMed

    Li, Li; Chen, Jing; Peng, Xiaozhe

    2009-03-31

    How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.

  7. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.

  8. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE PAGES

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    2017-01-01

    With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
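
    A minimal sketch of an LCS-based similarity score of the kind described, with a trajectory and a candidate path both represented as sequences of road-link IDs; the normalization here is our assumption, not the authors' exact score system.

        def lcs_length(a, b):
            """Length of the longest common subsequence of two sequences."""
            m, n = len(a), len(b)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m):
                for j in range(n):
                    dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                        else max(dp[i][j + 1], dp[i + 1][j]))
            return dp[m][n]

        def similarity(traj_links, path_links):
            """1.0 when the sequences agree, 0.0 when they share no links."""
            return lcs_length(traj_links, path_links) / max(len(traj_links), len(path_links))

        print(similarity(["e1", "e2", "e4"], ["e1", "e2", "e3", "e4"]))  # 0.75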

  9. Computed Tomography 3-D Imaging of the Metal Deformation Flow Path in Friction Stir Welding

    NASA Technical Reports Server (NTRS)

    Schneider, Judy; Beshears, Ronald; Nunes, Arthur C., Jr.

    2005-01-01

    In friction stir welding (FSW), a rotating threaded pin tool is inserted into a weld seam and literally stirs the edges of the seam together. To determine optimal processing parameters for producing a defect free weld, a better understanding of the resulting metal deformation flow path is required. Marker studies are the principal method of studying the metal deformation flow path around the FSW pin tool. In our study, we have used computed tomography (CT) scans to reveal the flow pattern of a lead wire embedded in a FSW weld seam. At the welding temperature of aluminum, the lead becomes molten and is carried with the macro-flow of the weld metal. By using CT images, a 3-dimensional (3D) image of the lead flow pattern can be reconstructed. CT imaging was found to be a convenient and comprehensive way of collecting and displaying tracer data. It marks an advance over previous more tedious and ambiguous radiographic/metallographic data collection methods.

  10. Data processing device test apparatus and method therefor

    DOEpatents

    Wilcox, Richard Jacob; Mulig, Jason D.; Eppes, David; Bruce, Michael R.; Bruce, Victoria J.; Ring, Rosalinda M.; Cole, Jr., Edward I.; Tangyunyong, Paiboon; Hawkins, Charles F.; Louie, Arnold Y.

    2003-04-08

    A method and apparatus for testing data processing devices are implemented. The test mechanism isolates critical paths by correlating a scanning microscope image with a selected speed path failure. A trigger signal having a preselected value is generated at the start of each pattern vector. The sweep of the scanning microscope is controlled by a computer, which also receives and processes the image signals returned from the microscope. The value of the trigger signal is correlated with a set of pattern lines being driven on the DUT. The trigger is either asserted or negated depending on the detection of a pattern line failure and the particular line that failed. In response to the detection of the particular speed path failure being characterized, and the trigger signal, the control computer overlays a mask on the image of the device under test (DUT). The overlaid image provides a visual correlation of the failure with the structural elements of the DUT at the level of resolution of the microscope itself.

  11. FY16 ASME High Temperature Code Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swindeman, M. J.; Jetter, R. I.; Sham, T. -L.

    2016-09-01

    One of the objectives of the ASME high temperature Code activities is to develop and validate both improvements and the basic features of Section III, Division 5, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to be used to assess whether or not a specific component under specified loading conditions will satisfy the elevated temperature design requirements for Class A components in Section III, Division 5, Subsection HB, Subpart B (HBB). There are many features and alternative paths of varying complexity in HBB. The initial focus of this task is a basic path through the various options for a single reference material, 316H stainless steel. However, the program will be structured for the eventual incorporation of all the features and permitted materials of HBB. Since this task has recently been initiated, this report focuses on the description of the initial path forward and an overall description of the approach to computer program development.

  12. Twenty Five Years in Cheminformatics - A Career Path ...

    EPA Pesticide Factsheets

    Antony Williams is a Computational Chemist at the US Environmental Protection Agency in the National Center for Computational Toxicology. He has been involved in cheminformatics and the dissemination of chemical information for over twenty-five years. He has worked for a Fortune 500 company (Eastman Kodak), in two successful start-ups (ACD/Labs and ChemSpider), for the Royal Society of Chemistry (in publishing) and, now, at the EPA. Throughout his career path he has experienced multiple diverse work cultures and focused his efforts on understanding the needs of his employers and the often unrecognized needs of a larger community. Antony will provide a short overview of his career path and discuss the various decisions that helped motivate his change in career from professional spectroscopist to website host and innovator, to working for one of the world's foremost scientific societies and now for one of the most impactful government organizations in the world. Invited Presentation at ACS Spring meeting at CINF: Careers in Chemical Information session

  13. Comparison of workload measures on computer-generated primary flight displays

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Abbott, Terence S.

    1987-01-01

    Four Air Force pilots were used as subjects to assess a battery of subjective and physiological workload measures in a flight simulation environment in which two computer-generated primary flight display configurations were evaluated. A high- and low-workload task was created by manipulating flight path complexity. Both SWAT and the NASA-TLX were shown to be effective in differentiating the high and low workload path conditions. Physiological measures were inconclusive. A battery of workload measures continues to be necessary for an understanding of the data. Based on workload, opinion, and performance data, it is fruitful to pursue research with a primary flight display and a horizontal situation display integrated into a single display.

  14. Theoretical characterization of the minimum energy path for hydrogen atom addition to N2 - Implications for the unimolecular lifetime of HN2

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.; Duchovic, Ronald J.; Rohlfing, Celeste Mcmichael

    1989-01-01

    Results are reported from CASSCF externally contracted CI ab initio computations of the minimum-energy path for the addition of H to N2. The theoretical basis and numerical implementation of the computations are outlined, and the results are presented in extensive tables and graphs and characterized in detail. The zero-point-corrected barrier for HN2 dissociation is estimated as 8.5 kcal/mol, and the lifetime of the lowest-lying quasi-bound vibrational state of HN2 is found to be between 88 psec and 5.8 nsec (making experimental observation of this species very difficult).

  15. Ray-tracing method for creeping waves on arbitrarily shaped nonuniform rational B-splines surfaces.

    PubMed

    Chen, Xi; He, Si-Yuan; Yu, Ding-Feng; Yin, Hong-Cheng; Hu, Wei-Dong; Zhu, Guo-Qiang

    2013-04-01

    An accurate creeping ray-tracing algorithm is presented in this paper to determine the tracks of creeping waves (or creeping rays) on arbitrarily shaped free-form parametric surfaces [nonuniform rational B-splines (NURBS) surfaces]. The main challenge in calculating the surface diffracted fields on NURBS surfaces is due to the difficulty in determining the geodesic paths along which the creeping rays propagate. On one single parametric surface patch, the geodesic paths need to be computed by solving the geodesic equations numerically. Furthermore, realistic objects are generally modeled as the union of several connected NURBS patches. Due to the discontinuity of the parameter between the patches, it is more complicated to compute geodesic paths on several connected patches than on one single patch. Thus, a creeping ray-tracing algorithm is presented in this paper to compute the geodesic paths of creeping rays on the complex objects that are modeled as the combination of several NURBS surface patches. In the algorithm, the creeping ray tracing on each surface patch is performed by solving the geodesic equations with a Runge-Kutta method. When the creeping ray propagates from one patch to another, a transition method is developed to handle the transition of the creeping ray tracing across the border between the patches. This creeping ray-tracing algorithm can meet practical requirements because it can be applied to the objects with complex shapes. The algorithm can also extend the applicability of NURBS for electromagnetic and optical applications. The validity and usefulness of the algorithm can be verified from the numerical results.
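
    For intuition, the sketch below integrates the geodesic equations with a fixed-step Runge-Kutta method on a unit sphere, which stands in for a single parametric patch; on a real NURBS patch the Christoffel symbols would have to be evaluated numerically from the surface parameterization, and the patch-to-patch transition handling is not shown.

        import numpy as np

        def geodesic_rhs(y):
            """Geodesic equations on the unit sphere, y = [th, ph, dth, dph],
            with th the colatitude and derivatives taken in arc length."""
            th, ph, dth, dph = y
            return np.array([dth, dph,
                             np.sin(th) * np.cos(th) * dph**2,
                             -2.0 * dth * dph / np.tan(th)])

        def rk4(f, y0, length, n=1000):
            h, y = length / n, np.asarray(y0, float)
            for _ in range(n):
                k1 = f(y)
                k2 = f(y + h / 2 * k1)
                k3 = f(y + h / 2 * k2)
                k4 = f(y + h * k3)
                y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            return y

        # Launch a creeping-ray path from the equator at 45 deg to the meridian.
        end = rk4(geodesic_rhs, [np.pi / 2, 0.0, np.cos(np.pi / 4), np.sin(np.pi / 4)], 1.0)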

  16. A Lightweight Radio Propagation Model for Vehicular Communication in Road Tunnels

    PubMed Central

    Shamim, Azra; Shamshirband, Shahaboddin; Raymond Choo, Kim-Kwang

    2016-01-01

    Radio propagation models (RPMs) are generally employed in Vehicular Ad Hoc Networks (VANETs) to predict path loss in multiple operating environments (e.g. modern road infrastructure such as flyovers, underpasses and road tunnels). For example, different RPMs have been developed to predict propagation behaviour in road tunnels. However, most existing RPMs for road tunnels are computationally complex and are based on field measurements in frequency band not suitable for VANET deployment. Furthermore, in tunnel applications, consequences of moving radio obstacles, such as large buses and delivery trucks, are generally not considered in existing RPMs. This paper proposes a computationally inexpensive RPM with minimal set of parameters to predict path loss in an acceptable range for road tunnels. The proposed RPM utilizes geometric properties of the tunnel, such as height and width along with the distance between sender and receiver, to predict the path loss. The proposed RPM also considers the additional attenuation caused by the moving radio obstacles in road tunnels, while requiring a negligible overhead in terms of computational complexity. To demonstrate the utility of our proposed RPM, we conduct a comparative summary and evaluate its performance. Specifically, an extensive data gathering campaign is carried out in order to evaluate the proposed RPM. The field measurements use the 5 GHz frequency band, which is suitable for vehicular communication. The results demonstrate that a close match exists between the predicted values and measured values of path loss. In particular, an average accuracy of 94% is found with R2 = 0.86. PMID:27031989
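
    A minimal sketch of a model in this spirit: a generic log-distance path-loss law with an additive attenuation term for a large moving obstruction. The constants below are illustrative placeholders, not the paper's fitted tunnel parameters.

        import math

        def path_loss_db(d, pl0=46.4, d0=1.0, n=1.9, obstruction_db=0.0):
            """Path loss in dB at distance d (m): reference loss pl0 at d0,
            path-loss exponent n, plus extra loss when a bus or truck
            blocks the line of sight."""
            return pl0 + 10.0 * n * math.log10(d / d0) + obstruction_db

        # Received power for a 20 dBm transmitter at 80 m with a bus in between:
        rx_dbm = 20.0 - path_loss_db(80.0, obstruction_db=6.0)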

  17. The Path of Carbon in Photosynthesis XII. Some Temperature Effects

    DOE R&D Accomplishments Database

    Ouellet, C.

    1951-06-25

    The photosynthetic assimilation of radioactive carbon dioxide for two-minute periods by Scenedesmus has been studied at temperatures ranging from 25° to 44° C. All labeled intermediates cease to be formed at about 45° C. With rising temperature, the radioactivity reaching the sugar phosphate reservoirs decreases regularly, while there is a sharp maximum in sucrose at 37° C. and a less pronounced one in malic and aspartic acids at about 40° C. A tentative interpretation of these effects is offered.

  18. Fast Adaptive Least Trimmed Squares for Robust Evaluation of Quality of Experience

    DTIC Science & Technology

    2014-07-01

    fact that not every Internet user is trustworthy. In other words, due to the lack of supervision when subjects perform experiments in crowdsourcing, they...[21], [22], etc. However, a major challenge of crowdsourcing QoE evaluation is that not every Internet user is trustworthy. That is, some raters try...regularization paths of the LASSO problem could provide us an order on samples tending to be outliers. Such an approach is inspired by Huber's celebrated work on
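
    A toy sketch of the idea credited to the LASSO here: give each rater an outlier indicator and read off the order in which indicators enter the regularization path, so the earliest entries flag the least trustworthy raters. The model and numbers are invented for illustration and use scikit-learn's lasso_path.

        import numpy as np
        from sklearn.linear_model import lasso_path

        rng = np.random.default_rng(1)
        n = 50
        X = np.eye(n)                      # one indicator column per rater
        y = rng.normal(3.5, 0.3, n)        # honest scores around the true QoE
        y[:3] += 2.0                       # three unreliable raters

        alphas, coefs, _ = lasso_path(X, y - y.mean())   # alphas are decreasing
        active = coefs != 0
        first = np.where(active.any(axis=1), active.argmax(axis=1), coefs.shape[1])
        print(np.argsort(first)[:3])       # earliest activations: raters 0, 1, 2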

  19. E&V (Evaluation and Validation) Reference Manual, Version 1.1

    DTIC Science & Technology

    1988-10-20

    E&V. This model will allow the user to arrive at E&V techniques through many different paths, and provides a means to extract useful information...electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski, AFWAL/AAAF, Wright Patterson AFB, OH 45433-6543. ES-2 E&V...1, 1-3 illustrate the types of information to be extracted from each document. Chapter 2 provides a more detailed description of the structure and

  20. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method

    PubMed Central

    2013-01-01

    Background Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have recently been proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158
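
    As a rough illustration of the tree growth described, here is a minimal RRT-style planner on a 2-D toy space; a real conformational sampler would propose fragment-replacement moves, test validity with an energy function, and bias growth along the progress coordinate.

        import math, random

        def rrt(start, goal, is_valid, step=0.05, iters=5000, goal_tol=0.1):
            """Grow a tree rooted at `start` toward a goal region."""
            nodes, parent = [start], {0: None}
            for _ in range(iters):
                q = (random.random(), random.random())        # random sample
                i = min(range(len(nodes)),                     # nearest tree node
                        key=lambda k: math.dist(nodes[k], q))
                d = math.dist(nodes[i], q)
                if d < 1e-12:
                    continue
                new = tuple(a + step * (b - a) / d for a, b in zip(nodes[i], q))
                if not is_valid(new):                          # e.g. energy filter
                    continue
                parent[len(nodes)] = i
                nodes.append(new)
                if math.dist(new, goal) < goal_tol:            # goal region reached
                    path, k = [], len(nodes) - 1
                    while k is not None:
                        path.append(nodes[k]); k = parent[k]
                    return path[::-1]
            return None

        path = rrt((0.1, 0.1), (0.9, 0.9), is_valid=lambda q: True)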

  1. A computational approach to identify cellular heterogeneity and tissue-specific gene regulatory networks.

    PubMed

    Jambusaria, Ankit; Klomp, Jeff; Hong, Zhigang; Rafii, Shahin; Dai, Yang; Malik, Asrar B; Rehman, Jalees

    2018-06-07

    The heterogeneity of cells across tissue types represents a major challenge for studying biological mechanisms as well as for therapeutic targeting of distinct tissues. Computational prediction of tissue-specific gene regulatory networks may provide important insights into the mechanisms underlying the cellular heterogeneity of cells in distinct organs and tissues. Using three pathway analysis techniques, gene set enrichment analysis (GSEA), parametric analysis of gene set enrichment (PGSEA), alongside our novel model (HeteroPath), which assesses heterogeneously upregulated and downregulated genes within the context of pathways, we generated distinct tissue-specific gene regulatory networks. We analyzed gene expression data derived from freshly isolated heart, brain, and lung endothelial cells and populations of neurons in the hippocampus, cingulate cortex, and amygdala. In both datasets, we found that HeteroPath segregated the distinct cellular populations by identifying regulatory pathways that were not identified by GSEA or PGSEA. Using simulated datasets, HeteroPath demonstrated robustness that was comparable to what was seen using existing gene set enrichment methods. Furthermore, we generated tissue-specific gene regulatory networks involved in vascular heterogeneity and neuronal heterogeneity by performing motif enrichment of the heterogeneous genes identified by HeteroPath and linking the enriched motifs to regulatory transcription factors in the ENCODE database. HeteroPath assesses contextual bidirectional gene expression within pathways and thus allows for transcriptomic assessment of cellular heterogeneity. Unraveling tissue-specific heterogeneity of gene expression can lead to a better understanding of the molecular underpinnings of tissue-specific phenotypes.

  2. Path Integrals for Electronic Densities, Reactivity Indices, and Localization Functions in Quantum Systems

    PubMed Central

    Putz, Mihai V.

    2009-01-01

    The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. Yet, the use of path integral formalism for electronic density prescription presents several advantages: it assures the inner quantum mechanical description of the system by parameterized paths; averages the quantum fluctuations; behaves as the propagator for time-space evolution of quantum information; resembles the Schrödinger equation; and allows quantum statistical description of the system through partition function computing. In this framework, four levels of path integral formalism were presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix or/and the canonical density were rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for statistical high and low temperature limits, the smearing justification for Bohr's quantum stability postulate with the paradigmatic Hydrogen atomic excursion, along the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as by the Markovian generalizations of Becke-Edgecombe electronic focalization functions – all advocate for the reliability of assuming PI formalism of quantum mechanics as a versatile one, suited for analytically and/or computationally modeling of a variety of fundamental physical and chemical reactivity concepts characterizing the (density driving) many-electronic systems. PMID:20087467
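
    For reference, the canonical density matrix on which this framework rests has the standard imaginary-time path-integral form (a textbook expression, not a formula quoted from the paper):

        \rho(x_a, x_b; \beta) = \int_{x(0)=x_a}^{x(\hbar\beta)=x_b} \mathcal{D}x(\tau)\,
            \exp\left\{ -\frac{1}{\hbar} \int_0^{\hbar\beta}
            \left[ \frac{m\dot{x}^2(\tau)}{2} + V(x(\tau)) \right] d\tau \right\},
        \qquad
        Z(\beta) = \int \rho(x, x; \beta)\, dx .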

  3. Path integrals for electronic densities, reactivity indices, and localization functions in quantum systems.

    PubMed

    Putz, Mihai V

    2009-11-10

    The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. Yet, the use of path integral formalism for electronic density prescription presents several advantages: it assures the inner quantum mechanical description of the system by parameterized paths; averages the quantum fluctuations; behaves as the propagator for time-space evolution of quantum information; resembles the Schrödinger equation; and allows quantum statistical description of the system through partition function computing. In this framework, four levels of path integral formalism were presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix or/and the canonical density were rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for statistical high and low temperature limits, the smearing justification for Bohr's quantum stability postulate with the paradigmatic Hydrogen atomic excursion, along the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as by the Markovian generalizations of Becke-Edgecombe electronic focalization functions - all advocate for the reliability of assuming PI formalism of quantum mechanics as a versatile one, suited for analytically and/or computationally modeling of a variety of fundamental physical and chemical reactivity concepts characterizing the (density driving) many-electronic systems.

  4. Topics on data transmission problem in software definition network

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Liang, Li; Xu, Tianwei; Gan, Jianhou

    2017-08-01

    In normal computer networks, the data transmission between two sites goes through the shortest path between the two corresponding vertices. However, in the setting of a software definition network (SDN), the network traffic flow in each site and channel should be monitored in a timely manner, and the data transmission path between two sites in SDN should take the congestion in the current network into account. Hence, the difference between the available data transmission theory for normal computer networks and that for software definition networks is that in SDN we should consider prohibited graph structures, where these forbidden subgraphs represent the sites and channels through which data cannot pass because of serious congestion. Inspired by a theoretical analysis of available data transmission in SDN, we consider some computational problems from the perspective of graph theory. Several results determined in the paper imply sufficient conditions for data transmission in SDN in various graph settings.
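
    A minimal sketch of routing that respects forbidden (congested) sites, assuming a plain adjacency-dict graph; this illustrates the idea, not the paper's formalism.

        from collections import deque

        def shortest_path_avoiding(adj, src, dst, forbidden):
            """BFS shortest path that never enters a forbidden node."""
            if src in forbidden:
                return None
            prev, q = {src: None}, deque([src])
            while q:
                u = q.popleft()
                if u == dst:                       # reconstruct the path
                    path = []
                    while u is not None:
                        path.append(u); u = prev[u]
                    return path[::-1]
                for v in adj[u]:
                    if v not in prev and v not in forbidden:
                        prev[v] = u
                        q.append(v)
            return None

        adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
        print(shortest_path_avoiding(adj, "a", "d", forbidden={"b"}))  # ['a', 'c', 'd']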

  5. Real-time fuzzy inference based robot path planning

    NASA Technical Reports Server (NTRS)

    Pacini, Peter J.; Teichrow, Jon S.

    1990-01-01

    This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.

  6. Method and tool for network vulnerability analysis

    DOEpatents

    Swiler, Laura Painton [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM

    2006-03-14

    A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."

  7. Explanation of the computer listings of Faraday factors for INTASAT users

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.; Llewellyn, S. K.; Bent, R. B.; Schmid, P. E.

    1974-01-01

    Using a simplified form of the Appleton-Hartree formula for the phase refractive index, a relationship was obtained between the Faraday rotation angle along the angular path and the total electron content along the vertical path intersecting the angular path at the height of maximum electron density. Using the second mean value theorem of integration, the function B cos θ sec χ was removed from under the integral sign and replaced by a 'mean' value. The mean value factors were printed on the computer listing for 39 stations receiving signals from the INTASAT satellite during the specified time period. The data is presented by station and date. Graphs are included to demonstrate the variation of the Faraday factor with local time and season, and with magnetic latitude, elevation, and azimuth angles. Other topics discussed include a description of the Bent ionospheric model, the earth's magnetic field model, and the sample computer listing.

  8. Application of Extended Kalman Filter in Persistent Scatterer Interferometry to Enhance the Accuracy of the Unwrapping Process

    NASA Astrophysics Data System (ADS)

    Tavakkoli Estahbanat, A.; Dehghani, M.

    2017-09-01

    In interferometry, phases are wrapped modulo 2π, and the goal of unwrapping algorithms is to recover the integer number of cycles lost when the phases were wrapped. Although the density of points in conventional interferometry is high, this does not help in some cases, such as large temporal baselines or noisy interferograms: noisy pixels not only fail to improve the results but also introduce errors during interferogram unwrapping. In the persistent scatterer (PS) technique, the sparseness of PS pixels makes unwrapping harder still, and because of the irregular data separation, conventional methods are ineffective. With respect to the unwrapping path, unwrapping techniques are divided into path-independent and path-dependent methods; a region-growing method, which is path-dependent, has been used to unwrap PS data. In this paper, the extended Kalman filter (EKF) idea is generalized to PS data; the algorithm accounts for the nonlinearity of the PS unwrapping problem, as in the conventional unwrapping problem. A pulse-pair method enhanced with singular value decomposition (SVD) is used to estimate the spectral shift from the interferometric power spectral density in 7x7 local windows. Furthermore, a hybrid cost map is used to manage the unwrapping path. The algorithm has been implemented on simulated PS data: to form a sparse dataset, a few points from a regular grid are randomly selected, and the RMSE between the results and the true unambiguous phases is presented to validate the approach. The results of the algorithm and the true unwrapped phases were identical.
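
    For intuition, here is a minimal 1-D unwrapper showing the integer-cycle recovery the abstract is concerned with; PS unwrapping is the far harder 2-D, sparse-grid version of this problem.

        import numpy as np

        def unwrap_1d(phase):
            """Add the multiple of 2*pi to each sample that keeps successive
            differences inside (-pi, pi] (the 1-D rule behind np.unwrap)."""
            out = np.asarray(phase, float).copy()
            for i in range(1, len(out)):
                d = out[i] - out[i - 1]
                out[i] -= 2 * np.pi * np.round(d / (2 * np.pi))
            return out

        true = np.linspace(0.0, 12.0, 200)            # smooth phase ramp
        wrapped = np.angle(np.exp(1j * true))         # wrapped to (-pi, pi]
        assert np.allclose(unwrap_1d(wrapped), true)  # recovered exactly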

  9. Measuring, Enabling and Comparing Modularity, Regularity and Hierarchy in Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2005-01-01

    For computer-automated design systems to scale to complex designs they must be able to produce designs that exhibit the characteristics of modularity, regularity and hierarchy - characteristics that are found in both man-made and natural designs. Here we claim that these characteristics are enabled by implementing the attributes of combination, control-flow and abstraction in the representation. To support this claim we use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with different combinations of modularity, regularity and hierarchy enabled, and show that the best performance happens when all three of these attributes are enabled. We also define metrics for modularity, regularity and hierarchy in design encodings and demonstrate that high fitness values are achieved with high values of modularity, regularity and hierarchy, and that there is a positive correlation between increases in fitness and increases in modularity, regularity and hierarchy.

  10. Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II

    DTIC Science & Technology

    2016-09-01

    of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for...plays a central role in many applications, including image processing, computer vision and statistics etc. [13, 17, 20, 24]. The EMD is a metric defined
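
    The L1 reformulation referred to is commonly written in the flux (Beckmann) form of optimal transport; sketched from standard references rather than quoted from this report, for densities ρ⁰ and ρ¹ on a domain Ω:

        \mathrm{EMD}(\rho^0, \rho^1) = \min_{m} \int_{\Omega} \lVert m(x) \rVert \, dx
        \quad \text{subject to} \quad
        \nabla \cdot m = \rho^0 - \rho^1, \qquad m \cdot n = 0 \ \text{on } \partial\Omega,

    so the objective is an L1-type norm of the flux m and the constraint is linear, which is what fast L1 solvers exploit.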

  11. Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I

    DTIC Science & Technology

    2016-09-01

    which EMD can be reformulated as a familiar homogeneous degree 1 regularized minimization. The new minimization problem is very similar to problems which...which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing, computer vision

  12. When Do Memory Limitations Lead to Regularization? An Experimental and Computational Investigation

    ERIC Educational Resources Information Center

    Perfors, Amy

    2012-01-01

    The Less is More hypothesis suggests that one reason adults and children differ in their ability to learn language is that they also differ in other cognitive capacities. According to one version of this hypothesis, children's relatively poor memory may make them more likely to regularize inconsistent input (Hudson Kam & Newport, 2005, 2009). This…

  13. 29 CFR 778.115 - Employees working at two or more rates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Overtime Pay Requirements Principles for Computing Overtime Pay Based on the “Regular Rate” § 778.115... different types of work for which different nonovertime rates of pay (of not less than the applicable minimum wage) have been established, his regular rate for that week is the weighted average of such rates...
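
    A worked example of the weighted-average computation the regulation describes, with illustrative (made-up) hours and rates:

    ```python
    # Two hourly rates in one workweek: the regular rate is the weighted
    # average, and hours past 40 earn an extra 0.5x premium on that rate
    # (straight-time pay already covers 1.0x for all hours worked).
    hours = {"machinist": 30, "janitorial": 16}      # 46 hours total
    rates = {"machinist": 22.00, "janitorial": 15.00}

    total_hours = sum(hours.values())
    straight_pay = sum(hours[j] * rates[j] for j in hours)
    regular_rate = straight_pay / total_hours        # weighted average of rates
    overtime_hours = max(0, total_hours - 40)
    total_pay = straight_pay + overtime_hours * 0.5 * regular_rate
    print(f"regular rate = ${regular_rate:.2f}/h, total pay = ${total_pay:.2f}")
    ```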

  14. Renormalization group theory for percolation in time-varying networks.

    PubMed

    Karschau, Jens; Zimmerling, Marco; Friedrich, Benjamin M

    2018-05-22

    Motivated by multi-hop communication in unreliable wireless networks, we present a percolation theory for time-varying networks. We develop a renormalization group theory for a prototypical network on a regular grid, where individual links switch stochastically between active and inactive states. The question whether a given source node can communicate with a destination node along paths of active links is equivalent to a percolation problem. Our theory maps the temporal existence of multi-hop paths on an effective two-state Markov process. We show analytically how this Markov process converges towards a memoryless Bernoulli process as the hop distance between source and destination node increases. Our work extends classical percolation theory to the dynamic case and elucidates temporal correlations of message losses. Quantification of temporal correlations has implications for the design of wireless communication and control protocols, e.g. in cyber-physical systems such as self-organized swarms of drones or smart traffic networks.
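
    A small simulation sketch of the setup (parameters are our own, not from the paper): each link is an independent two-state Markov chain, a serial path is up only when all of its links are active, and the lag-1 autocorrelation of the path state shrinks as the hop count grows, consistent with convergence toward a memoryless Bernoulli process.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    P_UP, P_DOWN = 0.2, 0.1   # inactive->active, active->inactive probabilities
    T = 100_000               # time steps

    def simulate_path(hops):
        """Indicator series: all `hops` links active simultaneously."""
        state = np.ones(hops, dtype=bool)
        path = np.empty(T, dtype=bool)
        for t in range(T):
            u = rng.random(hops)
            state = np.where(state, u > P_DOWN, u < P_UP)
            path[t] = state.all()
        return path

    for hops in (1, 2, 4, 8):
        x = simulate_path(hops).astype(float)
        lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
        print(f"{hops} hops: P(up)={x.mean():.3f}, lag-1 autocorr={lag1:.3f}")
    ```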

  15. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with automatic selection of the regularization parameter to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
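
    To illustrate the selection principle on the simpler Tikhonov problem, where the smoother is a linear operator (a sketch of the general GCV idea, not the paper's TV method): GCV picks the λ minimizing n·‖(I − S(λ))y‖² / tr(I − S(λ))².

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    t = np.linspace(0, 1, n)
    y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)  # noisy signal

    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    I = np.eye(n)

    def gcv(lam):
        """GCV score for min_x ||x - y||^2 + lam * ||D x||^2."""
        S = np.linalg.solve(I + lam * D.T @ D, I)  # smoother matrix S(lam)
        resid = (I - S) @ y
        return n * (resid @ resid) / np.trace(I - S) ** 2

    lams = np.logspace(-2, 3, 30)
    best = min(lams, key=gcv)
    print("GCV-selected lambda:", best)
    ```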

  16. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  17. Computer Programming Effects in Elementary: Perceptions and Career Aspirations in STEM

    ERIC Educational Resources Information Center

    Tran, Yune

    2018-01-01

    The development of elementary-aged students' STEM and computer science (CS) literacy is critical in this evolving technological landscape, thus, promoting success for college, career, and STEM/CS professional paths. Research has suggested that elementary-aged students need developmentally appropriate STEM integrated opportunities in the classroom;…

  18. ICCE/ICCAI 2000 Full & Short Papers (Multimedia and Hypermedia in Education).

    ERIC Educational Resources Information Center

    2000

    This document contains the full and short papers on multimedia and hypermedia in education from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: learner-centered navigation path planning in world Wide Web-based learning; the relation…

  19. Path planning for planetary rover using extended elevation map

    NASA Technical Reports Server (NTRS)

    Nakatani, Ichiro; Kubota, Takashi; Yoshimitsu, Tetsuo

    1994-01-01

    This paper describes a path planning method for planetary rovers to search for paths on planetary surfaces. A planetary rover must travel safely over long distances for many days across unfamiliar terrain, so how it processes sensory information in order to understand the planetary environment and make decisions based on that information is critically important. As a new data structure for informational mapping, an extended elevation map (EEM) is introduced, which incorporates the size of the rover. The proposed path planning can then be conducted as if the rover were a point, with the size of the rover automatically taken into account. The validity of the proposed methods is verified by computer simulations.
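
    A minimal grid sketch of the point-rover idea (our own toy example; the EEM itself encodes elevations, which we reduce to a binary traversability mask here): inflating untraversable cells by the rover's radius lets a planner treat the rover as a single point.

    ```python
    import numpy as np
    from collections import deque

    grid = np.zeros((20, 20), dtype=bool)
    grid[8:12, 5:15] = True                      # untraversable ridge
    ROVER_RADIUS = 2                             # rover half-width, in cells

    def inflate(obstacles, radius):
        """Block every cell within `radius` (Chebyshev) of an obstacle."""
        out = obstacles.copy()
        for yy, xx in zip(*np.nonzero(obstacles)):
            out[max(0, yy - radius):yy + radius + 1,
                max(0, xx - radius):xx + radius + 1] = True
        return out

    def bfs(blocked, start, goal):
        """Shortest 4-connected grid path for a point robot."""
        h, w = blocked.shape
        prev = {start: None}
        q = deque([start])
        while q:
            cur = q.popleft()
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = prev[cur]
                return path[::-1]
            y, x = cur
            for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                if 0 <= ny < h and 0 <= nx < w and not blocked[ny, nx] \
                        and (ny, nx) not in prev:
                    prev[(ny, nx)] = cur
                    q.append((ny, nx))
        return None

    path = bfs(inflate(grid, ROVER_RADIUS), (0, 0), (19, 19))
    print("path length:", len(path) if path else "no path")
    ```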

  20. Relativistic Zeroth-Order Regular Approximation Combined with Nonhybrid and Hybrid Density Functional Theory: Performance for NMR Indirect Nuclear Spin-Spin Coupling in Heavy Metal Compounds.

    PubMed

    Moncho, Salvador; Autschbach, Jochen

    2010-01-12

    A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
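
    For clarity, the benchmark statistic quoted above is just the median of the per-coupling relative errors; a sketch with made-up numbers:

    ```python
    import numpy as np

    # Hypothetical computed vs experimental coupling constants, in Hz.
    J_exp = np.array([1540.0, 285.0, 62.0, 8.9])
    J_calc = np.array([1712.0, 248.0, 55.0, 10.2])

    rel_median_dev = np.median(np.abs(J_calc - J_exp) / np.abs(J_exp))
    print(f"relative median deviation = {rel_median_dev:.1%}")
    ```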
