Science.gov

Sample records for solving large-scale stochastic

  1. Planning under uncertainty: solving large-scale stochastic linear programs

    SciTech Connect

    Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
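    The hedging idea is easy to see on a toy instance. The sketch below is not Infanger's decomposition/importance-sampling scheme; it is a minimal deterministic-equivalent (sample-average) LP for a newsvendor-style recourse problem, with made-up costs and demand scenarios, solved via SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Deterministic-equivalent LP for a toy two-stage problem (newsvendor-style):
# choose first-stage order x at unit cost c; each scenario s has demand d_s,
# and unmet demand is penalized at p per unit via recourse variable y_s.
#   min  c*x + (1/S) * sum_s p * y_s
#   s.t. y_s >= d_s - x,   x, y_s >= 0
c_cost, penalty = 1.0, 2.0
demands = np.array([1.0, 2.0, 3.0, 4.0])     # equally likely scenarios
S = len(demands)

# Variable vector: [x, y_1, ..., y_S]
obj = np.concatenate([[c_cost], np.full(S, penalty / S)])
# Constraint  d_s - x - y_s <= 0  rewritten as  -x - y_s <= -d_s
A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])
b_ub = -demands

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + S))
x_star = res.x[0]
print(f"first-stage decision x = {x_star:.2f}, expected cost = {res.fun:.2f}")
```

    Any first-stage order between the two middle scenarios is optimal here (expected cost 3.5): the stochastic solution hedges across scenarios instead of planning for any single one.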

  2. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires order n rather than the usual order n² floating point operations.
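    ARPACK is most conveniently reached from Python through `scipy.sparse.linalg.eigsh`, which wraps ARPACK's symmetric drivers. A small sketch on a matrix with a known spectrum (the 1D discrete Laplacian):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh   # wraps ARPACK's symmetric driver

# 1D discrete Laplacian: tridiagonal, eigenvalues known in closed form,
# lambda_k = 2 - 2*cos(k*pi/(n+1)), k = 1..n
n = 500
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Ask ARPACK for the 3 largest eigenvalues; only matrix-vector products
# with A are ever formed, which is what makes this viable at large scale.
vals = eigsh(A, k=3, which="LA", return_eigenvectors=False)
exact = 2 - 2 * np.cos(np.arange(n - 2, n + 1) * np.pi / (n + 1))
print("max error vs closed form:", np.max(np.abs(np.sort(vals) - exact)))
```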

  3. Solving large scale traveling salesman problems by chaotic neurodynamics.

    PubMed

    Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n² for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Last, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches. PMID:12022514
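    As a point of reference, the conventional tabu search that the chaotic method builds on can be sketched in a few lines. This is a plain 2-opt tabu search on a hypothetical instance (cities on a circle, so the optimal tour is known), not the chaotic neural-network search itself:

```python
import math, random

random.seed(0)
n = 12
# Hypothetical instance: cities on a circle, so the optimum is the hull tour.
cities = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]

def length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
               for i in range(n))

tour = list(range(n))
random.shuffle(tour)
best, best_len = tour[:], length(tour)
init_len = best_len
tabu = {}                      # move (i, j) -> last iteration it is forbidden
for it in range(200):
    best_move = None
    for i in range(n - 1):
        for j in range(i + 2, n):
            # 2-opt move: reverse the segment between positions i+1 and j
            cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            L = length(cand)
            # skip tabu moves unless they beat the best tour found (aspiration)
            if tabu.get((i, j), -1) >= it and L >= best_len:
                continue
            if best_move is None or L < best_move[0]:
                best_move = (L, i, j, cand)
    L, i, j, tour = best_move
    tabu[(i, j)] = it + 8      # forbid undoing this reversal for 8 iterations
    if L < best_len:
        best, best_len = tour[:], L

print(f"initial length {init_len:.3f} -> best found {best_len:.3f}")
```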

  4. Solving Large-scale Eigenvalue Problems in SciDAC Applications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods, and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress on using multi-level algebraic sub-structuring techniques to speed up the eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.

  5. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10⁹ M☉/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10¹¹ M☉/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  6. Solving large scale structure in ten easy steps with COLA

    SciTech Connect

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J. E-mail: matiasz@ias.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10⁹ M☉/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10¹¹ M☉/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  7. Topology of large-scale engineering problem-solving networks

    NASA Astrophysics Data System (ADS)

    Braha, Dan; Bar-Yam, Yaneer

    2004-01-01

    The last few years have led to a series of discoveries that uncovered statistical properties that are common to a variety of diverse real-world social, information, biological, and technological networks. The goal of the present paper is to investigate the statistical properties of networks of people engaged in distributed problem solving and discuss their significance. We show that problem-solving networks have properties (sparseness, small world, scaling regimes) that are like those displayed by information, biological, and technological networks. More importantly, we demonstrate a previously unreported difference between the distribution of incoming and outgoing links of directed networks. Specifically, the incoming link distributions have sharp cutoffs that are substantially lower than those of the outgoing link distributions (sometimes the outgoing cutoffs are not even present). This asymmetry can be explained by considering the dynamical interactions that take place in distributed problem solving and may be related to differences between each actor’s capacity to process information provided by others and the actor’s capacity to transmit information over the network. We conjecture that the asymmetric link distribution is likely to hold for other human or nonhuman directed networks when nodes represent information processing and using elements.
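    The in/out asymmetry the paper reports is easy to measure on any directed edge list. A toy sketch (the network below is invented for illustration):

```python
from collections import Counter

# Toy directed problem-solving network as an edge list: (u, v) means task u
# feeds information to task v.  The quantity of interest in the paper is the
# contrast between the in-degree and out-degree distributions.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"),
         ("b", "c"), ("c", "d"), ("d", "e"), ("e", "b")]

out_deg = Counter(u for u, _ in edges)
in_deg = Counter(v for _, v in edges)

# Node "a" broadcasts to four tasks, but no node receives from more than two:
# outgoing degrees can run high while incoming degrees stay capped, which is
# the kind of asymmetry the paper attributes to limited processing capacity.
print("max out-degree:", max(out_deg.values()))   # 4 (node "a")
print("max in-degree: ", max(in_deg.values()))    # 2
```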

  8. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy allocation problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results of four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP. PMID:26474934

  9. Tensor-Krylov methods for solving large-scale systems of nonlinear equations.

    SciTech Connect

    Bader, Brett William

    2004-08-01

    This paper develops and investigates iterative tensor methods for solving large-scale systems of nonlinear equations. Direct tensor methods for nonlinear equations have performed especially well on small, dense problems where the Jacobian matrix at the solution is singular or ill-conditioned, which may occur when approaching turning points, for example. This research extends direct tensor methods to large-scale problems by developing three tensor-Krylov methods that base each iteration upon a linear model augmented with a limited second-order term, which provides information lacking in a (nearly) singular Jacobian. The advantage of the new tensor-Krylov methods over existing large-scale tensor methods is their ability to solve the local tensor model to a specified accuracy, which produces a more accurate tensor step. The performance of these methods in comparison to Newton-GMRES and tensor-GMRES is explored on three Navier-Stokes fluid flow problems. The numerical results provide evidence that tensor-Krylov methods are generally more robust and more efficient than Newton-GMRES on some important and difficult problems. In addition, the results show that the new tensor-Krylov methods and tensor-GMRES each perform better in certain situations.
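    The Newton-GMRES baseline that the tensor-Krylov methods extend is available off the shelf as `scipy.optimize.newton_krylov`, which needs only residual evaluations. A sketch on a small nonlinear boundary-value problem (the problem itself is made up for illustration):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Newton-Krylov: each Newton step solves the Jacobian system inexactly with a
# Krylov method, using finite-difference matrix-vector products.  Toy system:
# discretized u'' = u^3 - 1 on [0, 1] with u(0) = u(1) = 0.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate([[0.0], u, [0.0]])          # Dirichlet boundaries
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
    return lap - u**3 + 1.0

sol = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
print("max residual:", np.max(np.abs(residual(sol))))
```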

  10. Analysis of some large-scale nonlinear stochastic dynamic systems with subspace-EPC method

    NASA Astrophysics Data System (ADS)

    Er, GuoKang; Iu, VaiPan

    2011-09-01

    The probabilistic solutions to some nonlinear stochastic dynamic (NSD) systems with various polynomial types of nonlinearities in displacements are analyzed with the subspace-exponential polynomial closure (subspace-EPC) method. The space of the state variables of the large-scale nonlinear stochastic dynamic system excited by Gaussian white noises is separated into two subspaces. Both sides of the Fokker-Planck-Kolmogorov (FPK) equation corresponding to the NSD system are then integrated over one of the subspaces. The FPK equation for the joint probability density function of the state variables in the other subspace is formulated. Therefore, the FPK equations in low dimensions are obtained from the original FPK equation in high dimensions and the FPK equations in low dimensions are solvable with the exponential polynomial closure method. Examples about multi-degree-of-freedom NSD systems with various polynomial types of nonlinearities in displacements are given to show the effectiveness of the subspace-EPC method in these cases.

  11. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    SciTech Connect

    Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from finite element analysis of resonant frequencies and external Q_e values of a waveguide loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full scale cavity design are outlined.

  12. A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    NASA Astrophysics Data System (ADS)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
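    Stripped of the function approximators and the 13-reservoir network, the underlying dynamic-programming recursion can be sketched on a hypothetical single reservoir with discrete storage, random inflow, hydropower revenue, and a flood penalty (all numbers invented for illustration):

```python
# Tabular stand-in for the ADP scheme in the abstract: one reservoir with
# discrete storage levels, stochastic inflow, revenue per unit released, and
# a penalty for storage above the flood level.  Solved by value iteration.
S_MAX, FLOOD_LEVEL = 4, 3
INFLOWS = {0: 0.5, 2: 0.5}       # inflow value -> probability (hypothetical)
GAMMA = 0.9                      # discount factor

def step(s, a, q):
    """Next storage after releasing a and receiving inflow q (excess spills)."""
    return min(s - a + q, S_MAX)

def reward(s, a):
    """Hydropower revenue per unit released minus a flood penalty."""
    return 1.0 * a - 5.0 * max(s - FLOOD_LEVEL, 0)

def q_value(s, a, V):
    return reward(s, a) + GAMMA * sum(p * V[step(s, a, q)]
                                      for q, p in INFLOWS.items())

V = [0.0] * (S_MAX + 1)
for _ in range(500):             # value iteration to (numerical) fixed point
    V = [max(q_value(s, a, V) for a in range(s + 1)) for s in range(S_MAX + 1)]

policy = [max(range(s + 1), key=lambda a: q_value(s, a, V))
          for s in range(S_MAX + 1)]
print("release policy by storage level:", policy)
```

    On this toy instance the computed policy releases all stored water each period, since revenue is linear and future releases are discounted; the trade-offs studied in the paper emerge from richer dynamics and competing objectives.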

  13. Quantifying the colour-dependent stochasticity of large-scale structure

    NASA Astrophysics Data System (ADS)

    Patej, Anna; Eisenstein, Daniel

    2016-08-01

    We address the question of whether massive red and blue galaxies trace the same large-scale structure at z ˜ 0.6 using the CMASS sample of galaxies from Data Release 12 of the Sloan Digital Sky Survey III. After splitting the catalogue into subsamples of red and blue galaxies using a simple colour cut, we measure the clustering of both subsamples and construct the correlation coefficient, r, using two statistics. The correlation coefficient quantifies the stochasticity between the two subsamples, which we examine over intermediate scales (20 ≲ R ≲ 100 h⁻¹ Mpc). We find that on these intermediate scales, the correlation coefficient is consistent with 1; in particular, we find r > 0.95 taking into account both statistics and r > 0.974 using the favoured statistic.
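    The statistic itself is compact: r = ⟨δ_r δ_b⟩ / √(⟨δ_r²⟩⟨δ_b²⟩) for the red and blue overdensity fields. A synthetic sketch in which both subsamples trace the same underlying field plus independent noise (all data simulated, not survey data):

```python
import numpy as np

# Both "galaxy" fields trace one underlying structure field; independent
# noise mimics shot noise and colour-dependent scatter.  If the tracers
# share the structure, r stays close to 1.
rng = np.random.default_rng(42)
structure = rng.standard_normal(100_000)
delta_red = structure + 0.1 * rng.standard_normal(100_000)
delta_blue = structure + 0.1 * rng.standard_normal(100_000)

r = np.mean(delta_red * delta_blue) / np.sqrt(
    np.mean(delta_red**2) * np.mean(delta_blue**2))
print(f"correlation coefficient r = {r:.4f}")
```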

  14. Quantifying the Colour-Dependent Stochasticity of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Patej, Anna; Eisenstein, Daniel

    2016-03-01

    We address the question of whether massive red and blue galaxies trace the same large-scale structure at z ˜ 0.6 using the CMASS sample of galaxies from Data Release 12 of the Sloan Digital Sky Survey III. After splitting the catalog into subsamples of red and blue galaxies using a simple colour cut, we measure the clustering of both subsamples and construct the correlation coefficient, r, using two statistics. The correlation coefficient quantifies the stochasticity between the two subsamples, which we examine over intermediate scales (20 ≲ R ≲ 100 h⁻¹ Mpc). We find that on these intermediate scales, the correlation coefficient is consistent with 1; in particular, we find r > 0.95 taking into account both statistics and r > 0.974 using the favored statistic.

  15. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    SciTech Connect

    Lenormand, R.; Thiele, M.R.

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscosity fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, it is derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternative method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated values are compared to those derived by history matching using fine-grid numerical simulations.

  16. Three-Stage Tabu Search for Solving Large-Scale Flow Shop Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuedong; Tian, Yajie; Sannomiya, Nobuo

    Tabu search is a meta-heuristic approach designed for finding a suboptimal solution of combinatorial optimization problems. In this paper a tabu search with three stages is proposed for solving large-scale flow shop scheduling problems. In order to obtain a better suboptimal solution in a short computation time, three different candidate lists are used to determine the incumbent solution in the respective search stages. The candidate lists are constructed by restricting the moving of each job. Test problems with four kinds of job data are examined. Based on analyzing the relationship between the candidate list and the suboptimal solution for each job data, a common parameter is given to construct the candidate list during the search process. The computational results are compared with those of the genetic algorithm and the basic tabu search, from which it is shown that the proposed tabu search outperforms the other two.

  17. From Self-consistency to SOAR: Solving Large Scale Nonlinear Eigenvalue Problems

    SciTech Connect

    Bai, Zhaojun; Yang, Chao

    2006-02-01

    What is common among electronic structure calculation, design of MEMS devices, vibrational analysis of high speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.

  18. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    NASA Astrophysics Data System (ADS)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    inversion and appropriate solution schemes in escript. We will also give a brief introduction into escript's open framework for defining and solving geophysical inversion problems. Finally we will show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.

  19. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    NASA Astrophysics Data System (ADS)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, then, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of space-time properties of rainfall fields is a key issue to perform a reliable flood risk analysis based on alternative precipitation scenarios to be fed in a new generation of large scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that must overcome the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas.
Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale and Shape (GAMLSS).

  20. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
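    A minimal stand-in for this setup, with brute-force enumeration playing the role of the exact ILP and a greedy add-heuristic playing the role of the decomposition heuristic (the instance is invented; the paper's models are far larger):

```python
import itertools, math

# Toy grid-based location problem: open facilities at candidate grid cells at
# a fixed cost each, then serve every demand point from its nearest open site.
sites = [(0, 0), (0, 3), (3, 0), (3, 3)]          # candidate grid cells
demands = [(0, 1), (1, 0), (3, 2), (2, 3), (1, 1)]
FIXED_COST = 3.0

def total_cost(open_sites):
    if not open_sites:
        return math.inf
    service = sum(min(math.dist(d, s) for s in open_sites) for d in demands)
    return FIXED_COST * len(open_sites) + service

# "Exact" solver: enumerate all 2^4 - 1 nonempty subsets of candidate sites.
exact = min((total_cost(list(sub)), sub)
            for k in range(1, len(sites) + 1)
            for sub in itertools.combinations(sites, k))

# Heuristic: greedily add the site that most reduces total cost, then stop.
chosen = []
while True:
    best = min(((total_cost(chosen + [s]), s) for s in sites if s not in chosen),
               default=(math.inf, None))
    if best[0] >= total_cost(chosen):
        break
    chosen.append(best[1])

print("exact cost:", round(exact[0], 3), " heuristic cost:",
      round(total_cost(chosen), 3))
```

    On this tiny instance the heuristic happens to match the exact optimum; in general a heuristic of this kind only bounds the optimum from above, which is why the paper benchmarks against the exact ILP.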

  1. Solving Large-scale Spatial Optimization Problems in Water Resources Management through Spatial Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Wang, J.; Cai, X.

    2007-12-01

    A water resources system can be defined as a large-scale spatial system, within which a distributed ecological system interacts with the stream network and groundwater system. In water resources management, the causative factors and hence the solutions to be developed have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data is usually geo-referenced and in the form of a map. One of the important functions of geographic information systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design better spatial patterns for satisfying the designated objectives of water resources management. Evolutionary algorithms (EA) have been demonstrated to be successful in solving complex optimization models for water resources management due to their flexibility to incorporate complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEA). SEA assimilates spatial information into EA, and even changes the representation and operators of EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for the spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. Also it is difficult to express complex spatial patterns by explicit constraints included in the EA. The GIS can help identify the spatial linkages and correlations based on the spatial knowledge of the problem. These linkages are incorporated in the fitness function for the preference of the compatible vegetation distribution.
Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators

  2. Solving stochastic epidemiological models using computer algebra

    NASA Astrophysics Data System (ADS)

    Hincapie, Doracelly; Ospina, Juan

    2011-06-01

    Mathematical modeling in Epidemiology is an important tool to understand the ways in which diseases are transmitted and controlled. The mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on short systems of non-linear ordinary differential equations and the stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous and automatic analysis of stability, both local and global, from which it is possible to derive the algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat and the analysis of their properties requires complicated considerations in statistical mathematics. In this work we propose to use computer algebra software with the aim to solve epidemic stochastic models such as the SIR model and the carrier-borne model. Specifically we use Maple to solve these stochastic models in the case of small groups and we obtain results that do not appear in standard textbooks or in the books updated on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, and this shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results and obtain estimates of basic parameters such as the basic reproductive rate and the stochastic threshold. We claim that our algorithms and results are important tools to control diseases in a globalized world.
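    A numerical counterpart to the algebraic treatment: the Gillespie stochastic simulation algorithm draws exact sample paths of the SIR model for a small group (the rates below are hypothetical, chosen only for illustration):

```python
import random

# Gillespie stochastic simulation of the SIR model: two reactions,
#   infection  S + I -> 2I  at rate beta * S * I / N
#   recovery   I -> R       at rate gamma * I
random.seed(1)
beta, gamma = 0.5, 0.2          # infection and recovery rates (hypothetical)
S, I, R = 30, 1, 0              # a small group, as in the paper's setting
N = S + I + R
t = 0.0
while I > 0:
    rate_inf = beta * S * I / N
    rate_rec = gamma * I
    total = rate_inf + rate_rec
    t += random.expovariate(total)          # exponential waiting time
    if random.random() < rate_inf / total:  # pick which reaction fires
        S, I = S - 1, I + 1
    else:
        I, R = I - 1, R + 1

print(f"epidemic over at t = {t:.1f}, final size R = {R}")
```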

  3. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...

  4. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that the ever increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations on what was then the world's largest unclassified supercomputer at NERSC for this class of problems. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
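    The Hermitian/skew-Hermitian splitting (HSS) idea can be sketched in a few lines: write A = H + S with H = (A + Aᵀ)/2 Hermitian and S = (A − Aᵀ)/2 skew-Hermitian, then alternate two shifted half-steps. A 2×2 toy example (matrix and shift α chosen purely for illustration):

```python
import numpy as np

# HSS iteration for Ax = b with A non-Hermitian but positive definite:
#   (alpha*I + H) x_half = (alpha*I - S) x + b
#   (alpha*I + S) x_new  = (alpha*I - H) x_half + b
A = np.array([[3.0, 1.0],
              [-1.0, 2.0]])
b = np.array([1.0, 1.0])
H = (A + A.T) / 2                 # Hermitian part (diagonal here)
Sk = (A - A.T) / 2                # skew-Hermitian part
alpha = 1.0
I2 = np.eye(2)

x = np.zeros(2)
for _ in range(60):
    x_half = np.linalg.solve(alpha * I2 + H, (alpha * I2 - Sk) @ x + b)
    x = np.linalg.solve(alpha * I2 + Sk, (alpha * I2 - H) @ x_half + b)

print("residual norm:", np.linalg.norm(A @ x - b))
```

    For a positive definite Hermitian part the iteration contracts for any positive shift α, which is the attraction of the splitting for the non-Hermitian systems mentioned in the abstract.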

  5. On large-scale nonlinear programming techniques for solving optimal control problems

    SciTech Connect

    Faco, J.L.D.

    1994-12-31

    The formulation of decision problems by optimal control theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon and a variable initial state vector. In general such problems are characterized by a large number of variables, especially when they arise from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected gradient devices with specific line searches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.

  6. Solving Large-Scale Computational Problems Using Insights from Statistical Physics

    SciTech Connect

    Selman, Bart

    2012-02-29

    Many challenging problems in computer science and related fields can be formulated as constraint satisfaction problems. Such problems consist of a set of discrete variables and a set of constraints between those variables, and represent a general class of so-called NP-complete problems. The goal is to find a value assignment to the variables that satisfies all constraints, generally requiring a search through an exponentially large space of variable-value assignments. Models for disordered systems, as studied in statistical physics, can provide important new insights into the nature of constraint satisfaction problems. Recently, work in this area has resulted in the discovery of a new method for solving such problems, called the survey propagation (SP) method. With SP, we can solve problems with millions of variables and constraints, an improvement of two orders of magnitude over previous methods.
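
    For readers unfamiliar with the problem class, the sketch below shows a constraint satisfaction problem in its plainest form, solved by naive backtracking over the exponentially large assignment space. It illustrates only the problem being solved; survey propagation itself is a message-passing algorithm and is not shown here.

```python
def solve_csp(variables, domains, constraints):
    """Plain backtracking search for a binary constraint satisfaction problem.
    `constraints` maps a pair of variable names to a predicate on their values.
    Returns a satisfying assignment as a dict, or None if none exists."""
    assignment = {}

    def consistent(var, val):
        # check every constraint that touches `var` and an assigned variable
        for (a, b), pred in constraints.items():
            if var == a and b in assignment and not pred(val, assignment[b]):
                return False
            if var == b and a in assignment and not pred(assignment[a], val):
                return False
        return True

    def backtrack(i):
        if i == len(variables):
            return dict(assignment)
        var = variables[i]
        for val in domains[var]:
            if consistent(var, val):
                assignment[var] = val
                result = backtrack(i + 1)
                if result is not None:
                    return result
                del assignment[var]
        return None

    return backtrack(0)
```

    A 3-coloring of a triangle graph, for instance, is the CSP with three variables, domains {0, 1, 2}, and a not-equal constraint on each edge.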

  7. Solving large-scale real-world telecommunication problems using a grid-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Luna, Francisco; Nebro, Antonio; Alba, Enrique; Durillo, Juan

    2008-11-01

    This article analyses the use of a grid-based genetic algorithm (GrEA) to solve a real-world instance of a problem from the telecommunication domain. The problem, known as automatic frequency planning (AFP), is used in a global system for mobile communications (GSM) networks to assign a number of fixed frequencies to a set of GSM transceivers located in the antennae of a cellular phone network. Real data instances of the AFP are very difficult to solve owing to the NP-hard nature of the problem, so combining grid computing and metaheuristics turns out to be a way to provide satisfactory solutions in a reasonable amount of time. GrEA has been deployed on a grid with up to 300 processors to solve an AFP instance of 2612 transceivers. The results not only show that significant running time reductions are achieved, but that the search capability of GrEA clearly outperforms that of the equivalent non-grid algorithm.

  8. On stochastic control system design methods for weakly coupled large scale linear systems.

    NASA Technical Reports Server (NTRS)

    Kwong, R.; Chong, C.-Y.; Athans, M.

    1972-01-01

    This paper considers the problem of decentralized control of two weakly coupled linear stochastic systems, using quadratic performance indices. The basic idea is to have each controller independently control its own system, based upon noisy measurements of its own output. To compensate for the effects of weak coupling upon the resultant performance, fake white plant noise is introduced into each system. The appropriate intensity of the fake plant noise is obtained through the solution of an off-line deterministic matrix optimal control problem. The effects of this design method upon the overall coupled system performance are analyzed as a function of the degree of intersystem coupling.

  9. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Girolami, M.

    2014-11-01

    We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated with the others, providing statistically efficient Markov chain simulation. However, this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method that combines state-of-the-art adjoint techniques with the strengths of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout the RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort, at some corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since only a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint method.
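
    The "fixed constant metric tensor" strategy can be sketched in miniature: ordinary HMC in which the momentum is drawn from N(0, M) with M frozen at a single matrix, here playing the role of the Gauss-Newton Hessian at the MAP point. The NumPy sketch below is a generic illustration under that assumption, not the authors' PDE-constrained implementation; `logp_grad`, `eps` and `n_leap` are placeholders.

```python
import numpy as np

def hmc_fixed_metric(logp_grad, x0, M, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Hamiltonian Monte Carlo with a fixed, constant metric (mass matrix) M.
    `logp_grad(x)` returns (log p(x), grad log p(x)).  Momenta are drawn from
    N(0, M) and the leapfrog integrator uses M^{-1} in the position update."""
    rng = np.random.default_rng(seed)
    Minv = np.linalg.inv(M)
    L = np.linalg.cholesky(M)
    x = np.array(x0, dtype=float)
    logp, grad = logp_grad(x)
    samples = []
    for _ in range(n_samples):
        p = L @ rng.standard_normal(x.size)       # momentum ~ N(0, M)
        x_new, grad_new, logp_new = x.copy(), grad.copy(), logp
        p_new = p + 0.5 * eps * grad_new          # leapfrog: half kick
        for i in range(n_leap):
            x_new = x_new + eps * (Minv @ p_new)  # drift
            logp_new, grad_new = logp_grad(x_new)
            if i < n_leap - 1:
                p_new = p_new + eps * grad_new    # full kick
        p_new = p_new + 0.5 * eps * grad_new      # final half kick
        # Metropolis accept/reject on the Hamiltonian H = -log p + p' M^{-1} p / 2
        h_old = -logp + 0.5 * p @ Minv @ p
        h_new = -logp_new + 0.5 * p_new @ Minv @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x, logp, grad = x_new, logp_new, grad_new
        samples.append(x.copy())
    return np.array(samples)
```

    For a Gaussian target, choosing M equal to the precision (the Fisher information) whitens the dynamics, which is exactly the effect the fixed Gauss-Newton metric is meant to approximate.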

  10. A modified priority list-based MILP method for solving large-scale unit commitment problems

    SciTech Connect

    Ke, Xinda; Lu, Ning; Wu, Di; Kintner-Meyer, Michael CW

    2015-07-26

    This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. In the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. In the second step, for the generators whose on/off status is predetermined, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster by off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up solution of the UC problem with only a minor compromise in optimality when appropriate scale factors are selected.
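
    The priority-list idea behind the first step can be illustrated in a few lines: rank units by a cost measure and commit them in order until demand plus a reserve margin is covered. The sketch below is plain Python; the generator fields and the 10% reserve margin are illustrative assumptions, and it shows the ranking step only, not the MILP second step.

```python
def priority_list_commit(gens, demand, reserve=0.1):
    """Greedy priority-list commitment for a single period: commit units in
    order of increasing cost per MW of capacity until demand plus a reserve
    margin is covered.  Returns the committed unit names and total capacity."""
    order = sorted(gens, key=lambda g: g["cost"] / g["cap"])  # priority order
    target = demand * (1 + reserve)
    committed, cap = [], 0.0
    for g in order:
        if cap >= target:
            break
        committed.append(g["name"])
        cap += g["cap"]
    return committed, cap
```

    In the two-step scheme, units committed (or left off) by such a ranking over the look-ahead period would have their binary variables fixed before the MILP is solved.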

  11. Solving Man-Induced Large-Scale Conservation Problems: The Spanish Imperial Eagle and Power Lines

    PubMed Central

    López-López, Pascual; Ferrer, Miguel; Madero, Agustín; Casado, Eva; McGrady, Michael

    2011-01-01

    Background Man-induced mortality of birds caused by electrocution with poorly-designed pylons and power lines has been reported to be an important mortality factor that could become a major cause of population decline of one of the world's rarest raptors, the Spanish imperial eagle (Aquila adalberti). Consequently, it has resulted in increasing awareness of this problem amongst land managers and the public at large, as well as increased research into the distribution of electrocution events and likely mitigation measures. Methodology/Principal Findings We provide information on how mitigation measures implemented at a regional level under the conservation program for the Spanish imperial eagle have resulted in a positive shift of demographic trends in Spain. A 35-year temporal data set (1974–2009) on mortality of the Spanish imperial eagle was compiled, including population censuses and data on electrocution and non-electrocution of birds. Additional information was obtained from 32 radio-tracked young eagles and specific field surveys. Data were divided into two periods, before and after the approval in 1990 of a regional regulation of power line design, which established mandatory rules aimed at minimizing or eliminating the negative impacts of power line facilities on avian populations. Our results show how population size and the average annual percentage of population change have increased between the two periods, whereas the number of electrocuted birds has been reduced in spite of the continuous growth of the wiring network. Conclusions Our results demonstrate that bird electrocution is an affordable problem to solve if political interest is shown and financial investment is made. The combination of adequate spatial planning with sustainable development of human infrastructures will contribute positively to the conservation of the Spanish imperial eagle and may underpin population growth and range expansion, with positive side effects on other endangered species.

  12. Stochastic dynamics from the fractional Fokker-Planck-Kolmogorov equation: Large-scale behavior of the turbulent transport coefficient

    NASA Astrophysics Data System (ADS)

    Milovanov, Alexander V.

    2001-04-01

    The formulation of the fractional Fokker-Planck-Kolmogorov (FPK) equation [Physica D 76, 110 (1994)] has led to important advances in the description of the stochastic dynamics of Hamiltonian systems. Here, the long-time behavior of the basic transport processes obeying the fractional FPK equation is analyzed. A derivation of the large-scale turbulent transport coefficient for a Hamiltonian system with 1 1/2 degrees of freedom is proposed in connection with the fractal structure of the particle chaotic trajectories. The principal transport regimes (i.e., a diffusion-type process, ballistic motion, subdiffusion in the limit of the frozen Hamiltonian, and behavior associated with self-organized criticality) are obtained as partial cases of the generalized transport law. A comparison with recent numerical and experimental studies is given.

  13. Stochastic dynamics from the fractional Fokker-Planck-Kolmogorov equation: large-scale behavior of the turbulent transport coefficient.

    PubMed

    Milovanov, A V

    2001-04-01

    The formulation of the fractional Fokker-Planck-Kolmogorov (FPK) equation [Physica D 76, 110 (1994)] has led to important advances in the description of the stochastic dynamics of Hamiltonian systems. Here, the long-time behavior of the basic transport processes obeying the fractional FPK equation is analyzed. A derivation of the large-scale turbulent transport coefficient for a Hamiltonian system with 1 1/2 degrees of freedom is proposed in connection with the fractal structure of the particle chaotic trajectories. The principal transport regimes (i.e., a diffusion-type process, ballistic motion, subdiffusion in the limit of the frozen Hamiltonian, and behavior associated with self-organized criticality) are obtained as partial cases of the generalized transport law. A comparison with recent numerical and experimental studies is given. PMID:11308983

  14. Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP

    NASA Technical Reports Server (NTRS)

    Gupta, V. K.; Zillmer, S. D.; Allison, R. E.

    1986-01-01

    Better models, more accurate and faster algorithms, and large-scale computing improve cost effectiveness and permit more representative dynamic analyses. The band Lanczos eigen-solution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis computer program to effectively solve for structural vibration modes, including those of large complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were re-orthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method, sweeping out rigid-body modes and previously generated modes and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost effectiveness and accuracy of the method as implemented in Rockwell NASTRAN. The CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate and converged faster than the single-vector Lanczos method, and was comparable to the subspace iteration method, a block version of the inverse power method. However, the subspace matrix tended to be fully populated in the case of subspace iteration, and not as sparse as a band matrix.

  15. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGES

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  16. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  17. Locally weighted histogram analysis and stochastic solution for large-scale multi-state free energy estimation

    NASA Astrophysics Data System (ADS)

    Tan, Zhiqiang; Xia, Junchao; Zhang, Bin W.; Levy, Ronald M.

    2016-01-01

    The weighted histogram analysis method (WHAM) including its binless extension has been developed independently in several different contexts, and widely used in chemistry, physics, and statistics, for computing free energies and expectations from multiple ensembles. However, this method, while statistically efficient, is computationally costly or even infeasible when a large number, hundreds or more, of distributions are studied. We develop a locally weighted histogram analysis method (local WHAM) from the perspective of simulations of simulations (SOS), using generalized serial tempering (GST) to resample simulated data from multiple ensembles. The local WHAM equations based on one jump attempt per GST cycle can be solved by optimization algorithms orders of magnitude faster than standard implementations of global WHAM, but yield estimates of free energies similarly accurate to global WHAM estimates. Moreover, we propose an adaptive SOS procedure for solving local WHAM equations stochastically when multiple jump attempts are performed per GST cycle. Such a stochastic procedure can lead to more accurate estimates of equilibrium distributions than local WHAM with one jump attempt per cycle. The proposed methods are broadly applicable when the original data to be "WHAMMED" are obtained properly by any sampling algorithm including serial tempering and parallel tempering (replica exchange). To illustrate the methods, we estimated absolute binding free energies and binding energy distributions using the binding energy distribution analysis method from one and two dimensional replica exchange molecular dynamics simulations for the beta-cyclodextrin-heptanoate host-guest system. In addition to the computational advantage of handling large datasets, our two dimensional WHAM analysis also demonstrates that accurate results similar to those from well-converged data can be obtained from simulations for which sampling is limited and not fully equilibrated.
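
    For orientation, the global WHAM equations that such work accelerates form a two-line fixed point: p(x) ∝ Σ_k n_k(x) / Σ_k N_k f_k c_k(x), with f_k⁻¹ = Σ_x c_k(x) p(x). A minimal self-consistent iteration for histogram data is sketched below (NumPy; the bias-weight matrix c_k(x) is assumed given). It illustrates standard global WHAM, not the local WHAM of the paper.

```python
import numpy as np

def wham(counts, bias, tol=1e-10, max_iter=10000):
    """Self-consistent iteration for the global WHAM equations.
    counts[k, x] : histogram counts of simulation k in bin x.
    bias[k, x]   : bias weight c_k(x) of ensemble k in bin x.
    Returns the unbiased distribution p[x] and normalization constants f[k]."""
    K, M = counts.shape
    N = counts.sum(axis=1)                       # samples per simulation
    f = np.ones(K)
    for _ in range(max_iter):
        denom = (N[:, None] * f[:, None] * bias).sum(axis=0)
        p = counts.sum(axis=0) / denom           # WHAM estimate of p(x)
        p = p / p.sum()                          # normalize
        f_new = 1.0 / (bias * p).sum(axis=1)     # update normalization constants
        if np.max(np.abs(f_new - f) / f) < tol:
            f = f_new
            break
        f = f_new
    return p, f
```

    With a single unbiased ensemble the fixed point is reached immediately and p reduces to the empirical histogram, which is a convenient sanity check.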

  18. Comparative study of large scale simulation of underground explosions inalluvium and in fractured granite using stochastic characterization

    NASA Astrophysics Data System (ADS)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2014-12-01

    This work describes a methodology used for large scale modeling of wave propagation from underground explosions conducted at the Nevada Test Site (NTS) in two different geological settings: fractured granitic rock mass and alluvium deposits. We show that the discrete nature of rock masses as well as the spatial variability of the fabric of alluvium is very important to understand ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NTS as well as historical data from the characterization during the underground nuclear tests conducted at the NTS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key important geologic features specific to fractured media, mainly the joints, and those specific to alluvium porous media, mainly the spatial variability of geological alluvium facies characterized by their variances and their integral scales. We have also explored key features common to both geological environments, such as saturation and topography, and assessed which characteristics most affect the ground motion in the near-field and in the far-field. Stochastic representations of these features based on the field characterizations have been implemented in the Geodyn and GeodynL hydrocodes. Both codes were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  19. Modelling large-scale spatial variability of soil properties with sequential stochastic simulation conditioned by universal kriging in a Hungarian study site

    NASA Astrophysics Data System (ADS)

    Szatmári, Gábor; Barta, Károly; Pásztor, László

    2015-04-01

    Modelling the large-scale spatial variability of soil properties is a promising subject in soil science, as well as in general environmental research, since the resulting model(s) can be applied to solve various problems. In addition to "purely" mapping an environmental element, the spatial uncertainty of the map product can be deduced, and specific areas can be identified and/or delineated (contaminated or endangered regions, plots for fertilization, etc.). Geostatistics, which can be regarded as a subset of statistics specialized in the analysis and interpretation of geographically referenced data, offers a wide range of tools for these tasks. Numerous spatial modeling methods have been developed in the past decades based on the regionalized variable theory. One of these techniques is sequential stochastic simulation, which can be conditioned with universal kriging (also referred to as regression kriging). As opposed to universal kriging (UK), sequential simulation conditioned with universal kriging (SSUK) provides not just one but several alternative and equally probable "maps", i.e. realizations. The realizations reproduce the global statistics (e.g. sample histogram, variogram), i.e. they reflect/model the reality in a certain global (and not local!) sense. In this paper we present and test SSUK implemented in R code, and its use in a study area affected by water erosion. Furthermore, we compare the results from UK and SSUK. For this purpose, two soil variables were selected: soil organic matter (SOM) content and rooting depth (RD). The SSUK approach is illustrated with a legacy soil dataset from a study area endangered by water erosion in Central Hungary. Legacy soil data were collected at the end of the 1980s in the framework of the National Land Evaluation Programme. Spatially exhaustive covariates were derived from a digital elevation model and from the land-use map of the study area. SSUK was built upon a UK prediction system for both variables, and 200 realizations were generated.

  20. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    SciTech Connect

    Li, Tong; Gu, YuanTong

    2014-04-15

    As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics would underestimate the thermodynamic behaviors of soft matter (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
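
    As a point of reference for what a stochastic thermostat does, the sketch below implements a generic Langevin (BAOAB-ordered) integration step: deterministic kicks and drifts plus an Ornstein-Uhlenbeck velocity update that enforces the target temperature. This is a textbook thermostat, not the paper's coarse-grained algorithm; all parameters are illustrative.

```python
import numpy as np

def langevin_step(x, v, force, dt, gamma, kT, mass, rng):
    """One BAOAB step of Langevin dynamics: half kick (B), half drift (A),
    exact Ornstein-Uhlenbeck thermostat update (O), half drift (A), half
    kick (B).  The O-step damps the velocity by exp(-gamma*dt) and injects
    random kicks so that <v^2> relaxes to kT/mass."""
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt((1 - c1**2) * kT / mass)
    v = v + 0.5 * dt * force(x) / mass                   # B
    x = x + 0.5 * dt * v                                 # A
    v = c1 * v + c2 * rng.standard_normal(np.shape(x))   # O (thermostat)
    x = x + 0.5 * dt * v                                 # A
    v = v + 0.5 * dt * force(x) / mass                   # B
    return x, v
```

    Running an ensemble of harmonic oscillators with this step drives the mean squared velocity to the equipartition value kT/mass, which is the property a coarse-grained thermostat must preserve.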

  1. Large-Scale Studies on the Transferability of General Problem-Solving Skills and the Pedagogic Potential of Physics

    ERIC Educational Resources Information Center

    Mashood, K. K.; Singh, Vijay A.

    2013-01-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…

  2. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations over a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within a parallel supercomputing environment. The Message Passing Interface is used to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes, and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper . PMID:27122320

  3. Numerical solution of nonlinear algebraic equations in stiff ODE solving (1986--89)---Quasi-Newton updating for large scale nonlinear systems (1989--90)

    SciTech Connect

    Walker, H.F.

    1990-01-01

    During the 1986-1989 project period, two major areas of research developed, into which most of the work fell: "matrix-free" methods for solving linear systems, by which we mean iterative methods that require only the action of the coefficient matrix on vectors and not the coefficient matrix itself, and Newton-like methods for underdetermined nonlinear systems. In the 1990 project period of the renewal grant, a third major area of research developed: inexact Newton and Newton iterative methods and their applications to large-scale nonlinear systems, especially those arising in discretized problems. An inexact Newton method is any method in which each step reduces the norm of the local linear model of the function of interest. A Newton iterative method is any implementation of Newton's method in which the linear systems that characterize Newton steps (the "Newton equations") are solved only approximately using an iterative linear solver. Newton iterative methods are properly considered special cases of inexact Newton methods. We describe the work in these areas and in other areas in this paper.
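
    The two definitions above combine naturally into a matrix-free Newton iterative method: each Newton equation J(x)s = -F(x) is solved only approximately in a small Krylov subspace built from finite-difference Jacobian-vector products, so the Jacobian is never formed. The sketch below is a minimal illustration of that structure, with a NumPy least-squares solve over a short Krylov basis standing in for a proper GMRES inner solver.

```python
import numpy as np

def jac_vec(F, x, v, eps=1e-7):
    """Matrix-free Jacobian-vector product J(x) v by forward differences."""
    return (F(x + eps * v) - F(x)) / eps

def inexact_newton(F, x0, tol=1e-10, max_newton=50, m=10):
    """Inexact Newton (Newton iterative) sketch: each Newton equation
    J(x) s = -F(x) is solved only approximately, by least squares over a
    small Krylov basis built from Jacobian-vector products."""
    x = np.array(x0, dtype=float)
    for _ in range(max_newton):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        V = [r / np.linalg.norm(r)]            # Krylov basis {r, Jr, J^2 r, ...}
        for _ in range(m - 1):
            w = jac_vec(F, x, V[-1])
            for v in V:                        # Gram-Schmidt orthogonalization
                w = w - (w @ v) * v
            nw = np.linalg.norm(w)
            if nw < 1e-12:
                break
            V.append(w / nw)
        V = np.array(V).T                      # n x k orthonormal basis
        JV = np.column_stack([jac_vec(F, x, V[:, j]) for j in range(V.shape[1])])
        y, *_ = np.linalg.lstsq(JV, -r, rcond=None)   # approximate Newton step
        x = x + V @ y
    return x
```

    Because the inner solve is only approximate, each step is an inexact Newton step in the sense defined above; the basis size m controls how accurately the Newton equations are solved.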

  4. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  5. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    SciTech Connect

    Heydari, M.H.; Hooshmandasl, M.R.; Cattani, C.; Maalek Ghaini, F.M.

    2015-02-15

    Because of their nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions, together with their stochastic operational matrix of Itô integration, is proposed for solving nonlinear stochastic Itô integral equations on large intervals. In the proposed method, a new technique for computing the nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms the problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated, and the efficiency of the method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of stochastic population growth models and the stochastic pendulum problem.
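
    As a baseline for the kind of problem being solved, the sketch below simulates an Itô SDE of the population-growth type, dX = rX dt + σX dW, by the standard Euler-Maruyama scheme. This is the usual Monte Carlo alternative against which operational-matrix methods are compared, not the hat-function method of the paper; the coefficients are illustrative.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama discretization of the Itô SDE
        dX = drift(X) dt + diffusion(X) dW
    simulated over n_paths independent paths; returns the values X_T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)  # Brownian increments
        x = x + drift(x) * dt + diffusion(x) * dW
    return x
```

    For the geometric (population growth) case with drift rX and diffusion σX, the exact mean is E[X_T] = x0·exp(rT), which the simulated ensemble average reproduces up to Monte Carlo and discretization error.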

  6. Solving the Langevin equation with stochastic algebraically correlated noise

    NASA Astrophysics Data System (ADS)

    Płoszajczak, M.; Srokowski, T.

    1997-05-01

    The long-time tail in the velocity and force autocorrelation functions has recently been found in molecular dynamics simulations of peripheral collisions of ions. Simulating those slowly decaying correlations in stochastic transport theory requires the development of new methods for generating a stochastic force with arbitrarily long correlation times. In this paper we propose a Markovian process, the multidimensional kangaroo process, which permits the description of various algebraically correlated stochastic processes.

  7. A wavelet-based computational method for solving stochastic Itô–Volterra integral equations

    SciTech Connect

    Mohammadi, Fakhrodin

    2015-10-01

    This paper presents a computational method based on the Chebyshev wavelets for solving stochastic Itô–Volterra integral equations. First, a stochastic operational matrix for the Chebyshev wavelets is presented and a general procedure for forming this matrix is given. Then, the Chebyshev wavelets basis along with this stochastic operational matrix are applied for solving stochastic Itô–Volterra integral equations. Convergence and error analysis of the Chebyshev wavelets basis are investigated. To reveal the accuracy and efficiency of the proposed method some numerical examples are included.

  8. Stochastic causality, criticality, and non-locality in brain networks. Comment on "Foundational perspectives on causality in large-scale brain networks" by M. Mannino and S.L. Bressler

    NASA Astrophysics Data System (ADS)

    Kozma, Robert; Hu, Sanqing

    2015-12-01

    For millennia, causality served as a powerful guiding principle to our understanding of natural processes, including the functioning of our body, mind, and brain. The target paper presents an impressive vista of the field of causality in brain networks, starting from philosophical issues, expanding on neuroscience effects, and addressing broad engineering and societal aspects as well. The authors conclude that the concept of stochastic causality is more suited to characterize the experimentally observed complex dynamical processes in large-scale brain networks, rather than the more traditional view of deterministic causality. We strongly support this conclusion and provide two additional examples that may enhance and complement this review: (i) a generalization of the Wiener-Granger Causality (WGC) to fit better the complexity of brain networks; (ii) employment of criticality as a key concept highly relevant to interpreting causality and non-locality in large-scale brain networks.

  9. Stochastic emergence of inflaton fluctuations in a SdS primordial universe with large-scale repulsive gravity from a 5D vacuum

    NASA Astrophysics Data System (ADS)

    Reyes, L. M.; Madriz Aguilar, J. E.; Bellini, M.

    2011-06-01

    We develop a stochastic approach to study scalar-field fluctuations of the inflaton field in an early inflationary universe with a black hole (BH), which is described by an effective 4D Schwarzschild-de Sitter (SdS) metric. This effective 4D metric is the induced metric on a 4D hypersurface, here representing our universe, which is obtained from a 5D Ricci-flat SdS static metric, after implementing a planar coordinate transformation. On this background, we found that at the end of inflation the squared fluctuations of the inflaton field are not exactly scale independent and are sensitive to the mass of the BH.

  10. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  11. Solving Parker's transport equation with stochastic differential equations on GPUs

    NASA Astrophysics Data System (ADS)

    Dunzlaff, P.; Strauss, R. D.; Potgieter, M. S.

    2015-07-01

    The numerical solution of transport equations for energetic charged particles in space is generally very costly in terms of time. Besides the use of multi-core CPUs and computer clusters to decrease computation times, high-performance calculations on graphics processing units (GPUs) have become available in recent years. In this work we introduce and describe a GPU-accelerated implementation of Parker's equation, using Stochastic Differential Equations (SDEs) and the CUDA toolkit, for the simulation of the transport of energetic charged particles. We briefly discuss the set of SDEs arising from Parker's transport equation and their application to boundary value problems such as that of the Jovian magnetosphere. We compare the runtimes of the GPU code with a CPU version of the same algorithm. Compared to the CPU implementation (using OpenMP and eight threads) we find a performance increase of about a factor of 10-60, depending on the assumed set of parameters. Furthermore, we benchmark our simulation against the results of an existing SDE implementation of Parker's transport equation.
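
The data-parallel structure that makes SDE solvers a good fit for GPUs shows up already in a toy 1D analogue, here vectorized with NumPy rather than CUDA; the drift and diffusion values are illustrative, not the heliospheric parameters of the paper:

```python
import numpy as np

# Toy 1D analogue of an SDE transport solver: every pseudo-particle obeys
# dX = V dt + sqrt(2 kappa) dW, and all particles are advanced at once --
# the same embarrassingly parallel structure a CUDA kernel exploits.
rng = np.random.default_rng(0)
n_particles, n_steps, dt = 10000, 200, 0.01
V, kappa = 400.0, 50.0  # hypothetical advection speed and diffusion coefficient

x = np.zeros(n_particles)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    x += V * dt + np.sqrt(2.0 * kappa) * dW

mean_x = x.mean()  # expected displacement is V * n_steps * dt = 800
```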

  12. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC Experiments' testing are discussed.

  13. PySP : modeling and solving stochastic mixed-integer programs in Python.

    SciTech Connect

    Woodruff, David L.; Watson, Jean-Paul

    2010-08-01

    Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its widespread use. One key factor involves the ability of non-specialists to easily express stochastic programming problems as extensions of deterministic models, which are often formulated first. A second key factor relates to the difficulty of solving stochastic programming models, particularly the general mixed-integer, multi-stage case. Intricate, configurable, and parallel decomposition strategies are frequently required to achieve tractable run-times. We simultaneously address both of these factors in our PySP software package, which is part of the COIN-OR Coopr open-source Python project for optimization. To formulate a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree with associated uncertain parameters in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two paths for solution of the corresponding stochastic program. The first alternative involves writing the extensive form and invoking a standard deterministic (mixed-integer) solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for approximating general multi-stage, mixed-integer stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
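
Progressive Hedging's mechanics can be seen on a toy convex problem without any of the PySP machinery: each scenario subproblem is solved with an augmented objective, then the first-stage variable is averaged and the dual weights are updated. This closed-form example is a sketch of the algorithm, not of PySP's API:

```python
def progressive_hedging(a, rho=1.0, iters=50):
    """Toy Progressive Hedging for min_x E[(x - a_s)^2] over equally likely
    scenarios a_s (the optimum is the scenario mean).  Each augmented
    subproblem min (x - a_s)^2 + w_s x + rho/2 (x - xbar)^2 has a closed form."""
    n = len(a)
    w = [0.0] * n                 # scenario dual weights
    xbar = sum(a) / n             # consensus first-stage value
    for _ in range(iters):
        xs = [(2 * a_s - w_s + rho * xbar) / (2 + rho)
              for a_s, w_s in zip(a, w)]
        xbar = sum(xs) / n
        w = [w_s + rho * (x_s - xbar) for w_s, x_s in zip(w, xs)]
    return xbar

x_opt = progressive_hedging([1.0, 2.0, 6.0])  # converges toward the mean, 3.0
```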

  14. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches for calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty.
The usually high computational costs of
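
Bootstrap filtering as used in this kind of calibration reduces, in its simplest form, to importance resampling of parameter samples against the data likelihood. The numbers below are a made-up illustration, with the reduced model replaced by a direct Gaussian likelihood:

```python
import math, random

def bootstrap_filter_step(particles, loglik, rng):
    """One Bayesian updating step by importance resampling: weight each
    parameter sample by its data likelihood, then resample in proportion
    to the weights."""
    ws = [math.exp(loglik(p)) for p in particles]
    total = sum(ws)
    probs = [w / total for w in ws]
    return rng.choices(particles, weights=probs, k=len(particles))

rng = random.Random(1)
# Prior samples of a (hypothetical) log-permeability multiplier
prior = [rng.gauss(0.0, 1.0) for _ in range(5000)]
# Pretend the reduced model + data imply an observation of 0.8 with noise 0.3
posterior = bootstrap_filter_step(
    prior, lambda p: -0.5 * ((p - 0.8) / 0.3) ** 2, rng)
post_mean = sum(posterior) / len(posterior)
```

For this conjugate Gaussian setup the exact posterior mean is about 0.73, so the resampled mean should land close to it.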

  15. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  16. Stochastic pattern transitions in large scale swarms

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira; Lindley, Brandon; Mier-Y-Teran, Luis

    2013-03-01

    We study the effects of time dependent noise and discrete, randomly distributed time delays on the dynamics of a large coupled system of self-propelling particles. Bifurcation analysis on a mean field approximation of the system reveals that the system possesses patterns with certain universal characteristics that depend on distinguished moments of the time delay distribution. We show both theoretically and numerically that although bifurcations of simple patterns, such as translations, change stability only as a function of the first moment of the time delay distribution, more complex bifurcating patterns depend on all of the moments of the delay distribution. In addition, we show that for sufficiently large values of the coupling strength and/or the mean time delay, there is a noise intensity threshold, dependent on the delay distribution width, that forces a transition of the swarm from a misaligned state into an aligned state. We show that this alignment transition exhibits hysteresis when the noise intensity is taken to be time dependent. Research supported by the Office of Naval Research

  17. Numerical solution of nonlinear algebraic equations in stiff ODE solving (1986-1989); Quasi-Newton updating for large scale nonlinear systems (1989-1990). Final report, 1986-1990

    SciTech Connect

    Walker, H.F.

    1990-12-31

    During the 1986-1989 project period, two major areas of research developed into which most of the work fell: "matrix-free" methods for solving linear systems, by which we mean iterative methods that require only the action of the coefficient matrix on vectors and not the coefficient matrix itself, and Newton-like methods for underdetermined nonlinear systems. In the 1990 project period of the renewal grant, a third major area of research developed: inexact Newton and Newton iterative methods and their applications to large-scale nonlinear systems, especially those arising in discretized problems. An inexact Newton method is any method in which each step reduces the norm of the local linear model of the function of interest. A Newton iterative method is any implementation of Newton's method in which the linear systems that characterize Newton steps (the "Newton equations") are solved only approximately using an iterative linear solver. Newton iterative methods are properly considered special cases of inexact Newton methods. We describe the work in these areas and in other areas in this paper.
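
SciPy's `newton_krylov` implements exactly this pattern: the Newton equations are solved inexactly by a Krylov method using matrix-free finite-difference Jacobian-vector products. A minimal solve of a toy discretized boundary value problem, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of the toy nonlinear BVP u'' = exp(u) on (0, 1),
    u(0) = u(1) = 0, discretized by central differences on 50 points."""
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))  # apply boundary conditions
    return (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2 - np.exp(u)

# Newton iterative solve: the inner linear solves need only residual
# evaluations, never an assembled Jacobian.
sol = newton_krylov(residual, np.zeros(50), f_tol=1e-8)
```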

  18. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
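
One standard way to avoid the combinatorial blow-up within a single frame is global nearest-neighbour data association, solved optimally as an assignment problem. A sketch with SciPy's Hungarian-algorithm implementation and made-up coordinates:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Predicted track positions and new detections (illustrative 2-D points)
tracks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
detections = np.array([[9.6, 0.2], [0.3, -0.1], [0.2, 10.3]])

# Cost matrix: Euclidean distance from every track to every detection
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)

# Minimum-total-distance matching (optimal, no hypothesis enumeration)
row, col = linear_sum_assignment(cost)
matches = dict(zip(row.tolist(), col.tolist()))  # track index -> detection index
```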

  19. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  20. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
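
The exterior penalty idea the report builds on can be shown on a one-variable problem: infeasibility is charged quadratically with a growing weight, and the unconstrained minimizer approaches the constrained optimum from the infeasible side. The 1-D golden-section search below is a generic stand-in for BIGDOT's unconstrained minimizer, not its actual algorithm:

```python
def golden_min(f, lo, hi, tol=1e-8):
    """Golden-section search for a unimodal f on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

f = lambda x: x * x          # objective
g = lambda x: 1.0 - x        # constraint g(x) <= 0, i.e. x >= 1
x = 0.0
for r in [1.0, 10.0, 100.0, 1000.0, 10000.0]:
    # exterior penalty: feasible points pay nothing, violations pay r * g^2
    x = golden_min(lambda t: f(t) + r * max(0.0, g(t)) ** 2, -2.0, 3.0)
# x approaches the constrained optimum x* = 1 from below as r grows
```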

  1. GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations II: Dynamics and stochastic simulations

    NASA Astrophysics Data System (ADS)

    Antoine, Xavier; Duboscq, Romain

    2015-08-01

    GPELab is a free Matlab toolbox for modeling and numerically solving large classes of systems of Gross-Pitaevskii equations that arise in the physics of Bose-Einstein condensates. The aim of this second paper, which follows (Antoine and Duboscq, 2014), is to first present the various pseudospectral schemes available in GPELab for computing the deterministic and stochastic nonlinear dynamics of Gross-Pitaevskii equations (Antoine, et al., 2013). Next, the corresponding GPELab functions are explained in detail. Finally, some numerical examples are provided to show how the code works for the complex dynamics of BEC problems.

  2. Digital program for solving the linear stochastic optimal control and estimation problem

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.; Lehtinen, B.

    1975-01-01

    A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
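
The control half of the LSOCE problem is the familiar LQR design, and the Riccati recursion at its core can be iterated to a fixed point in a few lines. This is a generic sketch of that standard calculation, unrelated to the IBM-era program itself:

```python
import numpy as np

def dare_iterate(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point
    iteration and return the optimal state-feedback gain K and cost P."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # discrete double integrator
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = dare_iterate(A, B, Q, R)
eigs = np.linalg.eigvals(A - B @ K)     # closed-loop poles
```

The closed-loop matrix A - BK should have all eigenvalues strictly inside the unit circle.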

  3. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    SciTech Connect

    Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas

    2014-12-10

    One of the most important objectives in the operations of many corporations today is to maximize profit, and one important tool to that end is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.

  4. Using genetic algorithm to solve a new multi-period stochastic optimization model

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short sale constraints and proportional transaction costs are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
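
The CVaR measure used in the model has a simple empirical form on simulated scenario losses: the mean of the worst (1 - alpha) fraction. A self-contained sketch on synthetic Gaussian losses:

```python
import random

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: mean loss in the worst
    (1 - alpha) tail of the scenario distribution."""
    tail = sorted(losses)[int(alpha * len(losses)):]
    return sum(tail) / len(tail)

rng = random.Random(0)
losses = [rng.gauss(0.0, 1.0) for _ in range(100000)]
c = cvar(losses, 0.95)  # for a standard normal, CVaR at 95% is about 2.06
```

In a portfolio model the `losses` list would come from simulated return scenarios rather than a fixed normal distribution.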

  5. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. 
Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  6. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    SciTech Connect

    Heydari, M.H.; Hooshmandasl, M.R.; Maalek Ghaini, F.M.; Cattani, C.

    2014-08-01

    In this paper, a new computational method based on the generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. Also, the rate of convergence of the proposed method is considered and it has been shown that it is O(1/n²). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method by some examples. The obtained results reveal that the proposed method is more accurate and efficient in comparison with the block pulse functions method.

  7. Solving Stochastic Flexible Flow Shop Scheduling Problems with a Decomposition-Based Approach

    NASA Astrophysics Data System (ADS)

    Wang, K.; Choi, S. H.

    2010-06-01

    Real manufacturing is dynamic and subject to many uncertainties. Research on production scheduling under uncertainty has recently received much attention. Although various approaches have been developed for scheduling under uncertainty, this problem is still difficult to tackle by any single approach because of its inherent difficulties. This chapter describes a decomposition-based approach (DBA) for makespan minimisation of a flexible flow shop (FFS) scheduling problem with stochastic processing times. The DBA decomposes an FFS into several machine clusters which can be solved more easily by different approaches. A neighbouring K-means clustering algorithm is developed to first group the machines of an FFS into an appropriate number of machine clusters, based on a weighted cluster validity index. A back propagation network (BPN) is then adopted to assign either the Shortest Processing Time (SPT) Algorithm or the Genetic Algorithm (GA) to generate a sub-schedule for each machine cluster. After machine grouping and approach assignment, an overall schedule is generated by integrating the sub-schedules of the machine clusters. Computation results reveal that the DBA is superior to SPT and GA alone for FFS scheduling under stochastic processing times, and that it can be easily adapted to schedule FFS under other uncertainties.
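
Of the two sub-schedulers mentioned, the SPT rule is simple enough to sketch directly; here it is as list scheduling on identical parallel machines. This is a generic illustration, not the chapter's FFS formulation:

```python
def spt_makespan(jobs, machines):
    """Shortest Processing Time list scheduling: sort jobs by processing
    time, always assign the next job to the least-loaded machine, and
    return the resulting makespan."""
    loads = [0.0] * machines
    for p in sorted(jobs):                      # SPT order
        loads[loads.index(min(loads))] += p     # greedy least-loaded machine
    return max(loads)

mk = spt_makespan([4, 2, 7, 1, 5, 3], machines=2)
```

SPT is a dispatching heuristic, so the makespan it yields is generally not optimal; in the chapter it serves as one of two candidate sub-schedulers per machine cluster.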

  8. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
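
A fractal dimension like the quoted D of about 1.2 is typically estimated by box counting. A minimal single-scale sketch on a synthetic one-dimensional point set, where the estimate should come out near 1:

```python
import math, random

def box_dimension(points, eps):
    """Estimate the fractal dimension at a single scale eps by counting
    occupied boxes: D ~ log N(eps) / log(1/eps)."""
    boxes = {(int(x / eps), int(y / eps)) for x, y in points}
    return math.log(len(boxes)) / math.log(1.0 / eps)

# Points scattered on a line segment: a set of dimension 1
rng = random.Random(0)
line = [(rng.random(), 0.5) for _ in range(20000)]
d = box_dimension(line, eps=0.01)
```

A real analysis would fit log N(eps) against log(1/eps) over a range of scales rather than use a single eps.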

  9. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  10. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  11. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  12. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675
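
The exponential gain of the multiplexor is binary addressing: n flow channels need about 2·log2(n) control lines, since each address bit requires a line and its complement, which is the scaling the authors report:

```python
import math

def control_lines(n_chambers):
    """Control lines a binary fluidic multiplexor needs to address
    n flow channels: 2 * ceil(log2 n), one line per address bit plus
    one for its complement."""
    return 2 * math.ceil(math.log2(n_chambers))

lines = control_lines(1000)  # 1000 addressable chambers -> 20 control lines
```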

  13. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  14. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  15. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large-scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consist of Voyager 1 images of Io: 800x800 arrays of picture elements, each of which can take on 256 possible brightness values. In analyzing these data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics, and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest-resolution frames, an error of a tenth of a pixel in image-space location can lead to a 300 m error in true location. In the lowest-resolution frames, the same error can lead to an uncertainty of several km.

  16. The ESPAT tool: a general-purpose DSS shell for solving stochastic optimization problems in complex river-aquifer systems

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2015-04-01

    Stochastic programming methods are well suited to dealing with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their use in practical implementations is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named the Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded in GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results, so no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPAT_R module, which performs stochastic optimization in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization in small surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic non-linear programming procedure able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed under historical (14th-century) rights, which give priority to the most traditional irrigation district over the 20th-century agricultural developments. Its size makes it possible to use either the SDP or
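    As a flavor of what a standard-SDP module computes, here is a minimal backward recursion for a single reservoir (our own toy illustration, not ESPAT code: the storage grid, inflow distribution and concave benefit function are all invented):

```python
import math

S_MAX = 4                            # discretized storage levels 0..4
INFLOWS = {0: 0.2, 1: 0.5, 2: 0.3}  # inflow value -> probability
T = 12                               # number of stages

def benefit(release):
    return math.sqrt(release)        # concave water-use benefit (illustrative)

value = [0.0] * (S_MAX + 1)          # expected benefit-to-go per storage level
policy = []
for t in range(T):                   # backward recursion over identical stages
    new_value, stage_policy = [], []
    for s in range(S_MAX + 1):
        best, best_r = -1.0, 0
        for r in range(s + 1):       # cannot release more than is stored
            exp_val = benefit(r) + sum(
                p * value[min(s - r + q, S_MAX)]   # inflow above capacity spills
                for q, p in INFLOWS.items())
            if exp_val > best:
                best, best_r = exp_val, r
        new_value.append(best)
        stage_policy.append(best_r)
    value = new_value
    policy.append(stage_policy)

print(value)        # expected benefit-to-go at the first stage, by storage level
print(policy[-1])   # release rule for the first stage
```

    SDDP replaces this exhaustive enumeration with sampled cuts on the value function, which is what makes multi-reservoir systems tractable.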

  17. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  18. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: stochastic programming, hydroelectric scheduling, large-scale systems.
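    The cut mechanics behind such Benders (L-shaped) schemes fit in a few lines. The sketch below is our own minimal illustration on a toy two-stage newsvendor-style problem, not the report's algorithm: the second stage is solved in closed form, and a grid search stands in for the master LP solver.

```python
c, q = 1.0, 3.0                            # first-stage cost, shortage penalty
demands = [(5, 0.3), (10, 0.4), (15, 0.3)] # scenarios (demand, probability)
X_MAX = 20.0

def second_stage(x, d):
    """Recourse cost q*max(d-x,0) and a subgradient in x."""
    return (q * (d - x), -q) if x < d else (0.0, 0.0)

cuts = []                                  # optimality cuts: theta >= a + b*x
for it in range(50):
    # master: min c*x + theta s.t. cuts, solved by grid search (theta >= 0 is
    # a valid start since recourse costs are nonnegative)
    def master_obj(x):
        return c * x + max((a + b * x for a, b in cuts), default=0.0)
    xk = min((i * X_MAX / 2000 for i in range(2001)), key=master_obj)
    lower = master_obj(xk)
    # evaluate expected recourse cost and an aggregated optimality cut at xk
    Q = sum(p * second_stage(xk, d)[0] for d, p in demands)
    g = sum(p * second_stage(xk, d)[1] for d, p in demands)
    upper = c * xk + Q
    if upper - lower < 1e-6:               # cuts support E[Q] at xk: optimal
        break
    cuts.append((Q - g * xk, g))           # theta >= Q + g*(x - xk)

print(round(xk, 2), round(upper, 3))       # newsvendor optimum: x*=10, cost 14.5
```

    Each pass adds one cut built from scenario subproblems; warm starts and multicut variants like those in the report accelerate exactly this loop.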

  19. Solving the problem of imaging resolution: stochastic multi-scale image fusion

    NASA Astrophysics Data System (ADS)

    Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril

    2016-04-01

    Structural features of porous materials define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, gas exchange between the biologically active soil root zone and the atmosphere, etc.) and solute transport. X-ray microtomography is extremely useful for characterizing soil and rock microstructure. However, as any other imaging technique, this one also has a significant drawback - a trade-off between sample size and resolution. The latter is a significant problem for multi-scale complex structures, such as soils and carbonates. Other imaging techniques, for example SEM/FIB-SEM or X-ray macrotomography, can be helpful in obtaining higher resolution or a wider field of view. The ultimate goal is to create a single dataset containing information from all scales, or to characterize such multi-scale structure. In this contribution we demonstrate a general solution for merging multi-scale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images representing macro-, micro- and nanoscale spatial information on porous media structure. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic to accommodate other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. It can further be used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Potential practical applications of this method are abundant in soil science, hydrology and petroleum engineering, as well as other geosciences.
This work was partially supported by RSF grant 14-17-00658 (X-ray microtomography study of shale
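    The rescaled-correlation-function machinery starts from descriptors as simple as the two-point probability function S2(r). A minimal sketch (our own, on a synthetic uncorrelated 1-D "image"; real applications use 2-D/3-D images and directional averaging):

```python
import random

def s2(img, r):
    """Two-point probability: chance that two pore-phase (1) voxels
    sit a distance r apart, with periodic boundaries."""
    n = len(img)
    return sum(img[i] * img[(i + r) % n] for i in range(n)) / n

random.seed(3)
img = [1 if random.random() < 0.4 else 0 for _ in range(10_000)]  # porosity ~0.4

# S2(0) equals the porosity; for an uncorrelated medium S2(r) -> porosity**2
print(s2(img, 0), s2(img, 50))
```

    Fusing scales then amounts to rescaling such functions, measured on images of different resolution, so that they agree over overlapping distance ranges before reconstruction.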

  20. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  1. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  2. A self-adaptive memeplexes robust search scheme for solving stochastic demands vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Chen, Xianshun; Feng, Liang; Ong, Yew Soon

    2012-07-01

    In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to the stochastic behaviour of customer demands and have a low probability of route failure in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computational cost of fitness evaluation in the VRPSD while directing the search towards robust and reliable solutions. Second, a self-adaptive individual learning scheme based on the conceptual modelling of a memeplex is introduced in the SAMRS. Finally, the SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representations to effectively manage the search for solutions in the VRPSD. Extensive experimental results on benchmark problems demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions for the VRPSD.

  3. On the decentralized control of large-scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chong, C.

    1973-01-01

    The decentralized control of stochastic large scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower level problems are all uncoupled. For on-line coordination, a distinction is made between open loop feedback optimal coordination and closed loop optimal coordination.

  4. Reinforcement Learning in Large Scale Systems Using State Generalization and Multi-Agent Techniques

    NASA Astrophysics Data System (ADS)

    Kimura, Hajime; Aoki, Kei; Kobayashi, Shigenobu

    This paper introduces several problems in applying reinforcement learning to industrial applications, and shows some techniques to overcome them. Reinforcement learning is known as on-line learning of an input-output mapping through a process of trial-and-error interactions with an uncertain environment; however, the trial and error can cause fatal damage in real applications. We introduce a planning method based on reinforcement learning in a simulator, which can be seen as a stochastic approximation of dynamic programming in Markov decision processes. But in large problems, simple grid tiling to quantize the state space for tabular Q-learning is still infeasible. We introduce a generalization technique to approximate value functions in continuous state space, and a multiagent architecture to solve large scale problems. The efficiency of these techniques is shown through experiments on a sewage water-flow control system.
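    The combination of Q-learning with coarse state generalization can be sketched in a few lines (a generic toy, not the paper's water-flow controller: a 10-state chain with a goal reward, aggregated to 5 features, learned off-policy from random exploration):

```python
import random

N, N_TILES = 10, 5            # chain states 0..9 aggregated into 5 coarse tiles
ALPHA, GAMMA = 0.1, 0.8
w = [[0.0] * N_TILES for _ in range(2)]   # weights per action: 0=left, 1=right

def q(s, a):
    return w[a][s * N_TILES // N]         # value of the tile containing s

random.seed(0)
s = 0
for _ in range(5000):
    a = random.randrange(2)               # random behaviour policy (off-policy)
    s2 = min(max(s + (1 if a else -1), 0), N - 1)
    done = (s2 == N - 1)                  # reaching state 9 pays reward 1
    target = 1.0 if done else GAMMA * max(q(s2, 0), q(s2, 1))
    w[a][s * N_TILES // N] += ALPHA * (target - q(s, a))
    s = random.randrange(N - 1) if done else s2   # restart after the goal

print(q(8, 1) > q(8, 0))      # the learned Q prefers moving right near the goal
```

    The update is identical whether the state space has 10 states or 10^6; only the feature map changes, which is the scaling argument made above.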

  5. Turbulent large-scale structure effects on wake meandering

    NASA Astrophysics Data System (ADS)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulation of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability, since one source of wake meandering is these large turbulent structures, larger than the turbine diameter. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This makes it possible to simulate large-scale turbulent structures, larger than the computational domain, leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the need to resolve the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While the temporal resolution of experimental measurements is high, the spatial resolution is often too low; the LES numerical results provide a more complete spatial description of the flow. 
They tend to demonstrate that inflow low-frequency content - or large-scale turbulent structures - is

  6. Stochastic Set-Based Particle Swarm Optimization Based on Local Exploration for Solving the Carpool Service Problem.

    PubMed

    Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia

    2016-08-01

    The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. This is a discrete optimization problem that involves system-wide conditions as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three notions to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized through local exploration. In simulations and experiments, two kinds of discrete PSO - S-PSO and binary PSO (BPSO) - and a genetic algorithm (GA) are compared and examined on benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results meeting the optimization objectives of the CSP. PMID:26890944

  7. Solving the chemical master equation by a fast adaptive finite state projection based on the stochastic simulation algorithm.

    PubMed

    Sidje, R B; Vo, H D

    2015-11-01

    The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included. PMID:26319118
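    The SSA that drives the reduction is itself short. A minimal Gillespie implementation (our illustration, not the authors' code) for a birth-death process, whose stationary law is Poisson with mean k1/k2, so the result is easy to check:

```python
import random

k1, k2 = 10.0, 1.0            # birth rate (0 -> X) and per-molecule death rate

def ssa(x0, t_end, rng):
    """One Gillespie trajectory; returns the copy number at t_end."""
    t, x = 0.0, x0
    while True:
        birth, death = k1, k2 * x
        total = birth + death
        t += rng.expovariate(total)        # exponential time to next reaction
        if t > t_end:
            return x
        x += 1 if rng.random() * total < birth else -1

rng = random.Random(42)
samples = [ssa(0, 50.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
print(mean)                   # close to the stationary mean k1/k2 = 10
```

    An SSA-driven FSP uses such trajectories to discover which states carry probability mass, keeping the projected state space small.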

  8. Quantum Noise in Large-Scale Coherent Nonlinear Photonic Circuits

    NASA Astrophysics Data System (ADS)

    Santori, Charles; Pelc, Jason S.; Beausoleil, Raymond G.; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-06-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasiprobability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total and functions as a four-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important property for scalability.
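    The workhorse in such semiclassical simulations is numerical integration of stochastic differential equations. As a stand-in for the Kerr-cavity equations (which we do not reproduce here), an Euler-Maruyama sketch for an Ornstein-Uhlenbeck process, whose stationary variance sigma^2/(2*gamma) provides a check:

```python
import math, random

gamma, sigma = 1.0, 0.5       # damping rate and noise strength
dt, steps = 0.01, 2000        # integrate to t = 20, well past relaxation

rng = random.Random(7)
def trajectory():
    x = 0.0
    for _ in range(steps):
        # Euler-Maruyama step for dX = -gamma*X dt + sigma dW
        x += -gamma * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    return x

samples = [trajectory() for _ in range(500)]
var = sum(x * x for x in samples) / len(samples)
print(var)                    # close to sigma**2 / (2 * gamma) = 0.125
```

    In the circuit setting, initial conditions are drawn from the Wigner quasiprobability distribution and one such equation is integrated per resonator field variable.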

  9. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  10. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  11. Scaling and Criticality in Large-Scale Neuronal Activity

    NASA Astrophysics Data System (ADS)

    Linkenkaer-Hansen, K.

    The human brain during wakeful rest spontaneously generates large-scale neuronal network oscillations at around 10 and 20 Hz that can be measured non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). In this chapter, spontaneous oscillations are viewed as the outcome of a self-organizing stochastic process. The aim is to introduce the general prerequisites for stochastic systems to evolve to the critical state and to explain their neurophysiological equivalents. I review the recent evidence that the theory of self-organized criticality (SOC) may provide a unifying explanation for the large variability in amplitude, duration, and recurrence of spontaneous network oscillations, as well as the high susceptibility to perturbations and the long-range power-law temporal correlations in their amplitude envelope.

  12. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow, and we demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first

  13. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large-scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with the magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.

  14. Relic vector field and CMB large scale anomalies

    SciTech Connect

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum required to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed comparison with data.

  15. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism for the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge during the inflationary era.

  16. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    SciTech Connect

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, using importance path sampling for the upper bound estimation. Initial numerical results are promising.
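    The variance-reduction idea is easy to demonstrate in isolation. A minimal importance-sampling sketch (ours, not the report's sampler): estimating a rare expected cost E[(X-4)^+] for X ~ N(0,1) by sampling from a shifted proposal and reweighting by the likelihood ratio:

```python
import math, random

def f(x):
    return max(x - 4.0, 0.0)           # cost concentrated in the rare tail x > 4

def likelihood_ratio(x, shift):
    # density ratio N(0,1)/N(shift,1) evaluated at x
    return math.exp(-shift * x + 0.5 * shift * shift)

rng = random.Random(0)
n, shift = 100_000, 4.0

naive = sum(f(rng.gauss(0.0, 1.0)) for _ in range(n)) / n

total = 0.0
for _ in range(n):
    x = rng.gauss(shift, 1.0)          # sample where the cost lives...
    total += f(x) * likelihood_ratio(x, shift)   # ...and reweight
is_est = total / n

print(naive, is_est)   # exact value is about 7.15e-06; IS is far more stable
```

    The naive estimator sees the tail almost never, while the shifted proposal makes every sample informative; the same reweighting underlies the cut and bound estimates above.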

  17. THE LOSS OF ACCURACY OF STOCHASTIC COLLOCATION METHOD IN SOLVING NONLINEAR DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S

    2013-01-01

    In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equations with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indication of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to a noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data, and in such cases the stochastic collocation method cannot capture the correct solution.
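    For smooth dependence on the random input, collocation is extremely effective, which is the baseline against which these negative results stand. A minimal sketch (our own example, not from the paper): u'(t) = -k*u with u(0) = 1 and k ~ Uniform(0,1), where E[u(T)] = (1 - e^{-T})/T is available in closed form:

```python
import math

T = 1.0   # u'(t) = -k*u, u(0) = 1, uncertain decay rate k ~ Uniform(0, 1)

# 3-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]
gl_nodes = (-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5))
gl_weights = (5 / 9, 8 / 9, 5 / 9)
nodes = [0.5 * (1.0 + x) for x in gl_nodes]
weights = [0.5 * w for w in gl_weights]

# solve the deterministic problem at each node (here analytically: u = e^{-kT})
# and combine the solutions with the quadrature weights
colloc = sum(w * math.exp(-k * T) for k, w in zip(nodes, weights))
exact = (1.0 - math.exp(-T)) / T

print(abs(colloc - exact))   # the solution is smooth in k, so the error is tiny
```

    The paper's point is the converse: when the solution is irregular in the random input, as for the noisy Lorenz system, this polynomial-quadrature accuracy collapses.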

  18. Decomposition and coordination of large-scale operations optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Ruoyu

    Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to take on special structure. Decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of a structural complexity analysis for investigating the scaling behavior and computational efficiency of decomposition strategies, novel coordination techniques and algorithms that improve the convergence behavior of decomposition and coordination methods, and the development of a decentralized optimization framework that embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines for practical applications of the decomposition and coordination methods. In this thesis, several case studies demonstrate the viability of the proposed decentralized optimization techniques for real industrial applications: a pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.

  19. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was pursued in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their

  20. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques based on models other than Newton's: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
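    A minimal sketch of the Broyden idea discussed above, on an invented 2-by-2 system (the equations, starting point, and tolerances are illustrative, not from the report): the Jacobian is estimated once by finite differences and thereafter maintained only through rank-one secant updates, which is exactly what makes the method attractive when Jacobians are unavailable or expensive.

```python
def F(v):
    x, y = v
    return [x + y - 3.0, x * x - y - 1.0]   # toy nonlinear system

def fd_jacobian(v, h=1e-6):
    """One finite-difference Jacobian, evaluated only at the starting point."""
    f0 = F(v)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        vp = list(v); vp[j] += h
        fp = F(vp)
        for i in range(2):
            J[i][j] = (fp[i] - f0[i]) / h
    return J

def solve2(B, r):
    # Solve the 2x2 system B*d = r by Cramer's rule
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(r[0] * B[1][1] - r[1] * B[0][1]) / det,
            (B[0][0] * r[1] - B[1][0] * r[0]) / det]

def broyden(v, tol=1e-10, maxit=50):
    B = fd_jacobian(v)          # initial Jacobian estimate; never recomputed
    f = F(v)
    for _ in range(maxit):
        d = solve2(B, [-f[0], -f[1]])        # quasi-Newton step B*d = -f
        v = [v[0] + d[0], v[1] + d[1]]
        f_new = F(v)
        if max(abs(c) for c in f_new) < tol:
            return v
        # Broyden rank-one secant update: B += (df - B*d) d^T / (d^T d)
        df = [f_new[0] - f[0], f_new[1] - f[1]]
        Bd = [B[0][0] * d[0] + B[0][1] * d[1],
              B[1][0] * d[0] + B[1][1] * d[1]]
        dd = d[0] * d[0] + d[1] * d[1]
        for i in range(2):
            for j in range(2):
                B[i][j] += (df[i] - Bd[i]) * d[j] / dd
        f = f_new
    return v

root = broyden([1.0, 1.0])
print(root)   # approaches x ~ 1.5616, y ~ 1.4384
```

    After the first finite-difference evaluation, every iteration costs one residual evaluation and one (small) linear solve; a limited-memory variant replaces the dense update with a short history of rank-one pairs, as in the report.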

  1. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function xi(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h^-1 Mpc)^3, provides a statistical test for the existence of large-scale inhomogeneities. An application to several recent three-dimensional data sets shows that, despite large observational uncertainties over the relevant scales, characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  2. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to this class are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation on decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  3. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  4. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  5. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared through a decade marked by a rapid expansion of funds and manpower in its first half and an almost equally rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  6. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  7. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  8. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-04-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  9. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  10. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain. PMID:26429630

  11. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  12. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed-form solutions are difficult or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
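    The attributes-plus-relations modeling pattern described above can be sketched in a few lines. The attribute names, distributions, and the Erdős-Rényi link model below are illustrative stand-ins, not the actual Hats Simulator design.

```python
import random

random.seed(7)   # reproducible population

def make_agent(i):
    """Hypothetical agent: attributes drawn independently from simple distributions."""
    return {
        "id": i,
        "height_cm": random.gauss(170, 10),
        "morale": random.uniform(0, 1),
        "covert": random.random() < 0.05,   # small fraction of covert actors
    }

def make_population(n, p_link=0.01):
    """Agents plus an Erdos-Renyi relationship graph G(n, p)."""
    agents = [make_agent(i) for i in range(n)]
    links = [(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < p_link]
    return agents, links

agents, links = make_population(500)
print(len(agents), len(links))   # ~0.01 * C(500, 2) ~ 1250 expected links
```

    Real social simulations replace the independent attribute draws and the uniform link probability with correlated distributions and structured network generators, which is precisely where the graph-theoretic tools mentioned above come in.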

  13. Novel algorithm of large-scale simultaneous linear equations.

    PubMed

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-02-24

    We review our recently developed methods of solving large-scale simultaneous linear equations and their applications to electronic structure calculations, both in one-electron theory and many-electron theory. The method is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace; the most important issues for applications are the shift equation and the seed-switching method, which greatly reduce the computational cost. Applications to nano-scale Si crystals and the double-orbital extended Hubbard model are presented. PMID:21386384
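    The sketch below is not the shifted COCG recurrences themselves, but a toy demonstration of the shift-invariance property such methods exploit: K_m(A, b) = K_m(A + s*I, b), so one Krylov basis built for A serves every shifted system. The matrix and shifts are invented for illustration, and the full-dimensional basis makes the projected solves exact here.

```python
import math

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def gauss_solve(M, r):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(r)
    M = [row[:] + [r[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for j in range(c, n + 1):
                M[i][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def arnoldi(A, b, m):
    """Orthonormal basis V of K_m(A, b) and the projected Hessenberg H = V^T A V."""
    n = len(b)
    beta = math.sqrt(sum(c * c for c in b))
    V = [[c / beta for c in b]]
    H = [[0.0] * m for _ in range(m)]
    for k in range(m):
        w = matvec(A, V[k])
        for i in range(k + 1):
            H[i][k] = sum(w[j] * V[i][j] for j in range(n))
            w = [w[j] - H[i][k] * V[i][j] for j in range(n)]
        h = math.sqrt(sum(c * c for c in w))
        if k + 1 < m:
            H[k + 1][k] = h
            V.append([c / h for c in w])
    return V, H, beta

# One basis, many shifts: build V and H once, then solve each shifted
# system in the small projected space (H + s*I) y = beta * e1, x = V y.
n = 6
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
b = [1.0] + [0.0] * (n - 1)
m = n
V, H, beta = arnoldi(A, b, m)
residuals = []
for s in (0.0, 0.5, 2.0):
    Hs = [[H[i][j] + (s if i == j else 0.0) for j in range(m)] for i in range(m)]
    y = gauss_solve(Hs, [beta] + [0.0] * (m - 1))
    x = [sum(y[k] * V[k][j] for k in range(m)) for j in range(n)]
    Ax = matvec(A, x)
    residuals.append(max(abs(b[j] - Ax[j] - s * x[j]) for j in range(n)))
print(residuals)   # every shifted system is solved from the single basis
```

    The practical payoff in shifted Krylov solvers is that the expensive matrix-vector products are paid once for all shifts; the shift equation and seed switching in the paper are refinements of this same invariance.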

  14. New methods for large scale local and global optimization

    NASA Astrophysics Data System (ADS)

    Byrd, Richard; Schnabel, Robert

    1994-07-01

    We have pursued all three topics described in the proposal during this research period. A large amount of effort has gone into the development of large scale global optimization methods for molecular configuration problems. We have developed new general purpose methods that combine efficient stochastic global optimization techniques with several new, more deterministic techniques that account for most of the computational effort, and the success, of the methods. We have applied our methods to Lennard-Jones problems with up to 75 atoms, to water clusters with up to 31 molecules, and to polymers with up to 58 amino acids. The results appear to be the best so far by general purpose optimization methods, and appear to be leading to some interesting chemistry issues. Our research on the second topic, tensor methods, has addressed several areas. We have designed and implemented tensor methods for large sparse systems of nonlinear equations and nonlinear least squares, and have obtained excellent test results on a wide range of problems. We have also developed new tensor methods for nonlinearly constrained optimization problems, and have obtained promising theoretical and preliminary computational results. Finally, on the third topic, limited memory methods for large scale optimization, we have developed and implemented new, extremely efficient limited memory methods for bound constrained problems, and new limited memory trust region methods, both using our recently developed compact representations for quasi-Newton matrices. Computational test results for both methods are promising.
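    The compact-representation limited-memory idea mentioned above is commonly realized by the L-BFGS two-loop recursion. The sketch below is a generic textbook version, not the authors' implementation: it minimizes a small, ill-conditioned quadratic (invented for the example) while keeping only a handful of curvature pairs.

```python
def f(x):
    return sum(c * v * v for c, v in zip(CURV, x))

def grad(x):
    return [2 * c * v for c, v in zip(CURV, x)]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def lbfgs(x, iters=100, mem=5, tol=1e-8):
    hist = []                       # (s, y, rho) pairs, newest last
    g = grad(x)
    for _ in range(iters):
        if max(abs(c) for c in g) < tol:
            break
        # Two-loop recursion: apply the implicit inverse Hessian to g
        q = list(g)
        alphas = []
        for s, y, rho in reversed(hist):
            a = rho * dot(s, q)
            alphas.append(a)
            q = [qi - a * yi for qi, yi in zip(q, y)]
        if hist:
            s, y, _ = hist[-1]
            gamma = dot(s, y) / dot(y, y)   # standard initial scaling
            q = [gamma * qi for qi in q]
        for (s, y, rho), a in zip(hist, reversed(alphas)):
            b = rho * dot(y, q)
            q = [qi + (a - b) * si for qi, si in zip(q, s)]
        d = [-qi for qi in q]
        # Crude backtracking line search: halve until f decreases
        t, fx = 1.0, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) >= fx and t > 1e-12:
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        if dot(s, y) > 1e-12:       # curvature condition keeps the model positive
            hist.append((s, y, 1.0 / dot(s, y)))
            hist = hist[-mem:]      # limited memory: keep only `mem` pairs
        x, g = x_new, g_new
    return x

CURV = [1.0, 10.0, 100.0]           # ill-conditioned diagonal quadratic
x_min = lbfgs([1.0, 1.0, 1.0])
print(x_min)                        # approaches the minimizer at the origin
```

    Storage is O(mem * n) rather than O(n^2), which is the whole point of the compact quasi-Newton representations cited in the abstract.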

  15. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  16. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  17. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using selected hosts with the necessary posttranslational modifications is one of the key successes in modern biotechnology. This methodology allows the industrial production of proteins that otherwise are produced in small quantities. However, the separation and purification of these proteins from the fermentation media constitutes a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques to achieve high purity and high recovery, while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as simultaneous separation and concentration of the target protein are achieved, with posterior removal and recycling of the polymer. The ease of scale-up, combined with the high partition coefficients obtained, allows potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and the methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to account for mass transfer coefficients, hydrodynamic parameters of hold-up, drop size, and velocity, back mixing in the phases, and flooding performance, required for column design, is also provided. PMID:11876297
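    A single equilibrium stage of such an extraction obeys a simple mass balance. The sketch below is a generic single-stage formula, not specific to the systems in this chapter: it computes the top-phase yield from the partition coefficient K and the phase volume ratio R = V_top / V_bottom.

```python
def top_phase_yield(K, volume_ratio):
    """Single-stage top-phase yield: Y = K*R / (1 + K*R),
    from the mass balance C_t*V_t + C_b*V_b = M with K = C_t / C_b."""
    return K * volume_ratio / (1.0 + K * volume_ratio)

print(top_phase_yield(10.0, 1.0))   # K = 10, equal phase volumes
```

    High partition coefficients push this yield toward 1, which is why the text emphasizes them for large-scale downstream processing; staged column contactors repeat this balance over multiple equilibrium stages.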

  18. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  19. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  20. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  1. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  2. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems like the solution of Partial Differential Equations.

  3. Nonthermal Components in the Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco

    2004-12-01

    I address the issue of nonthermal processes in the large scale structure of the universe. After reviewing the properties of cosmic shocks and their role as particle accelerators, I discuss the main observational results, from radio to γ-ray, and describe the processes that are thought to be responsible for the observed nonthermal emissions. Finally, I emphasize the important role of γ-ray astronomy for progress in the field. Non-detections at these photon energies have already allowed us to draw important conclusions. Future observations will tell us more about the physics of the intracluster medium, shock dissipation and CR acceleration.

  4. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. When their dimensions are chosen appropriately, these novel structures can have band gaps in the frequency spectrum of seismic waves, suggesting that they could be serious candidates for seismic isolation. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method is used in our calculations to obtain the band structures of the proposed metamaterials.

  5. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  6. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  7. Large-scale Globally Propagating Coronal Waves

    NASA Astrophysics Data System (ADS)

    Warmuth, Alexander

    2015-09-01

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  8. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  9. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with ray-tracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  10. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3, proton fractions 0.05

  11. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-alpha forest.

  12. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803

  13. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376

  14. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  15. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  16. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  17. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
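    Sensitivity capabilities of this kind are typically validated exactly as the record describes: analytic design derivatives are compared against an overall finite-difference estimate. A minimal sketch of such a central-difference check (the function name and step size are illustrative, not taken from MSC/NASTRAN):

    ```python
    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        """Central-difference estimate of df/dx_i, the kind of 'overall finite
        difference' reference used to validate analytic design sensitivities."""
        x = np.asarray(x, dtype=float)
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)  # O(h^2) accurate
        return g

    # Example: a toy response function of two design variables; gradient ~ [4, 3].
    g = fd_gradient(lambda x: x[0] ** 2 + 3.0 * x[1], [2.0, 5.0])
    ```

    For real structures each perturbed evaluation requires a full finite element solve, which is why the analytic sensitivity capability is worth implementing in the first place.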

  18. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  19. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg^2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg^2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  20. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  1. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  2. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  3. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. In some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  4. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  5. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  7. A competitive swarm optimizer for large scale optimization.

    PubMed

    Cheng, Ran; Jin, Yaochu

    2015-02-01

    In this paper, a novel competitive swarm optimizer (CSO) for large scale optimization is proposed. The algorithm is fundamentally inspired by the particle swarm optimization but is conceptually very different. In the proposed CSO, neither the personal best position of each particle nor the global best position (or neighborhood best positions) is involved in updating the particles. Instead, a pairwise competition mechanism is introduced, where the particle that loses the competition will update its position by learning from the winner. To understand the search behavior of the proposed CSO, a theoretical proof of convergence is provided, together with empirical analysis of its exploration and exploitation abilities showing that the proposed CSO achieves a good balance between exploration and exploitation. Despite its algorithmic simplicity, our empirical results demonstrate that the proposed CSO exhibits a better overall performance than five state-of-the-art metaheuristic algorithms on a set of widely used large scale optimization problems and is able to effectively solve problems of dimensionality up to 5000. PMID:24860047
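    The pairwise-competition update described above can be sketched as follows. This is a minimal reading of the abstract, not the authors' reference implementation; the parameter names (`phi`, the swarm size, the social term pulling losers toward the swarm mean) are illustrative.

    ```python
    import numpy as np

    def sphere(x):
        """Toy objective to minimize: sum of squares."""
        return np.sum(x * x, axis=-1)

    def cso_minimize(f, dim=10, swarm=40, iters=200, phi=0.1, seed=0):
        """Competitive-swarm-style minimizer (sketch).

        Each iteration, particles are paired at random; in each pair the loser
        (worse fitness) updates its velocity by learning from the winner and,
        weakly, from the swarm mean. Winners pass through unchanged, so no
        personal-best or global-best memory is needed.
        """
        rng = np.random.default_rng(seed)
        X = rng.uniform(-5.0, 5.0, (swarm, dim))
        V = np.zeros((swarm, dim))
        for _ in range(iters):
            xbar = X.mean(axis=0)            # swarm mean position
            idx = rng.permutation(swarm)     # random pairing
            for a, b in zip(idx[::2], idx[1::2]):
                w, l = (a, b) if f(X[a]) <= f(X[b]) else (b, a)
                r1, r2, r3 = (rng.random(dim) for _ in range(3))
                V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (xbar - X[l])
                X[l] = X[l] + V[l]
        return X[np.argmin(f(X))]

    best = cso_minimize(sphere)
    ```

    Because only half the swarm is updated per iteration and no archive of best positions is kept, memory and bookkeeping stay modest as the dimensionality grows, which is the property the abstract exploits for large scale problems.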

  8. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  9. Efficient multiobjective optimization scheme for large scale structures

    NASA Astrophysics Data System (ADS)

    Grandhi, Ramana V.; Bharatram, Geetha; Venkayya, V. B.

    1992-09-01

    This paper presents a multiobjective optimization algorithm for an efficient design of large scale structures. The algorithm is based on generalized compound scaling techniques to reach the intersection of multiple functions. Multiple objective functions are treated similar to behavior constraints. Thus, any number of objectives can be handled in the formulation. Pseudo targets on objectives are generated at each iteration in computing the scale factors. The algorithm develops a partial Pareto set. This method is computationally efficient due to the fact that it does not solve many single objective optimization problems in reaching the Pareto set. The computational efficiency is compared with other multiobjective optimization methods, such as the weighting method and the global criterion method. Trusses, plate, and wing structure design cases with stress and frequency considerations are presented to demonstrate the effectiveness of the method.

  10. Radially dependent large-scale dynamos in global cylindrical shear flows and the local cartesian limit

    NASA Astrophysics Data System (ADS)

    Ebrahimi, F.; Blackman, E. G.

    2016-06-01

    For cylindrical differentially rotating plasmas, we study large-scale magnetic field generation from finite amplitude non-axisymmetric perturbations by comparing numerical simulations with quasi-linear analytic theory. When initiated with a vertical magnetic field of either zero or finite net flux, our global cylindrical simulations exhibit the magnetorotational instability (MRI) and large-scale dynamo growth of radially alternating mean fields, averaged over height and azimuth. This dynamo growth is explained by our analytic calculations of a non-axisymmetric fluctuation-induced electromotive force that is sustained by azimuthal shear of the fluctuating fields. The standard `Ω effect' (shear of the mean field by differential rotation) is unimportant. For the MRI case, we express the large-scale dynamo field as a function of differential rotation. The resulting radially alternating large-scale fields may have implications for angular momentum transport in discs and corona. To connect with previous work on large-scale dynamos with local linear shear and identify the minimum conditions needed for large-scale field growth, we also solve our equations in local Cartesian coordinates. We find that large-scale dynamo growth in a linear shear flow without rotation can be sustained by shear plus non-axisymmetric fluctuations - even if not helical, a seemingly previously unidentified distinction. The linear shear flow dynamo emerges as a more restricted version of our more general new global cylindrical calculations.

  11. Gravity and large-scale nonlocal bias

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Scoccimarro, Román; Sheth, Ravi K.

    2012-04-01

    For Gaussian primordial fluctuations the relationship between galaxy and matter overdensities, bias, is most often assumed to be local at the time of observation in the large-scale limit. This hypothesis is, however, unstable under time evolution; we provide proofs under several (increasingly more realistic) sets of assumptions. In the simplest toy model galaxies are created locally and linearly biased at a single formation time, and subsequently move with the dark matter (no velocity bias) conserving their comoving number density (no merging). We show that, after this formation time, the bias becomes unavoidably nonlocal and nonlinear at large scales. We identify the nonlocal gravitationally induced fields in which the galaxy overdensity can be expanded, showing that they can be constructed out of the invariants of the deformation tensor (Galileons), the main signature of which is a quadrupole field in second-order perturbation theory. In addition, we show that this result persists if we include an arbitrary evolution of the comoving number density of tracers. We then include velocity bias, and show that new contributions appear; these are related to the breaking of Galilean invariance of the bias relation, a dipole field being the signature at second order. We test these predictions by studying the dependence of halo overdensities in cells of fixed dark matter density: measurements in simulations show that departures from the mean bias relation are strongly correlated with the nonlocal gravitationally induced fields identified by our formalism, suggesting that the halo distribution at the present time is indeed more closely related to the mass distribution at an earlier rather than present time. However, the nonlocality seen in the simulations is not fully captured by assuming local bias in Lagrangian space. The effects on nonlocal bias seen in the simulations are most important for the most biased halos, as expected from our predictions. Accounting for these
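    In the second-order perturbation-theory language used above, the quadrupole signature corresponds to a Galileon-type term in the bias expansion. Schematically (a common convention, sketched here; sign and normalization choices vary between papers, and Φ_v denotes the velocity potential):

    ```latex
    \delta_g = b_1\,\delta + \frac{b_2}{2}\,\delta^2
             + \gamma_2\,\mathcal{G}_2(\Phi_v) + \cdots,
    \qquad
    \mathcal{G}_2(\Phi_v) \equiv \left(\nabla_{ij}\Phi_v\right)^2
                                 - \left(\nabla^2\Phi_v\right)^2 .
    ```

    The operator G_2 vanishes for spherically symmetric collapse and picks out tidal (quadrupolar) configurations, which is why a purely local expansion in δ alone cannot reproduce it.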

  12. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150° C, coming quite close to typical operating conditions up to 125° C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
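    The abstract's extrapolation from stress temperature toward operating conditions follows the standard Arrhenius temperature dependence of electromigration lifetime (the temperature factor of Black's equation). A small sketch using the reported Ea = 0.83 eV; the function name and the specific temperature pair are just for illustration, and current-density acceleration is ignored:

    ```python
    import math

    K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

    def lifetime_ratio(ea_ev, t_hot_c, t_cold_c):
        """Arrhenius acceleration factor MTTF(T_cold) / MTTF(T_hot)
        for a diffusion-limited failure mode (temperatures in Celsius)."""
        t_hot_k = t_hot_c + 273.15
        t_cold_k = t_cold_c + 273.15
        return math.exp(ea_ev / K_B_EV * (1.0 / t_cold_k - 1.0 / t_hot_k))

    # Extrapolating a 150 C stress test down to 125 C operation with
    # the abstract's Ea = 0.83 eV gives an acceleration factor of ~4x.
    af = lifetime_ratio(0.83, 150.0, 125.0)
    ```

    A lower activation energy shrinks this acceleration factor, which is why the reported 0.83 eV (versus the usual 0.90 eV) makes the early failure mode more of a concern at operating conditions.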

  13. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.

  14. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  15. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  16. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  17. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  18. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.
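The headline numbers rest on a simple calorie balance: total caloric output of the acquired land divided by per-capita dietary need. A back-of-envelope version, with all input values assumed for illustration rather than taken from the paper's land-deal dataset:

```python
# Illustrative "people fed" estimate (assumed numbers, not the paper's data):
# people fed = total kcal produced per year / kcal needed per person-year.
acquired_ha = 40e6                 # assumed total acquired area, hectares
yield_kcal_per_ha = 10e6           # assumed caloric yield, kcal/ha/yr
kcal_per_person_year = 2700 * 365  # assumed dietary requirement

people_fed = acquired_ha * yield_kcal_per_ha / kcal_per_person_year
# ~4.1e8 people, inside the 300-550 million range quoted above
```

Varying the assumed yield between the current and the closed-gap level is what produces the two quoted ranges.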

  19. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environment of the cell, it also acts as a support for complex and specialized molecular machinery, important for both the mechanical integrity of the cell and its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi-two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model, recently developed by us, of self-assembled lipid membranes with implicit solvent and using soft-core potentials, we simulated large-scale red-blood-cell bilayers with dimensions of ~10^-1 μm^2, with explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  20. On the dynamics underlying the emergence of large scale structures in barotropic beta-plane turbulence

    NASA Astrophysics Data System (ADS)

    Bakas, Nikolaos; Constantinou, Navid; Ioannou, Petros

    2016-04-01

    Planetary turbulent flows are observed to self-organize into large scale structures such as zonal jets and coherent vortices. In this work, the eddy-mean flow dynamics underlying the formation of both zonal and nonzonal coherent structures in a barotropic turbulent flow is investigated within the statistical framework of stochastic structural stability theory (S3T). Previous studies have shown that the coherent structures emerge due to the instability of the homogeneous turbulent flow in the statistical dynamical S3T system and that the statistical predictions of S3T are reflected in direct numerical simulations. In this work, the dynamics underlying this S3T statistical instability are studied. It is shown that, for weak planetary vorticity gradient beta, both zonal jets and non-zonal large-scale structures form from upgradient momentum fluxes due to shearing of the eddies by the emerging flow. For large beta, the dynamics of the S3T instability differs for zonal and non-zonal flows. Shearing of the eddies by the mean flow continues to be the mechanism for the emergence of zonal jets while non-zonal large-scale flows emerge from resonant and near-resonant triad interactions between the large-scale flow and the stochastically forced eddies.

  1. Large-Scale Stratospheric Transport Processes

    NASA Technical Reports Server (NTRS)

    Plumb, R. Alan

    2001-01-01

    The paper discusses the following: 1. The Brewer-Dobson circulation: tropical upwelling. 2. Mixing into polar vortices. 3. The latitudinal structure of "age" in the stratosphere. 4. The subtropical "tracer edges". 5. Transport in the lower troposphere. 6. Tracer modeling during SOLVE. 7. 3D modeling of "mean age". 8. Models and measurements II.

  2. Detectability of Large-Scale Solar Subsurface Flows

    NASA Astrophysics Data System (ADS)

    Woodard, M.

    2014-04-01

    The accuracy of helioseismic measurement is limited by the stochastic nature of solar oscillations. In this article I use a Gaussian statistical model of the global seismic wave field of the Sun to investigate the noise limitations of direct-modeling analysis of convection-zone-scale flows. The theoretical analysis of noise is based on hypothetical data that cover the entire photosphere, including the portions invisible from the Earth. Noise estimates are derived for measurements of the flow-dependent couplings of global-oscillation modes and for combinations of coupling measurements that isolate vector-spherical-harmonic components of the flow velocity. For current helioseismic observations, which sample only a fraction of the photosphere, the inferred detection limits are best regarded as optimistic limits. The flow-velocity fields considered in this work are assumed to be decomposable into vector-spherical-harmonic functions of degree less than five. The problem of measuring the general velocity field is shown to be similar enough to the well-studied problem of measuring differential rotation to permit rough estimates of flow-detection thresholds to be gleaned from past helioseismic analysis. I estimate that, with existing and anticipated helioseismic datasets, large-scale flow-velocity amplitudes of a few tens of should be detectable near the base of the convection zone.

  3. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  4. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be based. International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft while continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and the lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how are the interfaces ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned. Where can we improve this complex system design and integration task?

  5. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation
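The statement that heating dominates the pressure rise, with the mole increase a small correction, follows directly from the ideal-gas law. A back-of-envelope sketch with assumed values (not the test data):

```python
# Sealed-chamber pressure rise from the ideal-gas law, P V = n R T, so
#   P2/P1 = (n2/n1) * (T2/T1).
# All numbers below are assumptions for illustration.
P1 = 70e3               # starting pressure, Pa (set below atmospheric)
T1, T2 = 295.0, 320.0   # assumed bulk gas temperature before/after, K
n_ratio = 1.005         # assumed small increase in moles of gas

P2 = P1 * n_ratio * (T2 / T1)
dP = P2 - P1            # ~6.3 kPa, driven almost entirely by T2/T1
```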

  6. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
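The aggregation algorithm itself is not reproduced here, but the notion of complete synchronization can be illustrated with a minimal drive-response pair of 2-node Boolean networks; the update rules and coupling below are assumptions chosen for illustration:

```python
from itertools import product

def f(x):
    # Drive network update (assumed rules): x1' = x1 XOR x2, x2' = x1 AND x2
    return (x[0] ^ x[1], x[0] & x[1])

def g(y, x):
    # Response network driven by the drive's state. This coupling applies the
    # drive's own update to the drive's state, so y(t) = x(t) for all t >= 1.
    return f(x)

def synchronizes(x0, y0, steps=8):
    x, y = x0, y0
    for _ in range(steps):
        x, y = f(x), g(y, x)   # right side uses the pre-update x
    return x == y

# Complete synchronization holds from every pair of initial states.
all_sync = all(synchronizes(x0, y0)
               for x0 in product((0, 1), repeat=2)
               for y0 in product((0, 1), repeat=2))
```

For large-scale networks this brute-force check over all state pairs is exactly what becomes infeasible, which is the motivation for the aggregation approach.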

  7. Nearly incompressible fluids: hydrodynamics and large scale inhomogeneity.

    PubMed

    Hunana, P; Zank, G P; Shaikh, D

    2006-08-01

    incompressible equations for higher order fluctuation components are derived and it is shown that they converge to the usual homogeneous nearly incompressible equations in the limit of no large-scale background. We use a time and length scale separation procedure to obtain wave equations for the acoustic pressure and velocity perturbations propagating on fast-time-short-wavelength scales. On these scales, the pseudosound relation, used to relate density and pressure fluctuations, is also obtained. In both cases, the speed of propagation (sound speed) depends on background variables and therefore varies spatially. For slow-time scales, a simple pseudosound relation cannot be obtained and density and pressure fluctuations are implicitly related through a relation which can be solved only numerically. Subject to some simplifications, a generalized inhomogeneous pseudosound relation is derived. With this paper, we extend the theory of nearly incompressible hydrodynamics to flows, including the solar wind, which include large-scale inhomogeneities (in this case radially symmetric and in equilibrium). PMID:17025534

  8. Simulation and Optimization of Large Scale Subsurface Environmental Impacts; Investigations, Remedial Design and Long Term Monitoring

    SciTech Connect

    Deschaine, L.M.

    2008-07-01

    The global impact to human health and the environment from large scale chemical / radionuclide releases is well documented. Examples are the wide spread release of radionuclides from the Chernobyl nuclear reactors, the mobilization of arsenic in Bangladesh, the formation of Environmental Protection Agencies in the United States, Canada and Europe, and the like. The fiscal costs of addressing and remediating these issues on a global scale are astronomical, but then so are the fiscal and human health costs of ignoring them. An integrated methodology for optimizing the response(s) to these issues is needed. This work addresses development of optimal policy design for large scale, complex, environmental issues. It discusses the development, capabilities, and application of a hybrid system of algorithms that optimizes the environmental response. It is important to note that 'optimization' does not singularly refer to cost minimization, but to the effective and efficient balance of cost, performance, risk, management, and societal priorities along with uncertainty analysis. This tool integrates all of these elements into a single decision framework. It provides a consistent approach to designing optimal solutions that are tractable, traceable, and defensible. The system is modular and scalable. It can be applied either as individual components or in total. By developing the approach in a complex systems framework, a solution methodology represents a significant improvement over the non-optimal 'trial and error' approach to environmental response(s). Subsurface environmental processes are represented by linear and non-linear, elliptic and parabolic equations. The state equations solved using numerical methods include multi-phase flow (water, soil gas, NAPL), and multicomponent transport (radionuclides, heavy metals, volatile organics, explosives, etc.). Genetic programming is used to generate the simulators either when simulation models do not exist, or to extend the

  9. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability.
This framework is an "off-the-shelf" parallel computing model
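The LTM traversal-time estimate (mode of the historical distribution via kernel density estimation) can be sketched as follows; the flight-time sample is synthetic, whereas the dissertation uses recorded traffic data:

```python
import numpy as np

# Synthetic flight-time sample for one route: a dominant on-time cluster
# plus a smaller delayed cluster (minutes).
rng = np.random.default_rng(0)
times = np.concatenate([rng.normal(120, 5, 400),
                        rng.normal(150, 10, 100)])

def kde_mode(samples, bandwidth=2.0, grid_points=500):
    # Gaussian KDE evaluated on a grid; the mode is the grid point of
    # maximum (unnormalized) density.
    grid = np.linspace(samples.min(), samples.max(), grid_points)
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    density = np.exp(-0.5 * diffs**2).sum(axis=1)
    return grid[np.argmax(density)]

mode_time = kde_mode(times)   # near 120 min, the dominant cluster
```

Using the mode rather than the mean keeps the route estimate anchored to typical flights instead of being dragged upward by the delayed tail.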

  10. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  11. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area and low-cost color reflective displays. This invention is inspired by the heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the
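The color-tuning and index-matching behavior described above follows from Bragg diffraction. A rough estimate with assumed values (sphere size, indices, and porosity are illustrative, not the study's measurements):

```python
import math

# Normal-incidence Bragg peak of an fcc macroporous film:
#   lambda_peak = 2 * d111 * n_eff
sphere_diameter_nm = 220.0
d111 = sphere_diameter_nm * math.sqrt(2.0 / 3.0)  # fcc (111) spacing

def n_eff(n_polymer, n_pore, porosity=0.74):
    # Volume-weighted effective refractive index of the porous film
    return math.sqrt(porosity * n_pore**2 + (1 - porosity) * n_polymer**2)

lam_air = 2 * d111 * n_eff(1.6, 1.0)      # air cavities: blue-violet peak
lam_filled = 2 * d111 * n_eff(1.6, 1.6)   # index-matched solvent: the formula
# red-shifts, but physically the index contrast vanishes, so the Bragg
# reflection disappears and the film turns transparent
```

Tuning the cavity size shifts d111 and hence the reflected color across the visible spectrum, consistent with the abstract.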

  12. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean
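The streamwise periodicity is what makes Fourier spectral methods applicable: on a periodic domain, differentiation becomes exact multiplication by ik in wavenumber space. A minimal 1-D illustration of that idea (a toy, not the DNS code itself):

```python
import numpy as np

# Periodic domain and a smooth test field
n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
u = np.sin(3 * x)

# Spectral derivative: FFT, multiply by i*k, inverse FFT
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers (integers here)
dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Agreement with the exact derivative 3*cos(3x) is at machine precision,
# the hallmark of spectral accuracy.
err = np.max(np.abs(dudx - 3 * np.cos(3 * x)))
```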

  13. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations(DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those, of a spatially evolving jet, a temporal jet problem was solved, using periodicity ill the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. 
In addition, the data generated can be used to compute various turbulence quantities such as mean velocities
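    The periodic streamwise treatment described above is what makes Fourier spectral methods applicable. The snippet below is not the authors' solver; it is a minimal sketch, assuming a periodic field on a uniform grid, of how a streamwise derivative is computed spectrally.

```python
import numpy as np

# Uniform periodic grid standing in for the streamwise (jet-axis) direction.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x)  # sample periodic field

# Spectral differentiation: multiply by i*k in Fourier space.
k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers on a 2*pi domain
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# The exact derivative of sin(3x) is 3*cos(3x); for a resolved mode the
# spectral error sits at machine-precision level.
err = np.max(np.abs(du - 3.0 * np.cos(3.0 * x)))
```

    This spectral accuracy for resolved scales is the usual motivation for choosing a Fourier discretization in the periodic direction of a DNS.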

  14. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth.
In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being

  15. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  16. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetic and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of wall time (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
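    The ILU-preconditioned GMRES combination described in this record can be illustrated with standard sparse-solver tooling. This is not the authors' code; it is a small SciPy sketch, using a 2-D Poisson matrix as a stand-in for a FEM system, of the preconditioner-plus-Krylov pattern whose scalability limits the abstract discusses.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix as a stand-in for a FEM stiffness matrix.
n = 30
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU factorization used as a preconditioner.  Its purely
# local character is what degrades for very large systems, as the
# abstract notes; fill-in is controlled by drop_tol / fill_factor.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

# Restarted GMRES with the ILU preconditioner applied on the left.
x, info = spla.gmres(A, b, M=M, restart=50)
residual = np.linalg.norm(b - A @ x)
```

    For the >10^8-unknown regime the abstract targets, this local preconditioning is exactly what stops working, motivating the switch to direct (MUMPS) and algebraic multigrid (ML) solvers.
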

  17. Large-scale Stratospheric Transport Processes

    NASA Technical Reports Server (NTRS)

    Plumb, R. Alan

    2003-01-01

    The PI has undertaken a theoretical analysis of the existence and nature of compact tracer-tracer relationships of the kind observed in the stratosphere, augmented with three-dimensional model simulations of stratospheric tracers (the latter being an extension of modeling work the group did during the SOLVE experiment). This work achieves a rigorous theoretical basis for the existence and shape of these relationships, as well as a quantitative theory of their width and evolution, in terms of the joint tracer-tracer PDF. A paper on this work is almost complete and will soon be submitted to Rev. Geophys. We have analyzed lower stratospheric water in simulations with an isentropic-coordinate version of the MATCH transport model which we recently helped to develop. The three-dimensional structure of lower stratospheric water, in particular, attracted our attention: below about 400 K potential temperature, dry air is localized in the regions of the west Pacific and equatorial South America. We have been analyzing air trajectories to determine how air passes through the tropopause cold trap. This work is now being completed, and a paper will be submitted to Geophys. Res. Lett. before the end of summer. We are continuing to perform experiments with the 'MATCH' CTM, in both sigma- and entropy-coordinate forms. We earlier found (in collaboration with Dr Natalie Mahowald, and as part of an NSF-funded project) that switching to isentropic coordinates made a substantial improvement to the simulation of the age of stratospheric air. We are now running experiments with near-tropopause sources in both versions of the model, to see whether and to what extent the simulation of stratosphere-troposphere transport depends on the model coordinate. Personnel: research is supervised by the PI, Prof. Alan Plumb. Mr William Heres conducts the tracer modeling work and performs other modeling tasks.
Two graduate students, Ms Irene Lee and Mr Michael Ring, have been participating

  18. HTS cables open the window for large-scale renewables

    NASA Astrophysics Data System (ADS)

    Geschiere, A.; Willén, D.; Piga, E.; Barendregt, P.

    2008-02-01

    In a realistic approach to future energy consumption, the effects of sustainable power sources and of growing welfare with increased use of electricity need to be considered. These factors lead to an increased transfer of electric energy over the networks. A dominant part of the energy need will come from expanded large-scale renewable sources. To use them efficiently across Europe, large energy transits between different countries are required. Bottlenecks in the existing infrastructure will be avoided by strengthening the network. For environmental reasons, more infrastructure will be built underground. Nuon is studying HTS technology as a component to solve these challenges. This technology offers a tremendously large power transport capacity as well as the possibility to reduce short-circuit currents, making integration of renewables easier. Furthermore, power transport will be possible at lower voltage levels, giving the opportunity to upgrade the existing network while re-using it. This will result in large cost savings while meeting the future energy challenges. In a 6 km backbone structure in Amsterdam, Nuon wants to install a 50 kV HTS Triax cable for a significant increase of the transport capacity, while developing its capabilities. Nevertheless, several barriers have to be overcome.

  19. Parallel block schemes for large scale least squares computations

    SciTech Connect

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
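    The block-angular elimination the paper parallelizes can be sketched on a toy problem. This is an illustration of the general technique, not the paper's scheme: each diagonal block is QR-eliminated independently (the parallel phase), the leftover rows form a small reduced least-squares problem for the coupling unknowns, and back-substitution recovers the local unknowns. All sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Block-angular least squares: each observation block couples its own
# local unknowns x_i with a small set of shared (coupling) unknowns x_c.
blocks = []
for _ in range(3):
    B = rng.standard_normal((12, 4))   # local (diagonal) block
    C = rng.standard_normal((12, 2))   # coupling columns
    b = rng.standard_normal(12)
    blocks.append((B, C, b))

# Phase 1 (independent per block, hence parallel): QR-eliminate the
# local unknowns; rows below the local triangle involve only x_c.
reduced_rows, reduced_rhs, factors = [], [], []
for B, C, b in blocks:
    Q, R = np.linalg.qr(B, mode='complete')
    QtC, Qtb = Q.T @ C, Q.T @ b
    n_loc = B.shape[1]
    factors.append((R[:n_loc], QtC[:n_loc], Qtb[:n_loc]))
    reduced_rows.append(QtC[n_loc:])
    reduced_rhs.append(Qtb[n_loc:])

# Phase 2: small least-squares problem for the shared unknowns.
x_c = np.linalg.lstsq(np.vstack(reduced_rows),
                      np.concatenate(reduced_rhs), rcond=None)[0]

# Phase 3 (again parallel): back-substitute for each block's unknowns.
x_locals = [np.linalg.solve(R, c - S @ x_c) for R, S, c in factors]

# Check against a dense solve of the assembled block-angular system.
A = np.zeros((36, 14))
rhs = np.concatenate([b for _, _, b in blocks])
for i, (B, C, _) in enumerate(blocks):
    A[12*i:12*(i+1), 4*i:4*(i+1)] = B
    A[12*i:12*(i+1), 12:] = C
x_ref = np.linalg.lstsq(A, rhs, rcond=None)[0]
err = max(np.linalg.norm(x_c - x_ref[12:]),
          max(np.linalg.norm(x_locals[i] - x_ref[4*i:4*(i+1)])
              for i in range(3)))
```

    Because orthogonal transformations preserve the residual norm, the staged solution agrees with the dense one; the point of the paper is that phases 1 and 3 run block-by-block in parallel.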

  20. Large-scale magnetic fields, dark energy, and QCD

    SciTech Connect

    Urban, Federico R.; Zhitnitsky, Ariel R.

    2010-08-15

    Cosmological magnetic fields are being observed with ever increasing correlation lengths, possibly reaching the size of superclusters, therefore disfavoring the conventional picture of generation through primordial seeds later amplified by galaxy-bound dynamo mechanisms. In this paper we put forward a fundamentally different approach that links such large-scale magnetic fields to the cosmological vacuum energy. In our scenario the dark energy is due to the Veneziano ghost (which solves the U(1)_A problem in QCD). The Veneziano ghost couples through the triangle anomaly to the electromagnetic field with a constant which is unambiguously fixed in the standard model. While this interaction does not produce any physical effects in Minkowski space, it triggers the generation of a magnetic field in an expanding universe at every epoch. The induced energy of the magnetic field is thus proportional to the cosmological vacuum energy: ρ_EM ≈ B² ≈ (α/4π)² ρ_DE, with ρ_DE hence acting as a source for the magnetic energy ρ_EM. The corresponding numerical estimate leads to a magnitude in the nG range. There are two unique and distinctive predictions of our proposal: an uninterrupted active generation of Hubble-size correlated magnetic fields throughout the evolution of the Universe, and the presence of parity violation on the enormous scales 1/H, which apparently has already been observed in the CMB. These predictions are entirely rooted in the standard model of particle physics.
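    The scaling ρ_EM ≈ (α/4π)² ρ_DE can be turned into a crude field-strength estimate. The numbers below (observed dark-energy density in SI units, and the SI relation ρ_EM = B²/2μ₀) are assumptions of this back-of-envelope check, not values taken from the paper, and it only brackets the order of magnitude rather than reproducing the paper's detailed estimate.

```python
import math

# Back-of-envelope conversion of the induced energy density to a field.
alpha = 1.0 / 137.036       # fine-structure constant
rho_DE = 5.8e-10            # observed dark-energy density, J/m^3 (assumed)
mu0 = 4.0e-7 * math.pi      # vacuum permeability, SI

rho_EM = (alpha / (4.0 * math.pi))**2 * rho_DE
B_tesla = math.sqrt(2.0 * mu0 * rho_EM)   # from rho_EM = B^2 / (2 mu0)
B_nG = B_tesla / 1.0e-13                  # 1 nanogauss = 1e-13 tesla
```

    The result lands in the sub-microgauss, nanogauss-scale regime, consistent in order of magnitude with the "nG range" quoted in the abstract.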

  1. Implicit solvers for large-scale nonlinear problems

    SciTech Connect

    Keyes, D E; Reynolds, D; Woodward, C S

    2006-07-13

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications.

  2. Importance-truncated large-scale shell model

    NASA Astrophysics Data System (ADS)

    Stumpf, Christina; Braun, Jonas; Roth, Robert

    2016-02-01

    We propose an importance-truncation scheme for the large-scale nuclear shell model that extends its range of applicability to larger valence spaces and midshell nuclei. It is based on a perturbative measure for the importance of individual basis states that acts as an additional truncation for the many-body model space in which the eigenvalue problem of the Hamiltonian is solved numerically. Through a posteriori extrapolations of all observables to vanishing importance threshold, the full shell-model results can be recovered. In addition to simple threshold extrapolations, we explore extrapolations based on the energy variance. We apply the importance-truncated shell model for the study of 56Ni in the p f valence space and of 60Zn and 64Ge in the p f g9 /2 space. We demonstrate the efficiency and accuracy of the approach, which pave the way for future applications of valence-space interactions derived in ab initio approaches in larger valence spaces.
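    The a posteriori threshold extrapolation mentioned above can be sketched with synthetic data. The energies and coefficients below are hypothetical, chosen only to show the mechanics: observables computed at several importance thresholds κ are fit with a low-order polynomial and evaluated at κ → 0 to recover the full-space result.

```python
import numpy as np

# Hypothetical ground-state energies at a sequence of importance
# thresholds kappa (larger threshold -> more basis states discarded).
E_full = -205.6                                  # assumed full result (MeV)
kappa = np.array([2e-5, 4e-5, 6e-5, 8e-5, 1e-4])
E = E_full + 3.0e4 * kappa + 2.0e8 * kappa**2    # model truncation error

# A posteriori extrapolation: fit a low-order polynomial in kappa and
# evaluate at vanishing threshold.
coeffs = np.polyfit(kappa, E, deg=2)
E_extrap = np.polyval(coeffs, 0.0)
```

    In practice the functional form of E(κ) is not known exactly, which is why the authors also explore energy-variance-based extrapolations as a cross-check.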

  3. Scalable pattern recognition for large-scale scientific data mining

    SciTech Connect

    Kamath, C.; Musick, R.

    1998-03-23

    Our ability to generate data far outstrips our ability to explore and understand it. The true value of this data lies not in its final size or complexity, but rather in our ability to exploit the data to achieve scientific goals. The data generated by programs such as ASCI have such a large scale that it is impractical to manually analyze, explore, and understand it. As a result, useful information is overlooked, and the potential benefits of increased computational and data gathering capabilities are only partially realized. The difficulties that will be faced by ASCI applications in the near future are foreshadowed by the challenges currently facing astrophysicists in making full use of the data they have collected over the years. For example, among other difficulties, astrophysicists have expressed concern that the sheer size of their data restricts them to looking at very small, narrow portions at any one time. This narrow focus has resulted in the loss of "serendipitous" discoveries which have been so vital to progress in the area in the past. To solve this problem, a new generation of computational tools and techniques is needed to help automate the exploration and management of large scientific data. This whitepaper proposes applying and extending ideas from the area of data mining, in particular pattern recognition, to improve the way in which scientists interact with large, multi-dimensional, time-varying data.

  4. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
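    The role of the Laplacian eigenvalues and of neighbourhood size in the nearest-neighbour result can be illustrated numerically. This is not the paper's synchronization-margin computation (which also involves the nonlinearity and the uncertainty statistics); it is a minimal sketch of how the Laplacian spectrum of a k-nearest-neighbour ring varies with k.

```python
import numpy as np

def ring_laplacian(n, k):
    """Graph Laplacian of a ring of n nodes, each linked to its
    k nearest neighbours on either side."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(1, k + 1):
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

n = 100
# Second-smallest Laplacian eigenvalue (algebraic connectivity), a
# standard proxy for how easily a network synchronizes.
lam2 = {k: np.sort(np.linalg.eigvalsh(ring_laplacian(n, k)))[1]
        for k in (1, 2, 5, 10)}
```

    The algebraic connectivity grows with the number of neighbours; the paper's point is that once stochastic link uncertainty is included, the margin is no longer monotone in k, producing an optimal neighbourhood size.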

  5. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    NASA Astrophysics Data System (ADS)

    Diwadkar, Amit; Vaidya, Umesh

    2016-04-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies.

  6. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links.

    PubMed

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  7. Performance modeling and analysis of consumer classes in large scale systems

    NASA Astrophysics Data System (ADS)

    Al-Shukri, Sh.; Lenin, R. B.; Ramaswamy, S.; Anand, A.; Narasimhan, V. L.; Abraham, J.; Varadan, Vijay

    2009-03-01

    Peer-to-Peer (P2P) networks have been used efficiently as building blocks of overlay networks for large-scale distributed network applications with Internet Protocol (IP) based bottom-layer networks. With large-scale Wireless Sensor Networks (WSNs) becoming increasingly realistic, it is important to build overlay networks with WSNs in the bottom layer. A suitable mathematical (stochastic) model for the overlay network over WSNs is a queueing network with multi-class customers. In this paper, we discuss how these mathematical network models can be simulated using the object-oriented simulation package OMNeT++. We discuss the Graphical User Interface (GUI) which was developed to accept the input parameter files and execute the simulation. We compare the simulation results with analytical formulas available in the literature for these mathematical models.
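    The comparison of simulated queues against analytical formulas can be demonstrated on the simplest node type. This is not the paper's OMNeT++ model; it is a single M/M/1 queue simulated via the Lindley recursion, with its mean waiting time checked against the classical closed form ρ/(μ − λ).

```python
import numpy as np

# Single M/M/1 queue via the Lindley recursion.
rng = np.random.default_rng(0)
lam, mu, n_cust = 0.5, 1.0, 500_000

inter = rng.exponential(1.0 / lam, n_cust)   # inter-arrival times
serv = rng.exponential(1.0 / mu, n_cust)     # service times

w = np.empty(n_cust)                         # waiting time in queue
w[0] = 0.0
for i in range(1, n_cust):
    w[i] = max(0.0, w[i - 1] + serv[i - 1] - inter[i])

mean_wait = w[50_000:].mean()                # discard warm-up transient
# Analytical M/M/1 mean wait in queue: rho / (mu - lam), rho = lam/mu.
expected = (lam / mu) / (mu - lam)
```

    Multi-class queueing networks extend this idea with per-class arrival and service processes and routing between nodes, which is where simulation packages such as OMNeT++ earn their keep.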

  8. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  9. INTERNATIONAL WORKSHOP ON LARGE-SCALE REFORESTATION: PROCEEDINGS

    EPA Science Inventory

    The purpose of the workshop was to identify major operational and ecological considerations needed to successfully conduct large-scale reforestation projects throughout the forested regions of the world. Large-scale" for this workshop means projects where, by human effort, approx...

  10. Using Large-Scale Assessment Scores to Determine Student Grades

    ERIC Educational Resources Information Center

    Miller, Tess

    2013-01-01

    Many Canadian provinces provide guidelines for teachers to determine students' final grades by combining a percentage of students' scores from provincial large-scale assessments with their term scores. This practice is thought to hold students accountable by motivating them to put effort into completing the large-scale assessment, thereby…

  11. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a
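    The payoff of reusing one factorization across many damping parameters can be sketched on a linear test problem. This is not the MADS implementation: it substitutes an SVD of the Jacobian for the stored Krylov subspace, but it shows the same structure, i.e. one expensive factorization, then cheap Levenberg-Marquardt steps for every damping value λ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear test problem: residual r(x) = J x - y with Jacobian J.
m, n = 200, 50
J = rng.standard_normal((m, n))
y = rng.standard_normal(m)
r = J @ np.zeros(n) - y

# Factor the Jacobian ONCE; this plays the role of the stored Krylov
# subspace that is recycled across damping parameters.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
Utr = U.T @ r

def lm_step(lmbda):
    """Solve (J^T J + lambda I) dx = -J^T r using the recycled factors."""
    return -Vt.T @ (s / (s**2 + lmbda) * Utr)

# Steps for many damping parameters now cost only O(m n) each.
steps = {lm: lm_step(lm) for lm in (0.0, 0.1, 1.0, 10.0)}

# Cross-check one step against a direct dense normal-equations solve.
lmbda = 1.0
dx_direct = np.linalg.solve(J.T @ J + lmbda * np.eye(n), -J.T @ r)
err = np.linalg.norm(steps[lmbda] - dx_direct)
```

    With J = U S Vᵀ, the normal-equations solution reduces to a diagonal rescaling s/(s² + λ), which is why sweeping the damping parameter becomes nearly free once the factorization is in hand.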

  12. Large-Scale Weather Generator for Downscaling Precipitation

    NASA Astrophysics Data System (ADS)

    Thober, Stephan; Samaniego, Luis; Bardossy, Andras

    2013-04-01

    Well-parametrized distributed precipitation-runoff models are able to correctly quantify hydrological state variables (e.g. streamflow and soil moisture, among others) for the past decades. In order to estimate future risks associated with hydrometeorological extremes, it is necessary to incorporate information about future weather and climate. A common approach is to downscale Regional Climate Model (RCM) projections, and various statistical downscaling schemes, utilizing diverse mathematical methods, have been developed for this purpose. One kind of statistical downscaling technique is the so-called Weather Generator (WG). These algorithms provide meteorological time series as realizations of a stochastic process. First, single- and multi-site models were developed. Recently, however, WGs at sub-daily scales and on gridded spatial resolutions have attracted interest because of new developments in distributed hydrological modelling. A standard approach for a multi-site WG is to sample a multivariate normal process for all locations. Doing so, it is necessary to compute the Cholesky factor of the cross-covariance matrix to guarantee spatially consistent sampling. In general, gridded WGs are an extension of multi-site WGs to larger domains (i.e. >10,000 grid cells). On these large grids, it is not possible to accurately determine the Cholesky factor, and further enhancements are required. In this work, a framework for a WG is proposed which provides meteorological time series on a large-scale grid, e.g. a 4 km grid of Germany. It employs a sequential Gaussian simulation method, conditioning the value of a grid cell only on a neighborhood, not on the whole field. This methodology is incorporated into a multi-scale downscaling scheme, which is able to provide precipitation data sets at different spatial and temporal resolutions, ranging from 4 km to 32 km, and from days to months, respectively. This framework uses a copula approach for spatial downscaling, exploiting
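    The classical multi-site sampling step that breaks down on large grids can be shown in a few lines. This is a generic illustration, not the authors' generator: site locations, the exponential covariance model and its correlation length are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small set of "sites" with an assumed exponential spatial covariance.
coords = rng.uniform(0.0, 100.0, size=(50, 2))   # site locations, km
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr_len = 30.0                                  # km, assumed
C = np.exp(-d / corr_len)

# Classical multi-site WG step: spatially consistent Gaussian draws
# via the Cholesky factor of the cross-covariance matrix.
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(50))
fields = Lc @ rng.standard_normal((50, 20_000))  # 20,000 draws

# The empirical covariance of the draws approaches C.  It is this
# O(N^3) factorization that becomes infeasible for >10,000 grid cells,
# motivating the sequential, neighbourhood-conditioned simulation.
C_hat = fields @ fields.T / 20_000
err = np.abs(C_hat - C).max()
```

    Sequential Gaussian simulation sidesteps the global factorization by conditioning each new cell only on already-simulated neighbours, at the price of an approximation controlled by the neighbourhood size.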

  13. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related with such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area: the area under the receiver operating characteristic curve of the landslide distribution probability in the new area is 0.699, and a distribution probability value could explain >65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
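    The logistic-regression-plus-ROC workflow used for the distribution equation can be sketched end to end. Everything below is synthetic and illustrative: the two "terrain attributes" and their distributions are assumptions, not the study's parameters; the point is only the fit-then-validate-by-AUC pattern.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic terrain attributes (e.g. slope, relief) for cells with and
# without mapped large-scale landslides; values are purely illustrative.
n = 400
X1 = rng.normal([30.0, 800.0], [8.0, 200.0], size=(n, 2))  # landslide cells
X0 = rng.normal([18.0, 500.0], [8.0, 200.0], size=(n, 2))  # stable cells
X = np.vstack([X1, X0])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Standardize, then fit logistic regression by gradient descent.
Z = (X - X.mean(0)) / X.std(0)
Z = np.column_stack([np.ones(2 * n), Z])    # intercept column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Z @ w))
    w -= 0.1 * Z.T @ (p - y) / len(y)       # mean log-loss gradient step

# Area under the ROC curve = P(score_landslide > score_stable).
scores = Z @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = ((pos[:, None] > neg[None, :]).mean()
       + 0.5 * (pos[:, None] == neg[None, :]).mean())
```

    An AUC of 0.699 on an independent area, as the study reports, indicates moderate discriminative skill; validating on a region not used for fitting is what guards against an optimistic in-sample estimate.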

  14. Numerical Technology for Large-Scale Computational Electromagnetics

    SciTech Connect

    Sharpe, R; Champagne, N; White, D; Stowell, M; Adams, R

    2003-01-30

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.

  15. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and in the depletion rates of gas reserves in the major gas-importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in

  16. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  17. Interpretation of large-scale deviations from the Hubble flow

    NASA Astrophysics Data System (ADS)

    Grinstein, B.; Politzer, H. David; Rey, S.-J.; Wise, Mark B.

    1987-03-01

    The theoretical expectation for large-scale streaming velocities relative to the Hubble flow is expressed in terms of statistical correlation functions. Only for objects that trace the mass would these velocities have a simple cosmological interpretation. If some biasing affects the objects' formation, then nonlinear gravitational evolution is essential to predicting the expected large-scale velocities, which also depend on the nature of the biasing.

  18. Large scale suppression of scalar power on a spatial condensation

    NASA Astrophysics Data System (ADS)

    Kouwn, Seyen; Kwon, O.-Kab; Oh, Phillial

    2015-03-01

    We consider a deformed single-field inflation model in terms of three SO(3) symmetric moduli fields. We find that spatially linear solutions for the moduli fields induce a phase transition during the early stage of inflation and a suppression of the scalar power spectrum at large scales. This suppression can be an origin of the anomalies seen for large-scale perturbation modes in cosmological observations.

  19. Iterative methods for large scale static analysis of structures on a scalable multiprocessor supercomputer

    NASA Technical Reports Server (NTRS)

    Sobh, Nahil Atef

    1992-01-01

    A parallel Preconditioned Conjugate Gradient (PCG) iterative solver has been developed and implemented on the iPSC-860 scalable hypercube. This new implementation makes use of the Parallel Automated Runtime Toolkit at ICASE (PARTI) primitives to efficiently program the irregular communication patterns that exist in general sparse matrices, and in particular in finite element sparse stiffness matrices. The iterative PCG solver has been used to solve the finite element equations that result from discretizing large scale aerospace structures. In particular, the static response of the High Speed Civil Transport (HSCT) finite element model is solved on the iPSC-860.
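
    The parallel, PARTI-based machinery is beyond a snippet, but the core PCG iteration itself is compact. Below is a serial sketch with a Jacobi (diagonal) preconditioner on a 1-D Laplacian model problem, standing in for a finite element stiffness matrix.

```python
# Serial sketch of Preconditioned Conjugate Gradient with a Jacobi
# (diagonal) preconditioner on a sparse SPD model problem (1-D Laplacian).
# The paper's parallel PARTI-based implementation is far more involved.
import numpy as np
import scipy.sparse as sp

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=2000):
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv_diag * r                # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # preconditioned search direction
        rz = rz_new
    return x

n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / A.diagonal())
print(np.linalg.norm(A @ x - b))      # small residual
```

In the parallel setting the matrix rows are distributed across processors and the sparse matrix-vector product `A @ p` is where the irregular communication patterns arise that PARTI schedules.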

  20. Accelerated Block Preconditioned Gradient method for large scale wave functions calculations in Density Functional Theory

    SciTech Connect

    Fattebert, J.-L.

    2010-01-20

    An Accelerated Block Preconditioned Gradient (ABPG) method is proposed to solve electronic structure problems in Density Functional Theory. This iterative algorithm is designed to solve directly the non-linear Kohn-Sham equations for accurate discretization schemes involving a large number of degrees of freedom. It makes use of an acceleration scheme similar to what is known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of convergence for large scale applications using a finite difference discretization and multigrid preconditioning.
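
    The acceleration idea referenced here, Pulay's DIIS mixing (the "DIIS" in RMM-DIIS), can be sketched on a toy fixed-point problem. The example below applies DIIS to a linear contraction map; it is not a Kohn-Sham solver, and the electronic-structure specifics are omitted.

```python
# Sketch of Pulay's DIIS mixing, the acceleration idea behind RMM-DIIS,
# applied to a toy linear fixed-point problem x = g(x).  Not a Kohn-Sham
# solver; the electronic-structure specifics are omitted.
import numpy as np

rng = np.random.default_rng(0)
n = 20
B = rng.standard_normal((n, n))
B = 0.4 * B / np.linalg.norm(B, 2)      # contraction: spectral norm 0.4
c = rng.standard_normal(n)
g = lambda x: B @ x + c                 # fixed point x* = (I - B)^{-1} c

history_x, history_r = [], []
x = np.zeros(n)
for _ in range(50):
    r = g(x) - x                        # residual of the current iterate
    history_x.append(g(x))
    history_r.append(r)
    history_x, history_r = history_x[-5:], history_r[-5:]   # short memory
    m = len(history_r)
    # Solve min ||sum_i a_i r_i|| subject to sum_i a_i = 1 (Pulay system).
    G = np.array([[ri @ rj for rj in history_r] for ri in history_r])
    A = np.block([[G, np.ones((m, 1))],
                  [np.ones((1, m)), np.zeros((1, 1))]])
    rhs = np.zeros(m + 1)
    rhs[-1] = 1.0
    coef = np.linalg.lstsq(A, rhs, rcond=None)[0][:m]
    x = sum(a * xi for a, xi in zip(coef, history_x))   # extrapolated iterate
    if np.linalg.norm(g(x) - x) < 1e-12:
        break

print(np.linalg.norm(g(x) - x))        # residual at convergence
```

For a linear map the extrapolated residual contracts at least as fast as the plain iteration, which is why DIIS-style mixing is attractive once the iterates are in the convergence basin.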

  1. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily concentrating on canonical flows - zero pressure gradient boundary layers and flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length-scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies are dominant, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.

  2. Towards a self-consistent halo model for the nonlinear large-scale structure

    NASA Astrophysics Data System (ADS)

    Schmidt, Fabian

    2016-03-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model, the effective halo model (EHM), that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.

  3. Large-scale pattern formation in active particles suspensions: from interacting microtubules to swimming bacteria

    NASA Astrophysics Data System (ADS)

    Aranson, Igor

    2006-03-01

    We consider two biological systems of active particles exhibiting large-scale collective behavior: microtubules interacting with molecular motors, and hydrodynamically entrained swimming bacteria. Starting from a generic stochastic microscopic model of inelastically colliding polar rods with an anisotropic interaction kernel, we derive a set of equations for the local rod concentration and orientation. Above a certain critical density of rods, the model exhibits an orientational instability and the onset of large-scale coherence. For the microtubule and molecular motor system, we demonstrate that the orientational instability leads to the formation of the vortices and asters seen in recent experiments. A similar approach is applied to colonies of the swimming bacterium Bacillus subtilis confined in a thin fluid film. The model is formulated in terms of two-dimensional equations for the local density and orientation of the bacteria, coupled to the low Reynolds number Navier-Stokes equation for the fluid flow velocity. The collective swimming of the bacteria is represented by an additional source term in the Navier-Stokes equation. We demonstrate that this system exhibits the formation of dynamic large-scale patterns, with the typical scale determined by the density of bacteria.

  4. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

    The landslide disasters during Typhoon Morakot in 2009 caused heavy casualties. Because of the casualty numbers, this disaster is classified as a large-scale landslide event. The event also showed that surveys of large-scale landslide potential have so far been insufficient, yet are clearly significant. Large-scale landslide potential analysis indicates where attention should be focused, even though such areas are very difficult to distinguish. Accordingly, the authors investigated the methods used by different countries, such as Hong Kong, Italy, Japan and Switzerland, to clarify the assessment methodology. The objects include places susceptible to rock slides and dip slopes, and the major landslide areas defined from historical records. Three different scale levels, from country down to slopeland, are found to be necessary: basin, catchment, and slope scales. In total, ten spots were classified as having high large-scale landslide potential at the basin scale. The authors therefore focused on the catchment scale and employ a risk matrix to classify the potential in this paper. The protected objects and the large-scale landslide susceptibility ratio are the two main indexes used to classify large-scale landslide risk. The protected objects are constructions and transportation facilities. The large-scale landslide susceptibility ratio is based on the data for major landslide areas and for dip slope and rock slide areas. In total, 1,040 catchments are considered and are classified into three levels: high, medium, and low. The proportions of the high, medium, and low levels are 11%, 51%, and 38%, respectively. This result identifies the catchments with a high proportion of protected objects or high large-scale landslide susceptibility, and serves as base material for the slopeland authorities when considering slopeland management and further investigation.

  5. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  6. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observational data from 528 stations, covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields were generated for the study area. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several
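
    The core of a multisite stochastic weather generator is drawing daily fields that honor a prescribed spatial correlation, which can be done via a Cholesky factor of the correlation matrix. Below is a toy three-station example; the correlation matrix is invented, and a real generator additionally handles precipitation occurrence, seasonality, and non-Gaussian marginals.

```python
# Core idea of a multisite stochastic weather generator: draw daily fields
# that honor a prescribed spatial correlation via a Cholesky factor.
# Toy 3-station example with an invented correlation matrix; a real
# generator also models precipitation occurrence, seasonality, etc.
import numpy as np

target_corr = np.array([[1.0, 0.7, 0.4],
                        [0.7, 1.0, 0.5],
                        [0.4, 0.5, 1.0]])
L = np.linalg.cholesky(target_corr)

rng = np.random.default_rng(42)
n_days = 50_000
# Each day: independent standard normals mixed by L to get the target
# cross-station covariance L @ L.T = target_corr.
fields = rng.standard_normal((n_days, 3)) @ L.T

emp = np.corrcoef(fields, rowvar=False)
print(emp.round(2))   # close to target_corr
```

Long synthetic series generated this way are what feed the catchment models to produce the very long, spatially consistent discharge series described above.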

  7. Unsaturated Hydraulic Conductivity for Evaporation in Large scale Heterogeneous Soils

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zhu, J.

    2014-12-01

    In this study we aim to provide practical guidelines on how the commonly used simple averaging schemes (arithmetic, geometric, or harmonic mean) perform in simulating large-scale evaporation in a heterogeneous landscape. Previous studies on hydraulic property upscaling, focusing on steady state flux exchanges, illustrated that an effective hydraulic property is usually more difficult to define for evaporation. This study focuses on upscaling the hydraulic properties of large-scale transient evaporation dynamics using the idea of the stream tube approach. Specifically, the two main objectives are to determine: (1) whether the three simple averaging schemes (i.e., arithmetic, geometric and harmonic means) of hydraulic parameters are appropriate for representing large-scale evaporation processes, and (2) how the applicability of these simple averaging schemes depends on the time scale of the evaporation processes in heterogeneous soils. Multiple realizations of local evaporation processes are carried out using the HYDRUS-1D computational code (Simunek et al, 1998). The three averaging schemes of soil hydraulic parameters were used to simulate the cumulative flux exchange, which is then compared with the large-scale average cumulative flux. The sensitivity of the relative errors to the time frame of the evaporation processes is also discussed.
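
    The three averaging schemes compared in the study are elementary to compute, and for positive, non-identical values they are strictly ordered (harmonic < geometric < arithmetic), so the choice of scheme biases the effective parameter systematically. The conductivity values below are invented for illustration.

```python
# The three simple averaging schemes compared in the study, applied to a
# toy sample of saturated hydraulic conductivities (values invented).
import numpy as np

K = np.array([0.5, 1.2, 3.4, 0.8, 2.1])   # e.g., cm/day across a landscape

arithmetic = K.mean()
geometric = np.exp(np.log(K).mean())
harmonic = 1.0 / (1.0 / K).mean()

# For positive, non-identical values: harmonic < geometric < arithmetic,
# so the scheme choice shifts the effective conductivity systematically.
print(arithmetic, geometric, harmonic)
```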

  8. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  9. Classification of large-scale stellar spectra based on the non-linearly assembling learning machine

    NASA Astrophysics Data System (ADS)

    Liu, Zhongbao; Song, Lipeng; Zhao, Wenjuan

    2016-02-01

    An important shortcoming of traditional classification methods is that they cannot deal with large-scale classification because of their very high time complexity. To solve this problem, and inspired by the idea of collaborative management, the non-linearly assembling learning machine (NALM) is proposed and applied to large-scale stellar spectral classification. In NALM, the large-scale dataset is first divided into several subsets; a traditional classifier such as the support vector machine (SVM) then runs on each subset; finally, the classification results on the subsets are assembled and the overall classification decision is obtained. In comparative experiments, we investigate the performance of NALM in stellar spectral subclass classification relative to SVM. We apply SVM and NALM respectively to classify the four subclasses of K-type spectra, three subclasses of F-type spectra and three subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS). The comparative experimental results show that the performance of NALM is much better than that of SVM in terms of classification accuracy and computation time.
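
    The abstract specifies the divide-train-assemble pattern but not NALM's exact assembling rule. The sketch below illustrates the pattern only: a cheap nearest-centroid learner stands in for SVM, and majority voting stands in for the assembling step; both substitutions, and the synthetic data, are assumptions for illustration.

```python
# Divide-train-assemble pattern behind NALM-style classification: split a
# large training set into subsets, fit a cheap base learner per subset,
# then assemble the decisions by majority vote.  Nearest-centroid stands
# in for SVM, and the voting rule is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class "spectra": two well-separated Gaussian blobs.
n_per, dim = 600, 10
X = np.vstack([rng.normal(0.0, 1.0, (n_per, dim)),
               rng.normal(3.0, 1.0, (n_per, dim))])
y = np.repeat([0, 1], n_per)
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]

def fit_centroids(Xs, ys):
    return {c: Xs[ys == c].mean(axis=0) for c in np.unique(ys)}

def predict_centroids(model, Xq):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(Xq - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Divide into subsets, train one base learner per subset.
n_subsets = 5
models = [fit_centroids(Xs, ys)
          for Xs, ys in zip(np.array_split(X, n_subsets),
                            np.array_split(y, n_subsets))]

# Assemble: majority vote over the subset learners.
votes = np.stack([predict_centroids(m, X) for m in models])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)
accuracy = (y_pred == y).mean()
print(accuracy)
```

The point of the pattern is that each base learner trains on a fraction of the data, so the per-learner cost drops sharply while the assembled decision remains accurate.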

  10. Sparse LSSVM in Primal Using Cholesky Factorization for Large-Scale Problems.

    PubMed

    Zhou, Shuisheng

    2016-04-01

    For support vector machine (SVM) learning, the least squares SVM (LSSVM), usually solved in the dual space (D-LSSVM), is a widely used model because it has an explicit solution. One obvious limitation of the model is that the solution lacks sparseness, which prevents it from training large-scale problems efficiently. In this paper, we derive an equivalent LSSVM model in the primal space (P-LSSVM) by the representer theorem and prove that P-LSSVM can be solved exactly at some sparse solutions for problems with low-rank kernel matrices. Two algorithms are proposed for finding the sparse (approximate) solution of P-LSSVM by Cholesky factorization. One is based on the decomposition of the kernel matrix K as PP^T, with the best low-rank matrix P found approximately by pivoted Cholesky factorization. The other is based on solving P-LSSVM by approximating the Cholesky factorization of the Hessian matrix with a rank-one update scheme. For linear learning problems, theoretical analysis and experimental results support that P-LSSVM can give the sparsest solutions among all SVM learners. Experimental results on some large-scale nonlinear training problems show that our algorithms, based on P-LSSVM, can converge to acceptable test accuracies at very sparse solutions, with a sparsity level <1% and even as little as 0.01%. Hence, our algorithms are a better choice for large-scale training problems. PMID:25966482
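
    The building block of the first algorithm, a pivoted Cholesky low-rank approximation K ≈ PP^T, can be written in a few lines. The sketch below runs on a synthetic exactly-low-rank Gram matrix; it is the factorization step only, not the full P-LSSVM solver.

```python
# Pivoted Cholesky low-rank approximation K ~= P P^T, the building block
# used on the kernel matrix; minimal version on a synthetic low-rank Gram
# matrix, not the full P-LSSVM solver.
import numpy as np

def pivoted_cholesky(K, max_rank, tol=1e-10):
    n = K.shape[0]
    d = np.diag(K).astype(float)             # residual diagonal
    P = np.zeros((n, max_rank))
    for k in range(max_rank):
        i = int(np.argmax(d))                # greedy pivot: largest residual
        if d[i] <= tol:
            return P[:, :k]                  # residual negligible: stop early
        P[:, k] = (K[:, i] - P[:, :k] @ P[i, :k]) / np.sqrt(d[i])
        d -= P[:, k] ** 2
    return P

rng = np.random.default_rng(7)
n, r = 300, 8
X = rng.standard_normal((n, r))
K = X @ X.T                                  # exactly rank-r PSD matrix

P = pivoted_cholesky(K, max_rank=20)
print(P.shape[1], np.linalg.norm(K - P @ P.T))
```

Because the pivot is always the largest remaining diagonal entry, the factorization captures the dominant part of the kernel first and can stop well before the full rank, which is exactly what makes the resulting solution sparse in the training samples.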

  11. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  12. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  13. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales - even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  14. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is the moist static energy, or the gross moist stability, respectively. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud resolving models, and these will in turn be related to assumptions used to parameterize convection in large-scale models.

  15. Large-scale current systems in the dayside Venus ionosphere

    NASA Technical Reports Server (NTRS)

    Luhmann, J. G.; Elphic, R. C.; Brace, L. H.

    1981-01-01

    The occasional observation of large-scale horizontal magnetic fields within the dayside ionosphere of Venus by the flux gate magnetometer on the Pioneer Venus orbiter suggests the presence of large-scale current systems. Using the measured altitude profiles of the magnetic field and the electron density and temperature, together with the previously reported neutral atmosphere density and composition, it is found that the local ionosphere can be described at these times by a simple steady state model which treats the unobserved quantities, such as the electric field, as parameters. When the model is appropriate, the altitude profiles of the ion and electron velocities and the currents along the satellite trajectory can be inferred. These results elucidate the configurations and sources of the ionospheric current systems which produce the observed large-scale magnetic fields, and in particular illustrate the effect of ion-neutral coupling in the determination of the current system at low altitudes.

  16. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  17. A survey on routing protocols for large-scale wireless sensor networks.

    PubMed

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs), based on the cooperation of a significant number of nodes, have become a hot topic. "Large-scale" mainly means a large area or a high density of a network. Accordingly, the routing protocols must scale well as the network scope extends and the node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods currently used to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes at the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and
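
    The canonical example of the hierarchical class this survey covers is LEACH's randomized cluster-head rotation: each node elects itself cluster head with a rotating threshold so the energy-hungry head role is shared. The sketch below simulates that election rule; the network size and head fraction are invented for illustration.

```python
# Cluster-head rotation in the LEACH style, the canonical hierarchical
# WSN protocol of the kind this survey classifies.  Each node elects
# itself cluster head with a rotating threshold so the energy-hungry
# head role is shared; N and P are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 0.1                    # nodes, desired cluster-head fraction
epoch = round(1 / P)               # rounds after which eligibility resets

def threshold(r):
    """LEACH threshold T(n) for a node still eligible in round r."""
    return P / (1 - P * (r % epoch))

eligible = np.ones(N, dtype=bool)
heads_per_round = []
for r in range(10 * epoch):        # simulate 10 full epochs
    if r % epoch == 0:
        eligible[:] = True         # new epoch: everyone eligible again
    draws = rng.random(N)
    is_head = eligible & (draws < threshold(r))
    eligible &= ~is_head           # heads sit out the rest of the epoch
    heads_per_round.append(int(is_head.sum()))

total = sum(heads_per_round)
print(total, total / len(heads_per_round))
```

Because the threshold rises to 1 in the last round of each epoch, every node serves as cluster head exactly once per epoch, giving an average of N*P heads per round; this rotation is what balances energy consumption across the network.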

  18. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.

  19. The evolution of large-scale magnetic fields in the ionosphere of Venus

    NASA Astrophysics Data System (ADS)

    Cravens, T. E.; Shinagawa, H.; Nagy, A. F.

    1984-03-01

    Large-scale magnetic fields are often observed in the ionosphere of Venus by the magnetometer on the Pioneer Venus Orbiter, especially near the subsolar point or when the solar wind dynamic pressure is high. An equation for the time evolution of the magnetic field is derived which includes both a term representing the time rate of change of the field due to the convection of magnetic flux by plasma motions, and a magnetic diffusion/dissipation term. The ionospheric plasma velocities required by these equations were obtained by numerically solving the momentum equation. Numerical solutions to the magnetic field equation indicate that large-scale magnetic fields, which are not being actively maintained, decay with time scales ranging from tens of minutes to several hours. The vertical convection of magnetic flux enables magnetic field structures deep within the ionosphere to persist longer than would otherwise be expected. This vertical convection also explains the shape of these structures.

  20. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  1. The Evolution of Baryons in Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Snedden, Ali; Arielle Phillips, Lara; Mathews, Grant James; Coughlin, Jared; Suh, In-Saeng; Bhattacharya, Aparna

    2015-01-01

    The environments of galaxies play a critical role in their formation and evolution. We study these environments using cosmological simulations with star formation and supernova feedback included. From these simulations, we parse the large scale structure into clusters, filaments and voids using a segmentation algorithm adapted from medical imaging. We trace the star formation history, gas phase and metal evolution of the baryons in the intergalactic medium as a function of structure. We find that our algorithm reproduces the baryon fraction in the intracluster medium and that the majority of star formation occurs in cold, dense filaments. We present the consequences this large scale environment has for galactic halos and galaxy evolution.

  2. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  3. Large-scale ER-damper for seismic protection

    NASA Astrophysics Data System (ADS)

    McMahon, Scott; Makris, Nicos

    1997-05-01

    A large scale electrorheological (ER) damper has been designed, constructed, and tested. The damper consists of a main cylinder and a piston rod that pushes an ER-fluid through a number of stationary annular ducts. This damper is a scaled-up version of a prototype ER-damper which has been developed and extensively studied in the past. In this paper, results from comprehensive testing of the large-scale damper are presented, and the proposed theory developed for predicting the damper response is validated.

  4. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  5. Large-scale liquid scintillation detectors for solar neutrinos

    NASA Astrophysics Data System (ADS)

    Benziger, Jay B.; Calaprice, Frank P.

    2016-04-01

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed.

  6. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueler, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  7. Clearing and Labeling Techniques for Large-Scale Biological Tissues.

    PubMed

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-06-30

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  8. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
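    The decision-rule idea underlying the ILDR approach (though not the paper's algorithm itself) can be sketched with an affine release rule, release_t = a + b * inflow_t, evaluated over synthetic inflow traces. The capacity, demand, inflow distribution, and penalty function below are all invented for illustration.

```python
import numpy as np

# Minimal sketch of a linear decision rule (LDR) for reservoir release:
# release_t = a + b * inflow_t, clipped to physical limits. This shows the
# decision-rule *idea* only, not the paper's ILDR method; all numbers
# (capacity, demand, inflow statistics) are made up for illustration.
rng = np.random.default_rng(0)
capacity, s0, demand = 100.0, 50.0, 8.0

def simulate(a, b, inflows):
    """Mean squared shortfall of release vs. demand over one inflow trace."""
    s, penalty = s0, 0.0
    for q in inflows:
        r = float(np.clip(a + b * q, 0.0, s + q))  # feasible release under the rule
        s = min(s + q - r, capacity)               # mass balance, spill at capacity
        penalty += (r - demand) ** 2
    return penalty / len(inflows)

# Synthetic inflow scenarios with mean 8 (matching demand on average).
scenarios = rng.gamma(shape=4.0, scale=2.0, size=(50, 26))

# Crude grid search over the two rule coefficients (a stand-in for the
# optimization step; the real method optimizes the rule directly).
best = min(
    ((a, b) for a in np.linspace(0, 10, 21) for b in np.linspace(0, 1, 11)),
    key=lambda ab: np.mean([simulate(*ab, tr) for tr in scenarios]),
)
score = np.mean([simulate(*best, tr) for tr in scenarios])
```

    The optimized rule beats the do-nothing rule (a = b = 0) on the same scenarios, which is the sense in which a decision rule yields a tractable policy under uncertainty.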

  9. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  10. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth-observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, ocean change, land-surface dynamic change, and solid-Earth dynamic change. Toward establishing a Moon-based Earth observation platform, we plan to study five aspects: mechanisms and models for Moon-based observation of macroscopic geoscience phenomena; sensor parameter optimization and methods of Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform; and the fundamental scientific framework of Moon-based Earth observation.

  11. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  12. Large-scale search for dark-matter axions

    SciTech Connect

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J.; McBride, J.; Peng, H.; Rosenberg, L.J.; Xin, H.; Laveigne, J.; Sikivie, P.; Sullivan, N.S.; Tanner, D.B.; Moltz, D.M.; Powell, J.; Clarke, J.; Nezrick, F.A.; Turner, M.S.; Golubev, N.A.; Kravchuk, L.V.

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.

  13. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  14. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.
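    The over-driving idea can be sketched against a generic first-order lag, which is a common stand-in for an actuator with a time lag in its force (here, voltage) response. The plant model dV/dt = (Vc - V)/tau and every constant below are assumptions for illustration, not the authors' damper model or control law.

```python
# Toy illustration of over-driving a lagging actuator. A plain step
# command reaches the target voltage slowly; commanding the maximum
# (over-driving) until the response nears the target gets there faster.
# Plant model and constants are illustrative assumptions only.
tau, dt, v_max, target = 0.06, 0.001, 10.0, 5.0

def settle_steps(command):
    """Number of Euler steps until the response first reaches 95% of target."""
    v = 0.0
    for k in range(1, 2001):
        v += dt * (command(v) - v) / tau   # first-order lag dV/dt = (Vc - V)/tau
        if v >= 0.95 * target:
            return k
    return None

plain = settle_steps(lambda v: target)  # ordinary step command
over = settle_steps(lambda v: v_max if v < 0.95 * target else target)  # over-driven
```

    With these numbers the over-driven command settles in far fewer steps than the plain step, which is the qualitative benefit the controller above exploits (back-driving plays the symmetric role when the force must be reduced quickly).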

  15. Large-scale search for dark-matter axions

    SciTech Connect

    Kinion, D; van Bibber, K

    2000-08-30

    We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie and mark a "second generation" relative to the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.

  16. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  17. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.

  18. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  19. Individual Skill Differences and Large-Scale Environmental Learning

    ERIC Educational Resources Information Center

    Fields, Alexa W.; Shelton, Amy L.

    2006-01-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited…

  20. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  1. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney–Hasegawa–Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 |k_x|} and M := {k : |k_y| < √3 |k_x|} respectively, where k = (k_x, k_y) lies in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft's argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.

  2. Large Scale Field Campaign Contributions to Soil Moisture Remote Sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large-scale field experiments have been an essential component of soil moisture remote sensing for over two decades. They have provided test beds for both the technology and science necessary to develop and refine satellite mission concepts. The high degree of spatial variability of soil moisture an...

  3. Large-scale V/STOL testing. [in wind tunnels

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.; Aiken, T. N.; Aoyagi, K.; Falarski, M. D.

    1977-01-01

    Several facets of large-scale testing of V/STOL aircraft configurations are discussed with particular emphasis on test experience in the Ames 40- by 80-foot wind tunnel. Examples of powered-lift test programs are presented in order to illustrate tradeoffs confronting the planner of V/STOL test programs. It is indicated that large-scale V/STOL wind-tunnel testing can sometimes compete with small-scale testing in the effort required (overall test time) and program costs because of the possibility of conducting a number of different tests with a single large-scale model where several small-scale models would be required. The benefits of both high- and full-scale Reynolds numbers, more detailed configuration simulation, and number and type of onboard measurements increase rapidly with scale. Planning must be more detailed at large scale in order to balance the increased costs, which grow with the number of measurements and model configuration variables, against the benefit of the larger amount of information obtained from a single test.

  4. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  5. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  6. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  7. Ecosystem resilience despite large-scale altered hydro climatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  8. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  9. Probabilistic Cuing in Large-Scale Environmental Search

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.

    2010-01-01

    Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…

  10. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  11. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  12. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  13. Newton iterative methods for large scale nonlinear systems

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-01-01

    Objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)
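    The Newton iteration at the core of such methods can be shown on a toy 2x2 system; the system, starting point, and use of a dense direct solve are invented for illustration. In the large-scale setting this abstract targets, the linear solve in the loop would be replaced by an iterative (Krylov) linear algebra method.

```python
import numpy as np

# Bare-bones Newton iteration for F(x) = 0 with an analytic Jacobian.
# Toy system (an illustrative assumption): a circle and a hyperbola.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])

def J(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [x[1],     x[0]]])

x = np.array([2.0, 0.0])
for _ in range(20):
    step = np.linalg.solve(J(x), -F(x))   # Newton linear system: J dx = -F
    x = x + step
    if np.linalg.norm(F(x)) < 1e-12:      # residual-based stopping test
        break
```

    The residual drops quadratically near the root; an "inexact" Newton method accepts an approximate solution of the linear system at each step, which is what makes the approach viable for large discretized PDE systems.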

  14. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.

  15. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  16. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  17. Polymers in 2D Turbulence: Suppression of Large Scale Fluctuations

    NASA Astrophysics Data System (ADS)

    Amarouchene, Y.; Kellay, H.

    2002-08-01

    Small quantities of a long chain molecule or polymer affect two-dimensional turbulence in unexpected ways. Their presence inhibits the transfers of energy to large scales causing their suppression in the energy density spectrum. This also leads to the change of the spectral properties of a passive scalar which turns out to be highly sensitive to the presence of energy transfers.

  18. Creating a Large-Scale, Third Generation, Distance Education Course.

    ERIC Educational Resources Information Center

    Weller, Martin James

    2000-01-01

    Outlines the course development of an introductory large-scale distance education course offered via the World Wide Web at the Open University in the United Kingdom. Topics include developing appropriate student skills; maintaining quality control; facilitating easy updating of material; ensuring student interaction; and making materials…

  19. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  20. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  1. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  2. A bibliographical surveys of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  3. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  4. Response of Tradewind Cumuli to Large-Scale Processes.

    NASA Astrophysics Data System (ADS)

    Soong, S.-T.; Ogura, Y.

    1980-09-01

    The two-dimensional slab-symmetric numerical cloud model used by Soong and Ogura (1973) for studying the evolution of an isolated cumulus cloud is extended to investigate the statistical properties of cumulus clouds which would be generated under a given large-scale forcing composed of the horizontal advection of temperature and water vapor mixing ratio, vertical velocity, sea surface temperature and radiative cooling. Random disturbances of small amplitude are introduced into the model at low levels to provide random motion for cloud formation.The model is applied to a case of suppressed weather conditions during BOMEX for the period 22-23 June 1969 when a nearly steady state prevailed. The composited temperature and mixing ratio profiles of these two days are used as initial conditions and the time-independent large-scale forcing terms estimated from the observations are applied to the model. The result of numerical integration shows that a number of small clouds start developing after 1 h. Some of them decay quickly, but some of them develop and reach the tradewind inversion. After a few hours of simulation, the vertical profiles of the horizontally averaged temperature and moisture are found to deviate only slightly from the observed profiles, indicating that the large-scale effect and the feedback effects of clouds on temperature and mixing ratio reach an equilibrium state. The three major components of the cloud feedback effect, i.e., condensation, evaporation and vertical fluxes associated with the clouds, are determined from the model output. The vertical profiles of vertical heat and moisture fluxes in the subcloud layer in the model are found to be in general agreement with the observations.Sensitivity tests of the model are made for different magnitudes of the large-scale vertical velocity. The most striking result is that the temperature and humidity in the cloud layer below the inversion do not change significantly in spite of a relatively large

  5. Two new methods for solving large scale least squares in geodetic surveying computations

    NASA Astrophysics Data System (ADS)

    Murigande, Ch.; Toint, Ph. L.; Paquet, P.

    1986-12-01

    This paper considers the solution of linear least squares problems arising in space geodesy, with a special application to multistation adjustment by a short arc method based on Doppler observations. The widely used second-order regression algorithm due to Brown (1976) for reducing the normal equations system is briefly recalled. Then two algorithms which avoid the use of the normal equations are proposed. The first one is a direct method that applies orthogonal transformations to the observation matrix directly, in order to reduce it to upper triangular form. The solution is then obtained by back-substitution. The second method is iterative and uses a preconditioned conjugate gradient technique. A comparison of the three procedures is provided on data of the second European Doppler Observation Campaign.
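The contrast this abstract draws between reducing the normal equations and applying orthogonal transformations directly to the observation matrix can be illustrated with a short NumPy sketch (the function names are ours, for illustration only; this is not the authors' geodetic implementation):

```python
import numpy as np

def solve_normal_equations(A, b):
    """Classical route: form A^T A and solve. Squares the condition number of A."""
    return np.linalg.solve(A.T @ A, A.T @ b)

def solve_qr(A, b):
    """Orthogonal-transformation route: reduce A to upper-triangular R via QR,
    then back-substitute. Avoids forming A^T A, so it is better conditioned."""
    Q, R = np.linalg.qr(A)              # A = Q R, Q orthogonal, R upper triangular
    return np.linalg.solve(R, Q.T @ b)  # back-substitution on R x = Q^T b

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))       # overdetermined observation matrix
x_true = np.arange(1.0, 6.0)
b = A @ x_true                          # consistent right-hand side
assert np.allclose(solve_qr(A, b), x_true)
assert np.allclose(solve_normal_equations(A, b), x_true)
```

On well-conditioned data both routes agree; the QR route retains accuracy as the observation matrix becomes ill-conditioned, which is the motivation for avoiding the normal equations.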

  6. Development of large-scale acoustic waveguides for liquid-level measurements

    SciTech Connect

    Kirkpatrick, J.F.; Kuzniak, W.C.

    1987-01-01

    Large-scale magnetostrictive ultrasonic waveguides are being developed and tested for liquid-level measurement. The use of inexpensive, commercially available, nickel tubing provides a homogeneous waveguide with nondispersive transmission properties and good independence of torsional and extensional wave modes. Because the entire waveguide is magnetostrictive, acoustic excitation and sensing is possible at any point along the length of the waveguide. The problems of establishing and maintaining circumferential fields for torsional wave generation have been solved by electromagnetic field generation. Prototype devices have been built and tested which exhibit a linear relationship between either torsional amplitude or phase velocity and depth of immersion.

  7. Constructing perturbation theory kernels for large-scale structure in generalized cosmologies

    NASA Astrophysics Data System (ADS)

    Taruya, Atsushi

    2016-07-01

    We present a simple numerical scheme for perturbation theory (PT) calculations of large-scale structure. Solving the evolution equations for perturbations numerically, we construct the PT kernels as building blocks of statistical calculations, from which the power spectrum and/or correlation function can be systematically computed. The scheme is especially applicable to the generalized structure formation including modified gravity, in which the analytic construction of PT kernels is intractable. As an illustration, we show several examples for power spectrum calculations in f(R) gravity and ΛCDM models.

  8. Operational Optimization of Large-Scale Parallel-Unit SWRO Desalination Plant Using Differential Evolution Algorithm

    PubMed Central

    Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find out a scheduling of operation to minimize the total running cost when the change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180

  9. Some notions of decentralization and coordination in large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Chong, C. Y.

    1975-01-01

    Some notions of decentralization and coordination in the control of large-scale dynamic systems are discussed. Decentralization and coordination have always been important concepts in the study of large systems. Roughly speaking, decentralization is the process of dividing a large problem into subproblems so that it can be handled more easily. Coordination is the manipulation of the subproblems so that the original problem is solved. The various types of decentralization and coordination that have been used to control dynamic systems are discussed. The emphasis was on distinguishing between on-line and off-line operations in order to understand the available results by indicating the aspects of the problem which are decentralized.

  10. Operational optimization of large-scale parallel-unit SWRO desalination plant using differential evolution algorithm.

    PubMed

    Wang, Jian; Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find out a scheduling of operation to minimize the total running cost when the change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180
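For readers unfamiliar with differential evolution, a minimal DE/rand/1/bin minimizer can be sketched in NumPy. This is a generic illustration of the metaheuristic, not the two-stage variant proposed in the paper:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over box constraints (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = lo + rng.random((pop, dim)) * (hi - lo)   # random initial population
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Mutation: combine three distinct individuals other than i.
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover: mix mutant and current, at least one gene from mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                       # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

# Sphere function: global minimum 0 at the origin.
x_best, f_best = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 3)
assert f_best < 1e-2
```

Mixed-integer problems such as the plant-scheduling OOP above require extensions (e.g., rounding or separate handling of the integer variables), which is part of what the paper's two-stage scheme addresses.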

  11. Large-scale quantification of CVD graphene surface coverage

    NASA Astrophysics Data System (ADS)

    Ambrosi, Adriano; Bonanni, Alessandra; Sofer, Zdeněk; Pumera, Martin

    2013-02-01

    The extraordinary properties demonstrated for graphene and graphene-related materials can be fully exploited when a large-scale fabrication procedure is made available. Chemical vapor deposition (CVD) of graphene on Cu and Ni substrates is one of the most promising procedures to synthesize large-area and good quality graphene films. Parallel to the fabrication process, a large-scale quality monitoring technique is equally crucial. We demonstrate here a rapid and simple methodology that is able to probe the effectiveness of the growth process over a large substrate area for both Ni and Cu substrates. This method is based on inherent electrochemical signals generated by the underlying metal catalysts when fractures or discontinuities of the graphene film are present. The method can be applied immediately after the CVD growth process without the need for any graphene transfer step and represents a powerful quality monitoring technique for the assessment of large-scale fabrication of graphene by the CVD process.

  12. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    NASA Astrophysics Data System (ADS)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e., load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms involve the sigma-modification approach for auxiliary control gains, and the projection approach for excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.

  13. Continuation and bifurcation analysis of large-scale dynamical systems with LOCA.

    SciTech Connect

    Salinger, Andrew Gerhard; Phipps, Eric Todd; Pawlowski, Roger Patrick

    2010-06-01

    Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However, applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML, Amesos) and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.

  14. Iterative methods for large scale nonlinear and linear systems. Final report, 1994--1996

    SciTech Connect

    Walker, H.F.

    1997-09-01

    The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and Newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.

  15. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient execution to find shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm called Thorup's algorithm has been proposed. The original version of Thorup's algorithm (THORUP-FR) has the time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has the time complexity of O(Mα(N)+N) where α(N) is the functional inverse of the Ackerman function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
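The baseline in this comparison, Dijkstra's algorithm with a binary heap (DIJKSTRA-BH), can be sketched as follows; this is an illustrative Python version using the standard-library heap, not the authors' simulator code:

```python
import heapq

def dijkstra_bh(adj, source):
    """Single-source shortest paths with a binary heap: O((M+N) log N).
    adj maps each vertex u to a list of (neighbor, edge_weight) pairs."""
    dist = {u: float("inf") for u in adj}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:          # stale heap entry, already relaxed; skip
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:     # found a shorter path to v
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
assert dijkstra_bh(g, 0) == {0: 0.0, 1: 3.0, 2: 1.0, 3: 4.0}
```

The "lazy deletion" of stale heap entries (the `d > dist[u]` check) is what keeps the binary-heap variant simple at the cost of the log N factor that Thorup's algorithm avoids.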

  16. Effect of weak rotation on large-scale circulation cessations in turbulent convection.

    PubMed

    Assaf, Michael; Angheluta, Luiza; Goldenfeld, Nigel

    2012-08-17

    We investigate the effect of weak rotation on the large-scale circulation (LSC) of turbulent Rayleigh-Bénard convection, using the theory for cessations in a low-dimensional stochastic model of the flow previously studied. We determine the cessation frequency of the LSC as a function of rotation, and calculate the statistics of the amplitude and azimuthal velocity fluctuations of the LSC as a function of the rotation rate for different Rayleigh numbers. Furthermore, we show that the tails of the reorientation PDF remain unchanged for rotating systems, while the distribution of the LSC amplitude and correspondingly the cessation frequency are strongly affected by rotation. Our results are in close agreement with experimental observations. PMID:23006374

  17. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).

  18. Primordial non-Gaussianity in the bispectra of large-scale structure

    SciTech Connect

    Tasinato, Gianmassimo; Tellarini, Matteo; Ross, Ashley J.; Wands, David

    2014-03-01

    The statistics of large-scale structure in the Universe can be used to probe non-Gaussianity of the primordial density field, complementary to existing constraints from the cosmic microwave background. In particular, the scale dependence of halo bias, which affects the halo distribution at large scales, represents a promising tool for analyzing primordial non-Gaussianity of local form. Future observations, for example, may be able to constrain the trispectrum parameter g_NL that is difficult to study and constrain using the CMB alone. We investigate how galaxy and matter bispectra can distinguish between the two non-Gaussian parameters f_NL and g_NL, whose effects give nearly degenerate contributions to the power spectra. We use a generalization of the univariate bias approach, making the hypothesis that the number density of halos forming at a given position is a function of the local matter density contrast and of its local higher-order statistics. Using this approach, we calculate the halo-matter bispectra and analyze their properties. We determine a connection between the sign of the halo bispectrum on large scales and the parameter g_NL. We also construct a combination of halo and matter bispectra that is sensitive to f_NL, with little contamination from g_NL. We study both the case of single and multiple sources to the primordial gravitational potential, discussing how to extend the concept of stochastic halo bias to the case of bispectra. We use a specific halo mass function to calculate numerically the bispectra in appropriate squeezed limits, confirming our theoretical findings.

  19. Quadratic partial eigenvalue assignment in large-scale stochastic dynamic systems for resilient and economic design

    SciTech Connect

    Das, Sonjoy; Goswami, Kundan; Datta, Biswa N.

    2014-12-10

    Failure of structural systems under dynamic loading can be prevented via active vibration control which shifts the damped natural frequencies of the systems away from the dominant range of loading spectrum. The damped natural frequencies and the dynamic load typically show significant variations in practice. A computationally efficient methodology based on quadratic partial eigenvalue assignment technique and optimization under uncertainty has been formulated in the present work that will rigorously account for these variations and result in an economic and resilient design of structures. A novel scheme based on hierarchical clustering and importance sampling is also developed in this work for accurate and efficient estimation of probability of failure to guarantee the desired resilience level of the designed system. Numerical examples are presented to illustrate the proposed methodology.

  20. Using stochastically-generated subcolumns to represent cloud structure in a large-scale model

    SciTech Connect

    Pincus, R; Hemler, R; Klein, S A

    2005-12-08

    A new method for representing subgrid-scale cloud structure, in which each model column is decomposed into a set of subcolumns, has been introduced into the Geophysical Fluid Dynamics Laboratory's global climate model AM2. Each subcolumn in the decomposition is homogeneous but the ensemble reproduces the initial profiles of cloud properties including cloud fraction, internal variability (if any) in cloud condensate, and arbitrary overlap assumptions that describe vertical correlations. These subcolumns are used in radiation and diagnostic calculations, and have allowed the introduction of more realistic overlap assumptions. This paper describes the impact of these new methods for representing cloud structure in instantaneous calculations and long-term integrations. Shortwave radiation computed using subcolumns and the random overlap assumption differs in the global annual average by more than 4 W/m² from the operational radiation scheme in instantaneous calculations; much of this difference is counteracted by a change in the overlap assumption to one in which overlap varies continuously with the separation distance between layers. Internal variability in cloud condensate, diagnosed from the mean condensate amount and cloud fraction, has about the same effect on radiative fluxes as does the ad hoc tuning accounting for this effect in the operational radiation scheme. Long simulations with the new model configuration show little difference from the operational model configuration, while statistical tests indicate that the model does not respond systematically to the sampling noise introduced by the approximate radiative transfer techniques introduced to work with the subcolumns.
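The random overlap assumption mentioned in this abstract, in which cloudiness in different layers is statistically independent, can be illustrated with a toy subcolumn generator (a sketch of the general idea, not the AM2 implementation):

```python
import numpy as np

def random_overlap_subcolumns(cloud_frac, n_sub=1000, seed=0):
    """Generate binary cloudy/clear subcolumns that reproduce a per-layer cloud
    fraction profile under the random overlap assumption: each layer's cloud
    occurrence is drawn independently of every other layer."""
    rng = np.random.default_rng(seed)
    cloud_frac = np.asarray(cloud_frac)
    # Boolean array of shape (n_sub, n_layers); True marks a cloudy cell.
    return rng.random((n_sub, len(cloud_frac))) < cloud_frac

cf = [0.2, 0.5, 0.3]                     # per-layer cloud fraction profile
sub = random_overlap_subcolumns(cf, n_sub=200000)
# The ensemble mean over subcolumns recovers the prescribed profile.
assert np.allclose(sub.mean(axis=0), cf, atol=0.01)
```

Maximum-random or distance-dependent overlap schemes, like the one the paper moves to, would instead correlate the random draws between vertically adjacent layers rather than drawing them independently.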

  1. Stochastic inflation lattice simulations - Ultra-large scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.

    1991-01-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a (exp -1), small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.

  2. Quadratic partial eigenvalue assignment in large-scale stochastic dynamic systems for resilient and economic design

    NASA Astrophysics Data System (ADS)

    Das, S.; Goswami, K.; Datta, B. N.

    2016-05-01

    Failure of structural systems under dynamic loading can be prevented via active vibration control which shifts the damped natural frequencies of the systems away from the dominant range of a loading spectrum. The damped natural frequencies and the dynamic load typically show significant variations in practice. A computationally efficient methodology based on quadratic partial eigenvalue assignment technique and optimization under uncertainty has been formulated in the present work that will rigorously account for these variations and result in economic and resilient design of structures. A novel scheme based on hierarchical clustering and importance sampling is also developed in this work for accurate and efficient estimation of probability of failure to guarantee the desired resilience level of the designed system. Finally the most robust set of feedback matrices is selected from the set of probabilistically characterized optimal closed-loop system to implement the new methodology for design of active controlled structures. Numerical examples are presented to illustrate the proposed methodology.

  3. Stochastic and recursive calibration for operational, large-scale, agricultural land and water use management models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Kimball, J. S.; Jencso, K. G.

    2015-12-01

    Managing the impact of climatic cycles on agricultural production, on land allocation, and on the state of active and projected water sources is challenging. This is because in addition to the uncertainties associated with climate projections, it is difficult to anticipate how farmers will respond to climatic change or to economic and policy incentives. Some sophisticated decision support systems available to water managers consider farmers' adaptive behavior but they are data intensive and difficult to apply operationally over large regions. Satellite-based observational technologies, in conjunction with models and assimilation methods, create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents at seasonal scales. We present an integrated modeling framework that can be driven by satellite remote sensing to enable robust regional assessment and prediction of climatic and policy impacts on agricultural production, water resources, and management decisions. The core of this framework is a widely used model of agricultural production and resource allocation adapted to be used in conjunction with remote sensing inputs to quantify the amount of land and water farmers allocate for each crop they choose to grow on a seasonal basis in response to reduced or enhanced access to water due to climatic or policy restrictions. A recursive Bayesian update method is used to adjust the model parameters by assimilating information on crop acreage, production, and crop evapotranspiration as a proxy for water use that can be estimated from high spatial resolution satellite remote sensing. The data assimilation framework blends new and old information to avoid over-calibration to the specific conditions of a single year and permits the updating of parameters to track gradual changes in the agricultural system. This integrated framework provides an operational means of monitoring and forecasting what crops will be grown and how farmers will allocate land and water under expected adverse conditions, and the resulting consequences for other water users. The Bayesian update framework constitutes an efficient method for the identification of the production function parameters and provides valuable information on the associated uncertainty of the forecasts.
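The recursive Bayesian (Gaussian) update at the core of such assimilation schemes can be sketched as a single Kalman-style step; the two production-function parameters and the observation model below are hypothetical, chosen only to show how new observations shrink parameter uncertainty:

```python
import numpy as np

def bayesian_update(mu, P, H, R, y):
    """One recursive Gaussian update: blend a prior N(mu, P) over parameters
    with an observation y = H @ theta + noise (noise covariance R), and return
    the posterior mean and covariance."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # gain: weights new vs old information
    mu_post = mu + K @ (y - H @ mu)      # shift the mean toward the observation
    P_post = (np.eye(len(mu)) - K @ H) @ P   # uncertainty shrinks
    return mu_post, P_post

# Toy example: two parameters observed directly, with noise, over three seasons.
mu, P = np.zeros(2), np.eye(2) * 10.0    # vague prior
H, R = np.eye(2), np.eye(2) * 0.5
for y in [np.array([1.1, 2.0]), np.array([0.9, 2.2]), np.array([1.0, 1.9])]:
    mu, P = bayesian_update(mu, P, H, R, y)
assert np.all(np.diag(P) < 1.0)          # posterior uncertainty has shrunk
```

Because the prior carries information from earlier seasons, each update blends new and old information, which is the mechanism the abstract credits with avoiding over-calibration to a single year.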

  4. Quadratic partial eigenvalue assignment in large-scale stochastic dynamic systems for resilient and economic design

    NASA Astrophysics Data System (ADS)

    Das, Sonjoy; Goswami, Kundan; Datta, Biswa N.

    2014-12-01

    Failure of structural systems under dynamic loading can be prevented via active vibration control which shifts the damped natural frequencies of the systems away from the dominant range of loading spectrum. The damped natural frequencies and the dynamic load typically show significant variations in practice. A computationally efficient methodology based on quadratic partial eigenvalue assignment technique and optimization under uncertainty has been formulated in the present work that will rigorously account for these variations and result in an economic and resilient design of structures. A novel scheme based on hierarchical clustering and importance sampling is also developed in this work for accurate and efficient estimation of probability of failure to guarantee the desired resilience level of the designed system. Numerical examples are presented to illustrate the proposed methodology.

  5. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  6. Reconstruction of large-scale gene regulatory networks using Bayesian model averaging.

    PubMed

    Kim, Haseong; Gelenbe, Erol

    2012-09-01

    Gene regulatory networks provide the systematic view of molecular interactions in a complex living system. However, constructing large-scale gene regulatory networks is one of the most challenging problems in systems biology. Also large burst sets of biological data require a proper integration technique for reliable gene regulatory network construction. Here we present a new reverse engineering approach based on Bayesian model averaging which attempts to combine all the appropriate models describing interactions among genes. This Bayesian approach with a prior based on the Gibbs distribution provides an efficient means to integrate multiple sources of biological data. In a simulation study with a maximum of 2000 genes, our method shows better sensitivity than previous elastic-net and Gaussian graphical models, with a fixed specificity of 0.99. The study also shows that the proposed method outperforms the other standard methods for a DREAM dataset generated by nonlinear stochastic models. In brain tumor data analysis, three large-scale networks consisting of 4422 genes were built using the gene expression of non-tumor, low and high grade tumor mRNA expression samples, along with DNA-protein binding affinity information. We found that genes having a large variation of degree distribution among the three tumor networks are the ones that are most involved in regulatory and developmental processes, which possibly gives a novel insight concerning conventional differentially expressed gene analysis. PMID:22987132

  7. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  8. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ~10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  9. Generating Large-Scale Longitudinal Data Resources for Aging Research

    PubMed Central

    Hofer, Scott M.

    2011-01-01

    Objectives. The need for large studies and the types of large-scale data resources (LSDRs) are discussed along with their general scientific utility, role in aging research, and affordability. The diversification of approaches to large-scale data resourcing is described in order to facilitate their use in aging research. Methods. The need for LSDRs is discussed in terms of (a) large sample size; (b) longitudinal design; (c) as platforms for additional investigator-initiated research projects; and (d) broad-based access to core genetic, biological, and phenotypic data. Discussion. It is concluded that a “lite-touch, lo-tech, lo-cost” approach to LSDRs is a viable strategy for the development of LSDRs and would enhance the likelihood of LSDRs being established which are dedicated to the wide range of important aging-related issues. PMID:21743049

  10. Lagrangian space consistency relation for large scale structure

    NASA Astrophysics Data System (ADS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  11. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a 3.0 m diameter vehicle sub-orbital flight. To further this technology, large scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts that are under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  12. The Large Scale Synthesis of Aligned Plate Nanostructures.

    PubMed

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ' phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  13. Large-scale processes in the solar nebula

    NASA Technical Reports Server (NTRS)

    Boss, A. P.

    1994-01-01

    Theoretical models of the structure of a minimum mass solar nebula should be able to provide the physical context to help evaluate the efficacy of any mechanism proposed for the formation of chondrules or Ca, Al-rich inclusions (CAI's). These models generally attempt to use the equations of radiative hydrodynamics to calculate the large-scale structure of the solar nebula throughout the planet-forming region. In addition, it has been suggested that chondrules and CAI's (=Ch&CAI's) may have been formed as a direct result of large-scale nebula processing such as passage of material through high-temperature regions associated with the global structure of the nebula. In this report we assess the status of global models of solar nebula structure and of various related mechanisms that have been suggested for Ch and CAI formation.

  14. Planar Doppler Velocimetry for Large-Scale Wind Tunnel Applications

    NASA Technical Reports Server (NTRS)

    McKenzie, Robert L.

    1998-01-01

    Planar Doppler Velocimetry (PDV) concepts using a pulsed laser are described and the obtainable minimum resolved velocities in large-scale wind tunnels are evaluated. Velocity-field measurements are shown to be possible at ranges of tens of meters and with single pulse resolutions as low as 2 m/s. Velocity measurements in the flow of a low-speed, turbulent jet are reported that demonstrate the ability of PDV to acquire both average velocity fields and their fluctuation amplitudes, using procedures that are compatible with large-scale facility operations. The advantages of PDV over current Laser Doppler Anemometry and Particle Image Velocimetry techniques appear to be significant for applications to large facilities.

  15. Transparent and Flexible Large-scale Graphene-based Heater

    NASA Astrophysics Data System (ADS)

    Kang, Junmo; Lee, Changgu; Kim, Young-Jin; Choi, Jae-Boong; Hong, Byung Hee

    2011-03-01

    We report the application of a transparent and flexible heater with high optical transmittance and low sheet resistance using graphene films, showing outstanding thermal and electrical properties. The large-scale graphene films were grown on Cu foil by chemical vapor deposition and transferred to transparent substrates by multiple stacking. A wet chemical doping process enhanced the electrical properties, yielding a sheet resistance as low as 35 ohm/sq at 88.5% transmittance. The temperature response depends mainly on the dimensions and the sheet resistance of the graphene-based heater. We show that a 4x4 cm2 heater can reach 80 °C within 40 seconds and that a large-scale (9x9 cm2) heater shows uniform heating performance, as measured using a thermocouple and an infrared camera. These heaters would be very useful for defogging systems and smart windows.
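As a rough illustration of why sheet resistance governs the temperature response, a rectangular film's resistance counts "squares" (R = Rs·L/W), so the quoted 35 ohm/sq fixes the Joule power for a given drive voltage. The 12 V supply below is an assumed value for illustration, not a figure from the abstract.

```python
# Back-of-the-envelope Joule heating for a square graphene film heater.

def film_resistance(sheet_resistance_ohm_sq, length, width):
    """Resistance of a rectangular film: R = Rs * L / W (number of squares)."""
    return sheet_resistance_ohm_sq * length / width

rs = 35.0  # ohm/sq, doped graphene film value quoted in the abstract
r = film_resistance(rs, length=4.0, width=4.0)  # 4x4 cm film -> exactly 1 square
v = 12.0   # volts, assumed supply voltage (illustrative)
power_w = v ** 2 / r
# A square film is one "square" regardless of its size, so R = Rs = 35 ohm here,
# giving roughly 4.1 W of dissipated heating power at 12 V.
```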

  16. Large Scale Diffuse X-ray Emission from Abell 3571

    NASA Technical Reports Server (NTRS)

    Molnar, Sandor M.; White, Nicholas E. (Technical Monitor)

    2001-01-01

    Observations of the Lyman alpha forest suggest that there are many more baryons at high redshift than we can find in the Universe nearby. The largest known concentration of baryons in the nearby Universe is the Shapley supercluster. We scanned the Shapley supercluster to search for large scale diffuse emission with the Rossi X-ray Timing Explorer (RXTE), and found some evidence for such emission. Large scale diffuse emission may be associated with the supercluster, or with the clusters of galaxies within the supercluster. In this paper we present results of scans near Abell 3571. We found that the sum of a cooling flow and an isothermal beta model adequately describes the X-ray emission from the cluster. Our results suggest that diffuse emission from A3571 extends out to about two virial radii. We briefly discuss the importance of determining the cutoff radius of the beta model.
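For reference, the isothermal beta model mentioned above gives a surface-brightness profile S(r) = S0·[1 + (r/rc)²]^(1/2 − 3β). The parameter values in this sketch are illustrative, not the fitted A3571 values.

```python
import numpy as np

def beta_model(r, s0=1.0, rc=200.0, beta=0.7):
    """Isothermal beta-model surface brightness at projected radius r
    (r in the same units as the core radius rc)."""
    return s0 * (1.0 + (r / rc) ** 2) ** (0.5 - 3.0 * beta)

r = np.array([0.0, 200.0, 2000.0])  # e.g. kpc (illustrative radii)
s = beta_model(r)
# At r = rc the brightness is 2**(0.5 - 3*beta) of the central value,
# and it keeps falling as a power law at large radii, which is why a
# cutoff radius matters when integrating the total diffuse emission.
```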

  17. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  18. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  19. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  20. Large-scale quantum effects in biological systems

    NASA Astrophysics Data System (ADS)

    Mesquita, Marcus V.; Vasconcellos, Áurea R.; Luzzi, Roberto; Mascarenhas, Sergio

    Particular aspects of large-scale quantum effects in biological systems, such as biopolymers and also microtubules in the cytoskeleton of neurons which can have relevance in brain functioning, are discussed. The microscopic (quantum mechanical) and macroscopic (quantum statistical mechanical) aspects, and the emergence of complex behavior, are described. This phenomenon consists of the large-scale coherent process of Fröhlich-Bose-Einstein condensation in open and sufficiently far-from-equilibrium biopolymers. Associated with this phenomenon is the presence of Schrödinger-Davydov solitons, which propagate, undistorted and undamped, when embedded in the Fröhlich-Bose-Einstein condensate, thus allowing for the transmission of signals at long distances, involving a question relevant to bioenergetics.

  1. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a serious contender to traditional probes based on current and future distance indicator measurements.

  2. Large-scale objective phenotyping of 3D facial morphology

    PubMed Central

    Hammond, Peter; Suttie, Michael

    2012-01-01

    Abnormal phenotypes have played significant roles in the discovery of gene function, but organized collection of phenotype data has been overshadowed by developments in sequencing technology. In order to study phenotypes systematically, large-scale projects with standardized objective assessment across populations are considered necessary. The report of the 2006 Human Variome Project meeting recommended documentation of phenotypes through electronic means by collaborative groups of computational scientists and clinicians using standard, structured descriptions of disease-specific phenotypes. In this report, we describe progress over the past decade in 3D digital imaging and shape analysis of the face, and future prospects for large-scale facial phenotyping. Illustrative examples are given throughout using a collection of 1107 3D face images of healthy controls and individuals with a range of genetic conditions involving facial dysmorphism. PMID:22434506

  3. Electron drift in a large scale solid xenon

    SciTech Connect

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, an electron drift speed a factor of two faster is demonstrated in solid phase xenon compared to that in the liquid phase.
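The quoted factor of two can be checked directly from the reported drift speeds:

```python
# Reported drift speeds (cm/us) at 900 V/cm, from the abstract.
v_liquid = 0.193  # liquid xenon at 163 K
v_solid = 0.397   # solid xenon at 157 K

ratio = v_solid / v_liquid            # ~2.06, the "factor two" speed-up
drift_time_solid_us = 8.0 / v_solid   # time to cross the 8.0 cm drift region
```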

  4. Electron drift in a large scale solid xenon

    DOE PAGESBeta

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, an electron drift speed a factor of two faster is demonstrated in solid phase xenon compared to that in the liquid phase.

  5. Large-scale micropropagation system of plant cells.

    PubMed

    Honda, Hiroyuki; Kobayashi, Takeshi

    2004-01-01

    Plant micropropagation is an efficient method of propagating disease-free, genetically uniform plants in massive amounts in vitro. The scale-up of the whole micropropagation process should be established as an economically feasible technology for large-scale production in appropriate bioreactors. It is necessary to design a suitable bioreactor configuration that can provide adequate mixing and mass transfer while minimizing the intensity of shear stress and hydrodynamic pressure. Automatic selection of embryogenic calli and regenerated plantlets using an image analysis system should be associated with the system. The aim of this chapter is to identify the problems related to large-scale plant micropropagation via somatic embryogenesis, and to summarize the micropropagation technology and computer-aided image analysis. Viscous-additive-supplemented culture is also introduced, including the successful results we obtained for callus regeneration. PMID:15453194

  6. Individual skill differences and large-scale environmental learning.

    PubMed

    Fields, Alexa W; Shelton, Amy L

    2006-05-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited exposure and were tested on judgments about the relative locations of objects. They also performed a series of spatial and nonspatial component skill tests. With limited learning, performance after route encoding was worse than performance after survey encoding. Furthermore, performance after route and survey encoding appeared to be preferentially linked to perspective and object-based transformations, respectively. Together, the results provide clues to how different skills might be engaged by different individuals for the same goal of learning a large-scale environment. PMID:16719662

  7. Large-scale flow generation in turbulent convection

    PubMed Central

    Krishnamurti, Ruby; Howard, Louis N.

    1981-01-01

    In a horizontal layer of fluid heated from below and cooled from above, cellular convection with horizontal length scale comparable to the layer depth occurs for small enough values of the Rayleigh number. As the Rayleigh number is increased, cellular flow disappears and is replaced by a random array of transient plumes. Upon further increase, these plumes drift in one direction near the bottom and in the opposite direction near the top of the layer with the axes of plumes tilted in such a way that horizontal momentum is transported upward via the Reynolds stress. With the onset of this large-scale flow, the largest scale of motion has increased from that comparable to the layer depth to a scale comparable to the layer width. The conditions for occurrence and determination of the direction of this large-scale circulation are described. PMID:16592996

  8. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ‧ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  9. Large-scale behavior and statistical equilibria in rotating flows

    NASA Astrophysics Data System (ADS)

    Mininni, P. D.; Dmitruk, P.; Matthaeus, W. H.; Pouquet, A.

    2011-01-01

    We examine long-time properties of the ideal dynamics of three-dimensional flows, in the presence or not of an imposed solid-body rotation and with or without helicity (velocity-vorticity correlation). In all cases, the results agree with the isotropic predictions stemming from statistical mechanics. No accumulation of excitation occurs in the large scales, although, in the dissipative rotating case, anisotropy and accumulation, in the form of an inverse cascade of energy, are known to occur. We attribute this latter discrepancy to the linearity of the term responsible for the emergence of inertial waves. At intermediate times, inertial energy spectra emerge that differ somewhat from classical wave-turbulence expectations and with a trace of large-scale excitation that goes away for long times. These results are discussed in the context of partial two dimensionalization of the flow undergoing strong rotation as advocated by several authors.

  10. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  11. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
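The low-rank kernel approximation that prototype vectors enable can be sketched with a Nyström-style factorization, K ≈ K_nm · K_mm⁻¹ · K_nmᵀ, built from m prototype points. The synthetic data and the random prototype choice below are assumptions for illustration and do not reproduce the PVM's actual prototype-selection criteria.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
Z = X[rng.choice(200, size=20, replace=False)]  # prototypes (here: random subset)

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)                       # full 200x200 kernel
K_nm, K_mm = rbf(X, Z), rbf(Z, Z)   # the only blocks a prototype method stores
K_hat = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T  # rank-20 approximation of K

rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Working with K_nm and K_mm instead of the full kernel is what turns the O(n²) graph regularizer into something that scales with the number of prototypes.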

  12. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  13. Large-scale Alfvén vortices

    NASA Astrophysics Data System (ADS)

    Onishchenko, O. G.; Pokhotelov, O. A.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-01

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  14. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  15. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.
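The nondimensional heat release rate used in such burner tests is commonly defined as Q* = Q / (ρ∞·cp·T∞·√g·D^(5/2)). The sketch below uses Heskestad's well-known flame-height correlation, L/D = 3.7·Q*^(2/5) − 1.02, as an illustrative stand-in; the report fits its own curve to the LNG data, and the 5 MW, 2 m example fire is an assumed case.

```python
import math

def q_star(q_kw, d_m, rho=1.2, cp=1.005, t_amb=293.0, g=9.81):
    """Nondimensional heat release rate Q* for a fire of power q_kw (kW)
    and base diameter d_m (m), with ambient air properties in kJ/kg/K units."""
    return q_kw / (rho * cp * t_amb * math.sqrt(g) * d_m ** 2.5)

def flame_height_over_d(q_kw, d_m):
    """Heskestad-style flame height to diameter ratio: 3.7*Q*^(2/5) - 1.02."""
    return 3.7 * q_star(q_kw, d_m) ** 0.4 - 1.02

# Example: a 2 m diameter, 5 MW fire (illustrative values only).
ratio = flame_height_over_d(5000.0, 2.0)
```

Because the ratio depends on Q* rather than on D alone, small burner tests can be extrapolated to much larger pool fires, which is the purpose of the reduced-scale series.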

  16. Cosmic string and formation of large scale structure.

    NASA Astrophysics Data System (ADS)

    Fang, L.-Z.; Xiang, S.-P.

    Cosmic string formed due to phase transition in the early universe may be the cause of galaxy formation and clustering. The advantage of the string model is that it can give a consistent explanation of all observed results related to large scale structure, such as the correlation functions of galaxies, clusters and superclusters, the existence of voids and/or bubbles, and the anisotropy of the cosmic background radiation. A systematic review of the string model is given.

  17. Large-scale Alfvén vortices

    SciTech Connect

    Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-15

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  18. Climate: large-scale warming is not urban.

    PubMed

    Parker, David E

    2004-11-18

    Controversy has persisted over the influence of urban warming on reported large-scale surface-air temperature trends. Urban heat islands occur mainly at night and are reduced in windy conditions. Here we show that, globally, temperatures over land have risen as much on windy nights as on calm nights, indicating that the observed overall warming is not a consequence of urban development. PMID:15549087

  19. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  20. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  1. PARTICLE ACCELERATION BY COLLISIONLESS SHOCKS CONTAINING LARGE-SCALE MAGNETIC-FIELD VARIATIONS

    SciTech Connect

    Guo, F.; Jokipii, J. R.; Kota, J. E-mail: jokipii@lpl.arizona.ed

    2010-12-10

    Diffusive shock acceleration at collisionless shocks is thought to be the source of many of the energetic particles observed in space. Large-scale spatial variations of the magnetic field have been shown to be important in understanding observations. The effects are complex, so here we consider a simple, illustrative model. Here we solve numerically the Parker transport equation for a shock in the presence of large-scale sinusoidal magnetic-field variations. We demonstrate that the familiar planar-shock results can be significantly altered as a consequence of large-scale, meandering magnetic lines of force. Because the perpendicular diffusion coefficient {kappa}{sub perpendicular} is generally much smaller than the parallel diffusion coefficient {kappa}{sub ||}, the energetic charged particles are trapped and preferentially accelerated along the shock front in the regions where the connection points of magnetic field lines intersecting the shock surface converge, and thus create the 'hot spots' of the accelerated particles. For the regions where the connection points separate from each other, the acceleration to high energies will be suppressed. Further, the particles diffuse away from the 'hot spot' regions and modify the spectra of downstream particle distribution. These features are qualitatively similar to the recent Voyager observations in the Heliosheath. These results are potentially important for particle acceleration at shocks propagating in turbulent magnetized plasmas as well as those which contain large-scale nonplanar structures. Examples include anomalous cosmic rays accelerated by the solar wind termination shock, energetic particles observed in propagating heliospheric shocks, galactic cosmic rays accelerated by supernova blast waves, etc.

  2. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a significant number of nodes have become a hot topic. “Large-scale” mainly means a large network area or a high node density. Accordingly, the routing protocols must scale well to network scope extension and node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods currently used to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proven to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner

  3. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
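    As a toy illustration of how small-scale habitat variability can control a large-scale coefficient: 1-D homogenization arguments often produce a harmonic-type average, which is dominated by the slowest patches. The patch motility values below are hypothetical, and the harmonic mean is used here as an illustrative stand-in, not as the paper's exact formula.

```python
import numpy as np

# Hypothetical small-scale motility values (m^2/day) over equal-width
# habitat patches: e.g., forest, meadow, open corridor, scrub.
mu = np.array([5.0, 50.0, 500.0, 20.0])

arithmetic = mu.mean()                    # naive patch average
harmonic = len(mu) / np.sum(1.0 / mu)     # harmonic-type average

# The harmonic mean is pulled toward the slow (low-motility) patches,
# so the effective large-scale spread is far slower than the naive average.
```

Note how the single fast patch (500) barely moves the harmonic mean, while it dominates the arithmetic one.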

  4. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  5. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of the various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in its various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas that planned maintenance should focus on. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of the system's components. This approach can be used to ensure secure operation of the system through its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
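    The series/parallel fault combinations described above can be sketched with exponential component reliabilities, R(t) = exp(-λt). The failure rates and the system layout below are hypothetical, chosen purely to illustrate how a fault-tree cut is evaluated; they are not taken from the study.

```python
import math

def reliability(lam, t):
    """Exponential reliability: probability a component survives to time t."""
    return math.exp(-lam * t)

def series(*rs):
    """A series system fails if any component fails: R = product of R_i."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """A parallel (redundant) system fails only if all components fail."""
    prob_all_fail = 1.0
    for r in rs:
        prob_all_fail *= (1.0 - r)
    return 1.0 - prob_all_fail

# Hypothetical failure rates (per hour) for an inverter, a panel string,
# and a transformer, evaluated over one year of operation.
t = 8760.0
r_inv = reliability(1e-5, t)
r_str = reliability(2e-6, t)
r_trf = reliability(5e-7, t)

# Illustrative top event: the system works if the transformer works AND
# at least one of two redundant inverter+string branches works.
r_sys = series(r_trf, parallel(series(r_inv, r_str), series(r_inv, r_str)))
```

The same series/parallel combinators compose to evaluate deeper fault trees; swapping in repair models or non-exponential distributions only changes `reliability`.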

  6. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms, using an imaging genetics study with 392 subjects as an example. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. PMID:25731989
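    A minimal sketch of the idea behind robust regression — here iteratively reweighted least squares with Huber weights, a standard robust estimator, not the authors' exact pipeline — shows how gross outliers are downweighted relative to ordinary least squares:

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Robust linear regression via iteratively reweighted least squares
    with Huber weights; scale is re-estimated each iteration from the MAD."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust residual scale
        u = r / s
        w = np.where(np.abs(u) <= delta, 1.0, delta / np.abs(u))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Synthetic cohort: y = 1 + 2x + noise, with 5% gross outliers.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, n)
y[:10] += 15.0  # gross outliers that drag ordinary least squares

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_irls(X, y)
```

The robust intercept stays near the true value of 1 while the ordinary-least-squares intercept is pulled upward by the outliers.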

  7. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, C.P.; Olden, J.D.; Lytle, D.A.; Melis, T.S.; Schmidt, J.C.; Bray, E.N.; Freeman, Mary C.; Gido, K.B.; Hemphill, N.P.; Kennard, M.J.; McMullen, L.E.; Mims, M.C.; Pyron, M.; Robinson, C.T.; Williams, J.G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems. © 2011 by American Institute of Biological Sciences. All rights reserved.

  8. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  9. Large-scale flow generation by inhomogeneous helicity

    NASA Astrophysics Data System (ADS)

    Yokoi, N.; Brandenburg, A.

    2016-03-01

    The effect of kinetic helicity (velocity-vorticity correlation) on turbulent momentum transport is investigated. The turbulent kinetic helicity (pseudoscalar) enters the Reynolds stress (mirror-symmetric tensor) expression in the form of a helicity gradient as the coupling coefficient for the mean vorticity and/or the angular velocity (axial vector), which suggests the possibility of mean-flow generation in the presence of inhomogeneous helicity. This inhomogeneous helicity effect, which was previously confirmed at the level of a turbulence- or closure-model simulation, is examined with the aid of direct numerical simulations of rotating turbulence with nonuniform helicity sustained by an external forcing. The numerical simulations show that the spatial distribution of the Reynolds stress is in agreement with the helicity-related term coupled with the angular velocity, and that a large-scale flow is generated in the direction of angular velocity. Such a large-scale flow is not induced in the case of homogeneous turbulent helicity. This result confirms the validity of the inhomogeneous helicity effect in large-scale flow generation and suggests that a vortex dynamo is possible even in incompressible turbulence where there is no baroclinicity effect.

  10. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  11. Large-scale quantification of CVD graphene surface coverage.

    PubMed

    Ambrosi, Adriano; Bonanni, Alessandra; Sofer, Zdeněk; Pumera, Martin

    2013-03-21

    The extraordinary properties demonstrated for graphene and graphene-related materials can be fully exploited when a large-scale fabrication procedure is made available. Chemical vapor deposition (CVD) of graphene on Cu and Ni substrates is one of the most promising procedures to synthesize large-area and good quality graphene films. Parallel to the fabrication process, a large-scale quality monitoring technique is equally crucial. We demonstrate here a rapid and simple methodology that is able to probe the effectiveness of the growth process over a large substrate area for both Ni and Cu substrates. This method is based on inherent electrochemical signals generated by the underlying metal catalysts when fractures or discontinuities of the graphene film are present. The method can be applied immediately after the CVD growth process without the need for any graphene transfer step and represents a powerful quality monitoring technique for the assessment of large-scale fabrication of graphene by the CVD process. PMID:23396554

  12. Photorealistic large-scale urban city model reconstruction.

    PubMed

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite). PMID:19423889

  13. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  14. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
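    The geometric core of plane-intersection line extraction can be sketched in a few lines: the line direction is the cross product of the two plane normals, and a point on the line solves the two plane equations. This is the generic construction, not the paper's LSHP fitting:

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Line of intersection of planes n1·x = d1 and n2·x = d2.
    Returns (point_on_line, unit_direction); raises if the planes are parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        raise ValueError("planes are parallel")
    # Solve for a point: the two plane equations, plus the constraint that
    # the point has zero component along the line direction (pins one point).
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / norm

# A wall (x = 1) meeting a floor (z = 0) intersects in a line along the y axis.
p, d = plane_intersection([1, 0, 0], 1.0, [0, 0, 1], 0.0)
```

In a full pipeline, the planes would come from fitting the scan points near a candidate edge, and the infinite line would then be clipped to the extent of its supporting points.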

  15. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  16. Large-scale data mining pilot project in human genome

    SciTech Connect

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of `large-scale' will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that it will be coupled with a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  17. How Large Scale Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  18. New Large-scale Control Strategies for Turbulent Boundary Layers

    NASA Astrophysics Data System (ADS)

    Schoppa, Wade; Hussain, Fazle

    1997-11-01

    Using direct numerical simulations of turbulent channel flow, we present robust strategies for drag reduction by prevention of streamwise vortex formation near the wall. Instability of lifted, vortex-free low-speed streaks is shown to generate new streamwise vortices, which dominate near-wall turbulence phenomena. The newly-found instability mechanism initiates streak waviness in the (x,z) plane which leads to ωx sheets. Streak waviness induces positive ∂u/∂x (i.e. positive VISA) which causes these sheets to then collapse via stretching (rather than roll up) into streamwise vortices. Significantly, the 3D features of the (instantaneous) instability-generated vortices agree well with the coherent structures educed (i.e. ensemble-averaged) from fully turbulent flow, suggesting the prevalence of this instability mechanism. The new control via large-scale streak manipulation exploits this crucial role of streak instability in vortex generation. An x-independent forcing with a z wavelength of 4 streak spacings, with an amplitude of only 5% of the centerline velocity, produces a significant sustained drag reduction: 20% for imposed counterrotating large-scale swirls and 50% for colliding spanwise wall jet-like forcing. These results suggest promising drag reduction strategies, involving large-scale (hence more durable) actuation and requiring no wall sensors or feedback logic.

  19. Lateral stirring of large-scale tracer fields by altimetry

    NASA Astrophysics Data System (ADS)

    Dencausse, Guillaume; Morrow, Rosemary; Rogé, Marine; Fleury, Sara

    2014-01-01

    Ocean surface fronts and filaments have a strong impact on the global ocean circulation and biogeochemistry. Surface Lagrangian advection with time-evolving altimetric geostrophic velocities can be used to simulate the submesoscale front and filament structures in large-scale tracer fields. We study this technique in the Southern Ocean region south of Tasmania, a domain marked by strong meso- to submesoscale features such as the fronts of the Antarctic Circumpolar Current (ACC). Starting with large-scale surface tracer fields that we stir with altimetric velocities, we determine `advected' fields which compare well with high-resolution in situ or satellite tracer data. We find that fine scales are best represented in a statistical sense after an optimal advection time of ˜2 weeks, with enhanced signatures of the ACC fronts and better spectral energy. The technique works best in moderate to high EKE regions where lateral advection dominates. This technique may be used to infer the distribution of unresolved small scales in any physical or biogeochemical surface tracer that is dominated by lateral advection. Submesoscale dynamics also impact the subsurface of the ocean, and the Lagrangian advection at depth shows promising results. Finally, we show that climatological tracer fields computed from the advected large-scale fields display improved fine-scale mean features, such as the ACC fronts, which can be useful in the context of ocean modelling.
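    The Lagrangian advection underlying this technique can be sketched as particle tracking through a velocity field with Runge-Kutta steps. The steady solid-body rotation below is a stand-in for the time-evolving altimetric geostrophic velocities, chosen because its exact behaviour (particles stay on circles) is easy to verify:

```python
import numpy as np

def advect_particles(x, y, u, v, dt, n_steps):
    """Advect tracer particles through a steady velocity field (u, v) using
    second-order (midpoint) Runge-Kutta steps — a minimal Lagrangian sketch."""
    for _ in range(n_steps):
        kx, ky = u(x, y), v(x, y)               # velocity at current position
        xm, ym = x + 0.5 * dt * kx, y + 0.5 * dt * ky  # midpoint estimate
        x = x + dt * u(xm, ym)
        y = y + dt * v(xm, ym)
    return x, y

# Solid-body rotation: particles should orbit the origin on circles.
u = lambda x, y: -y
v = lambda x, y: x
x0, y0 = np.array([1.0]), np.array([0.0])
x1, y1 = advect_particles(x0, y0, u, v, dt=0.01, n_steps=int(2 * np.pi / 0.01))
```

For the altimetric application, `u` and `v` would interpolate gridded geostrophic velocities in space and time, and the tracer field would be sampled at each particle's origin after roughly two weeks of back-advection.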

  20. Large scale anisotropy of UHECRs for the Telescope Array

    SciTech Connect

    Kido, E.

    2011-09-22

    The origin of Ultra High Energy Cosmic Rays (UHECRs) is one of the most interesting questions in astroparticle physics. Despite the efforts of previous measurements, there is as yet no consensus on either the origin or the mechanism of UHECR generation and propagation. In this context, the Telescope Array (TA) experiment is expected to play an important role as the largest detector in the northern hemisphere, consisting of an array of surface particle detectors (SDs), fluorescence detectors (FDs), and other important calibration devices. We searched for large-scale anisotropy using the SD data of TA. UHECRs are expected to be restricted to the GZK horizon when the composition of UHECRs is protons, so the observed arrival directions are expected to exhibit local large-scale anisotropy if UHECR sources are astrophysical objects. We used the SD data set from 11 May 2008 to 7 September 2010 to search for large-scale anisotropy. The power to discriminate between LSS and isotropy is not yet sufficient, but the TA statistics are expected to allow discrimination at about the 95% confidence level on average in the near future.

  1. Large-scale functional connectivity networks in the rodent brain.

    PubMed

    Gozzi, Alessandro; Schwarz, Adam J

    2016-02-15

    Resting-state functional Magnetic Resonance Imaging (rsfMRI) of the human brain has revealed multiple large-scale neural networks within a hierarchical and complex structure of coordinated functional activity. These distributed neuroanatomical systems provide a sensitive window on brain function and its disruption in a variety of neuropathological conditions. The study of macroscale intrinsic connectivity networks in preclinical species, where genetic and environmental conditions can be controlled and manipulated with high specificity, offers the opportunity to elucidate the biological determinants of these alterations. While rsfMRI methods are now widely used in human connectivity research, these approaches have only relatively recently been back-translated into laboratory animals. Here we review recent progress in the study of functional connectivity in rodent species, emphasising the ability of this approach to resolve large-scale brain networks that recapitulate neuroanatomical features of known functional systems in the human brain. These include, but are not limited to, a distributed set of regions identified in rats and mice that may represent a putative evolutionary precursor of the human default mode network (DMN). The impact and control of potential experimental and methodological confounds are also critically discussed. Finally, we highlight the enormous potential and some initial application of connectivity mapping in transgenic models as a tool to investigate the neuropathological underpinnings of the large-scale connectional alterations associated with human neuropsychiatric and neurological conditions. We conclude by discussing the translational potential of these methods in basic and applied neuroscience. PMID:26706448

  2. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
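Verification of a gridded product against gauges like that described above typically reduces to a few point statistics. A minimal sketch, with made-up gauge and product values rather than the study's data:

```python
import numpy as np

def verification_scores(product, gauges):
    """Standard point-verification statistics for a gridded precipitation
    product sampled at gauge locations (both 1-D arrays, same length)."""
    product = np.asarray(product, dtype=float)
    gauges = np.asarray(gauges, dtype=float)
    err = product - gauges
    bias = err.mean()                          # mean error (wet/dry bias)
    rmse = np.sqrt((err ** 2).mean())          # root-mean-square error
    corr = np.corrcoef(product, gauges)[0, 1]  # pattern agreement
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Hypothetical example: a product that is 10% too wet but well correlated.
gauges = np.array([0.0, 2.0, 5.0, 10.0, 1.0])
product = 1.1 * gauges
scores = verification_scores(product, gauges)
```

Computing the same scores after aggregating both series to coarser temporal or spatial resolutions gives the multiresolution comparison the abstract describes.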

  3. Upscaling of elastic properties for large scale geomechanical simulations

    NASA Astrophysics Data System (ADS)

    Chalon, F.; Mainguy, M.; Longuemare, P.; Lemonnier, P.

    2004-09-01

    Large-scale geomechanical simulations are increasingly used to model the compaction of stress-dependent reservoirs, predict the long-term integrity of underground radioactive waste disposals, and analyse the viability of hot-dry-rock geothermal sites. These large-scale simulations require the definition of homogeneous mechanical properties for each geomechanical cell, whereas the rock properties are expected to vary at a smaller scale. This paper therefore proposes a new methodology that makes it possible to define the equivalent mechanical properties of the geomechanical cells using the fine-scale information given in the geological model. The methodology is implemented on a synthetic reservoir case, and two upscaling procedures providing the effective elastic properties of Hooke's law are tested. The first upscaling procedure is an analytical method for perfectly stratified rock masses, whereas the second computes lower and upper bounds of the equivalent properties with no assumption on the small-scale heterogeneity distribution. Both procedures are applied to one geomechanical cell extracted from the reservoir structure. The results show that the analytical and numerical upscaling procedures provide accurate estimations of the effective parameters. Furthermore, a large-scale simulation using the homogenized properties of each geomechanical cell calculated with the analytical method demonstrates that the overall behaviour of the reservoir structure is well reproduced for two different loading cases.
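The two kinds of procedure have simple scalar analogues: for a perfectly stratified medium loaded normal to the layers the harmonic (Reuss) average is the exact effective modulus, and the Voigt/Reuss pair bounds the effective modulus with no assumption on the heterogeneity arrangement. A sketch reduced to a single scalar modulus (the paper works with the full Hooke's-law tensor):

```python
import numpy as np

def voigt_reuss_bounds(moduli, fractions):
    """Classical upper (Voigt) and lower (Reuss) bounds on the effective
    elastic modulus of a heterogeneous cell, with no assumption on the
    spatial arrangement of the fine-scale constituents."""
    m = np.asarray(moduli, dtype=float)
    f = np.asarray(fractions, dtype=float)
    f = f / f.sum()                  # normalize volume fractions
    voigt = np.sum(f * m)            # iso-strain (arithmetic) average
    reuss = 1.0 / np.sum(f / m)      # iso-stress (harmonic) average
    return reuss, voigt

# Hypothetical two-layer cell: 60% stiff rock (30 GPa), 40% soft rock (10 GPa).
lower, upper = voigt_reuss_bounds([30.0, 10.0], [0.6, 0.4])
```

Any admissible homogenized modulus for the cell must lie between the two returned values, which is what makes the bounds useful as a sanity check on the analytical stratified-medium result.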

  4. Large-scale magnetic topologies of early M dwarfs

    NASA Astrophysics Data System (ADS)

    Donati, J.-F.; Morin, J.; Petit, P.; Delfosse, X.; Forveille, T.; Aurière, M.; Cabanac, R.; Dintrans, B.; Fares, R.; Gastine, T.; Jardine, M. M.; Lignières, F.; Paletou, F.; Ramirez Velez, J. C.; Théado, S.

    2008-10-01

    We present here additional results of a spectropolarimetric survey of a small sample of stars ranging from spectral type M0 to M8 aimed at investigating observationally how dynamo processes operate in stars on both sides of the full convection threshold (spectral type M4). The present paper focuses on early M stars (M0-M3), that is above the full convection threshold. Applying tomographic imaging techniques to time series of rotationally modulated circularly polarized profiles collected with the NARVAL spectropolarimeter, we determine the rotation period and reconstruct the large-scale magnetic topologies of six early M dwarfs. We find that early-M stars preferentially host large-scale fields with dominantly toroidal and non-axisymmetric poloidal configurations, along with significant differential rotation (and long-term variability); only the lowest-mass star of our subsample is found to host an almost fully poloidal, mainly axisymmetric large-scale field resembling those found in mid-M dwarfs. This abrupt change in the large-scale magnetic topologies of M dwarfs (occurring at spectral type M3) has no related signature on X-ray luminosities (measuring the total amount of magnetic flux); it thus suggests that underlying dynamo processes become more efficient at producing large-scale fields (despite producing the same flux) at spectral types later than M3. We suspect that this change relates to the rapid decrease in the radiative cores of low-mass stars and to the simultaneous sharp increase of the convective turnover times (with decreasing stellar mass) that models predict to occur at M3; it may also be (at least partly) responsible for the reduced magnetic braking reported for fully convective stars. Based on observations obtained at the Télescope Bernard Lyot (TBL), operated by the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique of France.

  5. Arrangement of scale-interaction and large-scale modulation in high Reynolds number turbulent boundary layers

    NASA Astrophysics Data System (ADS)

    Baars, Woutijn J.; Hutchins, Nicholas; Marusic, Ivan

    2015-11-01

    Interactions between small- and large-scale motions are inherent in the near-wall dynamics of wall-bounded flows. We here examine the scale-interaction embedded within the streamwise velocity component. Data were acquired using hot-wire anemometry in ZPG turbulent boundary layers, for Reynolds numbers ranging from Reτ ≡ δUτ / ν ~ 2800 to 22800. After first decomposing velocity signals into contributions from small- and large-scales, we then represent the time-varying small-scale energy with time series of its instantaneous amplitude and instantaneous frequency, via a wavelet-based method. Features of the scale-interaction are inferred from isocorrelation maps, formed by correlating the large-scale velocity with its concurrent small-scale amplitude and frequency. Below the onset of the log-region, the physics constitutes aspects of amplitude modulation and frequency modulation. Time shifts, associated with the correlation extrema--representing the lead/lag of the small-scale signatures relative to the large-scales--are shown to be governed by inner-scaling. Wall-normal trends of time shifts are explained by considering the arrangement of scales in the log- and intermittent-regions, and how they relate to stochastic top-down and bottom-up processes.
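The amplitude-modulation part of such an analysis can be sketched with Fourier tools: split the signal at a cut-off, take the small-scale instantaneous amplitude from an analytic signal, and correlate it with the large scales. Note that the paper uses a wavelet-based method on measured boundary-layer data; the synthetic AM signal and the 5 Hz cut below are stand-ins.

```python
import numpy as np

def hilbert_envelope(x):
    """Instantaneous amplitude via an FFT-based analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def am_correlation(u, fs, f_cut):
    """Split u into large- and small-scale parts with a spectral cut at
    f_cut, then correlate the large-scale signal with the instantaneous
    amplitude of the small scales (a simple amplitude-modulation metric)."""
    n = len(u)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    U = np.fft.fft(u)
    u_large = np.fft.ifft(np.where(np.abs(freqs) <= f_cut, U, 0.0)).real
    u_small = u - u_large
    env = hilbert_envelope(u_small)
    return np.corrcoef(u_large, env)[0, 1]

# Synthetic test signal: a 20 Hz carrier amplitude-modulated by a 0.5 Hz wave.
fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
large = np.sin(2 * np.pi * 0.5 * t)
carrier = (1.0 + 0.8 * large) * np.sin(2 * np.pi * 20.0 * t)
r = am_correlation(large + carrier, fs, f_cut=5.0)
```

For a positively modulated signal like this one, the correlation coefficient is close to +1; applying the same metric with time shifts between the two series recovers the lead/lag structure discussed in the abstract.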

  6. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  7. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system that can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector be zero, is included to make the first-order system elliptic. Newton's method with simple substitution is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven-cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.
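The matrix-free Jacobi-preconditioned conjugate gradient solver mentioned above can be sketched as follows; the operator here is a 1-D Laplacian stand-in, not the LSFEM system, and only a matvec callback plus the operator diagonal are required.

```python
import numpy as np

def jacobi_pcg(apply_A, diag_A, b, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradients using only matvec
    callbacks, so no element or global matrix is ever assembled."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A                   # Jacobi (diagonal) preconditioning
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD system: a 1-D Laplacian applied matrix-free.
n = 50
def apply_lap(v):
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

b = np.ones(n)
x = jacobi_pcg(apply_lap, np.full(n, 2.0), b)
```

The key design point is that `apply_A` can be implemented by looping over elements and accumulating local products, which is how the matrix-free LSFEM solve avoids storing a global matrix.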

  8. Performance of Extended Local Clustering Organization (LCO) for Large Scale Job-Shop Scheduling Problem (JSP)

    NASA Astrophysics Data System (ADS)

    Konno, Yohko; Suzuki, Keiji

    This paper describes an approach to developing a general-purpose solution algorithm for large-scale problems, using “Local Clustering Organization (LCO)” as a new solution method for the Job-shop Scheduling Problem (JSP). Building on the effective large-scale scheduling performance of LCO reported in earlier studies, we examine solving JSP in a way that keeps the search stable and induces better solutions. To improve solution performance for JSP, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering for this solution structure is proposed as an extended LCO. The extended LCO has an algorithm that efficiently improves the scheduling evaluation by clustering a parallel search that extends over plural machines. Results of applying the extended LCO to problems of various scales show that it minimizes the makespan and improves the stability of performance.
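For context, evaluating a JSP schedule's makespan (the quantity the extended LCO minimizes) can be sketched with a simple earliest-start dispatching rule; the two-job instance is hypothetical and the rule is a baseline evaluator, not the LCO algorithm itself.

```python
def makespan(jobs):
    """Makespan of a JSP instance where each job is a list of
    (machine, duration) operations processed in order, dispatched greedily:
    at every step, start the ready operation with the earliest possible
    start time given job precedence and machine availability."""
    job_ready = [0.0] * len(jobs)    # when each job's next op may start
    mach_ready = {}                  # when each machine becomes free
    next_op = [0] * len(jobs)
    remaining = sum(len(j) for j in jobs)
    while remaining:
        best = None
        for i, ops in enumerate(jobs):
            k = next_op[i]
            if k == len(ops):
                continue
            m, d = ops[k]
            start = max(job_ready[i], mach_ready.get(m, 0.0))
            if best is None or start < best[0]:
                best = (start, i, m, d)
        start, i, m, d = best
        job_ready[i] = mach_ready[m] = start + d
        next_op[i] += 1
        remaining -= 1
    return max(job_ready)

# Two jobs on two machines (durations hypothetical).
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
cmax = makespan(jobs)
```

Metaheuristics such as LCO repeatedly perturb the schedule structure and re-evaluate exactly this kind of objective, so a fast evaluator is the workhorse of the search.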

  9. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-01

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
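The rejection mechanism at the heart of RSSA can be sketched for a birth-death process: candidate firings are drawn from propensity upper bounds taken over a fluctuation interval around the current state, accepted with probability (true propensity)/(bound), and the bounds are only refreshed when the state leaves the interval, which is how propensity updates are postponed. A minimal sketch (rate constants hypothetical; the full algorithm handles general networks and dependency bookkeeping):

```python
import math
import random

def rssa_simulate(x0, t_end, rate_birth, rate_death, delta=0.1, seed=1):
    """Rejection (thinning) simulation of a birth-death process:
    X -> X+1 at rate k_b, X -> X-1 at rate k_d * X."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    lo = math.floor(x * (1 - delta))
    hi = math.ceil(x * (1 + delta))
    a_hi = [rate_birth, rate_death * hi]   # bounds valid while lo <= x <= hi
    while True:
        a_hi_sum = a_hi[0] + a_hi[1]
        t += rng.expovariate(a_hi_sum)     # candidate time from the bounds
        if t > t_end:
            return x
        # pick a candidate reaction from the *bound* propensities
        j = 0 if rng.random() * a_hi_sum < a_hi[0] else 1
        a_true = rate_birth if j == 0 else rate_death * x
        # accept with probability a_true / a_hi[j]; rejection keeps it exact
        if rng.random() * a_hi[j] <= a_true:
            x += 1 if j == 0 else -1
            if not (lo <= x <= hi):        # refresh bounds only on exit
                lo = math.floor(x * (1 - delta))
                hi = math.ceil(x * (1 + delta))
                a_hi = [rate_birth, rate_death * hi]

final = rssa_simulate(x0=50, t_end=50.0, rate_birth=10.0, rate_death=0.2)
```

Rejected candidates advance time without changing the state, so exact propensities are only evaluated for the selected candidate, which is the source of RSSA's performance gain on large networks.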

  10. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.

  11. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders. PMID:27096165

  12. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App

    PubMed Central

    Kumar, Sukhbinder; Griffiths, Timothy D.

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance—the capacity to make sense of complex ‘auditory scenes’ is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the ‘stochastic figure-ground’ stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a ‘game’ featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders. PMID:27096165

  13. Large-scale smart passive system for civil engineering applications

    NASA Astrophysics Data System (ADS)

    Jung, Hyung-Jo; Jang, Dong-Doo; Lee, Heon-Jae; Cho, Sang-Won

    2008-03-01

    The smart passive system consisting of a magnetorheological (MR) damper and an electromagnetic induction (EMI) part has recently been proposed. The EMI part can generate the input current for the MR damper from the vibration of a structure, according to Faraday's law of electromagnetic induction. The control performance of the smart passive system has been demonstrated mainly by numerical simulations, which verified that the system can effectively reduce the structural responses of civil engineering structures such as buildings and bridges. On the other hand, experimental validation of the system has not yet been sufficiently conducted. In this paper, the feasibility of the smart passive system for real-scale structures is investigated. To do this, a large-scale smart passive system is designed, manufactured, and tested. The system consists of a large-capacity MR damper, which has a maximum force level of approximately +/-10,000 N, a maximum stroke level of +/-35 mm and a maximum current level of 3 A, and a large-scale EMI part, which is designed to generate sufficient induced current for the damper. The applicability of the smart passive system to large real-scale structures is examined through a series of shaking-table tests, in which the induced current of the EMI part is measured for various sinusoidal excitation inputs. According to the test results, the large-scale EMI part shows that it could generate sufficient current or power to change the damping characteristics of the large-capacity MR damper.
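The EMI part's output can be estimated from the motional form of Faraday's law, emf = N B l v: with sinusoidal excitation the induced voltage follows the damper velocity. All coil parameters below are hypothetical, chosen only to illustrate the computation; the abstract's +/-35 mm stroke is used as the motion amplitude.

```python
import numpy as np

def induced_emf(n_turns, b_field, coil_length, velocity):
    """Motional form of Faraday's law for a coil of n_turns moving at
    `velocity` through a field b_field over effective length coil_length:
    emf = N * B * l * v."""
    return n_turns * b_field * coil_length * velocity

# Sinusoidal excitation at 1 Hz with 35 mm amplitude.
f_hz, amp_m = 1.0, 0.035
t = np.linspace(0.0, 1.0, 1001)
v = 2 * np.pi * f_hz * amp_m * np.cos(2 * np.pi * f_hz * t)  # velocity [m/s]
emf = induced_emf(n_turns=500, b_field=1.0, coil_length=0.05, velocity=v)
```

The peak voltage scales linearly with excitation frequency and amplitude, which is why the EMI part's ability to power the MR damper has to be verified across the range of sinusoidal inputs used in the shaking-table tests.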

  14. Constraints on large-scale dark acoustic oscillations from cosmology

    NASA Astrophysics Data System (ADS)

    Cyr-Racine, Francis-Yan; de Putter, Roland; Raccanelli, Alvise; Sigurdson, Kris

    2014-03-01

    If all or a fraction of the dark matter (DM) were coupled to a bath of dark radiation (DR) in the early Universe, we expect the combined DM-DR system to give rise to acoustic oscillations of the dark matter until it decouples from the DR. Much like the standard baryon acoustic oscillations, these dark acoustic oscillations (DAO) imprint a characteristic scale, the sound horizon of dark matter, on the matter power spectrum. We compute in detail how the microphysics of the DM-DR interaction affects the clustering of matter in the Universe and show that the DAO physics also gives rise to unique signatures in the temperature and polarization spectra of the cosmic microwave background (CMB). We use cosmological data from the CMB, baryon acoustic oscillations, and large-scale structure to constrain the possible fraction of interacting DM as well as the strength of its interaction with DR. Like nearly all knowledge we have gleaned about DM since inferring its existence, this constraint rests on the betrayal by gravity of the location of otherwise invisible DM. Although our results can be straightforwardly applied to a broad class of models that couple dark matter particles to various light relativistic species, in order to make quantitative predictions, we model the interacting component as dark atoms coupled to a bath of dark photons. We find that linear cosmological data and CMB lensing put strong constraints on the existence of DAO features in the CMB and the large-scale structure of the Universe. Interestingly, we find that at most ˜5% of all DM can be very strongly interacting with DR. We show that our results are surprisingly constraining for the recently proposed double-disk DM model, a novel example of how large-scale precision cosmological data can be used to constrain galactic physics and subgalactic structure.

  15. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations, however questions that are always asked when hybrids are mentioned for large scale applications are - how do they scale and has it been shown in a large motor? To answer those questions, large scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small scale hybrid data to that of larger scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  16. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    SciTech Connect

    Hamid Sarv

    1999-03-01

    Technical and economical feasibility of large-scale CO{sub 2} transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing the captured CO{sub 2}. In one case, CO{sub 2} was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe, and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO{sub 2} to deep ocean floor depressions. For shorter distances, CO{sub 2} delivery by parallel-laid, subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO{sub 2}, respectively. At these prices, economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO{sub 2} transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO{sub 2} effluent plumes have been identified as areas that require further attention. Our planned activities in the next Phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO{sub 2} discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  17. Infectious diseases in large-scale cat hoarding investigations.

    PubMed

    Polak, K C; Levy, J K; Crawford, P C; Leutenegger, C M; Moriello, K A

    2014-08-01

    Animal hoarders accumulate animals in over-crowded conditions without adequate nutrition, sanitation, and veterinary care. As a result, animals rescued from hoarding frequently have a variety of medical conditions including respiratory infections, gastrointestinal disease, parasitism, malnutrition, and other evidence of neglect. The purpose of this study was to characterize the infectious diseases carried by clinically affected cats and to determine the prevalence of retroviral infections among cats in large-scale cat hoarding investigations. Records were reviewed retrospectively from four large-scale seizures of cats from failed sanctuaries from November 2009 through March 2012. The number of cats seized in each case ranged from 387 to 697. Cats were screened for feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) in all four cases and for dermatophytosis in one case. A subset of cats exhibiting signs of upper respiratory disease or diarrhea had been tested for infections by PCR and fecal flotation for treatment planning. Mycoplasma felis (78%), calicivirus (78%), and Streptococcus equi subspecies zooepidemicus (55%) were the most common respiratory infections. Feline enteric coronavirus (88%), Giardia (56%), Clostridium perfringens (49%), and Tritrichomonas foetus (39%) were most common in cats with diarrhea. The seroprevalence of FeLV and FIV were 8% and 8%, respectively. In the one case in which cats with lesions suspicious for dermatophytosis were cultured for Microsporum canis, 69/76 lesional cats were culture-positive; of these, half were believed to be truly infected and half were believed to be fomite carriers. Cats from large-scale hoarding cases had high risk for enteric and respiratory infections, retroviruses, and dermatophytosis. Case responders should be prepared for mass treatment of infectious diseases and should implement protocols to prevent transmission of feline or zoonotic infections during the emergency response and when

  18. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the ''true'' (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
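The univariate mean modeler idea (summarize systematic partitions of the data by their means, then answer range queries from the summaries alone) can be sketched on toy 1-D data; AQSim itself operates on spatio-temporal simulation fields at multiple resolutions.

```python
import numpy as np

class MeanModeler:
    """Sketch of a 'univariate mean modeler': store per-block means of
    systematic partitions, then answer range queries from the summaries
    instead of the raw data."""
    def __init__(self, data, block):
        n = len(data) // block
        self.block = block
        self.means = (np.asarray(data[:n * block], dtype=float)
                      .reshape(n, block).mean(axis=1))

    def query_mean(self, lo, hi):
        """Approximate mean of data[lo:hi) using whole blocks only."""
        b0, b1 = lo // self.block, hi // self.block
        if b1 <= b0:
            raise ValueError("range smaller than one block")
        return self.means[b0:b1].mean()

data = np.arange(100.0)          # toy 'simulation output'
m = MeanModeler(data, block=10)
approx = m.query_mean(20, 60)    # answered from 4 block means, not 40 values
```

Storing one mean per block trades a user-controlled modeling error for a large reduction in both storage and query time, which is the design trade-off the chapter evaluates.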

  19. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment.
Together, these research efforts help to improve the efficiency

  20. Towards large scale production and separation of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Alvarez, Noe T.

    Since their discovery, carbon nanotubes (CNTs) have boosted the research and applications of nanotechnology; however, many applications of CNTs are inaccessible because they depend upon large-scale CNT production and separations. Type, chirality and diameter control of CNTs determine many of their physical properties, and such control is still not accessible. This thesis studies the fundamentals for scalable selective reactions of HiPCo CNTs as well as the early phase of routes to an inexpensive approach for large-scale CNT production. In the growth part, this thesis covers a complete wet-chemistry process of catalyst and catalyst support deposition for growth of vertically aligned (VA) CNTs. A wet-chemistry preparation process has significant importance for CNT synthesis through chemical vapor deposition (CVD). CVD is by far the most suitable and inexpensive process for large-scale CNT production when compared to other common processes such as laser ablation and arc discharge. However, its potential has been limited by low-yielding and difficult preparation processes for the catalyst and its support, which have reduced its competitiveness. The wet-chemistry process takes advantage of current nanoparticle technology to deposit the catalyst and the catalyst support as a thin film of nanoparticles, making the protocol simple compared to electron beam evaporation and sputtering processes. In the CNT selective reactions part, this thesis studies UV irradiation of individually dispersed HiPCo CNTs that generates auto-selective reactions in the liquid phase with good control over their diameter and chirality. This technique is ideal for large-scale, continuous-process separation of CNTs by diameter and type. Additionally, an innovative simple catalyst deposition through abrasion is demonstrated. Simple friction between the catalyst and the substrates deposits a high enough density of metal catalyst particles for successful CNT growth. This simple approach has

  1. Why large-scale seasonal streamflow forecasts are feasible

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Candogan Yossef, N.; Van Beek, L. P.

    2011-12-01

    Seasonal forecasts of precipitation and temperature, using either statistical or dynamic prediction, have been around for almost 2 decades. The skill of these forecasts differs both in space and time, with highest skill in areas heavily influenced by SST anomalies such as El Nino or areas where land surface properties have a major impact on e.g. monsoon strength, such as the vegetation cover of the Sahel region or the snow cover of the Tibetan plateau. However, the skill of seasonal forecasts is limited in most regions, with anomaly correlation coefficients varying between 0.2 and 0.5 for 1-3 month precipitation totals. This raises the question of whether seasonal hydrological forecasting is feasible. Here, we make the case that it is. Using the example of statistical forecasts of NAO-strength and related precipitation anomalies over Europe, we show that the skill of large-scale streamflow forecasts is generally much higher than that of the precipitation forecasts themselves, provided that the initial state of the system is accurately estimated. In the latter case, even the precipitation climatology can produce skillful results. This is due to the inertia of the hydrological system rooted in the storage of soil moisture, groundwater and snow pack, as corroborated by a recent study using snow observations for seasonal streamflow forecasting in the Western US. These examples seem to suggest that for accurate seasonal hydrological forecasting, correct state estimation is more important than accurate seasonal meteorological forecasts. However, large-scale estimation of hydrological states is difficult and validation of large-scale hydrological models often reveals large biases in e.g. streamflow estimates. Fortunately, as shown with a validation study of the global model PCR-GLOBWB, these biases are of less importance when seasonal forecasts are evaluated in terms of their ability to reproduce anomalous flows and extreme events, i.e. by anomaly correlations or categorical quantile
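
    The anomaly correlation coefficient quoted above is a standard forecast skill score: the correlation between forecast and observed departures from climatology. A minimal sketch (our own illustration; function and variable names are ours, not from the abstract):

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient: Pearson correlation between
    forecast and observed anomalies relative to climatology."""
    f = np.asarray(forecast, dtype=float) - np.asarray(climatology, dtype=float)
    o = np.asarray(observed, dtype=float) - np.asarray(climatology, dtype=float)
    f -= f.mean()  # center the anomalies
    o -= o.mean()
    return float(f @ o / (np.linalg.norm(f) * np.linalg.norm(o)))
```

    A value of 1 indicates that the forecast reproduces the pattern of observed anomalies perfectly; values near 0.2-0.5, as cited for precipitation totals, indicate only modest skill.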

  2. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-07-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that will classify images by computing the Hamming distance of these features. From the experimental results derived from the benchmark database Caltech 101, and an analysis of the algorithm, an effective approach to large-scale image classification is derived and proposed against the background of big data.
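
    The classification step, assigning an image the label of the stored example at minimum Hamming distance from its feature vector, can be sketched classically as follows (an illustrative stand-in for the paper's quantum procedure; all names and example labels are ours):

```python
def hamming(a, b):
    """Number of positions where two binary feature vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(features, labeled_examples):
    """Nearest-neighbour label by minimum Hamming distance over
    (feature_vector, label) pairs."""
    return min(labeled_examples, key=lambda ex: hamming(features, ex[0]))[1]
```

    The quantum algorithm's advantage lies in evaluating such distances over many stored examples in superposition; the decision rule itself is the one shown.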

  3. UAV Data Processing for Large Scale Topographical Mapping

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large scale topographical mapping in third world countries is a prominent challenge in the geospatial industry nowadays. On one side the demand is significantly increasing, while on the other hand it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 about Geospatial Information in Indonesia, large scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Usually large scale topographical mapping relies on conventional aerial survey campaigns in order to provide high resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of the so-called Unmanned Aerial Vehicle (UAV) open up alternative semi-photogrammetric aerial data acquisition possibilities suitable for a relatively small Area of Interest (AOI), i.e. <5,000 hectares. For detailed spatial planning purposes in Indonesia this area size can be used as a mapping unit, since planning usually concentrates at the sub-district (kecamatan) level. In this paper different camera and processing software systems will be further analyzed to identify the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage of Borobudur Temple as one of the Seven Wonders of the World. A detailed accuracy assessment will be concentrated on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) will be integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. Through this research, incorporating the optimum number of GCPs in the UAV photo data processing will increase the accuracy at a high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative geospatial
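
    A Ground Sampling Distance of the order quoted follows from the standard photogrammetric relation GSD = pixel pitch x flying height / focal length. A minimal sketch with illustrative values (the specific numbers are our assumptions, not parameters from the paper):

```python
def ground_sampling_distance(pixel_pitch_m, flying_height_m, focal_length_m):
    """Standard photogrammetric relation: ground footprint of one sensor
    pixel at nadir, GSD = pixel pitch * flying height / focal length."""
    return pixel_pitch_m * flying_height_m / focal_length_m
```

    For example, a 5 micron pixel pitch, 100 m flying height and 10 mm focal length yield a 5 cm GSD, consistent with the resolution mentioned in the abstract.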

  4. Inflation in de Sitter spacetime and CMB large scale anomaly

    NASA Astrophysics Data System (ADS)

    Zhao, Dong; Li, Ming-Hua; Wang, Ping; Chang, Zhe

    2015-09-01

    The influence of cosmological constant-type dark energy in the early universe is investigated. This is accommodated by a new dispersion relation in de Sitter spacetime. We perform a global fit to explore the cosmological parameter space by using the CosmoMC package with the recently released Planck TT and WMAP polarization datasets. Using the results from the global fit, we compute a new CMB temperature-temperature (TT) spectrum. The obtained TT spectrum has lower power compared with that based on the ΛCDM model at large scales. Supported by National Natural Science Foundation of China (11375203)

  5. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  6. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  7. Synthesis and sensing application of large scale bilayer graphene

    NASA Astrophysics Data System (ADS)

    Hong, Sung Ju; Yoo, Jung Hoon; Baek, Seung Jae; Park, Yung Woo

    2012-02-01

    We have synthesized large scale bilayer graphene by using Chemical Vapor Deposition (CVD) at atmospheric pressure. Bilayer graphene was grown using CH4, H2 and Ar gases. The growth temperature was 1050 °C. Conventional FET measurement shows ambipolar transfer characteristics. Results of Raman spectroscopy, Atomic Force Microscope (AFM) and Transmission Electron Microscope (TEM) indicate the film is bilayer graphene. In particular, adlayer structures, which interrupt uniformity, were reduced under low methane flow conditions. Furthermore, large-size CVD bilayer graphene films can be investigated for application in sensor devices. By using a conventional photolithography process, we have fabricated a device array structure and studied its sensing behavior.

  8. Implementation of Large Scale Integrated (LSI) circuit design software

    NASA Technical Reports Server (NTRS)

    Kuehlthau, R. L.; Pitts, E. R.

    1976-01-01

    Portions of the Computer Aided Design and Test (CADAT) system, a collection of Large Scale Integrated (LSI) circuit design programs, were modified and upgraded. Major modifications were made to the Mask Analysis Program in the form of additional operating commands and file processing options. Modifications were also made to the Artwork Interactive Design System (AIDS) to correct some deficiencies in the original program as well as to add several new command features related to improving the response of AIDS when dealing with large files. The remaining work was concerned with updating various programs within CADAT to incorporate the silicon on sapphire silicon gate technology.

  9. Large-scale sodium spray fire code validation (SOFICOV) test

    SciTech Connect

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850-m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium was sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions using the SPRAY and NACOM computer codes.

  10. Radiative shocks on large scale lasers. Preliminary results

    NASA Astrophysics Data System (ADS)

    Leygnac, S.; Bouquet, S.; Stehle, C.; Barroso, P.; Batani, D.; Benuzzi, A.; Cathala, B.; Chièze, J.-P.; Fleury, X.; Grandjouan, N.; Grenier, J.; Hall, T.; Henry, E.; Koenig, M.; Lafon, J. P. J.; Malka, V.; Marchet, B.; Merdji, H.; Michaut, C.; Poles, L.; Thais, F.

    2001-05-01

    Radiative shocks, whose structure is strongly influenced by the radiation field, are present in various astrophysical objects (circumstellar envelopes of variable stars, supernovae ...). Their modeling is very difficult and will thus benefit from experimental information. This approach is now possible using large scale lasers. Preliminary experiments were performed with the nanosecond LULI laser at Ecole Polytechnique (France) in 2000. A radiative shock was obtained in a low pressure xenon cell. The preparation of such experiments and their interpretation is performed using analytical calculations and numerical simulations.

  11. On the analysis of large-scale genomic structures.

    PubMed

    Oiwa, Nestor Norio; Goldman, Carla

    2005-01-01

    We apply methods from statistical physics (histograms, correlation functions, fractal dimensions, and singularity spectra) to characterize the large-scale structure of the distribution of nucleotides along genomic sequences. We discuss the role of the extension of noncoding segments ("junk DNA") for the genomic organization, and the connection between the coding segment distribution and the high-eukaryotic chromatin condensation. The following sequences taken from GenBank were analyzed: complete genome of Xanthomonas campestris, complete genome of yeast, chromosome V of Caenorhabditis elegans, and human chromosome XVII around gene BRCA1. The results are compared with random and periodic sequences and those generated by simple and generalized fractal Cantor sets. PMID:15858230
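
    A box-counting estimate of the fractal dimension, one of the statistics named above, can be sketched using the Cantor set that the paper employs as a reference model. This is our own illustration (function names are ours); integer box indices are used to avoid floating-point edge effects at box boundaries.

```python
import numpy as np

def cantor_numerators(depth):
    """Integer numerators n such that n / 3**depth enumerates the left
    endpoints of the middle-thirds Cantor set construction."""
    nums = [0]
    for _ in range(depth):
        nums = [3 * n for n in nums] + [3 * n + 2 for n in nums]
    return nums

def box_counting_dimension(numerators, depth, scales=(3, 9, 27, 81)):
    """Box-counting dimension estimate: slope of log N(occupied boxes)
    versus log(scale), with box indices computed exactly in integers."""
    denom = 3 ** depth
    log_scale, log_count = [], []
    for s in scales:
        occupied = len({n * s // denom for n in numerators})
        log_scale.append(np.log(s))
        log_count.append(np.log(occupied))
    slope, _ = np.polyfit(log_scale, log_count, 1)
    return float(slope)
```

    For the middle-thirds Cantor set the estimate recovers the exact dimension ln 2 / ln 3 ≈ 0.631; applied to positions of coding segments along a chromosome, the same procedure yields the empirical dimensions the paper discusses.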

  12. Large-scale genotoxicity assessments in the marine environment

    SciTech Connect

    Hose, J.E.

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. 31 refs., 2 tabs.

  13. Floodplain management in Africa: Large scale analysis of flood data

    NASA Astrophysics Data System (ADS)

    Padi, Philip Tetteh; Baldassarre, Giuliano Di; Castellarin, Attilio

    2011-01-01

    To mitigate a continuously increasing flood risk in Africa, sustainable actions are urgently needed. In this context, we describe a comprehensive statistical analysis of flood data in the African continent. The study refers to quality-controlled, large and consistent databases of flood data, i.e. maximum discharge values and time series of annual maximum flows. Probabilistic envelope curves are derived for the African continent by means of a large scale regional analysis. Moreover, some initial insights on the statistical characteristics of African floods are provided. The results of this study are relevant and can be used to get some indications to support flood management in Africa.

  14. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large scale composite manufacturing is an important goal in producing lightweight components for heavy lift launch vehicles. NASA and an industry team successfully employed a building block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges required to produce lightweight composite structures such as fairings for heavy lift launch vehicles.

  15. A multilevel optimization of large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, M. K.

    1976-01-01

    A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.

  16. Design of a large-scale CFB boiler

    SciTech Connect

    Darling, S.; Li, S.

    1997-12-31

    Many CFB boilers sized 100--150 MWe are in operation, and several others sized 150--250 MWe are in operation or under construction. The next step for CFB technology is the 300--400 MWe size range. This paper will describe Foster Wheeler's large-scale CFB boiler experience and the design for a 300 MWe CFB boiler. The authors will show how the design incorporates Foster Wheeler's unique combination of extensive utility experience and CFB boiler experience. All the benefits of CFB technology, which include low emissions, fuel flexibility, low maintenance and competitive cost, are now available in the 300--400 MWe size range.

  17. [National Strategic Promotion for Large-Scale Clinical Cancer Research].

    PubMed

    Toyama, Senya

    2016-04-01

    The number of clinical research studies by clinical cancer study groups has been decreasing in recent years in Japan. They say the reason is the abolition of donations to the groups from pharmaceutical companies after the Diovan scandal. But I suppose the fundamental problem is that a government-supported large-scale clinical cancer study system for evidence-based medicine (EBM) has not been fully established. Urgent establishment of such a system, based on a national strategy, is needed for cancer patients and for the promotion of public health. PMID:27220800

  18. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 A, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to polarizing optics or diffraction grating.

  19. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    NASA Astrophysics Data System (ADS)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application: stringers are welded onto a skin sheet in a T-joint configuration. The 0.6 mm thick parts are welded with a thin disc laser; seam lengths up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet, which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  20. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentations has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. The planet-size data brings serious challenges to the storage and computing technologies. Cloud computing is an alternative to crack the nut because it gives concurrent consideration to enable storage and high-performance computing on large-scale data. This work briefly introduces the data intensive computing system and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research to make the vast amount of diversification data meaningful and usable. PMID:24288665

  1. Generation of Large-Scale Winds in Horizontally Anisotropic Convection.

    PubMed

    von Hardenberg, J; Goluskin, D; Provenzale, A; Spiegel, E A

    2015-09-25

    We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection, confined between free-slip horizontal plates and rotating about a distant horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind. PMID:26451558

  2. Large-Scale Purification of Peroxisomes for Preparative Applications.

    PubMed

    Cramer, Jana; Effelsberg, Daniel; Girzalsky, Wolfgang; Erdmann, Ralf

    2015-09-01

    This protocol is designed for large-scale isolation of highly purified peroxisomes from Saccharomyces cerevisiae using two consecutive density gradient centrifugations. Instructions are provided for harvesting up to 60 g of oleic acid-induced yeast cells for the preparation of spheroplasts and generation of organellar pellets (OPs) enriched in peroxisomes and mitochondria. The OPs are loaded onto eight continuous 36%-68% (w/v) sucrose gradients. After centrifugation, the peak peroxisomal fractions are determined by measurement of catalase activity. These fractions are subsequently pooled and subjected to a second density gradient centrifugation using 20%-40% (w/v) Nycodenz. PMID:26330621

  3. Enabling large-scale biomedical analysis in the cloud.

    PubMed

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentations has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. The planet-size data brings serious challenges to the storage and computing technologies. Cloud computing is an alternative to crack the nut because it gives concurrent consideration to enable storage and high-performance computing on large-scale data. This work briefly introduces the data intensive computing system and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research to make the vast amount of diversification data meaningful and usable. PMID:24288665

  4. Monochromatic waves induced by large-scale parametric forcing.

    PubMed

    Nepomnyashchy, A; Abarzhi, S I

    2010-03-01

    We study the formation and stability of monochromatic waves induced by large-scale modulations in the framework of the complex Ginzburg-Landau equation with parametric nonresonant forcing dependent on the spatial coordinate. In the limiting case of forcing with very large characteristic length scale, analytical solutions for the equation are found and conditions of their existence are outlined. Stability analysis indicates that the interval of existence of a monochromatic wave can contain a subinterval where the wave is stable. We discuss potential applications of the model in rheology, fluid dynamics, and optics. PMID:20365907

  5. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    NASA Astrophysics Data System (ADS)

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-04-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing.

  6. THE LARGE-SCALE MAGNETIC FIELDS OF THIN ACCRETION DISKS

    SciTech Connect

    Cao Xinwu; Spruit, Hendrik C. E-mail: henk@mpa-garching.mpg.de

    2013-03-10

    A large-scale magnetic field threading an accretion disk is a key ingredient in the jet formation model. The most attractive scenario for the origin of such a large-scale field is the advection of the field by the gas in the accretion disk from the interstellar medium or a companion star. However, it is realized that outward diffusion of the accreted field is fast compared with the inward accretion velocity in a geometrically thin accretion disk if the value of the Prandtl number P_m is around unity. In this work, we revisit this problem considering the angular momentum of the disk to be removed predominantly by the magnetically driven outflows. The radial velocity of the disk is significantly increased due to the presence of the outflows. Using a simplified model for the vertical disk structure, we find that even moderately weak fields can cause sufficient angular momentum loss via a magnetic wind to balance outward diffusion. There are two equilibrium points, one at low field strengths corresponding to a plasma-beta at the midplane of order several hundred, and one for strong accreted fields, β ≈ 1. We surmise that the first is relevant for the accretion of weak, possibly external, fields through the outer parts of the disk, while the latter one could explain the tendency, observed in full three-dimensional numerical simulations, of strong flux bundles at the centers of disks to stay confined in spite of strong magnetorotational instability turbulence surrounding them.

  7. Development of large-scale functional brain networks in children.

    PubMed

    Supekar, Kaustubh; Musen, Mark; Menon, Vinod

    2009-07-01

    The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism. PMID:19621066
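
    The "small-world" properties compared here, clustering coefficient and characteristic path length, can be computed directly from an adjacency structure. A minimal sketch (our own illustration on an adjacency-set graph; assumes a connected, unweighted, undirected network):

```python
from collections import deque

def clustering_coefficient(adj, v):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def average_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs
```

    A small-world network combines high clustering (like a lattice) with short path lengths (like a random graph); comparing these two statistics between groups is what allows the global organization of children's and young adults' networks to be judged similar.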

  8. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing.

    PubMed

    Xiao, Bo; Pradhan, Sangram K; Santiago, Kevin C; Rutherford, Gugu N; Pradhan, Aswini K

    2016-01-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15 nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing. PMID:27072067

  9. NIC-based Reduction Algorithms for Large-scale Clusters

    SciTech Connect

    Petrini, F; Moody, A T; Fernandez, J; Frachtenberg, E; Panda, D K

    2004-07-30

    Efficient algorithms for reduction operations across a group of processes are crucial for good performance in many large-scale, parallel scientific applications. While previous algorithms limit processing to the host CPU, we utilize the programmable processors and local memory available on modern cluster network interface cards (NICs) to explore a new dimension in the design of reduction algorithms. In this paper, we present the benefits and challenges, design issues and solutions, analytical models, and experimental evaluations of a family of NIC-based reduction algorithms. Performance and scalability evaluations were conducted on the ASCI Linux Cluster (ALC), a 960-node, 1920-processor machine at Lawrence Livermore National Laboratory, which uses the Quadrics QsNet interconnect. We find NIC-based reductions on modern interconnects to be more efficient than host-based implementations in both scalability and consistency. In particular, at large scale (1812 processes), NIC-based reductions of small integer and floating-point arrays provided speedups of 121% and 39%, respectively, over the host-based, production-level MPI implementation.
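    The abstract does not spell out the reduction algorithms themselves, but the classic pattern such work builds on is a binomial-tree reduction that completes in log2(n) communication rounds. Below is a minimal single-process sketch of that pattern; in the NIC-based versions described above, the combine step would run on the network interface card rather than the host CPU.

    ```python
    def tree_reduce(values, op):
        """Binomial-tree reduction over a power-of-two process count:
        in round r (step = 2**r), process p combines the value held by
        process p + step whenever p is a multiple of 2*step.
        Completes in log2(n) rounds; rank 0 ends up with the result."""
        vals = list(values)
        n = len(vals)
        assert n & (n - 1) == 0, "power-of-two process count assumed"
        step = 1
        while step < n:
            for p in range(0, n, 2 * step):
                vals[p] = op(vals[p], vals[p + step])  # p "receives" from p + step
            step *= 2
        return vals[0]

    print(tree_reduce(range(8), lambda a, b: a + b))  # sum reduction over 8 "ranks"
    ```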

  10. Alignment of quasar polarizations with large-scale structures

    NASA Astrophysics Data System (ADS)

    Hutsemékers, D.; Braibant, L.; Pelgrims, V.; Sluse, D.

    2014-12-01

    We have measured the optical linear polarization of quasars belonging to Gpc scale quasar groups at redshift z ~ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is on the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths, while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel to their host large-scale structures. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID 092.A-0221. Table 1 is available in electronic form at http://www.aanda.org

  11. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    When it comes to large scale mapping of limited areas, especially cultural heritage sites, things become critical. Optical and non-optical sensors have been developed to sizes and weights that can be lifted by unmanned aerial platforms, e.g. LiDAR units. At the same time there is increasing emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase of available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of the UAS technologies of today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The available on-board cameras are also compared and evaluated for large scale mapping. Furthermore, a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is conducted and presented. As test data set, we use a rich optical and thermal data set from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations, acquired with different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  12. Exploring Cloud Computing for Large-scale Scientific Applications

    SciTech Connect

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require just a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  13. The combustion behavior of large scale lithium titanate battery

    PubMed Central

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety problems are a major obstacle to large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they ignited. The variation in flame size is described to characterize the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions. Based on the observed phenomena, the combustion process is divided into three basic stages; it becomes more complicated at higher SOC, with sudden ejections of smoke. The reason is a phase change in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112-121°C on the anode tab and 139-147°C on the upper surface for all cells, but the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. The internal short circuit and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  14. Large scale reconstruction of the solar coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Amari, T.; Aly, J.-J.; Chopin, P.; Canou, A.; Mikic, Z.

    2014-10-01

    It is now becoming necessary to access the global magnetic structure of the solar low corona at large scale in order to understand its physics, and more particularly the conditions of energization of the magnetic fields and the multiple connections between distant active regions (ARs) which may trigger eruptive events in an almost coordinated way. Various vector magnetographs, either on board spacecraft or ground-based, currently allow us to obtain vector synoptic maps, composite magnetograms made of multiple interacting ARs, and full disk magnetograms. We present a method recently developed for reconstructing the global solar coronal magnetic field as a nonlinear force-free magnetic field in spherical geometry, generalizing our previous results in Cartesian geometry. This method is implemented in the new code XTRAPOLS, which thus appears as an extension of our active region scale code XTRAPOL. We apply our method by performing a reconstruction at a specific time for which we have a set of composite data: a vector magnetogram provided by SDO/HMI, embedded in a larger full disk vector magnetogram provided by the same instrument, in turn embedded in a synoptic map provided by SOLIS. It turns out to be possible to access the large scale structure of the corona and its energetic contents, and also the AR scale, at which we recover the presence of a twisted flux rope in equilibrium.

  15. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such simulations would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10 Gb optical networks) and available middleware (e.g. the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  16. Large-scale anisotropy in stably stratified rotating flows

    SciTech Connect

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of approximately 1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with ~k_perp^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  17. Online education in a large scale rehabilitation institution.

    PubMed

    Mazzoleni, M Cristina; Rognoni, Carla; Pagani, Marco; Imbriani, Marcello

    2012-01-01

    Large scale, multiple-venue institutions face problems when delivering education to their healthcare staff. The present study is aimed at evaluating the feasibility of relying on e-learning for at least part of the training of the Salvatore Maugeri Foundation healthcare staff. The paper reports the results of the delivery of e-learning courses to the personnel over a span of 7 months in order to assess the attitude toward online course attendance, the proportion between administered online education and administered traditional education, and the economic sustainability of the online education delivery process. 37% of the total healthcare staff attended online courses, and 46% of nurses proved to be very active. The ratios between total number of credits and total number of courses for online and traditional education are 18268/5 and 20354/96, respectively. These results point out that e-learning is not at all a niche tool used (or usable) by a limited number of people. Economic sustainability, assessed via personnel work hours saved, has been demonstrated. When distance learning is appropriate, online education is an effective, sustainable and well accepted means to support and promote healthcare staff education in a large scale institution. PMID:22491113

  18. Systematic renormalization of the effective theory of Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-05-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contributions of short distance perturbations to the large scale density contrast δ and momentum density π(k) scale as k^2 and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.

  19. Semantic overlay network for large-scale spatial information indexing

    NASA Astrophysics Data System (ADS)

    Zou, Zhiqiang; Wang, Yue; Cao, Kai; Qu, Tianshan; Wang, Zhongmin

    2013-08-01

    The increased demand for online services of spatial information poses new challenges to the combined field of Computer Science and Geographic Information Science. Amongst others, these include fast indexing of spatial data in distributed networks. In this paper we propose a novel semantic overlay network for large-scale multi-dimensional spatial information indexing, called SON_LSII, which has a hybrid structure integrating a semantic quad-tree and a Chord ring. SON_LSII is a small world overlay network that achieves a very competitive trade-off between indexing efficiency and maintenance overhead. To create SON_LSII, we use an effective semantic clustering strategy that considers two aspects: the semantics of the spatial information that each peer holds in the overlay network, and physical network performance. Based on SON_LSII, a mapping method is used to reduce the multi-dimensional features into a single dimension, and an efficient indexing algorithm is presented to support complex range queries of the spatial information with a massive number of concurrent users. The results from extensive experiments demonstrate that SON_LSII is superior to existing overlay networks in various respects, including scalability, maintenance, rate of indexing hits, indexing logical hops, and adaptability. Thus, the proposed SON_LSII can be used for large-scale spatial information indexing.
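    The abstract does not specify which dimension-reduction mapping SON_LSII uses. A Morton (Z-order) code is one standard way to map multi-dimensional keys onto a one-dimensional ring-style index such as Chord, sketched here purely as an assumed illustration, not as the paper's method.

    ```python
    def morton_encode(x, y, bits=16):
        """Interleave the bits of two non-negative integer coordinates into one
        Z-order key: bit i of x lands at position 2*i, bit i of y at 2*i + 1.
        Points close together in 2-D tend to stay close on the 1-D key,
        which preserves some locality for range queries on a ring index."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (2 * i)
            key |= ((y >> i) & 1) << (2 * i + 1)
        return key

    print(morton_encode(3, 5))  # x = 0b011, y = 0b101 interleave to 0b100111
    ```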

  20. The large-scale morphology of IRAS galaxies

    NASA Technical Reports Server (NTRS)

    Babul, Arif; Starkman, Glenn D.; Strauss, Michael A.

    1993-01-01

    At present, visual inspection is the only method for comparing the large-scale morphologies in the distribution of galaxies to those in model universes generated by N-body simulations. To remedy the situation, we have developed a set of three structure functions (S1, S2, S3) that quantify the degree of large-scale prolateness, oblateness, and sphericity/uniformity of a 3-D particle distribution and have applied them to a volume-limited (<= 4000 km/s) sample of 699 IRAS galaxies with f60 > 1.2 Jy. To determine the structure functions, we randomly select 500 galaxies as origins of spherical windows of radius R_w, locate the centroid of the galaxies in the window (assuming all galaxies have equal mass) and then compute the principal moments of inertia (I1, I2, I3) about the centroid. Each S_i is a function of I2/I1 and I3/I1. S1, S2, and S3 tend to unity for highly prolate, oblate, and uniform distributions, respectively, and tend to zero otherwise. The resulting 500 values of S_i at each scale R_w are used to construct a histogram.
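    The windowed moment-of-inertia computation described in this abstract can be sketched in a few lines. The exact functional forms of S1-S3 are not given here, so the sketch below stops at the principal moments and their ratios I2/I1 and I3/I1; to avoid a general eigensolver it assumes a point configuration whose products of inertia vanish, so the diagonal entries of the inertia tensor are already the principal moments.

    ```python
    def principal_moments_diag(points):
        """Inertia tensor about the centroid for equal-mass (unit-mass) points.
        Simplifying assumption for this sketch: the off-diagonal (product of
        inertia) terms vanish, so the diagonal entries are the principal
        moments, returned sorted in ascending order (I1 <= I2 <= I3)."""
        n = len(points)
        cx = sum(p[0] for p in points) / n
        cy = sum(p[1] for p in points) / n
        cz = sum(p[2] for p in points) / n
        Ixx = Iyy = Izz = 0.0
        Ixy = Ixz = Iyz = 0.0
        for x, y, z in points:
            dx, dy, dz = x - cx, y - cy, z - cz
            Ixx += dy * dy + dz * dz
            Iyy += dx * dx + dz * dz
            Izz += dx * dx + dy * dy
            Ixy -= dx * dy
            Ixz -= dx * dz
            Iyz -= dy * dz
        assert abs(Ixy) + abs(Ixz) + abs(Iyz) < 1e-9, "off-diagonal terms assumed zero"
        return sorted([Ixx, Iyy, Izz])

    # A prolate (cigar-like) configuration stretched along the x-axis:
    pts = [(-4, 0, 0), (4, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    I1, I2, I3 = principal_moments_diag(pts)
    # Prolate case: the smallest moment is about the long axis, so I2/I1 and I3/I1 are large.
    print(I2 / I1, I3 / I1)
    ```

    For a general galaxy window the inertia tensor is not diagonal and the principal moments are its eigenvalues; the structure of the computation is otherwise identical.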

  1. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 Msolar/h < Mhalo < 10^13 Msolar/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
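    The halo-to-HI mass relation is described only as "phenomenological". The sketch below shows the general shape such relations often take, a power law restricted to the populated halo-mass range; the coefficients and cutoffs here are illustrative placeholders, not the relation used in the paper.

    ```python
    def hi_mass(m_halo, a=0.1, alpha=0.6, m_min=1e8, m_max=1e13):
        """Toy halo-to-HI mass relation (masses in Msolar/h):
        M_HI = a * M_halo**alpha inside the populated halo mass range,
        zero outside it. All parameter values are illustrative only."""
        if not (m_min <= m_halo <= m_max):
            return 0.0
        return a * m_halo ** alpha

    print(hi_mass(1e10))  # a mid-range halo is assigned a nonzero HI mass
    print(hi_mass(1e14))  # a halo above the cutoff receives no HI
    ```

    In a pipeline like the one described, this function would be applied to every halo (resolved or drawn from the conditional mass function) before gridding the HI mass into intensity maps.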

  2. IP over optical multicasting for large-scale video delivery

    NASA Astrophysics Data System (ADS)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, as it can significantly improve bandwidth efficiency. However, the scalability and the signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The Chinese 3TNet project aimed to build a high performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss the challenges of such IP over optical multicasting for video delivery.

  3. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  4. Simulating subsurface heterogeneity improves large-scale water resources predictions

    NASA Astrophysics Data System (ADS)

    Hartmann, A. J.; Gleeson, T.; Wagener, T.; Wada, Y.

    2014-12-01

    Heterogeneity is abundant everywhere across the hydrosphere. It exists in the soil, the vadose zone and the groundwater. In large-scale hydrological models, subsurface heterogeneity is usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, without incorporating any sub-grid variability. This may lead to unreliable predictions when the models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this study we use a novel, large-scale model that takes sub-grid heterogeneity into account in the simulation of groundwater recharge by using statistical distribution functions. We choose all regions over Europe that are underlain by carbonate rock (~35% of the total area), because the well understood dissolvability of carbonate rocks (karstification) allows for assessing the strength of subsurface heterogeneity. Applying the model with historic data and future climate projections, we show that subsurface heterogeneity lowers the vulnerability of groundwater recharge to hydro-climatic extremes and future climate change. Comparing our simulations with the PCR-GLOBWB model, we quantify the deviations between the simulations for different sub-regions in Europe.
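    The core argument here, that a nonlinear process evaluated at a grid-cell mean differs from the mean of the process taken over a sub-grid distribution, can be illustrated with a toy recharge rule. The distribution, threshold rule, and all numbers below are invented for illustration and are not taken from the study.

    ```python
    import random

    def recharge(storage_capacity, rainfall=50.0):
        """Toy nonlinear recharge rule: any rainfall exceeding a cell's
        storage capacity becomes groundwater recharge (all units in mm)."""
        return max(rainfall - storage_capacity, 0.0)

    random.seed(0)
    # Sub-grid heterogeneity: storage capacities drawn from a distribution ...
    capacities = [random.uniform(0.0, 120.0) for _ in range(100000)]
    heterogeneous = sum(recharge(c) for c in capacities) / len(capacities)
    # ... versus a single "representative" mean capacity for the whole grid cell.
    homogeneous = recharge(sum(capacities) / len(capacities))
    # The heterogeneous cell produces recharge (low-capacity patches spill over),
    # while the mean-parameter cell need not produce any at all.
    print(heterogeneous, homogeneous)
    ```

    This is Jensen's inequality at work: averaging the parameters before applying a nonlinear function is not the same as averaging the function's output, which is why sub-grid distribution functions can change large-scale recharge estimates.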

  5. Evaluating large scale orthophotos derived from high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Ioannou, Maria Teresa; Georgopoulos, Andreas

    2013-08-01

    For the purposes of a research project for the compilation of the archaeological and environmental digital map of the island of Antiparos, the production of updated large scale orthophotos was required. Hence suitable stereoscopic high resolution satellite imagery was acquired: two GeoEye-1 stereopairs were enough to cover this small island of the Cyclades complex in the central Aegean. For the orientation of the two stereopairs, numerous ground control points were determined using GPS observations; some of them would also serve as check points. The images were processed using commercial stereophotogrammetric software suitable for processing satellite stereoscopic imagery. The results of the orientations are evaluated, and the digital terrain model was produced using automated and manual procedures. The DTM was checked both internally and externally by comparison to other available DTMs. In this paper the procedures for producing the desired orthophotography are critically presented, and the final result is evaluated for its accuracy, completeness and efficiency. The final product is also compared against the orthophotography produced by Ktimatologio S.A. using aerial images in 2007. The orthophotography produced has been evaluated metrically using the available check points, while qualitative evaluation has also been performed. The results are presented, and a critical assessment of the usability of satellite imagery for the production of large scale orthophotos is attempted.

  6. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010 a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the inlet stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, as well as to evaluate simple, passive, bleedless inlet boundary layer control options. During this effort two models were tested: a dual stream inlet intended to model potential flight hardware, and a single stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.

  7. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  8. Power suppression at large scales in string inflation

    SciTech Connect

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  9. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues and challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values contain information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has robust performance in both simulated data and proteomics data from a large clinical study. Because variation in patients' sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  10. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGESBeta

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  11. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds, and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in the development of adaptive neuro-fuzzy inference system (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397
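    Why per-thermal-class models can outperform a single global model is easy to see with synthetic data (the two-class setup and origin-through least-squares fits below are illustrative stand-ins for the ANFIS models, not the study's actual method):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stream reaches in two classes whose flow-health relationship differs in sign.
n = 400
cls = rng.integers(0, 2, size=n)
flow = rng.normal(size=n)
health = np.where(cls == 0, 1.5 * flow, -1.5 * flow) + 0.3 * rng.normal(size=n)

def fit_predict(x, y):
    """Least-squares slope through the origin, used as a stand-in model."""
    slope = (x * y).sum() / (x * x).sum()
    return slope * x

# Global model: one fit across all reaches pools opposing relationships.
global_err = np.abs(health - fit_predict(flow, health)).mean()

# Class-specific models: one fit per thermal class.
resid = np.empty(n)
for c in (0, 1):
    m = cls == c
    resid[m] = health[m] - fit_predict(flow[m], health[m])
class_err = np.abs(resid).mean()
```

    The class-specific error is far smaller because the global fit averages two opposite-signed relationships away.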

  12. Large scale CMB anomalies from thawing cosmic strings

    NASA Astrophysics Data System (ADS)

    Ringeval, Christophe; Yamauchi, Daisuke; Yokoyama, Jun'ichi; Bouchet, François R.

    2016-02-01

    Cosmic strings formed during inflation are expected to be either diluted over super-Hubble distances, i.e., invisible today, or to have crossed our past light cone very recently. We discuss the latter situation in which a few strings imprint their signature in the Cosmic Microwave Background (CMB) anisotropies after recombination. Being almost frozen in the Hubble flow, these strings are quasi static and evade almost all of the previously derived constraints on their tension while being able to source large scale anisotropies in the CMB sky. Using a local variance estimator on thousands of numerically simulated Nambu-Goto all-sky maps, we compute the expected signal and show that it can mimic a dipole modulation at large angular scales while being negligible at small angles. Interestingly, such a scenario generically produces one cold spot from the thawing of a cosmic string loop. Mixed with anisotropies of inflationary origin, we find that a few strings of tension GU = O(1) × 10⁻⁶ match the amplitude of the dipole modulation reported in the Planck satellite measurements and could be at the origin of other large scale anomalies.

  13. The combustion behavior of large scale lithium titanate battery

    NASA Astrophysics Data System (ADS)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety remains a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to analyze the combustion behavior directly, while the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions in depth. Based on these observations, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with sudden jets of smoke ejected. The reason is that a phase change occurs in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112-121°C on the anode tab and 139-147°C on the upper surface for all cells, but the heating time and combustion time become shorter with increasing SOC. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference.

  14. Large-scale magnetic fields in magnetohydrodynamic turbulence.

    PubMed

    Alexakis, Alexandros

    2013-02-22

    High Reynolds number magnetohydrodynamic turbulence in the presence of zero-flux large-scale magnetic fields is investigated as a function of the magnetic field strength. For a variety of flow configurations, the energy dissipation rate ε follows the scaling ε ∝ U_rms³/ℓ even when the large-scale magnetic field energy is twenty times larger than the kinetic energy. A further increase of the magnetic energy showed a transition to the ε ∝ U_rms² B_rms/ℓ scaling, implying that magnetic shear becomes more efficient at this point in cascading the energy than the velocity fluctuations. Strongly helical configurations form nonturbulent helicity condensates that deviate from these scalings. Weak turbulence scaling was absent from the investigation. Finally, the magnetic energy spectra support the Kolmogorov spectrum k^(-5/3) while kinetic energy spectra are closer to the Iroshnikov-Kraichnan spectrum k^(-3/2) as observed in the solar wind. PMID:23473153

  15. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeating the execution many times with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy because the change from one scenario to the next is usually minor, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed portions of a scenario in later executions while producing exactly the same results as a whole simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments with a Tokyo traffic simulation, exact-differential simulation achieves a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.
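    A toy sketch of the exact-differential idea (heavily simplified; the one-dimensional "simulation" below is a hypothetical stand-in for a traffic simulator): reuse cached per-step results until the first step affected by the scenario change, then recompute from there, so the output matches a full re-run exactly.

```python
def simulate(speed_limits, cache=None):
    """Toy time-stepped simulation; `cache` is the trace of a previous run."""
    state, trace, diverged = 0.0, [], False
    for t, limit in enumerate(speed_limits):
        if not diverged and cache is not None and t < len(cache) and cache[t][0] == limit:
            state = cache[t][1]         # scenario identical so far: reuse cached state
        else:
            diverged = True             # first changed step: recompute from here on
            state = state + 0.1 * limit
        trace.append((limit, state))
    return trace

base = simulate([50, 50, 30, 50])
diff = simulate([50, 50, 20, 50], cache=base)  # differential re-run reusing `base`
full = simulate([50, 50, 20, 50])              # whole re-run from scratch
```

    The differential run yields exactly the same trace as the whole run while recomputing only the suffix after the first changed step.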

  16. Large-scale mapping of mutations affecting zebrafish development

    PubMed Central

    Geisler, Robert; Rauch, Gerd-Jörg; Geiger-Rudolph, Silke; Albrecht, Andrea; van Bebber, Frauke; Berger, Andrea; Busch-Nentwich, Elisabeth; Dahm, Ralf; Dekens, Marcus PS; Dooley, Christopher; Elli, Alexandra F; Gehring, Ines; Geiger, Horst; Geisler, Maria; Glaser, Stefanie; Holley, Scott; Huber, Matthias; Kerr, Andy; Kirn, Anette; Knirsch, Martina; Konantz, Martina; Küchler, Axel M; Maderspacher, Florian; Neuhauss, Stephan C; Nicolson, Teresa; Ober, Elke A; Praeg, Elke; Ray, Russell; Rentzsch, Brit; Rick, Jens M; Rief, Eva; Schauerte, Heike E; Schepp, Carsten P; Schönberger, Ulrike; Schonthaler, Helia B; Seiler, Christoph; Sidi, Samuel; Söllner, Christian; Wehner, Anja; Weiler, Christian; Nüsslein-Volhard, Christiane

    2007-01-01

    Background Large-scale mutagenesis screens in the zebrafish employing the mutagen ENU have isolated several hundred mutant loci that represent putative developmental control genes. In order to realize the potential of such screens, systematic genetic mapping of the mutations is necessary. Here we report on a large-scale effort to map the mutations generated in mutagenesis screening at the Max Planck Institute for Developmental Biology by genome scanning with microsatellite markers. Results We have selected a set of microsatellite markers and developed methods and scoring criteria suitable for efficient, high-throughput genome scanning. We have used these methods to successfully obtain a rough map position for 319 mutant loci from the Tübingen I mutagenesis screen and subsequent screening of the mutant collection. For 277 of these the corresponding gene is not yet identified. Mapping was successful for 80 % of the tested loci. By comparing 21 mutation and gene positions of cloned mutations we have validated the correctness of our linkage group assignments and estimated the standard error of our map positions to be approximately 6 cM. Conclusion By obtaining rough map positions for over 300 zebrafish loci with developmental phenotypes, we have generated a dataset that will be useful not only for cloning of the affected genes, but also to suggest allelism of mutations with similar phenotypes that will be identified in future screens. Furthermore this work validates the usefulness of our methodology for rapid, systematic and inexpensive microsatellite mapping of zebrafish mutations. PMID:17212827

  17. The effective field theory of cosmological large scale structures

    SciTech Connect

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s² ≈ 10⁻⁶ c² and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)⁴. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc⁻¹.

  18. Halo detection via large-scale Bayesian inference

    NASA Astrophysics Data System (ADS)

    Merson, Alexander I.; Jasche, Jens; Abdalla, Filipe B.; Lahav, Ofer; Wandelt, Benjamin; Jones, D. Heath; Colless, Matthew

    2016-08-01

    We present a proof-of-concept of a novel and fully Bayesian methodology designed to detect haloes of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm, HAmiltonian Density Estimation and Sampling algorithm (HADES), and a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter haloes. To demonstrate the capability of our approach, we construct a realistic galaxy mock catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and corresponding quantification of uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information, we can sum over the HADES density field realisations to construct maps of detection probabilities and demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of haloes in the mock catalogue increases as a function of the signal to noise of the local galaxy observations. Our proposed methodology can easily be extended to account for more complex scientific questions and is a promising novel tool to analyse the cosmic large-scale structure in observations.
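    The final marginalisation step — summing a halo-detection probability over density-field realisations — can be sketched as follows (a toy 1-D stand-in: the sample array and logistic calibration are hypothetical, not the HADES posterior or the simulation-calibrated relation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for posterior density-field samples: 500 realisations over 64 cells.
samples = rng.normal(loc=1.0, scale=0.3, size=(500, 64))

def p_halo_given_density(delta, threshold=1.2, width=0.1):
    """Toy calibration of P(halo above a mass threshold | local density)."""
    return 1.0 / (1.0 + np.exp(-(delta - threshold) / width))

# Blackwell-Rao style estimator: average the conditional detection
# probability over the density realisations to get a probability map.
detection_map = p_halo_given_density(samples).mean(axis=0)
```

    Averaging over realisations, rather than using a single density estimate, propagates the observational uncertainty into the detection map.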

  19. Large-scale electromagnetic modeling for multiple inhomogeneous domains

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Endo, M.; Cuma, M.

    2008-12-01

    We develop a new formulation of the integral equation (IE) method for three-dimensional (3D) electromagnetic (EM) field computation in large-scale models with multiple inhomogeneous domains. This problem arises in many practical applications including modeling the EM fields within the complex geoelectrical structures in geophysical exploration. In geophysical applications, it is difficult to describe an earth structure using a horizontally layered background conductivity model, which is required for the efficient implementation of the conventional IE approach. As a result, a large domain of interest with anomalous conductivity distribution needs to be discretized, which complicates the computations. The new method allows us to consider multiple inhomogeneous domains, where the conductivity distribution is different from that of the background, and to use independent discretizations for different domains. This reduces dramatically the computational resources required for large-scale modeling. In addition, by using this method, we can analyze the response of each domain separately without an inappropriate use of the superposition principle for the EM field calculations. The method was carefully tested for modeling the marine controlled-source electromagnetic (MCSEM) fields for complex geoelectrical structures with multiple inhomogeneous domains, such as a seafloor with rough bathymetry, salt domes, and reservoirs. We have also used this technique to investigate the return induction effects from regional geoelectrical structures, e.g., seafloor bathymetry and salt domes, which can distort the EM response from the geophysical exploration target.

  20. Large-scale network-level processes during entrainment.

    PubMed

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-03-15

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4-30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band "disconnecting" visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557
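    Global density — one of the graph measures referred to here — is simply the fraction of possible edges that survive thresholding the connectivity matrix; a minimal sketch with a random stand-in for MEG source-level connectivity (the matrix size and threshold are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric connectivity between N sources (no self-connections).
N = 20
coh = rng.random((N, N))
coh = (coh + coh.T) / 2.0
np.fill_diagonal(coh, 0.0)

adj = coh > 0.7                        # binarise at an arbitrary threshold
n_edges = int(adj.sum()) // 2          # each undirected edge is counted twice
density = n_edges / (N * (N - 1) / 2)  # fraction of all possible edges present
```

    A drop in this quantity for edges attached to visual sources is what "disconnecting visual cortex from the rest of the network" refers to.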

  2. Effects of subsurface heterogeneity on large-scale hydrological predictions

    NASA Astrophysics Data System (ADS)

    Hartmann, Andreas; Gleeson, Tom; Wagener, Thorsten; Wada, Yoshihide

    2015-04-01

    Heterogeneity is abundant everywhere across the hydrosphere. It exists in the soil, the vadose zone and the groundwater, producing preferential flow and complex threshold behavior. In large-scale hydrological models, subsurface heterogeneity is usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, without incorporating any sub-grid variability. This may lead to unreliable predictions when the models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this study we use a novel large-scale model that takes sub-grid heterogeneity into account in the simulation of groundwater recharge by using statistical distribution functions. We choose all regions over Europe that are underlain by carbonate rock (~35% of the total area), because the well-understood dissolvability of carbonate rocks (karstification) allows us to assess the strength of subsurface heterogeneity. Applying the model with historic data and future climate projections, we show that subsurface heterogeneity results in (1) larger present-day groundwater recharge and (2) a greater vulnerability to climate in terms of long-term decrease and hydrological extremes.
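    Why sub-grid heterogeneity can raise simulated recharge is easy to see with a toy distribution-function sketch (the threshold recharge rule and lognormal storage capacities are illustrative assumptions, not the model's actual equations):

```python
import numpy as np

rng = np.random.default_rng(3)

precip = 10.0  # event precipitation [mm]

# Sub-grid variability of soil storage capacity within one model cell [mm].
capacities = rng.lognormal(mean=2.5, sigma=0.8, size=100_000)

# Threshold (convex) recharge response: only the excess over storage recharges.
hetero = np.maximum(precip - capacities, 0.0).mean()  # integrate over distribution
homo = max(precip - capacities.mean(), 0.0)           # single representative value
```

    Because the response is convex, averaging over the capacity distribution yields more recharge than applying the cell-mean capacity, which never exceeds the threshold in this toy setup.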

  3. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    PubMed Central

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-01-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing. PMID:27072067

  4. Very large-scale motions in a turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Lee, Jae Hwa; Jang, Seong Jae; Sung, Hyung Jin

    2011-11-01

    Direct numerical simulation of a turbulent pipe flow with ReD=35000 was performed to investigate the spatially coherent structures associated with very large-scale motions. The corresponding friction Reynolds number, based on pipe radius R, is R+=934, and the computational domain length is 30 R. The computed mean flow statistics agree well with previous DNS data at ReD=44000 and 24000. Inspection of the instantaneous fields and two-point correlation of the streamwise velocity fluctuations showed that the very long meandering motions exceeding 25R exist in logarithmic and wake regions, and the streamwise length scale is almost linearly increased up to y/R ~0.3, while the structures in the turbulent boundary layer only reach up to the edge of the log-layer. Time-resolved instantaneous fields revealed that the hairpin packet-like structures grow with continuous stretching along the streamwise direction and create the very large-scale structures with meandering in the spanwise direction, consistent with the previous conceptual model of Kim & Adrian (1999). This work was supported by the Creative Research Initiatives of NRF/MEST of Korea (No. 2011-0000423).
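    The two-point correlation diagnostic used to identify such long motions can be sketched on a synthetic signal (the 25R wavelength and noise level are illustrative, not the DNS data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic streamwise velocity fluctuation: a 25R-wavelength component plus noise.
x = np.linspace(0.0, 30.0, 3000)  # streamwise coordinate in pipe radii R
u = np.sin(2.0 * np.pi * x / 25.0) + 0.5 * rng.normal(size=x.size)
u -= u.mean()

# Two-point autocorrelation R_uu, normalised by the variance at zero separation.
corr = np.correlate(u, u, mode="full")[u.size - 1:]
corr /= corr[0]
```

    A slowly decaying R_uu that remains positive over many radii is the statistical signature of very large-scale motions.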

  5. Large-scale climatic control on European precipitation

    NASA Astrophysics Data System (ADS)

    Lavers, David; Prudhomme, Christel; Hannah, David

    2010-05-01

    Precipitation variability has a significant impact on society. Sectors such as agriculture and water resources management are reliant on predictable and reliable precipitation supply with extreme variability having potentially adverse socio-economic impacts. Therefore, understanding the climate drivers of precipitation is of human relevance. This research examines the strength, location and seasonality of links between precipitation and large-scale Mean Sea Level Pressure (MSLP) fields across Europe. In particular, we aim to evaluate whether European precipitation is correlated with the same atmospheric circulation patterns or if there is a strong spatial and/or seasonal variation in the strength and location of centres of correlations. The work exploits time series of gridded ERA-40 MSLP on a 2.5˚×2.5˚ grid (0˚N-90˚N and 90˚W-90˚E) and gridded European precipitation from the Ensemble project on a 0.5°×0.5° grid (36.25˚N-74.25˚N and 10.25˚W-24.75˚E). Monthly Spearman rank correlation analysis was performed between MSLP and precipitation. During winter, a significant MSLP-precipitation correlation dipole pattern exists across Europe. Strong negative (positive) correlation located near the Icelandic Low and positive (negative) correlation near the Azores High pressure centres are found in northern (southern) Europe. These correlation dipoles resemble the structure of the North Atlantic Oscillation (NAO). The reversal in the correlation dipole patterns occurs at the latitude of central France, with regions to the north (British Isles, northern France, Scandinavia) having a positive relationship with the NAO, and regions to the south (Italy, Portugal, southern France, Spain) exhibiting a negative relationship with the NAO. In the lee of mountain ranges of eastern Britain and central Sweden, correlation with North Atlantic MSLP is reduced, reflecting a reduced influence of westerly flow on precipitation generation as the mountains act as a barrier to moist
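    The monthly Spearman correlation between an MSLP grid point and precipitation can be sketched as follows (synthetic series standing in for the ERA-40 and ENSEMBLES grids; Spearman is implemented directly as the Pearson correlation of ranks, which is valid here because the continuous series have no ties):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for tie-free series: Pearson on the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(2)
mslp = rng.normal(size=480)                  # 40 years of monthly MSLP anomalies
precip = -0.6 * mslp + rng.normal(size=480)  # synthetic negative dependence

rho = spearman(mslp, precip)
```

    Repeating this for every (MSLP grid point, precipitation grid point) pair and mapping the strongest correlations is what reveals the dipole structure described above.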

  6. Linear complexity integral-equation based methods for large-scale electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Chai, Wenwen

    In general, to solve problems with N parameters, the optimal computational complexity is linear, O(N). However, for most computational electromagnetic methods, the complexity is higher than O(N). In this work, we introduced and further developed the H- and H2-matrix based mathematical framework to break the computational barrier of existing integral-equation (IE)-based methods for large-scale electromagnetic analysis. Our significant contributions include the first dense matrix inversion and LU factorization of O(N) complexity for large-scale 3-D circuit extraction, and a fast direct integral equation solver that outperforms existing direct solvers for large-scale electrodynamic analysis with millions of unknowns and ~100 wavelengths. The major contributions of this work are: (1) Direct matrix solution of linear complexity for 3-D integrated circuit (IC) and package extraction: • O(N)-complexity dense matrix inversion and LU factorization algorithms and their applications to capacitance extraction and impedance extraction of large-scale 3-D circuits; • O(N) direct matrix solution of highly irregular matrices consisting of both dense and sparse matrix blocks, arising from full-wave analysis of general 3-D circuits with lossy conductors in multiple dielectrics. (2) Fast H- and H2-based IE solvers for large-scale electrodynamic analysis: • a theoretical proof of the error-bounded low-rank representation of electrodynamic integral operators; • a fast H2-based iterative solver with O(N) computational cost and controlled accuracy from small to tens of wavelengths; • a fast H-based direct solver with computational cost minimized subject to accuracy; • findings on how to reduce the complexity of H- and H2-based methods for electrodynamic analysis, also applicable to many other fast IE solvers. (3) Fast algorithms for accelerating H- and H2-based iterative and direct solvers: • optimal H-based representation and its applications from
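    The error-bounded low-rank property underlying H- and H2-matrices can be illustrated with a truncated SVD of a kernel block between two well-separated clusters (the 1/r kernel and geometry below are a generic illustration, not the thesis's electrodynamic operator):

```python
import numpy as np

# Interaction block of a 1/r-type kernel between two well-separated point clusters.
src = np.linspace(0.0, 1.0, 200)
obs = np.linspace(5.0, 6.0, 200)
block = 1.0 / np.abs(obs[:, None] - src[None, :])

# Numerical rank at a 1e-10 tolerance is far below the block dimension,
# which is what lets hierarchical-matrix solvers reach near-linear complexity.
u, s, vt = np.linalg.svd(block)
k = int((s / s[0] > 1e-10).sum())
approx = (u[:, :k] * s[:k]) @ vt[:k]
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
```

    Storing the two rank-k factors instead of the full 200×200 block is the compression step; hierarchical subdivision applies it recursively to all admissible blocks.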

  7. Penetration of Large Scale Electric Field to Inner Magnetosphere

    NASA Astrophysics Data System (ADS)

    Chen, S. H.; Fok, M. C. H.; Sibeck, D. G.; Wygant, J. R.; Spence, H. E.; Larsen, B.; Reeves, G. D.; Funsten, H. O.

    2015-12-01

    The direct penetration of large scale global electric fields to the inner magnetosphere is a critical element in controlling how the background thermal plasma populates the radiation belts. These plasma populations provide the source of particles and free energy needed for the generation and growth of various plasma waves that, at critical points of resonance in time and phase space, can scatter or energize radiation belt particles to regulate the flux level of relativistic electrons in the system. At high geomagnetic activity levels, the distribution of large scale electric fields serves as an important indicator of how the prevalence of strong wave-particle interactions extends over local times and radial distances. To understand the complex relationship between the global electric fields and thermal plasmas, particularly due to the ionospheric dynamo and magnetospheric convection effects, and their relation to geomagnetic activity, we analyze electric field and cold plasma measurements from the Van Allen Probes over a period of more than two years and simulate a geomagnetic storm event using the Coupled Inner Magnetosphere-Ionosphere Model (CIMI). Our statistical analysis of the Van Allen Probes measurements and CIMI simulations of the March 17, 2013 storm event indicate that: (1) the global dawn-dusk electric field can penetrate the inner magnetosphere inside the inner belt below L~2. (2) Stronger convection occurred in the dusk and midnight sectors than in the noon and dawn sectors. (3) Strong convection at multiple locations exists at all activity levels but is more complex at higher activity levels. (4) At high activity levels, the strongest convection occurs in the midnight sector at larger distances from the Earth and in the dusk sector at closer distances. (5) Two plasma populations of distinct ion temperature anisotropies are divided at L-shell ~2, indicating distinct heating mechanisms between the inner and outer radiation belts. (6) CIMI

  8. Autonomic Computing Paradigm For Large Scale Scientific And Engineering Applications

    NASA Astrophysics Data System (ADS)

    Hariri, S.; Yang, J.; Zhang, Y.

    2005-12-01

    Large-scale distributed scientific applications are highly adaptive and heterogeneous in terms of their computational requirements. The computational complexity associated with each computational region or domain varies continuously and dramatically, both in space and time, throughout the whole life cycle of the application execution. Furthermore, the underlying distributed computing environment is similarly complex and dynamic in the availability and capacity of its computing resources. Together, these challenges make the current paradigms, which are based on passive components and static compositions, ineffectual. The Autonomic Computing paradigm efficiently addresses the complexity and dynamism of large-scale scientific and engineering applications and realizes their self-management. In this presentation, we present an Autonomic Runtime Manager (ARM) that supports the development of autonomic applications. The ARM includes two modules: an online monitoring and analysis module and an autonomic planning and scheduling module. The ARM behaves as a closed-loop control system that dynamically controls and manages the execution of the applications at runtime. It regularly senses the state changes of both the applications and the underlying computing resources. It then uses this runtime information and prior knowledge about the application's behavior and physics to identify appropriate solution methods as well as the required computing and storage resources. Consequently, this approach enables us to develop autonomic applications capable of self-management and self-optimization. We have developed and implemented the autonomic computing paradigms for several large-scale applications such as wildfire simulations, simulations of flow through variably saturated geologic formations, and life sciences. The distributed wildfire simulation models the wildfire spread behavior by considering such factors as fuel
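    The closed-loop sense-analyze-plan-execute cycle can be sketched as a toy scaling controller (the utilisation thresholds and workload model are illustrative assumptions, not the ARM's actual policy):

```python
def autonomic_loop(load_trace, capacity=4):
    """Toy ARM-style controller: sense utilisation, then scale workers up or down."""
    allocations, workers = [], 1
    for load in load_trace:
        utilisation = load / workers              # sense and analyze current state
        if utilisation > 0.8 and workers < capacity:
            workers += 1                          # plan/execute: scale out
        elif utilisation < 0.3 and workers > 1:
            workers -= 1                          # plan/execute: scale in
        allocations.append(workers)
    return allocations
```

    Running the controller on a rising-then-falling load shows resources tracking demand: `autonomic_loop([0.5, 1.0, 2.0, 2.0, 0.2, 0.2])` yields `[1, 2, 3, 3, 2, 1]`.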

  9. Climatological context for large-scale coral bleaching

    NASA Astrophysics Data System (ADS)

    Barton, A. D.; Casey, K. S.

    2005-12-01

    Large-scale coral bleaching was first observed in 1979 and has occurred throughout virtually all of the tropics since that time. Severe bleaching may result in the loss of live coral and in a decline of the integrity of the impacted coral reef ecosystem. Despite the extensive scientific research and increased public awareness of coral bleaching, uncertainties remain about the past and future of large-scale coral bleaching. In order to reduce these uncertainties and place large-scale coral bleaching in the longer-term climatological context, specific criteria and methods for using historical sea surface temperature (SST) data to examine coral bleaching-related thermal conditions are proposed by analyzing three 132-year SST reconstructions: ERSST, HadISST1, and GISST2.3b. These methodologies are applied to case studies at Discovery Bay, Jamaica (77.27°W, 18.45°N), Sombrero Reef, Florida, USA (81.11°W, 24.63°N), Academy Bay, Galápagos, Ecuador (90.31°W, 0.74°S), Pearl and Hermes Reef, Northwest Hawaiian Islands, USA (175.83°W, 27.83°N), Midway Island, Northwest Hawaiian Islands, USA (177.37°W, 28.25°N), Davies Reef, Australia (147.68°E, 18.83°S), and North Male Atoll, Maldives (73.35°E, 4.70°N). The results of this study show that (1) the historical SST data provide a useful long-term record of thermal conditions in reef ecosystems, giving important insight into the thermal history of coral reefs, and (2) while coral bleaching and anomalously warm SSTs have occurred over much of the world in recent decades, case studies in the Caribbean, Northwest Hawaiian Islands, and parts of other regions such as the Great Barrier Reef exhibited SST conditions and cumulative thermal stress prior to 1979 that were comparable to those conditions observed during the strong, frequent coral bleaching events since 1979. This climatological context and knowledge of past environmental conditions in reef ecosystems may foster a better understanding of how coral reefs will
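    A degree-heating-style accumulation of SST exceedances is one concrete way to quantify the cumulative thermal stress discussed above. This is a generic sketch in the spirit of NOAA-style metrics; the paper's specific thresholds, criteria, and accumulation windows may differ.

```python
import numpy as np

def cumulative_thermal_stress(sst, clim_max, offset=1.0):
    """Sum of SST exceedances above clim_max + offset (degree-heating style).

    sst:      time series of sea surface temperatures (deg C)
    clim_max: climatological maximum monthly mean SST for the site
    offset:   bleaching threshold above the climatology (1 deg C is common)
    """
    hotspots = np.asarray(sst, dtype=float) - (clim_max + offset)
    # only positive exceedances accumulate thermal stress
    return float(np.clip(hotspots, 0.0, None).sum())

# Example: with clim_max = 29, only readings above 30 deg C contribute.
stress = cumulative_thermal_stress([29.0, 30.5, 31.0], clim_max=29.0)
```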

  10. Large-scale dimension densities for heart rate variability analysis

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen

    2006-04-01

    In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF (ρlsμ=0.97±0.02) group from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65±0.13 ; EH, 0.54±0.05 ; YH, 0.57±0.05 ; p<0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in ρlsμ (day, 0.65±0.13 ; night, 0.66±0.12 ; n.s.) in contrast to healthy controls (day, 0.54±0.05 ; night, 0.61±0.05 ; p=0.002 ). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
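    The Grassberger-Procaccia correlation sum underlying the estimate can be sketched as follows. This is the plain (unnormalized) version, without the boundary-effect correction that defines LASDID; the dimension density is then read off as the slope of log C(r) versus log r over the chosen large scales.

```python
import numpy as np

def correlation_sum(x, r, dim=2, tau=1):
    """Fraction of embedded point pairs closer than r (Grassberger-Procaccia).

    The series x is delay-embedded with dimension `dim` and delay `tau`;
    distances use the maximum norm. Plain version, without the LASDID
    boundary correction described in the abstract above."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    dists = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    iu = np.triu_indices(n, k=1)          # distinct pairs only
    return float(np.mean(dists[iu] < r))

# Dimension estimate ~ slope of log C(r) vs log r over the scales of interest.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)             # toy stand-in for an HRV series
radii = np.array([0.5, 1.0, 2.0])
c = np.array([correlation_sum(x, r) for r in radii])
slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
```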

  11. Large-scale dimension densities for heart rate variability analysis.

    PubMed

    Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen

    2006-04-01

    In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF (ρlsμ = 0.97 ± 0.02) group from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65 ± 0.13; EH, 0.54 ± 0.05; YH, 0.57 ± 0.05; p < 0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in ρlsμ (day, 0.65 ± 0.13; night, 0.66 ± 0.12; n.s.) in contrast to healthy controls (day, 0.54 ± 0.05; night, 0.61 ± 0.05; p=0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification. PMID:16711836

  12. Large-Scale Graphene Film Deposition for Monolithic Device Fabrication

    NASA Astrophysics Data System (ADS)

    Al-shurman, Khaled

    Since 1958, the concept of the integrated circuit (IC) has driven great technological developments and helped shrink electronic devices. Nowadays, an IC consists of more than a million densely packed transistors. The majority of current ICs use silicon as the semiconductor material. According to Moore's law, the number of transistors built into a microchip doubles roughly every two years. However, silicon device manufacturing is reaching its physical limits: as circuitry shrinks toward seven nanometers, quantum effects such as tunneling cannot be controlled. Hence, there is an urgent need for a new platform material to replace Si. Graphene is considered a promising material with enormous potential applications in many electronic and optoelectronic devices due to its superior properties. There are several techniques to produce graphene films. Among these, chemical vapor deposition (CVD) offers a very convenient route to large-scale graphene films. Though the CVD method is suitable for large-area growth of graphene, the film must then be transferred to silicon-based substrates. Furthermore, the graphene films thus achieved are, in fact, not single crystalline. Also, graphene fabrication utilizing Cu and Ni at high growth temperature contaminates both the substrate that holds the Si CMOS circuitry and the CVD chamber. So, lowering the deposition temperature is another technological milestone for the successful adoption of graphene in integrated circuit fabrication. In this research, direct large-scale graphene film fabrication on silicon-based platforms (i.e., SiO2 and Si3N4) at low temperature was achieved. With a focus on low-temperature graphene growth, hot-filament chemical vapor deposition (HF-CVD) was utilized to synthesize graphene film using a 200 nm thick nickel film. Raman spectroscopy was utilized to examine graphene formation on the bottom side of the Ni film

  13. MODELING THE LARGE-SCALE BIAS OF NEUTRAL HYDROGEN

    SciTech Connect

    Marín, Felipe A.; Gnedin, Nickolay Y.; Seo, Hee-Jong; Vallinotto, Alberto

    2010-08-01

    We present new analytical estimates of the large-scale bias of neutral hydrogen (H I). We use a simple, non-parametric model which monotonically relates the total mass of a halo M_tot with its H I mass M_HI at zero redshift; for earlier times we assume limiting models for the Ω_HI evolution consistent with the data presently available, as well as two main scenarios for the evolution of our M_HI-M_tot relation. We find that both the linear and the first nonlinear bias terms exhibit a strong evolution with redshift, regardless of the specific limiting model assumed for the H I density over time. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not sensibly affect the measurement of the H I power spectrum.
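    In the halo-model setting sketched above, the H I bias follows from weighting the halo bias by the assumed M_HI(M_tot) relation. Schematically, in standard local-bias notation (not the paper's exact expressions):

```latex
b_1^{\rm HI}(z) \;=\; \frac{\int dM\, n(M,z)\, b_1(M,z)\, M_{\rm HI}(M)}
                           {\int dM\, n(M,z)\, M_{\rm HI}(M)},
\qquad
\delta_{\rm HI} \;=\; b_1^{\rm HI}\,\delta \;+\; \frac{b_2^{\rm HI}}{2}\,\delta^{2} \;+\; \dots
```

    Here n(M,z) is the halo mass function; the redshift evolution of b_1^HI then tracks both the halo bias and the assumed evolution of the M_HI-M_tot relation.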

  14. Mass Efficiencies for Common Large-Scale Precision Space Structures

    NASA Technical Reports Server (NTRS)

    Williams, R. Brett; Agnes, Gregory S.

    2005-01-01

    This paper presents a mass-based trade study for large-scale deployable triangular trusses, where the longerons can be monocoque tubes, isogrid tubes, or coilable longeron trusses. Such structures are typically used to support heavy reflectors, solar panels, or other instruments, and are subject to thermal gradients that can vary a great deal based on orbital altitude, location in orbit, and self-shadowing. While multilayer insulation (MLI) blankets are commonly used to minimize the magnitude of these thermal disturbances, they subject the truss to a nonstructural mass penalty. This paper investigates the impact of these add-on thermal protection layers on selecting the lightest precision structure for a given loading scenario.

  15. Nuclear-pumped lasers for large-scale applications

    SciTech Connect

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs.

  16. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TB of disk space and the 3-D post processing was done in parallel.

  17. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
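    Threshold-sweep evaluations of this kind are typically summarized by the maximum harmonic mean of precision and recall over all decision thresholds. A minimal sketch (the actual CAFA protocol averages precision and recall over targets in a specific way before taking the maximum):

```python
def f_max(precision_recall):
    """Maximum F1 over decision thresholds, given (precision, recall) pairs,
    one pair per threshold of a predictor's confidence scores."""
    best = 0.0
    for p, r in precision_recall:
        if p + r > 0.0:
            best = max(best, 2.0 * p * r / (p + r))
    return best

# One (precision, recall) point per score threshold of a hypothetical predictor.
score = f_max([(0.9, 0.2), (0.7, 0.5), (0.4, 0.8)])
```

    Sweeping the threshold trades precision against recall; reporting the best harmonic mean gives a single, threshold-free number per method.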

  18. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of several challenges that affect electric grid reliability and economic operation: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Unit (PMU) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate expansion of transmission line capacity against methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add capacity, "how much" to add, and "at which voltage level". Because of electric grid deregulation, transmission expansion is more complicated, as building new transmission lines is now open to investors whose main interest is to generate revenue. Adding new transmission capacity will help the system to relieve the transmission system congestion, create
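    The PMU placement subproblem mentioned above is commonly posed as a covering problem. Below is a greedy sketch under a deliberately simplified observability model (a PMU observes its own bus plus all adjacent buses); real formulations are integer programs with richer observability rules, so this is illustrative only.

```python
def greedy_pmu_placement(adj):
    """Greedily place PMUs until every bus is observed.

    adj: dict mapping each bus to the set of its neighboring buses.
    Simplified model: a PMU at bus b observes b and all of adj[b]."""
    uncovered = set(adj)
    placed = []
    while uncovered:
        # pick the bus whose PMU would newly observe the most buses
        bus = max(adj, key=lambda b: len(({b} | adj[b]) & uncovered))
        placed.append(bus)
        uncovered -= {bus} | adj[bus]
    return placed

# A star network is fully observed by a single PMU at its hub.
pmus = greedy_pmu_placement({0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}})
```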

  19. Large-scale quantum networks based on graphs

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society depends increasingly on information exchange and communication. In the quantum world, security and privacy are built-in features of information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.
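    The graph states invoked here are the standard ones: each network node is a qubit prepared in the |+⟩ state, and each long-distance link corresponds to a controlled-Z gate between its endpoints:

```latex
|G\rangle \;=\; \prod_{\{a,b\}\in E} CZ^{(a,b)}\, |+\rangle^{\otimes |V|},
\qquad
|+\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr),
```

    where V and E are the vertex and edge sets of the network graph, so the entanglement structure of the state mirrors the network topology directly.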

  20. Large Scale Bacterial Colony Screening of Diversified FRET Biosensors

    PubMed Central

    Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver

    2015-01-01

    Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878
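    The leverage exploited by such screens comes from the steep distance dependence of FRET efficiency in standard Förster theory (R₀ is the Förster radius of the donor-acceptor pair, at which transfer is 50% efficient):

```latex
E \;=\; \frac{1}{1 + \left(r/R_0\right)^{6}}
```

    Because of the sixth power, small conformational changes in donor-acceptor distance r near R₀ translate into large changes in E, which is what mutant libraries are screened to maximize.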

  1. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. The accuracy of many existing methods depends on a priori assumptions about network properties and on predefined parameters. In this paper, we introduce a new quality function for local communities and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
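    Local expansion algorithms of this family grow a community greedily outward from the seed, adding the frontier vertex that most improves a quality function. The sketch below uses a generic boundary-sharpness quality (internal edges relative to the community boundary); the paper introduces its own quality function, which differs.

```python
def quality(adj, comm):
    """Generic local quality: internal edges / (internal edges + boundary links)."""
    internal = external = 0
    for u in comm:
        for v in adj[u]:
            if v in comm:
                internal += 1     # each internal edge counted from both ends
            else:
                external += 1
    total = internal // 2 + external
    return (internal // 2) / total if total else 0.0

def local_expand(adj, seed):
    """Greedily add the frontier vertex that most improves `quality`; stop
    when no single addition improves it."""
    comm = {seed}
    while True:
        frontier = {v for u in comm for v in adj[u]} - comm
        best, best_q = None, quality(adj, comm)
        for v in sorted(frontier):
            q = quality(adj, comm | {v})
            if q > best_q:
                best, best_q = v, q
        if best is None:
            return comm
        comm.add(best)

# Two triangles joined by a single bridge edge (2-3): the seed's triangle is found.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
community = local_expand(adj, 0)
```

    Note the algorithm never touches vertices outside the frontier, which is what makes local expansion usable without global knowledge of the network.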

  2. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems

    PubMed Central

    Demchak, Barry; Krüger, Ingolf

    2014-01-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost effectively evolve such applications over a long lifetime. PMID:25383258

  3. Nonzero Density-Velocity Consistency Relations for Large Scale Structures.

    PubMed

    Rizzo, Luca Alberto; Mota, David F; Valageas, Patrick

    2016-08-19

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias. PMID:27588842

  4. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    NASA Astrophysics Data System (ADS)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  5. Large-scale testing of structural clay tile infilled frames

    SciTech Connect

    Flanagan, R.D.; Bennett, R.M.

    1993-03-18

    A summary of large-scale cyclic static tests of structural clay tile infilled frames is given. In-plane racking tests examined the effects of varying frame stiffness, varying infill size, infill offset from frame centerline, and single and double wythe infill construction. Out-of-plane tests examined infilled frame response to inertial loadings and inter-story drift loadings. Sequential in-plane and out-of-plane loadings were performed to determine the effects of orthogonal damage and degradation on both strength and stiffness. A combined out-of-plane inertial and in-plane racking test was conducted to investigate the interaction of multi-directional loading. To determine constitutive properties of the infills, prism compression, mortar compression and various unit tile tests were performed.

  6. Large scale CO emission in the Orion Nebula

    NASA Astrophysics Data System (ADS)

    Marcelino, N.; Berne, O.; Cernicharo, J.

    2011-11-01

    We present 12CO and 13CO J=2-1 large scale maps of Orion A observed with the IRAM 30-m radio telescope. The size of the maps is 1×0.8 deg² at its largest extent, and their angular and velocity resolutions (11" and about 0.4 km/s, respectively) are higher than those used in previous extended CO maps of Orion. Besides a very complex velocity structure, these maps show numerous filaments and gas condensations, cometary globules, and cavities (or tunnels). All these are signatures of the feedback action of massive star formation on the preexisting molecular gas. Among them, the detection of periodic structures on the surface of an elongated molecular cloud provides evidence of the development of hydrodynamical instabilities shaping the molecular cloud. The discovery of these "waves" shows the high angular and spectral resolution required to understand star formation processes.

  7. Phase Correlations and Topological Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Coles, P.

    The process of gravitational instability initiated by small primordial density perturbations is a vital ingredient of cosmological models that attempt to explain how galaxies and large-scale structure formed in the Universe. In the standard picture (the "concordance" model), a period of accelerated expansion ("inflation") generated density fluctuations with simple statistical properties through quantum processes (Starobinsky [82], [83], [84]; Guth [39]; Guth & Pi [40]; Albrecht & Steinhardt [2]; Linde [55]). In this scenario the primordial density field is assumed to form a statistically homogeneous and isotropic Gaussian random field (GRF). Over years of observational scrutiny this paradigm has strengthened its hold in the minds of cosmologists and has survived many tests, culminating in those furnished by the Wilkinson Microwave Anisotropy Probe (WMAP; Bennett et al. [7]; Hinshaw et al. [45]).
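    The GRF assumption can be stated directly in terms of Fourier phases, which is exactly what phase-correlation statistics test: writing each Fourier mode of the density field in amplitude-phase form,

```latex
\delta(\mathbf{k}) \;=\; |\delta(\mathbf{k})|\, e^{i\phi_{\mathbf{k}}},
\qquad
\phi_{\mathbf{k}} \sim \mathcal{U}[0, 2\pi) \ \ \text{(independent, for a GRF)},
```

    so a GRF has independent, uniformly distributed phases, and any measured correlations among the φ_k signal non-Gaussianity, for instance that generated by nonlinear gravitational evolution.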

  8. Grid infrastructure to support science portals for large scale instruments.

    SciTech Connect

    von Laszewski, G.; Foster, I.

    1999-09-29

    Soon, a new generation of scientific workbenches will be developed as a collaborative effort among various research institutions in the US. These scientific workbenches will be accessed on the Web via portals. Reusable components are needed to build such portals for different scientific disciplines, allowing uniform desktop access to remote resources. Such components will include tools and services enabling easy collaboration, job submission, job monitoring, component discovery, and persistent object storage. Based on experience gained from Grand Challenge applications for large-scale instruments, we demonstrate how Grid infrastructure components can be used to support the implementation of science portals. The availability of these components will simplify the prototype implementation of a common portal architecture.

  9. Roary: rapid large-scale prokaryote pan genome analysis

    PubMed Central

    Page, Andrew J.; Cummins, Carla A.; Hunt, Martin; Wong, Vanessa K.; Reuter, Sandra; Holden, Matthew T.G.; Fookes, Maria; Falush, Daniel; Keane, Jacqueline A.; Parkhill, Julian

    2015-01-01

    Summary: A typical prokaryote population sequencing study can now consist of hundreds or thousands of isolates. Interrogating these datasets can provide detailed insights into the genetic structure of prokaryotic genomes. We introduce Roary, a tool that rapidly builds large-scale pan genomes, identifying the core and accessory genes. Roary makes construction of the pan genome of thousands of prokaryote samples possible on a standard desktop without compromising on the accuracy of results. Using a single CPU Roary can produce a pan genome consisting of 1000 isolates in 4.5 hours using 13 GB of RAM, with further speedups possible using multiple processors. Availability and implementation: Roary is implemented in Perl and is freely available under an open source GPLv3 license from http://sanger-pathogens.github.io/Roary Contact: roary@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26198102

  10. Large scale Hugoniot material properties for Danby Marble

    SciTech Connect

    Rinehart, E.J.

    1993-11-01

    This paper presents the results of simulation experiments of nuclear underground testing carried out using the HYDROPLUS methodology for yield verifications of non-standard tests. The objective of this test series was to demonstrate the accuracy of stress and velocity measurements in hard, low porosity rock, to obtain comparisons of large-scale material properties with those obtained from laboratory testing of the same material, and to address the problems posed by a material having a clear precursor wave preceding the main shock wave. The test series consisted of three individual experimental tests. The first established material properties of the Danby marble selected for use in the experiments. The second and third tests looked at stress and velocity gage errors obtained when gages were placed in boreholes and grouted into place.

  11. Nonzero Density-Velocity Consistency Relations for Large Scale Structures

    NASA Astrophysics Data System (ADS)

    Rizzo, Luca Alberto; Mota, David F.; Valageas, Patrick

    2016-08-01

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias.

  12. Star formation associated with a large-scale infrared bubble

    NASA Astrophysics Data System (ADS)

    Xu, Jin-Long; Ju, Bing-Gang

    2014-09-01

    Aims: To investigate how a large-scale infrared bubble centered at l = 53.9° and b = 0.2° forms, and to study whether star formation is taking place at the periphery of the bubble, we performed a multiwavelength study. Methods: Using the data from the Galactic Ring Survey (GRS) and Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE), we performed a study of a large-scale infrared bubble with a size of about 16 pc at a distance of 2.0 kpc. We present the 12CO J = 1-0, 13CO J = 1-0, and C18O J = 1-0 observations of HII region G53.54-0.01 (Sh2-82) obtained at the Purple Mountain Observatory (PMO) 13.7 m radio telescope to investigate the detailed distribution of associated molecular material. In addition, we also used radio recombination line and VLA data. To select young stellar objects (YSOs) consistent with this region, we used the GLIMPSE I catalog. Results: The large-scale infrared bubble shows a half-shell morphology at 8 μm. The H II regions of G53.54-0.01, G53.64+0.24, and G54.09-0.06 are situated on the bubble. Comparing the radio recombination line velocities and associated 13CO J = 1-0 components of the three H II regions, we found that the 8 μm emission associated with H II region G53.54-0.01 should belong to the foreground emission, and only overlaps with the large-scale infrared bubble along the line of sight. Three extended green objects (EGOs, candidate massive young stellar objects), as well as three H II regions and two small-scale bubbles, are found located in the G54.09-0.06 complex, indicating an active massive star-forming region. Emission from C18O at J = 1-0 presents four cloud clumps on the northeastern border of H II region G53.54-0.01. By comparing the spectral profiles of 12CO J = 1-0, 13CO J = 1-0, and C18O J = 1-0 at the peak position of each clump, we found collected gas in three of the four clumps, the exception being the clump coinciding with a massive YSO (IRAS 19282+1814). Using the evolutive model of the H II region, we derived that

  13. Theoretical expectations for bulk flows in large-scale surveys

    NASA Technical Reports Server (NTRS)

    Feldman, Hume A.; Watkins, Richard

    1994-01-01

    We calculate the theoretical expectation for the bulk motion of a large-scale survey of the type recently carried out by Lauer and Postman. Included are the effects of survey geometry, errors in the distance measurements, clustering properties of the sample, and different assumed power spectra. We consider the power spectrum calculated from the Infrared Astronomical Satellite (IRAS)-QDOT survey, as well as spectra from hot + cold and standard cold dark matter models. We find that measurement uncertainty, sparse sampling, and clustering can lead to a much larger expectation for the bulk motion of a cluster sample than for the volume as a whole. However, our results suggest that the expected bulk motion is still inconsistent with that reported by Lauer and Postman at the 95%-97% confidence level.

  14. Jet noise generated by large-scale coherent motion

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1991-01-01

    The noise generated by large-scale turbulence structures and instability waves of jets is discussed. Emphasis is placed on supersonic jets with moderate to high Reynolds numbers, because it is in these jets that unambiguous experimental and theoretical evidence is found indicating that large turbulence structures and instability waves are directly responsible for generating the dominant part of the noise. For subsonic jets, similar large turbulence structures and instability waves do play a crucial role in the dynamics, spread, and mixing of the jet fluid. However, at subsonic convection speeds, they do not appear to be efficient noise generators. Many investigators believe that the dominant noise source of subsonic jets is, in fact, the small-scale turbulence. This belief, however, has not yet received universal acceptance. The issues involved are complicated and not easy to resolve.

  15. Large-scale velocity fields [of solar rotation]

    NASA Technical Reports Server (NTRS)

    Howard, Robert F.; Kichatinov, L. L.; Bogart, Richard S.; Ribes, Elizabeth

    1991-01-01

    The present evaluation of recent observational results bearing on the nature and characteristics of solar rotation gives attention to the status of current understanding on such large-scale velocity-field-associated phenomena as solar supergranulation, mesogranulation, and giant-scale convection. Also noted are theoretical suggestions reconciling theory and observations of giant-scale solar convection. The photosphere's global meridional circulation is suggested by solar rotation models requiring pole-to-equator flows of a few m/sec, as well as by the observed migration of magnetic activity over the solar cycle. The solar rotation exhibits a latitude and cycle dependence which can be understood in terms of a time-dependent convective toroidal roll pattern.

  16. Large-Scale All-Dielectric Metamaterial Perfect Reflectors

    SciTech Connect

    Moitra, Parikshit; Slovick, Brian A.; Li, Wei; Kravchenko, Ivan I.; Briggs, Dayrl P.; Krishnamurthy, S.; Valentine, Jason

    2015-05-08

    All-dielectric metamaterials offer a potential low-loss alternative to plasmonic metamaterials at optical frequencies. In this paper, we take advantage of the low absorption loss as well as the simple unit cell geometry to demonstrate large-scale (centimeter-sized) all-dielectric metamaterial perfect reflectors made from silicon cylinder resonators. These perfect reflectors, operating in the telecommunications band, were fabricated using self-assembly based nanosphere lithography. In spite of the disorder originating from the self-assembly process, the average reflectance of the metamaterial perfect reflectors is 99.7% at 1530 nm, surpassing the reflectance of metallic mirrors. Moreover, the spectral separation of the electric and magnetic resonances can be chosen to achieve the required reflection bandwidth while maintaining a high tolerance to disorder. Finally, the scalability of this design could lead to new avenues of manipulating light for low-loss and large-area photonic applications.

  17. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an ongoing project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large-scale experiments, in particular, for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) working as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  18. Large-scale characterization of the murine cardiac proteome.

    PubMed

    Cosme, Jake; Emili, Andrew; Gramolini, Anthony O

    2013-01-01

    Cardiomyopathies are diseases of the heart that result in impaired cardiac muscle function. This dysfunction can progress to an inability to supply blood to the body. Cardiovascular diseases play a large role in overall global morbidity. Investigating the protein changes in the heart during disease can uncover pathophysiological mechanisms and potential therapeutic targets, and establishing a global protein expression "footprint" can facilitate more targeted studies of diseases of the heart. In this technical review, we present methods to elucidate the heart's proteome through subfractionation of the cellular compartments, which reduces sample complexity and improves detection of less abundant proteins during multidimensional protein identification technology analysis. Separately analyzing the cytosolic, microsomal, and mitochondrial subproteomes simplifies complex cardiac protein mixtures and aids characterization of the murine cardiac proteome. In combination with bioinformatic analysis and genome correlation, large-scale protein changes can be identified at the level of the cellular compartment in this animal model. PMID:23606244

  19. Computational solutions to large-scale data management and analysis

    PubMed Central

    Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.

    2011-01-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155

  20. Large-Scale Chaos and Fluctuations in Active Nematics

    NASA Astrophysics Data System (ADS)

    Ngo, Sandrine; Peshkov, Anton; Aranson, Igor S.; Bertin, Eric; Ginelli, Francesco; Chaté, Hugues

    2014-07-01

    We show that dry active nematics, e.g., collections of shaken elongated granular particles, exhibit large-scale spatiotemporal chaos made of interacting dense, ordered, bandlike structures in a parameter region including the linear onset of nematic order. These results are obtained from the study of both the well-known (deterministic) hydrodynamic equations describing these systems and of the self-propelled particle model they were derived from. We prove, in particular, that the chaos stems from the generic instability of the band solution of the hydrodynamic equations. Revisiting the status of the strong fluctuations and long-range correlations in the particle model, we show that the giant number fluctuations observed in the chaotic phase are a trivial consequence of density segregation. However, anomalous curvature-driven number fluctuations are present in the homogeneous quasiordered nematic phase and are characterized by a nontrivial scaling exponent.

  1. Reconstructing a Large-Scale Population for Social Simulation

    NASA Astrophysics Data System (ADS)

    Fan, Zongchen; Meng, Rongqing; Ge, Yuanzheng; Qiu, Xiaogang

    The advent of social simulation has provided an opportunity to study social systems, and researchers increasingly describe the components of social systems at a more detailed level. Any simulation needs population data to initialize and drive the simulated system. However, it is generally impossible to obtain data that provide full information about individuals and households. We propose a two-step method to reconstruct a large-scale population for a Chinese city in accordance with Chinese culture. First, a baseline population is generated by gathering individuals into households one by one; second, social relationships such as friendship are assigned to the baseline population. In a case study, a population of 3,112,559 individuals gathered into 1,133,835 households is reconstructed for Urumqi city, and the results show that the generated data match the real data quite well. The generated data can be applied to support the modeling of social phenomena.
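The first step described above, gathering individuals into households, can be illustrated with a minimal sketch. The function name and the household-size distribution below are hypothetical illustrations, not taken from the paper:

```python
import random

def build_households(num_people, size_dist, rng=None):
    """Draw household sizes from a target distribution until the
    population is exhausted. size_dist maps household size -> probability.
    Returns the list of household sizes (their sum equals num_people)."""
    rng = rng or random.Random(0)
    choices, weights = zip(*size_dist.items())
    sizes, remaining = [], num_people
    while remaining > 0:
        # Clamp the last draw so the total matches the population exactly.
        s = min(rng.choices(choices, weights)[0], remaining)
        sizes.append(s)
        remaining -= s
    return sizes
```

In a full synthesis, each household would then be filled with individuals whose attributes (age, sex, relationship) respect census marginals, and the second step would overlay social ties such as friendship.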

  2. Large Scale Deformation of the Western U.S. Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2002-01-01

    Over the past couple of years, with support from NASA, we used a large collection of data from GPS, VLBI, SLR, and DORIS networks which span the Western U.S. Cordillera (WUSC) to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work was roughly divided into an analysis of these space geodetic observations to infer the deformation field across and within the entire plate boundary zone, and an investigation of the implications of this deformation field regarding plate boundary dynamics. Following the determination of the first generation WUSC velocity solution, we placed high priority on the dissemination of the velocity estimates. With in-kind support from the Smithsonian Astrophysical Observatory, we constructed a web-site which allows anyone to access the data, and to determine their own velocity reference frame.

  4. Large-scale cortical correlation structure of spontaneous oscillatory activity

    PubMed Central

    Hipp, Joerg F.; Hawellek, David J.; Corbetta, Maurizio; Siegel, Markus; Engel, Andreas K.

    2013-01-01

    Little is known about the brain-wide correlation of electrophysiological signals. Here we show that spontaneous oscillatory neuronal activity exhibits frequency-specific spatial correlation structure in the human brain. We developed an analysis approach that discounts spurious correlation of signal power caused by the limited spatial resolution of electrophysiological measures. We applied this approach to source estimates of spontaneous neuronal activity reconstructed from magnetoencephalography (MEG). Overall, correlation of power across cortical regions was strongest in the alpha to beta frequency range (8–32 Hz) and correlation patterns depended on the underlying oscillation frequency. Global hubs resided in the medial temporal lobe in the theta frequency range (4–6 Hz), in lateral parietal areas in the alpha to beta frequency range (8–23 Hz), and in sensorimotor areas for higher frequencies (32–45 Hz). Our data suggest that interactions in various large-scale cortical networks may be reflected in frequency specific power-envelope correlations. PMID:22561454

  5. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality.

  6. Large-Scale Advanced Prop-Fan (LAP) blade design

    NASA Technical Reports Server (NTRS)

    Violette, John A.; Sullivan, William E.; Turnberg, Jay E.

    1984-01-01

    This report covers the design analysis of a very thin, highly swept, propeller blade to be used in the Large-Scale Advanced Prop-Fan (LAP) test program. The report includes: design requirements and goals, a description of the blade configuration which meets requirements, a description of the analytical methods utilized/developed to demonstrate compliance with the requirements, and the results of these analyses. The methods described include: finite element modeling, predicted aerodynamic loads and their application to the blade, steady state and vibratory response analyses, blade resonant frequencies and mode shapes, bird impact analysis, and predictions of stalled and unstalled flutter phenomena. Summarized results include deflections, retention loads, stress/strength comparisons, foreign object damage resistance, resonant frequencies and critical speed margins, resonant vibratory mode shapes, calculated boundaries of stalled and unstalled flutter, and aerodynamic and acoustic performance calculations.

  7. Proteomics beyond large-scale protein expression analysis.

    PubMed

    Boersema, Paul J; Kahraman, Abdullah; Picotti, Paola

    2015-08-01

    Proteomics is commonly referred to as the application of high-throughput approaches to protein expression analysis. Typical results of proteomics studies are inventories of the protein content of a sample or lists of differentially expressed proteins across multiple conditions. Recently, however, an explosion of novel proteomics workflows has significantly expanded proteomics beyond the analysis of protein expression. Targeted proteomics methods, for example, enable the analysis of the fine dynamics of protein systems, such as a specific pathway or a network of interacting proteins, and the determination of protein complex stoichiometries. Structural proteomics tools allow extraction of restraints for structural modeling and identification of structurally altered proteins on a proteome-wide scale. Other variations of the proteomic workflow can be applied to the large-scale analysis of protein activity, location, degradation and turnover. These exciting developments provide new tools for multi-level 'omics' analysis and for the modeling of biological networks in the context of systems biology studies. PMID:25636126

  8. Applications of large-scale density functional theory in biology.

    PubMed

    Cole, Daniel J; Hine, Nicholas D M

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality. PMID:27494095

  9. Large scale radio/X-ray jets in microquasars

    NASA Astrophysics Data System (ADS)

    Corbel, Stephane; Tzioumis, Anastasios; Fender, Rob; Kaaret, Philip; Orosz, Jerry; Tomsick, John

    2012-10-01

    The discovery with ATCA of large-scale radio lobes around the microquasar XTE J1550-564 led to the discovery with Chandra, for the first time, of moving relativistic X-ray jets in a Galactic accreting source. The lobes are likely due to the interaction of relativistic plasma with the ISM. This ATCA proposal has enabled a similar discovery in H 1743-322, suggesting that such lobes may be a common occurrence in the Galaxy. Recently, we have witnessed with ATCA the formation of similar lobes in the black hole GX 339-4. We propose to use the Compact Array to continue our search for radio lobes in microquasars that have been active in the past years.

  10. Large scale radio/X-ray jets in microquasars

    NASA Astrophysics Data System (ADS)

    Corbel, Stephane; Tzioumis, Anastasios; Fender, Rob; Kaaret, Philip; Tomsick, John; Orosz, Jerry

    2011-10-01

    The discovery with ATCA of large-scale radio lobes around the microquasar XTE J1550-564 led to the discovery with Chandra, for the first time, of moving relativistic X-ray jets in a Galactic accreting source. The lobes are likely due to the interaction of relativistic plasma with the ISM. This ATCA proposal has enabled a similar discovery in H 1743-322, suggesting that such lobes may be a common occurrence in the Galaxy. Recently, we have witnessed with ATCA the formation of similar lobes in the black hole GX 339-4. We propose to use the Compact Array to continue our search for radio lobes in microquasars that have been active in the past years.

  11. Large scale radio/X-ray jets in microquasars

    NASA Astrophysics Data System (ADS)

    Corbel, Stephane; Tzioumis, Anastasios; Fender, Rob; Kaaret, Philip; Orosz, Jerry; Tomsick, John

    2013-10-01

    The discovery with ATCA of large-scale radio lobes around the microquasar XTE J1550-564 led to the discovery with Chandra, for the first time, of moving relativistic X-ray jets in a Galactic accreting source. The lobes are likely due to the interaction of relativistic plasma with the ISM. This ATCA proposal has enabled a similar discovery in H 1743-322, suggesting that such lobes may be a common occurrence in the Galaxy. Recently, we have witnessed with ATCA the formation of similar lobes in the black hole GX 339-4. We propose to use the Compact Array to continue our search for radio lobes in microquasars that have been active in the past years.

  12. Large scale radio/X-ray jets in microquasars

    NASA Astrophysics Data System (ADS)

    Corbel, Stephane; Tzioumis, Anastasios; Fender, Rob; Kaaret, Philip; Tomsick, John; Orosz, Jerry

    2011-04-01

    The discovery with ATCA of large-scale radio lobes around the microquasar XTE J1550-564 led to the discovery with Chandra, for the first time, of moving relativistic X-ray jets in a Galactic accreting source. The lobes are likely due to the interaction of relativistic plasma with the ISM. This ATCA proposal has enabled a similar discovery in H 1743-322, suggesting that such lobes may be a common occurrence in the Galaxy. Recently, we have witnessed with ATCA the formation of similar lobes in the black hole GX 339-4. We propose to use the Compact Array to continue our search for radio lobes in microquasars that have been active in the past years.

  13. Large scale radio/X-ray jets in microquasars

    NASA Astrophysics Data System (ADS)

    Corbel, Stephane; Tzioumis, Anastasios; Fender, Rob; Kaaret, Philip; Orosz, Jerry; Tomsick, John

    2013-04-01

    The discovery with ATCA of large-scale radio lobes around the microquasar XTE J1550-564 led to the discovery with Chandra, for the first time, of moving relativistic X-ray jets in a Galactic accreting source. The lobes are likely due to the interaction of relativistic plasma with the ISM. This ATCA proposal has enabled a similar discovery in H 1743-322, suggesting that such lobes may be a common occurrence in the Galaxy. Recently, we have witnessed with ATCA the formation of similar lobes in the black hole GX 339-4. We propose to use the Compact Array to continue our search for radio lobes in microquasars that have been active in the past years.

  14. Large scale radio/X-ray jets in microquasars

    NASA Astrophysics Data System (ADS)

    Corbel, Stephane; Tzioumis, Anastasios; Fender, Rob; Kaaret, Philip; Tomsick, John; Orosz, Jerry

    2012-04-01

    The discovery with ATCA of large-scale radio lobes around the microquasar XTE J1550-564 led to the discovery with Chandra, for the first time, of moving relativistic X-ray jets in a Galactic accreting source. The lobes are likely due to the interaction of relativistic plasma with the ISM. This ATCA proposal has enabled a similar discovery in H 1743-322, suggesting that such lobes may be a common occurrence in the Galaxy. Recently, we have witnessed with ATCA the formation of similar lobes in the black hole GX 339-4. We propose to use the Compact Array to continue our search for radio lobes in microquasars that have been active in the past years.

  15. Impact of Parallel Computing on Large Scale Aeroelastic Computations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    Aeroelasticity is computationally one of the most intensive fields in aerospace engineering. Although the computational speed of supercomputers has increased substantially over the last three decades, it is still inadequate for large-scale aeroelastic computations using high-fidelity flow and structural equations. In addition to a saturation in computational speed, changes in economics have led computer manufacturers to stop producing mainframe-type supercomputers. Computational aeroelasticians therefore face the formidable task of finding alternative approaches to fulfill their needs. The alternative path to overcoming the speed and availability limitations of mainframe-type supercomputers is to use parallel computers, and several different architectures have evolved during this decade. In FY92 the US Government started the High Performance Computing and Communication (HPCC) program. As a participant in this program, NASA developed several parallel computational tools for aeroelastic applications. This talk describes the impact of those application tools on high-fidelity multidisciplinary analysis.

  16. Lightweight computational steering of very large scale molecular dynamics simulations

    SciTech Connect

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  17. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential if the performance of even a single subsystem degrades. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, whether due to an anomaly present in the system itself or to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  18. A large-scale crop protection bioassay data set.

    PubMed

    Gaulton, Anna; Kale, Namrata; van Westen, Gerard J P; Bellis, Louisa J; Bento, A Patrícia; Davies, Mark; Hersey, Anne; Papadatos, George; Forster, Mark; Wege, Philip; Overington, John P

    2015-01-01

    ChEMBL is a large-scale drug discovery database containing bioactivity information primarily extracted from scientific literature. Due to the medicinal chemistry focus of the journals from which data are extracted, the data are currently of most direct value in the field of human health research. However, many of the scientific use-cases for the current data set are equally applicable in other fields, such as crop protection research: for example, identification of chemical scaffolds active against a particular target or endpoint, the de-convolution of the potential targets of a phenotypic assay, or the potential targets/pathways for safety liabilities. In order to broaden the applicability of the ChEMBL database and allow more widespread use in crop protection research, an extensive data set of bioactivity data of insecticidal, fungicidal and herbicidal compounds and assays was collated and added to the database. PMID:26175909

  19. Monitoring large-scale precipitation over the globe

    SciTech Connect

    Xie, Pingping; Arkin, P.A.

    1997-11-01

    A previously developed algorithm was used to produce a global monthly precipitation analysis on a 2.5° latitude/longitude grid for the 17-year period from 1979 to 1995. This paper reports the construction of the data set and describes some of its applications. Seven kinds of individual data sources were used, including gauge observations, estimates inferred from satellite observations, and the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis. The merged analysis was applied to investigate the annual and interannual variability in large-scale precipitation. The mean distribution and the annual cycle of the 17-year merged analysis exhibit reasonable agreement with existing long-term means, but with major differences over the eastern Pacific. The interannual variability associated with the El Niño-Southern Oscillation is similar to previous findings, but with substantial additional detail over the ocean. 13 refs., 4 figs.
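A common way to combine several estimates of the same quantity, shown here purely as a toy sketch, is inverse-error-variance weighting; the actual merging algorithm behind this data set is considerably more elaborate, and the function below is an illustrative stand-in:

```python
def merge_estimates(estimates, error_vars):
    """Combine estimates of one grid-cell value by weighting each source
    inversely to its error variance (lower variance -> higher weight)."""
    weights = [1.0 / v for v in error_vars]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```

With equal error variances this reduces to a plain average; a source with four times the error variance of another contributes a quarter of the weight.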

  20. Towards online multiresolution community detection in large-scale networks.

    PubMed

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest across multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods depend for their accuracy on a priori assumptions about network properties and on predefined parameters. In this paper, we introduce a new quality function for local communities and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
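The general idea of local expansion from a seed vertex can be sketched as follows. This is a generic greedy illustration with a simple boundary-based quality measure, not the quality function or algorithm proposed in the paper:

```python
def local_quality(graph, community):
    """Fraction of edge endpoints incident to the community that stay
    inside it; graph is an adjacency dict {vertex: [neighbors]}."""
    internal = external = 0
    for v in community:
        for u in graph[v]:
            if u in community:
                internal += 1
            else:
                external += 1
    total = internal + external
    return internal / total if total else 0.0

def expand_from_seed(graph, seed):
    """Greedily absorb the frontier vertex that most improves quality;
    stop when no single addition helps."""
    community = {seed}
    while True:
        frontier = {u for v in community for u in graph[v]} - community
        best, best_q = None, local_quality(graph, community)
        for u in frontier:
            q = local_quality(graph, community | {u})
            if q > best_q:
                best, best_q = u, q
        if best is None:
            return community
        community.add(best)
```

On two triangles joined by a single bridge edge, expansion from a vertex of one triangle recovers that triangle and stops at the bridge.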

  1. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named the parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications. PMID:26357251
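Stripped of all parallelism and the distributed data structure, the core computation reduces to a multi-source shortest-distance sweep from the surface cells. A minimal single-process 2D sketch (hypothetical names, Manhattan distance on a grid) is:

```python
from collections import deque

def distance_field(shape, surface_cells):
    """Breadth-first sweep from all surface cells at once: each grid cell
    ends up with its Manhattan distance to the nearest surface cell."""
    rows, cols = shape
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for (r, c) in surface_cells:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

Exact Euclidean metrics and time-varying surfaces require more machinery, which is where the paper's parallel distance tree comes in.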

  2. Thermophoretically induced large-scale deformations around microscopic heat centers

    NASA Astrophysics Data System (ADS)

    Puljiz, Mate; Orlishausen, Michael; Köhler, Werner; Menzel, Andreas M.

    2016-05-01

    Selectively heating a microscopic colloidal particle embedded in a soft elastic matrix is a situation of high practical relevance. For instance, during hyperthermic cancer treatment, cell tissue surrounding heated magnetic colloidal particles is destroyed. Experiments on soft elastic polymeric matrices suggest a very long-ranged, non-decaying radial component of the thermophoretically induced displacement fields around the microscopic heat centers. We theoretically confirm this conjecture using a macroscopic hydrodynamic two-fluid description. Both thermophoretic and elastic effects are included in this theory. Indeed, we find that the elasticity of the environment can cause the experimentally observed large-scale radial displacements in the embedding matrix. Additional experiments confirm the central role of elasticity. Finally, a linearly decaying radial component of the displacement field in the experiments is attributed to the finite size of the experimental sample. Similar results are obtained from our theoretical analysis under modified boundary conditions.

  3. Large-Scale Quantitative Analysis of Painting Arts

    PubMed Central

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have reached a point where it is possible to perform statistical analysis on a large-scale database of artistic paintings and build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877
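
    The "variety of colors" measure can be illustrated with a histogram-entropy sketch. This is an assumed proxy (Shannon entropy over a binned RGB histogram), not the paper's exact recipe; the function name and bin count are hypothetical:

```python
import numpy as np

def color_variety(pixels, bins=64):
    """Shannon entropy of a binned RGB histogram, used here as a simple
    'variety of colors' proxy (an assumption, not the paper's exact metric)."""
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
flat = np.full((1000, 3), 128.0)            # monochrome "painting": one bin
varied = rng.uniform(0, 256, (1000, 3))     # pixels spread over many bins
assert color_variety(flat) < color_variety(varied)
```

    A single-color image lands in one histogram bin and yields zero entropy, while a colorful image spreads mass over many bins and scores high.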

  4. A mini review: photobioreactors for large scale algal cultivation.

    PubMed

    Gupta, Prabuddha L; Lee, Seung-Mok; Choi, Hee-Jeong

    2015-09-01

    Microalgae cultivation has gained much interest for the production of foods, biofuels, and bioactive compounds, and offers a promising option for cleaning the environment through CO2 sequestration and wastewater treatment. Although open pond cultivation is the most affordable option, it offers insufficient control of growth conditions and carries a risk of contamination. In contrast, closed photobioreactors provide minimal risk of contamination and better control of culture conditions, such as CO2 supply, water supply, optimal temperature, efficient exposure to light, culture density, pH level, and mixing rate. Efficient photobioreactors are required for large-scale production of biomass. This review paper describes general design considerations for photobioreactor systems used to cultivate microalgae for biomass production. It also discusses the current challenges in the design of photobioreactors for the production of low-cost biomass. PMID:26085485

  5. Successful Physician Training Program for Large Scale EMR Implementation

    PubMed Central

    Stevens, L.A.; Mailes, E.S.; Goad, B.A.; Longhurst, C.A.

    2015-01-01

    Summary End-user training is an essential element of electronic medical record (EMR) implementation and frequently suffers from minimal institutional investment. In addition, discussion of successful EMR training programs for physicians is limited in the literature. The authors describe a successful physician-training program at Stanford Children’s Health as part of a large scale EMR implementation. Evaluations of classroom training, obtained at the conclusion of each class, revealed high physician satisfaction with the program. Free-text comments from learners focused on duration and timing of training, the learning environment, quality of the instructors, and specificity of training to their role or department. Based upon participant feedback and institutional experience, best practice recommendations, including physician engagement, curricular design, and assessment of proficiency and recognition, are suggested for future provider EMR training programs. The authors strongly recommend the creation of coursework to group providers by common workflow. PMID:25848415

  6. Automated Sequence Preprocessing in a Large-Scale Sequencing Environment

    PubMed Central

    Wendl, Michael C.; Dear, Simon; Hodgson, Dave; Hillier, LaDeana

    1998-01-01

    A software system for transforming fragments from four-color fluorescence-based gel electrophoresis experiments into assembled sequence is described. It has been developed for large-scale processing of all trace data, including shotgun and finishing reads, regardless of clone origin. Design considerations are discussed in detail, as are programming implementation and graphic tools. The importance of input validation, record tracking, and use of base quality values is emphasized. Several quality analysis metrics are proposed and applied to sample results from recently sequenced clones. Such quantities prove to be a valuable aid in evaluating modifications of sequencing protocol. The system is in full production use at both the Genome Sequencing Center and the Sanger Centre, for which combined production is ∼100,000 sequencing reads per week. PMID:9750196

  7. Large-scale asymmetric synthesis of a cathepsin S inhibitor.

    PubMed

    Lorenz, Jon C; Busacca, Carl A; Feng, XuWu; Grinberg, Nelu; Haddad, Nizar; Johnson, Joe; Kapadia, Suresh; Lee, Heewon; Saha, Anjan; Sarvestani, Max; Spinelli, Earl M; Varsolona, Rich; Wei, Xudong; Zeng, Xingzhong; Senanayake, Chris H

    2010-02-19

    A potent reversible inhibitor of the cysteine protease cathepsin-S was prepared on large scale using a convergent synthetic route, free of chromatography and cryogenics. Late-stage peptide coupling of a chiral urea acid fragment with a functionalized aminonitrile was employed to prepare the target, using 2-hydroxypyridine as a robust, nonexplosive replacement for HOBT. The two key intermediates were prepared using a modified Strecker reaction for the aminonitrile and a phosphonation-olefination-rhodium-catalyzed asymmetric hydrogenation sequence for the urea. A palladium-catalyzed vinyl transfer coupled with a Claisen reaction was used to produce the aldehyde required for the side chain. Key scale up issues, safety calorimetry, and optimization of all steps for multikilogram production are discussed. PMID:20102230

  8. Scalable parallel distance field construction for large-scale applications

    SciTech Connect

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
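
    For a sense of what a distance field computation looks like on a single node, here is a minimal serial sketch using SciPy's exact Euclidean distance transform. This is not the paper's parallel distance tree; the sphere test volume and grid size are invented for illustration:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary volume: a 32^3 grid with a solid sphere as the "surface of interest".
n = 32
x, y, z = np.mgrid[:n, :n, :n]
inside = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 <= 8 ** 2

# distance_transform_edt measures the distance from each nonzero voxel to the
# nearest zero voxel, so invert the mask to get distance *to* the sphere.
dist = distance_transform_edt(~inside)

assert dist[16, 16, 16] == 0.0   # voxels on/inside the object are at distance 0
assert dist[0, 0, 0] > 15.0      # a corner voxel is far from the sphere surface
```

    The paper's contribution is making this kind of query scale across distributed memory; the serial transform above recomputes the whole field and would not scale to petascale data.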

  9. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, so larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  10. Atypical Behavior Identification in Large Scale Network Traffic

    SciTech Connect

    Best, Daniel M.; Hafen, Ryan P.; Olsen, Bryan K.; Pike, William A.

    2011-10-23

    Cyber analysts are faced with the daunting challenge of identifying exploits and threats within potentially billions of daily records of network traffic. Enterprise-wide cyber traffic involves hundreds of millions of distinct IP addresses and results in data sets ranging from terabytes to petabytes of raw data. Creating behavioral models and identifying trends based on those models requires data intensive architectures and techniques that can scale as data volume increases. Analysts need scalable visualization methods that foster interactive exploration of data and enable identification of behavioral anomalies. Developers must carefully consider application design, storage, processing, and display to provide usability and interactivity with large-scale data. We present an application that highlights atypical behavior in enterprise network flow records. This is accomplished by utilizing data intensive architectures to store the data, aggregation techniques to optimize data access, statistical techniques to characterize behavior, and a visual analytic environment to render the behavioral trends, highlight atypical activity, and allow for exploration.
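
    The idea of statistically characterizing behavior and highlighting atypical activity can be sketched with a plain z-score rule. This is only an illustration of the concept; the paper's behavioral models and data-intensive architecture are far richer, and the threshold and toy data below are invented:

```python
import statistics

def flag_atypical(counts, z_threshold=3.0):
    """Flag per-host flow counts that deviate strongly from typical behavior.
    A simple z-score rule, shown only to illustrate statistical
    characterization of behavior; not the paper's actual models."""
    mu = statistics.fmean(counts)
    sd = statistics.pstdev(counts)
    return [i for i, c in enumerate(counts) if sd and abs(c - mu) / sd > z_threshold]

# Twenty hosts with typical traffic plus one bursty host.
traffic = [100] * 20 + [5000]
assert flag_atypical(traffic) == [20]
```

    In practice such scoring would run over aggregated flow records in a distributed store, with the flagged indices feeding the visual analytic environment rather than a Python list.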

  11. Are Critical Phenomena Relevant to Large-Scale Evolution?

    NASA Astrophysics Data System (ADS)

    Sole, Ricard V.; Bascompte, Jordi

    1996-02-01

    Recent theoretical studies, based on the theory of self-organized critical systems, suggest that the dynamical patterns of macroevolution could belong to such a class of critical phenomena. Two basic approaches have been proposed: the Kauffman-Johnsen model (based on the use of coupled fitness landscapes) and the Bak-Sneppen model. Both are reviewed here. These models are oversimplified pictures of biological evolution, but their (possible) validity rests on the concept of universality, i.e. that apparently very different systems sharing a few common properties should also behave in a very similar way. In this paper we explore the current evidence from the fossil record, showing that some properties that are suggestive of critical dynamics could also be the result of random phenomena. Some general properties of the large-scale pattern of evolution, which should be reproduced by these models, are discussed.
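
    The Bak-Sneppen model reviewed above is simple enough to sketch directly. This minimal ring-topology version (system size and step count chosen arbitrarily) replaces the least-fit species and its two neighbours with fresh random fitness values at every step:

```python
import random

def bak_sneppen(n=64, steps=2000, seed=1):
    """Minimal Bak-Sneppen model on a ring: repeatedly find the least-fit
    species and assign new random fitness to it and its two neighbours."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)
        for j in (i - 1, i, (i + 1) % n):   # i - 1 wraps via Python indexing
            fitness[j] = rng.random()
    return fitness

final = bak_sneppen()
assert len(final) == 64
```

    In the long-time limit the fitness distribution self-organizes toward a critical threshold (close to 0.667 for the ring), with replacement events forming the avalanches that motivate comparisons with extinction statistics in the fossil record.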

  12. The large-scale radio structure of R Aquarii

    NASA Technical Reports Server (NTRS)

    Hollis, J. M.; Michalitsianos, A. G.; Oliversen, R. J.; Yusef-Zadeh, F.; Kafatos, M.

    1987-01-01

    Radio continuum observations of the R Aqr symbiotic star system, using the compact D configuration of the VLA at 6-cm wavelength, reveal a large-scale (about 2 arcmin) structure engulfing the binary, which has long been known to have a similar optical nebula. This optical/radio nebula possesses about 4 × 10^42 ergs of kinetic energy, which is typical of a recurrent nova outburst. Moreover, a cluster of a dozen additional 6-cm radio sources was observed in proximity to R Aqr. Most of these discrete sources lie about 3 arcmin south and/or west of R Aqr; coupled with previous 20-cm data, limits on the spectral indices suggest a thermal nature for some of these sources. If the thermal members of the cluster are associated with R Aqr, they may indicate a prehistoric eruption of the system's suspected recurrent nova. The nonthermal cluster members may be extragalactic background radio sources.

  13. Large-Scale All-Dielectric Metamaterial Perfect Reflectors

    DOE PAGES

    Moitra, Parikshit; Slovick, Brian A.; Li, Wei; Kravchenko, Ivan I.; Briggs, Dayrl P.; Krishnamurthy, S.; Valentine, Jason

    2015-05-08

    All-dielectric metamaterials offer a potential low-loss alternative to plasmonic metamaterials at optical frequencies. In this paper, we take advantage of the low absorption loss as well as the simple unit cell geometry to demonstrate large-scale (centimeter-sized) all-dielectric metamaterial perfect reflectors made from silicon cylinder resonators. These perfect reflectors, operating in the telecommunications band, were fabricated using self-assembly based nanosphere lithography. In spite of the disorder originating from the self-assembly process, the average reflectance of the metamaterial perfect reflectors is 99.7% at 1530 nm, surpassing the reflectance of metallic mirrors. Moreover, the spectral separation of the electric and magnetic resonances can be chosen to achieve the required reflection bandwidth while maintaining a high tolerance to disorder. Finally, the scalability of this design could lead to new avenues of manipulating light for low-loss and large-area photonic applications.

  14. Large-scale coastal evolution of Louisiana's barrier islands

    USGS Publications Warehouse

    List, Jeffrey H.; Jaffe, Bruce E.; Sallenger, Asbury H., Jr.

    1991-01-01

    The prediction of large-scale coastal change is an extremely important, but distant goal. Here we describe some of our initial efforts in this direction, using historical bathymetric information along a 150 km reach of the rapidly evolving barrier island coast of Louisiana. Preliminary results suggest that the relative sea level rise rate, though extremely high in the area, has played a secondary role in coastal erosion over the last 100 years, with longshore transport of sand-sized sediment being the primary cause. Prediction of future conditions is hampered by a general lack of understanding of erosion processes; however, an examination of the changing volumes of sand stored in a large ebb-tidal delta system suggests a continued high rate of shoreline retreat driven by the longshore redistribution of sand.

  15. Modeling and Dynamic Simulation of a Large Scale Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Lv, C.; Qiu, T. N.; Wu, J. H.; Xie, X. J.; Li, Q.

    In order to simulate the transient behaviors of a newly developed 2 kW helium refrigerator, a numerical model of the critical equipment, including a screw compressor with variable-frequency drive, plate-fin heat exchangers, a turbine expander, and pneumatic valves, was developed. In the simulation, the helium thermodynamic properties are calculated from the 32-parameter modified Benedict-Webb-Rubin (MBWR) equation of state. The start-up process of the warm compressor station with the gas management subsystem, and the cool-down process of the cold box in actual operation, were dynamically simulated. The developed model was verified by comparing the simulated results with the experimental data. In addition, system responses to increasing heat load were simulated. This model can also be used to design and optimize other large-scale helium refrigerators.

  16. Ocean fertilization experiments may initiate a large scale phytoplankton bloom

    NASA Astrophysics Data System (ADS)

    Neufeld, Zoltán; Haynes, Peter H.; Garçon, Véronique; Sudre, Joël

    2002-06-01

    Oceanic plankton plays an important role in the marine food chain and, through its significant contribution to the global carbon cycle, can also influence the climate. A plankton bloom is a sudden, rapid increase of the population; it occurs naturally in the North Atlantic as a result of seasonal changes. Ocean fertilization experiments have shown that supply of iron, an important trace element, can trigger a phytoplankton bloom in oceanic regions with low natural phytoplankton density. Here we use a simple mathematical model of the combined effects of stirring by ocean eddies and plankton evolution to consider the impact of a transient local perturbation, e.g. in the form of iron enrichment as in recent `ocean fertilization' experiments. The model not only explains aspects of the bloom observed in such experiments but predicts the unexpected outcome of a large-scale bloom that in its extent could be comparable to the spring bloom in the North Atlantic.
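
    How a transient local perturbation can grow into a full bloom can be illustrated with the closed-form logistic growth law. This is only an illustrative stand-in for the paper's model (which couples plankton dynamics to eddy stirring); the rate, capacity, and time values are invented:

```python
import math

def logistic_bloom(p0, r=0.5, K=1.0, t=30.0):
    """Closed-form logistic growth P(t) = K / (1 + (K/p0 - 1) e^{-rt}):
    an illustrative stand-in showing how a small fertilization-induced
    perturbation p0 can grow toward the carrying capacity K."""
    return K / (1 + (K / p0 - 1) * math.exp(-r * t))

# A tiny perturbation approaches the carrying capacity after enough time...
assert logistic_bloom(1e-4) > 0.9
# ...while at t = 0 it is still negligible.
assert logistic_bloom(1e-4, t=0.0) < 1e-3
```

    The paper's point is that stirring spreads such a locally triggered excitation over a far larger area than the fertilized patch itself; the logistic law only captures the local growth part of that argument.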

  17. Hierarchical features of large-scale cortical connectivity

    NASA Astrophysics Data System (ADS)

    da F. Costa, L.; Sporns, O.

    2005-12-01

    The analysis of complex networks has revealed patterns of organization in a variety of natural and artificial systems, including neuronal networks of the brain at multiple scales. In this paper, we describe a novel analysis of the large-scale connectivity between regions of the mammalian cerebral cortex, utilizing a set of hierarchical measurements proposed recently. We examine previously identified functional clusters of brain regions in macaque visual cortex and cat cortex and find significant differences between such clusters in terms of several hierarchical measures, revealing differences in how these clusters are embedded in the overall cortical architecture. For example, the ventral cluster of visual cortex maintains structurally more segregated, less divergent connections than the dorsal cluster, which may point to functionally different roles of their constituent brain regions.

  18. Complex modular structure of large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
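
    Module detection and the modularity score it optimizes can be sketched on a toy graph. Note the stand-in: the sketch below uses NetworkX's greedy modularity maximization, not the random-walk-based method of the paper, and the two-clique "brain network" is invented:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy network: two densely connected 8-node groups joined by a single edge.
G = nx.disjoint_union(nx.complete_graph(8), nx.complete_graph(8))
G.add_edge(0, 8)   # bridge between the two cliques

# Greedy modularity maximization (a stand-in for the paper's
# random-walk-based method, which this sketch does not implement).
communities = greedy_modularity_communities(G)
q = modularity(G, communities)

assert len(communities) == 2   # the two cliques are recovered as modules
assert q > 0.4                 # a strongly modular partition
```

    At voxel level the graph has tens of thousands of nodes, which is precisely why scalable (e.g. random-walk-based) methods are needed instead of exhaustive partitioning.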

  19. Large scale motions of thermal transport in a turbulent channel

    NASA Astrophysics Data System (ADS)

    Dharmarathne, Suranga; Tutkun, Murat; Araya, Guillermo; Leonardi, Stefano; Castillo, Luciano

    2015-11-01

    The importance of large-scale motions (LSMs) for thermal transport in a turbulent channel flow at a friction Reynolds number of 394 is investigated. Two-point correlation analysis reveals that the LSMs which contribute significantly to turbulence kinetic energy and scalar transport are reminiscent of hairpin packets. A low-order mode representation of the original fields using proper orthogonal decomposition (POD) reveals that the most dominant mode transporting turbulence kinetic energy is 3-4 channel half-heights long, while the structures contributing to scalar transport are 2-4 channel half-heights long. Consequently, the study shows that LSMs are effective in transporting both the streamwise component of turbulence kinetic energy and scalar variances.
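
    The low-order POD representation mentioned above can be sketched with the snapshot method via an SVD. The synthetic single-mode data below is an invented stand-in for the channel-flow velocity and scalar fields:

```python
import numpy as np

# Snapshot POD via SVD: rows are grid points, columns are time snapshots.
# Synthetic data: one dominant spatial mode plus weak noise (an illustrative
# stand-in for the channel-flow fields in the study).
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 50)
snapshots = np.outer(np.sin(x), np.cos(t)) + 0.01 * rng.standard_normal((200, 50))

# Subtract the temporal mean; the left singular vectors are then the POD
# modes, and the squared singular values rank their energy content.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
modes, sigma, _ = np.linalg.svd(fluct, full_matrices=False)
energy = sigma**2 / np.sum(sigma**2)

# The single coherent structure captures nearly all of the energy.
assert energy[0] > 0.9
```

    Truncating to the first few columns of `modes` gives the low-order reconstruction used to isolate the energy- and scalar-carrying structures.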

  20. A large-scale evaluation of computational protein function prediction.

    PubMed

    Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-03-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools. PMID:23353650
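
    The basic ingredient of evaluating function predictions against curated annotations is term-centric precision and recall. A hedged sketch follows; the full CAFA benchmark additionally thresholds prediction scores and reports the maximum F-measure (Fmax), which this fixed-set version omits, and the GO terms are arbitrary examples:

```python
def precision_recall(predicted, true_terms):
    """Precision/recall of a predicted GO-term set for one protein against
    its curated annotation; a simplified, fixed-threshold ingredient of
    CAFA-style evaluation (not the full Fmax protocol)."""
    predicted, true_terms = set(predicted), set(true_terms)
    tp = len(predicted & true_terms)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(true_terms) if true_terms else 0.0
    return precision, recall

# One of two predicted terms is correct; one of two true terms is recovered.
p, r = precision_recall({"GO:0003824", "GO:0005515"},
                        {"GO:0003824", "GO:0016787"})
assert (p, r) == (0.5, 0.5)
```

    Averaging such per-protein scores over the 866 benchmark targets, at the score threshold maximizing F, is roughly how methods are ranked in the assessment.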