Sample records for scale problems arising

  1. A Nonparametric Framework for Comparing Trends and Gaps across Tests

    ERIC Educational Resources Information Center

    Ho, Andrew Dean

    2009-01-01

    Problems of scale typically arise when comparing test score trends, gaps, and gap trends across different tests. To overcome some of these difficulties, test score distributions on the same score scale can be represented by nonparametric graphs or statistics that are invariant under monotone scale transformations. This article motivates and then…

  2. Parallel block schemes for large scale least squares computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
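
    The block-angular elimination described in this abstract reduces, after a QR factorization of each diagonal block, to a small least squares problem in the coupling unknowns followed by independent back-substitutions; both outer stages parallelize over blocks. Below is a minimal NumPy sketch of that structure; the sizes, random data, and variable names are illustrative toys, not the geodetic problem above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy block-angular least squares: minimize sum_i ||A_i x_i + B_i y - b_i||^2,
    # where x_i are block-local unknowns and y couples all blocks.
    blocks, m, n_local, n_common = 4, 30, 5, 3
    A = [rng.standard_normal((m, n_local)) for _ in range(blocks)]
    B = [rng.standard_normal((m, n_common)) for _ in range(blocks)]
    b = [rng.standard_normal(m) for _ in range(blocks)]

    # Step 1 (parallelizable): orthogonally eliminate each diagonal block.
    # After QR of A_i, the rows below n_local no longer involve x_i and
    # contribute to a reduced problem in y alone.
    R_list, top_parts, reduced_B, reduced_b = [], [], [], []
    for Ai, Bi, bi in zip(A, B, b):
        Q, R = np.linalg.qr(Ai, mode="complete")   # full m x m Q
        QtB, Qtb = Q.T @ Bi, Q.T @ bi
        R_list.append(R[:n_local])                 # upper-triangular factor
        top_parts.append((QtB[:n_local], Qtb[:n_local]))
        reduced_B.append(QtB[n_local:])            # rows decoupled from x_i
        reduced_b.append(Qtb[n_local:])

    # Step 2: solve the reduced least squares problem for the coupling unknowns.
    y, *_ = np.linalg.lstsq(np.vstack(reduced_B), np.concatenate(reduced_b),
                            rcond=None)

    # Step 3 (parallelizable back-substitution): recover each x_i.
    x = [np.linalg.solve(R, tb - TB @ y) for R, (TB, tb) in zip(R_list, top_parts)]

    # Check against a monolithic solve of the assembled system.
    full = np.block([[A[i] if j == i else np.zeros((m, n_local))
                      for j in range(blocks)] + [B[i]] for i in range(blocks)])
    ref, *_ = np.linalg.lstsq(full, np.concatenate(b), rcond=None)
    assert np.allclose(np.concatenate(x + [y]), ref)
    ```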

  3. Solving LP Relaxations of Large-Scale Precedence Constrained Problems

    NASA Astrophysics Data System (ADS)

    Bienstock, Daniel; Zuckerberg, Mark

    We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry, our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.

  4. Sparse Measurement Systems: Applications, Analysis, Algorithms and Design

    ERIC Educational Resources Information Center

    Narayanaswamy, Balakrishnan

    2011-01-01

    This thesis deals with "large-scale" detection problems that arise in many real world applications such as sensor networks, mapping with mobile robots and group testing for biological screening and drug discovery. These are problems where the values of a large number of inputs need to be inferred from noisy observations and where the…

  5. Medical Student and Junior Doctors' Tolerance of Ambiguity: Development of a New Scale

    ERIC Educational Resources Information Center

    Hancock, Jason; Roberts, Martin; Monrouxe, Lynn; Mattick, Karen

    2015-01-01

    The practice of medicine involves inherent ambiguity, arising from limitations of knowledge, diagnostic problems, complexities of treatment and outcome and unpredictability of patient response. Research into doctors' tolerance of ambiguity is hampered by poor conceptual clarity and inadequate measurement scales. We aimed to create and pilot a…

  6. Applied mathematical problems in modern electromagnetics

    NASA Astrophysics Data System (ADS)

    Kriegsman, Gregory

    1994-05-01

    We have primarily investigated two classes of electromagnetic problems. The first contains the quantitative description of microwave heating of dispersive and conductive materials. Such problems arise, for example, when biological tissues are exposed, accidentally or purposefully, to microwave radiation. Other instances occur in ceramic processing, such as sintering and microwave assisted chemical vapor infiltration, and in other industrial drying processes, such as the curing of paints and concrete. The second class characterizes the scattering of microwaves by complex targets which possess two or more disparate length and/or time scales. Spatially complex scatterers arise in a variety of applications, such as large gratings and slowly changing guiding structures. The former are useful in developing microstrip energy couplers while the latter can be used to model anatomical subsystems (e.g., the open guiding structure composed of two legs and the adjoining lower torso). Temporally complex targets occur in applications involving dispersive media whose relaxation times differ by orders of magnitude from thermal and/or electromagnetic time scales. For both cases the mathematical description of the problems gives rise to complicated ill-conditioned boundary value problems, whose accurate solutions require a blend of both asymptotic techniques, such as multiscale methods and matched asymptotic expansions, and numerical methods incorporating radiation boundary conditions, such as finite differences and finite elements.

  7. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge that we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy efficient manner?"

  8. Study of Varying Boundary Layer Height on Turret Flow Structures

    DTIC Science & Technology

    2011-06-01

    fluid dynamics. The difficulties of the problem arise in modeling several complex flow features including separation, reattachment, three-dimensional...impossible. In this case, the approach is to create a model to calculate the properties of interest. The main issue with resolving turbulent flows...operation and their effect is modeled through subgrid scale models. As a result, the most important turbulent scales are resolved and the

  9. New design for interfacing computers to the Octopus network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sloan, L.J.

    1977-03-14

    The Lawrence Livermore Laboratory has several large-scale computers which are connected to the Octopus network. Several difficulties arise in providing adequate resources along with reliable performance. To alleviate some of these problems, a new method of bringing large computers into the Octopus environment is proposed.

  10. Some current themes in physical hydrology of the land-atmosphere interface

    USGS Publications Warehouse

    Milly, P.C.D.

    1991-01-01

    Certain themes arise repeatedly in current literature dealing with the physical hydrology of the interface between the atmosphere and the continents. Papers contributed to the 1991 International Association of Hydrological Sciences Symposium on Hydrological Interactions between Atmosphere, Soil and Vegetation echo these themes, which are discussed in this paper. The land-atmosphere interface is the region where atmosphere, soil, and vegetation have mutual physical contact, and a description of exchanges of matter or energy among these domains must often consider the physical properties and states of the entire system. A difficult family of problems is associated with the reconciliation of the wide range of spatial scales that arise in the course of observational, theoretical, and modeling activities. These scales are determined by some of the physical elements of the interface, by patterns of natural variability of the physical composition of the interface, by the dynamics of the processes at the interface, and by methods of measurement and computation. Global environmental problems are seen by many hydrologists as a major driving force for development of the science. The challenge for hydrologists will be to respond to this force as scientists rather than problem-solvers.

  11. Observations from Space in a Global Ecology Programme

    ERIC Educational Resources Information Center

    Kondratyev, Kirill Ya

    1974-01-01

    In order to resolve problems arising from the possibility of ecological crisis, we need more and better information about our environment. The condition of nature on a planetary scale can be monitored efficiently only with the aid of satellites, human observers in earth orbit, and computer analysis of data. (Author/GS)

  12. The limitations of staggered grid finite differences in plasticity problems

    NASA Astrophysics Data System (ADS)

    Pranger, Casper; Herrendörfer, Robert; Le Pourhiet, Laetitia

    2017-04-01

    Most crustal-scale applications operate at grid sizes much larger than those at which plasticity occurs in nature. As a consequence, plastic shear bands often localize to the scale of one grid cell, and numerical ploys — like introducing an artificial length scale — are needed to counter this. If for whatever reasons (good or bad) this is not done, we find that problems may arise due to the fact that in the staggered grid finite difference discretization, unknowns like components of the stress tensor and velocity vector are located in physically different positions. This incurs frequent interpolation, reducing the accuracy of the discretization. For purely stress-dependent plasticity problems the adverse effects might be contained because the magnitude of the stress discontinuity across a plastic shear band is limited. However, we find that when rate-dependence of friction is added in the mix, things become ugly really fast and the already hard-to-solve and highly nonlinear problem of plasticity incurs an extra penalty.

  13. Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos

    DTIC Science & Technology

    2009-05-01

    instruments applied to mode-73. Deep-Sea Research, 23:559–582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman ...the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems...equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors

  14. Contemporary Religious Conflicts and Religious Education in the Republic of Korea

    ERIC Educational Resources Information Center

    Kim, Chongsuh

    2007-01-01

    The Republic of (South) Korea is a multi-religious society. Naturally, large- or small-scale conflicts arise between religious groups. Moreover, inter-religious troubles related to the educational system, such as educational ideologies, textbook content and forced chapel attendance, have often caused social conflicts. Most of the problems derive…

  15. Teaching Discrete and Programmable Logic Design Techniques Using a Single Laboratory Board

    ERIC Educational Resources Information Center

    Debiec, P.; Byczuk, M.

    2011-01-01

    Programmable logic devices (PLDs) are used at many universities in introductory digital logic laboratories, where kits containing a single high-capacity PLD replace "standard" sets containing breadboards, wires, and small- or medium-scale integration (SSI/MSI) chips. From the pedagogical point of view, two problems arise in these…

  16. The Opening of Higher Education

    ERIC Educational Resources Information Center

    Matkin, Gary W.

    2012-01-01

    In a 1974 report presented to the Organisation for Economic Co-operation and Development (OECD), Martin Trow laid out a framework for understanding large-scale, worldwide changes in higher education. Trow's essay also pointed to the problems that "arise out of the transition from one phase to another in a broad pattern of development of higher…

  17. The Use of Kruskal-Newton Diagrams for Differential Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T. Fishaleck and R.B. White

    2008-02-19

    The method of Kruskal-Newton diagrams for the solution of differential equations with boundary layers is shown to provide rapid intuitive understanding of layer scaling and can result in the conceptual simplification of some problems. The method is illustrated using equations arising in the theory of pattern formation and in plasma physics.

  18. On a Game of Large-Scale Projects Competition

    NASA Astrophysics Data System (ADS)

    Nikonov, Oleg I.; Medvedeva, Marina A.

    2009-09-01

    The paper is devoted to game-theoretical control problems motivated by economic decision-making situations arising in the realization of large-scale projects, such as designing and putting into operation new gas or oil pipelines. A non-cooperative two-player game is considered with payoff functions of a special type, for which standard existence theorems and algorithms for searching Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].

  19. [Eyberg inventory of child behavior. Standardization of the Spanish version and its usefulness in ambulatory pediatrics].

    PubMed

    García-Tornel Florensa, S; Calzada, E J; Eyberg, S M; Mas Alguacil, J C; Vilamala Serra, C; Baraza Mendoza, C; Villena Collado, H; González García, M; Calvo Hernández, M; Trinxant Doménech, A

    1998-05-01

    Taking into account the high prevalence of behavioral problems in the pediatric outpatient clinic, a need arises for a useful and easy-to-administer tool for the evaluation of this problem. The psychometric characteristics of the Spanish version of the Eyberg Child Behavior Inventory (ECBI), a 36-item questionnaire [in Spanish, Inventario de Eyberg para el Comportamiento del Niño (IECN)], were established. The ECBI questionnaire was translated into Spanish. The basis of the ECBI is the evaluation of the child's behavior through the parents' answers to the questionnaire. Healthy children between 2 and 12 years of age were included and were taken from pediatric outpatient clinics from urban and suburban areas of Barcelona and from our hospital's own ambulatory clinic. The final sample included 518 subjects. The mean score on the intensity scale was 96.8 and on the problem scale 3.9. Internal consistency (Cronbach's alpha) was 0.73, and the test-retest reliability had an r of 0.89 (p < 0.001) for the intensity scale and r = 0.93 (p < 0.001) for the problem scale. Interrater reliability for the intensity scale was r = 0.58 (p < 0.001) and r = 0.32 (p < 0.001) for the problem scale. Concurrent validity between the two scales was r = 0.343 (p < 0.001). The IECN is a useful and easy-to-apply tool for the pediatrician's office as a method for early detection of behavior problems.
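
    For readers unfamiliar with the internal-consistency statistic quoted here, Cronbach's alpha is computed from the item variances and the variance of the total score. A small self-contained sketch (the scores below are fabricated for illustration, not ECBI data):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_subjects, n_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Illustrative fabricated scores: 5 subjects x 4 items.
    scores = np.array([[3, 4, 3, 5],
                       [2, 2, 3, 2],
                       [5, 5, 4, 5],
                       [1, 2, 1, 2],
                       [4, 3, 4, 4]])
    print(round(cronbach_alpha(scores), 3))
    ```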

  20. Critical Analysis of the Mathematical Formalism of Theoretical Physics. V. Foundations of the Theory of Negative Numbers

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2015-04-01

    Analysis of the foundations of the theory of negative numbers is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. The statement of the problem is as follows. As is known, point O in the Cartesian coordinate system XOY determines the position of zero on the scale. The number ``zero'' belongs to both the scale of positive numbers and the scale of negative numbers. In this case, the following formal-logical contradiction arises: the number 0 is both a positive number and a negative number; or, equivalently, the number 0 is neither a positive number nor a negative number, i.e. the number 0 has no sign. Then the following question arises: Do negative numbers exist in science and practice? A detailed analysis of the problem shows that negative numbers do not exist, because the foundations of the theory of negative numbers are contrary to the formal-logical laws. It is proved that: (a) all numbers have no signs; (b) the concepts ``negative number'' and ``negative sign of number'' represent a formal-logical error; (c) the signs ``plus'' and ``minus'' are only symbols of mathematical operations. These logical errors determine the essence of the theory of negative numbers: the theory of negative numbers is a false theory.

  1. Boundary Korn Inequality and Neumann Problems in Homogenization of Systems of Elasticity

    NASA Astrophysics Data System (ADS)

    Geng, Jun; Shen, Zhongwei; Song, Liang

    2017-06-01

    This paper is concerned with a family of elliptic systems of linear elasticity with rapidly oscillating periodic coefficients, arising in the theory of homogenization. We establish uniform optimal regularity estimates for solutions of Neumann problems in a bounded Lipschitz domain with L^2 boundary data. The proof relies on a boundary Korn inequality for solutions of systems of linear elasticity and uses a large-scale Rellich estimate obtained in Shen (Anal PDE, arXiv:1505.00694v2).

  2. A Minimum-Residual Finite Element Method for the Convection-Diffusion Equation

    DTIC Science & Technology

    2013-05-01

    4p. We note that these two choices of discretization for V are not mutually exclusive, and that novel choices for Vh are likely the key to yielding...the inside with the positive-definite operator A, which is precisely the discrete system that arises under the optimal test function framework of DPG...converts the fine-scale problem into a symmetric positive-definite one, allowing for a well-behaved subgrid model of fine scale behavior. We begin again

  3. Building a Model of Support for Preschool Children with Speech and Language Disorders

    ERIC Educational Resources Information Center

    Robertson, Natalie; Ohi, Sarah

    2016-01-01

    Speech and language disorders impede young children's abilities to communicate and are often associated with a number of behavioural problems arising in the preschool classroom. This paper reports a small-scale study that investigated 23 Australian educators' and 7 Speech Pathologists' experiences in working with three to five year old children…

  4. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...

    2016-02-24

    In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
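
    The constrained-optimization formulation can be illustrated in one dimension, where "cell volumes" are interval lengths. The sketch below uses SciPy's SLSQP routine, which is an SQP method, though not the paper's specialized multigrid-preconditioned solver; all data are toy values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy 1-D analogue of volume correction: move mesh nodes as little as
    # possible (in the l2 sense) while matching prescribed cell "volumes"
    # (interval lengths). The paper treats 2-D/3-D meshes with validity and
    # convexity constraints; this keeps only the core optimization structure.
    x_src = np.array([0.0, 0.9, 2.1, 3.0, 4.0])     # source mesh nodes
    v_target = np.ones(4)                           # prescribed cell volumes

    res = minimize(
        lambda x: 0.5 * np.sum((x - x_src) ** 2),   # distance to source mesh
        x_src,                                      # start from the source mesh
        method="SLSQP",                             # an SQP method in SciPy
        constraints=[{"type": "eq",
                      "fun": lambda x: np.diff(x) - v_target}],
    )
    print(res.x)           # corrected mesh, approximately [0, 1, 2, 3, 4]
    print(np.diff(res.x))  # cell volumes now match v_target
    ```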

  5. Relative locality and the soccer ball problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amelino-Camelia, Giovanni; Freidel, Laurent; Smolin, Lee

    We consider the behavior of macroscopic bodies within the framework of relative locality [G. Amelino-Camelia, L. Freidel, J. Kowalski-Glikman, and L. Smolin, arXiv:1101.0931]. This is a recent proposal for Planck scale modifications of the relativistic dynamics of particles, which are described as arising from deformations in the geometry of momentum space. We consider and resolve a common objection against such proposals, which is that, even if the corrections are small for elementary particles in current experiments, they are huge when applied to composite systems such as soccer balls, planets, and stars, with energies E_macro much larger than M_P. We show that this soccer ball problem does not arise within the framework of relative locality because the nonlinear effects for the dynamics of a composite system with N elementary particles appear at most of order E_macro/(N·M_P).

  6. On a theorem of existence for scaling problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osmolovskii, V.G.

    1995-12-05

    The authors study the question of the existence of the global minimum of a functional over a set of functions, where Ω ⊂ R^n is a bounded domain and a fixed function K(x,y) = K(y,x) belongs to L_2(Ω × Ω). Such functionals arise in some mathematical models of economics and sociology.

  7. Improved Flux Formulations for Unsteady Low Mach Number Flows

    DTIC Science & Technology

    2012-07-01

    challenging problem since it requires the resolution of disparate time scales. Unsteady effects may arise from a combination of hydrodynamic effects...Many practical applications including rotorcraft flows, jets and shear layers include a combination of both acoustic and hydrodynamic effects...are computed independently as scalar formulations thus making it possible to independently tailor the dissipation for hydrodynamic and acoustic

  8. SUSY’s Ladder: Reframing sequestering at Large Volume

    DOE PAGES

    Reece, Matthew; Xue, Wei

    2016-04-07

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  9. Spatial and Temporal Scaling of Thermal Infrared Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Goel, Narendra S.

    1995-01-01

    Although remote sensing has a central role to play in the acquisition of synoptic data obtained at multiple spatial and temporal scales to facilitate our understanding of local and regional processes as they influence the global climate, the use of thermal infrared (TIR) remote sensing data in this capacity has received only minimal attention. This results from some fundamental challenges that are associated with employing TIR data collected at different space and time scales, either with the same or different sensing systems, and also from other problems that arise in applying a multiple scaled approach to the measurement of surface temperatures. In this paper, we describe some of the more important problems associated with using TIR remote sensing data obtained at different spatial and temporal scales, examine why these problems appear as impediments to using multiple scaled TIR data, and provide some suggestions for future research activities that may address these problems. We elucidate the fundamental concept of scale as it relates to remote sensing and explore how space and time relationships affect TIR data from a problem-dependency perspective. We also describe how linearity and non-linearity in observation-versus-parameter relationships affect the quantitative analysis of TIR data. Some insight is given on how the atmosphere between target and sensor influences the accurate measurement of surface temperatures and how these effects will be compounded in analyzing multiple scaled TIR data. Last, we describe some of the challenges in modeling TIR data obtained at different space and time scales and discuss how multiple scaled TIR data can be used to provide new and important information for measuring and modeling land-atmosphere energy balance processes.

  10. [The function, activity and participation: the occupational reintegration].

    PubMed

    Zampolini, Mauro

    2015-01-01

    The return to work is a significant outcome after amputation, and reaching this goal requires measuring the process properly. Unfortunately, the scales available for amputees are often focused only on specific groups of problems. The International Classification of Functioning (ICF) can constitute the frame of reference in which the available scales converge and according to which problems related to disability can be defined. For the amputee, the question of return to work arises differently for traumatic and non-traumatic conditions. For the former, return to work is a priority, given the younger age of the patients. For the latter, given the more advanced age, return to work is a measure of rehabilitation success that is not particularly relevant.

  11. Cosmological signatures of a UV-conformal standard model.

    PubMed

    Dorsch, Glauber C; Huber, Stephan J; No, Jose Miguel

    2014-09-19

    Quantum scale invariance in the UV has been recently advocated as an attractive way of solving the gauge hierarchy problem arising in the standard model. We explore the cosmological signatures at the electroweak scale when the breaking of scale invariance originates from a hidden sector and is mediated to the standard model by gauge interactions (gauge mediation). These scenarios, while being hard to distinguish from the standard model at the LHC, can give rise to a strong electroweak phase transition leading to the generation of a large stochastic gravitational wave signal possibly within reach of future space-based detectors such as eLISA and BBO. This relic would be the cosmological imprint of the breaking of scale invariance in nature.

  12. Obtaining lutein-rich extract from microalgal biomass at preparative scale.

    PubMed

    Fernández-Sevilla, José M; Fernández, F Gabriel Acién; Grima, Emilio Molina

    2012-01-01

    Lutein extracts are in increasing demand due to their alleged role in the prevention of degenerative disorders such as age-related macular degeneration (AMD). Lutein extracts are currently obtained from plant sources, but microalgae have been demonstrated to be a competitive source likely to become an alternative. The extraction of lutein from microalgae poses specific problems that arise from the different structure and composition of the source biomass. Here we present a method for the recovery of lutein-rich carotenoid extracts from microalgal biomass at the kilogram scale.

  13. On the Asymptotic Behavior of a Log Gas in the Bulk Scaling Limit in the Presence of a Varying External Potential I

    NASA Astrophysics Data System (ADS)

    Bothner, Thomas; Deift, Percy; Its, Alexander; Krasovsky, Igor

    2015-08-01

    We study the determinant of the integrable Fredholm operator K_s acting on the interval (-1, 1). This determinant arises in the analysis of a log-gas of interacting particles in the bulk-scaling limit, at fixed inverse temperature, in the presence of an external potential supported on a subinterval. We evaluate, in particular, the double scaling limit of the determinant in this regime. This problem was first considered by Dyson (Chen Ning Yang: A Great Physicist of the Twentieth Century. International Press, Cambridge, pp. 131-146, 1995).

  14. Recognition by Linear Combination of Models

    DTIC Science & Technology

    1989-08-01

    to the model (or to the viewed object) prior to, or during, the matching stage. Such an approach is used in [Chien & Aggarwal 1987, Faugeras & Hebert...1986, Fishler & Bolles 1981, Huttenlocher & Ullman 1987, Lowe 1985, Thompson & Mundy 1987, Ullman 1986]. Key problems that arise in any alignment...includes 3-D rotation, translation and scaling, followed by an orthographic projection. The transformation is determined as in [Huttenlocher & Ullman 1987

  15. Legal, institutional, and political issues in transportation of nuclear materials at the back end of the LWR nuclear fuel cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lippek, H.E.; Schuller, C.R.

    1979-03-01

    A study was conducted to identify major legal and institutional problems and issues in the transportation of spent fuel and associated processing wastes at the back end of the LWR nuclear fuel cycle. (Most of the discussion centers on the transportation of spent fuel, since this activity will involve virtually all of the legal and institutional problems likely to be encountered in moving waste materials, as well.) Actions or approaches that might be pursued to resolve the problems identified in the analysis are suggested. Two scenarios for the industrial-scale transportation of spent fuel and radioactive wastes, taken together, highlight most of the major problems and issues of a legal and institutional nature that are likely to arise: (1) utilizing the Allied General Nuclear Services (AGNS) facility at Barnwell, SC, as a temporary storage facility for spent fuel; and (2) utilizing AGNS for full-scale commercial reprocessing of spent LWR fuel.

  16. Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem

    DOE PAGES

    Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...

    2015-10-02

    The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.
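
    A hedged numerical illustration of the structure involved: assuming the discretized Bethe–Salpeter Hamiltonian takes the standard block form H = [[A, B], [-conj(B), -conj(A)]] with A Hermitian and B complex symmetric, its spectrum is closed under z -> -conj(z), so eigenvalues pair up. This shows only the structural property, not the paper's structure-preserving parallel algorithms.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4

    # Assumed block structure: H = [[A, B], [-conj(B), -conj(A)]],
    # with A Hermitian and B complex symmetric.
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (A + A.conj().T) / 2                 # make A Hermitian
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = (B + B.T) / 2                        # make B complex symmetric

    H = np.block([[A, B], [-B.conj(), -A.conj()]])

    # The structure forces the spectrum to be closed under z -> -conj(z):
    # eigenvalues come in +/- pairs whenever they are real.
    ev = np.linalg.eigvals(H)
    print(np.allclose(np.sort_complex(ev), np.sort_complex(-ev.conj())))  # True
    ```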

  17. Monge-Ampère simulation of fourth order PDEs in two dimensions with application to elastic-electrostatic contact problems

    NASA Astrophysics Data System (ADS)

    DiPietro, Kelsey L.; Lindsay, Alan E.

    2017-11-01

    We present an efficient moving mesh method for the simulation of fourth order nonlinear partial differential equations (PDEs) in two dimensions using the Parabolic Monge-Ampère (PMA) equation. PMA methods have been successfully applied to the simulation of second order problems, but not to systems with higher order equations, which arise in many topical applications. Our main application is the resolution of fine scale behavior in PDEs describing elastic-electrostatic interactions. The PDE system considered has multiple parameter dependent singular solution modalities, including finite time singularities and sharp interface dynamics. We describe how to construct a dynamic mesh algorithm for such problems which incorporates known self similar or boundary layer scalings of the underlying equation to locate and dynamically resolve fine scale solution features in these singular regimes. We find a key step in using the PMA equation for mesh generation in fourth order problems is the adoption of a high order representation of the transformation from the computational to physical mesh. We demonstrate the efficacy of the new method on a variety of examples and establish several new results and conjectures on the nature of self-similar singularity formation in higher order PDEs.

  18. Comparative analysis of different variants of the Uzawa algorithm in problems of the theory of elasticity for incompressible materials.

    PubMed

    Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A

    2016-09-01

    Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
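
    The classical Uzawa iteration compared in this entry alternates a solve with the (1,1) block and a gradient-style update of the Lagrange multiplier. A dense toy version, with randomly generated matrices standing in for the finite-element blocks of an incompressible-elasticity discretization:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 12, 4

    # Saddle-point system [[A, B^T], [B, 0]] [u; p] = [f; g]: the structure
    # arising in mixed FE discretizations of incompressible materials, where
    # p is the multiplier (pressure) enforcing the constraint B u = g.
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)          # SPD "stiffness" block
    B = rng.standard_normal((m, n))      # constraint (divergence) block
    f, g = rng.standard_normal(n), rng.standard_normal(m)

    S = B @ np.linalg.solve(A, B.T)      # Schur complement (small, m x m)
    tau = 1.0 / np.linalg.norm(S, 2)     # step size ensuring convergence
    p = np.zeros(m)
    for _ in range(2000):
        u = np.linalg.solve(A, f - B.T @ p)   # inner solve with A
        p = p + tau * (B @ u - g)             # multiplier update
    u = np.linalg.solve(A, f - B.T @ p)

    # Compare with a direct solve of the full KKT system.
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    ref = np.linalg.solve(K, np.concatenate([f, g]))
    print(np.linalg.norm(np.concatenate([u, p]) - ref))   # error shrinks steadily
    ```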

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Zhaojun; Yang, Chao

    What is common among electronic structure calculations, design of MEMS devices, vibrational analysis of high speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.

  20. Triangles with Integer Dimensions

    ERIC Educational Resources Information Center

    Gilbertson, Nicholas J.; Rogers, Kimberly Cervello

    2016-01-01

    Interesting and engaging mathematics problems can come from anywhere. Sometimes great problems arise from interesting contexts. At other times, interesting problems arise from asking "what if" questions while appreciating the structure and beauty of mathematics. The intriguing problem described in this article resulted from the second…

  1. On strong homogeneity of a class of global optimization algorithms working with infinite and infinitesimal scales

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.

    2018-06-01

    The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity, meaning that a method produces the same sequences of points where the objective function is evaluated independently both of multiplication of the function by a scaling constant and of adding a shifting constant. In this paper, several aspects of global optimization using strongly homogeneous methods are considered. First, it is shown that even if a method possesses this property theoretically, numerically very small and large scaling constants can lead to ill-conditioning of the scaled problem. Second, a new class of global optimization problems where the objective function can have not only finite but also infinite or infinitesimal Lipschitz constants is introduced. Third, the strong homogeneity of several Lipschitz global optimization algorithms is studied in the framework of the Infinity Computing paradigm allowing one to work numerically with a variety of infinities and infinitesimals. Fourth, it is proved that a class of efficient univariate methods enjoys this property for finite, infinite and infinitesimal scaling and shifting constants. Finally, it is shown that in certain cases the usage of numerical infinities and infinitesimals can avoid ill-conditioning produced by scaling. Numerical experiments illustrating theoretical results are described.
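
    Strong homogeneity is easy to demonstrate for a purely comparison-based method: replacing f by c1*f + c2 with c1 > 0 cannot alter the sequence of evaluation points. A toy check with golden-section search (chosen for simplicity; it is not one of the paper's Lipschitz methods). As the paper warns, extreme scaling constants can still break the property in floating-point arithmetic.

    ```python
    import math

    def golden_section_trace(f, a, b, iters=25):
        """Golden-section search; returns the sequence of evaluation points.
        Only comparisons of f-values are used, so positive scaling and
        shifting of f leave the trace unchanged (in exact arithmetic)."""
        phi = (math.sqrt(5) - 1) / 2
        c, d = b - phi * (b - a), a + phi * (b - a)
        fc, fd = f(c), f(d)
        trace = [c, d]
        for _ in range(iters):
            if fc < fd:                       # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - phi * (b - a)
                fc = f(c)
                trace.append(c)
            else:                             # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + phi * (b - a)
                fd = f(d)
                trace.append(d)
        return trace

    f = lambda x: (x - 0.3) ** 2 + math.sin(5 * x)
    g = lambda x: 1000.0 * f(x) - 7.0          # scaled and shifted objective
    assert golden_section_trace(f, -1, 2) == golden_section_trace(g, -1, 2)
    ```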

  2. NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data.

    PubMed

    Pnevmatikakis, Eftychios A; Giovannucci, Andrea

    2017-11-01

    Motion correction is a challenging pre-processing problem that arises early in the analysis pipeline of calcium imaging data sequences. The motion artifacts in two-photon microscopy recordings can be non-rigid, arising from the finite time of raster scanning and non-uniform deformations of the brain medium. We introduce an algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching. NoRMCorre operates by splitting the field of view (FOV) into overlapping spatial patches along all directions. The patches are registered at a sub-pixel resolution for rigid translation against a regularly updated template. The estimated alignments are subsequently up-sampled to create a smooth motion field for each frame that can efficiently approximate non-rigid artifacts in a piecewise-rigid manner. Existing approaches either do not scale well in terms of computational performance or are targeted to non-rigid artifacts arising just from the finite speed of raster scanning, and thus cannot correct for the non-rigid motion observable in datasets from a large FOV. NoRMCorre can be run in an online mode, resulting in motion registration of streaming data at speeds comparable to, or even faster than, real time. We evaluate its performance with simple yet intuitive metrics and compare against other non-rigid registration methods on simulated data and in vivo two-photon calcium imaging datasets. Open source Matlab and Python code is also made available. The proposed method and accompanying code can be useful for solving large scale image registration problems in calcium imaging, especially in the presence of non-rigid deformations.
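
    The per-patch rigid registration step can be illustrated with FFT-based template matching. The sketch below recovers an integer-pixel translation by phase correlation; NoRMCorre itself registers overlapping patches at sub-pixel resolution against an updated template, so this is only the basic ingredient.

    ```python
    import numpy as np

    def phase_correlation_shift(template: np.ndarray, frame: np.ndarray):
        """Integer-pixel shift of `frame` relative to `template`,
        estimated by phase correlation (normalized cross-power spectrum)."""
        F, G = np.fft.fft2(template), np.fft.fft2(frame)
        cross_power = G * F.conj()
        cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrap-around peak indices to signed shifts.
        return tuple(p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape))

    # Toy demo: shift an image by (3, -5) and recover the displacement.
    rng = np.random.default_rng(3)
    img = rng.standard_normal((64, 64))
    moved = np.roll(img, shift=(3, -5), axis=(0, 1))
    print(phase_correlation_shift(img, moved))   # -> (3, -5)
    ```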

  3. Collaborative Research and Development Delivery. Order 0041: Models for the Prediction of Interfacial Properties

    DTIC Science & Technology

    2006-08-01

    and analytical techniques. Materials with larger grains, such as gamma titanium aluminide, can be instrumented with strain gages on each grain...scale. Materials such as Ti-15-Al-33Nb(at.%) have a significantly smaller microstructure than gamma titanium aluminide, therefore strain gages can...contact fatigue problems that arise at the blade-disk interface in aircraft engines. The stress fields can be used to predict the performance of

  4. Extensions of the standard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramond, P.

    1983-01-01

    In these lectures we focus on several issues that arise in theoretical extensions of the standard model. First we describe the kinds of fermions that can be added to the standard model without affecting known phenomenology. We focus in particular on three types: the vector-like completion of the existing fermions as would be predicted by a Kaluza-Klein type theory, which we find cannot be realistically achieved without some chiral symmetry; fermions which are vector-like by themselves, such as do appear in supersymmetric extensions; and finally anomaly-free chiral sets of fermions. We note that a chiral symmetry, such as the Peccei-Quinn symmetry, can be used to produce a vector-like theory which, at scales less than M_W, appears to be chiral. Next, we turn to the analysis of the second hierarchy problem which arises in Grand Unified extensions of the standard model, and plays a crucial role in proton decay of supersymmetric extensions. We review the known mechanisms for avoiding this problem and present a new one which seems to lead to the (family) triplication of the gauge group. Finally, this being a summer school, we present a list of homework problems. 44 references.

  5. The Problems of Diagnosis and Remediation of Dyscalculia.

    ERIC Educational Resources Information Center

    Price, Nigel; Youe, Simon

    2000-01-01

    Focuses on the problems of diagnosis and remediation of dyscalculia. Explores whether there is justification for believing that specific difficulty with mathematics arises jointly with a specific language problem, or whether a specific difficulty with mathematics can arise independently of problems with language. Uses a case study to illuminate…

  6. Guidance and control strategies for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Naidu, Desineni S.; Hibey, Joseph L.

    1989-01-01

    The optimal control problem arising in coplanar orbital transfer employing aeroassist technology and the fuel-optimal control problem arising in orbital transfer vehicles employing aeroassist technology are addressed.

  7. New convergence results for the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Bonettini, S.; Prato, M.

    2015-09-01

    The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem is provided establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, if the scaling matrices sequence satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
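
    A minimal sketch of a scaled-gradient-projection iteration for a nonnegatively constrained least squares problem. The diagonal scaling choice and the clipping bounds are illustrative assumptions (clipping keeps the scaling matrices uniformly bounded, the kind of condition the convergence theory requires); this is not the paper's specific selection rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 40, 20
    A = rng.random((m, n))
    b = A @ rng.random(n)                    # consistent nonnegative problem

    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)

    x = np.ones(n)
    L = np.linalg.norm(A.T @ A, 2)           # Lipschitz constant of the gradient
    for _ in range(200):
        g = grad(x)
        # Variable diagonal scaling (a choice popular in nonnegative image
        # restoration), clipped so the scaling matrices stay uniformly bounded.
        d = np.clip(x / np.maximum(A.T @ (A @ x), 1e-12), 1e-2, 1e2)
        y = np.maximum(x - (d / L) * g, 0.0)   # scaled step, then projection
        p = y - x                              # feasible descent direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5                           # Armijo backtracking
        x = x + t * p
    print(f(x))                                # objective decreases monotonically
    ```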

  8. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

    It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  9. [Continuity and discontinuity of the geomerida: the bionomic and biotic aspects].

    PubMed

    Kafanov, A I

    2005-01-01

    The view of the spatial structure of the geomerida (the Earth's life cover) as a continuum, which prevails in modern phytocoenology, is mostly determined by a physiognomic (landscape-bionomic) discrimination of vegetation components. In this connection, the geography of life forms appears as the subject of landscape-bionomic biogeography. In zoocoenology there is a tendency toward a synthesis of the alternative concepts, based on the assumption that there is no absolute continuum or absolute discontinuum in organic nature. The problem of the continuum and discontinuum of the living cover is a problem of scale and arises from the fractal structure of the geomerida. The continuum mainly belongs to regularities of topological order. At the regional and subregional scales the continuum of biochores is rather rare. The objective evidence of the relative discontinuity of the living cover is provided by significant alterations of species diversity at the regional, subregional and even topological scales. In contrast to conventionally discriminated units in physiognomically continuous vegetation, the same biotic complexes, represented as operational units of biogeographical and biocoenological zoning, are distinguished repeatedly and independently by different researchers. An area occupied by a certain flora (fauna, biota) could be considered as an elementary unit of biotic diversity (an elementary biotic complex).

  10. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed e, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^{1-e} of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
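
    The reachability idea can be illustrated on a toy single-runway model: states encode how many aircraft have been scheduled and when the previous one landed, and a feasible schedule exists iff a terminal state is reachable by BFS. The model below is a hypothetical simplification, not the paper's formulation.

    ```python
    from collections import deque

    def feasible_schedule(windows, sep):
        """Decide by graph reachability whether aircraft (in fixed order) can
        be assigned integer time slots within their [earliest, latest] windows
        with at least `sep` slots between consecutive landings. States are
        (aircraft_index, slot_of_previous_aircraft); a schedule exists iff a
        state with aircraft_index == len(windows) is reachable."""
        n = len(windows)
        start = (0, None)
        queue, seen = deque([start]), {start}
        while queue:
            i, last = queue.popleft()
            if i == n:
                return True
            lo, hi = windows[i]
            if last is not None:
                lo = max(lo, last + sep)
            for t in range(lo, hi + 1):
                state = (i + 1, t)
                if state not in seen:
                    seen.add(state)
                    queue.append(state)
        return False

    print(feasible_schedule([(0, 2), (1, 3), (2, 4)], sep=2))  # True: 0, 2, 4
    print(feasible_schedule([(0, 1), (0, 1), (0, 1)], sep=1))  # False
    ```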

  12. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  13. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle efficiently the diffusive regime. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  14. Homogenization techniques for population dynamics in strongly heterogeneous landscapes.

    PubMed

    Yurk, Brian P; Cobbold, Christina A

    2018-12-01

    An important problem in spatial ecology is to understand how population-scale patterns emerge from individual-level birth, death, and movement processes. These processes, which depend on local landscape characteristics, vary spatially and may exhibit sharp transitions through behavioural responses to habitat edges, leading to discontinuous population densities. Such systems can be modelled using reaction-diffusion equations with interface conditions that capture local behaviour at patch boundaries. In this work we develop a novel homogenization technique to approximate the large-scale dynamics of the system. We illustrate our approach, which also generalizes to multiple species, with an example of logistic growth within a periodic environment. We find that population persistence and the large-scale population carrying capacity are influenced by patch residence times that depend on patch preference, as well as by movement rates in adjacent patches. The forms of the homogenized coefficients yield key theoretical insights into how large-scale dynamics arise from small-scale features.
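
    The flavour of such homogenized coefficients can be seen in the classical 1-D setting (a simpler case than the paper's, which adds interface conditions for habitat-edge behaviour): for steady diffusion through a periodic two-patch landscape, the effective diffusion coefficient is the harmonic mean of the patch values, not the arithmetic mean.

    ```python
    import numpy as np

    D1, D2 = 1.0, 0.1              # diffusivities of the two patch types
    harmonic = 2.0 / (1.0 / D1 + 1.0 / D2)
    arithmetic = 0.5 * (D1 + D2)

    # Steady flux through one period [0, 1] with u(0) = 1, u(1) = 0: the flux
    # J = -D u' is constant, so layers act like resistors in series and
    # J = 1 / sum(h_i / D_i) over the grid cells.
    N = 1000
    D = np.where((np.arange(N) + 0.5) / N < 0.5, D1, D2)   # half D1, half D2
    J = 1.0 / np.sum((1.0 / N) / D)
    print(J, harmonic, arithmetic)  # J equals the harmonic mean, ~0.1818
    ```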

  15. Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology

    NASA Astrophysics Data System (ADS)

    Mohanty, B.; Kathuria, D.; Katzfuss, M.

    2016-12-01

    Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground based sensors on the other hand provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point scale Mesonet data and coarse scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM model handles change of support problems easily while performing accurate uncertainty quantification arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising due to the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM model can be considered as a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.

  16. Inequalities, Assessment and Computer Algebra

    ERIC Educational Resources Information Center

    Sangwin, Christopher J.

    2015-01-01

    The goal of this paper is to examine single variable real inequalities that arise as tutorial problems and to examine the extent to which current computer algebra systems (CAS) can (1) automatically solve such problems and (2) determine whether students' own answers to such problems are correct. We review how inequalities arise in contemporary…

  17. Sampling problems: The small scale structure of precipitation

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1981-01-01

    The quantitative measurement of precipitation characteristics for any area on the surface of the Earth is not an easy task. Precipitation is rather variable in both space and time, and the distribution of surface rainfall for a given location is typically substantially skewed. There are a number of precipitation processes at work in the atmosphere, and few of them are well understood. The formal theory on sampling and estimating precipitation appears considerably deficient. Little systematic attention is given to nonsampling errors that always arise in utilizing any measurement system. Although the precipitation measurement problem is an old one, it continues to be one that is in need of systematic and careful attention. A brief history of the presently competing measurement technologies should aid us in understanding the problems inherent in this measurement task.

  18. A Multiscale Nested Modeling Framework to Simulate the Interaction of Surface Gravity Waves with Nonlinear Internal Gravity Waves

    DTIC Science & Technology

    2015-09-30

    Meneveau, C., and L. Shen (2014), Large-eddy simulation of offshore wind farm, Physics of Fluids, 26, 025101. Zhang, Z., Fringer, O.B., and S.R. ... [Fragmentary record: the recoverable content concerns surface mixed layer processes arising from the combined actions of tides, winds and mesoscale currents; the internal wave field and how it impacts the surface waves; and an approach focused on the problem of modification of the wind-wave field.]

  19. The anamorphic universe

    NASA Astrophysics Data System (ADS)

    Ijjas, Anna; Steinhardt, Paul J.

    2015-10-01

    We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.

  20. The Marriage of Gas and Dust

    NASA Astrophysics Data System (ADS)

    Price, D. J.; Laibe, G.

    2015-10-01

    Dust-gas mixtures are the simplest example of a two-fluid mixture. We show that when simulating such mixtures with particles, or with particles coupled to grids, a problem arises due to the need to resolve a very small length scale when the coupling is strong. Since this occurs in the limit where the fluids are well coupled, we show how the dust-gas equations can be reformulated to describe a single-fluid mixture. The equations are similar to the usual fluid equations supplemented by a diffusion equation for the dust-to-gas ratio or, alternatively, the dust fraction. This solves a number of numerical problems as well as making the physics clear.

  1. Analysis of passive scalar advection in parallel shear flows: Sorting of modes at intermediate time scales

    NASA Astrophysics Data System (ADS)

    Camassa, Roberto; McLaughlin, Richard M.; Viotti, Claudio

    2010-11-01

    The time evolution of a passive scalar advected by parallel shear flows is studied for a class of rapidly varying initial data. Such situations are of practical importance in a wide range of applications from microfluidics to geophysics. In these contexts, it is well known that the long-time evolution of the tracer concentration is governed by Taylor's asymptotic theory of dispersion. In contrast, we focus here on the evolution of the tracer at intermediate time scales. We show how intermediate regimes can be identified before Taylor's, and in particular, how the Taylor regime can be delayed indefinitely by properly manufactured initial data. A complete characterization of the sorting of these time scales and their associated spatial structures is presented. These analytical predictions are compared with highly resolved numerical simulations. Specifically, this comparison is carried out for the case of periodic variations in the streamwise direction on the short scale with envelope modulations on the long scales, and shows how this structure can lead to "anomalously" diffusive transients in the evolution of the scalar into the ultimate regime governed by Taylor dispersion. Mathematically, the occurrence of these transients can be viewed as a competition in the asymptotic dominance between large Péclet (Pe) numbers and the long/short scale aspect ratios (L_Vel/L_Tracer ≡ k), two independent nondimensional parameters of the problem. We provide analytical predictions of the associated time scales by a modal analysis of the eigenvalue problem arising in the separation of variables of the governing advection-diffusion equation. The anomalous time scale in the asymptotic limit of large k Pe is derived for the short-scale periodic structure of the scalar's initial data, both for exactly solvable cases and in general with WKBJ analysis. In particular, the exactly solvable sawtooth flow is especially important in that it provides a shortcut to the exact solution of the eigenvalue problem for the physically relevant vanishing Neumann boundary conditions in linear-shear channel flow. We show that the life of the corresponding modes at large Pe for this case is shorter than that of the ones arising from shear-free zones in the fluid's interior. A WKBJ study of the latter modes provides a longer intermediate time evolution. This part of the analysis is technical, as the corresponding spectrum is dominated by asymptotically coalescing turning points in the limit of large Pe numbers. When large-scale initial data components are present, the transient regime of the WKBJ (anomalous) modes evolves into one governed by Taylor dispersion. This is studied by a regular perturbation expansion of the spectrum in the small wavenumber regimes.
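    In a standard nondimensionalization (a sketch; the paper's conventions may differ), the governing equation and the separation-of-variables eigenvalue problem mentioned above take the form

      \[
        \partial_t T + u(y)\,\partial_x T = \frac{1}{\mathrm{Pe}}\,\Delta T,
        \qquad
        T = e^{ikx + \lambda t}\,\phi(y)
        \;\;\Longrightarrow\;\;
        \phi'' - k^2\phi - ik\,\mathrm{Pe}\,u(y)\,\phi = \lambda\,\mathrm{Pe}\,\phi,
      \]

    with Neumann conditions \(\phi' = 0\) at the channel walls; the decay rates \(-\mathrm{Re}\,\lambda\) of these modes set the intermediate time scales, with Taylor dispersion recovered from the small-wavenumber branch.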

  2. Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widlund, Olof B.

    2015-06-09

    The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large-scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.

  3. Complexity and approximability for a problem of intersecting of proximity graphs with minimum number of equal disks

    NASA Astrophysics Data System (ADS)

    Kobylkin, Konstantin

    2016-10-01

    Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with a smallest-cardinality set of disks of fixed radius r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Despite their interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported over special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is given for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.
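    For illustration, a generic greedy heuristic for this kind of hitting problem can be sketched as follows (this is not the paper's constant-factor algorithm, and the finite candidate-centre set is a made-up simplification):

      import numpy as np

      def seg_point_dist(p, a, b):
          """Euclidean distance from point p to segment ab."""
          ab, ap = b - a, p - a
          t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
          return np.linalg.norm(p - (a + t * ab))

      def greedy_disk_hitting(segments, candidates, r):
          """Repeatedly pick the candidate disk centre whose r-disk hits the
          most segments not yet intersected."""
          unhit = set(range(len(segments)))
          centres = []
          while unhit:
              best, best_hits = None, set()
              for c in candidates:
                  hits = {i for i in unhit
                          if seg_point_dist(c, *segments[i]) <= r}
                  if len(hits) > len(best_hits):
                      best, best_hits = c, hits
              if not best_hits:      # nothing left reachable by a candidate
                  break
              centres.append(best)
              unhit -= best_hits
          return centres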

  4. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometers resolution and is quickly turning into a new technology for studying petrophysical properties of materials. An important step is the upscaling of these properties, as micron or sub-micron resolution can only be achieved on a sample scale of millimeters or less. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples, biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and its subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at pore-scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at pore-scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.

  5. Distributed-Memory Fast Maximal Independent Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of these algorithms are designed for shared-memory machines and are analyzed using the PRAM model; they do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby's seminal MIS algorithms, "Luby(A)" and "Luby(B)," to distributed-memory execution, and we evaluate their performance. We compare our results with the "Filtered MIS" implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
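    A serial sketch of the randomized idea behind Luby(A) (the distributed-memory mechanics studied in the paper are not reproduced here; names are illustrative):

      import random

      def luby_mis(adj):
          """Luby-style randomized MIS: adj maps vertex -> set of neighbours.

          Each round every active vertex draws a random priority; vertices
          that beat all active neighbours join the MIS, then they and their
          neighbours leave the active set. Expected O(log n) rounds.
          """
          active, mis = set(adj), set()
          while active:
              prio = {v: random.random() for v in active}
              winners = {v for v in active
                         if all(prio[v] < prio[u]
                                for u in adj[v] if u in active)}
              mis |= winners
              removed = set(winners)
              for v in winners:
                  removed |= adj[v] & active
              active -= removed
          return mis

      # Example: an MIS of a 5-cycle
      cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
      print(luby_mis(cycle))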

  6. Coal conversion: description of technologies and necessary biomedical and environmental research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1976-08-01

    This document contains a description of the biomedical and environmental research necessary to ensure the timely attainment of coal conversion technologies amenable to man and his environment. The document is divided into three sections. The first deals with the types of processes currently being considered for development; the data currently available on composition of product, process and product streams, and their potential effects; and problems that might arise from transportation and use of products. Section II is concerned with a description of the necessary research in each of the King-Muir categories, while the third section presents the research strategies necessary to assess the potential problems at the conversion plant (site specific) and those problems that might affect the general public and environment as a result of the operation of large-scale coal conversion plants.

  7. Balanced program plan. Volume IV. Coal conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richmond, C. R.; Reichle, D. E.; Gehrs, C. W.

    1976-05-01

    This document contains a description of the biomedical and environmental research necessary to ensure the timely attainment of coal conversion technologies amenable to man and his environment. The document is divided into three sections. The first deals with the types of processes currently being considered for development; the data currently available on composition of product, process and product streams, and their potential effects; and problems that might arise from transportation and use of products. Section II is concerned with a description of the necessary research in each of the King-Muir categories, while the third section presents the research strategies necessary to assess the potential problems at the conversion plant (site specific) and those problems that might affect the general public and environment as a result of the operation of large-scale coal conversion plants. (auth)

  8. Balanced program plan. Volume 4. Coal conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1976-05-01

    This document contains a description of the biomedical and environmental research necessary to ensure the timely attainment of coal conversion technologies amenable to man and his environment. The document is divided into three sections. The first deals with the types of processes currently being considered for development; the data currently available on composition of product, process and product streams, and their potential effects; and problems that might arise from transportation and use of products. Section II is concerned with a description of the necessary research in each of the King-Muir categories, while the third section presents the research strategies necessary to assess the potential problems at the conversion plant (site specific) and those problems that might affect the general public and environment as a result of the operation of large-scale coal conversion plants.

  9. The anamorphic universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ijjas, Anna; Steinhardt, Paul J., E-mail: aijjas@princeton.edu, E-mail: steinh@princeton.edu

    We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.

  10. Naturalness from a composite top?

    DOE PAGES

    Pierce, Aaron; Zhao, Yue

    2017-01-12

    Here, we consider a theory with composite top quarks but an elementary Higgs boson. The hierarchy problem can be solved by supplementing TeV-scale top compositeness with either supersymmetry or Higgs compositeness appearing at the multi-TeV scale. Furthermore, the Higgs boson couples to uncolored partons within the top quark. We also study how this approach can give rise to a novel screening effect that suppresses production of the colored top partners at the LHC. Strong constraints arise from Z → bb̄, as well as potentially from flavor physics. Independent of flavor considerations, current constraints imply a compositeness scale above a TeV; this implies that the model is likely tuned at the percent level. Four top quark production at the LHC is a smoking-gun probe of this scenario. New CP violation in D meson mixing is also possible.

  12. EPR-dosimetry of ionizing radiation

    NASA Astrophysics Data System (ADS)

    Popova, Mariia; Vakhnin, Dmitrii; Tyshchenko, Igor

    2017-09-01

    This article discusses the problems that arise during the radiation sterilization of medical products. A solution based on alanine EPR dosimetry is proposed. The parameters of the spectrometer and the methods of absorbed-dose calculation are given. In addition, the problems that arise during irradiation with heavy particles are investigated.

  13. Parasol: An Architecture for Cross-Cloud Federated Graph Querying

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lieberman, Michael; Choudhury, Sutanay; Hughes, Marisa

    2014-06-22

    Large-scale data fusion of multiple datasets can often provide insights that examining datasets individually cannot. However, when these datasets reside in different data centers and cannot be collocated due to technical, administrative, or policy barriers, a unique set of problems arises that hampers querying and data fusion. To address these problems, a system and architecture named Parasol is presented that enables federated queries over graph databases residing in multiple clouds. Parasol's design is flexible and requires only minimal assumptions for participant clouds. Query optimization techniques are also described that are compatible with Parasol's lightweight architecture. Experiments on a prototype implementation of Parasol indicate its suitability for cross-cloud federated graph queries.

  14. On determining important aspects of mathematical models: Application to problems in physics and chemistry

    NASA Technical Reports Server (NTRS)

    Rabitz, Herschel

    1987-01-01

    The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strongly coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self-similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.

  14. Impact of the inherent separation of scales in the Navier-Stokes-αβ equations.

    PubMed

    Kim, Tae-Yeon; Cassiani, Massimo; Albertson, John D; Dolbow, John E; Fried, Eliot; Gurtin, Morton E

    2009-04-01

    We study the effect of the length scales α and β in the Navier-Stokes-αβ equations on the energy spectrum and the alignment between the vorticity and the eigenvectors of the stretching tensor in three-dimensional homogeneous and isotropic turbulent flows in a periodic cubic domain, including the limiting cases of the Navier-Stokes-α and Navier-Stokes equations. A significant increase in the accuracy of the energy spectrum at large wave numbers arises for β

  16. The asymptotic homogenization elasticity tensor properties for composites with material discontinuities

    NASA Astrophysics Data System (ADS)

    Penta, Raimondo; Gerisch, Alf

    2017-01-01

    The classical asymptotic homogenization approach for linear elastic composites with discontinuous material properties is considered as a starting point. The sharp length scale separation between the fine periodic structure and the whole material formally leads to anisotropic elastic-type balance equations on the coarse scale, where the arising fourth-rank operator is to be computed by solving single periodic cell problems on the fine scale. After revisiting the derivation of the problem, which here explicitly points out how the discontinuity in the individual constituents' elastic coefficients translates into stress-jump interface conditions for the cell problems, we prove that the gradient of the cell problem solution is minor symmetric and that its cell average is zero. This property holds for perfect interfaces only (i.e., when the elastic displacement is continuous across the composite's interface) and can be used to assess the accuracy of the computed numerical solutions. These facts are further exploited, together with the individual constituents' elastic coefficients and the specific form of the cell problems, to prove a theorem that characterizes the fourth-rank operator appearing in the coarse-scale elastic-type balance equations as a composite material effective elasticity tensor. We both recover known facts, such as minor and major symmetries and positive definiteness, and establish new facts concerning the Voigt and Reuss bounds. The latter are shown for the first time without assuming any equivalence between coarse- and fine-scale energies (Hill's condition), which, in contrast to the case of representative volume elements, does not identically hold in the context of asymptotic homogenization. We conclude with instructive three-dimensional numerical simulations of a soft elastic matrix with an embedded stiffer cubic inclusion to show the profile of the physically relevant elastic moduli (Young's and shear moduli) and Poisson's ratio at inclusion volume fractions increasing up to 100%, thus providing a proxy for the design of artificial elastic composites.
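    For reference, the Voigt and Reuss bounds mentioned above state, in the sense of quadratic forms (with ⟨·⟩ the average over the periodic cell and \(\mathbb{C}\) the fine-scale elasticity tensor),

      \[
        \big\langle \mathbb{C}^{-1} \big\rangle^{-1}
        \;\preceq\; \mathbb{C}^{\mathrm{eff}} \;\preceq\;
        \big\langle \mathbb{C} \big\rangle,
      \]

    i.e., the harmonic (Reuss) average bounds the effective tensor from below and the arithmetic (Voigt) average from above.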

  17. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    NASA Astrophysics Data System (ADS)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem which is motivated by storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
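    Under a deliberately simplified cost model (not necessarily the paper's: assume server i with upload bandwidth u_i and download bandwidth d_i serves its fraction f_i of the file sequentially for one upload plus n downloads, with servers working in parallel), the optimum equalises the per-server cost and admits a closed form:

      def transfer_time(F, servers, n):
          """Transfer time for one upload + n downloads of a file of size F.

          servers: list of (u_i, d_i) bandwidth pairs. Per-server cost is
          f_i*F*(1/u_i + n/d_i); minimising the maximum over servers gives
          f_i proportional to 1/(1/u_i + n/d_i).
          """
          weights = [1.0 / (1.0 / u + n / d) for (u, d) in servers]
          total = sum(weights)
          fractions = [w / total for w in weights]
          return F / total, fractions

      # Example: 1000 MB file, two heterogeneous servers, n = 4 downloads
      t, f = transfer_time(1000.0, [(50.0, 100.0), (10.0, 200.0)], n=4)
      print(f"time = {t:.1f} s, fractions = {f}")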

  18. Neutrino masses from neutral top partners

    NASA Astrophysics Data System (ADS)

    Batell, Brian; McCullough, Matthew

    2015-10-01

    We present theories of "natural neutrinos" in which neutral fermionic top partner fields are simultaneously the right-handed neutrinos (RHN), linking seemingly disparate aspects of the Standard Model structure: (a) the RHN top partners are responsible for the observed small neutrino masses, (b) they help ameliorate the tuning in the weak scale and address the little hierarchy problem, and (c) the factor of 3 arising from N_c in the top-loop Higgs mass corrections is countered by a factor of 3 from the number of vectorlike generations of RHN. The RHN top partners may arise in pseudo-Nambu-Goldstone-boson Higgs models such as the twin Higgs, as well as more general composite, little, and orbifold Higgs scenarios, and three simple example models are presented. This framework firmly predicts a TeV-scale seesaw, as the RHN masses are bounded to be below the TeV scale by naturalness. The generation of light neutrino masses relies on a collective breaking of the lepton number, allowing for comparatively large neutrino Yukawa couplings and a rich associated phenomenology. The structure of the neutrino mass mechanism realizes in certain limits the inverse or linear classes of seesaw. Natural neutrino models are testable at a variety of current and future experiments, particularly in tests of lepton universality, searches for lepton flavor violation, and precision electroweak and Higgs coupling measurements possible at high energy e+e- and hadron colliders.

  19. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found describing the evolution of the model dynamics. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations; therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only modeling of the covariance matrix obtained by evaluating the covariance function at the observation points. We first assumed that this covariance matrix is stationary and that the unresolved scales are not correlated between the observation points, i.e., the matrix is diagonal, and that the values along the diagonal are constant. Tests with these assumptions were unsuccessful, indicating that a more sophisticated model of the covariance is needed for assimilation of data with nonstationary spectrum. A new method for modeling the covariance matrix based on an extended set of modeling assumptions is proposed. First, it is assumed that the covariance matrix is diagonal, that is, that the unresolved scales are not correlated between the observation points. It is postulated that the values on the diagonal depend on a wavenumber that is characteristic for the unresolved part of the spectrum. It is further postulated that this characteristic wavenumber can be diagnosed from the observations and from the current estimate of the projected state. It is demonstrated that the new method successfully overcomes previously encountered difficulties.
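    The "traditional filter" idea described above can be sketched as an ordinary Kalman analysis step in which the observation-error covariance is augmented by a term representing the unresolved scales at the observation points (a minimal sketch; the diagonal, stationary choice below is exactly the simple model the abstract reports as insufficient on its own):

      import numpy as np

      def analysis_update(x_b, P_b, y, H, R_instr, r_unres):
          """One Kalman analysis step with unresolved-scale error in R.

          x_b, P_b: background state and covariance; y: observations;
          H: observation operator; R_instr: instrument error covariance;
          r_unres: variance of the unresolved scales at the obs points.
          """
          m = len(y)
          R = R_instr + r_unres * np.eye(m)     # total observation error
          S = H @ P_b @ H.T + R                 # innovation covariance
          K = P_b @ H.T @ np.linalg.solve(S, np.eye(m))   # Kalman gain
          x_a = x_b + K @ (y - H @ x_b)
          P_a = (np.eye(len(x_b)) - K @ H) @ P_b
          return x_a, P_a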

  20. Predicting Upscaled Behavior of Aqueous Reactants in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Wright, E. E.; Hansen, S. K.; Bolster, D.; Richter, D. H.; Vesselinov, V. V.

    2017-12-01

    When modeling reactive transport, reaction rates are often overestimated due to the improper assumption of perfect mixing at the support scale of the transport model. In reality, fronts tend to form between participants in thermodynamically favorable reactions, leading to segregation of reactants into islands or fingers. When such a configuration arises, reactions are limited to the interface between the reactive solutes. Closure methods for estimating control-volume-effective reaction rates in terms of quantities defined at the control volume scale do not presently exist, but their development is crucial for effective field-scale modeling. We attack this problem through a combination of analytical and numerical means. Specifically, we numerically study reactive transport through an ensemble of realizations of two-dimensional heterogeneous porous media. We then employ regression analysis to calibrate an analytically-derived relationship between reaction rate and various dimensionless quantities representing conductivity-field heterogeneity and the respective strengths of diffusion, reaction and advection.

  1. A numerical study of different projection-based model reduction techniques applied to computational homogenisation

    NASA Astrophysics Data System (ADS)

    Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia

    2017-10-01

    Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied to the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
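    A minimal sketch of the POD/Galerkin step (generic, under assumed residual and tangent callables; the hyper-reduction variants compared in the paper would additionally approximate the nonlinear residual itself):

      import numpy as np

      def pod_basis(snapshots, tol=1e-6):
          """POD basis via truncated SVD of an (n_dof, n_snap) snapshot
          matrix; keeps modes capturing a 1 - tol fraction of the energy."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          energy = np.cumsum(s**2) / np.sum(s**2)
          k = int(np.searchsorted(energy, 1.0 - tol)) + 1
          return U[:, :k]

      def reduced_newton_step(V, residual, tangent, q):
          """One Galerkin-ROM Newton step for r(u) = 0 with u ~ V @ q."""
          r = V.T @ residual(V @ q)             # projected residual
          J = V.T @ tangent(V @ q) @ V          # projected tangent
          return q - np.linalg.solve(J, r)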

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capela, Fabio; Ramazanov, Sabir, E-mail: fc403@cam.ac.uk, E-mail: Sabir.Ramazanov@ulb.ac.be

    At large scales and for sufficiently early times, dark matter is described as a pressureless perfect fluid (dust) non-interacting with Standard Model fields. These features are captured by a simple model with two scalars: a Lagrange multiplier and another playing the role of the velocity potential. That model arises naturally in some gravitational frameworks, e.g., the mimetic dark matter scenario. We consider an extension of the model by means of higher derivative terms, such that the dust solutions are preserved at the background level, but there is a non-zero sound speed at the linear level. We associate this Modified Dust with dark matter, and study the linear evolution of cosmological perturbations in that picture. The most prominent effect is the suppression of their power spectrum for sufficiently large cosmological momenta. This can be relevant in view of the problems that cold dark matter faces at sub-galactic scales, e.g., the missing satellites problem. At even shorter scales, however, perturbations of Modified Dust are enhanced compared to the predictions of more common particle dark matter scenarios. This is a peculiarity of their evolution in a radiation-dominated background. We also briefly discuss clustering of Modified Dust. We write the system of equations in the Newtonian limit, and sketch a possible mechanism which could prevent the appearance of caustic singularities. The same mechanism may be relevant in light of the core-cusp problem.

  3. An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH

    NASA Astrophysics Data System (ADS)

    Lee, D.; Gopal, S.; Mohapatra, P.

    2012-07-01

    We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce FLASH's preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.

  4. Approximate registration of point clouds with large scale differences

    NASA Astrophysics Data System (ADS)

    Novak, D.; Schindler, K.

    2013-10-01

    3D reconstruction of objects is a basic task in many fields, including surveying, engineering, entertainment and cultural heritage. The task is nowadays often accomplished with a laser scanner, which produces dense point clouds but lacks accurate colour information and per-point accuracy measures. An obvious solution is to combine laser scanning with photogrammetric recording. In that context, the problem arises of registering the two datasets, which feature large differences in scale, translation and rotation. The absence of approximate registration parameters (3D translation, 3D rotation and scale) precludes the use of fine-registration methods such as ICP. Here, we present a method to register realistic photogrammetric and laser point clouds in a fully automated fashion. The proposed method decomposes the registration into a sequence of simpler steps: first, two rotation angles are determined by finding dominant surface normal directions, then the remaining parameters are found with RANSAC followed by ICP and scale refinement. These two steps are carried out at low resolution, before computing a precise final registration at higher resolution.

  5. Orbiter entry aerothermodynamics

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1985-01-01

    The challenge of defining the entry aerothermodynamic environment for a reliable and reusable Orbiter is reviewed in light of the existing technology. Select problems pertinent to the Orbiter development are discussed with reference to comprehensive treatments. These problems include boundary layer transition, leeward-side heating, shock/shock interaction scaling, tile gap heating, and nonequilibrium effects such as surface catalysis. Sample measurements obtained from test flights of the Orbiter are presented with comparison to preflight expectations. Numerical and wind tunnel simulations provided efficient means for defining the entry environment and an adequate level of preflight confidence. The high-quality flight data provide an opportunity to refine the operational capability of the Orbiter and serve as a benchmark both for the development of aerothermodynamic technology and for use in meeting future entry heating challenges.

  6. Optimistic barrier synchronization

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1992-01-01

    Barrier synchronization is a fundamental operation in parallel computation. In many contexts, at the point a processor enters a barrier it knows that it has already processed all the work required of it prior to synchronization. The alternative case, when a processor cannot enter a barrier with the assurance that it has already performed all the necessary pre-synchronization computation, is treated. The problem arises when the number of pre-synchronization messages to be received by a processor is unknown, for example, in a parallel discrete simulation or any other computation that is largely driven by an unpredictable exchange of messages. We describe an optimistic O(log² P) barrier algorithm for such problems, study its performance on a large-scale parallel system, and consider extensions to general associative reductions as well as associative parallel prefix computations.

  7. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
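    As a generic illustration of trust-region-style regularisation for nonlinear least squares (a dense Levenberg-Marquardt sketch; the block-angular/block-arrow sparsity exploitation that is the point of the paper is omitted here):

      import numpy as np

      def levenberg_marquardt(residual, jacobian, x0, mu=1e-2, iters=50):
          """Damped Gauss-Newton: solve (J^T J + mu I) dx = -J^T r, with
          the damping mu shrunk on accepted steps, grown on rejected ones."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              r, J = residual(x), jacobian(x)
              dx = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ r)
              r_new = residual(x + dx)
              if r_new @ r_new < r @ r:         # cost decreased: accept
                  x, mu = x + dx, max(mu * 0.3, 1e-12)
              else:                             # reject and damp harder
                  mu *= 3.0
          return x

      # Toy usage: fit y = exp(a*t) to data generated with a = 0.7
      t = np.linspace(0.0, 1.0, 20)
      y = np.exp(0.7 * t)
      res = lambda p: np.exp(p[0] * t) - y
      jac = lambda p: (t * np.exp(p[0] * t)).reshape(-1, 1)
      print(levenberg_marquardt(res, jac, np.array([0.0])))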

  8. The use of MCNP and gamma spectrometry in supporting the evaluation of NORM in Libyan oil pipeline scale

    NASA Astrophysics Data System (ADS)

    Habib, Ahmed S.; Bradley, D. A.; Regan, P. H.; Shutt, A. L.

    2010-07-01

    The accumulation of scales in production pipes is a common problem in the oil industry, reducing fluid flow and also leading to costly remedies and disposal issues. Typical materials found in such scale are sulphates and carbonates of calcium and barium, or iron sulphide. Radium arising from the uranium/thorium present in oil-bearing rock formations may replace the barium or calcium in these salts to form radium salts. This creates what is known as technologically enhanced naturally occurring radioactive material (TENORM or simply NORM). NORM is a serious environmental and health and safety issue arising from commercial oil and gas extraction operations. Whilst a good deal has been published on the characterisation and measurement of radioactive scales from offshore oil production, little information has been published regarding NORM associated with land-based facilities such as those of the Libyan oil industry. The ongoing investigation described in this paper concerns an assessment of NORM from a number of land-based Libyan oil fields. A total of 27 pipe scale samples were collected from eight oil fields, from different locations in Libya. The dose rates, measured using a handheld survey meter positioned on sample surfaces, ranged from 0.1-27.3 μSv h⁻¹. In the initial evaluations of the sample activity, use is being made of a portable HPGe-based spectrometry system. To comply with the prevailing safety regulations of the University of Surrey, the samples are being counted in their original form, creating a need for correction of non-homogeneous sample geometries. To derive a detection efficiency based on the actual sample geometries, a technique has been developed using a Monte Carlo particle transport code (MCNPX). A preliminary activity determination has been performed using an HPGe portable detector system.

  9. Alternative experiments using the geophysical fluid flow cell

    NASA Technical Reports Server (NTRS)

    Hart, J. E.

    1984-01-01

    This study addresses the possibility of doing large scale dynamics experiments using the Geophysical Fluid Flow Cell. In particular, cases where the forcing generates a statically stable stratification almost everywhere in the spherical shell are evaluated. This situation is typical of the Earth's atmosphere and oceans. By calculating the strongest meridional circulation expected in the spacelab experiments, and testing its stability using quasi-geostrophic stability theory, it is shown that strongly nonlinear baroclinic waves on a zonally symmetric modified thermal wind will not occur. The Geophysical Fluid Flow Cell does not have a deep enough fluid layer to permit useful studies of large scale planetary wave processes arising from instability. It is argued, however, that by introducing suitable meridional barriers, a significant contribution to the understanding of the oceanic thermocline problem could be made.

  10. Very light dilaton and naturally light Higgs boson

    NASA Astrophysics Data System (ADS)

    Hong, Deog Ki

    2018-02-01

    We study a very light dilaton, arising from a scale-invariant ultraviolet theory of the Higgs sector in the standard model of particle physics. Imposing the scale symmetry below the ultraviolet scale of the Higgs sector, we alleviate the fine-tuning problem associated with the Higgs mass. When the electroweak symmetry is spontaneously broken radiatively à la Coleman-Weinberg, the dilaton develops a vacuum expectation value away from the origin to give an extra contribution to the Higgs potential so that the Higgs mass becomes naturally around the electroweak scale. The ultraviolet scale of the Higgs sector can therefore be much higher than the electroweak scale, as the dilaton drives the Higgs mass to the electroweak scale. We also show that the light dilaton in this scenario is a good candidate for dark matter of mass m_D ~ 1 eV-10 keV, if the ultraviolet scale is about 10-100 TeV. Finally we propose a dilaton-assisted composite Higgs model to realize our scenario. In addition to the light dilaton the model predicts a heavy U(1) axial vector boson and two massive, oppositely charged, pseudo Nambu-Goldstone bosons, which might be accessible at the LHC.

  11. Absence of Asymptotic Freedom in Doped Mott Insulators: Breakdown of Strong Coupling Expansions

    NASA Astrophysics Data System (ADS)

    Phillips, Philip; Galanakis, Dimitrios; Stanescu, Tudor D.

    2004-12-01

    We show that doped Mott insulators such as the copper-oxide superconductors are asymptotically slaved in that the quasiparticle weight Z near half-filling depends critically on the existence of the high-energy scale set by the upper Hubbard band. In particular, near half-filling, the following dichotomy arises: Z≠0 when the high-energy scale is integrated out but Z=0 in the thermodynamic limit when it is retained. Slavery to the high-energy scale arises from quantum interference between electronic excitations across the Mott gap. Broad spectral features seen in photoemission in the normal state of the cuprates are argued to arise from high-energy slavery.

  12. Dissipative closures for statistical moments, fluid moments, and subgrid scales in plasma turbulence

    NASA Astrophysics Data System (ADS)

    Smith, Stephen Andrew

    1997-11-01

    Closures are necessary in the study of physical systems with large numbers of degrees of freedom when it is only possible to compute a small number of modes. The modes that are to be computed, the resolved modes, are coupled to unresolved modes that must be estimated. This thesis focuses on dissipative closure models for two problems that arise in the study of plasma turbulence: the fluid moment closure problem and the subgrid-scale closure problem. The fluid moment closures of Hammett and Perkins (1990) were originally applied to a one-dimensional kinetic equation, the Vlasov equation. These closures are generalized in this thesis and applied to the stochastic oscillator problem, a standard paradigm problem for statistical closures. The linear theory of the Hammett-Perkins closures is shown to converge with increasing numbers of moments. A novel parameterized hyperviscosity is proposed for two-dimensional drift-wave turbulence. The magnitude and exponent of the hyperviscosity are expressed as functions of the large-scale advection velocity. Traditionally hyperviscosities are applied to simulations with a fixed exponent that must be arbitrarily chosen; expressing the exponent as a function of the simulation parameters eliminates this ambiguity. These functions are parameterized by comparing the hyperviscous dissipation to the subgrid dissipation calculated from direct numerical simulations. Tests of the parameterization demonstrate that it performs better than using no additional damping term or using a standard hyperviscosity. Heuristic arguments are presented to extend this hyperviscosity model to three-dimensional (3D) drift-wave turbulence where eddies are highly elongated along the field line. Preliminary results indicate that this generalized 3D hyperviscosity is capable of reducing the resolution requirements for 3D gyrofluid turbulence simulations.

  13. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    PubMed

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary exponents, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
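    The core device can be sketched for dense matrices as follows (a toy sketch: the linear-scaling behaviour of CheSS comes from keeping the Chebyshev iterates sparse, which this illustration deliberately omits):

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def chebyshev_matrix_function(H, f, order, spectrum):
          """Approximate f(H) by a degree-`order` Chebyshev expansion.

          H is mapped so its spectrum (given as (lo, hi)) lies in [-1, 1];
          then f(H) ~ sum_k c_k T_k(Hs) via the three-term recurrence
          T_{k+1} = 2 Hs T_k - T_{k-1}. f must be vectorized.
          """
          lo, hi = spectrum
          a, b = 2.0 / (hi - lo), -(hi + lo) / (hi - lo)
          n = H.shape[0]
          Hs = a * H + b * np.eye(n)            # map spectrum into [-1, 1]
          # Fit scalar Chebyshev coefficients on [-1, 1]
          xs = np.cos(np.pi * (np.arange(2 * order) + 0.5) / (2 * order))
          c = C.chebfit(xs, f((xs - b) / a), order)
          T_prev, T_curr = np.eye(n), Hs.copy()
          out = c[0] * T_prev + c[1] * T_curr
          for k in range(2, order + 1):
              T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
              out += c[k] * T_curr
          return out

      # Toy usage: a Fermi-like smoothed step function of a symmetric matrix
      H = np.diag([0.1, 0.4, 0.9])
      fermi = lambda e: 1.0 / (1.0 + np.exp((e - 0.5) / 0.05))
      print(np.round(chebyshev_matrix_function(H, fermi, 60, (0.0, 1.0)), 3))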

  14. Thermal issues at the SSC

    NASA Technical Reports Server (NTRS)

    Ranganathan, Raj P.; Dao, Bui V.

    1992-01-01

    A variety of heat transfer problems arise in the design of the Superconducting Super Collider (SSC). One class of problems is to minimize heat leak from the ambient to the SSC rings, since the rings contain superconducting magnets maintained at a temperature of 4 K. Another arises from the need to dump the beam of protons (traveling around the SSC rings) onto absorbers during an abort of the collider. Yet another category of problems is the cooling of equipment to dissipate the heat generated during operation. An overview of these problems and sample heat transfer results are given in this paper.

  15. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  16. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  17. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can reduce drastically the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
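    A plain multilevel Monte Carlo estimator can be sketched as follows (a toy sketch with a hypothetical sampler interface; the level-adaptive sample-size selection of a production MLMC driver is omitted):

      import numpy as np

      def mlmc_estimate(sampler, levels, n_samples):
          """MLMC estimate of E[Q] via the telescoping sum
          E[Q_L] = sum_l E[Q_l - Q_{l-1}], with Q_{-1} := 0.

          sampler(l) must return (Q_l, Q_{l-1}) computed from the SAME
          random realisation (e.g. the same random packing, meshed at two
          resolutions), so the level differences have small variance.
          """
          total = 0.0
          for l, n in zip(levels, n_samples):
              diffs = [np.subtract(*sampler(l)) for _ in range(n)]
              total += np.mean(diffs)
          return total

      # Toy usage: Q_l estimates pi from 4**(l+2) random points.
      rng = np.random.default_rng(1)
      def sampler(l):
          n_f, n_c = 4 ** (l + 2), 4 ** (l + 1)
          pts = rng.random((n_f, 2))
          hit = (pts ** 2).sum(axis=1) <= 1.0
          q_f = 4.0 * hit.mean()
          q_c = 4.0 * hit[:n_c].mean() if l > 0 else 0.0
          return q_f, q_c

      print(mlmc_estimate(sampler, levels=[0, 1, 2], n_samples=[400, 100, 25]))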

  18. Realistic anomaly-mediated supersymmetry breaking

    NASA Astrophysics Data System (ADS)

    Chacko, Zacharia; Luty, Markus A.; Maksymyk, Ivan; Pontón, Eduardo

    2000-03-01

    We consider supersymmetry breaking communicated entirely by the superconformal anomaly in supergravity. This scenario is naturally realized if supersymmetry is broken in a hidden sector whose couplings to the observable sector are suppressed by more than powers of the Planck scale, as occurs if supersymmetry is broken in a parallel universe living in extra dimensions. This scenario is extremely predictive: soft supersymmetry breaking couplings are completely determined by anomalous dimensions in the effective theory at the weak scale. Gaugino and scalar masses are naturally of the same order, and flavor-changing neutral currents are automatically suppressed. The most glaring problem with this scenario is that slepton masses are negative in the minimal supersymmetric standard model. We point out that this problem can be simply solved by coupling extra Higgs doublets to the leptons. Lepton flavor-changing neutral currents can be naturally avoided by approximate symmetries. We also describe more speculative solutions involving compositeness near the weak scale. We then turn to electroweak symmetry breaking. Adding an explicit μ term gives a value for Bμ that is too large by a factor of ~ 100. We construct a realistic model in which the μ term arises from the vacuum expectation value of a singlet field, so all weak-scale masses are directly related to m3/2. We show that fully realistic electroweak symmetry breaking can occur in this model with moderate fine-tuning.

  19. Poincaré-Treshchev Mechanism in Multi-scale, Nearly Integrable Hamiltonian Systems

    NASA Astrophysics Data System (ADS)

    Xu, Lu; Li, Yong; Yi, Yingfei

    2018-02-01

    This paper is a continuation of our work (Xu et al. in Ann Henri Poincaré 18(1):53-83, 2017) concerning the persistence of lower-dimensional tori on resonant surfaces of a multi-scale, nearly integrable Hamiltonian system. Such systems, being properly degenerate, arise naturally in planar and spatial lunar problems of celestial mechanics, for which the persistence problem ties closely to the stability of the systems. For such a system, under certain non-degeneracy conditions of Rüssmann type, the persistence of the majority of non-resonant tori and the existence of a nearly full measure set of Poincaré non-degenerate, lower-dimensional, quasi-periodic invariant tori on a resonant surface corresponding to the highest order of scale were proved in Han et al. (Ann Henri Poincaré 10(8):1419-1436, 2010) and Xu et al. (2017), respectively. In this work, we consider a resonant surface corresponding to any intermediate order of scale and show the existence of a nearly full measure set of Poincaré non-degenerate, lower-dimensional, quasi-periodic invariant tori on the resonant surface. The proof is based on a normal form reduction consisting of finitely many KAM iteration steps, which push the non-integrable perturbation to a sufficiently high order, and on the splitting of resonant tori on the resonant surface according to the Poincaré-Treshchev mechanism.

  20. Numerical simulation of damage evolution for ductile materials and mechanical properties study

    NASA Astrophysics Data System (ADS)

    El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.

    2015-12-01

    This paper presents results of numerical modelling of the ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in the ABAQUS software. The difficulty in modelling and predicting ductile fracture arises mainly because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study used experimental results to calibrate simple crack propagation criteria for shell elements of the kind often used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.

  1. Scalable implicit incompressible resistive MHD with stabilized FE and fully-coupled Newton–Krylov-AMG

    DOE PAGES

    Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; ...

    2016-02-10

    Here, we discuss how the computational solution of the governing balance equations for mass, momentum, heat transfer, and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena and the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton–Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable, and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order of accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems, including MHD duct flows, an unstable hydromagnetic Kelvin–Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results exploring the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.

  2. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that are believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included in the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped onto beads that are heavier than those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with multiple time-scale integration. A separation of time scales arises naturally if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
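
    For orientation, a generic single-time-scale HMC step is sketched below; the paper's method augments this with multiple time-scale integration and an analytic treatment of the fast harmonic modes, which the sketch does not attempt. All names and parameter values are illustrative.

```python
import numpy as np

def hmc_step(q, log_post, grad_log_post, eps=0.1, n_steps=20,
             rng=np.random.default_rng()):
    """One plain Hamiltonian Monte Carlo step with a leapfrog integrator."""
    p = rng.standard_normal(q.shape)            # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(q_new)   # initial half kick
    for _ in range(n_steps - 1):
        q_new += eps * p_new                    # drift
        p_new += eps * grad_log_post(q_new)     # full kick
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(q_new)   # final half kick
    # Metropolis accept/reject on the total Hamiltonian H = -log_post + K
    h_old = -log_post(q) + 0.5 * p @ p
    h_new = -log_post(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q
```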

  3. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations

    PubMed Central

    Brocke, Ekaterina; Bhalla, Upinder S.; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulation in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire a better understanding of biological phenomena that have important features at multiple scales of time and space, including synaptic plasticity, memory formation and modulation, and homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One of the possibilities is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Second, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used; however, a strict mathematical theory is missing in many cases. Recent work in the field has so far not investigated artifacts that may arise during coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used fixed step size numerical solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience. We introduce an efficient coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses adaptive step size integration with an error estimation proposed by Skelboe (2000), and shows a significant advantage over conventional fixed step size solvers used in neuroscience for similar problems. We explore different coupling strategies that define the organization of computations between system components, and we study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in the application to our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may essentially contribute to the development of a robust and efficient framework for multiscale brain modeling and simulations in neuroscience. PMID:27672364
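
    For reference, the fixed-step form of the BDF2 formula on which the coupling method is built is given below; the paper itself uses a variable-step variant with the error estimate of Skelboe (2000).

```latex
% Fixed-step BDF2 for y' = f(t, y) with step size h (implicit in y_{n+1}):
y_{n+1} \;=\; \tfrac{4}{3}\,y_{n} \;-\; \tfrac{1}{3}\,y_{n-1}
        \;+\; \tfrac{2}{3}\,h\,f\!\left(t_{n+1},\,y_{n+1}\right)
```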

  5. Evaluation of LANDSAT multispectral scanner images for mapping altered rocks in the east Tintic Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Rowan, L. C.; Abrams, M. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Positive findings of earlier evaluations of the color-ratio compositing (CRC) technique for mapping limonitic altered rocks in south-central Nevada are confirmed, but important limitations in the approach used are pointed out. These limitations arise from environmental, geologic, and image processing factors. The greater vegetation density in the East Tintic Mountains required several modifications of procedure to improve the overall mapping accuracy of the CRC approach. Large-format ratio images provide better internal registration of the diazo films and avoid the problems associated with the magnifications required in the original procedure. Use of the Linoscan 204 color recognition scanner permits accurate, consistent extraction of the green pixels representing limonitic bedrock, yielding maps that can be used at large scales as well as for small-scale reconnaissance.

  6. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE PAGES

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    2018-05-01

    Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell-type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves, and it compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.

  8. Final Technical Report [Scalable methods for electronic excitations and optical responses of nanostructures: mathematics to algorithms to observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Yousef

    2014-03-19

    The master project under which this work was funded had as its main objective to develop computational methods for modeling electronic excited-state and optical properties of various nanostructures. The specific goals of the computer science group were primarily to develop effective numerical algorithms for Density Functional Theory (DFT) and Time-Dependent Density Functional Theory (TDDFT). There were essentially four distinct stated objectives. The first objective was to study and develop effective numerical algorithms for solving large eigenvalue problems such as those that arise in DFT methods. The second objective was to explore so-called linear scaling methods, or methods that avoid diagonalization. The third was to develop effective approaches for TDDFT. Our fourth and final objective was to examine effective solution strategies for other problems in electronic excitations, such as the GW/Bethe-Salpeter method, and for quantum transport problems.

  9. Quantum algorithm for linear systems of equations.

    PubMed

    Harrow, Aram W; Hassidim, Avinatan; Lloyd, Seth

    2009-10-09

    Solving linear systems of equations is a common problem that arises both on its own and as a subroutine in more complex problems: given a matrix A and a vector b, find a vector x such that Ax = b. We consider the case where one does not need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x†Mx for some matrix M. In this case, when A is sparse, N × N, and has condition number κ, the fastest known classical algorithms can find x and estimate x†Mx in time scaling roughly as N√κ. Here, we exhibit a quantum algorithm for estimating x†Mx whose runtime is polynomial in log(N) and κ. Indeed, for small values of κ [i.e., polylog(N)], we prove (using some common complexity-theoretic assumptions) that any classical algorithm for this problem generically requires exponentially more time than our quantum algorithm.

  10. The Effect of Normalization in Violence Video Classification Performance

    NASA Astrophysics Data System (ADS)

    Ali, Ashikin; Senan, Norhalina

    2017-08-01

    Data pre-processing is an important part of data mining, and normalization is a pre-processing stage for many types of problem statement, especially in video classification. Video classification is challenging because of heterogeneous content, large variations in video quality, and the complex semantic meanings of the concepts involved. To regularize this problem, it is sensible to apply a thorough pre-processing stage, including normalization, which aids the robustness of classification performance. Normalization scales all numeric variables into a certain range to make them more meaningful for the later phases of the available data mining techniques. This paper examines the effect of two normalization techniques, namely Min-Max normalization and Z-score, on violence video classification, measured by the classification rate of a Multi-layer Perceptron (MLP) classifier. With Min-Max normalization to the range [0,1], accuracy is almost 98%; with Min-Max normalization to the range [-1,1], accuracy is 59%; and with Z-score, accuracy is 50%.
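
    The two rescalings compared in the abstract are standard; a minimal sketch (not the authors' code) is:

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    """Rescale features to [lo, hi]; [0, 1] and [-1, 1] are compared above."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Standardize features to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()
```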

  11. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. The Earth Mover's Distance in particular is used in a plethora of applications, ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking, and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large-scale transportation problems in viable time. In addition, we describe a novel method for finding an initial feasible solution, which we coin the Modified Russell's Method.
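
    The underlying transportation LP can be stated and solved directly for small instances. The sketch below uses a generic LP solver (scipy.optimize.linprog) on toy data; it illustrates the problem being solved, not the paper's Shortlist Method or Modified Russell's Method.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-source / 3-sink transportation problem (illustrative data):
# minimize sum_ij cost[i,j] * x[i,j] subject to row/column sum constraints.
cost = np.array([[4.0, 2.0, 5.0],
                 [3.0, 1.0, 6.0]])
supply = np.array([20.0, 30.0])
demand = np.array([10.0, 25.0, 15.0])   # balanced: sums match

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                      # each source ships all its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                      # each sink receives its demand
    row = np.zeros(m * n); row[j::n] = 1.0
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq)  # x >= 0 default
print(res.x.reshape(m, n), res.fun)
```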

  13. Investigation of instability of displacement front in non-isothermal flow problems

    NASA Astrophysics Data System (ADS)

    Syulyukina, Natalia; Pergament, Anna

    2012-11-01

    In this paper, we investigate the issues of front instability arising in non-isothermal displacement processes. The problem of two-phase flow of two immiscible fluids, oil and water, is considered, including sources and the dependence of viscosity on temperature. A three-dimensional problem with a perturbation close to the injection well was considered to find the characteristic scale of the instability. The numerical calculations confirmed theoretical studies of the development of the instability, which is due to the viscosity of the displacing fluid being less than that of the displaced fluid. The influence of temperature on the evolution of the instability was then considered; for this purpose, the dependence of oil viscosity on temperature was added to the problem. Numerical calculations carried out for different values of temperature showed that the production rate increases with the temperature of the injected fluid. Thus, it has been demonstrated that selecting the optimal temperature for injected fluids is a possible way to stimulate oil production while also delaying field water-flooding. This work was supported by RFBR grant 12-01-00793-a.

  14. Medical student and junior doctors' tolerance of ambiguity: development of a new scale.

    PubMed

    Hancock, Jason; Roberts, Martin; Monrouxe, Lynn; Mattick, Karen

    2015-03-01

    The practice of medicine involves inherent ambiguity, arising from limitations of knowledge, diagnostic problems, complexities of treatment and outcome, and unpredictability of patient response. Research into doctors' tolerance of ambiguity is hampered by poor conceptual clarity and inadequate measurement scales. We aimed to create and pilot a measurement scale for tolerance of ambiguity in medical students and junior doctors that addresses the limitations of existing scales. After defining tolerance of ambiguity, scale items were generated by literature review and expert consultation. Feedback on the draft scale was sought and incorporated. A total of 411 medical students and 75 foundation doctors in Exeter, UK, were asked to complete the scale. Psychometric analysis enabled further scale refinement and comparison of scale scores across subgroups. The pilot study achieved a 64% response rate. The final 29-item version of the Tolerance of Ambiguity in Medical Students and Doctors (TAMSAD) scale had good internal reliability (Cronbach's α 0.80). Tolerance of ambiguity was higher in foundation year 2 doctors than in first-, third- and fourth-year medical students (-5.23, P = 0.012; -5.98, P = 0.013; -4.62, P = 0.035, for each year group respectively). The TAMSAD scale offers a valid and reliable alternative to existing scales. Further work is required in different settings and in longitudinal studies, but this study offers intriguing provisional insights.
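
    The reliability statistic reported above is the standard Cronbach's α; a minimal computation over a respondents-by-items score matrix (illustrative only, not the study's analysis pipeline) is:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```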

  15. Large-scale marine ecosystem change and the conservation of marine mammals

    USGS Publications Warehouse

    O'Shea, T.J.; Odell, D.K.

    2008-01-01

    Papers in this Special Feature stem from a symposium on large-scale ecosystem change and the conservation of marine mammals convened at the 86th Annual Meeting of the American Society of Mammalogists in June 2006. Major changes are occurring in multiple aspects of the marine environment at unprecedented rates, within the life spans of some individual marine mammals. Drivers of change include shifts in climate, acoustic pollution, disturbances to trophic structure, fisheries interactions, harmful algal blooms, and environmental contaminants. This Special Feature provides an in-depth examination of 3 issues that are particularly troublesome. The 1st article notes the huge spatial and temporal scales of change to which marine mammals are showing ecological responses, and how these species can function as sentinels of such change. The 2nd paper describes the serious problems arising from conflicts with fisheries, and the 3rd contribution reviews the growing issues associated with underwater noise. © 2008 American Society of Mammalogists.

  16. Aeroelastic-Acoustics Simulation of Flight Systems

    NASA Technical Reports Server (NTRS)

    Gupta, kajal K.; Choi, S.; Ibrahim, A.

    2009-01-01

    This paper describes the details of a numerical finite element (FE) based analysis procedure and a resulting code for the simulation of the acoustics phenomenon arising from aeroelastic interactions. Both CFD and structural simulations are based on FE discretization employing unstructured grids. The sound pressure level (SPL) on structural surfaces is calculated from the root mean square (RMS) of the unsteady pressure and the acoustic wave frequencies are computed from a fast Fourier transform (FFT) of the unsteady pressure distribution as a function of time. The resulting tool proves to be unique as it is designed to analyze complex practical problems, involving large scale computations, in a routine fashion.

  17. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
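
    In generic notation (not the paper's), such segregated strategies build an approximate block factorization around a Schur complement, for example

```latex
\mathcal{A} \;=\; \begin{pmatrix} A & B \\ C & D \end{pmatrix},
\qquad
\mathcal{P} \;=\; \begin{pmatrix} A & B \\ 0 & S \end{pmatrix},
\qquad
S \;\approx\; D \;-\; C\,\widehat{A}^{-1}B ,
```

    where applying 𝒫⁻¹ requires only subsolves with A and S, each of which can be handled by a multilevel method as described above.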

  18. Attention problems and pathological gaming: resolving the 'chicken and egg' in a prospective analysis.

    PubMed

    Ferguson, Christopher J; Ceranoglu, T Atilla

    2014-03-01

    Pathological gaming (PG) behaviors are behaviors which interfere with other life responsibilities. Continued debate exists regarding whether symptoms of PG behaviors are a unique phenomenon or arise from other mental health problems, including attention problems. Development of attention problems and occurrence of pathological gaming in 144 adolescents were followed during a 1-year prospective analysis. Teens and their parents reported on pathological gaming behaviors, attention problems, and current grade point average, as well as several social variables. Results were analyzed using regression and path analysis. Attention problems tended to precede pathological gaming behaviors, but the inverse was not true. Attention problems but not pathological gaming predicted lower GPA 1 year later. Current results suggest that pathological gaming arises from attention problems, but not the inverse. These results suggest that pathological gaming behaviors are symptomatic of underlying attention related mental health issues, rather than a unique phenomenon.

  19. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  20. Breakdown of Strong Coupling Expansions for doped Mott Insulators

    NASA Astrophysics Data System (ADS)

    Phillips, Philip; Galanakis, Dimitrios; Stanescu, Tudor

    2005-03-01

    We show that doped Mott insulators, such as the copper-oxide superconductors, are asymptotically slaved in that the quasiparticle weight, Z, near half-filling depends critically on the existence of the high energy scale set by the upper Hubbard band. In particular, near half filling, the following dichotomy arises: Z ≠ 0 when the high energy scale is integrated out, but Z = 0 in the thermodynamic limit when it is retained. Slavery to the high energy scale arises from quantum interference between electronic excitations across the Mott gap.

  1. Making research relevant? Ecological methods and the ecosystem services framework

    NASA Astrophysics Data System (ADS)

    Root-Bernstein, Meredith; Jaksic, Fabián. M.

    2017-07-01

    We examine some unexpected epistemological conflicts that arise at the interfaces between ecological science, the ecosystem services framework, policy, and industry. We use an example from our own research to motivate and illustrate our main arguments, while also reviewing standard approaches to ecological science using the ecosystem services framework. While we agree that the ecosystem services framework has benefits in its industrial applications because it may force economic decision makers to consider a broader range of costs and benefits than they would do otherwise, we find that many alignments of ecology with the ecosystem services framework are asking questions that are irrelevant to real-world applications, and generating data that does not serve real-world applications. We attempt to clarify why these problems arise and how to avoid them. We urge fellow ecologists to reflect on the kind of research that can lead to both scientific advances and applied relevance to society. In our view, traditional empirical approaches at landscape scales or with place-based emphases are necessary to provide applied knowledge for problem solving, which is needed once decision makers identify risks to ecosystem services. We conclude that the ecosystem services framework is a good policy tool when applied to decision-making contexts, but not a good theory either of social valuation or ecological interactions, and should not be treated as one.

  2. Quenching rate for a nonlocal problem arising in the micro-electro mechanical system

    NASA Astrophysics Data System (ADS)

    Guo, Jong-Shenq; Hu, Bei

    2018-03-01

    In this paper, we study the quenching rate of the solution for a nonlocal parabolic problem which arises in the study of the micro-electro mechanical system. This question is equivalent to the stabilization of the solution to the transformed problem in self-similar variables. First, some a priori estimates are provided. In order to construct a Lyapunov function, due to the lack of time monotonicity property, we then derive some very useful and challenging estimates by a delicate analysis. Finally, with this Lyapunov function, we prove that the quenching rate is self-similar which is the same as the problem without the nonlocal term, except the constant limit depends on the solution itself.

  3. Anomalous diffusion and dynamics of fluorescence recovery after photobleaching in the random-comb model

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.; Abad, E.; Baumgaertner, A.

    2016-07-01

    We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power-law behavior at large ℓ, P(ℓ) ~ ℓ^{-(1+α)} (α > 0). To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
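
    Teeth with the stated power-law length distribution can be drawn by inverse-transform sampling of a Pareto-type density; a small sketch, assuming a lower cutoff ℓ_min (an assumption, since the abstract only fixes the tail exponent), is:

```python
import numpy as np

def sample_tooth_lengths(n, alpha, ell_min=1.0, rng=np.random.default_rng()):
    """Draw n tooth lengths from P(l) = alpha * ell_min**alpha * l**-(1+alpha),
    l >= ell_min, which has the power-law tail P(l) ~ l**-(1+alpha).
    Inverse transform: l = ell_min * u**(-1/alpha) with u ~ Uniform(0, 1)."""
    u = rng.uniform(size=n)
    return ell_min * u ** (-1.0 / alpha)
```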

  4. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  5. Brittle fracture in viscoelastic materials as a pattern-formation process

    NASA Astrophysics Data System (ADS)

    Fleck, M.; Pilipenko, D.; Spatschek, R.; Brener, E. A.

    2011-04-01

    A continuum model of crack propagation in brittle viscoelastic materials is presented and discussed. Thereby, the phenomenon of fracture is understood as an elastically induced nonequilibrium interfacial pattern formation process. In this spirit, a full description of a propagating crack provides the determination of the entire time dependent shape of the crack surface, which is assumed to be extended over a finite and self-consistently selected length scale. The mechanism of crack propagation, that is, the motion of the crack surface, is then determined through linear nonequilibrium transport equations. Here we consider two different mechanisms, a first-order phase transformation and surface diffusion. We give scaling arguments showing that steady-state solutions with a self-consistently selected propagation velocity and crack shape can exist provided that elastodynamic or viscoelastic effects are taken into account, whereas static elasticity alone is not sufficient. In this respect, inertial effects as well as viscous damping are identified to be sufficient crack tip selection mechanisms. Exploring the arising description of brittle fracture numerically, we study steady-state crack propagation in the viscoelastic and inertia limit as well as in an intermediate regime, where both effects are important. The arising free boundary problems are solved by phase field methods and a sharp interface approach using a multipole expansion technique. Different types of loading, mode I, mode III fracture, as well as mixtures of them, are discussed.

  6. Protein Dynamics from NMR and Computer Simulation

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Kravchenko, Olga; Kemple, Marvin; Likic, Vladimir; Klimtchuk, Elena; Prendergast, Franklyn

    2002-03-01

    Proteins exhibit internal motions from the millisecond to sub-nanosecond time scale. The challenge is to relate these internal motions to biological function. A strategy to address this aim is to apply a combination of several techniques including high-resolution NMR, computer simulation of molecular dynamics (MD), molecular graphics, and finally molecular biology, the latter to generate appropriate samples. Two difficulties that arise are: (1) the time scale which is most directly biologically relevant (ms to μs) is not readily accessible by these techniques and (2) the techniques focus on local and not collective motions. We will outline methods using ¹³C NMR to help alleviate the second problem, as applied to intestinal fatty acid binding protein, a relatively small intracellular protein believed to be involved in fatty acid transport and metabolism. This work is supported in part by PHS Grant GM34847 (FGP) and by a fellowship from the American Heart Association (QW).

  7. Golden Ratio in a Coupled-Oscillator Problem

    ERIC Educational Resources Information Center

    Moorman, Crystal M.; Goff, John Eric

    2007-01-01

    The golden ratio appears in a classical mechanics coupled-oscillator problem that many undergraduates may not solve. Once the symmetry is broken in a more standard problem, the golden ratio appears. Several student exercises arise from the problem considered in this paper.
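
    One standard version of the broken-symmetry problem (our reconstruction, assuming equal masses m and spring constants k in a wall-spring-mass-spring-mass chain with a free outer end) shows where the golden ratio φ enters:

```latex
m\ddot{x}_1 = -2k\,x_1 + k\,x_2, \qquad m\ddot{x}_2 = k\,x_1 - k\,x_2 .
% Normal modes x_j \propto e^{i\omega t} with \lambda = m\omega^2/k give
\lambda^2 - 3\lambda + 1 = 0, \qquad
\lambda_\pm = \frac{3 \pm \sqrt{5}}{2} = \varphi^{\pm 2},
% so the two mode frequencies are
\omega_\pm = \varphi^{\pm 1}\sqrt{k/m}, \qquad
\varphi = \frac{1+\sqrt{5}}{2}.
```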

  8. Calculation of Rayleigh type sums for zeros of the equation arising in spectral problem

    NASA Astrophysics Data System (ADS)

    Kostin, A. B.; Sherstyukov, V. B.

    2017-12-01

    For zeros of the equation (arising in the oblique derivative problem) μ J_n'(μ) cos α + i n J_n(μ) sin α = 0, μ ∈ ℂ, with parameters n ∈ ℤ, α ∈ [-π/2, π/2], and the Bessel function J_n(μ), special summation relationships are proved. The obtained results are consistent with the theory of the well-known Rayleigh sums calculated from zeros of the Bessel function.

  9. Small dark energy and stable vacuum from Dilaton-Gauss-Bonnet coupling in TMT

    NASA Astrophysics Data System (ADS)

    Guendelman, Eduardo I.; Nishino, Hitoshi; Rajpoot, Subhash

    2017-04-01

    In two measures theories (TMT), in addition to the Riemannian measure of integration, which is the square root of the determinant of the metric, we introduce a metric-independent density Φ in four dimensions, defined in terms of scalars ϕ_a by Φ = ε^{μνρσ} ε_{abcd} (∂_μ ϕ_a)(∂_ν ϕ_b)(∂_ρ ϕ_c)(∂_σ ϕ_d). With the help of a dilaton field φ we construct theories that are globally scale invariant. In particular, by introducing couplings of the dilaton φ to the Gauss-Bonnet (GB) topological density √(-g) φ (R_{μνρσ}² - 4R_{μν}² + R²) we obtain a theory that is scale invariant up to a total divergence. Integration of the ϕ_a field equation leads to an integration constant that breaks the global scale symmetry. We discuss the stabilizing effects of the coupling of the dilaton to the GB topological density on vacua with a very small cosmological constant and the resolution of the 'TMT vacuum-manifold problem' which exists in the zero cosmological-constant vacuum limit. This problem generically arises from an effective potential that is a perfect square, giving rise to a vacuum manifold instead of a unique vacuum solution in the presence of many different scalars, like the dilaton, the Higgs, etc. In the non-zero cosmological-constant case this problem disappears. Furthermore, the GB coupling to the dilaton eliminates flat directions in the effective potential, and it totally lifts the vacuum-manifold degeneracy.

  10. The Applied Mathematics for Power Systems (AMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael

    2012-07-24

    Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.

  11. Common Methodological Problems in Research on the Addictions.

    ERIC Educational Resources Information Center

    Nathan, Peter E.; Lansky, David

    1978-01-01

    Identifies common problems in research on the addictions and offers suggestions for remediating these methodological problems. The addictions considered include alcoholism and drug dependencies. Problems considered are those arising from inadequate, incomplete, or biased reviews of relevant literatures and methodological shortcomings of subject…

  12. Finite Dimensional Approximations for Continuum Multiscale Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlyand, Leonid

    2017-01-24

    The completed research project concerns the development of novel computational techniques for modeling nonlinear multiscale physical and biological phenomena. Specifically, it addresses the theoretical development and applications of the homogenization theory (coarse graining) approach to calculation of the effective properties of highly heterogeneous biological and bio-inspired materials with many spatial scales and nonlinear behavior. This theory studies properties of strongly heterogeneous media in problems arising in materials science, geoscience, biology, etc. Modeling of such media raises fundamental mathematical questions, primarily in partial differential equations (PDEs) and the calculus of variations, the subject of the PI's research. The focus of the completed research was on mathematical models of biological and bio-inspired materials, with the common theme of multiscale analysis and coarse-grain computational techniques. Biological and bio-inspired materials offer the unique ability to create environmentally clean functional materials used for energy conversion and storage. These materials are intrinsically complex, with hierarchical organization occurring on many nested length and time scales. The potential to rationally design and tailor the properties of these materials for broad energy applications has been hampered by the lack of computational techniques able to bridge from the molecular to the macroscopic scale. The project addressed the challenge of computational treatment of such complex materials by the development of a synergistic approach that combines innovative multiscale modeling and analysis techniques with high performance computing.

  13. Low thrust propulsion system effects on communication satellites.

    NASA Technical Reports Server (NTRS)

    Hall, D. F.; Lyon, W. C.

    1972-01-01

    Choice of type and placement of thrusters on spacecraft (s/c) should include consideration of their effects on other subsystems. Models are presented of the exhaust plumes of mercury, cesium, colloid, hydrazine, ammonia, and Teflon rockets. Effects arising from plume impingement on s/c surfaces, radio frequency interference, optical interference, and earth environmental contamination are discussed. Some constraints arise in the placement of mercury, cesium, and Teflon thrusters. Few problems exist with other thruster types, nor is earth contamination a problem.

  14. The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1989-01-01

    The generalized eigenvalue problem, Kx = λMx, is of significant practical importance, especially in structural engineering, where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
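
    Shifted Lanczos iterations for a symmetric definite pencil (K, M) are available off the shelf today; a minimal modern analogue of the computation LANZ performs (toy matrices and SciPy's shift-invert eigsh, not LANZ itself) is:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Generalized symmetric-definite problem K x = lambda M x, as in
# vibration analysis; shift-invert Lanczos about sigma finds the
# eigenvalues nearest the shift, analogous to LANZ's dynamic shifting.
n = 1000
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = diags([1.0 / n], [0], shape=(n, n), format="csc")

vals, vecs = eigsh(K, k=5, M=M, sigma=0.0, which="LM")  # 5 nearest sigma=0
print(vals)
```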

  15. Performance issues for domain-oriented time-driven distributed simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1987-01-01

    It has long been recognized that simulations form an interesting and important class of computations that may benefit from distributed or parallel processing. Since the point of parallel processing is improved performance, the recent proliferation of multiprocessors requires that we consider the performance issues that naturally arise when attempting to implement a distributed simulation. Three such issues are: (1) the problem of mapping the simulation onto the architecture, (2) the possibilities for performing redundant computation in order to reduce communication, and (3) the avoidance of deadlock due to distributed contention for message-buffer space. These issues are discussed in the context of a battlefield simulation implemented on a medium-scale multiprocessor message-passing architecture.

  16. Qubit Architecture with High Coherence and Fast Tunable Coupling.

    PubMed

    Chen, Yu; Neill, C; Roushan, P; Leung, N; Fang, M; Barends, R; Kelly, J; Campbell, B; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Megrant, A; Mutus, J Y; O'Malley, P J J; Quintana, C M; Sank, D; Vainsencher, A; Wenner, J; White, T C; Geller, Michael R; Cleland, A N; Martinis, John M

    2014-11-28

    We introduce a superconducting qubit architecture that combines high-coherence qubits and tunable qubit-qubit coupling. With the ability to set the coupling to zero, we demonstrate that this architecture is protected from the frequency crowding problems that arise from fixed coupling. More importantly, the coupling can be tuned dynamically with nanosecond resolution, making this architecture a versatile platform with applications ranging from quantum logic gates to quantum simulation. We illustrate the advantages of dynamical coupling by implementing a novel adiabatic controlled-Z gate, with a speed approaching that of single-qubit gates. Integrating coherence and scalable control, the introduced qubit architecture provides a promising path towards large-scale quantum computation and simulation.

  17. Development of a Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Choi, S. B.; Ibrahim, A.

    2010-01-01

    A novel numerical, finite element based analysis methodology is presented in this paper suitable for accurate and efficient simulation of practical, complex flight vehicles. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high speed flow obtained from a heat conduction analysis are incorporated in the modal analysis which in turn affects the unsteady flow arising out of interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in much detail testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD based multidisciplinary simulation analysis capability involving large scale computations.

  18. Dynamics of the cosmological relaxation after reheating

    NASA Astrophysics Data System (ADS)

    Choi, Kiwoon; Kim, Hyungjin; Sekiguchi, Toyokazu

    2017-04-01

    We examine if the cosmological relaxation mechanism, which was proposed recently as a new solution to the hierarchy problem, can be compatible with high reheating temperature well above the weak scale. As the barrier potential disappears at high temperature, the relaxion rolls down further after the reheating, which may ruin the successful implementation of the relaxation mechanism. It is noted that if the relaxion is coupled to a dark gauge boson, the new frictional force arising from dark gauge boson production can efficiently slow down the relaxion motion, which allows the relaxion to be stabilized after the electroweak phase transition for a wide range of model parameters, while satisfying the known observational constraints.

  19. Historical Evidence of Importance to the Industrialization of Flat-plate Silicon Photovoltaic Systems, Volume 2

    NASA Technical Reports Server (NTRS)

    Smith, J. L.; Gates, W. R.; Lee, T.

    1978-01-01

    Problems which may arise as the low cost silicon solar array (LSSA) project attempts to industrialize the production technologies are defined. The charge to ensure an annual production capability of 500 MW peak for the photovoltaic supply industry by 1986 was critically examined, with a focus on one of the motivations behind this goal: concern over the timely development of industrial capacity to supply anticipated demand. Conclusions from the analysis are used in a discussion of LSSA's industrialization plans, particularly the plans for pilot-, demonstration-, and commercial-scale production plants. Specific recommendations for the implementation of an industrialization task and the disposition of the project quantity goal were derived.

  20. Eigenmode computation of cavities with perturbed geometry using matrix perturbation methods applied on generalized eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gorgizadeh, Shahnam; Flisgen, Thomas; van Rienen, Ursula

    2018-07-01

    Generalized eigenvalue problems are standard problems in computational sciences. They may arise in electromagnetic field computations from the discretization of the Helmholtz equation by, for example, the finite element method (FEM). Geometrical perturbations of the structure under concern lead to new generalized eigenvalue problems with different system matrices. Such perturbations may arise from manufacturing tolerances, harsh operating conditions, or during shape optimization. Directly solving the eigenvalue problem for each perturbation is computationally costly. The perturbed eigenpairs can instead be approximated using eigenpair derivatives. Two common approaches for the calculation of eigenpair derivatives, namely the modal superposition method and direct algebraic methods, are discussed in this paper. Based on the direct algebraic methods, an iterative algorithm is developed for efficiently calculating the eigenvalues and eigenvectors of the perturbed geometry from those of the unperturbed geometry.
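
    The first-order eigenpair-derivative idea underlying such algorithms can be sketched in a few lines. This is a generic illustration rather than the authors' iterative scheme; the symmetric first-order formula and the SciPy-based setup are assumptions.

    ```python
    # Sketch: first-order eigenvalue perturbation for a symmetric
    # generalized eigenproblem K x = lam M x (assumption: K, M symmetric,
    # M positive definite; not the paper's full iterative algorithm).
    import numpy as np
    from scipy.linalg import eigh

    def perturbed_eigenvalues(K, M, dK, dM):
        """Estimate the eigenvalues of (K + dK) x = lam (M + dM) x from
        the unperturbed eigenpairs via first-order perturbation theory."""
        lam, X = eigh(K, M)  # unperturbed eigenpairs, M-orthonormal columns
        # d_lam_i = x_i^T (dK - lam_i dM) x_i / (x_i^T M x_i)
        d_lam = np.array([x @ (dK - l * dM) @ x / (x @ M @ x)
                          for l, x in zip(lam, X.T)])
        return lam + d_lam
    ```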

  1. Cosmological constraints on pseudo-Nambu-Goldstone bosons

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Jaffe, Andrew H.

    1991-01-01

    Particle physics models with pseudo-Nambu-Goldstone bosons (PNGBs) are characterized by two mass scales: a global spontaneous symmetry breaking scale, $f$, and a soft (explicit) symmetry breaking scale, $\Lambda$. General, model-insensitive constraints on this two-dimensional parameter space, arising from the cosmological and astrophysical effects of PNGBs, were studied. In particular, constraints were studied arising from vacuum misalignment and thermal production of PNGBs, topological defects, and the cosmological effects of PNGB decay products, as well as astrophysical constraints from stellar PNGB emission. Bounds on the Peccei-Quinn axion scale, $10^{10}\,\mathrm{GeV} \lesssim f_{\rm PQ} \lesssim 10^{10}\text{--}10^{12}\,\mathrm{GeV}$, emerge as a special case, where the soft breaking scale is fixed at $\Lambda_{\rm QCD} \approx 100\,\mathrm{MeV}$.

  2. The Riemann-Hilbert problem for nonsymmetric systems

    NASA Astrophysics Data System (ADS)

    Greenberg, W.; Zweifel, P. F.; Paveri-Fontana, S.

    1991-12-01

    A comparison of the Riemann-Hilbert problem and the Wiener-Hopf factorization problem arising in the solution of half-space singular integral equations is presented. Emphasis is on the factorization of functions lacking the reflection symmetry usual in transport theory.

  3. Scaled model guidelines for solar coronagraphs' external occulters with an optimized shape.

    PubMed

    Landini, Federico; Baccani, Cristian; Schweitzer, Hagen; Asoubar, Daniel; Romoli, Marco; Taccola, Matteo; Focardi, Mauro; Pancrazzi, Maurizio; Fineschi, Silvano

    2017-12-01

    One of the major challenges faced by externally occulted solar coronagraphs is the suppression of the light diffracted by the occulter edge. It is a contribution to the stray light that overwhelms the coronal signal on the focal plane and must be reduced by modifying the geometrical shape of the occulter. There is a rich literature, mostly experimental, on the appropriate choice of the most suitable shape. The problem arises when huge coronagraphs, such as those in formation flight, are to be tested in a laboratory. A recent contribution [Opt. Lett. 41, 757 (2016), doi:10.1364/OL.41.000757] provides the guidelines for scaling the geometry and replicating in the laboratory the flight diffraction pattern as produced by the whole solar disk and a flight occulter, but leaves the conclusion on the occulter scaling law unjustified. This paper provides the numerical support for validating that conclusion and presents the first-ever simulation of the diffraction along the optical axis behind an occulter with an optimized shape, with the solar disk as a source. This paper, together with Opt. Lett. 41, 757 (2016), aims to constitute a complete guide for scaling coronagraph geometry.

  4. A Comparison of Solver Performance for Complex Gastric Electrophysiology Models

    PubMed Central

    Sathar, Shameer; Cheng, Leo K.; Trew, Mark L.

    2016-01-01

    Computational techniques for solving the systems of equations arising in gastric electrophysiology have not been studied systematically for solution efficiency. We present a computationally challenging problem of simulating gastric electrophysiology in anatomically realistic stomach geometries with multiple intracellular and extracellular domains. The multiscale nature of the problem and the mesh resolution required to capture geometric and functional features necessitate efficient solution methods if the problem is to be tractable. In this study, we investigated and compared several parallel preconditioners for the linear systems arising from tetrahedral discretisation of electrically isotropic and anisotropic problems, with and without stimuli. The results showed that the isotropic problem was computationally less challenging than the anisotropic problem and that the application of extracellular stimuli increased the workload considerably. Preconditioners based on block Jacobi and algebraic multigrid solvers were found to have the best overall solution times and lowest iteration counts, respectively. The algebraic multigrid preconditioner would be expected to perform better on large problems. PMID:26736543
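
    A toy version of this kind of comparison can be run with off-the-shelf serial tools. This is a sketch only: the Poisson stand-in matrix, the use of pyamg, and the iteration-counting callback are assumptions for illustration, not the study's parallel solver stack.

    ```python
    # Compare CG iteration counts with Jacobi vs. algebraic multigrid (AMG)
    # preconditioning on a sparse SPD stand-in system (assumed setup).
    import numpy as np
    import scipy.sparse.linalg as spla
    import pyamg

    A = pyamg.gallery.poisson((200, 200), format='csr')  # stand-in SPD matrix
    b = np.ones(A.shape[0])
    iters = {'jacobi': 0, 'amg': 0}

    def counter(key):
        def cb(xk):          # called once per CG iteration
            iters[key] += 1
        return cb

    # (i) Jacobi (diagonal) preconditioner: M^{-1} v = v / diag(A)
    d = A.diagonal()
    M_jac = spla.LinearOperator(A.shape, matvec=lambda v: v / d)
    spla.cg(A, b, M=M_jac, callback=counter('jacobi'))

    # (ii) smoothed-aggregation AMG preconditioner
    M_amg = pyamg.smoothed_aggregation_solver(A).aspreconditioner()
    spla.cg(A, b, M=M_amg, callback=counter('amg'))

    print(iters)  # AMG typically needs far fewer iterations than Jacobi
    ```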

  5. Problem-Solving during Shared Reading at Kindergarten

    ERIC Educational Resources Information Center

    Gosen, Myrte N.; Berenst, Jan; de Glopper, Kees

    2015-01-01

    This paper reports on a conversation analytic study of problem-solving interactions during shared reading at three kindergartens in the Netherlands. It illustrates how teachers and pupils discuss book characters' problems that arise in the events in the picture books. A close analysis of the data demonstrates that problem-solving interactions do…

  6. Transnational Environmental Problems--The United States, Canada, Mexico.

    ERIC Educational Resources Information Center

    Wilcher, Marshall E.

    1983-01-01

    Examines problems associated with transboundary environmental pollution, focusing on problems arising between the United States and Mexico and between the United States and Canada. Also discusses new organizational forms developed to bring transboundary issues to a higher policy-making level. (JN)

  7. Fair Inference on Outcomes

    PubMed Central

    Nabi, Razieh; Shpitser, Ilya

    2017-01-01

    In this paper, we consider the problem of fair statistical inference involving outcome variables. Examples include classification and regression problems, and estimating treatment effects in randomized trials or observational data. The issue of fairness arises in such problems where some covariates or treatments are "sensitive," in the sense of having the potential to create discrimination. In this paper, we argue that the presence of discrimination can be formalized in a sensible way as the presence of an effect of a sensitive covariate on the outcome along certain causal pathways, a view which generalizes (Pearl 2009). A fair outcome model can then be learned by solving a constrained optimization problem. We discuss a number of complications that arise in classical statistical inference due to this view and provide workarounds based on recent work in causal and semi-parametric inference.

  8. Distinctions between fraud, bias, errors, misunderstanding, and incompetence.

    PubMed

    DeMets, D L

    1997-12-01

    Randomized clinical trials are challenging not only in their design and analysis, but in their conduct as well. Despite the best intentions and efforts, problems often arise in the conduct of trials, including errors, misunderstandings, and bias. In some instances, key players in a trial may discover that they are not able or competent to meet the requirements of the study. In a few cases, fraudulent activity occurs. While none of these problems is desirable, randomized clinical trials are usually sufficiently robust to many of them to produce valid results. Other problems are not tolerable. Confusion may arise among scientists, the scientific and lay press, and the public about the distinctions between these areas and their implications. We shall try to define these problems and illustrate their impact through a series of examples.

  9. A non-local free boundary problem arising in a theory of financial bubbles

    PubMed Central

    Berestycki, Henri; Monneau, Regis; Scheinkman, José A.

    2014-01-01

    We consider an evolution non-local free boundary problem that arises in the modelling of speculative bubbles. The solution of the model is the speculative component in the price of an asset. In the framework of viscosity solutions, we show the existence and uniqueness of the solution. We also show that the solution is convex in space, and establish several monotonicity properties of the solution and of the free boundary with respect to parameters of the problem. To study the free boundary, we use, in particular, the fact that the odd part of the solution solves a more standard obstacle problem. We show that the free boundary is and describe the asymptotics of the free boundary as c, the cost of transacting the asset, goes to zero. PMID:25288815

  10. The Quantum Measurement Problem and Physical reality: A Computation Theoretic Perspective

    NASA Astrophysics Data System (ADS)

    Srikanth, R.

    2006-11-01

    Is the universe computable? If yes, is it computationally a polynomial place? In standard quantum mechanics, which permits infinite parallelism and the infinitely precise specification of states, a negative answer to both questions is not ruled out. On the other hand, empirical evidence suggests that NP-complete problems are intractable in the physical world. Likewise, computational problems known to be algorithmically uncomputable do not seem to be computable by any physical means. We suggest that this close correspondence between the efficiency and power of abstract algorithms on the one hand, and physical computers on the other, finds a natural explanation if the universe is assumed to be algorithmic; that is, that physical reality is the product of discrete sub-physical information processing equivalent to the actions of a probabilistic Turing machine. This assumption can be reconciled with the observed exponentiality of quantum systems at microscopic scales, and the consequent possibility of implementing Shor's quantum polynomial time algorithm at that scale, provided the degree of superposition is intrinsically, finitely upper-bounded. If this bound is associated with the quantum-classical divide (the Heisenberg cut), a natural resolution to the quantum measurement problem arises. From this viewpoint, macroscopic classicality is an evidence that the universe is in BPP, and both questions raised above receive affirmative answers. A recently proposed computational model of quantum measurement, which relates the Heisenberg cut to the discreteness of Hilbert space, is briefly discussed. A connection to quantum gravity is noted. Our results are compatible with the philosophy that mathematical truths are independent of the laws of physics.

  11. Growth of matter perturbation in quintessence cosmology

    NASA Astrophysics Data System (ADS)

    Mulki, Fargiza A. M.; Wulandari, Hesti R. T.

    2017-01-01

    Big bang theory states that the universe emerged from a singularity with very high temperature and density, then expanded homogeneously and isotropically. This theory gives rise to the standard cosmological principle, which declares that the universe is homogeneous and isotropic on large scales. However, the universe is not perfectly homogeneous and isotropic on small scales: structures exist on scales ranging from clusters and galaxies down to stars and planetary systems. Cosmological perturbation theory is a fundamental theory that explains the origin of structures. According to this theory, the structures can be regarded as small perturbations in the early universe, which evolve as the universe expands. In addition to the problem of inhomogeneities of the universe, observations of type Ia supernovae suggest that our universe is accelerating. Various models of dark energy have been proposed to explain cosmic acceleration, one of them being the cosmological constant. Because several problems arise from the cosmological constant, alternative models have been proposed; one of these is quintessence. We reconstruct the growth of structure following the quintessence scenario at several epochs of the universe, each specified by the effective equation of state parameter for that stage. Discussion begins with the dynamics of quintessence, in which an exponential potential is analytically derived, leading to various conditions of the universe. We then focus on scaling and quintessence-dominated solutions. Subsequently, we review the basics of cosmological perturbation theory and derive formulas to investigate how matter perturbations evolve with time on subhorizon scales, which leads to structure formation, and also analyze the influence of quintessence on structure formation. From this analytical exploration, we obtain the growth rate of matter perturbations and find that the existence of quintessence as dark energy slows down the growth of structure in the universe.
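
    The standard subhorizon growth equation such analyses start from can be integrated numerically in a few lines. This is a hedged reconstruction for a constant-w quintessence toy model; the parameter values are illustrative assumptions, not the paper's.

    ```python
    # Subhorizon matter growth in e-folds N = ln a (standard form):
    #   delta'' + (1/2)(1 - 3 w Om_de) delta' - (3/2) Om_m delta = 0
    # for constant-w quintessence (w = -0.9 and Om_m0 = 0.3 are assumptions).
    import numpy as np
    from scipy.integrate import solve_ivp

    w, Om_m0 = -0.9, 0.3

    def rhs(N, y):
        a = np.exp(N)
        Om_m = Om_m0 / (Om_m0 + (1 - Om_m0) * a**(-3 * w))  # matter fraction
        delta, dp = y
        return [dp, -0.5 * (1 - 3 * w * (1 - Om_m)) * dp + 1.5 * Om_m * delta]

    # start deep in matter domination, where delta ~ a (so delta' = delta)
    N = np.linspace(np.log(1e-3), 0.0, 200)
    sol = solve_ivp(rhs, (N[0], N[-1]), [1e-3, 1e-3], t_eval=N)
    f0 = sol.y[1, -1] / sol.y[0, -1]  # growth rate f = dln(delta)/dln(a) today
    print(f"f(z=0) ~ {f0:.3f}")       # close to Om_m(0)**0.55 for wCDM
    ```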

  12. A Summary of Some Discrete-Event System Control Problems

    NASA Astrophysics Data System (ADS)

    Rudie, Karen

    A summary of the area of control of discrete-event systems is given. In this research area, automata and formal language theory is used as a tool to model physical problems that arise in technological and industrial systems. The key ingredients to discrete-event control problems are a process that can be modeled by an automaton, events in that process that cannot be disabled or prevented from occurring, and a controlling agent that manipulates the events that can be disabled to guarantee that the process under control either generates all the strings in some prescribed language or as many strings as possible in some prescribed language. When multiple controlling agents act on a process, decentralized control problems arise. In decentralized discrete-event systems, it is presumed that the agents effecting control cannot each see all event occurrences. Partial observation leads to some problems that cannot be solved in polynomial time and some others that are not even decidable.

  13. The Visual Effects of Intraocular Colored Filters

    PubMed Central

    Hammond, Billy R.

    2012-01-01

    Modern life is associated with a myriad of visual problems, most notably refractive conditions such as myopia. Human ingenuity has addressed such problems using strategies such as spectacle lenses or surgical correction. There are other visual problems, however, that have been present throughout our evolutionary history and are not as easily solved by simply correcting refractive error. These problems include issues like glare disability and discomfort arising from intraocular scatter, photostress with the associated transient loss in vision that arises from short intense light exposures, or the ability to see objects in the distance through a veil of atmospheric haze. One likely biological solution to these more long-standing problems has been the use of colored intraocular filters. Many species, especially diurnal, incorporate chromophores from numerous sources (e.g., often plant pigments called carotenoids) into ocular tissues to improve visual performance outdoors. This review summarizes information on the utility of such filters focusing on chromatic filtering by humans. PMID:24278692

  14. New discretization and solution techniques for incompressible viscous flow problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.; Liu, C. H.

    1983-01-01

    Several topics arising in the finite element solution of the incompressible Navier-Stokes equations are considered. Specifically, the question of choosing finite element velocity/pressure spaces is addressed, particularly from the viewpoint of achieving stable discretizations leading to convergent pressure approximations. The role of artificial viscosity in viscous flow calculations is studied, emphasizing work by several researchers for the anisotropic case. The last section treats the problem of solving the nonlinear systems of equations which arise from the discretization. Time marching methods and classical iterative techniques, as well as some modifications are mentioned.

  15. Multigrid Algorithms for the Solution of Linear Complementarity Problems Arising from Free Boundary Problems.

    DTIC Science & Technology

    1980-10-01

    faster than previous algorithms. Indeed, with only minor modifications, the standard multigrid programs solve the LCP with essentially the same efficiency... Lemma 2.2. Let U_k be the solution of the LCP (2.3), and let u_k > 0 be an approximate solution obtained after one or more G_k projected sweeps. Let... In Figure 3.2, ‖vu‖_G decreased from .293 10 to .110 10 with the expenditure of (99.039 - 94.400) = 4.639 work units. While minor variations do arise, a...
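
    The "projected sweeps" the report fragment refers to are projected Gauss-Seidel iterations, the smoother on which such multigrid LCP solvers are built. A minimal serial sketch follows (dense SPD matrix, obstacle at zero; the multigrid transfer operators are omitted).

    ```python
    # Projected Gauss-Seidel for the LCP:  u >= 0,  A u - b >= 0,
    # u^T (A u - b) = 0  (A assumed to be a dense SPD NumPy array).
    import numpy as np

    def projected_gauss_seidel(A, b, u, sweeps=1):
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):
                r = b[i] - A[i] @ u + A[i, i] * u[i]  # residual excluding u_i
                u[i] = max(0.0, r / A[i, i])          # project onto u_i >= 0
        return u
    ```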

  16. Accurate detection of hierarchical communities in complex networks based on nonlinear dynamical evolution

    NASA Astrophysics Data System (ADS)

    Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng

    2018-04-01

    One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-change" type of approach to addressing the problem of community detection in complex networks.
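
    The principle can be illustrated with the simplest possible choice of nodal dynamics: identical Kuramoto phase oscillators. At intermediate coupling, phases lock within densely connected groups before global synchronization is reached, and the phase clusters trace the communities. The forward-Euler integrator, dense adjacency matrix, and gap-based clustering rule below are illustrative assumptions, not the authors' scheme.

    ```python
    # Cluster-synchronization sketch: integrate identical Kuramoto
    # oscillators coupled through adjacency matrix A (dense NumPy array),
    # then group phase-locked nodes as candidate communities.
    import numpy as np

    def kuramoto_communities(A, coupling, t_max=20.0, dt=0.01, gap=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        theta = rng.uniform(0.0, 2.0 * np.pi, n)
        for _ in range(int(t_max / dt)):  # forward-Euler integration
            dtheta = coupling * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta = (theta + dt * dtheta) % (2.0 * np.pi)
        # label nodes: sort phases and start a new cluster at gaps > `gap`
        # (ignores the wrap-around at 2*pi; adequate for a sketch)
        order = np.argsort(theta)
        labels = np.empty(n, dtype=int)
        labels[order[0]] = 0
        for a, b in zip(order, order[1:]):
            labels[b] = labels[a] + (theta[b] - theta[a] > gap)
        return labels
    ```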

  17. Analytical-numerical solution of a nonlinear integrodifferential equation in econometrics

    NASA Astrophysics Data System (ADS)

    Kakhktsyan, V. M.; Khachatryan, A. Kh.

    2013-07-01

    A mixed problem for a nonlinear integrodifferential equation arising in econometrics is considered. An analytical-numerical method is proposed for solving the problem. Some numerical results are presented.

  18. Practical statistics in pain research.

    PubMed

    Kim, Tae Kyun

    2017-10-01

    Pain is subjective, while statistics related to pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues are related to the levels of scales that are often used in pain research, the choice between parametric and nonparametric statistical methods, and problems which arise from repeated measurements. In the field of pain research, parametric statistics have often been applied in an erroneous way, which is closely related to the scales of data and to repeated measurements. The levels of scales include nominal, ordinal, interval, and ratio scales, and the level of scale affects the choice between parametric and non-parametric methods. In pain research, the most frequently used pain assessment scale is the ordinal scale, which includes the visual analogue scale (VAS). There is, however, another view that considers the VAS to be an interval or ratio scale, so that the use of parametric statistics is accepted in practice in some cases. Repeated measurements of the same subjects always complicate statistics: the measurements inevitably have correlations with each other, which precludes the application of one-way ANOVA, for which independence between the measurements is necessary. Repeated-measures ANOVA (RM-ANOVA), however, permits comparison between the correlated measurements as long as the sphericity assumption is satisfied. In conclusion, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.
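
    As a concrete illustration of the point about repeated measurements, a repeated-measures ANOVA can be run in a few lines. The data below are synthetic, and the statsmodels-based setup is an assumption for illustration.

    ```python
    # RM-ANOVA on (synthetic) VAS scores measured three times per subject;
    # one-way ANOVA would be invalid here because a subject's repeated
    # measurements are correlated.
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    n_subj, times = 20, ['baseline', '1h', '24h']
    data = pd.DataFrame({
        'subject': np.repeat(np.arange(n_subj), len(times)),
        'time': np.tile(times, n_subj),
        'vas': rng.normal([6.0, 4.0, 3.0], 1.0, (n_subj, len(times))).ravel(),
    })
    print(AnovaRM(data, depvar='vas', subject='subject', within=['time']).fit())
    ```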

  19. Contextualized Mathematics Problems and Transfer of Knowledge: Establishing Problem Spaces and Boundaries

    ERIC Educational Resources Information Center

    McGraw, Rebecca; Patterson, Cody L.

    2017-01-01

    In this study, we examine how inservice secondary mathematics teachers working together on a contextualized problem negotiate issues arising from the ill-structured nature of the problem such as what assumptions one may make, what real-world considerations should be taken into account, and what constitutes a satisfactory solution. We conceptualize…

  20. Elimination of artificial grid distortion and hourglass-type motions by means of Lagrangian subzonal masses and pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caramana, E.J.; Shashkov, M.J.

    1997-12-31

    The bane of Lagrangian hydrodynamics calculations is premature breakdown of the grid topology, which results in severe degradation of accuracy and run termination often long before the assumption of Lagrangian zonal mass ceases to be valid. At short spatial grid scales this is usually referred to by the terms hourglass mode or keystone motion, associated in particular with underconstrained grids such as quadrilaterals and hexahedrons in two and three dimensions, respectively. At longer spatial scales relative to the grid spacing there is what is referred to ubiquitously as spurious vorticity, or the long-thin zone problem. In both cases the result is anomalous grid distortion and tangling that has nothing to do with the actual solution, as would be the case for turbulent flow. In this work the authors show how such motions can be eliminated by the proper use of subzonal Lagrangian masses and associated densities and pressures. These subzonal masses arise in a natural way from the requirement that the mass associated with each nodal grid point be constant in time, in addition to the usual assumption of constant Lagrangian zonal mass in staggered-grid hydrodynamics schemes. The authors show that with proper discretization of subzonal forces resulting from subzonal pressures, hourglass motion and spurious vorticity can be eliminated for a very large range of problems. Finally, the authors present results of calculations for many test problems.

  1. Design and Analysis of Windmill Simulation and Pole by Solidwork Program

    NASA Astrophysics Data System (ADS)

    Mulyana, Tatang; Sebayang, Darwin; R, Akmal Muamar. D.; A, Jauharah H. D.; Yahya Shomit, M.

    2018-03-01

    As an archipelago, Indonesia has great wind energy potential. For micro-scale power generation, the energy obtained from a windmill can be connected directly to the electrical load and used without problems. For macro-scale power generation, however, problems arise, such as the design of the vane shape: accurate simulation and experiment are needed to produce blades with a special shape that can capture wind energy. In addition, daily and yearly wind and wind-rate calculations are required to determine the best latitude and longitude positions for building windmills. This paper presents a solution to the problem of producing a windmill that is practical to build and mobile enough to be relocated. Before a windmill prototype is built, the best windmill design should first be obtained; simulation of the designed windmill is therefore of crucial importance. SolidWorks SimulationXpress is a tool that generates simulations of a design. Factors that can affect a design result include the load-bearing and remaining parts, the material selection, the applied load, the strength and safety of the design, and the changes in shape caused by the load applied to the design. In this paper, static and thermal simulations of the designed windmill are presented. The simulation results show that the design is very satisfactory, so that the prototype fabrication process can proceed.

  2. Reconciling large- and small-scale structure in Twin Higgs models

    DOE PAGES

    Prilepina, Valentina; Tsai, Yuhsin

    2017-09-08

    Here, we study possible extensions of the Twin Higgs model that solve the Hierarchy problem and simultaneously address problems of the large- and small-scale structures of the Universe. Besides naturally providing dark matter (DM) candidates as the lightest charged twin fermions, the twin sector contains a light photon and neutrinos, which can modify structure formation relative to the prediction from the ΛCDM paradigm. We focus on two viable scenarios. First, we study a Fraternal Twin Higgs model in which the spin-3/2 baryon $\hat{Ω}$ ($\hat{b}\hat{b}\hat{b}$) and the twin tau lepton $\hat{τ}$ contribute to the dominant and subcomponent dark matter densities. A non-decoupled scattering between the twin tau and twin neutrino arising from a gauged twin lepton number symmetry provides a drag force that damps the density inhomogeneity of a dark matter subcomponent. Next, we consider the possibility of introducing a twin hydrogen atom $\hat{H}$ as the dominant DM component. After recombination, a small fraction of the twin protons and leptons remains ionized during structure formation, and their scattering to twin neutrinos through a gauged $U(1)_{B-L}$ force provides the mechanism that damps the density inhomogeneity. Both scenarios realize the Partially Acoustic dark matter (PAcDM) scenario and explain the $σ_8$ discrepancy between the CMB and weak lensing results. Moreover, the self-scattering neutrino behaves as a dark fluid that enhances the size of the Hubble rate $H_0$ to accommodate the local measurement result while satisfying the CMB constraint. For the small-scale structure, the scattering of $\hat{Ω}$'s and $\hat{H}$'s through the twin photon exchange generates a self-interacting dark matter (SIDM) model that solves the mass deficit problem from dwarf galaxy to galaxy cluster scales. Furthermore, when varying general choices of the twin photon coupling, bounds from the dwarf galaxy and the cluster merger observations can set an upper limit on the twin electric coupling.

  3. Reconciling large- and small-scale structure in Twin Higgs models

    NASA Astrophysics Data System (ADS)

    Prilepina, Valentina; Tsai, Yuhsin

    2017-09-01

    We study possible extensions of the Twin Higgs model that solve the Hierarchy problem and simultaneously address problems of the large- and small-scale structures of the Universe. Besides naturally providing dark matter (DM) candidates as the lightest charged twin fermions, the twin sector contains a light photon and neutrinos, which can modify structure formation relative to the prediction from the ΛCDM paradigm. We focus on two viable scenarios. First, we study a Fraternal Twin Higgs model in which the spin-3/2 baryon $\widehat{Ω}$ ($\widehat{b}\widehat{b}\widehat{b}$) and the twin tau lepton $\widehat{τ}$ contribute to the dominant and subcomponent dark matter densities. A non-decoupled scattering between the twin tau and twin neutrino arising from a gauged twin lepton number symmetry provides a drag force that damps the density inhomogeneity of a dark matter subcomponent. Next, we consider the possibility of introducing a twin hydrogen atom $\widehat{H}$ as the dominant DM component. After recombination, a small fraction of the twin protons and leptons remains ionized during structure formation, and their scattering to twin neutrinos through a gauged $U(1)_{B-L}$ force provides the mechanism that damps the density inhomogeneity. Both scenarios realize the Partially Acoustic dark matter (PAcDM) scenario and explain the $σ_8$ discrepancy between the CMB and weak lensing results. Moreover, the self-scattering neutrino behaves as a dark fluid that enhances the size of the Hubble rate $H_0$ to accommodate the local measurement result while satisfying the CMB constraint. For the small-scale structure, the scattering of $\widehat{Ω}$'s and $\widehat{H}$'s through the twin photon exchange generates a self-interacting dark matter (SIDM) model that solves the mass deficit problem from dwarf galaxy to galaxy cluster scales. Furthermore, when varying general choices of the twin photon coupling, bounds from the dwarf galaxy and the cluster merger observations can set an upper limit on the twin electric coupling.

  4. From the Golden Rectangle and Fibonacci to Pedagogy and Problem Posing

    ERIC Educational Resources Information Center

    Brown, Stephen I.

    1976-01-01

    Beginning with an analysis of the golden rectangle, the author shows how a series of problems for student investigation arise from queries concerning changes in conditions and analogous situations. (SD)

  5. Problems in Recording the Electrocardiogram.

    ERIC Educational Resources Information Center

    Webster, John G.

    The unwanted signals that arise in electrocardiography are discussed. A technical background of electrocardiography is given, along with teaching techniques that educate students of medical instrumentation to solve the problems caused by these signals. (MJH)

  6. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  7. Affinity adsorption of cells to surfaces and strategies for cell detachment.

    PubMed

    Hubble, John

    2007-01-01

    The use of bio-specific interactions for the separation and recovery of bio-molecules is now widely established and in many cases the technique has successfully crossed the divide between bench and process scale operation. Although the major specificity advantage of affinity-based separations also applies to systems intended for cell fractionation, developments in this area have been slower. Many of the problems encountered result from attempts to take techniques developed for molecular systems and, with only minor modification to the conditions used, apply them for the separation of cells. This approach tends to ignore or at least trivialise the problems, which arise from the heterogeneous nature of a cell suspension and the multivalent nature of the cell/surface interaction. To develop viable separation processes on a larger scale, effective contacting strategies are required in separators that also allow detachment or recovery protocols that overcome the enhanced binding strength generated by multivalent interactions. The effects of interaction valency on interaction strength needs to be assessed and approaches developed to allow effective detachment and recovery of adsorbed cells without compromising cell viability. This article considers the influence of operating conditions on cell attachment and the extent to which multivalent interactions determine the strength of cell binding and subsequent detachment.

  8. The NOνA Module Factory Quality Assurance System

    NASA Astrophysics Data System (ADS)

    Smith, Alex; the NOνA Collaboration

    The NOνA experiment will measure neutrino oscillations using a long-baseline beam, a ∼220-ton near detector and a ∼14-kiloton far detector. Production of ∼12500 modules to build these detectors is an industrial scale operation requiring careful quality assurance to meet the stringent technical specifications. Unlike a typical industrial operation, this project will use primarily a part time labor force of ∼200 University of Minnesota undergraduate students managed by a small team of full time employees. The quality assurance system is involved in nearly every aspect of the production: assembly, scheduling, training, payroll, materials, machine maintenance, test data, and safety compliance. The quality assurance data collected during the assembly process allows us to quickly identify and correct any problems that arise.

  9. Axion predictions in SO(10) × U(1)PQ models

    NASA Astrophysics Data System (ADS)

    Ernst, Anne; Ringwald, Andreas; Tamarit, Carlos

    2018-02-01

    Non-supersymmetric Grand Unified SO(10) × U(1)PQ models have all the ingredients to solve several fundamental problems of particle physics and cosmology — neutrino masses and mixing, baryogenesis, the non-observation of strong CP violation, dark matter, inflation — in one stroke. The axion — the pseudo Nambu-Goldstone boson arising from the spontaneous breaking of the U(1)PQ Peccei-Quinn symmetry — is the prime dark matter candidate in this setup. We determine the axion mass and the low energy couplings of the axion to the Standard Model particles, in terms of the relevant gauge symmetry breaking scales. We work out the constraints imposed on the latter by gauge coupling unification. We discuss the cosmological and phenomenological implications.

  10. Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp

    This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, generalizing greatly the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.

  11. Electronic shift register memory based on molecular electron-transfer reactions

    NASA Technical Reports Server (NTRS)

    Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.

    1989-01-01

    The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.

  12. Supernatural MSSM

    NASA Astrophysics Data System (ADS)

    Du, Guangle; Li, Tianjun; Nanopoulos, D. V.; Raza, Shabbar

    2015-07-01

    We point out that the electroweak fine-tuning problem in the supersymmetric standard models (SSMs) is mainly due to the high energy definition of the fine-tuning measure. We propose supernatural supersymmetry, which has an order-one high energy fine-tuning measure automatically. The key point is that all the mass parameters in the SSMs arise from a single supersymmetry breaking parameter. In this paper, we show that there is no supersymmetry electroweak fine-tuning problem explicitly in the minimal SSM (MSSM) with no-scale supergravity and the Giudice-Masiero mechanism. We demonstrate that the $Z$-boson mass, the supersymmetric Higgs mixing parameter $μ$ at the unification scale, and the sparticle spectrum can be given as functions of the universal gaugino mass $M_{1/2}$. Because the light stau is the lightest supersymmetric particle (LSP) in the no-scale MSSM, to preserve R parity, we introduce a non-thermally generated axino as the LSP dark matter candidate. We estimate the lifetime of the light stau by calculating its two-body and three-body decays to the LSP axino for several values of the axion decay constant $f_a$, and find that the light stau has a lifetime $τ_{\tilde{τ}_1} \in [10^{-4}, 100]$ s for an $f_a$ range $[10^9, 10^{12}]$ GeV. We show that our next-to-LSP stau solutions are consistent with all the current experimental constraints, including the sparticle mass bounds, B-physics bounds, Higgs mass, cosmological bounds, and the bounds on long-lived charged particles at the LHC.

  13. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524, 65-68 (2015)
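
    The adaptive heuristic this mapping leads to, Collective Influence (CI), is easy to sketch: score each node by (k_i - 1) times the sum of (k_j - 1) over the frontier of a ball of radius ell around it, then repeatedly remove the top-scoring node. The networkx implementation and ell = 2 below are illustrative assumptions; the published algorithm uses a much faster incremental update than this naive rescan.

    ```python
    # Naive Collective Influence (CI) sketch (assumed helper names).
    import networkx as nx

    def collective_influence(G, node, ell=2):
        """CI_ell(i) = (k_i - 1) * sum over the frontier of Ball(i, ell) of (k_j - 1)."""
        frontier = nx.descendants_at_distance(G, node, ell)
        return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

    def top_influencers(G, n_remove, ell=2):
        """Adaptively remove the highest-CI node n_remove times."""
        G = G.copy()
        influencers = []
        for _ in range(n_remove):
            best = max(G.nodes, key=lambda i: collective_influence(G, i, ell))
            influencers.append(best)
            G.remove_node(best)
        return influencers
    ```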

  14. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
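
    The basic iterative form referred to above can be written compactly. The sketch shows one perturbed-observation ensemble Kalman inversion update under standard assumptions; it omits the geometric and hierarchical parameterizations that are the paper's contribution.

    ```python
    # One EKI step: U is the (J, d) ensemble of unknowns, G the forward map
    # R^d -> R^k, y the (k,) data, Gamma the (k, k) noise covariance.
    import numpy as np

    def eki_step(U, G, y, Gamma, rng):
        J, k = U.shape[0], len(y)
        GU = np.array([G(u) for u in U])   # forward evaluations, shape (J, k)
        du = U - U.mean(axis=0)            # centered parameter ensemble
        dg = GU - GU.mean(axis=0)          # centered data ensemble
        Cug = du.T @ dg / (J - 1)          # cross-covariance C^{uG}
        Cgg = dg.T @ dg / (J - 1)          # data covariance  C^{GG}
        K = Cug @ np.linalg.inv(Cgg + Gamma)  # Kalman gain, shape (d, k)
        noise = rng.multivariate_normal(np.zeros(k), Gamma, size=J)
        return U + (y + noise - GU) @ K.T  # updates stay in span of ensemble
    ```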

  15. Long-term scale adaptive tracking with kernel correlation filters

    NASA Astrophysics Data System (ADS)

    Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui

    2018-04-01

    Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of the input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked against a cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving-area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
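
    At the core of such trackers is a correlation filter trained and applied in the Fourier domain, which is what makes them fast. The sketch below shows the linear (non-kernelized) single-channel case; the regularizer value and function names are illustrative assumptions, not the paper's full pipeline.

    ```python
    # Linear correlation filter: train on patch x with desired response y
    # (a small Gaussian peak centered on the target), detect on patch z.
    import numpy as np

    def train_filter(x, y, lam=1e-2):
        X, Y = np.fft.fft2(x), np.fft.fft2(y)
        return Y * np.conj(X) / (X * np.conj(X) + lam)  # filter in freq. domain

    def detect(H, z):
        response = np.real(np.fft.ifft2(H * np.fft.fft2(z)))
        return np.unravel_index(np.argmax(response), response.shape)  # peak = target
    ```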

  16. Exploiting Symmetry on Parallel Architectures.

    NASA Astrophysics Data System (ADS)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  17. Use of Picard and Newton iteration for solving nonlinear ground water flow equations

    USGS Publications Warehouse

    Mehl, S.

    2006-01-01

    This study examines the use of Picard and Newton iteration to solve the nonlinear, saturated ground water flow equation. Here, a simple three-node problem is used to demonstrate the convergence difficulties that can arise when solving the nonlinear, saturated ground water flow equation in both homogeneous and heterogeneous systems with and without nonlinear boundary conditions. For these cases, the characteristic types of convergence patterns are examined. Viewing these convergence patterns as orbits of an attractor in a dynamical system provides further insight. It is shown that the nonlinearity that arises from nonlinear head-dependent boundary conditions can cause more convergence difficulties than the nonlinearity that arises from flow in an unconfined aquifer. Furthermore, the effects of damping on both convergence and convergence rate are investigated. It is shown that no single strategy is effective for all problems, and that understanding the pitfalls and merits of several methods can be helpful in overcoming convergence difficulties. Results show that Picard iterations can be a simple and effective method for the solution of nonlinear, saturated ground water flow problems.
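
    A damped Picard iteration of the kind examined can be written generically for a head-dependent linear system A(h) h = b. This is a sketch, not the study's code; the damping factor and convergence test are assumptions.

    ```python
    # Damped Picard iteration for the nonlinear system A(h) h = b, e.g.
    # unconfined flow where the conductance matrix depends on the head h.
    import numpy as np

    def picard(A_of_h, b, h0, omega=0.5, tol=1e-8, max_iter=100):
        h = h0.copy()
        for it in range(max_iter):
            h_new = np.linalg.solve(A_of_h(h), b)  # linearize at current head
            step = h_new - h
            h = h + omega * step                   # damped update (omega <= 1)
            if np.linalg.norm(step) < tol * max(np.linalg.norm(h), 1.0):
                return h, it
        raise RuntimeError("Picard iteration failed to converge")
    ```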

  18. Institutional Resource Requirements, Management, and Accountability.

    ERIC Educational Resources Information Center

    Matlock, John; Humphries, Frederick S.

    A detailed resource management study was conducted at Tennessee State University, and resource management problems at other higher education institutions were identified through the exchange of data and studies. Resource requirements and management problems unique to black institutions were examined, as were the problems that arise from regional…

  19. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W. (Editor); Hardin, J. C. (Editor)

    1997-01-01

    The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.

  20. A New Approach to Isolating External Magnetic Field Components in Spacecraft Measurements of the Earth's Magnetic Field Using Global Positioning System observables

    NASA Technical Reports Server (NTRS)

    Raymond, C.; Hajj, G.

    1994-01-01

    We review the problem of separating components of the magnetic field arising from sources in the Earth's core and lithosphere, from those contributions arising external to the Earth, namely ionospheric and magnetospheric fields, in spacecraft measurements of the Earth's magnetic field.

  1. Exotic superconductivity with enhanced energy scales in materials with three band crossings

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Ping; Nandkishore, Rahul M.

    2018-04-01

    Three band crossings can arise in three-dimensional quantum materials with certain space group symmetries. The low energy Hamiltonian supports spin one fermions and a flat band. We study the pairing problem in this setting. We write down a minimal BCS Hamiltonian and decompose it into spin-orbit coupled irreducible pairing channels. We then solve the resulting gap equations in channels with zero total angular momentum. We find that in the s-wave spin singlet channel (and also in an unusual d-wave `spin quintet' channel), superconductivity is enormously enhanced, with a possibility for the critical temperature to be linear in interaction strength. Meanwhile, in the p-wave spin triplet channel, the superconductivity exhibits features of conventional BCS theory due to the absence of flat band pairing. Three band crossings thus represent an exciting new platform for realizing exotic superconducting states with enhanced energy scales. We also discuss the effects of doping, nonzero temperature, and of retaining additional terms in the k·p expansion of the Hamiltonian.

  2. Stochastic Spatial Models in Ecology: A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.

    2018-07-01

    Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.

  3. Electrical and structural investigations, and ferroelectric domains in nanoscale structures

    NASA Astrophysics Data System (ADS)

    Alexe, Marin

    2005-03-01

    Generally speaking, material properties are expected to change as the characteristic dimension of a system approaches the nanometer scale. In the case of ferroelectric materials, fundamental problems such as the super-paraelectric limit and the influence of the free surface and/or of the interface and bulk defects on ferroelectric switching arise when scaling the systems into the sub-100 nm range. In order to study these size effects, fabrication methods for high quality nanoscale ferroelectric crystals as well as AFM-based investigation methods have been developed in the last few years. The present talk will briefly review self-patterning and self-assembly fabrication methods, including chemical routes, morphological instability of ultrathin films, and self-assembly lift-off, employed to date to fabricate ferroelectric nanoscale structures with lateral sizes in the range of a few tens of nanometers. Moreover, in-depth structural and electrical investigations of interfaces, performed to differentiate between intrinsic and extrinsic size effects, will also be presented.

  4. Stochastic Spatial Models in Ecology: A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.

    2017-11-01

    Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.

  5. Solution algorithms for nonlinear transient heat conduction analysis employing element-by-element iterative strategies

    NASA Technical Reports Server (NTRS)

    Winget, J. M.; Hughes, T. J. R.

    1985-01-01

    The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of the two types of nonlinearities considered is related to material temperature dependence, which is frequently needed to accurately model behavior over the range of temperatures of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
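
    The radiation-boundary nonlinearity can be illustrated with a lumped toy model: implicit Euler in time with a Newton iteration at each step. All physical values below are made up for illustration.

    ```python
    # Implicit Euler + Newton for a lumped body cooling by radiation:
    #   C dT/dt = Q - eps_A * SIGMA * (T^4 - T_env^4)
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def step(T_n, dt, C=500.0, eps_A=0.8, T_env=300.0, Q=0.0, tol=1e-10):
        """Solve C*(T - T_n)/dt + eps_A*SIGMA*(T^4 - T_env^4) - Q = 0 for T."""
        T = T_n
        for _ in range(50):
            f = C * (T - T_n) / dt + eps_A * SIGMA * (T**4 - T_env**4) - Q
            df = C / dt + 4.0 * eps_A * SIGMA * T**3  # scalar Jacobian
            T_new = T - f / df                        # Newton update
            if abs(T_new - T) < tol * max(abs(T), 1.0):
                return T_new
            T = T_new
        raise RuntimeError("Newton failed to converge")
    ```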

  6. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminry Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new, complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award winner, identifies behavior equivalence classes in MPI jobs and highlights cases in which elements of a class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two years of the project, we have successfully extended STAT to determine the relative progress of different MPI processes. We have shown that STAT, which is now included in the debugging tools distributed by Cray with their large-scale systems, substantially reduces the scale at which traditional debugging techniques are applied. We have extended CBI to large-scale systems and developed new compiler-based analyses that reduce its instrumentation overhead. Our results demonstrate that CBI can identify the source of errors in large-scale applications. Finally, we have developed MPIecho, a new technique that will reduce the time required to perform key correctness analyses, such as the detection of writes to unallocated memory. Overall, our research results are the foundations for new debugging paradigms that will improve application scientist productivity by reducing the time to determine which package or module contains the root cause of a problem that arises at all scales of our high-end systems. While we have made substantial progress in the first two years of CoPS research, significant work remains. While STAT provides scalable debugging assistance for incorrect application runs, we could apply its techniques to assertions in order to observe deviations from expected behavior. Further, we must continue to refine STAT's techniques to represent behavioral equivalence classes efficiently as we expect systems with millions of threads in the next year. We are exploring new CBI techniques that can assess the likelihood that deviations from past execution behavior are the source of erroneous execution. Finally, we must develop usable correctness analyses that apply the MPIecho parallelization strategy in order to locate coding errors. We expect to make substantial progress on these directions in the next year but anticipate that significant work will remain to provide usable, scalable debugging paradigms.
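
    The core mechanism behind STAT-style behavior equivalence classes can be pictured compactly: stack traces sampled from all MPI ranks are merged so that each distinct call path forms one class, and small or singleton classes flag the ranks worth inspecting first. A minimal sketch of that grouping step in Python (the data layout and names are illustrative assumptions, not the STAT implementation):

        from collections import defaultdict

        def group_by_stack_trace(traces):
            """Group ranks into behavior equivalence classes by call path.

            traces: dict mapping MPI rank -> tuple of frame names, root first.
            Returns a dict mapping call path -> sorted ranks sharing that path.
            """
            classes = defaultdict(list)
            for rank, path in traces.items():
                classes[path].append(rank)
            return {path: sorted(ranks) for path, ranks in classes.items()}

        # Toy run: five ranks wait in a collective, one is stuck in a receive.
        traces = {
            0: ("main", "solve", "MPI_Allreduce"),
            1: ("main", "solve", "MPI_Allreduce"),
            2: ("main", "solve", "MPI_Recv"),      # the outlier class to inspect
            3: ("main", "solve", "MPI_Allreduce"),
            4: ("main", "solve", "MPI_Allreduce"),
            5: ("main", "solve", "MPI_Allreduce"),
        }
        for path, ranks in group_by_stack_trace(traces).items():
            print(" > ".join(path), "<-", ranks)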

  7. Engineering the earth system

    NASA Astrophysics Data System (ADS)

    Keith, D. W.

    2005-12-01

    The post-war growth of the earth sciences has been fueled, in part, by a drive to quantify environmental insults in order to support arguments for their reduction; paradoxically, the knowledge gained grants us ever greater capability to deliberately engineer environmental processes on a planetary scale. Increased capability can arise through seemingly unconnected scientific advances. Improvements in numerical weather prediction, such as the use of adjoint models in analysis/forecast systems, for example, mean that weather modification can be accomplished with smaller control inputs. Purely technological constraints on our ability to engineer earth systems arise from our limited ability to measure and predict system responses and from limits on our ability to manage large engineering projects. Trends in all three constraints suggest a rapid growth in our ability to engineer the planet. What are the implications of our growing ability to geoengineer? Will we see a reemergence of proposals to engineer our way out of the climate problem? How can we avoid the moral hazard posed by the knowledge that geoengineering might provide a backstop to climate damages? I will speculate about these issues, and suggest some institutional factors that may provide a stronger constraint on the use of geoengineering than is provided by any purely technological limit.

  8. Positive Discipline A to Z: 1001 Solutions to Everyday Parenting Problems.

    ERIC Educational Resources Information Center

    Nelsen, Jane; And Others

    This book is a parenting reference work that offers background on common disciplinary problems and parenting issues, advice on how to handle problems and issues as they arise, and insight into how to avoid disciplinary problems in the future. The book is divided into three sections: Basic Positive Discipline Parenting Tools, Positive Discipline…

  9. Cognitive Abilities, Adjustment and Parenting Practices in Preschoolers with Disruptive Conduct Problems

    ERIC Educational Resources Information Center

    Fernandez-Parra, A.; Lopez-Rubio, S.; Mata, S.; Calero, M. D.; Vives, M. C.; Carles, R.; Navarro, E.

    2013-01-01

    Introduction: Conduct problems arising in infancy are one of the main reasons for which parents seek psychological assistance. Although these problems usually begin when the child has started school, in recent years a group of children has been identified who begin to manifest such problems from their earliest infancy and whose prognosis seems to…

  10. New discretization and solution techniques for incompressible viscous flow problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.; Liu, C. H.

    1983-01-01

    This paper considers several topics arising in the finite element solution of the incompressible Navier-Stokes equations. Specifically, the question of choosing finite element velocity/pressure spaces is addressed, particularly from the viewpoint of achieving stable discretizations leading to convergent pressure approximations. Following this, the role of artificial viscosity in viscous flow calculations is studied, emphasizing recent work by several researchers for the anisotropic case. The last section treats the problem of solving the nonlinear systems of equations which arise from the discretization. Time marching methods and classical iterative techniques, as well as some recent modifications are mentioned.
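
    The stability requirement on the velocity/pressure pair alluded to above is usually formalized as the discrete inf-sup (Ladyzhenskaya-Babuska-Brezzi) condition; in standard notation (ours, not the paper's), with discrete velocity and pressure spaces V_h and Q_h:

        \inf_{0 \neq q_h \in Q_h} \; \sup_{0 \neq v_h \in V_h}
            \frac{ \int_\Omega q_h \, (\nabla \cdot v_h) \, dx }
                 { \lVert q_h \rVert_{L^2(\Omega)} \, \lVert v_h \rVert_{H^1(\Omega)} }
        \;\geq\; \beta \;>\; 0,

    with β independent of the mesh size h. Pairs violating this bound (equal-order interpolation, for instance) admit spurious pressure modes, which is precisely the failure mode motivating the choice of spaces discussed here.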

  11. Responding to Adolescent Suicide.

    ERIC Educational Resources Information Center

    Phi Delta Kappa Educational Foundation, Bloomington, IN.

    This publication is designed to help educators deal with the problems that arise after an adolescent's suicide. It recommends that teachers should be able to detect differences in students' responses to emotional problems. Following a preface and a brief review of the extent of the problem, the first chapter discusses which adolescents are…

  12. Introducing the Hero Complex and the Mythic Iconic Pathway of Problem Gambling

    ERIC Educational Resources Information Center

    Nixon, Gary; Solowoniuk, Jason

    2009-01-01

    Early research into the motivations behind problem gambling reflected separate paradigms of thought splitting our understanding of the gambler into divergent categories. However, over the past 25 years, problem gambling is now best understood to arise from biological, environmental, social, and psychological processes, and is now encapsulated…

  13. Esperanto and International Language Problems: A Research Bibliography.

    ERIC Educational Resources Information Center

    Tonkin, Humphrey R.

    This bibliography is intended both for the researcher and for the occasional student of international language problems, particularly as these relate to the international language Esperanto. The book is divided into two main sections: Part One deals with problems arising from communication across national boundaries and the search for a solution…

  14. Microstructural Characterization of Base Metal Alloys with Conductive Native Oxides for Electrical Contact Applications

    NASA Astrophysics Data System (ADS)

    Senturk, Bilge Seda

    Metallic contacts are a ubiquitous method of connecting electrical and electronic components/systems. These contacts are usually fabricated from base metals because they are inexpensive, have high bulk electrical conductivities and exhibit excellent formability. Unfortunately, such base metals oxidize in air under ambient conditions, and the characteristics of the native oxide scales lead to contact resistances orders of magnitude higher than those for mating bare metal surfaces. This is a critical technological issue since the development of unacceptably high contact resistances over time is now by far the most common cause of failure in electrical/electronic devices and systems. To overcome these problems, several distinct approaches have been developed for alloying base metals to promote the formation of self-healing, inherently conductive native oxide scales. The objective of this dissertation study is to demonstrate the viability of these approaches by analyzing data from the Cu-9La (at%) and Fe-V binary alloy systems. The Cu-9La alloy structure consists of eutectic colonies tens of microns in diameter wherein a rod-like Cu phase lies within a Cu6La matrix phase. The thin oxide scale formed on the Cu phase was found to be Cu2O as expected, while the thicker oxide scale formed on the Cu6La phase was found to be a polycrystalline La-rich Cu2O. The enhanced electrical conductivity in the native oxide scale of the Cu-9La alloy arises from heavy n-type doping of the Cu2O lattice by La3+. The Fe-V alloy structures consist of a mixture of large elongated and equiaxed grains. A thin polycrystalline Fe3O4 oxide scale formed on all of the Fe-V alloys. The electrical conductivities of the oxide scales formed on the Fe-V alloys are higher than that formed on pure Fe. It is inferred that this enhanced conductivity arises from doping of the magnetite with V4+, which promotes electron-polaron hopping. Thus, it has been demonstrated that even in simple binary alloy systems one can obtain a dramatic reduction in the contact resistances of oxidized alloy surfaces as compared with those of the pure base metals.

  15. Cutting planes for the multistage stochastic unit commitment problem

    DOE PAGES

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean-Paul

    2016-04-20

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.
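
    For readers unfamiliar with the turn on/off inequalities mentioned above, a common deterministic statement (notation ours) uses commitment variables x_t and startup/shutdown indicators y_s, z_s with minimum up and down times L and ℓ:

        \sum_{s=t-L+1}^{t} y_s \;\le\; x_t, \qquad
        \sum_{s=t-\ell+1}^{t} z_s \;\le\; 1 - x_t, \qquad \text{for all } t.

    These inequalities are known to describe the convex hull of the minimum-up/down time polytope in the deterministic case; the contribution here is verifying that the analogous property carries over to the scenario-tree (stochastic) setting.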

  16. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernán A.

    2015-08-01

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
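
    The energy minimization over the non-backtracking matrix is made scalable in this line of work through an adaptively recomputed greedy score (collective influence); a minimal sketch of the radius-1 variant using networkx (our simplification for illustration, not the authors' code):

        import networkx as nx

        def collective_influence(G, node):
            # CI_1(i) = (k_i - 1) * sum over neighbors j of (k_j - 1)
            ki = G.degree(node)
            return (ki - 1) * sum(G.degree(j) - 1 for j in G.neighbors(node))

        def top_influencers(G, n_remove):
            """Adaptively remove the highest-CI node, mimicking optimal percolation."""
            G = G.copy()
            removed = []
            for _ in range(n_remove):
                best = max(G.nodes, key=lambda v: collective_influence(G, v))
                removed.append(best)
                G.remove_node(best)
            return removed

        G = nx.erdos_renyi_graph(1000, 0.004, seed=1)
        print(top_influencers(G, 10))

    The weakly connected nodes highlighted in the abstract surface naturally in such a score: a node's influence depends on its neighbors' degrees, not merely its own.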

  17. Influence maximization in complex networks through optimal percolation.

    PubMed

    Morone, Flaviano; Makse, Hernán A

    2015-08-06

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.

  18. Cutting planes for the multistage stochastic unit commitment problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean-Paul

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

  19. Errors, Error, and Text in Multidialect Setting.

    ERIC Educational Resources Information Center

    Candler, W. J.

    1979-01-01

    This article discusses the various dialects of English spoken in Liberia and analyzes the problems of Liberian students in writing compositions in English. Errors arise mainly from differences in culture and cognition, not from superficial linguistic problems. (CFM)

  20. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
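
    The level-set iteration sketched above can be summarized compactly (notation ours): the facies is the region where a level-set function is positive, and the function evolves with a velocity built from the shape derivative of the misfit J:

        D(\tau) = \{\, x : \phi(x,\tau) > 0 \,\}, \qquad
        \frac{\partial \phi}{\partial \tau} + v \,\lvert \nabla \phi \rvert = 0,

    where v is chosen at each iteration (gradient-based or Levenberg-Marquardt, as in the two Burger variants compared here) so that the update decreases J.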

  1. A scenario analysis of the future residential requirements for people with mental health problems in Eindhoven

    PubMed Central

    2011-01-01

    Background: Despite large-scale investments in mental health care in the community since the 1990s, a trend towards reinstitutionalization has been visible since 2002. Since many mental health care providers regard this as an undesirable trend, the question arises: In the coming 5 years, what types of residence should be organized for people with mental health problems? The purpose of this article is to provide mental health care providers, public housing corporations, and local government with guidelines for planning organizational strategy concerning types of residence for people with mental health problems. Methods: A scenario analysis was performed in four steps: 1) an exploration of the external environment; 2) the identification of key uncertainties; 3) the development of scenarios; 4) the translation of scenarios into guidelines for planning organizational strategy. To explore the external environment a document study was performed, and 15 semi-structured interviews were conducted. During a workshop, a panel of experts identified two key uncertainties in the external environment, and formulated four scenarios. Results: The study resulted in four scenarios: 1) Integrated and independent living in the community with professional care; 2) Responsible healthcare supported by society; 3) Differentiated provision within the walls of the institution; 4) Residence in large-scale institutions but unmet need for care. From the range of aspects within the different scenarios, the panel was able to work out concrete guidelines for planning organizational strategy. Conclusions: In the context of residence for people with mental health problems, the focus should be on investment in community care and their re-integration into society. A joint effort is needed to achieve this goal. This study shows that scenario analysis leads to useful guidelines for planning organizational strategy in mental health care. PMID:21211015

  2. An adaptive large neighborhood search heuristic for Two-Echelon Vehicle Routing Problems arising in city logistics

    PubMed Central

    Hemmelmayr, Vera C.; Cordeau, Jean-François; Crainic, Teodor Gabriel

    2012-01-01

    In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP. PMID:23483764
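
    The adaptive layer of an ALNS heuristic is independent of the routing problem itself: destroy/repair operator pairs are drawn with probabilities proportional to weights that are updated according to observed success. A minimal, domain-agnostic sketch in Python (operator handling, scoring constants and the acceptance rule are illustrative assumptions, not the authors' implementation):

        import math, random

        def alns(initial, destroy_ops, repair_ops, cost, iters=10000,
                 reaction=0.1, temp=100.0, cooling=0.999):
            """Generic adaptive large neighborhood search skeleton."""
            best = current = initial
            w = {op: 1.0 for op in destroy_ops + repair_ops}  # adaptive weights
            for _ in range(iters):
                d = random.choices(destroy_ops, [w[o] for o in destroy_ops])[0]
                r = random.choices(repair_ops, [w[o] for o in repair_ops])[0]
                candidate = r(d(current))
                score = 0.0
                if cost(candidate) < cost(best):
                    best, current, score = candidate, candidate, 3.0   # new global best
                elif cost(candidate) < cost(current):
                    current, score = candidate, 2.0                    # improvement
                elif random.random() < math.exp((cost(current) - cost(candidate)) / temp):
                    current, score = candidate, 1.0                    # accepted worsening
                for op in (d, r):  # exponential smoothing of operator weights
                    w[op] = (1 - reaction) * w[op] + reaction * score
                temp *= cooling
            return best

    The problem-specific content of the paper lives entirely in the destroy/repair operators; the adaptive weighting and acceptance loop above is generic.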

  3. An adaptive large neighborhood search heuristic for Two-Echelon Vehicle Routing Problems arising in city logistics.

    PubMed

    Hemmelmayr, Vera C; Cordeau, Jean-François; Crainic, Teodor Gabriel

    2012-12-01

    In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP.

  4. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors that consequently leads to a “Big Data” problem. The main objective of this study is to explore efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery with potential to apply the platform for a larger scale (e.g. country level) and multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (~28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address efficiency of the GEE platform to effectively execute complex workflows of satellite data processing required with large scale applications such as crop mapping. The study discusses strengths and weaknesses of classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to the benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to the remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
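
    For orientation, the kind of classification workflow such a study executes in GEE takes only a few calls in the platform's Python API; the sketch below uses a random forest (one of the classifiers compared here), with a placeholder asset ID, date window and 'class' label property that are our assumptions, not the study's data:

        import ee
        ee.Initialize()  # assumes prior `earthengine authenticate`

        # Simple multi-temporal composite over the growing season
        composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
                     .filterDate('2013-04-01', '2013-10-31')
                     .median())

        # Sample the composite under labeled training polygons (30 m Landsat scale)
        training = composite.sampleRegions(
            collection=ee.FeatureCollection('users/example/crop_polygons'),
            properties=['class'],
            scale=30)

        # Train one of the compared classifiers and map the territory
        classifier = ee.Classifier.smileRandomForest(100).train(
            features=training,
            classProperty='class',
            inputProperties=composite.bandNames())
        crop_map = composite.classify(classifier)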

  5. Improving Teaching Quality and Problem Solving Ability through Contextual Teaching and Learning in Differential Equations: A Lesson Study Approach

    ERIC Educational Resources Information Center

    Khotimah, Rita Pramujiyanti; Masduki

    2016-01-01

    Differential equations is a branch of mathematics which is closely related to mathematical modeling that arises in real-world problems. Problem solving ability is an essential component to solve contextual problem of differential equations properly. The purposes of this study are to describe contextual teaching and learning (CTL) model in…

  6. Impacts of insect disturbance on the structure, composition, and functioning of oak-pine forests

    NASA Astrophysics Data System (ADS)

    Medvigy, D.; Schafer, K. V.; Clark, K. L.

    2011-12-01

    Episodic disturbance is an essential feature of terrestrial ecosystems, and strongly modulates their structure, composition, and functioning. However, dynamic global vegetation models that are commonly used to make ecosystem and terrestrial carbon budget predictions rarely have an explicit representation of disturbance. One reason why disturbance is seldom included is that disturbance tends to operate on spatial scales that are much smaller than typical model resolutions. In response to this problem, the Ecosystem Demography model 2 (ED2) was developed as a way of tracking the fine-scale heterogeneity arising from disturbances. In this study, we used ED2 to simulate an oak-pine forest that experiences episodic defoliation by gypsy moth (Lymantria dispar L.). The model was carefully calibrated against site-level data, and then used to simulate changes in ecosystem composition, structure, and functioning on century time scales. Compared to simulations that include gypsy moth defoliation, we show that simulations that ignore defoliation events lead to much larger ecosystem carbon stores and a larger fraction of deciduous trees relative to evergreen trees. Furthermore, we find that it is essential to preserve the fine-scale nature of the disturbance. Attempts to "smooth out" the defoliation event over an entire grid cell led to large biases in ecosystem structure and functioning.

  7. Fill-Tube-Induced Mass Perturbations on X-Ray-Driven, Ignition-Scale, Inertial-Confinement-Fusion Capsule Shells and the Implications for Ignition Experiments

    DOE PAGES

    Bennett, G. R.; Herrmann, M. C.; Edwards, M. J.; ...

    2007-11-13

    On the first inertial-confinement-fusion ignition facility, the target capsule will be DT-filled through a long, narrow tube inserted into the shell. μg-scale shell perturbations Δm' arising from multiple, 10–50 μm-diameter, hollow SiO2 tubes on x-ray-driven, ignition-scale, 1-mg capsules have been measured on a subignition device. Simulations compare well with observation, whence it is corroborated that Δm' arises from early x-ray shadowing by the tube rather than tube mass coupling to the shell, and inferred that 10–20 μm tubes will negligibly affect fusion yield on a full-ignition facility.

  8. Neutrino masses, scale-dependent growth, and redshift-space distortions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernández, Oscar F., E-mail: oscarh@physics.mcgill.ca

    2017-06-01

    Massive neutrinos leave a unique signature in the large scale clustering of matter. We investigate the wavenumber dependence of the growth factor arising from neutrino masses and use a Fisher analysis to determine the aspects of a galaxy survey needed to measure this scale dependence.

  9. Chemistry and the Internal Combustion Engine II: Pollution Problems.

    ERIC Educational Resources Information Center

    Hunt, C. B.

    1979-01-01

    Discusses pollution problems which arise from the use of internal combustion (IC) engines in the United Kingdom (UK). The IC engine exhaust emissions, controlling IC engine pollution in the UK, and some future developments are also included. (HM)

  10. Neutrino Masses in the Landscape and Global-Local Dualities in Eternal Inflation

    NASA Astrophysics Data System (ADS)

    Mainemer Katz, Dan

    In this dissertation we study two topics in Theoretical Cosmology: one more formal, the other more phenomenological. We work in the context of eternally inflating cosmologies. These arise in any fundamental theory that contains at least one stable or metastable de Sitter vacuum. Each topic is presented in a different chapter: Chapter 1 deals with the measure problem in eternal inflation. Global-local duality is the equivalence of seemingly different regulators in eternal inflation. For example, the light-cone time cutoff (a global measure, which regulates time) makes the same predictions as the causal patch (a local measure that cuts off space). We show that global-local duality is far more general. It rests on a redundancy inherent in any global cutoff: at late times, an attractor regime is reached, characterized by the unlimited exponential self-reproduction of a certain fundamental region of spacetime. An equivalent local cutoff can be obtained by restricting to this fundamental region. We derive local duals to several global cutoffs of interest. The New Scale Factor Cutoff is dual to the Short Fat Geodesic, a geodesic of fixed infinitesimal proper width. Vilenkin's CAH Cutoff is equivalent to the Hubbletube, whose width is proportional to the local Hubble volume. The famous youngness problem of the Proper Time Cutoff can be readily understood by considering its local dual, the Incredible Shrinking Geodesic. The chapter closely follows our paper. Chapter 2 deals with the question of whether neutrino masses could be anthropically explained. The sum of active neutrino masses is well constrained, 58 meV ≤ mν ≲ 0.23 eV, but the origin of this scale is not well understood. Here we investigate the possibility that it arises by environmental selection in a large landscape of vacua. Earlier work had noted the detrimental effects of neutrinos on large scale structure. However, using Boltzmann codes to compute the smoothed density contrast on Mpc scales, we find that dark matter halos form abundantly for mν ≳ 10 eV. This finding rules out an anthropic origin of mν, unless a different catastrophic boundary can be identified. Here we argue that galaxy formation becomes inefficient for mν ≳ 10 eV. We show that in this regime, structure forms late and is dominated by cluster scales, as in a top-down scenario. This is catastrophic: baryonic gas will cool too slowly to form stars in an abundance comparable to our universe. With this novel cooling boundary, we find that the anthropic prediction for mν agrees at better than 2σ with current observational bounds. A degenerate hierarchy is mildly preferred. The chapter closely follows our paper.

  11. "Que vienen los lobos!" (Breve nota sobre el plural de los apellidos) ("May The Wolves Come:" [A Brief Note on the Plural of Surnames])

    ERIC Educational Resources Information Center

    Vivaldi, Gonzalo Martin

    1975-01-01

    This article discusses the problems that arise with the formation of plural forms of surnames in Spanish, problems both with morphology and with ambiguity. Suggestions as to how to lessen problems are made. (Text is in Spanish.) (CLK)

  12. Multiplex congruence network of natural numbers.

    PubMed

    Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua

    2016-03-31

    Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention was devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has an extremely strong controllability in spite of its scale-free structure that is usually difficult to control. Another amazing feature is that the controllability is robust against targeted attacks to critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implications for cryptography based on simultaneous congruences. Our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations.
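
    The simultaneous congruences problem referred to here is the classical Chinese remainder setting; for concreteness, a standard constructive solution in Python (our illustration, unrelated to the network construction itself):

        from math import gcd

        def crt(residues, moduli):
            """Solve x = r_i (mod m_i) for pairwise coprime moduli m_i."""
            x, M = 0, 1
            for r, m in zip(residues, moduli):
                assert gcd(M, m) == 1, "moduli must be pairwise coprime"
                # Find t with x + M*t = r (mod m), i.e. t = (r - x) * M^{-1} (mod m)
                t = ((r - x) * pow(M, -1, m)) % m
                x += M * t
                M *= m
            return x % M

        print(crt([2, 3, 2], [3, 5, 7]))  # -> 23, Sun Tzu's classic example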

  13. Multiplex congruence network of natural numbers

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua

    2016-03-01

    Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention was devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has an extremely strong controllability in spite of its scale-free structure that is usually difficult to control. Another amazing feature is that the controllability is robust against targeted attacks to critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implications for cryptography based on simultaneous congruences. Our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations.

  14. Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen

    2016-03-31

    In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is with ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
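
    The moving (receding) horizon idea at the core of the proposed strategy is easy to state in miniature: at each step, optimize over a short look-ahead window, commit only the first decision, then roll the window forward. A toy generator-plus-storage sketch using scipy (the model, parameters and variable layout are our illustrative assumptions, far simpler than the paper's microgrid):

        import numpy as np
        from scipy.optimize import linprog

        def receding_horizon_dispatch(net_load, horizon=6, p_max=5.0, e_max=10.0):
            """Moving-horizon dispatch of generator g_t plus storage discharge s_t."""
            e = e_max / 2.0                       # storage state of charge
            schedule = []
            T = len(net_load)
            for t in range(T):
                H = min(horizon, T - t)
                c = np.concatenate([np.ones(H), np.zeros(H)])  # fuel cost on g only
                A_eq = np.hstack([np.eye(H), np.eye(H)])       # g_k + s_k = load_k
                b_eq = net_load[t:t + H]
                # cumulative discharge over the window cannot exceed stored energy
                A_ub = np.hstack([np.zeros((H, H)), np.tril(np.ones((H, H)))])
                b_ub = e * np.ones(H)
                bounds = [(0.0, p_max)] * H + [(None, None)] * H  # s_k < 0 = charging
                res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                              bounds=bounds)
                g0, s0 = res.x[0], res.x[H]       # commit only the first period
                e = min(e_max, e - s0)            # update state of charge
                schedule.append((round(g0, 3), round(s0, 3)))
            return schedule

        print(receding_horizon_dispatch(np.array([3.0, 4.0, 6.0, 5.0, 2.0, 1.0])))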

  15. Common origin of 3.55 keV x-ray line and gauge coupling unification with left-right dark matter

    NASA Astrophysics Data System (ADS)

    Borah, Debasish; Dasgupta, Arnab; Patra, Sudhanwa

    2017-12-01

    We present a minimal left-right dark matter framework that can simultaneously explain the recently observed 3.55 keV x-ray line from several galaxy clusters and gauge coupling unification at a high energy scale. Adopting a minimal dark matter strategy, we consider both left and right handed triplet fermionic dark matter candidates which are stable by virtue of a remnant Z2 ≃ (−1)^(B−L) symmetry arising after the spontaneous symmetry breaking of left-right gauge symmetry to that of the standard model. A scalar bitriplet field is incorporated whose first role is to allow radiative decay of the right handed triplet dark matter into the left handed one and a photon with energy 3.55 keV. The other role this bitriplet field at the TeV scale plays is to assist in achieving gauge coupling unification at a high energy scale within a nonsupersymmetric SO(10) model while keeping the scale of left-right gauge symmetry around the TeV corner. Apart from solving the neutrino mass problem and giving verifiable new contributions to neutrinoless double beta decay and charged lepton flavor violation, the model with TeV scale gauge bosons can also give rise to interesting collider signatures like the diboson excess and the dilepton plus two jets excess reported recently in Large Hadron Collider data.

  16. Kinetic feature of dipolarization fronts produced by interchange instability in the magnetotail

    NASA Astrophysics Data System (ADS)

    Lyu, Haoyu

    2017-04-01

    A two-dimensional extended MHD simulation is performed to study the kinetic features of dipolarization fronts (DFs) on the scale of the ion inertial length / ion Larmor radius. The interchange instability, arising from the force imbalance between the tailward gradient of thermal pressure and the earthward magnetic curvature force, self-consistently produces the DF in the near-Earth region. Numerical investigations indicate that the DF is a tangential discontinuity, which means that the normal plasma velocity across the DF should be zero in the reference frame moving with the DF structure. The electric system, including the electric field and current, is determined by the Hall effect arising on the scale of the ion inertial length. The Hall effect not only provides the main contribution to the electric field normal to the tangent plane of the DF and increases the current along that plane, but also makes the DF structure asymmetric. The drifting motion of the large-scale DF structure is determined by the finite-Larmor-radius (FLR) effect arising on the scale of the ion Larmor radius. The ion magnetization velocity induced by the FLR effect is duskward at the subsolar point of the DF, but the y component of the velocity in the region behind the DF dominates, driving the whole mushroom structure towards the dawn.

  17. A review of arsenic and its impacts in groundwater of the Ganges-Brahmaputra-Meghna delta, Bangladesh.

    PubMed

    Edmunds, W M; Ahmed, K M; Whitehead, P G

    2015-06-01

    Arsenic in drinking water is the single most important environmental issue facing Bangladesh; between 35 and 77 million of its 156 million inhabitants are considered to be at risk from drinking As-contaminated water. This dominates the list of stress factors affecting health, livelihoods and the ecosystem of the delta region. There is a vast literature on the subject, so this review provides a filter of the more important information available on the topic. The arsenic problem arises from the move in the 1980s and 1990s by international agencies to construct tube wells as a source of water free of pathogens, groundwater usually being considered a safe source. Since arsenic was not measured during routine chemical analysis and is also difficult to measure at low concentrations, it was not until the late 1990s that the widespread natural anomaly of high arsenic was discovered and confirmed. The problem was exacerbated by the fact that the medical evidence of arsenicosis only appears slowly. The problem arises in delta regions because of the young age of the sediments deposited by the GBM river system. The sediments contain minerals such as biotite which undergo slow "diagenetic" reactions as the sediments become compacted and which, under the reducing conditions of the groundwater, release arsenic in the form of toxic As(3+). The problem is restricted to sediments of Holocene age and groundwater of a certain depth (mainly 30-150 m), coinciding with the optimum well depth. The problem is most serious in a belt across southern Bangladesh, but within 50 km of the coast the problem is only minor because of the use of deep groundwater; salinity in shallow groundwater here is the main issue for drinking water. The Government of Bangladesh adopted a National Arsenic Policy and Mitigation Action Plan in 2004 for providing arsenic-safe water to the exposed population and medical care for those with visible symptoms of arsenicosis. There is as yet no national monitoring program in place. Various mitigation strategies have been tested, but generally the numerous small-scale technological remedies have proved unworkable at village level. Current statistics show that the use of deep groundwater (below 150 m) is the main means of arsenic mitigation over most of the affected areas, along with rainwater harvesting in certain locations.

  18. Dynamics of the middle atmosphere as observed by the ARISE project

    NASA Astrophysics Data System (ADS)

    Blanc, E.

    2015-12-01

    It has been strongly demonstrated that variations in the circulation of the middle atmosphere influence weather and climate all the way to the Earth's surface. A key part of this coupling occurs through the propagation and breaking of planetary and gravity waves. However, limited observations prevent numerical weather prediction and climate models from faithfully reproducing the dynamics of the middle atmosphere. The main challenge of the ARISE (Atmospheric dynamics InfraStructure in Europe) project is to combine existing national and international observation networks, including: the international infrasound monitoring system developed for CTBT (Comprehensive nuclear-Test-Ban Treaty) verification, the NDACC (Network for the Detection of Atmospheric Composition Changes) lidar network, European observation infrastructures at mid latitudes (OHP observatory), tropics (Maïdo observatory) and high latitudes (ALOMAR and EISCAT), infrasound stations which form a dense European network, and satellites. The ARISE network is unique in its coverage (polar to equatorial regions in the European longitude sector), its altitude range (from the troposphere to the mesosphere and ionosphere) and the scales involved, both in time (from seconds to tens of years) and space (from tens of meters to thousands of kilometers). Advanced data products are produced with the aim of assimilating the data into weather prediction models to improve forecasts over weekly and seasonal time scales. ARISE observations are especially relevant for the monitoring of extreme events such as thunderstorms, volcanoes and meteors and, at larger scales, deep convection and stratospheric warming events, both for the description of physical processes and for the study of long-term evolution with climate change. Among the applications, ARISE fosters the integration of innovative methods for the remote detection of non-instrumented volcanoes, including distant eruption characterization, to provide notifications with reliable confidence indices to civil aviation.

  19. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
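
    The algorithmic skeleton described here is short enough to state in full: split the data into strings, run an incremental subgradient pass independently along each string from the common iterate, then average the endpoints. A minimal sketch in Python (our illustration, with a diminishing step size of the kind required by convergence analyses for such methods):

        import numpy as np

        def string_averaging_subgradient(x0, strings, steps=200, a0=1.0):
            """String-averaging incremental subgradient sketch.

            strings: list of lists of per-datum subgradient callables g_i(x);
            each inner list is processed sequentially (one 'string'), the
            strings independently (possibly in parallel), then averaged.
            """
            x = np.asarray(x0, dtype=float)
            for k in range(steps):
                alpha = a0 / (k + 1)              # diminishing step size
                endpoints = []
                for string in strings:            # each string starts from x
                    y = x.copy()
                    for g in string:              # incremental pass along the string
                        y -= alpha * g(y)
                    endpoints.append(y)
                x = np.mean(endpoints, axis=0)    # string averaging
            return x

        # Toy problem: minimize sum_i |x - c_i| with subgradients sign(x - c_i)
        cs = [1.0, 2.0, 6.0, 7.0]
        grads = [lambda x, c=c: np.sign(x - c) for c in cs]
        print(string_averaging_subgradient(np.array([0.0]), [grads[:2], grads[2:]]))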

  20. Magnetic drops in a soft-magnetic cylinder

    NASA Astrophysics Data System (ADS)

    Hertel, Riccardo; Kirschner, Jürgen

    2004-07-01

    Magnetization reversal in a cylindrical ferromagnetic particle seems to be a simple textbook problem in magnetism. But on closer look, the magnetization reversal dynamics in a cylinder is far from trivial. The difficulty arises from the central axis, where the magnetization switches in a discontinuous fashion. Micromagnetic computer simulations allow for a detailed description of the evolution of the magnetic structure on the sub-nanosecond time scale. The switching process involves the injection of a magnetic point singularity (Bloch point) into the cylinder. Further point singularities may be generated and annihilated periodically during the reversal process. This results in the temporary formation of micromagnetic drops, i.e., isolated, non-reversed regions. This surprising feature of dynamic micromagnetism is due to the different mobilities of the domain wall and the Bloch point.

  1. Potential Use of Antiviral Agents in Polio Eradication

    PubMed Central

    De Palma, Armando M.; Pürstinger, Gerhard; Wimmer, Eva; Patick, Amy K.; Andries, Koen; Rombaut, Bart; De Clercq, Erik

    2008-01-01

    In 1988, the World Health Assembly launched the Global Polio Eradication Initiative, which aimed to use large-scale vaccination with the oral vaccine to eradicate polio worldwide by the year 2000. Although important progress has been made, polio remains endemic in several countries. Also, the current control measures will likely be inadequate to deal with problems that may arise in the postpolio era. A panel convoked by the National Research Council concluded that the use of antiviral drugs may be essential in the polio eradication strategy. We here report on a comparative study of the antipoliovirus activity of a selection of molecules that have previously been reported to be inhibitors of picornavirus replication and discuss their potential use, alone or in combination, for the treatment or prophylaxis of poliovirus infection. PMID:18394270

  2. Synovial sarcoma of the neck associated with previous head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mischler, N.E.; Chuprevich, T.; Tormey, D.C.

    1978-08-01

    Synovial sarcoma is a rare neoplasm that uncommonly arises in the neck. Fourteen years after facial and neck radiation therapy for acne, synovial sarcoma of the neck developed in a young man. Possible radiation-induced benign and malignant neoplasms that arise in the head and neck region, either of thyroid or extrathyroid origin, remain a continuing medical problem.

  3. Obesity: a problem of darwinian proportions?

    PubMed

    Watnick, Suzanne

    2006-10-01

    Obesity has been described as an abnormality arising from the evolution of man, who becomes fat during times of perpetual plenty. From the perspective of "Darwinian Medicine," if famine is avoided, obesity will prevail. Problems regarding obesity arise within many disciplines, including socioeconomic environments, the educational system, science, law, and government. This article discusses various ethical aspects of these disciplines regarding obesity, with a focus on scientific inquiry, within three categories: (1) chronic kidney disease predialysis, (2) dialysis, and (3) renal transplantation. This article aims to help nephrologists and their patients navigate through the ethical aspects of obesity and chronic kidney disease.

  4. Development and validation of the Alcohol Myopia Scale.

    PubMed

    Lac, Andrew; Berger, Dale E

    2013-09-01

    Alcohol myopia theory conceptualizes the ability of alcohol to narrow attention and how this demand on mental resources produces the impairments of self-inflation, relief, and excess. The current research was designed to develop and validate a scale based on this framework. People who were alcohol users rated items representing myopic experiences arising from drinking episodes in the past month. In Study 1 (N = 260), the preliminary 3-factor structure was supported by exploratory factor analysis. In Study 2 (N = 289), the 3-factor structure was substantiated with confirmatory factor analysis, and it was superior in fit to an empirically indefensible 1-factor structure. The final 14-item scale was evaluated with internal consistency reliability, discriminant validity, convergent validity, criterion validity, and incremental validity. The alcohol myopia scale (AMS) illuminates conceptual underpinnings of this theory and yields insights for understanding the tunnel vision that arises from intoxication.

  5. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with a first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of a moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  6. Visual integration dysfunction in schizophrenia arises by the first psychotic episode and worsens with illness duration.

    PubMed

    Keane, Brian P; Paterno, Danielle; Kastner, Sabine; Silverstein, Steven M

    2016-05-01

    Visual integration dysfunction characterizes schizophrenia, but prior studies have not yet established whether the problem arises by the first psychotic episode or worsens with illness duration. To investigate the issue, we compared chronic schizophrenia patients (SZs), first episode psychosis patients (FEs), and well-matched healthy controls on a brief but sensitive psychophysical task in which subjects attempted to locate an integrated shape embedded in noise. Task difficulty depended on the number of noise elements co-presented with the shape. For half of the experiment, the entire display was scaled down in size to produce a high spatial frequency (HSF) condition, which has been shown to worsen patient integration deficits. Catch trials, in which the circular target appeared without noise, were also added so as to confirm that subjects were paying adequate attention. We found that controls integrated contours under noisier conditions than FEs, who, in turn, integrated better than SZs. These differences, which were at times large in magnitude (d = 1.7), clearly emerged only for HSF displays. Catch trial accuracy was above 95% for each group and could not explain the foregoing differences. Prolonged illness duration predicted poorer HSF integration across patients, but age had little effect on controls, indicating that the former factor was driving the effect in patients. Taken together, a brief psychophysical task efficiently demonstrates large visual integration impairments in schizophrenia. The deficit arises by the first psychotic episode, worsens with illness duration, and may serve as a biomarker of illness progression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Closed solutions to a differential-difference equation and an associated plate solidification problem.

    PubMed

    Layeni, Olawanle P; Akinola, Adegbola P; Johnson, Jesse V

    2016-01-01

    Two distinct and novel formalisms for deriving exact closed solutions of a class of variable-coefficient differential-difference equations arising from a plate solidification problem are introduced. Thereupon, exact closed traveling wave and similarity solutions to the plate solidification problem are obtained for some special cases of time-varying plate surface temperature.

  8. POEMS in Newton's Aerodynamic Frustum

    ERIC Educational Resources Information Center

    Sampedro, Jaime Cruz; Tetlalmatzi-Montiel, Margarita

    2010-01-01

    The golden mean is often naively seen as a sign of optimal beauty but rarely does it arise as the solution of a true optimization problem. In this article we present such a problem, demonstrating a close relationship between the golden mean and a special case of Newton's aerodynamical problem for the frustum of a cone. Then, we exhibit a parallel…

  9. How Do They Solve It? An Insight into the Learner's Approach to the Mechanism of Physics Problem Solving

    ERIC Educational Resources Information Center

    Hegde, Balasubrahmanya; Meera, B. N.

    2012-01-01

    A perceived difficulty is associated with physics problem solving from a learner's viewpoint, arising out of a multitude of reasons. In this paper, we have examined the microstructure of students' thought processes during physics problem solving by combining the analysis of responses to multiple-choice questions and semistructured student…

  10. Corrosion control and disinfection studies in spacecraft water systems. [considering Saturn 5 orbital workshop

    NASA Technical Reports Server (NTRS)

    Shea, T. G.

    1974-01-01

    Disinfection and corrosion control in the water systems of the Saturn 5 Orbital Workshop Program are considered. Within this framework, the problem areas of concern are classified into four general areas: disinfection; corrosion; membrane-associated problems of disinfectant uptake and diffusion; and taste and odor problems arising from membrane-disinfectant interaction.

  11. Regarding tracer transport in Mars' winter atmosphere in the presence of nearly stationary, forced planetary waves

    NASA Technical Reports Server (NTRS)

    Hollingsworth, Jeffrey L.; Haberle, R. M.; Houben, Howard C.

    1993-01-01

    Large-scale transport of volatiles and condensates on Mars, as well as atmospheric dust, is ultimately driven by the planet's global-scale atmospheric circulation. This circulation arises in part from the so-called mean meridional (Hadley) circulation that is associated with rising/poleward motion in low latitudes and sinking/equatorward motion in middle and high latitudes. Intimately connected to the mean circulation is an eddy-driven component due to large-scale wave activity in the planet's atmosphere. During winter this wave activity arises both from traveling weather systems (i.e., barotropic and baroclinic disturbances) and from 'forced' disturbances (e.g., the thermal tides and surface-forced planetary waves). Possible contributions to the effective (net) transport circulation from forced planetary waves are investigated.

  12. Psychotherapy with Older Dying Persons.

    ERIC Educational Resources Information Center

    Dye, Carol J.

    Psychotherapy with older dying patients can lead to problems of countertransference for the clinician. Working with dying patients requires flexibility to adapt basic therapeutics to the institutional setting. Goals of psychotherapy must be reconceptualized for dying clients. The problems of countertransference arise because clinicians themselves…

  13. Spelling: A Visual Skill.

    ERIC Educational Resources Information Center

    Hendrickson, Homer

    1988-01-01

    Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…

  14. Inequalities, assessment and computer algebra

    NASA Astrophysics Data System (ADS)

    Sangwin, Christopher J.

    2015-01-01

    The goal of this paper is to examine single variable real inequalities that arise as tutorial problems and to examine the extent to which current computer algebra systems (CAS) can (1) automatically solve such problems and (2) determine whether students' own answers to such problems are correct. We review how inequalities arise in contemporary curricula. We consider the formal mathematical processes by which such inequalities are solved, and we consider the notation and syntax through which solutions are expressed. We review the extent to which current CAS can accurately solve these inequalities, and the form given to the solutions by the designers of this software. Finally, we discuss the functionality needed to deal with students' answers, i.e. to establish equivalence (or otherwise) of expressions representing unions of intervals. We find that while contemporary CAS accurately solve inequalities there is a wide variety of notation used.
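
    The two CAS capabilities examined, solving an inequality and deciding whether a student's union-of-intervals answer is equivalent to the reference solution, can be illustrated with sympy (our example; the paper surveys several CAS rather than prescribing this one):

        from sympy import S, symbols, solveset, Interval, Union, oo

        x = symbols('x', real=True)

        # CAS solution of a typical tutorial inequality
        sol = solveset((x - 1) * (x - 3) > 0, x, domain=S.Reals)
        print(sol)  # Union(Interval.open(-oo, 1), Interval.open(3, oo))

        # Checking a student's answer reduces to set equality of interval unions
        student = Union(Interval.open(-oo, 1), Interval.open(3, oo))
        print(sol == student)  # True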

  15. An iterative Riemann solver for systems of hyperbolic conservation laws, with application to hyperelastic solid mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Gregory H.

    2003-08-06

    In this paper we present a general iterative method for the solution of the Riemann problem for hyperbolic systems of PDEs. The method is based on the multiple shooting method for free boundary value problems. We demonstrate the method by solving one-dimensional Riemann problems for hyperelastic solid mechanics. Even for conditions representative of routine laboratory conditions and military ballistics, dramatic differences are seen between the exact and approximate Riemann solution. The greatest discrepancy arises from misallocation of energy between compressional and thermal modes by the approximate solver, resulting in nonphysical entropy and temperature estimates. Several pathological conditions arise in common practice, and modifications to the method to handle these are discussed. These include points where genuine nonlinearity is lost, degeneracies, and eigenvector deficiencies that occur upon melting.

  16. Complexity seems to open a way towards a new Aristotelian-Thomistic ontology.

    PubMed

    Strumia, Alberto

    2007-01-01

    Today's sciences all seem to converge towards very similar foundational questions. Such claims, of both epistemological and ontological nature, seem to rediscover, in a new fashion, some of the most relevant topics of ancient Greek and mediaeval philosophy of nature, logic and metaphysics, such as the problem of the relationship between the whole and its parts (non-reductionism), the paradoxes arising from the attempt to conceive being as a univocal concept (analogy and analogia entis), the problem of the mind-body relationship and that of an adequate cognitive theory (abstraction and the immaterial nature of the mind), the complexity of some physical, chemical and biological systems and the global properties arising from information (matter-form theory), etc. Medicine too is involved in some of these relevant questions and cannot avoid taking them into special account.

  17. Losers in the 'Rock-Paper-Scissors' game: The role of non-hierarchical competition and chaos as biodiversity sustaining agents in aquatic systems

    EPA Science Inventory

    Processes occurring within small areas (patch-scale) that influence species richness and spatial heterogeneity of larger areas (landscape-scale) have long been an interest of ecologists. This research focused on the role of patch-scale deterministic chaos arising in phytoplankton...

  18. Wave induced density modification in RF sheaths and close to wave launchers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Eester, D., E-mail: d.van.eester@fz-juelich.de; Crombé, K.; Department of Applied Physics, Ghent University, Ghent

    2015-12-10

    With the return to full metal walls - a necessary step towards viable fusion machines - and due to the high power densities of current-day ICRH (Ion Cyclotron Resonance Heating) or RF (radio frequency) antennas, there is ample renewed interest in exploring the reasons for wave-induced sputtering and formation of hot spots. Moreover, there is experimental evidence on various machines that RF waves influence the density profile close to the wave launchers, so that waves indirectly influence their own coupling efficiency. The present study presents a return to first principles and describes the wave-particle interaction using a 2-time-scale model involving the equation of motion, the continuity equation and the wave equation on each of the time scales. Through the changing density pattern, the fast time scale dynamics is affected by the slow time scale events. In turn, the slow time scale density and flows are modified by the presence of the RF waves through quasilinear terms. Although finite zero order flows are identified, the usual cold plasma dielectric tensor - ignoring such flows - is adopted as a first approximation to describe the wave response to the RF driver. The resulting set of equations is composed of linear and nonlinear equations and is tackled in 1D in the present paper. Whereas the former can be solved using standard numerical techniques, the latter require special handling. At the price of multiple iterations, a simple 'derivative switch-on' procedure allows one to reformulate the nonlinear problem as a sequence of linear problems. Analytical expressions allow a first crude assessment - revealing that the ponderomotive potential plays a role similar to that of the electrostatic potential arising from charge separation - but numerical implementation is required to get a feeling for the full dynamics. A few tentative examples are provided to illustrate the phenomena involved.

  19. Assessing the Effect of an Old and New Methodology for Scale Conversion on Examinee Scores

    ERIC Educational Resources Information Center

    Rizavi, Saba; Smith, Robert; Carey, Jill

    2002-01-01

    Research has been done to look at the benefits of BILOG over LOGIST as well as the potential issues that can arise if transition from LOGIST to BILOG is desired. A serious concern arises when comparability is required between previously calibrated LOGIST parameter estimates and currently calibrated BILOG estimates. It is imperative to obtain an…

  20. Identifying Unique Ethical Challenges of Indigenous Field-Workers: A Commentary on Alexander and Richman's "Ethical Dilemmas in Evaluations Using Indigenous Research Workers"

    ERIC Educational Resources Information Center

    Smith, Nick L.

    2008-01-01

    In contrast with nonindigenous workers, to what extent do unique ethical problems arise when indigenous field-workers participate in field studies? Three aspects of study design and operation are considered: data integrity issues, risk issues, and protection issues. Although many of the data quality issues that arise with the use of indigenous…

  1. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…

  2. NASA Aviation Safety Reporting System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Problems in the briefing of relief by air traffic controllers are discussed, including problems that arise when controllers change duty positions. Altimeter reading and setting errors as factors in aviation safety are discussed, including problems associated with altitude-indicating instruments. A sample of reports from pilots and controllers is included, covering the topics of ATIS broadcasts and clearance readback problems. A selection of Alert Bulletins, with their responses, is included.

  3. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
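
    A minimal sketch of the database-plus-regression idea, under illustrative assumptions (a Laplace-type kernel, random Gaussian-mixture inputs, ridge regression, and simple nonnegativity/normalization constraints; none of these specifics are taken from the paper):

```python
# Sketch: build a database from the stable forward problem, fit a regularized
# regression from outputs back to inputs, then project onto constraints.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 50, 40, 5000          # input grid, output grid, database size
x = np.linspace(0.01, 5.0, n)   # grid for the unknown f(x)
t = np.linspace(0.01, 5.0, m)   # grid for the data g(t)
K = np.exp(-np.outer(t, x)) * (x[1] - x[0])   # Laplace-type kernel (ill-conditioned)

def random_input():
    """Physically plausible input: a random mixture of Gaussians, normalized."""
    f = np.zeros(n)
    for _ in range(rng.integers(1, 4)):
        mu, sig, w = rng.uniform(0.5, 4.5), rng.uniform(0.1, 0.8), rng.uniform()
        f += w * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return f / (f.sum() * (x[1] - x[0]))

F = np.array([random_input() for _ in range(N)])     # database inputs  (N, n)
G = F @ K.T + 0.001 * rng.standard_normal((N, m))    # noisy outputs    (N, m)

# Ridge regression g -> f fitted on the database (regularized normal equations).
lam = 1e-3
W = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T @ F)   # (m, n)

# Approximate inversion of a previously unseen g, then project onto constraints.
f_true = random_input()
g_new = K @ f_true + 0.001 * rng.standard_normal(m)
f_hat = np.clip(g_new @ W, 0.0, None)                # nonnegativity
f_hat /= f_hat.sum() * (x[1] - x[0])                 # normalization
print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```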

  4. How Seductive Are Decorative Elements in Learning Materials?

    ERIC Educational Resources Information Center

    Rey, Gunter Daniel

    2012-01-01

    The seductive detail effect arises when people learn more deeply from a multimedia presentation when interesting but irrelevant adjuncts are excluded. However, previous studies of this effect are rather inconclusive and contained various methodological problems. The present experiment attempted to overcome these methodological problems. Undergraduate…

  5. Phantom Effects in Multilevel Compositional Analysis: Problems and Solutions

    ERIC Educational Resources Information Center

    Pokropek, Artur

    2015-01-01

    This article combines statistical and applied research perspective showing problems that might arise when measurement error in multilevel compositional effects analysis is ignored. This article focuses on data where independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…

  6. Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Xiao-Chuan; Keyes, David; Yang, Chao

    2014-09-29

    The focus of the project is on the development and customization of some highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equation. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have many advantages, such as ease of implementation since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome the disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.

  7. Magnetogenesis in matter—Ekpyrotic bouncing cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koley, Ratna; Samtani, Sidhartha, E-mail: ratna.physics@presiuniv.ac.in, E-mail: samtanisidhartha@gmail.com

    In the recent past there have been many attempts to associate the generation of primordial magnetic seed fields with the inflationary era, but with limited success. We thus take a different approach by using a model for nonsingular bouncing cosmology. A coupling of the electromagnetic Lagrangian $F_{\mu\nu}F^{\mu\nu}$ with a non-background scalar field has been considered for the breaking of conformal invariance. We have shown that nonsingular bouncing cosmology supports magnetogenesis while evading the long-standing backreaction and strong-coupling problems which have plagued inflationary magnetogenesis. In this model, we have achieved a scale-invariant power spectrum for the parameter range compatible with the observed CMB anisotropies. The desired strength of the magnetic field has also been obtained, in accordance with present observations. It is also important to note that no BKL instability arises within this parameter range. The energy scales for the different stages of evolution of the bouncing model are chosen so that they also solve certain problems of standard Big Bang cosmology.

  8. Wavelets in electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Modisette, Jason Perry

    1997-09-01

    Ab initio calculations of the electronic structure of bulk materials and large clusters are not possible on today's computers using current techniques. The storage and diagonalization of the Hamiltonian matrix are the limiting factors in both memory and execution time. The scaling of both quantities with problem size can be reduced by using approximate diagonalization or direct minimization of the total energy with respect to the density matrix in conjunction with a localized basis. Wavelet basis members are much more localized than conventional bases such as Gaussians or numerical atomic orbitals. This localization leads to sparse matrices of the operators that arise in SCF multi-electron calculations. We have investigated the construction of the one-electron Hamiltonian, and also the effective one-electron Hamiltonians that appear in density-functional and Hartree-Fock theories. We develop efficient methods for the generation of the kinetic energy and potential matrices, the Hartree and exchange potentials, and the local exchange-correlation potential of the LDA. Test calculations are performed on one-electron problems with a variety of potentials in one and three dimensions.

  9. Explicit parametric solutions of lattice structures with proper generalized decomposition (PGD) - Applications to the design of 3D-printed architectured materials

    NASA Astrophysics Data System (ADS)

    Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro

    2018-01-01

    Architectured materials (or metamaterials) are constituted by a unit-cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit-cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit-cell are selected in order to produce the desired bulk characteristics. This is especially pertinent due to the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition (PGD) provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem becomes a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit-cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.

  10. An Ethical Issue Scale for Community Pharmacy Setting (EISP): Development and Validation.

    PubMed

    Crnjanski, Tatjana; Krajnovic, Dusanka; Tadic, Ivana; Stojkov, Svetlana; Savic, Mirko

    2016-04-01

    Many problems that arise when providing pharmacy services may contain some ethical components, and the aims of this study were to develop and validate a scale that could assess the difficulty of ethical issues, as well as the frequency of those occurrences, in the everyday practice of community pharmacists. Development and validation of the scale were conducted in three phases: (1) generating items for the initial survey instrument after qualitative analysis; (2) defining the design and format of the instrument; (3) validation of the instrument. The constructed Ethical Issue Scale for the community pharmacy setting has two parts containing the same 16 items for assessing difficulty and frequency. The results of the 171 completely filled out scales were analyzed (response rate 74.89%). The Cronbach's α value was 0.83 for the part of the instrument that examines the difficulty of ethical situations and 0.84 for the part that examines their frequency. Test-retest reliability for both parts of the instrument was satisfactory, with all intraclass correlation coefficient (ICC) values above 0.6 (for the part that examines difficulty, ICC = 0.809; for the part that examines frequency, ICC = 0.929). The 16-item scale, as a self-assessment tool, demonstrated a high degree of content, criterion, and construct validity and test-retest reliability. The results support its use as a research tool to assess the difficulty and frequency of ethical issues in the community pharmacy setting. The validated scale needs to be further employed on a larger sample of pharmacists.
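
    For readers unfamiliar with the reliability statistic reported here, the following sketch computes Cronbach's α for a 16-item scale from simulated response data (not the study's data):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance(total)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(171, 1))                # shared trait, 171 respondents
data = latent + 2.0 * rng.normal(size=(171, 16))  # 16 noisy, correlated items
print(round(cronbach_alpha(data), 2))             # around 0.8 for this simulation
```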

  11. Can power-law scaling and neuronal avalanches arise from stochastic dynamics?

    PubMed

    Touboul, Jonathan; Destexhe, Alain

    2010-02-11

    The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
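
    A minimal sketch of the cautionary point, with made-up parameters: event sizes extracted from a thresholded, smoothed noise process can look roughly linear on log-log axes, while a Kolmogorov-Smirnov test against the fitted power law provides the more rigorous check the authors advocate:

```python
# Threshold a smoothed Gaussian noise signal, extract excursion "event sizes",
# fit a power-law tail by maximum likelihood, and run a KS test against it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
signal = np.convolve(rng.standard_normal(200_000), np.ones(20) / 20, mode='same')

thr = -0.1                                  # detection threshold (illustrative)
below = signal < thr
below[0] = below[-1] = False
starts = np.flatnonzero(~below[:-1] & below[1:]) + 1
ends = np.flatnonzero(below[:-1] & ~below[1:]) + 1
sizes = np.array([-signal[s:e].min() for s, e in zip(starts, ends)])

# Hill/MLE estimate of the power-law exponent above xmin.
xmin = np.quantile(sizes, 0.5)
tail = sizes[sizes >= xmin]
alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# KS test of the tail against the fitted Pareto CDF; a small p-value means the
# apparent power law does not survive the more rigorous test.
cdf = lambda s: 1.0 - (xmin / s) ** (alpha - 1.0)
print("alpha:", alpha, "KS p-value:", stats.kstest(tail, cdf).pvalue)
```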

  12. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e. energy production vs. demand) requires a much finer resolution (e.g. hourly). Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
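
    A toy illustration of point (b), linearizing the allocation problem of a single time step (the reservoir, efficiency, and demand figures are invented; the actual framework uses specialized linear network programming rather than a general-purpose LP solver):

```python
# One time step of a toy water-energy allocation, posed as a small LP.
from scipy.optimize import linprog

storage, inflow = 120.0, 30.0      # hm^3: current storage and step inflow
e_turbine = 0.9                    # GWh per hm^3 released through turbines
demand_water, demand_energy = 25.0, 40.0

# Variables: [release_turbine, release_supply, spill].
# Objective: value energy and water deliveries, lightly penalize spill.
c = [-e_turbine, -1.0, 0.01]
A_ub = [[1.0, 1.0, 1.0]]                   # total outflow limited by water balance
b_ub = [storage + inflow]
bounds = [(0, demand_energy / e_turbine),  # turbine release capped by energy demand
          (0, demand_water),               # supply release capped by water demand
          (0, None)]                       # spill nonnegative

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
release_turbine, release_supply, spill = res.x
print("energy produced (GWh):", e_turbine * release_turbine)
```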

  13. Transformational leadership can improve workforce competencies.

    PubMed

    Thompson, Juliana

    2012-03-01

    Staffing problems can arise because of poor delegation skills or a failure by leaders to respond appropriately to economic factors and patient demographics. Training dilemmas, meanwhile, can arise because of managers' confusion about what constitutes 'training' and what constitutes 'education', and where responsibility for provision lies, with the consequence that they neglect these activities. This article uses Kouzes and Posner's (2009) transformational leadership model to show how managers can respond. Leaders who challenge budgets, consider new ways of working and engage effectively with the workforce can improve productivity and care, while those who invest in appropriate learning will have a highly trained workforce. The author explains how integration of leadership roles and management functions can lead to innovative problem solving.

  14. Euthanasia from the perspective of hospice care.

    PubMed

    Gillett, G

    1994-01-01

    The hospice believes in the concept of a gentle and harmonious death. In most hospice settings there is also a rejection of active euthanasia. This set of two apparently conflicting principles can be defended on the basis of two arguments. The first is that doctors should not foster the intent to kill as part of their moral and clinical character. This allows proper sensitivity to the complex and difficult situation that arises in many of the most difficult terminal care situations. The second argument turns on the seduction of technological solutions to human problems and the slippery slope that may arise in the presence of a quick and convenient way of dealing with problems of death and dying.

  15. Solving Integer Programs from Dependence and Synchronization Problems

    DTIC Science & Technology

    1993-03-01

    Solving Integer Programs from Dependence and Synchronization Problems. Jaspal Subhlok, March 1993, CMU-CS-93-130, School of Computer Science. ...The method is an exact and efficient way of solving integer programming problems arising in dependence and synchronization analysis of parallel programs... Keywords: exact dependence testing, integer programming, parallelizing compilers, parallel program analysis, synchronization analysis.

  16. The King and Prisoner Puzzle: A Way of Introducing the Components of Logical Structures

    ERIC Educational Resources Information Center

    Roh, Kyeong Hah; Lee, Yong Hah; Tanner, Austin

    2016-01-01

    The purpose of this paper is to present issues related to student understanding of logical components that arise when solving word problems. We designed a logic problem called the King and Prisoner Puzzle--a linguistically simple, yet logically challenging problem. In this paper, we describe various student solutions to the puzzle and discuss the…

  17. The application of NASTRAN at Sperry Univac Holland. [analysis of engineering, modelling, and use of program system

    NASA Technical Reports Server (NTRS)

    Koopmans, G.

    1973-01-01

    The very divergent problems arising in different calculations indicate that NASTRAN is not always accessible for common use. Problems with engineering, modelling, and use of the program system are analysed, and a way of solving them is outlined. Related to this, some supplementary modifications were made at Sperry Univac Holland to make the program easier for the less skilled user. The implementation of a new element also gives an insight into the use of NASTRAN at Sperry Univac Holland. As the users of Univac computers come from very different kinds of industries, such as shipbuilding, petrochemical, and building industries, the variety of problems coming from these users is very large. This variety results in experience not with one special kind of calculation or one special kind of construction, but with a wide area of problems arising in the use of NASTRAN. These problems can roughly be divided into three different groups: (1) recognition of what is to be calculated and how, (2) construction of a model, and (3) handling the NASTRAN program. These are the basic problems for every less skilled user of NASTRAN, and the Application/Research Department of Sperry Univac has to give reasonable answers to these questions.

  18. [Errors in wound management].

    PubMed

    Filipović, Marinko; Novinscak, Tomislav

    2014-10-01

    Chronic ulcers have adverse effects on the patient's quality of life and productivity, thus posing a financial burden upon the healthcare system. Chronic wound healing is a complex process resulting from the interaction of the patient's general health status, wound-related factors, medical personnel skill and competence, and therapy-related products. In clinical practice, considerable improvement has been made in the treatment of chronic wounds, which is evident in the reduced rate of the severe forms of chronic wounds in outpatient clinics. However, in spite of all the modern approaches, the efforts invested by medical personnel, and the agents available for wound care, numerous problems are still encountered in daily practice. Most frequently, the problems arise from inappropriate education, of young personnel in particular, the absence of a multidisciplinary approach, and inadequate communication among the personnel directly involved in wound treatment. To perceive them more clearly, the potential problems or complications in the management of chronic wounds can be classified into the following groups: problems mostly related to the use of wound coverage and other etiology-related specificities of wound treatment; problems related to incompatibility of the agents used in wound treatment; and problems arising from failure to ensure aseptic and antiseptic performance conditions.

  19. Cognitive reserve as a moderator of responsiveness to an online problem-solving intervention for adolescents with complicated mild to severe traumatic brain injury

    PubMed Central

    Karver, Christine L.; Wade, Shari L.; Cassedy, Amy; Taylor, H. Gerry; Brown, Tanya M.; Kirkwood, Michael W.; Stancin, Terry

    2013-01-01

    Children and adolescents with traumatic brain injury (TBI) often experience behavior difficulties that may arise from problem-solving deficits and impaired self-regulation. However, little is known about the relationship of neurocognitive ability to post-TBI behavioral recovery. To address this question, we examined whether verbal intelligence, as estimated by Vocabulary scores from the Wechsler Abbreviated Scale of Intelligence, predicted improvements in behavior and executive functioning following a problem-solving intervention for adolescents with TBI. 132 adolescents with complicated mild to severe TBI were randomly assigned to a 6-month web-based problem-solving intervention (CAPS; n = 65) or to an internet resource comparison (IRC; n = 67) group. Vocabulary moderated the association between treatment group and improvements in meta-cognitive abilities. Examination of the mean estimates indicated that for those with lower Vocabulary scores, pre-intervention Metacognition Index scores from the Behavior Rating Inventory of Executive Function (BRIEF) did not differ between the groups, but post-intervention scores were significantly lower (more improved) for those in the CAPS group. These findings suggest that low verbal intelligence was associated with greater improvements in executive functioning following the CAPS intervention and that verbal intelligence may have an important role in response to intervention for TBI. Understanding predictors of responsiveness to interventions allows clinicians to tailor treatments to individuals, thus improving efficacy. PMID:23710617

  20. On the scalability of the Albany/FELIX first-order Stokes approximation ice sheet solver for large-scale simulations of the Greenland and Antarctic ice sheets

    DOE PAGES

    Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; ...

    2015-01-01

    We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. In addition, a weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. We show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
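
    The following sketch shows the general pattern of an ILU-preconditioned Krylov solve using SciPy, with a 2D Poisson system standing in for the discretized first-order Stokes equations (this is not the Albany/FELIX solver stack; an AMG preconditioner could be swapped in via, e.g., the PyAMG package):

```python
# ILU-preconditioned GMRES on a sparse 2D Laplacian test system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()     # 2D Laplacian, n^2 unknowns
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)          # preconditioner as an operator

x, info = spla.gmres(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(b - A @ x))
```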

  1. Least-squares finite element solution of 3D incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, Tsung-Liang; Povinelli, Louis A.

    1992-01-01

    Although significant progress has been made in the finite element solution of incompressible viscous flow problems, the development of more efficient methods is still needed before large-scale computation of 3D problems becomes feasible. This paper presents such a development. The most popular finite element method for the solution of incompressible Navier-Stokes equations is the classic Galerkin mixed method based on the velocity-pressure formulation. The mixed method requires the use of different elements to interpolate the velocity and the pressure in order to satisfy the Ladyzhenskaya-Babuska-Brezzi (LBB) condition for the existence of the solution. On the other hand, due to the lack of symmetry and positive definiteness of the linear equations arising from the mixed method, iterative methods for the solution of linear systems have been hard to come by. Therefore, direct Gaussian elimination has been considered the only viable method for solving the systems. But, for three-dimensional problems, the computer resources required by a direct method become prohibitively large. In order to overcome these difficulties, a least-squares finite element method (LSFEM) has been developed. This method is based on the first-order velocity-pressure-vorticity formulation. In this paper the LSFEM is extended for the solution of three-dimensional incompressible Navier-Stokes equations written in the first-order quasi-linear velocity-pressure-vorticity formulation.

  2. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimulus from its environment. CAS modeling is an effective method of managing complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build an SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to integration of these data.

  3. Creating a national citizen engagement process for energy policy

    PubMed Central

    Pidgeon, Nick; Demski, Christina; Butler, Catherine; Parkhill, Karen; Spence, Alexa

    2014-01-01

    This paper examines some of the science communication challenges involved when designing and conducting public deliberation processes on issues of national importance. We take as our illustrative case study a recent research project investigating public values and attitudes toward future energy system change for the United Kingdom. National-level issues such as this are often particularly difficult to engage the public with because of their inherent complexity, derived from multiple interconnected elements and policy frames, extended scales of analysis, and different manifestations of uncertainty. With reference to the energy system project, we discuss ways of meeting a series of science communication challenges arising when engaging the public with national topics, including the need to articulate systems thinking and problem scale, to provide balanced information and policy framings in ways that open up spaces for reflection and deliberation, and the need for varied methods of facilitation and data synthesis that permit access to participants’ broader values. Although resource intensive, national-level deliberation is possible and can produce useful insights both for participants and for science policy. PMID:25225393

  4. Creating a national citizen engagement process for energy policy.

    PubMed

    Pidgeon, Nick; Demski, Christina; Butler, Catherine; Parkhill, Karen; Spence, Alexa

    2014-09-16

    This paper examines some of the science communication challenges involved when designing and conducting public deliberation processes on issues of national importance. We take as our illustrative case study a recent research project investigating public values and attitudes toward future energy system change for the United Kingdom. National-level issues such as this are often particularly difficult to engage the public with because of their inherent complexity, derived from multiple interconnected elements and policy frames, extended scales of analysis, and different manifestations of uncertainty. With reference to the energy system project, we discuss ways of meeting a series of science communication challenges arising when engaging the public with national topics, including the need to articulate systems thinking and problem scale, to provide balanced information and policy framings in ways that open up spaces for reflection and deliberation, and the need for varied methods of facilitation and data synthesis that permit access to participants' broader values. Although resource intensive, national-level deliberation is possible and can produce useful insights both for participants and for science policy.

  5. Synchronicity in predictive modelling: a new view of data assimilation

    NASA Astrophysics Data System (ADS)

    Duane, G. S.; Tribbia, J. J.; Weiss, J. B.

    2006-11-01

    The problem of data assimilation can be viewed as one of synchronizing two dynamical systems, one representing "truth" and the other representing "model", with a unidirectional flow of information between the two. Synchronization of truth and model defines a general view of data assimilation, as machine perception, that is reminiscent of the Jung-Pauli notion of synchronicity between matter and mind. The dynamical systems paradigm of the synchronization of a pair of loosely coupled chaotic systems is expected to be useful because quasi-2D geophysical fluid models have been shown to synchronize when only medium-scale modes are coupled. The synchronization approach is equivalent to standard approaches based on least-squares optimization, including Kalman filtering, except in highly non-linear regions of state space where observational noise links regimes with qualitatively different dynamics. The synchronization approach is used to calculate covariance inflation factors from parameters describing the bimodality of a one-dimensional system. The factors agree in overall magnitude with those used in operational practice on an ad hoc basis. The calculation is robust against the introduction of stochastic model error arising from unresolved scales.
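
    A minimal sketch of data assimilation as synchronization, with illustrative parameters: a "truth" Lorenz-63 system drives a "model" copy through its x-component only (one-way nudging), and the model state converges toward the truth:

```python
# Master-slave synchronization of two Lorenz-63 systems via x-nudging.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, k = 0.002, 50_000, 8.0        # k = coupling (nudging) strength
truth = np.array([1.0, 1.0, 1.0])
model = np.array([-5.0, 7.0, 20.0])      # badly initialized model

for _ in range(steps):
    truth = truth + dt * lorenz(truth)                        # forward Euler
    nudge = np.array([k * (truth[0] - model[0]), 0.0, 0.0])   # one-way coupling
    model = model + dt * (lorenz(model) + nudge)

# With sufficient coupling strength the error should be near zero: synchronized.
print("final state error:", np.linalg.norm(truth - model))
```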

  6. A temps nouveaux, solutions nouvelles: quelques propositions (New Times, New Solutions: Some Proposals).

    ERIC Educational Resources Information Center

    Capelle, Guy

    1983-01-01

    Serious problems in education in Latin America arising from political, economic, and social change periodically put in question the status, objectives, and manner of French second-language instruction. A number of solutions to general and specific pedagogical problems are proposed. (MSE)

  7. Understanding Gender-Based Wage Discrimination: Legal Interpretation and Trends of Pay Equity in Higher Education.

    ERIC Educational Resources Information Center

    Luna, Gaye

    1990-01-01

    Traces the history of laws and litigation concerning pay equity issues, also referred to as wage equity and comparable worth. Suggests that universities and colleges identify possible problems and take voluntary corrective measures before pay-equity problems arise. (MLF)

  8. Reflective Questions, Self-Questioning and Managing Professionally Situated Practice

    ERIC Educational Resources Information Center

    Malthouse, Richard; Watts, Mike; Roffey-Barentsen, Jodi

    2015-01-01

    Reflective self-questioning arises within the workplace when people are confronted with professional problems and situations. This paper focuses on reflective and "situated reflective" questions in terms of self-questioning and professional workplace problem solving. In our view, the situational context, entailed by the setting, social…

  9. Bright-White Beetle Scales Optimise Multiple Scattering of Light

    NASA Astrophysics Data System (ADS)

    Burresi, Matteo; Cortese, Lorenzo; Pattelli, Lorenzo; Kolle, Mathias; Vukusic, Peter; Wiersma, Diederik S.; Steiner, Ullrich; Vignolini, Silvia

    2014-08-01

    Whiteness arises from diffuse and broadband reflection of light, typically achieved through optical scattering in randomly structured media. In contrast to structural colour due to coherent scattering, white appearance generally requires a relatively thick system comprising randomly positioned high refractive-index scattering centres. Here, we show that the exceptionally bright white appearance of Cyphochilus and Lepidiota stigma beetles arises from a remarkably optimised anisotropy of intra-scale chitin networks, which act as dense scattering media. Using time-resolved measurements, we show that light propagating in the scales of the beetles undergoes pronounced multiple scattering that is associated with the lowest transport mean free path reported to date for low-refractive-index systems. Our light-transport investigation unveils a high level of optimisation that achieves high-brightness white in a thin, low-mass-per-unit-area anisotropic disordered nanostructure.

  10. H→γγ as a Triangle Anomaly: Possible Implications for the Hierarchy Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Gouvea, Andre; Kile, Jennifer; Vega-Morales, Roberto

    2013-06-24

    The Standard Model calculation of H→γγ has the curious feature of being finite but regulator-dependent. While dimensional regularization yields a result which respects the electromagnetic Ward identities, additional terms which violate gauge invariance arise if the calculation is done setting d = 4. This discrepancy between the d = 4 - ϵ and d = 4 results is recognized as a true ambiguity which must be resolved using physics input; as dimensional regularization respects gauge invariance, the d = 4 - ϵ calculation is accepted as the correct SM result. However, here we point out another possibility: working in analogy with the gauge chiral anomaly, we note that it is possible that the individual diagrams do violate the electromagnetic Ward identities, but that the gauge-invariance-violating terms cancel when all contributions to H→γγ, both from the SM and from new physics, are included. We thus examine the consequences of the hypothesis that the d = 4 calculation is valid, but that such a cancellation occurs. We work in general renormalizable gauge, thus avoiding issues with momentum-routing ambiguities. We point out that the gauge-invariance-violating terms in d = 4 arise not just for the diagram containing an SM $W^{\pm}$ boson, but also for general fermion and scalar loops, and we relate these terms to a lack of shift invariance in Higgs tadpole diagrams. We then derive the analogue of "anomaly cancellation conditions", and find consequences for solutions to the hierarchy problem. In particular, we find that supersymmetry obeys these conditions, even if it is softly broken at an arbitrarily high scale.

  11. Expanding the scope of health information systems. Challenges and developments.

    PubMed

    Kuhn, K A; Wurst, S H R; Bott, O J; Giuse, D A

    2006-01-01

    To identify current challenges and developments in health information systems, reports on HIS, eHealth and process support were analyzed, and core problems and challenges were identified. Health information systems are extending their scope towards regional networks and health IT infrastructures. Integration, interoperability and interaction design are still today's core problems. Additional problems arise through the integration of genetic information into the health care process. There are noticeable trends towards solutions for these problems.

  12. Can ethnography save the life of medical ethics?

    PubMed

    Hoffmaster, B

    1992-12-01

    Since its inception contemporary medical ethics has been regarded by many of its practitioners as 'applied ethics', that is, the application of philosophical theories to the moral problems that arise in health care. This 'applied ethics' model of medical ethics is, however, beset with internal and external difficulties. The internal difficulties point out that the model is intrinsically flawed. The external difficulties arise because the model does not fit work in the field. Indeed, the strengths of that work are its highly nuanced, particularized analyses of cases and issues and its appreciation of the circumstances and contexts that generate and structure these cases and issues. A shift away from a theory-driven 'applied ethics' to a more situational, contextual approach to medical ethics opens the way for ethnographic studies of moral problems in health care as well as a conception of moral theory that is more responsive to the empirical dimensions of those problems.

  13. Communicating Scientific Findings to Lawyers, Policy-Makers, and the Public (Invited)

    NASA Astrophysics Data System (ADS)

    Thompson, W.; Velsko, S. P.

    2013-12-01

    This presentation will summarize the authors' collaborative research on inferential errors, bias and communication difficulties that have arisen in the area of WMD forensics. This research involves analysis of problems that have arisen in past national security investigations, interviews with scientists from various disciplines whose work has been used in WMD investigations, interviews with policy-makers, and psychological studies of lay understanding of forensic evidence. Implications of this research for scientists involved in nuclear explosion monitoring will be discussed. Among the issues covered will be:
    - Potential incompatibilities between the questions policy makers pose and the answers that experts can provide.
    - Common misunderstandings of scientific and statistical data.
    - Advantages and disadvantages of various methods for describing and characterizing the strength of scientific findings.
    - Problems that can arise from excessive hedging or, alternatively, insufficient qualification of scientific conclusions.
    - Problems that can arise from melding scientific and non-scientific evidence in forensic assessments.

  14. The Fragmentation Criteria in Local Vertically Stratified Self-gravitating Disk Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baehr, Hans; Klahr, Hubert; Kratter, Kaitlin M., E-mail: baehr@mpia.de

    Massive circumstellar disks are prone to gravitational instabilities, which trigger the formation of spiral arms that can fragment into bound clumps under the right conditions. Two-dimensional simulations of self-gravitating disks are useful starting points for studying fragmentation because they allow high-resolution simulations of thin disks. However, convergence issues can arise in 2D from various sources. One of these sources is the 2D approximation of self-gravity, which exaggerates the effect of self-gravity on small scales when the potential is not smoothed to account for the assumed vertical extent of the disk. This effect is enhanced by increased resolution, resulting in fragmentation at longer cooling timescales β. If true, it suggests that 3D simulations of disk fragmentation may not have the same convergence problem and could be used to examine the nature of fragmentation without smoothing self-gravity on scales similar to the disk scale height. To that end, we have carried out local 3D self-gravitating disk simulations with simple β cooling and fixed background irradiation to determine whether 3D is necessary to properly describe disk fragmentation. Above a resolution of ∼40 grid cells per scale height, we find that our simulations converge with respect to the cooling timescale, in agreement with analytic expectations, which place the fragmentation boundary at β_crit = 3.
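
    For reference, a standard statement of the simple β-cooling prescription the abstract refers to (a Gammie-type parameterization; the abstract itself does not spell out the exact form used):

```latex
% Gammie-type beta-cooling: internal energy u decays on a timescale tied to
% the local orbital frequency Omega; fragmentation occurs for short cooling.
\frac{\mathrm{d}u}{\mathrm{d}t} = -\frac{u}{t_{\mathrm{cool}}},
\qquad t_{\mathrm{cool}} = \frac{\beta}{\Omega},
\qquad \text{fragmentation for } \beta < \beta_{\mathrm{crit}} \approx 3.
```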

  15. A Split Forcing Technique to Reduce Log-layer Mismatch in Wall-modeled Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Deleon, Rey; Senocak, Inanc

    2016-11-01

    The conventional approach to sustaining a flow field in a periodic channel flow seems to be the culprit behind the log-law mismatch problem that has been reported in many studies hybridizing Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) techniques, commonly referred to as hybrid RANS-LES. To address this issue, we propose a split-forcing approach that relies only on the conservation of mass principle. We adopt a basic hybrid RANS-LES technique on a coarse mesh with wall-stress boundary conditions to simulate turbulent channel flows at friction Reynolds numbers of 2000 and 5200 and demonstrate good agreement with benchmark data. We also report a duality in velocity scale that is a specific consequence of the split-forcing framework applied to hybrid RANS-LES. The first scale is the friction velocity derived from the wall shear stress. The second scale arises in the core LES region, with a value different from that at the wall. Second-order turbulence statistics agree well with the benchmark data when normalized by the core friction velocity, whereas the friction velocity at the wall remains the appropriate scale for the mean velocity profile. Based on our findings, we suggest reevaluating more sophisticated hybrid RANS-LES approaches within the split-forcing framework. Work funded by National Science Foundation under Grant No. 1056110 and 1229709. First author acknowledges the University of Idaho President's Doctoral Scholars Award.

  16. [Current problems arising from not having biosafety level 4 laboratories in Japan--qualitative study of infectious disease experts].

    PubMed

    Yamamoto, Yuko; Horiguchi, Itsuko; Marui, Eiji

    2009-09-01

    No public consensus yet exists in Japan on handling Biosafety Level 4 (BSL4) agents, and no laboratory is operational at BSL4. A discussion that includes neighboring residents and experts should be initiated to communicate risks. In this article, we present the current situation and prioritize the problems we presently face. A three-stage Delphi survey was conducted. The subjects were twenty-two persons with extensive experience and knowledge of infectious diseases. Seven projections and issues were made with regard to the problems arising from the lack of an operational BSL4 laboratory. These were tabulated by the KJ method. The top seven projections were scored, such that the top received 7 points and the last received 1 point. A total of 51 projections were obtained in the first round of the survey, 39 in the second, and 29 in the last. The projection with the highest score was that it is impossible to cope with newly emerging infectious diseases. The second was that complete diagnoses are impossible without a BSL4 laboratory. All projections and issues were divided into four main groups: issues for researchers and laboratory staff, clinical practice and research on BSL4 agents, domestic and global security, and Japan's international position. We clarified the possible problems arising from not having BSL4 laboratories in Japan. The identification of projections by the Delphi survey in this study should be considered one of many attempts to develop effective risk communication strategies.

  17. Mandala Networks: ultra-small-world and highly sparse graphs

    PubMed Central

    Sampaio Filho, Cesar I. N.; Moreira, André A.; Andrade, Roberto F. S.; Herrmann, Hans J.; Andrade, José S.

    2015-01-01

    The increasing demands in security and reliability of infrastructures call for the optimal design of their embedded complex network topologies. The following question then arises: what is the optimal layout to best fulfill all the demands? Here we present a general solution to this problem for scale-free networks, like the Internet and airline networks. Precisely, we disclose a way to systematically construct networks which are robust against random failures. Furthermore, as the size of the network increases, its shortest path becomes asymptotically invariant and the density of links goes to zero, making it ultra-small world and highly sparse, respectively. The first property is ideal for communication and navigation purposes, while the second is interesting economically. Finally, we show that some simple changes on the original network formulation can lead to an improved topology against malicious attacks. PMID:25765450

  18. Efficient multitasking of Choleski matrix factorization on CRAY supercomputers

    NASA Technical Reports Server (NTRS)

    Overman, Andrea L.; Poole, Eugene L.

    1991-01-01

    A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
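
    A dense, sequential sketch of the column-oriented Choleski factorization being parallelized (the variable-band storage scheme and the CRAY multitasking directives are omitted):

```python
# Column-oriented Cholesky factorization A = L L^T for SPD matrices.
import numpy as np

def cholesky(A):
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):                          # one column at a time
        s = A[j, j] - np.dot(L[j, :j], L[j, :j])
        L[j, j] = np.sqrt(s)                    # diagonal entry
        # Update of the column below the diagonal; in a variable-band scheme
        # this loop would skip the known zeros outside each row's band.
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

A = np.array([[4.0, 2.0, 0.0], [2.0, 5.0, 2.0], [0.0, 2.0, 5.0]])
L = cholesky(A)
print(np.allclose(L @ L.T, A))   # True
```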

  19. Families of Graph Algorithms: SSSP Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-08-28

    Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra's algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another by the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then derive the existing and new algorithms by methodically exploring semantic and spatial orderings of work. Our experimental results show that the newly derived algorithms perform better than the existing distributed-memory parallel algorithms, especially at higher scales.
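
    As a compact illustration of one member of this family, the following is a sequential rendition of Δ-stepping (the distributed AGM machinery is omitted, and the bucket handling is simplified):

```python
# Delta-stepping SSSP: light edges relaxed eagerly within a bucket,
# heavy edges deferred to later buckets.
import math
from collections import defaultdict

def delta_stepping(graph, source, delta):
    """graph: {u: [(v, w), ...]}; returns shortest distances from source."""
    dist = defaultdict(lambda: math.inf)
    buckets = defaultdict(set)

    def relax(v, d):                          # move v to the bucket matching d
        if d < dist[v]:
            old = int(dist[v] // delta) if dist[v] < math.inf else None
            if old is not None and v in buckets.get(old, ()):
                buckets[old].discard(v)
                if not buckets[old]:
                    del buckets[old]
            dist[v] = d
            buckets[int(d // delta)].add(v)

    relax(source, 0.0)
    while buckets:
        i = min(buckets)                      # lowest non-empty bucket
        settled = set()
        while buckets.get(i):                 # light relaxations may refill bucket i
            frontier = buckets.pop(i)
            settled |= frontier
            for u in frontier:
                for v, w in graph.get(u, []):
                    if w <= delta:            # light edge: relax immediately
                        relax(v, dist[u] + w)
        for u in settled:                     # heavy edges go to later buckets
            for v, w in graph.get(u, []):
                if w > delta:
                    relax(v, dist[u] + w)
    return dict(dist)

g = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.5)]}
print(delta_stepping(g, 0, delta=2.0))        # {0: 0.0, 1: 1.0, 2: 2.5}
```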

  20. Mechanism for thermal relic dark matter of strongly interacting massive particles.

    PubMed

    Hochberg, Yonit; Kuflik, Eric; Volansky, Tomer; Wacker, Jay G

    2014-10-24

    We present a new paradigm for achieving thermal relic dark matter. The mechanism arises when a nearly secluded dark sector is thermalized with the standard model after reheating. The freeze-out process is a number-changing 3→2 annihilation of strongly interacting massive particles (SIMPs) in the dark sector, and points to sub-GeV dark matter. The couplings to the visible sector, necessary for maintaining thermal equilibrium with the standard model, imply measurable signals that will allow coverage of a significant part of the parameter space with future indirect- and direct-detection experiments and via direct production of dark matter at colliders. Moreover, 3→2 annihilations typically predict sizable 2→2 self-interactions which naturally address the "core versus cusp" and "too-big-to-fail" small-scale structure formation problems.

  1. Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.; Ratnayake, Nalin A.

    2011-01-01

    As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
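
    A minimal sketch of generating mutually orthogonal square-wave inputs from Walsh-Hadamard patterns (the surface count, amplitudes, and record length are illustrative, not the flight-test values); inner products between distinct input histories are exactly zero, which is what de-correlates the surface-effectiveness estimates:

```python
# Mutually orthogonal square-wave inputs from rows of a Hadamard matrix.
import numpy as np
from scipy.linalg import hadamard

n_surfaces, n_samples = 4, 256
H = hadamard(8)                                   # rows are +/-1 Walsh patterns
rows = H[1:1 + n_surfaces]                        # skip the constant row

# Stretch each +/-1 pattern into a square-wave time history, one per surface.
inputs = np.repeat(rows, n_samples // rows.shape[1], axis=1).astype(float)

# Orthogonality check: Gram matrix is diagonal (off-diagonal inner products = 0).
gram = inputs @ inputs.T
print(gram)   # diagonal = n_samples, off-diagonal = 0: de-correlated inputs
```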

  2. Accelerated Cartesian expansions for the rapid solution of periodic multiscale problems

    DOE PAGES

    Baczewski, Andrew David; Dault, Daniel L.; Shanker, Balasubramaniam

    2012-07-03

    We present an algorithm for the fast and efficient solution of integral equations that arise in the analysis of scattering from periodic arrays of PEC objects, such as multiband frequency selective surfaces (FSS) or metamaterial structures. Our approach relies upon the method of Accelerated Cartesian Expansions (ACE) to rapidly evaluate the requisite potential integrals. ACE is analogous to FMM in that it can be used to accelerate the matrix vector product used in the solution of systems discretized using MoM. Here, ACE provides linear scaling in both CPU time and memory. Details regarding the implementation of this method within the context of periodic systems are provided, as well as results that establish error convergence and scalability. In addition, we also demonstrate the applicability of this algorithm by studying several exemplary electrically dense systems.

  3. The Common Patterns of Nature

    PubMed Central

    Frank, Steven A.

    2010-01-01

    We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
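
    The maximum-entropy logic sketched in the abstract can be stated compactly. For a density p, maximizing entropy subject to moment constraints gives an exponential-family form; which constraint the aggregation preserves determines which classic pattern appears (a standard result, restated here rather than quoted from the paper):

        \max_{p}\; -\int p(x)\,\ln p(x)\,dx
        \quad\text{subject to}\quad
        \int p\,dx = 1,\;\; \int T_k(x)\,p(x)\,dx = \bar{T}_k
        \;\;\Longrightarrow\;\;
        p(x) \propto \exp\Bigl(-\sum_k \lambda_k T_k(x)\Bigr).

    With T(x) = x on x >= 0 (mean fixed) this yields the exponential pattern; with T_1(x) = x and T_2(x) = x^2 (mean and variance fixed) the Gaussian; with T(x) = ln x (geometric mean fixed) the power law p(x) ∝ x^{-λ}.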

  4. Graph Design via Convex Optimization: Online and Distributed Perspectives

    NASA Astrophysics Data System (ADS)

    Meng, De

    Network and graph have long been natural abstractions of relations in a variety of applications, e.g. transportation, power systems, social networks, communication, electrical circuits, etc. As a large number of computation and optimization problems are naturally defined on graphs, graph structures not only enable important properties of these problems, but also lead to highly efficient distributed and online algorithms. For example, graph separability enables parallelism in computation and operation while limiting the size of local problems. More interestingly, graphs can be defined and constructed in order to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, which establish a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. We introduce the GDMP: the problem of finding the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that GDMP is a convex optimization problem for a wide class of flow costs, and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) by exploiting specific problem structures to solve large-scale GDMP, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g. distributed tracking and localization, estimation problems in sensor networks, and multi-agent coordination. Distributed optimization aims to optimize a global objective function formed by the summation of coupled local functions over a graph via only local communication and computation. We develop a weighted proximal ADMM for distributed optimization using graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing graph weights and algorithm parameters. Finally, we propose a new problem on networks called the Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game, player one first chooses a candidate edge and reveals it to player two, then player two decides whether to accept it; player two can accept only a limited number of edges and must make online decisions with the goal of achieving the best properties of the synthesized network. The network properties considered include the number of spanning trees, algebraic connectivity and total effective resistance. These network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithm framework for the general online network formation game, and analyze the algorithm performance by the competitive ratio and regret.
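
    A toy instance of the GDMP can be written directly as a linear program via the shortest-path dual: node potentials u satisfy |u_j - u_i| <= w_ij on every edge, and u_t - u_s lower-bounds every s-t path length. The graph, weight budget, and use of CVXPY below are illustrative assumptions, not the dissertation's formulation:

        import cvxpy as cp

        edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]   # hypothetical 4-node graph
        s, t, n = 0, 3, 4

        w = cp.Variable(len(edges), nonneg=True)  # edge weights: the design variables
        u = cp.Variable(n)                        # node potentials (shortest-path dual)

        cons = []
        for k, (i, j) in enumerate(edges):
            cons += [u[j] - u[i] <= w[k], u[i] - u[j] <= w[k]]
        cons.append(cp.sum(w) <= 10)              # an illustrative convex weight budget
        cons.append(u[s] == 0)                    # pin the potential gauge

        # Maximizing u[t] - u[s] jointly over (w, u) maximizes the geodesic length.
        prob = cp.Problem(cp.Maximize(u[t] - u[s]), cons)
        prob.solve()
        print("max geodesic length:", prob.value)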

  5. An O([Formula: see text]) algorithm for sorting signed genomes by reversals, transpositions, transreversals and block-interchanges.

    PubMed

    Yu, Shuzhi; Hao, Fanchang; Leong, Hon Wai

    2016-02-01

    We consider the problem of sorting signed permutations by reversals, transpositions, transreversals, and block-interchanges. The problem arises in the study of species evolution via large-scale genome rearrangement operations. Recently, Hao et al. gave a 2-approximation scheme called genome sorting by bridges (GSB) for solving this problem. Their result extended and unified the results of (i) He and Chen - a 2-approximation algorithm allowing reversals, transpositions, and block-interchanges (by also allowing transreversals) and (ii) Hartman and Sharan - a 1.5-approximation algorithm allowing reversals, transpositions, and transreversals (by also allowing block-interchanges). The GSB result is based on the introduction of three bridge structures in the breakpoint graph, the L-bridge, T-bridge, and X-bridge, which model a good reversal, a transposition/transreversal, and a block-interchange, respectively. However, the paper by Hao et al. focused on proving the 2-approximation GSB scheme and only mentioned a straightforward [Formula: see text] algorithm. In this paper, we give an [Formula: see text] algorithm for implementing the GSB scheme. The key idea behind our faster GSB algorithm is to represent cycles in the breakpoint graph by their canonical sequences, which greatly simplifies the search for these bridge structures. We also give some comparison results (running time and computed distances) against the original GSB implementation.
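
    The breakpoint graph underlying the bridge structures can be built mechanically. The sketch below only constructs the graph for a signed permutation and counts its alternating cycles, under one common doubling convention; the bridge search and canonical-sequence machinery of the paper are not reproduced:

        def breakpoint_cycles(perm):
            # perm: signed permutation of 1..n as a list, e.g. [3, -1, 2].
            # Unsigned doubling: +x -> (2x-1, 2x), -x -> (2x, 2x-1), framed by 0 and 2n+1.
            n = len(perm)
            seq = [0]
            for x in perm:
                seq += [2 * x - 1, 2 * x] if x > 0 else [-2 * x, -2 * x - 1]
            seq.append(2 * n + 1)

            # Black edges join seq[2i] and seq[2i+1]; gray edges join v and v ^ 1.
            black = {}
            for i in range(0, len(seq), 2):
                black[seq[i]], black[seq[i + 1]] = seq[i + 1], seq[i]

            seen, cycles = set(), 0
            for start in range(2 * n + 2):
                if start in seen:
                    continue
                cycles, v = cycles + 1, start
                while True:
                    seen.add(v)
                    u = black[v]          # traverse one black edge...
                    seen.add(u)
                    v = u ^ 1             # ...then the gray edge (2i <-> 2i+1)
                    if v == start:
                        break
            return cycles

        assert breakpoint_cycles([1, 2, 3]) == 4   # identity on n elements: n + 1 cycles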

  6. Addressing Cultural Diversity: Effects of a Problem-Based Intercultural Learning Unit

    ERIC Educational Resources Information Center

    Busse, Vera; Krause, Ulrike-Marie

    2015-01-01

    This article explores to what extent a problem-based learning unit in combination with cooperative learning and affectively oriented teaching methods facilitates intercultural learning. As part of the study, students reflected on critical incidents, which display misunderstandings or conflicts that arise as a result of cultural differences. In…

  7. A Separate Reality: The Problem of Uncooperative Experiments.

    ERIC Educational Resources Information Center

    Lersten, Ken

    The problem of the uncooperative experiment arises with the use of human subjects. Evidence shows that typical volunteer subjects have the following characteristics: better education, higher paying jobs, greater need for approval, lower authoritarianism, higher I.Q. score, and better adjustment to personal questions than nonvolunteers. Data also…

  8. Introducing Mathematics to Information Problem-Solving Tasks: Surface or Substance?

    ERIC Educational Resources Information Center

    Erickson, Ander

    2017-01-01

    This study employs a cross-case analysis in order to explore the demands and opportunities that arise when information problem-solving tasks are introduced into college mathematics classes. Professors at three universities collaborated with me to develop statistics-related activities that required students to engage in research outside the…

  9. Classroom Crisis Intervention through Contracting: A Moral Development Model.

    ERIC Educational Resources Information Center

    Smaby, Marlowe H.; Tamminen, Armas W.

    1981-01-01

    A counselor can arbitrate problem situations using a systematic approach to classroom intervention which includes meetings with the teacher and students. This crisis intervention model based on moral development can be more effective than reliance on guidance activities disconnected from the actual classroom settings where the problems arise.…

  10. Lexical Frames and Reported Speech

    ERIC Educational Resources Information Center

    Williams, Howard

    2004-01-01

    This paper addresses a problem of lexical choice that arises for ESL/EFL learners in the writing of research papers, critiques, interview reports, or any other sort of discourse that requires source attribution. The problem falls naturally into two parts. One part concerns the general lack of linguistic resources typically available (for various…

  11. RACIAL IMBALANCE AND EDUCATIONAL PLANNING.

    ERIC Educational Resources Information Center

    CONROY, VINCENT F.

    HARVARD'S ADMINISTRATIVE CAREER PROGRAM FACED THE GROWING PROBLEM OF NEGRO ENROLLMENT IN THE PUBLIC SCHOOLS. NOTING THE FREQUENCY AND INTENSITY WITH WHICH THE PROBLEM WAS ARISING AT THE NATIONAL LEVEL, A GROUP OF LAWYERS AND EDUCATORS CONVENED TO WORK OUT THE LEGAL ASPECTS OF SCHOOL INTEGRATION. FUNDS FROM THE FORD FOUNDATION INITIATED THE…

  12. Polyomino Problems to Confuse Computers

    ERIC Educational Resources Information Center

    Coffin, Stewart

    2009-01-01

    Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author--using a computer,…

  13. Digital Maps, Matrices and Computer Algebra

    ERIC Educational Resources Information Center

    Knight, D. G.

    2005-01-01

    The way in which computer algebra systems, such as Maple, have made the study of complex problems accessible to undergraduate mathematicians with modest computational skills is illustrated by some large matrix calculations, which arise from representing the Earth's surface by digital elevation models. Such problems are often considered to lie in…

  14. Development of a process control computer device for the adaptation of flexible wind tunnel walls

    NASA Technical Reports Server (NTRS)

    Barg, J.

    1982-01-01

    In wind tunnel tests, the problems of determining the wall pressure distribution, calculating the wall contour, and controlling adjustment of the walls arise. This report shows how these problems have been solved for the high speed wind tunnel of the Technical University of Berlin.

  15. CONGRESS ON SCIENCE TEACHING AND ECONOMIC GROWTH.

    ERIC Educational Resources Information Center

    Inter-Union Commission on the Teaching of Science, Paris (France).

    REPORTED ARE THE ACTIVITIES OF THE CONGRESS ORGANIZED BY THE INTER-UNION COMMISSION ON SCIENCE TEACHING (CEIS) OF THE INTERNATIONAL COUNCIL OF SCIENTIFIC UNIONS (ICSU). STUDIED WERE PROBLEMS ARISING IN SEVERAL BRANCHES OF KNOWLEDGE DUE TO BOTH INCREASED NUMBERS OF STUDENTS AND SHORTAGE OF TEACHERS. OF PARTICULAR INTEREST WERE THE PROBLEMS OF…

  16. IMPACTS OF MATERIAL SUBSTITUTION IN AUTOMOBILE MANUFACTURE ON RESOURCE RECOVERY. VOLUME II. APPENDICES A-E

    EPA Science Inventory

    Probable changes in the mix of materials used to manufacture automobiles were examined to determine if economic or technical problems in recycling could arise such that the 'abandoned automobile problem' would be resurrected. Future trends in materials composition of the automobi...

  17. On the Inclusion of Difference Equation Problems and Z Transform Methods in Sophomore Differential Equation Classes

    ERIC Educational Resources Information Center

    Savoye, Philippe

    2009-01-01

    In recent years, I started covering difference equations and z transform methods in my introductory differential equations course. This allowed my students to extend the "classical" methods for ordinary differential equations (ODEs) to discrete time problems arising in many applications.

  18. Ethical considerations in revision rhinoplasty.

    PubMed

    Wayne, Ivan

    2012-08-01

    The problems that arise when reviewing another surgeon's work, the financial aspects of revision surgery, and the controversies that present in marketing and advertising will be explored. The technological advances of computer imaging and the Internet have introduced new problems that require our additional consideration.

  19. Turbulent kinetic energy and a possible hierarchy of length scales in a generalization of the Navier-Stokes alpha theory.

    PubMed

    Fried, Eliot; Gurtin, Morton E

    2007-05-01

    We present a continuum-mechanical formulation and generalization of the Navier-Stokes alpha theory based on a general framework for fluid-dynamical theories with gradient dependencies. Our flow equation involves two additional problem-dependent length scales α and β. The first of these scales enters the theory through the internal kinetic energy, per unit mass, α²|D|², where D is the symmetric part of the gradient of the filtered velocity. The remaining scale is associated with a dissipative hyperstress which depends linearly on the gradient of the filtered vorticity. When α and β are equal, our flow equation reduces to the Navier-Stokes alpha equation. In contrast to the original derivation of the Navier-Stokes alpha equation, which relies on Lagrangian averaging, our formulation delivers boundary conditions. For a confined flow, our boundary conditions involve an additional length scale l characteristic of the eddies found near walls. Based on a comparison with direct numerical simulations for fully developed turbulent flow in a rectangular channel of height 2h, we find that α/β ≈ Re^0.470 and l/h ≈ Re^(-0.772), where Re is the Reynolds number. The first result, which arises as a consequence of identifying the internal kinetic energy with the turbulent kinetic energy, indicates that the choice α = β required to reduce our flow equation to the Navier-Stokes alpha equation is likely to be problematic. The second result evinces the classical scaling relation η/L ≈ Re^(-3/4) for the ratio of the Kolmogorov microscale η to the integral length scale L. The numerical data also suggest that l ≤ β. We are therefore led to conjecture a tentative hierarchy, l ≤ β < α, involving the three length scales entering our theory.

  20. The legacy of Charles Marlatt and efforts to limit plant pest invasions

    Treesearch

    Andrew M. Liebhold; Robert L. Griffin

    2016-01-01

    The problem of invasions by non-native plant pests has come to dominate the field of applied entomology. Most of the damaging insect pests of agriculture and forestry are non-native (Sailer 1978, Aukema et al. 2010) and this is a problem being faced around the world. This problem did not arise overnight; instead, there has been a steady accumulation of non-native...

  1. Singularities in Free Surface Flows

    NASA Astrophysics Data System (ADS)

    Thete, Sumeet Suresh

    Free surface flows, where the shape of the interface separating two or more phases or liquids is unknown a priori, are commonplace in industrial applications and nature. Distribution of drop sizes, coalescence rate of drops, and the behavior of thin liquid films are crucial to understanding and enhancing industrial practices such as ink-jet printing, spraying, separations of chemicals, and coating flows. When a contiguous mass of liquid such as a drop, filament or a film undergoes breakup to give rise to multiple masses, the topological transition is accompanied by a finite-time singularity. Such singularities also arise when two or more masses of liquid merge into each other or coalesce. Thus the dynamics close to the singularity determines the fate of about-to-form drops or films and the applications they are involved in, and therefore needs to be analyzed precisely. The primary goal of this thesis is to resolve and analyze the dynamics close to singularity when free surface flows experience a topological transition, using a combination of theory, experiments, and numerical simulations. The first problem under consideration focuses on the dynamics following flow shut-off in bottle filling applications that are relevant to the pharmaceutical and consumer products industries, using numerical techniques based on Galerkin Finite Element Methods (GFEM). The second problem addresses the dual flow behavior of aqueous foams that is observed in oil and gas fields and estimates the relevant parameters that describe such flows through a series of experiments. The third problem aims at understanding the drop formation of Newtonian and Carreau fluids, computationally using GFEM. The drops are formed as a result of imposed flow rates or expanding bubbles similar to those of piezo-actuated and thermal ink-jet nozzles. The focus of the fourth problem is on the evolution of thinning threads of Newtonian fluids and suspensions towards singularity, using computations based on GFEM and experimental techniques. The aim of the fifth problem is to analyze the coalescence dynamics of drops through a combination of GFEM and scaling theory. Lastly, the sixth problem concerns the thinning and rupture dynamics of thin films of Newtonian and power-law fluids using scaling theory based on asymptotic analysis; the predictions of this theory are corroborated using computations based on GFEM.

  2. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures, based on models obtained by finite element techniques, yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance for the practical applicability of advanced controller design methods to high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach that guarantees closed-loop stability is proposed. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
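
    As one concrete instance of the SVD-based class, a square-root balanced-truncation sketch is given below. It assumes a stable, minimal LTI system (A, B, C); as the abstract notes, nothing here by itself guarantees closed-loop stability once a reduced controller is reconnected to the full-order plant:

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

        def balanced_truncation(A, B, C, r):
            # Gramians: A Wc + Wc A' + B B' = 0 and A' Wo + Wo A + C' C = 0.
            Wc = solve_continuous_lyapunov(A, -B @ B.T)
            Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
            Lc = cholesky(Wc, lower=True)
            Lo = cholesky(Wo, lower=True)
            U, s, Vt = svd(Lo.T @ Lc)           # s holds the Hankel singular values
            S = np.diag(s[:r] ** -0.5)
            T, Ti = Lc @ Vt[:r].T @ S, S @ U[:, :r].T @ Lo.T
            return Ti @ A @ T, Ti @ B, C @ T, s

        # Toy usage: reduce a random stable 10-state system to 4 states.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((10, 10))
        A -= (np.abs(np.linalg.eigvals(A)).max() + 1) * np.eye(10)   # shift to stability
        B, C = rng.standard_normal((10, 2)), rng.standard_normal((2, 10))
        Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)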

  3. Dynamics of the middle atmosphere as observed by the ARISE project

    NASA Astrophysics Data System (ADS)

    Blanc, Elisabeth

    2015-04-01

    The atmosphere is a complex system subject to disturbances over a wide range of scales, including high frequency sources such as volcanoes, thunderstorms and tornadoes and, at larger scales, gravity waves from deep convection or wind over mountains, atmospheric tides and planetary waves. These waves affect the different atmospheric layers, which are subject to different temperature and wind systems that strongly control the general atmospheric circulation. The full description of gravity and planetary waves constitutes a challenge for the development of future models of the atmosphere and climate. The objective of this paper is to present a review of recent advances obtained on this topic, especially in the framework of the ARISE (Atmospheric dynamics Research InfraStructure in Europe) project.

  4. Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation

    NASA Technical Reports Server (NTRS)

    Mook, D. J.; Lew, Jiann-Shiun

    1991-01-01

    Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
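
    A minimal multiple-shooting sketch for a smooth two-point BVP is shown below; the jump conditions and optimal-control transversality conditions of the paper are omitted, and the test problem (y'' = -y with y(0) = 0, y(pi/2) = 1) is an illustrative assumption:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        def rhs(t, y):
            return [y[1], -y[0]]                # y'' = -y, exact solution y = sin(t)

        nodes = np.linspace(0.0, np.pi / 2, 4)  # three shooting segments

        def residual(z):
            # z = [y'(0), (y, y') at each interior node]; y(0) = 0 is the left BC.
            states = [np.array([0.0, z[0]])]
            states += [z[1 + 2 * k: 3 + 2 * k] for k in range(len(nodes) - 2)]
            res = []
            for k in range(len(nodes) - 1):
                end = solve_ivp(rhs, (nodes[k], nodes[k + 1]), states[k], rtol=1e-10).y[:, -1]
                if k < len(nodes) - 2:
                    res.extend(end - states[k + 1])   # continuity at interior nodes
                else:
                    res.append(end[0] - 1.0)          # right BC: y(pi/2) = 1
            return res

        z = fsolve(residual, np.zeros(5))
        print("estimated y'(0):", z[0])               # exact value is 1.0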

  5. Point-particle effective field theory I: classical renormalization and the inverse-square potential

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Hayman, Peter; Williams, M.; Zalavári, László

    2017-04-01

    Singular potentials (the inverse-square potential, for example) arise in many situations and their quantum treatment leads to well-known ambiguities in choosing boundary conditions for the wave-function at the position of the potential's singularity. These ambiguities are usually resolved by developing a self-adjoint extension of the original problem; a non-unique procedure that leaves undetermined which extension should apply in specific physical systems. We take the guesswork out of this picture by using techniques of effective field theory to derive the required boundary conditions at the origin in terms of the effective point-particle action describing the physics of the source. In this picture ambiguities in boundary conditions boil down to the allowed choices for the source action, but casting them in terms of an action provides a physical criterion for their determination. The resulting extension is self-adjoint if the source action is real (and involves no new degrees of freedom), and not otherwise (as can also happen for reasonable systems). We show how this effective-field picture provides a simple framework for understanding well-known renormalization effects that arise in these systems, including how renormalization-group techniques can resum non-perturbative interactions that often arise, particularly for non-relativistic applications. In particular we argue why the low-energy effective theory tends to produce a universal RG flow of this type and describe how this can lead to the phenomenon of reaction catalysis, in which physical quantities (like scattering cross sections) can sometimes be surprisingly large compared to the underlying scales of the source in question. We comment in passing on the possible relevance of these observations to the phenomenon of the catalysis of baryon-number violation by scattering from magnetic monopoles.

  6. Further issues in determining the readability of self-report items: comment on McHugh and Behar (2009).

    PubMed

    Schinka, John A

    2012-10-01

    Issues regarding the readability of self-report assessment instruments, methods for establishing the reading ability level of respondents, and guidelines for development of scales designed for marginal readers have been inconsistently addressed in the literature. A recent study by McHugh and Behar (2009) provided new findings relevant to these issues. McHugh and Behar calculated indices of readability separately for the instructions and the item sets of 105 self-report measures of anxiety and depression. Results revealed substantial variability in readability among the measures, with most measures being written at or above the mean reading grade level in the United States. These results were consistent with those reported previously by Schinka and Borum (1993, 1994) in analyses of the readability of commonly used self-report psychopathology and personality inventories. In their discussion, McHugh and Behar addressed implications of their findings for clinical assessment and for scale development. I expand on their comments by addressing the failure to consider vocabulary difficulty, a major shortcoming of readability indices that examine only text complexity. I demonstrate how vocabulary difficulty influences readability and discuss additional considerations and possible solutions for addressing the gap between scale readability and the reading skill level of the self-report respondent. The work of McHugh and Behar clearly demonstrates that the issues of reading ability that arise in collecting self-report data are neither simple nor straightforward. Comments are offered to focus attention on the problems identified by their work. These problems will require additional effort on the part of researchers and clinicians in order to obtain reliable, valid estimates of clinical status.

  7. Program Aids Visualization Of Data

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1995-01-01

    The Living Color Frame System (LCFS) computer program was developed to solve some problems that arise in connection with the generation of real-time graphical displays of numerical data and of the statuses of systems. The need for a program like LCFS arises because computer graphics are often applied for better understanding and interpretation of data under observation, and these graphics become more complicated when animation is required during run time. LCFS eliminates the need for custom graphical-display software for application programs. Written in Turbo C++.

  8. Screening and transport in 2D semiconductor systems at low temperatures

    PubMed Central

    Das Sarma, S.; Hwang, E. H.

    2015-01-01

    Low temperature carrier transport properties in 2D semiconductor systems can be theoretically well-understood within RPA-Boltzmann theory as being limited by scattering from screened Coulomb disorder arising from random quenched charged impurities in the environment. In this work, we derive a number of analytical formulae, supported by realistic numerical calculations, for the relevant density, mobility, and temperature range where 2D transport should manifest strong intrinsic (i.e., arising purely from electronic effects) metallic temperature dependence in different semiconductor materials arising entirely from the 2D screening properties, thus providing an explanation for why the strong temperature dependence of the 2D resistivity can only be observed in high-quality and low-disorder 2D samples and also why some high-quality 2D materials manifest much weaker metallicity than other materials. We also discuss effects of interaction and disorder on the 2D screening properties in this context as well as compare 2D and 3D screening functions to comment why such a strong intrinsic temperature dependence arising from screening cannot occur in 3D metallic carrier transport. Experimentally verifiable predictions are made about the quantitative magnitude of the maximum possible low-temperature metallicity in 2D systems and the scaling behavior of the temperature scale controlling the quantum to classical crossover. PMID:26572738

  9. Environmental Hazards of Nuclear Wastes

    ERIC Educational Resources Information Center

    Micklin, Philip P.

    1974-01-01

    Present methods for storage of radioactive wastes produced at nuclear power facilities are described. Problems arising from present waste management are discussed and potential solutions explored. (JP)

  10. Stochastic species abundance models involving special copulas

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2018-01-01

    Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In a first one, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In a second one, a quasi-copula problem arises in a flagged species abundance model. In a third model, we study completely random species abundance models in the hypercube as those, not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
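
    The common-shock construction behind the Marshall-Olkin copula in the first model can be sampled in a few lines; the rates below are illustrative assumptions, with the shared shock playing the role of the catastrophe:

        import numpy as np

        def marshall_olkin_sample(n, lam1=1.0, lam2=1.0, lam12=0.5, rng=None):
            rng = np.random.default_rng(rng)
            e1 = rng.exponential(1.0 / lam1, n)     # shock killing species 1 only
            e2 = rng.exponential(1.0 / lam2, n)     # shock killing species 2 only
            e12 = rng.exponential(1.0 / lam12, n)   # catastrophe killing both
            x1, x2 = np.minimum(e1, e12), np.minimum(e2, e12)
            # Survival-function transform puts the lifetimes on uniform margins.
            u1 = np.exp(-(lam1 + lam12) * x1)
            u2 = np.exp(-(lam2 + lam12) * x2)
            return u1, u2

        u1, u2 = marshall_olkin_sample(10000)
        # The copula's singular component appears as mass on the curve where
        # both lifetimes are set by the same catastrophic shock (x1 == x2).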

  11. [Specific problems posed by carbohydrate utilization in the rainbow trout].

    PubMed

    Bergot, F

    1979-01-01

    The incorporation of carbohydrates in trout diets raises problems at both the digestive and metabolic levels. The digestive utilization of carbohydrates closely depends on their molecular weight. In addition, in the case of complex carbohydrates (starches), different factors such as the level of incorporation, the amount consumed and the physical state of the starch influence the digestibility. The measurement of digestibility is itself confronted with methodological difficulties: the way the feces are collected can affect the digestion coefficient. Dietary carbohydrates actually serve as a source of energy. Nevertheless, above a certain level in the diet, intolerance phenomena may appear. The question that arises now is to establish the optimal share that carbohydrates can take in the metabolizable energy of a given diet.

  12. Addressing the computational cost of large EIT solutions.

    PubMed

    Boyle, Alistair; Borsic, Andrea; Adler, Andy

    2012-05-01

    Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
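
    The paper's scaling experiment can be mimicked in miniature: time a direct and an iterative sparse solve on SPD systems of growing size. The 2D Laplacian below is a stand-in for an EIT FEM system matrix, not the EIDORS or NDRM code paths:

        import time
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve, cg

        for n in (50, 100, 200):                    # grid side; n*n unknowns
            T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
            A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
            b = np.ones(n * n)

            t0 = time.perf_counter()
            x_direct = spsolve(A, b)                # direct sparse factorization
            t1 = time.perf_counter()
            x_iter, info = cg(A, b)                 # iterative Krylov (conjugate gradient)
            t2 = time.perf_counter()
            print(f"{n*n:6d} unknowns: direct {t1-t0:.3f}s, cg {t2-t1:.3f}s")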

  13. Electromotive force due to magnetohydrodynamic fluctuations in sheared rotating turbulence

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2015-11-02

    Here, this article presents a calculation of the mean electromotive force arising from general small-scale magnetohydrodynamical turbulence, within the framework of the second-order correlation approximation. With the goal of improving understanding of the accretion disk dynamo, effects arising through small-scale magnetic fluctuations, velocity gradients, density and turbulence stratification, and rotation, are included. The primary result, which supplements numerical findings, is that an off-diagonal turbulent resistivity due to magnetic fluctuations can produce large-scale dynamo action: the magnetic analog of the "shear-current" effect. In addition, consideration of alpha effects in the stratified regions of disks gives the puzzling result that there is no strong prediction for a sign of alpha, since the effects due to kinetic and magnetic fluctuations, as well as those due to shear and rotation, are each of opposing signs and tend to cancel each other.

  14. New Old Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dvali, Gia

    2003-10-03

    We propose a new class of inflationary solutions to the standard cosmological problems (horizon, flatness, monopole,...), based on a modification of old inflation. These models do not require a potential which satisfies the normal inflationary slow-roll conditions. Our universe arises from a single tunneling event as the inflaton leaves the false vacuum. Subsequent dynamics (arising from either the oscillations of the inflaton field or thermal effects) keep a second field trapped in a false minimum, resulting in an evanescent period of inflation (with roughly 50 e-foldings) inside the bubble. This easily allows the bubble to grow sufficiently large to contain our present horizon volume. Reheating is accomplished when the inflaton driving the last stage of inflation rolls down to the true vacuum, and adiabatic density perturbations arise from moduli-dependent Yukawa couplings of the inflaton to matter fields. Our scenario has several robust predictions, including virtual absence of gravity waves, a possible absence of tilt in scalar perturbations, and a higher degree of non-Gaussianity than other models. It also naturally incorporates a solution to the cosmological moduli problem.

  15. Quality Problem-Based Learning Experiences for Students: Design Deliberations among Teachers from Diverse Disciplines.

    ERIC Educational Resources Information Center

    Butler, Susan McAleenan

    This qualitative study, investigating the claims, concerns, and issues arising within the design stages of problem-based learning (PBL) curriculum units, was conducted during two masters-level classes during the summer of 1999. A hermeneutic dialectic discourse among veteran teachers (who were novice PBL curriculum designers) was facilitated by…

  16. Peculiarities of Students of Pedagogical Specialties Training in Preventive Work with Juveniles Delinquents

    ERIC Educational Resources Information Center

    Moskalenko, Maxim R.; Dorozhkin, Evgenij M.; Ozhiganova, Maria V.; Murzinova, Yana A.; Syssa, Daria O.

    2016-01-01

    The relevance of the problem under investigation is due to the high significance of preventive work with juvenile delinquents to society. The article aims to study the problems arising while developing students' competencies in professional activities for the prevention of the infringing behavior of juvenile delinquents, as well as the…

  17. A Solution to the Mysteries of Morality

    ERIC Educational Resources Information Center

    DeScioli, Peter; Kurzban, Robert

    2013-01-01

    We propose that moral condemnation functions to guide bystanders to choose the same side as other bystanders in disputes. Humans interact in dense social networks, and this poses a problem for bystanders when conflicts arise: which side, if any, to support. Choosing sides is a difficult strategic problem because the outcome of a conflict…

  18. Transfer of Learning Transformed

    ERIC Educational Resources Information Center

    Larsen-Freeman, Diane

    2013-01-01

    Instruction is motivated by the assumption that students can transfer their learning, or apply what they have learned in school to another setting. A common problem arises when the expected transfer does not take place, what has been referred to as the inert knowledge problem. More than an academic inconvenience, the failure to transfer is a major…

  19. The Plasticity of Adolescent Cognitions: Data from a Novel Cognitive Bias Modification Training Task

    ERIC Educational Resources Information Center

    Lau, Jennifer Y. F.; Molyneaux, Emma; Telman, Machteld D.; Belli, Stefano

    2011-01-01

    Many adult anxiety problems emerge in adolescence. Investigating how adolescent anxiety arises and abates is critical for understanding and preventing adult psychiatric problems. Drawing threat interpretations from ambiguous material is linked to adolescent anxiety but little research has clarified the causal nature of this relationship. Work in…

  20. Coorientational Accuracy during Regional Development of Energy Resources: Problems in Agency-Public Communication.

    ERIC Educational Resources Information Center

    Bowes, John E.; Stamm, Keith R.

    This paper presents a progress report from a research program aimed at elucidating communication problems which arise among citizens and government agencies during the development of regional environmental policy. The eventual objective of the program is to develop a paradigm for evaluative research in communication that will provide for the…

  1. Anger/Frustration, Task Persistence, and Conduct Problems in Childhood: A Behavioral Genetic Analysis

    ERIC Educational Resources Information Center

    Deater-Deckard, Kirby; Petrill, Stephen A.; Thompson, Lee A.

    2007-01-01

    Background: Individual differences in conduct problems arise in part from proneness to anger/frustration and poor self-regulation of behavior. However, the genetic and environmental etiology of these connections is not known. Method: Using a twin design, we examined genetic and environmental covariation underlying the well-documented correlations…

  2. A chance constraint estimation approach to optimizing resource management under uncertainty

    Treesearch

    Michael Bevers

    2007-01-01

    Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...

  3. Zones of Intervention: Teaching and Learning at All Places and at All Times

    ERIC Educational Resources Information Center

    Taylor, Jonathan E.; McKissac, Jonathan C.

    2014-01-01

    This article identifies four distinct zones in which workplace problems can be addressed through education and training. These zones enable educators to address workplace learning more widely and broadly. Very often, problems arising in the workplace are dealt with through training in the classroom, but other options exist. The theoretical…

  4. Variational formulation for Black-Scholes equations in stochastic volatility models

    NASA Astrophysics Data System (ADS)

    Gyulov, Tihomir B.; Valkov, Radoslav L.

    2012-11-01

    In this note we prove existence and uniqueness of weak solutions to a boundary value problem arising from stochastic volatility models in financial mathematics. Our setting is variational, in weighted Sobolev spaces. Nevertheless, as will become apparent, our variational formulation agrees well with the stochastic part of the problem.

  5. An inverse problem for a semilinear parabolic equation arising from cardiac electrophysiology

    NASA Astrophysics Data System (ADS)

    Beretta, Elena; Cavaterra, Cecilia; Cerutti, M. Cristina; Manzoni, Andrea; Ratti, Luca

    2017-10-01

    In this paper we develop theoretical analysis and numerical reconstruction techniques for the solution of an inverse boundary value problem dealing with the nonlinear, time-dependent monodomain equation, which models the evolution of the electric potential in the myocardial tissue. The goal is the detection of an inhomogeneity \

  6. Community-University Research Partnerships: Devising a Model for Ethical Engagement

    ERIC Educational Resources Information Center

    Silka, Linda; Renault-Caragianes, Paulette

    2006-01-01

    Profound changes taking place in communities and in universities are bringing researchers and community members new opportunities for joint research endeavors and new problems that must be resolved. In such partnerships, questions about shared decision making--about the ethics of collaboration--arise at every stage: Who decides which problems are…

  7. The Changing Demographics of the Hispanic Family.

    ERIC Educational Resources Information Center

    McKay, Emily Gantz

    Hispanics will become the largest United States minority population sometime early in the next century. A problem that arises with attempts to provide Hispanic people with better opportunities is the lack of adequate data on Hispanic socioeconomic status. Those data which do exist focus on problems of the individual, yet one of the greatest…

  8. Projected Issues in the Practice of Educational Administration: The English Context.

    ERIC Educational Resources Information Center

    Browning, Peter

    Current and incipient problems in educational administration in England can be grouped into two areas: those resulting from reorganization and those arising from social and political factors. Many problems faced by educational administrators result from school reorganization that is a result of an increase in the number of comprehensive schools.…

  9. Focal-Plane Alignment Sensing

    DTIC Science & Technology

    1993-02-01

    …amplification induced by the inverse filter. The problem of noise amplification that arises in conventional image deblurring problems has often been… noise sensitivity, and strategies for selecting a regularization parameter have been developed. The probability of convergence to within a prescribed… [Fragmentary abstract; recovered section headings: "…Strategies in Image Deblurring," "CLS Parameter Selection," "Wiener Parameter Selection."]

  10. Damage Control: Closing Problematic Sequences in Hearing-Impaired Interaction

    ERIC Educational Resources Information Center

    Skelt, Louise

    2007-01-01

    When a problem of understanding arises for a hearing-impaired recipient in the course of a conversation, and is detected, repairing that problem is only one of several possible courses of action for participants. Another possibility is the collaborative closing of the part of the conversation which has proved problematic for understanding, to…

  11. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999), is useful for obtaining approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome this shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing semi-numerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution, and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).
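
    For reference, the classical homotopy construction underlying HPM (and hence DDHPM, which applies it on decomposed subdomains) can be written as follows, for an equation L(u) + N(u) = f(r) with linear part L, nonlinear part N, and initial guess u_0:

        H(v,p) \;=\; (1-p)\bigl[L(v)-L(u_0)\bigr] \;+\; p\bigl[L(v)+N(v)-f(r)\bigr] \;=\; 0,
        \qquad p\in[0,1],

        v \;=\; v_0 + p\,v_1 + p^{2}v_2 + \cdots,
        \qquad u \;=\; \lim_{p\to 1} v \;=\; \sum_{i\ge 0} v_i .

    Matching powers of p yields the sequence of linear subproblems for the components v_i.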

  12. The geometry of discombinations and its applications to semi-inverse problems in anelasticity

    PubMed Central

    Yavari, Arash; Goriely, Alain

    2014-01-01

    The geometrical formulation of continuum mechanics provides us with a powerful approach to understand and solve problems in anelasticity where an elastic deformation is combined with a non-elastic component arising from defects, thermal stresses, growth effects or other effects leading to residual stresses. The central idea is to assume that the material manifold, prescribing the reference configuration for a body, has an intrinsic, non-Euclidean, geometrical structure. Residual stresses then naturally arise when this configuration is mapped into Euclidean space. Here, we consider the problem of discombinations (a new term that we introduce in this paper), that is, a combined distribution of fields of dislocations, disclinations and point defects. Given a discombination, we compute the geometrical characteristics of the material manifold (curvature, torsion, non-metricity), its Cartan's moving frames and structural equations. This identification provides a powerful algorithm to solve semi-inverse problems with non-elastic components. As an example, we calculate the residual stress field of a cylindrically symmetric distribution of discombinations in an infinite circular cylindrical bar made of an incompressible hyperelastic isotropic elastic solid. PMID:25197257

  13. Price schedules coordination for electricity pool markets

    NASA Astrophysics Data System (ADS)

    Legbedji, Alexis Motto

    2002-04-01

    We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, which is formally interpreted as a resource-allocation problem. Many decomposition techniques have been proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed. Consequently, interest in decomposition techniques has waned. Nonetheless, there is an important class of applications for which decomposition techniques will still be relevant, among others, distributed systems---the Internet, perhaps, being the most conspicuous example---and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, constructing a large-scale mathematical program and solving it centrally, using currently available computing power, can optimize such systems of agents. In practice, however, because agents are self-interested and not willing to reveal some sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of the agents' objective functions with respect to their constraints. An iterative price decomposition or Lagrangian dual method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when a duality gap is unavoidable. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedules coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program. In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.
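
    The separability condition in the final result can be made concrete. If agents couple only through shared resource constraints, the Lagrangian splits per agent (a standard dual-decomposition identity, stated here as background rather than quoted from the thesis):

        L(x,\lambda) \;=\; \sum_i f_i(x_i) + \lambda^{\top}\!\Bigl(\sum_i g_i(x_i) - b\Bigr)
        \;=\; \sum_i \bigl[f_i(x_i) + \lambda^{\top} g_i(x_i)\bigr] \;-\; \lambda^{\top} b .

    For a fixed price vector λ each agent minimizes its own bracketed term using only private data, and the coordinator adjusts prices, e.g. by a projected subgradient step λ ← [λ + α(Σ_i g_i(x_i*) − b)]₊.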

  14. On multiple crack identification by ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Brigante, M.; Sumbatyan, M. A.

    2018-04-01

    The present work develops an approach which reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is related to genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneous identification of several linear cracks forming an array in an elastic medium by using circular ultrasonic scanning.
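
    A generic population-based random global search of the kind described can be sketched as follows, under the assumptions that the unknowns (e.g. crack positions and lengths) are box-constrained and the discrepancy functional is supplied by the user; this is not the paper's specific algorithm:

        import numpy as np

        def random_global_search(discrepancy, bounds, n_pop=50, n_iter=200, rng=None):
            rng = np.random.default_rng(rng)
            lo, hi = np.asarray(bounds, dtype=float).T
            pop = rng.uniform(lo, hi, (n_pop, len(lo)))
            best, best_val = None, np.inf
            for it in range(n_iter):
                vals = np.array([discrepancy(p) for p in pop])
                order = np.argsort(vals)
                if vals[order[0]] < best_val:
                    best, best_val = pop[order[0]].copy(), vals[order[0]]
                elite = pop[order[: n_pop // 5]]           # keep the best fifth
                # Re-sample around elites with a shrinking spread (the "genetic" flavor).
                sigma = (hi - lo) * 0.5 * (1 - it / n_iter)
                parents = elite[rng.integers(0, len(elite), n_pop)]
                pop = np.clip(parents + rng.normal(0, 1, pop.shape) * sigma, lo, hi)
            return best, best_val

        # Toy usage: recover x = (0.3, -0.7) from a quadratic "discrepancy".
        f = lambda p: np.sum((p - np.array([0.3, -0.7])) ** 2)
        x, v = random_global_search(f, [(-1, 1), (-1, 1)])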

  15. A bicriteria heuristic for an elective surgery scheduling problem.

    PubMed

    Marques, Inês; Captivo, M Eugénia; Vaz Pato, Margarida

    2015-09-01

    Resource rationalization and reduction of waiting lists for surgery are two main guidelines for hospital units outlined in the Portuguese National Health Plan. This work is dedicated to an elective surgery scheduling problem arising in a Lisbon public hospital. In order to increase the surgical suite's efficiency and to reduce the waiting lists for surgery, two objectives are considered: maximize surgical suite occupation and maximize the number of surgeries scheduled. This elective surgery scheduling problem consists of assigning an intervention date, an operating room and a starting time for elective surgeries selected from the hospital waiting list. Accordingly, a bicriteria surgery scheduling problem arising in the hospital under study is presented. To search for efficient solutions of the bicriteria optimization problem, the minimization of a weighted Chebyshev distance to a reference point is used. A constructive and improvement heuristic procedure specially designed to address the objectives of the problem is developed and results of computational experiments obtained with empirical data from the hospital are presented. This study shows that by using the bicriteria approach presented here it is possible to build surgical plans with very good performance levels. This method can be used within an interactive approach with the decision maker. It can also be easily adapted to other hospitals with similar scheduling conditions.
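
    The scalarization used here takes the standard weighted-Chebyshev form; with f_1 the surgical suite occupation, f_2 the number of surgeries scheduled, reference point z^ref and weights w ≥ 0 (notation assumed, not the paper's):

        \min_{x \in X}\; \max_{i\in\{1,2\}}\; w_i\,\bigl|\, z_i^{\mathrm{ref}} - f_i(x) \,\bigr|

    Sweeping the weights w traces out different efficient (Pareto-optimal) schedules, which is what makes the approach suitable for interactive use with the decision maker.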

  16. Generalized continuum modeling of scale-dependent crystalline plasticity

    NASA Astrophysics Data System (ADS)

    Mayeur, Jason R.

    The use of metallic material systems (e.g. pure metals, alloys, metal matrix composites) in a wide range of engineering applications from medical devices to electronic components to automobiles continues to motivate the development of improved constitutive models to meet increased performance demands while minimizing cost. Emerging technologies often incorporate materials in which the dominant microstructural features have characteristic dimensions reaching into the submicron and nanometer regime. Metals comprised of such fine microstructures often exhibit unique and size-dependent mechanical response, and classical approaches to constitutive model development at engineering (continuum) scales, being local in nature, are inadequate for describing such behavior. Therefore, traditional modeling frameworks must be augmented and/or reformulated to account for such phenomena. Crystal plasticity constitutive models have proven quite capable of capturing first-order microstructural effects such as grain orientation (elastic/plastic anisotropy), grain morphology, phase distribution, etc. on the deformation behavior of both single and polycrystals, yet suffer from the same limitations as other local continuum theories with regard to capturing scale-dependent mechanical response. This research is focused on the development, numerical implementation, and application of a generalized (nonlocal) theory of single crystal plasticity capable of describing the scale-dependent mechanical response of both single and polycrystalline metals that arises as a result of heterogeneous deformation. This research developed a dislocation-based theory of micropolar single crystal plasticity. The majority of nonlocal crystal plasticity theories are predicated on the connection between gradients of slip and geometrically necessary dislocations. Due to the diversity of existing nonlocal crystal plasticity theories, a review, summary, and comparison of representative model classes is presented in Chapter 2 from a unified dislocation-based perspective. The discussion of the continuum crystal plasticity theories is prefaced by a brief review of discrete dislocation plasticity, which facilitates the comparison of certain model aspects and also serves as a reference for latter segments of the research which make connection to this constitutive description. Chapter 2 has utility not only as a literature review, but also as a synthesis and analysis of competing and alternative nonlocal crystal plasticity modeling strategies from a common viewpoint. The micropolar theory of single crystal plasticity is presented in Chapter 3. Two different types of flow criteria are considered - the so-called single and multicriterion theories, and several variations of the dislocation-based strength models appropriate for each theory are presented and discussed. The numerical implementation of the two-dimensional version of the constitutive theory is given in Chapter 4. A user element subroutine for the implicit commercial finite element code Abaqus/Standard is developed and validated through the solution of initial-boundary value problems with closed-form solutions. Convergent behavior of the subroutine is also demonstrated for an initial-boundary value problem exhibiting strain localization. 
In Chapter 5, the models are employed to solve several standard initial-boundary value problems for heterogeneously deforming single crystals including simple shearing of a semi-infinite constrained thin film, pure bending of thin films, and simple shearing of a metal matrix composite with elastic inclusions. The simulation results are compared to those obtained from the solution of equivalent boundary value problems using discrete dislocation dynamics and alternative generalized crystal plasticity theories. Comparison and calibration with respect to the former provides guidance in the specification of non-traditional material parameters that arise in the model formulation and demonstrates its effectiveness at capturing the heterogeneous deformation fields and size-dependent mechanical behavior predicted by a finer scale constitutive description. Finally, in Chapter 6, the models are applied to simulate the deformation behavior of small polycrystalline ensembles. Several grain boundary constitutive descriptions are explored and the response characteristics are analyzed with respect to experimental observations as well as results obtained from discrete dislocation dynamics and alternative nonlocal crystal plasticity theories. Particular attention is focused on how the various grain boundary descriptions serve to either locally concentrate or diffuse deformation heterogeneity as a function of grain size.

  17. Contraceptive services for adolescents in Latin America: facts, problems and perspectives.

    PubMed

    Pons, J E

    1999-12-01

    This review presents facts about the sexual and contraceptive behavior of Latin American adolescents, analyzes barriers to contraception, and summarizes present perspectives. Between 13 and 30% of Latin American adolescent women live in union before their 20th birthday and between 46 and 63% have had sexual relations. The prevalence of contraceptive use among adolescents at risk of pregnancy remains very low. The pill is the best known contraceptive method. When sexual activity becomes a permanent practice, contraceptive use increases but remains low. Barriers to contraception can be identified as: (1) arising from adolescents themselves (moral objections, alleged medical reasons, lack of confidence in adults and in the health system, promiscuity); (2) arising from the sexual partner (partner's opposition, masculine irresponsibility); (3) arising from adults (moral objections, fear of sex education, adult control and power of decision-making); (4) arising from the health system (inappropriateness of services, regulatory barriers, gender inequality); (5) arising from health professionals (medical barriers to contraceptive use, discomfort with sexual matters); (6) arising from the educational system (educational failure, teachers' reluctance); and (7) arising from other social agents (religious opposition, ambivalent media messages, funding restraints). There have been improvements in recent years, including the achievements of groups working for and with adolescents, and the support of distinguished personalities.

  18. Customer-centered problem solving.

    PubMed

    Samelson, Q B

    1999-11-01

    If there is no single best way to attract new customers and retain current customers, there is surely an easy way to lose them: fail to solve the problems that arise in nearly every buyer-supplier relationship, or solve them in an unsatisfactory manner. Yet, all too frequently, companies do just that. Either we deny that a problem exists, we exert all our efforts to pin the blame elsewhere, or we "Band-Aid" the problem instead of fixing it, almost guaranteeing that we will face it again and again.

  19. Robust Consumption-Investment Problem on Infinite Horizon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, Dariusz, E-mail: dariusz.zawisza@im.uj.edu.pl

    In our paper we consider an infinite horizon consumption-investment problem under model misspecification in a general stochastic factor model. We formulate the problem as a stochastic game and characterize the saddle point and the value function of that game using an ODE of semilinear type, for which we provide a proof of an existence and uniqueness theorem for its solution. Such an equation is of interest in its own right, since it generalizes many other equations arising in various infinite horizon optimization problems.

  20. Bethe-Salpeter Eigenvalue Solver Package (BSEPACK) v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SHAO, MEIYEU; YANG, CHAO

    2017-04-25

    The BSEPACK contains a set of subroutines for solving the Bethe-Salpeter Eigenvalue (BSE) problem. This type of problem arises in the study of optical excitation of nanoscale materials. The BSE problem is a structured non-Hermitian eigenvalue problem. The BSEPACK software can be used to compute all or a subset of the eigenpairs of a BSE Hamiltonian. It can also be used to compute the optical absorption spectrum without computing BSE eigenvalues and eigenvectors explicitly. The package makes use of ScaLAPACK, LAPACK and BLAS.
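    The abstract does not display the structure in question; for orientation, BSE Hamiltonians in the literature commonly take the following non-Hermitian block form (a sketch of the standard convention, not necessarily BSEPACK's exact storage layout):

```latex
H_{\mathrm{BSE}} =
\begin{pmatrix}
  A & B \\
  -\overline{B} & -\overline{A}
\end{pmatrix},
\qquad A^{\mathsf{H}} = A \ \ (\text{Hermitian}), \quad B^{\mathsf{T}} = B \ \ (\text{symmetric}).
```

    It is this block symmetry, rather than Hermiticity of the full matrix, that structured solvers exploit.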

  1. The Soccer-Ball Problem

    NASA Astrophysics Data System (ADS)

    Hossenfelder, Sabine

    2014-07-01

    The idea that Lorentz symmetry in momentum space could be modified but still remain observer-independent has received considerable attention in recent years. This modified Lorentz symmetry, which has been argued to arise in Loop Quantum Gravity, is being used as a phenomenological model to test possibly observable effects of quantum gravity. The most pressing problem in these models is the treatment of multi-particle states, known as the 'soccer-ball problem'. This article briefly reviews the problem and the status of existing solution attempts.
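    As a schematic illustration (an assumed toy form, not taken from the article): if single-particle momenta compose nonlinearly at the Planck scale, the correction to a composite's total momentum grows with the number N of constituents, which is why macroscopic bodies such as a soccer ball become problematic:

```latex
p \oplus q \;=\; p + q + \frac{p\,q}{M_{\mathrm{Pl}}}
\quad\Longrightarrow\quad
P_N \;=\; \bigoplus_{i=1}^{N} p_i \;\approx\; \sum_{i} p_i \;+\; \mathcal{O}\!\left(\frac{N^2\,\bar{p}^{\,2}}{M_{\mathrm{Pl}}}\right).
```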

  2. A framework for managing runoff and pollution in the rural landscape using a Catchment Systems Engineering approach.

    PubMed

    Wilkinson, M E; Quinn, P F; Barber, N J; Jonczyk, J

    2014-01-15

    Intense farming plays a key role in increasing local-scale runoff and erosion rates, resulting in water quality issues and flooding problems. There is potential for agricultural management to become a major part of improved strategies for controlling runoff. Here, a Catchment Systems Engineering (CSE) approach has been explored to address this problem. CSE is an interventionist approach to altering the catchment-scale runoff regime through the manipulation of hydrological flow pathways throughout the catchment. By targeting hydrological flow pathways at source, such as overland flow, field drain and ditch function, a significant component of the runoff generation can be managed, in turn reducing soil nutrient losses. The Belford catchment (5.7 km²) is a catchment-scale study for which a CSE approach has been used to tackle a number of environmental issues. A variety of Runoff Attenuation Features (RAFs) have been implemented throughout the catchment to address diffuse pollution and flooding issues. The RAFs include bunds disconnecting flow pathways, diversion structures in ditches to spill and store high flows, large woody debris structures within the channel, and riparian zone management. Here a framework for applying a CSE approach to the catchment is presented as a step-by-step guide to implementing mitigation measures in the Belford Burn catchment. The framework is based around engagement with catchment stakeholders and uses evidence arising from field science. Using the framework, the flooding issue has been addressed at the catchment scale by altering the runoff regime. Initial findings suggest that RAFs have functioned as designed to reduce and attenuate runoff locally. However, evidence suggested that some RAFs needed modification and that new RAFs should be created to address diffuse pollution issues during storm events. Initial findings from these modified RAFs show improvements in sediment trapping capacities and reductions in phosphorus, nitrate and suspended sediment losses during storm events. © 2013.

  3. Natural inflation with pseudo Nambu-Goldstone bosons

    NASA Technical Reports Server (NTRS)

    Freese, Katherine; Frieman, Joshua A.; Olinto, Angela V.

    1990-01-01

    It is shown that a pseudo-Nambu-Goldstone boson with a suitable potential can naturally give rise to an epoch of inflation in the early universe. The required mass scales arise in particle physics models in which a gauge group becomes strongly interacting at a certain scale, and conditions for successful inflation are expressed in terms of these scales. The density fluctuation spectrum is non-scale-invariant, with extra power on large length scales.
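    The potential in question is the familiar natural-inflation form (quoted here from the standard formulation for reference; the abstract itself does not display it):

```latex
V(\phi) = \Lambda^4 \left[ 1 + \cos\!\left(\frac{\phi}{f}\right) \right],
```

    with f the scale of spontaneous symmetry breaking and Λ the scale at which the gauge group becomes strongly interacting; successful inflation requires f of order the Planck mass and Λ of order the GUT scale.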

  4. Cardiac emergencies and problems of the critical care patient.

    PubMed

    Marr, Celia M

    2004-04-01

    Cardiac disease and dysfunction can occur as a primary disorder (i.e., with pathology situated in one or more of the cardiac structures) or can be classified as a secondary problem when it occurs in patients with another primary problem that has affected the heart either directly or indirectly. Primary cardiac problems are encountered in horses presented to emergency clinics; however, this occurs much less frequently in equine critical patients than cardiac problems arising secondary to other conditions. Nevertheless, if primary or secondary cardiac problems are not identified and addressed, they certainly contribute to the morbidity and mortality of critical care patients.

  5. Low frequency acoustic and electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Maccamy, R. C.

    1986-01-01

    This paper deals with two classes of problems arising from acoustic and electromagnetic scattering in the low frequency situation. The first class of problems is solving the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two-dimensional body, while the second is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low frequency analysis shows that there are two intermediate problems which solve the above problems to an accuracy of O(k² log k), where k is the frequency. These solutions differ greatly from the zero frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.
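    For reference, the exterior Dirichlet problem of the first class can be written as follows (a standard formulation, assumed here rather than quoted from the paper), with the Sommerfeld radiation condition selecting the outgoing solution:

```latex
\Delta u + k^2 u = 0 \quad \text{in } \mathbb{R}^2 \setminus \overline{\Omega},
\qquad u = g \ \text{ on } \partial\Omega,
\qquad \lim_{r \to \infty} \sqrt{r}\left( \frac{\partial u}{\partial r} - i k u \right) = 0 .
```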

  6. Investigating the Wicked Problems of (Un)sustainability Through Three Case Studies Around the Water-Energy-Food Nexus

    NASA Astrophysics Data System (ADS)

    Metzger, E. P.; Curren, R. R.

    2016-12-01

    Effective engagement with the problems of sustainability begins with an understanding of the nature of the challenges. The entanglement of interacting human and Earth systems produces solution-resistant dilemmas that are often portrayed as wicked problems. As introduced by urban planners Rittel and Webber (1973), wicked problems are "dynamically complex, ill-structured, public problems" arising from complexity in both biophysical and socio-economic systems. The wicked problem construct is still in wide use across diverse contexts, disciplines, and sectors. Discourse about wicked problems as related to sustainability is often connected to discussion of complexity or complex systems. In preparation for life and work in an uncertain, dynamic and hyperconnected world, students need opportunities to investigate real problems that cross social, political and disciplinary divides. They need to grapple with diverse perspectives and values, and collaborate with others to devise potential solutions. Such problems are typically multi-causal and so intertangled with other problems that they cannot be resolved using the expertise and analytical tools of any single discipline, individual, or organization. We have developed a trio of illustrative case studies that focus on energy, water and food, because these resources are foundational, interacting, and causally connected in a variety of ways with climate destabilization. The three interrelated case studies progress in scale from the local and regional to the national and international and include: 1) the 2010 Gulf of Mexico oil spill, with examination of the multiple immediate and root causes of the disaster, its ecological, social, and economic impacts, and the increasing risk and declining energy return on investment associated with the relentless quest for fossil fuels; 2) development of Australia's innovative National Water Management System; and 3) changing patterns of food production and the intertwined challenge of managing transnational water resources in the rapidly growing Mekong Region of Southeast Asia.

  7. Exometabolome analysis reveals hypoxia at the up-scaling of a Saccharomyces cerevisiae high-cell density fed-batch biopharmaceutical process

    PubMed Central

    2014-01-01

    Background: Scale-up to industrial production level of a fermentation process occurs after optimization at small scale, a critical transition for successful technology transfer and commercialization of a product of interest. At the large scale a number of important bioprocess engineering problems arise that should be taken into account to match the values obtained at the small scale and achieve the highest productivity and quality possible. However, the changes of the host strain's physiological and metabolic behavior in response to the scale transition are still not clear. Results: Heterogeneity in substrate and oxygen distribution is an inherent factor at industrial scale (10,000 L) which affects the success of process up-scaling. To counteract these detrimental effects, changes in dissolved oxygen and pressure set points and addition of diluents were applied at the 10,000 L scale to enable a successful process scale-up. A comprehensive semi-quantitative and time-dependent analysis of the exometabolome was performed to understand the impact of the scale-up on the metabolic/physiological behavior of the host microorganism. Intermediates from central carbon catabolism and mevalonate/ergosterol synthesis pathways were found to accumulate in both the 10 L and 10,000 L scale cultures in a time-dependent manner. Moreover, excreted metabolite analysis revealed that hypoxic conditions prevailed at the 10,000 L scale. The specific product yield increased at the 10,000 L scale, in spite of metabolic stress and catabolic-anabolic uncoupling unveiled by the decrease in biomass yield on consumed oxygen. Conclusions: An optimized S. cerevisiae fermentation process was successfully scaled up to an industrial scale bioreactor. The oxygen uptake rate (OUR) and overall growth profiles were matched between scales. The major remaining differences between scales were wet cell weight and culture apparent viscosity. The metabolic and physiological behavior of the host microorganism at the 10,000 L scale was investigated with exometabolomics, indicating that reduced oxygen availability affected oxidative phosphorylation, cascading into down- and up-stream pathways and producing overflow metabolism. Our study revealed striking metabolic and physiological changes in response to hypoxia exerted by industrial bioprocess up-scaling. PMID:24593159

  8. Behavioural problems in school age children with cerebral palsy.

    PubMed

    Brossard-Racine, Marie; Hall, Nick; Majnemer, Annette; Shevell, Michael I; Law, Mary; Poulin, Chantal; Rosenbaum, Peter

    2012-01-01

    Although behavioural problems are frequent in children with Cerebral Palsy (CP), the exact nature of these difficulties and their relationship with intrinsic or extrinsic factors are just beginning to be explored. To describe and characterize behavioural problems in children with CP and to determine the nature of any relationships with child and family characteristics. In this cross-sectional study, children with CP between 6 and 12 years of age were recruited. Children were assessed using the Leiter Intelligence Test, the Gross Motor Function Measure, the Strengths and Difficulties Questionnaire (SDQ), the Vineland Adaptive Behavior Scales and questionnaires on demographic factors. Parents' level of stress was measured with the Parenting Stress Index. Seventy-six parents completed the SDQ. Using the Total Difficulties Scores, 39.4% of the sample scored in the borderline to clinically abnormal range. Peer problems were the most common (55.3%). High parental stress was consistently associated with behavioural difficulties across all domains of the SDQ. Not surprisingly, better socialization skills and a lower parental stress were correlated with more positive behaviours. Behavioural difficulties are common in children with CP and appear not to be associated with socio-demographic variables and physical and cognitive characteristics. These difficulties are an important correlate of parental distress. This study emphasizes the need to recognize and address behavioural difficulties that may arise so as to optimize the health and well-being of children with CP and their families. Copyright © 2011 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  9. Stable time filtering of strongly unstable spatially extended systems

    PubMed Central

    Grote, Marcus J.; Majda, Andrew J.

    2006-01-01

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant–Friedrichs–Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection–diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system. PMID:16682626
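    To make the stability question concrete, here is a minimal sketch of the classical step-size limits for an explicit convection-diffusion scheme; the coefficients and grid spacing are hypothetical, and the paper's filtering machinery is not reproduced.

```python
# Explicit scheme for u_t + c u_x = nu u_xx (illustrative values only).
c, nu = 1.0, 0.01        # advection speed and diffusivity (assumed)
dx = 0.01                # grid spacing
dt = 0.05                # deliberately large time step

# Classical stability limits for an explicit discretization:
dt_adv = dx / abs(c)          # CFL (advection) limit
dt_diff = dx**2 / (2 * nu)    # diffusion limit
dt_stable = min(dt_adv, dt_diff)

print(f"chosen dt = {dt}, stability limit = {dt_stable:.4g}")
print("CFL violated" if dt > dt_stable else "CFL satisfied")
```

    The paper's point is that even with dt well above dt_stable, assimilating observations can stabilize the filtered solution under appropriate circumstances.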

  10. Stable time filtering of strongly unstable spatially extended systems.

    PubMed

    Grote, Marcus J; Majda, Andrew J

    2006-05-16

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant-Friedrichs-Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection-diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system.

  11. Random multispace quantization as an analytic mechanism for BioHashing of biometric and random identity inputs.

    PubMed

    Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L

    2006-12-01

    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
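    A minimal sketch of the idea follows, assuming a generic random-projection-and-threshold construction; the function names, the orthonormalization step, and the threshold tau are illustrative, not the authors' exact RMQ formulation.

```python
import numpy as np

def biohash(features, token_seed, n_bits=64, tau=0.0):
    """Project a biometric feature vector onto token-derived pseudo-random
    orthonormal directions, then threshold to obtain a binary code."""
    rng = np.random.default_rng(token_seed)        # token/password-derived randomness
    R = rng.standard_normal((n_bits, features.size))
    Q, _ = np.linalg.qr(R.T)                       # orthonormalize the random basis
    projections = Q.T @ features                   # random multispace projection
    return (projections > tau).astype(np.uint8)    # quantize to a binary BioHash

x = np.random.default_rng(1).standard_normal(128)  # stand-in biometric feature vector
code = biohash(x, token_seed=42)
revoked = biohash(x, token_seed=43)                # a new token yields a fresh, uncorrelated code
```

    Reissuing the token (a new seed) produces an uncorrelated BioHash from the same biometric, which is what makes the scheme cancellable.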

  12. High performance organic transistor active-matrix driver developed on paper substrate

    NASA Astrophysics Data System (ADS)

    Peng, Boyu; Ren, Xiaochen; Wang, Zongrong; Wang, Xinyu; Roberts, Robert C.; Chan, Paddy K. L.

    2014-09-01

    The fabrication of electronic circuits on unconventional substrates largely broadens their application areas. For example, green electronics, achieved through utilization of biodegradable or recyclable substrates, can mitigate the solid waste problems that arise at the end of their lifespan. Here, we combine screen-printing, high precision laser drilling and thermal evaporation, to fabricate organic field effect transistor (OFET) active-matrix (AM) arrays onto standard printer paper. The devices show a mobility and on/off ratio as high as 0.56 cm²V⁻¹s⁻¹ and 10⁹, respectively. Small electrode overlap gives rise to a cut-off frequency of 39 kHz, which supports that our AM array is suitable for novel practical applications. We demonstrate an 8 × 8 AM light emitting diode (LED) driver with programmable scanning and information display functions. The AM array structure has excellent potential for scaling up.

  13. Innovative nanocompounds for cutaneous administration of classical antifungal drugs: a systematic review.

    PubMed

    Santos, Rafael Silva; Loureiro, Kahynna; Rezende, Polyana; Nalone, Luciana; Barbosa, Raquel de Melo; Santini, Antonello; Santos, Ana Cláudia; da Silva, Classius F; Souto, Eliana Barbosa; de Souza, Damião Pergentino; Amaral, Ricardo Guimarães; Severino, Patrícia

    2018-06-01

    Nanomedicine manipulates materials at the atomic, molecular, and supramolecular scales, with at least one dimension within the nanometer range, for biomedical applications. The resulting nanoparticles have consistently shown beneficial effects for antifungal drug delivery, overcoming the problems of low bioavailability and high toxicity of these drugs. Due to their unique features, namely the small mean particle size, nanoparticles contribute to enhanced drug absorption and uptake by the target cells, potentiating the therapeutic drug effect. The topical route is desirable due to the adverse effects arising from oral administration. This review provides a comprehensive analysis of the use of nanocompounds for the current treatment of topical fungal infections. Special emphasis is given to the employment of lipid nanoparticles which, due to their recognized efficacy, versatility and biocompatibility, attract major attention as novel topical nanocompounds for the administration of antifungal drugs.

  14. A compendium of chameleon constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk

    2016-11-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches, making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  15. The dynamics of meaningful social interactions and the emergence of collective knowledge

    PubMed Central

    Dankulov, Marija Mitrović; Melnik, Roderick; Tadić, Bosiljka

    2015-01-01

    Collective knowledge as a social value may arise in cooperation among actors whose individual expertise is limited. The process of knowledge creation requires meaningful, logically coordinated interactions, which represents a challenging problem for physics and social dynamics modeling. By combining a two-scale dynamics model with empirical data analysis from a well-known Questions & Answers system, Mathematics, we show that this process occurs as a collective phenomenon in an enlarged network (of actors and their artifacts) where the cognitive recognition interactions are properly encoded. The emergent behavior is quantified by the information divergence and innovation advancing of knowledge over time and the signatures of self-organization and knowledge sharing communities. These measures elucidate the impact of each cognitive element and the individual actor's expertise in the collective dynamics. The results are relevant to stochastic processes involving smart components and to collaborative social endeavors, for instance, crowdsourcing scientific knowledge production with online games. PMID:26174482

  16. Experimental performances of a battery thermal management system using a phase change material

    NASA Astrophysics Data System (ADS)

    Hémery, Charles-Victor; Pra, Franck; Robin, Jean-François; Marty, Philippe

    2014-12-01

    Li-ion batteries are leading candidates for mobility because electric vehicles (EV) are an environmentally friendly means of transport. With age, Li-ion cells show a more resistive behavior leading to extra heat generation. Another kind of problem, called thermal runaway, arises when the cell is too hot, which happens in case of overcharge or short circuit. In order to evaluate the effect of these defects at the whole battery scale, an air-cooled battery module was built and tested, using electrical heaters instead of real cells for safety reasons. A battery thermal management system based on a phase change material (PCM) is developed in this study. This passive system is coupled with an active liquid cooling system in order to initialize the battery temperature at the melting point of the PCM. This initialization, or PCM solidification, can be performed during a charge, for example, in other words when energy from the network is available.

  17. Resummation of high order corrections in Higgs boson plus jet production at the LHC

    DOE PAGES

    Sun, Peng; Isaacson, Joshua; Yuan, C. -P.; ...

    2017-02-22

    We study the effect of multiple parton radiation on Higgs boson plus jet production at the LHC. The large logarithms arising from the small imbalance in the transverse momentum of the Higgs boson plus jet final state system are resummed to all orders in the expansion of the strong interaction coupling at the accuracy of Next-to-Leading Logarithm (NLL), by applying the transverse momentum dependent (TMD) factorization formalism. We show that the appropriate resummation scale should be the jet transverse momentum, rather than the partonic center of mass energy which has normally been used in the TMD resummation formalism. Furthermore, the transverse momentum distribution of the Higgs boson, particularly near the lower cut-off applied on the jet transverse momentum, can only be reliably predicted by the resummation calculation, which is free of the so-called Sudakov-shoulder singularity problem present in fixed-order calculations.

  18. Recurrence due to periodic multisoliton fission in the defocusing nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Deng, Guo; Li, Sitai; Biondini, Gino; Trillo, Stefano

    2017-11-01

    We address the degree of universality of the Fermi-Pasta-Ulam recurrence induced by multisoliton fission from a harmonic excitation by analyzing the case of the semiclassical defocusing nonlinear Schrödinger equation, which models nonlinear wave propagation in a variety of physical settings. Using a suitable Wentzel-Kramers-Brillouin approach to the solution of the associated scattering problem, we accurately predict, in a fully analytical way, the number and the features (amplitude and velocity) of solitonlike excitations emerging post-breaking, as a function of the dispersion smallness parameter. This also permits us to predict and analyze the near-recurrences, thereby inferring the universal character of the mechanism originally discovered for the Korteweg-de Vries equation. We show, however, that important differences exist between the two models, arising from the different scaling rules obeyed by the soliton velocities.
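    The equation studied here is commonly written in the following semiclassical form, with ε the dispersion smallness parameter (a standard form, assumed here since the abstract does not display it):

```latex
i\varepsilon\,\psi_t + \frac{\varepsilon^2}{2}\,\psi_{xx} - |\psi|^2 \psi = 0,
\qquad 0 < \varepsilon \ll 1 .
```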

  19. Perfect mixing of immiscible macromolecules at fluid interfaces

    NASA Astrophysics Data System (ADS)

    Sheiko, Sergei S.; Zhou, Jing; Arnold, Jamie; Neugebauer, Dorota; Matyjaszewski, Krzysztof; Tsitsilianis, Constantinos; Tsukruk, Vladimir V.; Carrillo, Jan-Michael Y.; Dobrynin, Andrey V.; Rubinstein, Michael

    2013-08-01

    The difficulty of mixing chemically incompatible substances—in particular macromolecules and colloidal particles—is a canonical problem limiting advances in fields ranging from health care to materials engineering. Although the self-assembly of chemically different moieties has been demonstrated in coordination complexes, supramolecular structures, and colloidal lattices among other systems, the mechanisms of mixing largely rely on specific interfacing of chemically, physically or geometrically complementary objects. Here, by taking advantage of the steric repulsion between brush-like polymers tethered to surface-active species, we obtained long-range arrays of perfectly mixed macromolecules with a variety of polymer architectures and a wide range of chemistries without the need of encoding specific complementarity. The net repulsion arises from the significant increase in the conformational entropy of the brush-like polymers with increasing distance between adjacent macromolecules at fluid interfaces. This entropic-templating assembly strategy enables long-range patterning of thin films on sub-100 nm length scales.

  20. Ensemble learning in fixed expansion layer networks for mitigating catastrophic forgetting.

    PubMed

    Coop, Robert; Mishtal, Aaron; Arel, Itamar

    2013-10-01

    Catastrophic forgetting is a well-studied attribute of most parameterized supervised learning systems. A variation of this phenomenon, in the context of feedforward neural networks, arises when nonstationary inputs lead to loss of previously learned mappings. The majority of the schemes proposed in the literature for mitigating catastrophic forgetting were not data driven and did not scale well. We introduce the fixed expansion layer (FEL) feedforward neural network, which embeds a sparsely encoding hidden layer to help mitigate forgetting of prior learned representations. In addition, we investigate a novel framework for training ensembles of FEL networks, based on exploiting an information-theoretic measure of diversity between FEL learners, to further control undesired plasticity. The proposed methodology is demonstrated on a basic classification task, clearly emphasizing its advantages over existing techniques. The architecture proposed can be enhanced to address a range of computational intelligence tasks, such as regression problems and system control.
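    A minimal sketch of a sparsely encoding expansion layer in the spirit of the FEL follows; the layer width, the top-k sparsity rule, and the fixed random weights are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

class FixedExpansionLayer:
    """A fixed (untrained) random projection to a wide hidden layer,
    keeping only the k most active units so codes for different inputs
    overlap little, which limits interference between learned mappings."""
    def __init__(self, n_in, n_hidden=512, k=32):
        self.W = rng.standard_normal((n_hidden, n_in))  # fixed random weights
        self.k = k

    def encode(self, x):
        h = self.W @ x
        out = np.zeros_like(h)
        top = np.argsort(h)[-self.k:]   # indices of the k most active units
        out[top] = h[top]               # sparse code: all other units silenced
        return out

fel = FixedExpansionLayer(n_in=16)
code = fel.encode(rng.standard_normal(16))
print(np.count_nonzero(code))  # -> 32 active units out of 512
```

    A trainable output layer reading from such sparse codes updates only a small subset of weights per input, which is the intuition behind reduced forgetting.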

  1. Assessing organizational change in multisector community health alliances.

    PubMed

    Alexander, Jeffrey A; Hearld, Larry R; Shi, Yunfeng

    2015-02-01

    The purpose of this article was to identify some common organizational features of multisector health care alliances (MHCAs) and the analytic challenges presented by those characteristics in assessing organizational change. Two rounds of an Internet-based survey of participants in 14 MHCAs were conducted. We highlight three analytic challenges that can arise when quantitatively studying the organizational characteristics of MHCAs: assessing change in MHCA organization, assessing construct reliability, and aggregating individual responses to reflect organizational characteristics. We illustrate these issues using a leadership effectiveness scale (12 items) validated in previous research and data from 14 MHCAs participating in the Robert Wood Johnson Foundation's Aligning Forces for Quality (AF4Q) program. High levels of instability and turnover in MHCA membership create challenges in using survey data to study changes in key organizational characteristics of MHCAs. We offer several recommendations to diagnose the source and extent of these problems. © Health Research and Educational Trust.

  2. The dynamics of meaningful social interactions and the emergence of collective knowledge

    NASA Astrophysics Data System (ADS)

    Dankulov, Marija Mitrović; Melnik, Roderick; Tadić, Bosiljka

    2015-07-01

    Collective knowledge as a social value may arise in cooperation among actors whose individual expertise is limited. The process of knowledge creation requires meaningful, logically coordinated interactions, which represents a challenging problem for physics and social dynamics modeling. By combining a two-scale dynamics model with empirical data analysis from a well-known Questions & Answers system, Mathematics, we show that this process occurs as a collective phenomenon in an enlarged network (of actors and their artifacts) where the cognitive recognition interactions are properly encoded. The emergent behavior is quantified by the information divergence and innovation advancing of knowledge over time and the signatures of self-organization and knowledge sharing communities. These measures elucidate the impact of each cognitive element and the individual actor's expertise in the collective dynamics. The results are relevant to stochastic processes involving smart components and to collaborative social endeavors, for instance, crowdsourcing scientific knowledge production with online games.

  3. Forces shaping the antibiotic resistome.

    PubMed

    Perry, Julie A; Wright, Gerard D

    2014-12-01

    Antibiotic resistance has become a problem of global scale. Resistance arises through mutation or through the acquisition of resistance gene(s) from other bacteria in a process called horizontal gene transfer (HGT). While HGT is recognized as an important factor in the dissemination of resistance genes in clinical pathogens, its role in the environment has been called into question by a recent study published in Nature. The authors found little evidence of HGT in soil using a culture-independent functional metagenomics approach, which is in contrast to previous work from the same lab showing HGT between the environment and human microbiome. While surprising at face value, these results may be explained by the lack of selective pressure in the environment studied. Importantly, this work suggests the need for careful monitoring of environmental antibiotic pollution and stringent antibiotic stewardship in the fight against resistance. © 2014 WILEY Periodicals, Inc.

  4. High performance organic transistor active-matrix driver developed on paper substrate

    PubMed Central

    Peng, Boyu; Ren, Xiaochen; Wang, Zongrong; Wang, Xinyu; Roberts, Robert C.; Chan, Paddy K. L.

    2014-01-01

    The fabrication of electronic circuits on unconventional substrates largely broadens their application areas. For example, green electronics, achieved through utilization of biodegradable or recyclable substrates, can mitigate the solid waste problems that arise at the end of their lifespan. Here, we combine screen-printing, high precision laser drilling and thermal evaporation, to fabricate organic field effect transistor (OFET) active-matrix (AM) arrays onto standard printer paper. The devices show a mobility and on/off ratio as high as 0.56 cm²V⁻¹s⁻¹ and 10⁹, respectively. Small electrode overlap gives rise to a cut-off frequency of 39 kHz, which supports that our AM array is suitable for novel practical applications. We demonstrate an 8 × 8 AM light emitting diode (LED) driver with programmable scanning and information display functions. The AM array structure has excellent potential for scaling up. PMID:25234244

  5. Investigation of Nitride Morphology After Self-Aligned Contact Etch

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Keil, J.; Helmer, B. A.; Chien, T.; Gopaladasu, P.; Kim, J.; Shon, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Self-Aligned Contact (SAC) etch has emerged as a key enabling technology for the fabrication of very large-scale memory devices. However, this is also a very challenging technology to implement from an etch viewpoint. The issues that arise range from poor oxide etch selectivity to nitride to problems with post-etch nitride surface morphology. Unfortunately, the mechanisms that drive nitride loss and surface behavior remain poorly understood. Using a simple Langmuir site balance model, SAC nitride etch simulations have been performed and compared to actual etched results. This approach permits the study of various etch mechanisms that may play a role in determining nitride loss and surface morphology. Particle trajectories and fluxes are computed using Monte Carlo techniques and initial data obtained from double Langmuir probe measurements. Etched surface advancement is implemented using a shock tracking algorithm. Sticking coefficients and etch yields are adjusted to obtain the best agreement between actual etched results and simulated profiles.
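    A generic single-site Langmuir balance of the kind alluded to might read as follows (an assumed form for orientation; the paper's exact site balance is not given in the abstract):

```latex
\frac{d\theta}{dt} \;=\; s\,\Gamma_n\,(1 - \theta) \;-\; Y\,\Gamma_i\,\theta,
```

    where θ is the fractional surface coverage, s the sticking coefficient, Γ_n and Γ_i the neutral and ion fluxes, and Y the ion-induced yield; the steady-state coverage then follows from setting dθ/dt = 0.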

  6. The dynamics of meaningful social interactions and the emergence of collective knowledge.

    PubMed

    Dankulov, Marija Mitrović; Melnik, Roderick; Tadić, Bosiljka

    2015-07-15

    Collective knowledge as a social value may arise in cooperation among actors whose individual expertise is limited. The process of knowledge creation requires meaningful, logically coordinated interactions, which represents a challenging problem for physics and social dynamics modeling. By combining a two-scale dynamics model with empirical data analysis from a well-known Questions & Answers system, Mathematics, we show that this process occurs as a collective phenomenon in an enlarged network (of actors and their artifacts) where the cognitive recognition interactions are properly encoded. The emergent behavior is quantified by the information divergence and innovation advancing of knowledge over time and the signatures of self-organization and knowledge sharing communities. These measures elucidate the impact of each cognitive element and the individual actor's expertise in the collective dynamics. The results are relevant to stochastic processes involving smart components and to collaborative social endeavors, for instance, crowdsourcing scientific knowledge production with online games.

  7. Resummation of high order corrections in Higgs boson plus jet production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Peng; Isaacson, Joshua; Yuan, C. -P.

    We study the effect of multiple parton radiation on Higgs boson plus jet production at the LHC. The large logarithms arising from the small imbalance in the transverse momentum of the Higgs boson plus jet final state system are resummed to all orders in the expansion of the strong interaction coupling at the accuracy of Next-to-Leading Logarithm (NLL), by applying the transverse momentum dependent (TMD) factorization formalism. We show that the appropriate resummation scale should be the jet transverse momentum, rather than the partonic center of mass energy which has normally been used in the TMD resummation formalism. Furthermore, the transverse momentum distribution of the Higgs boson, particularly near the lower cut-off applied on the jet transverse momentum, can only be reliably predicted by the resummation calculation, which is free of the so-called Sudakov-shoulder singularity problem present in fixed-order calculations.

  8. Working Mothers

    MedlinePlus

    ... valued and supported by family, friends, and coworkers. Conflicts: Problems can arise if a woman does not ... is earning more money than the other. Such conflicts can strain the marriage and may make the ...

  9. Development of a coupled expert system for the spacecraft attitude control problem

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G.; Schaffer, J.; Hsieh, B.-J.; Padalkar, S.; Rodriguezmoscoso, J.; Vinz, F.; Fernandez, K.

    1987-01-01

    A majority of the current expert systems focus on the symbolic-oriented logic and inference mechanisms of artificial intelligence (AI). Common rule-based systems employ empirical associations and are not well suited to deal with problems often arising in engineering. Described is a prototype expert system which combines both symbolic and numeric computing. The expert system's configuration is presented and its application to a spacecraft attitude control problem is discussed.

  10. Evaluation of emerging factors blocking filtration of high-adjunct-ratio wort.

    PubMed

    Ma, Ting; Zhu, Linjiang; Zheng, Feiyun; Li, Yongxian; Li, Qi

    2014-08-20

    Corn starch has become a common adjunct for beer brewing in Chinese breweries. However, with an increasing ratio of corn starch, problems like poor wort filtration performance arise, which decrease the production capacity of breweries. To solve this problem, factors affecting wort filtration were evaluated, such as the size of the corn starch particles, special yellow floats formed during liquefaction of corn starch, and residual substance after liquefaction. The effects of different enzyme preparations, including β-amylase and β-glucanase, on filtration rate were also evaluated. The results indicate that the emerging yellow floats do not severely block filtration, while the fine and uniformly shaped corn starch particles and their incompletely hydrolyzed residues after liquefaction are responsible for filtration blocking. Application of a β-amylase preparation increased the filtration rate of liquefied corn starch. This study provides insight into the filtration blocking problem arising in the process of high-adjunct-ratio beer brewing and also offers a feasible solution using enzyme preparations.

  11. Eigenvalue problems for Beltrami fields arising in a three-dimensional toroidal magnetohydrodynamic equilibrium problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, S. R.; Hole, M. J.; Dewar, R. L.

    2007-05-15

    A generalized energy principle for finite-pressure, toroidal magnetohydrodynamic (MHD) equilibria in general three-dimensional configurations is proposed. The full set of ideal-MHD constraints is applied only on a discrete set of toroidal magnetic surfaces (invariant tori), which act as barriers against leakage of magnetic flux, helicity, and pressure through chaotic field-line transport. It is argued that a necessary condition for such invariant tori to exist is that they have fixed, irrational rotational transforms. In the toroidal domains bounded by these surfaces, full Taylor relaxation is assumed, thus leading to Beltrami fields ∇×B = λB, where λ is constant within each domain. Two distinct eigenvalue problems for λ arise in this formulation, depending on whether fluxes and helicity are fixed, or boundary rotational transforms. These are studied in cylindrical geometry and in a three-dimensional toroidal region of annular cross section. In the latter case, an application of a residue criterion is used to determine the threshold for connected chaos.
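    For orientation, the classic axisymmetric Beltrami (Taylor) state in cylindrical geometry, one of the configurations studied, can be written in terms of Bessel functions (a textbook solution quoted as background, not a result specific to this paper):

```latex
\nabla \times \mathbf{B} = \lambda \mathbf{B}
\quad\Longrightarrow\quad
B_z(r) = B_0\, J_0(\lambda r), \qquad B_\theta(r) = B_0\, J_1(\lambda r), \qquad B_r = 0 ,
```

    with J_0, J_1 the Bessel functions of the first kind; the admissible values of λ are then fixed by the boundary conditions, which is where the eigenvalue problems arise.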

  12. A stabilized element-based finite volume method for poroelastic problems

    NASA Astrophysics Data System (ADS)

    Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo

    2018-07-01

    The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three dimensional unstructured grids composed of elements of different types. One of the difficulties for solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that considers also the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as three-dimensional realistic cases are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.
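    The coupled Biot system referred to above is commonly written in the following quasi-static form (standard notation and typical sign conventions, assumed here since the abstract does not display the equations):

```latex
\nabla \cdot \boldsymbol{\sigma}'(\mathbf{u}) - \alpha \nabla p + \mathbf{b} = \mathbf{0},
\qquad
\frac{\partial}{\partial t}\!\left( \frac{p}{M} + \alpha \,\nabla \cdot \mathbf{u} \right)
- \nabla \cdot \left( \frac{\kappa}{\mu} \nabla p \right) = q,
```

    with u the displacement, p the pore pressure, σ' the effective stress, α the Biot coefficient, M the Biot modulus, κ the permeability, and μ the fluid viscosity. In the undrained limit (κ → 0) the pressure equation loses its stabilizing diffusion term, which is the source of the spurious pressure modes the paper addresses.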

  13. αAMG based on Weighted Matching for Systems of Elliptic PDEs Arising From Displacement and Mixed Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Ambra, P.; Vassilevski, P. S.

    2014-05-30

    Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve the robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a-priori knowledge of, or assumptions on, the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed at obtaining a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as "the compatible weighted matching". In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
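    A minimal sketch of pairwise aggregation driven by matching in a matrix graph follows; the greedy strategy and the toy weights are illustrative stand-ins (production codes, including the authors', use proper maximum-weight matching algorithms):

```python
def greedy_weighted_matching(edges):
    """Pair up unknowns by (greedy) weighted matching in a matrix graph.
    edges: iterable of (weight, i, j); heavier edges indicate more strongly
    coupled unknowns, which should be aggregated together on the coarse level."""
    matched, aggregates = set(), []
    for w, i, j in sorted(edges, reverse=True):   # heaviest couplings first
        if i not in matched and j not in matched:
            matched.update((i, j))
            aggregates.append((i, j))             # each pair -> one coarse unknown
    return aggregates

# Toy 4x4 matrix graph: (|a_ij|-based weight, row, col), values hypothetical.
edges = [(0.9, 0, 1), (0.2, 1, 2), (0.8, 2, 3), (0.1, 0, 3)]
print(greedy_weighted_matching(edges))  # -> [(0, 1), (2, 3)]
```

    Each matched pair is collapsed into a single coarse degree of freedom, halving the problem size per level without any a-priori knowledge of the near-null space.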

  14. Theory of signs and statistical approach to big data in assessing the relevance of clinical biomarkers of inflammation and oxidative stress.

    PubMed

    Ghezzi, Pietro; Davies, Kevin; Delaney, Aidan; Floridi, Luciano

    2018-03-06

    Biomarkers are widely used not only as prognostic or diagnostic indicators, or as surrogate markers of disease in clinical trials, but also to formulate theories of pathogenesis. We identify two problems in the use of biomarkers in mechanistic studies. The first problem arises in the case of multifactorial diseases, where different combinations of multiple causes result in patient heterogeneity. The second problem arises when a pathogenic mediator is difficult to measure. This is the case of the oxidative stress (OS) theory of disease, where the causal components are reactive oxygen species (ROS) that have very short half-lives. In this case, it is usual to measure the traces left by the reaction of ROS with biological molecules, rather than the ROS themselves. Borrowing from the philosophical theories of signs, we look at the different facets of biomarkers and discuss their different value and meaning in multifactorial diseases and system medicine to inform their use in patient stratification in personalized medicine.

  15. Statistical effects in large N supersymmetric gauge theories

    NASA Astrophysics Data System (ADS)

    Czech, Bartlomiej Stanislaw

    This thesis discusses statistical simplifications arising in supersymmetric gauge theories in the limit of large rank. Applications involve the physics of black holes and the problem of predicting the low energy effective theory from a landscape of string vacua. The first part of this work uses the AdS/CFT correspondence to explain properties of black holes. We establish that in the large charge sector of toric quiver gauge theories there exists a typical state whose structure is closely mimicked by almost all other states. Then, working in the settings of the half-BPS sector of N = 4 super-Yang-Mills theory, we show that in the dual gravity theory semiclassical observations cannot distinguish a pair of geometries corresponding to two generic heavy states. Finally, we argue on general grounds that these conclusions are exponentially enhanced in quantum cosmological settings. The results establish that one may consistently account for the entropy of a black hole with heavy states in the dual field theory and suggest that the usual properties of black holes arise as artifacts of imposing a semiclassical description on a quantum system. In the second half we develop new tools to determine the infrared behavior of quiver gauge theories in a certain class. We apply the dynamical results to a toy model of the landscape of effective field theories defined at some high energy scale, and derive firm statistical predictions for the low energy effective theory.

  16. Branching instability in expanding bacterial colonies.

    PubMed

    Giverso, Chiara; Verani, Marco; Ciarletta, Pasquale

    2015-03-06

    Self-organization in developing living organisms relies on the capability of cells to duplicate and perform a collective motion inside the surrounding environment. Chemical and mechanical interactions coordinate such a cooperative behaviour, driving the dynamical evolution of the macroscopic system. In this work, we perform an analytical and computational analysis to study pattern formation during the spreading of an initially circular bacterial colony on a Petri dish. The continuous mathematical model addresses the growth and the chemotactic migration of the living monolayer, together with the diffusion and consumption of nutrients in the agar. The governing equations contain four dimensionless parameters, accounting for the interplay among the chemotactic response, the bacteria-substrate interaction and the experimental geometry. The spreading colony is found to be always linearly unstable to perturbations of the interface, whereas branching instability arises in finite-element numerical simulations. The typical length scales of such fingers, which align in the radial direction and later undergo further branching, are controlled by the size parameters of the problem, whereas the emergence of branching is favoured if the diffusion is dominant on the chemotaxis. The model is able to predict the experimental morphologies, confirming that compact (resp. branched) patterns arise for fast (resp. slow) expanding colonies. Such results, while providing new insights into pattern selection in bacterial colonies, may finally have important applications for designing controlled patterns. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  17. A genetic algorithm used for solving one optimization problem

    NASA Astrophysics Data System (ADS)

    Shipacheva, E. N.; Petunin, A. A.; Berezin, I. M.

    2017-12-01

    A problem of minimizing the length of the idle (blank) run of a cutting tool during the cutting of sheet materials into shaped blanks is discussed. This problem arises during the preparation of control programs for computerized numerical control (CNC) machines. A discrete model of the problem is analogous in its setting to the generalized travelling salesman problem, with constraints in the form of precedence conditions determined by the technological features of cutting. A variant of a genetic algorithm for solving this problem is described. The effect of the parameters of the developed algorithm on the solution quality for the constrained problem is investigated.
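    A minimal sketch of one way a genetic algorithm can respect precedence conditions of this kind is given below; the contour names, the repair step, and the crossover variant are hypothetical illustrations, not the algorithm of the paper.

```python
# Sketch of GA operators for a tour problem with precedence constraints
# (e.g. inner contours must be cut before their enclosing outer contour).
PRECEDES = [("inner1", "outer"), ("inner2", "outer")]  # hypothetical data

def repair(tour):
    """Reorder a candidate tour so every (a, b) in PRECEDES has a before b."""
    tour = list(tour)
    for a, b in PRECEDES:
        ia, ib = tour.index(a), tour.index(b)
        if ia > ib:                       # precedence violated: move a before b
            tour.insert(ib, tour.pop(ia))
    return tour

def crossover(p1, p2, cut=None):
    """Order-crossover-style recombination: keep a prefix of p1, append the
    remaining genes in the order they appear in p2, then repair."""
    cut = len(p1) // 2 if cut is None else cut
    head = p1[:cut]
    child = head + [g for g in p2 if g not in head]
    return repair(child)

parents = (["inner1", "A", "outer", "inner2"],
           ["inner2", "outer", "A", "inner1"])
print(crossover(*parents))  # -> ['inner1', 'A', 'inner2', 'outer']
```

    Keeping feasibility inside the operators (rather than penalizing violations in the fitness function) is one common design choice for such constrained tour problems.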

  18. A Cognitive Map of Human Performance Technology: A Study of Domain Expertise.

    ERIC Educational Resources Information Center

    Villachica, Steven W.; Lohr, Linda L.; Summers, Laura; Lowell, Nate; Roberts, Stephanie; Javeri, Manisha; Hunt, Erin; Mahoney, Chris; Conn, Cyndie

    Most representations of academic disciplines have been created when experts depict or report what they know; however, there are potential problems that can arise when practitioners rely on expert self-report. One way to avoid potential problems associated with expert self-report is to employ cognitive task analysis methods. The Pathfinder Scaling…

  19. Development and Evaluation of an Interactive Mobile Learning Environment with Shared Display Groupware

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Lin, Yi Lung

    2010-01-01

    When using mobile devices in support of learning activities, students gain mobility, but problems arise when group members share information. The small size of the mobile device screen becomes problematic when it is being used by two or more students to share and exchange information. This problem affects interactions among group members. To…

  20. Administrative Problem-Solving for Writing Programs and Writing Centers: Scenarios in Effective Program Management.

    ERIC Educational Resources Information Center

    Myers-Breslin, Linda

    Addressing the issues and problems faced by writing program administrators (WPAs) and writing center directors (WCDs), and how they can most effectively resolve the political, pedagogical, and financial questions that arise, this book presents essays from experienced WPAs and WCDs at a wide variety of institutions that offer scenarios and case…

  1. Succession Planning in England: New Leaders and New Forms of Leadership

    ERIC Educational Resources Information Center

    Bush, Tony

    2011-01-01

    There are concerns about the supply of head teachers in many countries. In England, this problem arises from demographic changes and the perceived difficulty of the job. The National College responded to this problem by initiating a Succession Planning programme. This article reports the main findings from the external evaluation of the programme…

  2. Effects of Inequality and Poverty vs. Teachers and Schooling on America's Youth

    ERIC Educational Resources Information Center

    Berliner, David C.

    2013-01-01

    Background/Context: This paper arises out of frustration with the results of school reforms carried out over the past few decades. These efforts have failed. They need to be abandoned. In their place must come recognition that income inequality causes many social problems, including problems associated with education. Sadly, compared to all other…

  3. Practical Solutions to Practically Every Problem: The Early Childhood Teacher's Manual. Revised Edition.

    ERIC Educational Resources Information Center

    Saifer, Steffen

    Based on sound developmentally appropriate theory, this revised guide is designed to help early childhood teachers deal with common problems that arise in all aspects of their work. Following an introduction and a list of the 20 most important principles for successful preschool teaching, the guide is divided into nine parts. Part 1 addresses…

  4. An evolution infinity Laplace equation modelling dynamic elasto-plastic torsion

    NASA Astrophysics Data System (ADS)

    Messelmi, Farid

    2017-12-01

    We consider in this paper a parabolic partial differential equation involving the infinity Laplace operator and a Leray-Lions operator with no coercivity assumption. We prove the existence and uniqueness of the solution to the corresponding approximate problem, and we show that in the limit the solution solves the parabolic variational inequality arising in the elasto-plastic torsion problem.
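    For orientation, the infinity Laplace operator appearing in such equations is commonly defined as (standard definition, quoted as background):

```latex
\Delta_\infty u \;:=\; \sum_{i,j=1}^{n} \partial_{x_i} u \;\partial_{x_j} u \;\partial^2_{x_i x_j} u ,
```

    i.e. the second derivative of u in the direction of its own gradient, weighted by the gradient's squared length.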

  5. Co-Rumination Cultivates Anxiety: A Genetically Informed Study of Friend Influence during Early Adolescence

    ERIC Educational Resources Information Center

    Dirghangi, Shrija; Kahn, Gilly; Laursen, Brett; Brendgen, Mara; Vitaro, Frank; Dionne, Ginette; Boivin, Michel

    2015-01-01

    This study tested 2 related hypotheses. The first holds that high co-rumination anticipates heightened internalizing problems. The second holds that positive relationships with friends exacerbate the risk for internalizing problems arising from co-rumination. A sample of MZ twins followed from birth (194 girls and 170 boys) completed (a)…

  6. Symmetry of the Adiabatic Condition in the Piston Problem

    ERIC Educational Resources Information Center

    Anacleto, Joaquim; Ferreira, J. M.

    2011-01-01

    This study addresses a controversial issue in the adiabatic piston problem, namely that of the piston being adiabatic when it is fixed but no longer so when it can move freely. It is shown that this apparent contradiction arises from the usual definition of adiabatic condition. The issue is addressed here by requiring the adiabatic condition to be…

  7. The Perceived Benefits and Problems Associated with Teaching Activities Undertaken by Doctoral Students

    ERIC Educational Resources Information Center

    Jordan, Katy; Howe, Christine

    2018-01-01

    Postgraduate students involved in delivering undergraduate teaching while working toward a research degree are known as graduate teaching assistants (GTAs). This study focused upon the problems and benefits arising from this dual role as researchers and teachers, as perceived by GTAs at the University of Cambridge. To this end, GTAs at Cambridge…

  8. Practical Solutions to Practically Every Problem: The Early Childhood Teacher's Manual.

    ERIC Educational Resources Information Center

    Saifer, Steffen

    This book is designed to help early childhood teachers deal with common problems that arise in all aspects of their work. Part one addresses daily dilemmas such as schedule planning, meal and nap times, art, and outdoor play. Part two covers classroom concerns such as the physical environment, curriculum, individual student needs, field trips,…

  9. Measuring forest evapotranspiration--theory and problems

    Treesearch

    Anthony C. Federer; Anthony C. Federer

    1970-01-01

    A satisfactory general method of measuring forest evapotranspiration has yet to be developed. Many procedures have been tried, but only the soil-water budget method and the micrometeorological methods offer any degree of success. This paper is a discussion of these procedures and the problems that arise in applying them. It is designed as a reference for scientists and...

  10. Reflections on the surface energy imbalance problem

    Treesearch

    Ray Leuning; Eva van Gorsela; William J. Massman; Peter R. Isaac

    2012-01-01

    The 'energy imbalance problem' in micrometeorology arises because at most flux measurement sites the sum of eddy fluxes of sensible and latent heat (H + λE) is less than the available energy (A). Either eddy fluxes are underestimated or A is overestimated. Reasons for the imbalance are: (1) a failure to satisfy the fundamental assumption of one-...

  11. Who Is Listening? An Examination of Gender Effects and Employment Choice in Sustainability Education in an Undergraduate Business School

    ERIC Educational Resources Information Center

    Weaven, Scott; Griffin, Deborah; McPhail, Ruth; Smith, Calvin

    2013-01-01

    Whilst universities acknowledge the importance of sustainability education, numerous problems exist in relation to the nature, delivery and outcomes of sustainability instruction. Many of these problems arise from a lack of understanding of students' perceptions of, and knowledge about, business sustainability. This article examines…

  12. Estimation of coefficients and boundary parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Murphy, K. A.

    1984-01-01

    Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.
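
    The abstract only names the approach, so as a hedged illustration the toy below caricatures the output-least-squares formulation: choose the coefficient whose simulated output best matches observations. It uses a finite-difference leapfrog solver and a single scalar wave speed, whereas the paper estimates spatially varying coefficients with spline-based Galerkin schemes; the grid sizes and the true speed 1.3 are invented.

    ```python
    # Sketch (assumptions flagged above): recover a wave speed c in
    # u_tt = c^2 u_xx from synthetic observations by output least squares.
    import numpy as np
    from scipy.optimize import minimize_scalar

    nx, nt, dx, dt = 101, 400, 0.01, 0.004
    x = np.linspace(0, 1, nx)

    def solve_wave(c):
        """Leapfrog scheme for u_tt = c^2 u_xx, fixed ends, Gaussian pulse."""
        u_prev = np.exp(-200 * (x - 0.5) ** 2)    # initial displacement
        u = u_prev.copy()                         # zero initial velocity
        r2 = (c * dt / dx) ** 2                   # squared CFL number (< 1 here)
        for _ in range(nt):
            u_next = np.zeros_like(u)
            u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                            + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
            u_prev, u = u, u_next
        return u

    data = solve_wave(1.3)                        # synthetic data, true c = 1.3
    misfit = lambda c: np.sum((solve_wave(c) - data) ** 2)
    res = minimize_scalar(misfit, bounds=(0.5, 2.0), method="bounded")
    print(res.x)                                  # ~1.3
    ```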

  13. Inconsistent application of environmental laws and policies to California's oak woodlands

    Treesearch

    Gregory A. Giusti; Adina M. Merenlender

    2002-01-01

    We examine inconsistencies in the application of environmental laws and policies to California's oak woodlands and associated resources. Specifically, large-scale vegetation removals receive different levels of environmental oversight depending on location, tree species, and the final land use designation. Hence, situations arise where the scale of impacts to the...

  14. Developing a Strategy for Using Technology-Enhanced Items in Large-Scale Standardized Tests

    ERIC Educational Resources Information Center

    Bryant, William

    2017-01-01

    As large-scale standardized tests move from paper-based to computer-based delivery, opportunities arise for test developers to make use of items beyond traditional selected and constructed response types. Technology-enhanced items (TEIs) have the potential to provide advantages over conventional items, including broadening construct measurement,…

  15. Ab initio nanostructure determination

    NASA Astrophysics Data System (ADS)

    Gujarathi, Saurabh

    Reconstruction of complex structures is an inverse problem arising in virtually all areas of science and technology, from protein structure determination to bulk heterostructure solar cells and the structure of nanoparticles. This problem is cast as a complex network problem where the edges in a network have weights equal to the Euclidean distance between their endpoints. A method, called Tribond, for reconstructing the locations of the nodes of the network given only the edge weights of the Euclidean network is presented. Timing results indicate that the algorithm is a low-order polynomial in the number of nodes in two dimensions; with this implementation, Euclidean networks of about one thousand nodes are reconstructed in approximately twenty-four hours on a desktop computer. In three dimensions the computational cost is a higher-order polynomial in the number of nodes, and reconstruction of small three-dimensional Euclidean networks is demonstrated: if a starting network of size five is given, the remaining reconstruction for a network of size 100 can be done in about two hours on a desktop computer. In situations with less precise data, modifications of the method may be necessary and are discussed. A related problem in one dimension, the Optimal Golomb Ruler (OGR), is also studied. A statistical-physics Hamiltonian describing the OGR problem is introduced, and the first-order phase transition from a symmetric low-constraint phase to a complex symmetry-broken phase at high constraint is studied. Although the Hamiltonian is not disordered, the asymmetric phase is highly irregular, with geometric frustration. The phase diagram is obtained, and it is seen that even at very low temperature T there is a phase transition at a finite, non-zero value of the constraint parameter gamma/mu. Analytic calculations of the scaling of the ruler's density and free energy are performed and compared with the mean-field approach. A scaling law is also derived for the length of the OGR, consistent with the Erdős conjecture and with numerical results.
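
    The abstract does not spell out Tribond's internals, but the flavor of the underlying distance-geometry problem is easy to demonstrate. The sketch below is a minimal baseline, not the Tribond algorithm: it assumes the full distance matrix with known pair assignments, whereas Tribond works from unassigned edge weights. It recovers 2-D coordinates (up to rotation and translation) via classical multidimensional scaling.

    ```python
    # Sketch: coordinates from a complete Euclidean distance matrix via
    # classical multidimensional scaling (MDS).  This is the textbook
    # baseline for the assigned-distances case, not the paper's method.
    import numpy as np

    def classical_mds(D, dim=2):
        """Recover coordinates (up to rigid motion) from distance matrix D."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
        return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

    # Demo: random 2-D points, rebuilt from their pairwise distances.
    rng = np.random.default_rng(0)
    X = rng.random((50, 2))
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    X_rec = classical_mds(D)
    D_rec = np.linalg.norm(X_rec[:, None, :] - X_rec[None, :, :], axis=-1)
    print(np.max(np.abs(D - D_rec)))             # ~1e-12: distances preserved
    ```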

  16. A homotopy analysis method for the nonlinear partial differential equations arising in engineering

    NASA Astrophysics Data System (ADS)

    Hariharan, G.

    2017-05-01

    In this article, we have applied the homotopy analysis method (HAM) to solve a few partial differential equations arising in engineering. This technique provides solutions as rapidly convergent series with computable terms for problems with a high degree of nonlinearity in the governing differential equations. The convergence analysis of the proposed method is also discussed. Finally, we give some illustrative examples to demonstrate the validity and applicability of the proposed method.
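
    The article's own equations are not given in the abstract, so as a concrete illustration of the method's flavor, here is a minimal sketch of the standard HAM deformation recursion applied to the toy problem y' + y = 0 with y(0) = 1, whose exact solution is exp(-t). The linear operator L = d/dt and the convergence-control parameter hbar = -1 are assumptions of this sketch.

    ```python
    # Sketch: HAM m-th order deformation recursion for y' + y = 0, y(0) = 1:
    #   y_m = chi_m * y_{m-1} + hbar * Int_0^t (y_{m-1}' + y_{m-1}) ds
    # with chi_1 = 0 and chi_m = 1 for m >= 2.
    import sympy as sp

    t, s = sp.symbols("t s")
    hbar = -1                      # convergence-control parameter (assumed)
    y = [sp.Integer(1)]            # y0: initial guess satisfying y(0) = 1

    for m in range(1, 8):
        chi = 0 if m == 1 else 1
        R = sp.diff(y[m - 1], t) + y[m - 1]       # residual of previous term
        y_m = chi * y[m - 1] + hbar * sp.integrate(R.subs(t, s), (s, 0, t))
        y.append(sp.expand(y_m))

    approx = sp.expand(sum(y))
    print(approx)                  # 1 - t + t**2/2 - ... : series of exp(-t)
    print(sp.series(sp.exp(-t), t, 0, 8).removeO() - approx)   # 0
    ```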

  17. An Electromagnetically-Controlled Precision Orbital Tracking Vehicle (POTV)

    DTIC Science & Technology

    1992-12-01

    assume that C > B > A. Then θ1(t) is purely sinusoidal. θ2(t) is also sinusoidal because the forcing function z(t) is sinusoidal. θ3(t) is more...an unpredictable manner. The problem arises from the rank deficiency of the G input matrix as shown below. Remember we have shown already that its...rank can never exceed five because rows two, four, and six are linearly dependent. The rank deficiency arises from the "translational part" of the input

  18. Hausdorff dimension of certain sets arising in Engel expansions

    NASA Astrophysics Data System (ADS)

    Fang, Lulu; Wu, Min

    2018-05-01

    The present paper is concerned with the Hausdorff dimension of certain sets arising in Engel expansions. In particular, the Hausdorff dimension of the set is completely determined, where A_n(x) can stand for the digit, gap, or ratio between two consecutive digits in the Engel expansion of x, and ϕ is a positive function defined on the natural numbers. These results significantly extend existing results on Galambos' open problems concerning the Hausdorff dimension of sets related to the growth rate of digits.

  19. A data-model integration approach toward improved understanding on wetland functions and hydrological benefits at the catchment scale

    NASA Astrophysics Data System (ADS)

    Yeo, I. Y.; Lang, M.; Lee, S.; Huang, C.; Jin, H.; McCarty, G.; Sadeghi, A.

    2017-12-01

    The wetland ecosystem plays a crucial role in improving hydrological function and ecological integrity for downstream waters and the surrounding landscape. However, the changing behaviours and functioning of wetland ecosystems are poorly understood and extremely difficult to characterize. Improved understanding of the hydrological behaviour of wetlands, considering their interaction with surrounding landscapes and impacts on downstream waters, is an essential first step toward closing the knowledge gap. We present an integrated wetland-catchment modelling study that capitalizes on recently developed inundation maps and other geospatial data. The aim of the data-model integration is to improve spatial prediction of wetland inundation and to evaluate cumulative hydrological benefits at the catchment scale. In this paper, we highlight problems arising from data preparation, parameterization, and process representation in simulating wetlands within a distributed catchment model, and report recent progress on mapping of wetland dynamics (i.e., inundation) using multiple remotely sensed data. We demonstrate the value of spatially explicit inundation information for developing site-specific wetland parameters and for evaluating model predictions at multiple spatial and temporal scales. This spatial data-model integrated framework is tested using the Soil and Water Assessment Tool (SWAT) with an improved wetland extension, and applied to an agricultural watershed in the Mid-Atlantic Coastal Plain, USA. This study illustrates the necessity of spatially distributed information and a data-integrated modelling approach for predicting wetland inundation and hydrologic function at the local landscape scale, where monitoring and conservation decision making take place.

  20. Prediction of aerodynamic tonal noise from open rotors

    NASA Astrophysics Data System (ADS)

    Sharma, Anupam; Chen, Hsuan-nien

    2013-08-01

    A numerical approach for predicting tonal aerodynamic noise from "open rotors" is presented. "Open rotor" refers to an engine architecture with a pair of counter-rotating propellers. Typical noise spectra from an open rotor consist of dominant tones, which arise due to both the steady loading/thickness and the aerodynamic interaction between the two blade rows. The proposed prediction approach utilizes Reynolds Averaged Navier-Stokes (RANS) Computational Fluid Dynamics (CFD) simulations to obtain a near-field description of the noise sources. The near-to-far-field propagation is then carried out by solving the Ffowcs Williams-Hawkings equation. Since the interest of this paper is limited to tone noise, a linearized, frequency domain approach is adopted to solve the wake/vortex-blade interaction problem. This paper focuses primarily on the speed scaling of the aerodynamic tonal noise from open rotors. Even though there is no theoretical mode cut-off due to the absence of a nacelle in open rotors, the far-field noise is a strong function of the azimuthal mode order. While the steady loading/thickness noise has circumferential modes of high order, due to the relatively large number of blades (≈10-12), the interaction noise typically has modes of small orders. The high mode orders have very low radiation efficiency and exhibit very strong scaling with Mach number, while the low mode orders show a relatively weaker scaling. The prediction approach is able to capture the speed scaling (observed in experiment) of the overall aerodynamic noise very well.

  1. Use of optical technique for inspection of warpage of IC packages

    NASA Astrophysics Data System (ADS)

    Toh, Siew-Lok; Chau, Fook S.; Ong, Sim Heng

    2001-06-01

    The packaging of IC packages has changed over the years, from dual-in-line, wire-bond, and pin-through-hole printed wiring board technologies in the 1970s to ball grid array, chip scale, and surface mount technologies in the 1990s. Reliability of some moisture-sensitive packages has been a big problem for manufacturers. One of the potential problems in plastic IC packages is the moisture-induced popcorn effect, which can arise during the reflow process. Shearography is a non-destructive inspection technique that may be used to detect the delamination and warpage of IC packages. It is non-contacting and permits a full-field observation of surface displacement derivatives. Another advantage of this technique is that it gives real-time formation of the fringes which indicate flaws in the IC package under real-time simulation of the Surface Mount Technology (SMT) IR reflow profile. It is extremely fast and convenient for studying the true behavior of the packaging deformation during the SMT process. It can be concluded that shearography has the potential for real-time, in situ, non-destructive inspection of IC packages during the surface mount process.

  2. Evaluation of transboundary environmental issues in Central Europe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engi, D.; Kapustka, L.A.; Williams, B.A.

    1997-05-01

    Central Europe has experienced environmental degradation for hundreds of years. The proximity of countries, their shared resources, and transboundary movement of environmental pollution create the potential for regional environmental strife. The goal of this project was to identify the sources and sinks of environmental pollution in Central Europe and evaluate the possible impact of transboundary movement of pollution on the countries of Central Europe. In meeting the objectives of identifying sources of contaminants, determining transboundary movement of contaminants, and assessing socio-economic implications, large quantities of disparate data were examined. To facilitate use of the data, the authors refined mapping procedures that enable processing information from virtually any map or spreadsheet data that can be geo-referenced. Because the procedure is freed from a priori constraints of scale that confound most Geographical Information Systems, they have the capacity to generate new projections and apply sophisticated statistical analyses to the data. The analysis indicates substantial environmental problems. While transboundary pollution issues may spawn conflict among the Central European countries and their neighbors, it appears that common environmental problems facing the entire region have had the effect of bringing the countries together, even though opportunities for deteriorating relationships may still arise.

  3. Ni-MH spent batteries: a raw material to produce Ni-Co alloys.

    PubMed

    Lupi, Carla; Pilone, Daniela

    2002-01-01

    Ni-MH spent batteries are heterogeneous and complex materials, so any kind of metallurgical recovery process needs a mechanical pre-treatment, at least to separate iron materials and recyclable plastic materials (like ABS), in order to obtain additional profit from this saleable scrap as well as to minimize waste arising from the breaking separation process. Pyrometallurgical processing is not suitable for treating Ni-MH batteries, mainly because of Rare Earth losses in the slag. On the other hand, the hydrometallurgical method, which offers better opportunities in terms of recovery yield and higher purity of Ni, Co, and RE, requires several process steps, as shown in the technical literature. The main problems during leach liquor purification are the removal of elements such as Mn, Zn, and Cd, dissolved during the leaching step, and the separation of Ni from Co. In the present work, the latter problem is overcome by co-deposition of a Ni-35/40 wt% Co alloy of good quality. The experiments carried out in a laboratory-scale pilot plant show that a current efficiency higher than 91% can be reached in long-duration electrowinning tests performed at 50 °C and a catholyte pH of 4.3.

  4. Reprint of Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-04-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying their boundaries. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and scale poorly with image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that nonlinear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem-cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  5. Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-03-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying their boundaries. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and scale poorly with image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that nonlinear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem-cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.
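
    The solver described here is built from relaxation sweeps. As a hedged illustration only, the sketch below applies plain lexicographic Gauss-Seidel to a model linear elliptic equation -div(c grad u) = f with a varying coefficient; the actual Ambrosio-Tortorelli system is a coupled nonlinear pair, and the paper additionally accelerates the sweeps with inner linear iterations and multigrid. Grid sizes and coefficients are invented.

    ```python
    # Sketch: lexicographic Gauss-Seidel sweeps for -div(c grad u) = f
    # on a uniform grid with Dirichlet boundaries (a building block of
    # the paper's method, not the full Ambrosio-Tortorelli solver).
    import numpy as np

    def gauss_seidel(u, c, f, h, sweeps=200):
        n = u.shape[0]
        for _ in range(sweeps):
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    # arithmetic average of the varying coefficient at faces
                    cE = 0.5 * (c[i, j] + c[i + 1, j])
                    cW = 0.5 * (c[i, j] + c[i - 1, j])
                    cN = 0.5 * (c[i, j] + c[i, j + 1])
                    cS = 0.5 * (c[i, j] + c[i, j - 1])
                    u[i, j] = (h * h * f[i, j]
                               + cE * u[i + 1, j] + cW * u[i - 1, j]
                               + cN * u[i, j + 1] + cS * u[i, j - 1]
                               ) / (cE + cW + cN + cS)
        return u

    n, h = 65, 1.0 / 64
    u = np.zeros((n, n))
    c = np.ones((n, n)); c[20:40, 20:40] = 10.0    # varying coefficient
    f = np.ones((n, n))
    gauss_seidel(u, c, f, h)
    print(u.max())
    ```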

  6. A universal model for solar eruptions.

    PubMed

    Wyper, Peter F; Antiochos, Spiro K; DeVore, C Richard

    2017-04-26

    Magnetically driven eruptions on the Sun, from stellar-scale coronal mass ejections to small-scale coronal X-ray and extreme-ultraviolet jets, have frequently been observed to involve the ejection of the highly stressed magnetic flux of a filament. Theoretically, these two phenomena have been thought to arise through very different mechanisms: coronal mass ejections from an ideal (non-dissipative) process, whereby the energy release does not require a change in the magnetic topology, as in the kink or torus instability; and coronal jets from a resistive process involving magnetic reconnection. However, it was recently concluded from new observations that all coronal jets are driven by filament ejection, just like large mass ejections. This suggests that the two phenomena have physically identical origin and hence that a single mechanism may be responsible, that is, either mass ejections arise from reconnection, or jets arise from an ideal instability. Here we report simulations of a coronal jet driven by filament ejection, whereby a region of highly sheared magnetic field near the solar surface becomes unstable and erupts. The results show that magnetic reconnection causes the energy release via 'magnetic breakout'-a positive-feedback mechanism between filament ejection and reconnection. We conclude that if coronal mass ejections and jets are indeed of physically identical origin (although on different spatial scales) then magnetic reconnection (rather than an ideal process) must also underlie mass ejections, and that magnetic breakout is a universal model for solar eruptions.

  7. No fifth force in a scale invariant universe

    DOE PAGES

    Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.

    2017-03-15

    We revisit the possibility that the Planck mass is spontaneously generated in scale-invariant scalar-tensor theories of gravity, typically leading to a “dilaton.” The fifth force, arising from the dilaton, is severely constrained by astrophysical measurements. We explore the possibility that nature is fundamentally scale invariant and argue that, as a consequence, the fifth-force effects are dramatically suppressed and such models are viable. Finally, we discuss possible obstructions to maintaining scale invariance and how these might be resolved.

  8. Non scale-invariant density perturbations from chaotic extended inflation

    NASA Technical Reports Server (NTRS)

    Mollerach, Silvia; Matarrese, Sabino

    1991-01-01

    Chaotic inflation is analyzed in the framework of scalar-tensor theories of gravity. Fluctuations in the energy density arise from quantum fluctuations of the Brans-Dicke field and of the inflaton field. The spectrum of perturbations is studied for a class of models: it is non-scale-invariant and, for certain values of the parameters, it has a peak. If the peak appears at astrophysically interesting scales, it may help to reconcile the Cold Dark Matter scenario for structure formation with large-scale observations.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Siyao; Lazarian, A.; Yan, Huirong

    We address the problem of the different line widths of coexistent neutrals and ions observed in molecular clouds and explore whether this difference can arise from the effects of magnetohydrodynamic (MHD) turbulence acting on partially ionized gas. Among the three fundamental modes of MHD turbulence, we find that fast and slow modes do not contribute to line width differences. We focus on the Alfvénic component, and consider the damping of Alfvén modes by taking into account both neutral-ion collisions and neutral viscosity. We confirm that the line width difference can be explained by the differential damping of the Alfvénic turbulence in ions and the hydrodynamic turbulence in neutrals, and find it strongly depends on the properties of MHD turbulence. We consider various regimes of turbulence corresponding to different media magnetizations and turbulent drivings. In the case of super-Alfvénic turbulence, when the damping scale of Alfvénic turbulence is below the Alfvénic scale l_A, the line width difference does not depend on magnetic field strength. In other turbulence regimes, however, the dependence is present and evaluation of the magnetic field from the observed line width difference is possible.

  10. Large-scale optimal control of interconnected natural gas and electrical transmission systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-04-01

    We present a detailed optimal control model that captures spatiotemporal interactions between gas and electric transmission networks. We use the model to study flexibility and economic opportunities provided by coordination. A large-scale case study in the Illinois system reveals that coordination can enable the delivery of significantly larger amounts of natural gas to the power grid. In particular, under a coordinated setting, gas-fired generators act as distributed demand response resources that can be controlled by the gas pipeline operator. This enables more efficient control of pressures and flows in space and time and overcomes delivery bottlenecks. We demonstrate that the additional flexibility not only can benefit the gas operator but can also lead to more efficient power grid operations and results in increased revenue for gas-fired power plants. We also use the optimal control model to analyze computational issues arising in these complex models. We demonstrate that the interconnected Illinois system with full physical resolution gives rise to a highly nonlinear optimal control problem with 4400 differential and algebraic equations and 1040 controls that can be solved with a state-of-the-art sparse optimization solver. © 2016 Elsevier Ltd. All rights reserved.

  11. Metabolic network visualization eliminating node redundance and preserving metabolic pathways

    PubMed Central

    Bourqui, Romain; Cottret, Ludovic; Lacroix, Vincent; Auber, David; Mary, Patrick; Sagot, Marie-France; Jourdan, Fabien

    2007-01-01

    Background: The tools available to draw and manipulate representations of metabolism are usually restricted to metabolic pathways. This limitation becomes problematic when studying processes that span several pathways. The various attempts that have been made to draw genome-scale metabolic networks are confronted with two shortcomings: (1) they do not use contextual information, which leads to dense, hard-to-interpret drawings; (2) they impose very constrained standards, which implies, in particular, duplicating nodes, making topological analysis considerably more difficult. Results: We propose a method, called MetaViz, which makes it possible to draw a genome-scale metabolic network while also taking into account its structuration into pathways. This method consists of two steps: a clustering step, which addresses the pathway overlapping problem, and a drawing step, which consists of drawing the clustered graph and each cluster. Conclusion: The method we propose is original and addresses new drawing issues arising from the no-duplication constraint. We do not propose a single drawing but rather several alternative ways of presenting metabolism depending on the pathway on which one wishes to focus. We believe that this provides a valuable tool to explore the pathway structure of metabolism. PMID:17608928
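
    The two-step structure (cluster, then draw) can be sketched with networkx. The tiny metabolite names and pathway tags below are invented for illustration, and a real genome-scale network would of course be far larger; this is not MetaViz itself.

    ```python
    # Sketch: group nodes by pathway without duplicating shared metabolites,
    # then lay out each cluster independently and offset the clusters.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("glc", "g6p"), ("g6p", "f6p"), ("f6p", "pyr"),  # glycolysis
                      ("g6p", "6pg"), ("6pg", "ru5p")])                # pentose phosphate
    pathway = {"glc": "gly", "g6p": "gly", "f6p": "gly", "pyr": "gly",
               "6pg": "ppp", "ru5p": "ppp"}   # g6p is shared but gets one tag

    # Step 1: clustering (overlap resolved by a single pathway tag per node).
    clusters = {}
    for node, pw in pathway.items():
        clusters.setdefault(pw, []).append(node)

    # Step 2: drawing, one independent layout per cluster, placed side by side.
    pos = {}
    for i, (pw, nodes) in enumerate(sorted(clusters.items())):
        sub_pos = nx.spring_layout(G.subgraph(nodes), seed=1)
        for node, (px, py) in sub_pos.items():
            pos[node] = (px + 3.0 * i, py)
    print(pos)
    ```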

  12. Regional Climate Simulations over North America: Interaction of Local Processes with Improved Large-Scale Flow.

    NASA Astrophysics Data System (ADS)

    Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan

    2005-04-01

    The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or from dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and in surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected for circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries where the model was forced by the reanalysis.
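
    Spectral nudging of this kind can be illustrated in a few lines: only Fourier modes with wavelengths longer than a cutoff are relaxed toward the driving analysis, while smaller scales evolve freely. The sketch below is a generic single-field, single-step version with invented grid spacing and relaxation time, not the RAMS implementation.

    ```python
    # Sketch: spectral nudging of a 2-D model field toward driving data at
    # large scales only (wavelengths longer than cutoff_wavelength).
    import numpy as np

    def spectral_nudge(field, driving, dx, dt, cutoff_wavelength, tau):
        kx = np.fft.fftfreq(field.shape[0], d=dx)   # cycles per unit length
        ky = np.fft.fftfreq(field.shape[1], d=dx)
        K = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
        large_scale = K <= 1.0 / cutoff_wavelength  # mask of nudged modes
        F, G = np.fft.fft2(field), np.fft.fft2(driving)
        F[large_scale] += (dt / tau) * (G[large_scale] - F[large_scale])
        return np.real(np.fft.ifft2(F))

    # Toy demo on a 50-km grid; all numbers are illustrative only.
    rng = np.random.default_rng(1)
    model = rng.standard_normal((128, 128))
    analysis = rng.standard_normal((128, 128))
    nudged = spectral_nudge(model, analysis, dx=50.0, dt=600.0,
                            cutoff_wavelength=2500.0, tau=6 * 3600.0)
    ```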

  13. Lab-scale Lidar Sensing of Diesel Engines Exhausts

    NASA Technical Reports Server (NTRS)

    Borghese, A.

    1992-01-01

    Combustion technology and its environmental concerns are being considered with increasing attention, not only for global-scale effects but also for toxicological implications, particularly in the life conditions of traffic-congested areas and industrial sites. Majority combustion by-products (CO, NO_x) and unburned hydrocarbons (HC) are already subject to increasingly severe regulations; however, other, non-regulated minority species, mainly soot and heavy aromatic molecules, involve higher health risks, as they are suspected to be agents of serious pathologies and even mutagenic effects. This is but one of the reasons why much research work is being carried out worldwide on the physical properties of these substances. Correspondingly, the need arises to detect their presence in urban environments with as high a sensitivity as is required by their low concentrations, proper time and space resolutions, and 'real-time' capabilities. Lidar techniques are excellent candidates for this purpose, although severe constraints limit their applicability, eye-safety problems and aerosol Mie scattering uncertainties above all. At CNR's Istituto Motori in Naples, a Lidar-like diagnostic system is being developed, aimed primarily at monitoring the dynamic behavior of internal combustion engines, particularly diesel exhausts, and at exploring the feasibility of a so-called 'Downtown Lidar'.

  14. The minimal axion minimal linear σ model

    NASA Astrophysics Data System (ADS)

    Merlo, L.; Pobbe, F.; Rigolin, S.

    2018-05-01

    The minimal SO(5)/SO(4) linear σ model is extended by including an additional complex scalar field, singlet under the global SO(5) and the Standard Model gauge symmetries. The presence of this scalar field creates the conditions to generate an axion à la KSVZ, providing a solution to the strong CP problem, or an axion-like particle. Different choices for the PQ charges are possible and lead to physically distinct Lagrangians. The internal consistency of each model necessarily requires the study of the scalar potential describing the SO(5) → SO(4), electroweak, and PQ symmetry breaking. A single minimal scenario is identified and the associated scalar potential is minimised, including counterterms needed to ensure one-loop renormalizability. In the allowed parameter space, phenomenological features of the scalar degrees of freedom, of the exotic fermions, and of the axion are illustrated. Two distinct possibilities for the axion arise: either it is a QCD axion with an associated scale larger than ~10^5 TeV, therefore falling in the category of invisible axions; or it is a more massive axion-like particle, such as a 1 GeV axion with an associated scale of ~200 TeV, that may show up in collider searches.

  15. Did ice streams carve martian outflow channels?

    USGS Publications Warehouse

    Lucchitta, B.K.; Anderson, D.M.; Shoji, H.

    1981-01-01

    Outflow channels on Mars [1] are long sinuous linear depressions that occur mostly in the equatorial area (±30° lat.). They differ from small valley networks [2] by being larger and arising full born from chaotic terrains. Outflow channels resemble terrestrial stream beds, and their origin has generally been attributed to water [3-5] in catastrophic floods [6,7] or mudflows [8]. The catastrophic-flood hypothesis is derived primarily from the morphological similarities of martian outflow channels and features created by the catastrophic Spokane flood that formed the Washington scablands. These similarities have been documented extensively [3,6,7], but differences of scale remain a major problem: martian channel features are on average much larger than their proposed terrestrial analogues. We examine here the problem of channel origin from the perspective of erosional characteristics and the resultant landforms created by former and present-day ice streams and glaciers on Earth. From morphologic comparisons, an ice-stream origin seems equally well suited to explain the occurrence and form of the outflow channels on Mars, and in contrast with the hydraulic hypothesis, ice streams and ice sheets produce terrestrial features of the same scale as those observed on Mars. © 1981 Nature Publishing Group.

  16. Numerical solutions of acoustic wave propagation problems using Euler computations

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.

    1984-01-01

    This paper reports solution procedures for problems arising from the study of engine inlet wave propagation. The first problem is the study of sound waves radiated from cylindrical inlets. The second one is a quasi-one-dimensional problem to study the effect of nonlinearities and the third one is the study of nonlinearities in two dimensions. In all three problems Euler computations are done with a fourth-order explicit scheme. For the first problem results are shown in agreement with experimental data and for the second problem comparisons are made with an existing asymptotic theory. The third problem is part of an ongoing work and preliminary results are presented for this case.

  17. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
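
    The random-finite-difference (RFD) idea is simple to show in one dimension. The sketch below is a toy, not the paper's rigid-multiblob method: a single particle with an invented wall-hindered mobility M(x), where the RFD term gives an unbiased estimate of the stochastic drift kT dM/dx without analytic derivatives or resistance solves. All parameter values are assumptions.

    ```python
    # Sketch: one Euler-Maruyama Brownian-dynamics step in 1-D with a
    # height-dependent mobility near a bottom wall, using a random finite
    # difference (RFD) for the configuration-dependent drift kT dM/dx.
    import numpy as np

    kT, F, dt, delta = 1.0, -1.0, 1e-3, 1e-4  # gravity-like force F, RFD step
    M = lambda x: x / (x + 1.0)               # toy hindered mobility, M -> 0 at wall

    def em_rfd_step(x, rng):
        w = rng.standard_normal()
        # E[(M(x + delta*w) - M(x)) * w] / delta = M'(x): unbiased RFD drift
        drift_rfd = (kT / delta) * (M(x + delta * w) - M(x)) * w
        noise = np.sqrt(2.0 * kT * M(x) * dt) * rng.standard_normal()
        return x + M(x) * F * dt + drift_rfd * dt + noise

    rng = np.random.default_rng(2)
    x = 1.0
    for _ in range(100_000):
        x = max(em_rfd_step(x, rng), 1e-6)    # crude reflection at the wall
    print(x)
    ```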

  18. Sub-seasonal to Seasonal Prediction in the Midst of Uncertainties: Recognizing the Music in What May Seem Like Noises Across this scale

    NASA Astrophysics Data System (ADS)

    Tiwari, P.; Kar, S. C.; Dey, S.; Mohanty, U. C.

    2016-12-01

    Sub-seasonal to Seasonal (S2S) prediction has long been considered a predictability desert, and forecasting across this scale has received much less attention than the medium-range and seasonal scales. Hoskins (2013) has suggested that there is an urgent need to understand the phenomena and structures that provide the potential sources of predictability across this scale. Given the problems on this scale and their implications for various sectors (e.g. agriculture and food security, water, and health), the question arises whether strategies of S2S prediction that have proved useful elsewhere can be adapted to the North Indian plains and the complex terrain of the Himalayas as well. The aims of the present study are three-fold. Firstly, it attempts to assess the sub-seasonal to seasonal predictive skill of six general circulation models (GCMs) for a period of 31 years (1982-2012) and to identify forecast windows of opportunity. Secondly, an attempt has been made to reproduce the information of the GCMs at higher resolution using both dynamical and statistical downscaling approaches along with bias correction. Thirdly, an attempt has also been made to use the S2S prediction for water cycle studies, as the lives of millions of people in the North Indian plains depend on water availability from rivers of western Himalayan origin. Finally, the plausible reasons for model failure and the potential sources of predictability across this scale are highlighted, along with how the S2S framework has played a key role in addressing such issues. Key words: S2S, predictability, downscaling, water cycle.

  19. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.

  20. Power spectrum for the small-scale Universe

    NASA Astrophysics Data System (ADS)

    Widrow, Lawrence M.; Elahi, Pascal J.; Thacker, Robert J.; Richardson, Mark; Scannapieco, Evan

    2009-08-01

    The first objects to arise in a cold dark matter (CDM) universe present a daunting challenge for models of structure formation. In the ultra small-scale limit, CDM structures form nearly simultaneously across a wide range of scales. Hierarchical clustering no longer provides a guiding principle for theoretical analyses and the computation time required to carry out credible simulations becomes prohibitively high. To gain insight into this problem, we perform high-resolution (N = 720^3-1584^3) simulations of an Einstein-de Sitter cosmology where the initial power spectrum is P(k) ~ k^n, with -2.5 <= n <= -1. Self-similar scaling is established for n = -1 and -2 more convincingly than in previous, lower resolution simulations and, for the first time, self-similar scaling is established for an n = -2.25 simulation. However, finite box-size effects induce departures from self-similar scaling in our n = -2.5 simulation. We compare our results with the predictions for the power spectrum from (one-loop) perturbation theory and demonstrate that the renormalization group approach suggested by McDonald improves perturbation theory's ability to predict the power spectrum in the quasi-linear regime. In the non-linear regime, our power spectra differ significantly from the widely used fitting formulae of Peacock & Dodds and Smith et al., and a new fitting formula is presented. Implications of our results for the stable clustering hypothesis versus halo model debate are discussed. Our power spectra are inconsistent with predictions of the stable clustering hypothesis in the high-k limit and lend credence to the halo model. Nevertheless, the fitting formula advocated in this paper is purely empirical and not derived from a specific formulation of the halo model.
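
    The basic diagnostic behind such scaling tests, measuring P(k) by shell-averaging |delta_k|^2, is easy to sketch. The toy below generates a 2-D Gaussian random field with P(k) ~ k^n and recovers the slope n; the paper's simulations are 3-D N-body runs, so everything here (grid size, binning, fit range) is illustrative only.

    ```python
    # Sketch: make a Gaussian random field with power-law spectrum P(k) ~ k^n,
    # then measure P(k) back by binning |delta_k|^2 in wavenumber shells.
    import numpy as np

    n, N = -2.0, 256
    kx = np.fft.fftfreq(N) * N
    K = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    K[0, 0] = 1.0                                  # avoid division by zero at k=0
    rng = np.random.default_rng(3)
    phases = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    delta_k = phases * np.sqrt(K ** n)             # amplitude ~ sqrt(P(k))
    delta_k[0, 0] = 0.0                            # zero-mean field
    field = np.real(np.fft.ifft2(delta_k))

    power = np.abs(np.fft.fft2(field)) ** 2        # measured |delta_k|^2
    bins = np.arange(1, N // 2)
    Pk = np.array([power[(K >= b) & (K < b + 1)].mean() for b in bins])
    slope = np.polyfit(np.log(bins[5:50]), np.log(Pk[5:50]), 1)[0]
    print(slope)                                   # ~ -2, recovering n
    ```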

  1. Mobile Abuse in University Students and profiles of victimization and aggression.

    PubMed

    Polo Del Río, Mª Isabel; Mendo Lázaro, Santiago; León Del Barco, Benito; Felipe Castaño, Elena

    2017-09-29

    The vast majority of young people have mobile phones. The mobile phone has become a must-have item in their lives, with traditional socialization spaces displaced by virtual ones. They use their mobile phones for many hours a day, to the detriment of their psychological and social functioning, showing greater vulnerability to abusive or excessive use and a greater likelihood of becoming problematic or addicted users. This paper aims to study the impact of mobile phone abuse in a sample of college students, assessing the social, personal, and communicational realms, deepening understanding of the different cyberbullying profiles, and analyzing who has more personal and social problems when using mobiles: victims or aggressors. Whether the number of hours of mobile phone use has an effect on these problems is also explored. The sample (1,200 students) was selected by multistage cluster sampling among the faculties of the University of Extremadura. Data were obtained through the Victimization through the Mobile Phone (CYB-VIC) and Aggression through the Mobile Phone (CYB-AGRES) scales, and the Questionnaire of Experiences Related to Mobile Phones (CERM). The results show that mobile phone abuse generates conflicts in young people of both sexes, although girls have more communication and emotional problems than boys. In addition, age, field of knowledge, victim/aggressor profile, and hours of mobile phone use are crucial variables in the communication and emotional conflicts arising from the misuse of mobile phones.

  2. GPU acceleration of Dock6's Amber scoring computation.

    PubMed

    Yang, Hailong; Zhou, Qiongqiong; Li, Bo; Wang, Yongjian; Luan, Zhongzhi; Qian, Depei; Li, Hanlu

    2010-01-01

    Addressing the problem of virtual screening is a long-term goal in the drug discovery field, which, if properly solved, can significantly shorten the R&D cycle of new drugs. The scoring functionality that evaluates the fitness of the docking result is one of the major challenges in virtual screening. In general, scoring functionality in docking requires a large amount of floating-point calculations, which usually take several weeks or even months to finish. This time-consuming procedure is unacceptable, especially when a highly fatal and infectious virus such as SARS or H1N1 arises, which forces the scoring task to be done in a limited time. This paper presents how to leverage the computational power of the GPU to accelerate Dock6's (http://dock.compbio.ucsf.edu/DOCK_6/) Amber (J. Comput. Chem. 25: 1157-1174, 2004) scoring with the NVIDIA CUDA (NVIDIA Corporation Technical Staff, Compute Unified Device Architecture - Programming Guide, NVIDIA Corporation, 2008) platform. We also discuss many factors that greatly influence performance after porting the Amber scoring to the GPU, including thread management, data transfer, and divergence hiding. Our experiments show that the GPU-accelerated Amber scoring achieves a 6.5× speedup with respect to the original version running on an AMD dual-core CPU for the same problem size. This acceleration makes Amber scoring more competitive and efficient for large-scale virtual screening problems.

  3. Optimal control of a harmonic oscillator: Economic interpretations

    NASA Astrophysics Data System (ADS)

    Janová, Jitka; Hampel, David

    2013-10-01

    Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criterion function and Lagrange multipliers in the profit maximization problem is well known. Using a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for the harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these in other problems.
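
    On the quadratic-cost version of such a problem, the machinery can be made concrete. The sketch below is an assumption-laden stand-in rather than the paper's model: it solves a linear-quadratic regulator for a harmonic oscillator with scipy's Riccati solver, and the costate, proportional to P x, carries the usual shadow-price reading of the multipliers. The weights Q, R and the frequency are invented.

    ```python
    # Sketch: LQR control of a harmonic oscillator x'' + w^2 x = u, read as
    # a stylized business-cycle stabilization problem.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    w = 1.0
    A = np.array([[0.0, 1.0], [-w * w, 0.0]])   # state: (x, x')
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)                               # cost of cyclical deviation
    R = np.array([[1.0]])                       # cost of policy intervention u

    P = solve_continuous_are(A, B, Q, R)        # algebraic Riccati equation
    K = np.linalg.inv(R) @ B.T @ P              # optimal feedback u = -K x

    x = np.array([1.0, 0.0])                    # initial boom of size 1
    for _ in range(5000):                       # forward Euler, dt = 0.01
        u = -K @ x
        x = x + 0.01 * (A @ x + B @ u)
    print(x, P @ x)                             # state decays; P x ~ shadow price
    ```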

  4. Space shuttle safety - A hybrid vehicle breeds new problems.

    NASA Technical Reports Server (NTRS)

    Pinkel, I. I.

    1971-01-01

    Discussion of a few novel problems raised by the design and flight plan of the space shuttle and by the dangerous cargos it might carry. Among the problems cited are those connected with the inspection of the bearings of the propellant turbopumps, particularly those of the hydrogen pump, for evidence of spalling, as well as problems arising in the inspection of the high-temperature parts of the combustor and turbine section of the airbreathing turbofan for shuttle booster and orbiter, and problems resulting from the possibility of fire hazard due to spontaneous ignition of fuel vapor in the fuel tank vapor space.

  5. Solar array stepping problems in satellites and solutions

    NASA Astrophysics Data System (ADS)

    Maharana, P. K.; Goel, P. S.

    1992-01-01

    The dynamics problems arising due to stepping motion of the solar arrays of spacecraft are studied. To overcome these problems, design improvements in the drive logic based on the phase plane analysis are suggested. The improved designs are applied to the Solar Array Drive Assembly (SADA) of IRS-1B and INSAT-2A satellites. In addition, an alternate torquing strategy for very successful slewing of the arrays, and with minimum excitation of flexible modes, is proposed.

  6. Efficient Preconditioning for the p-Version Finite Element Method in Two Dimensions

    DTIC Science & Technology

    1989-10-01

    paper, we study fast parallel preconditioners for systems of equations arising from the p-version finite element method. The p-version finite element...computations and the solution of a relatively small global auxiliary problem. We study two different methods. In the first (Section 3), the global...[20], will be studied in the next section. Problem (3.12) is obviously much more easily solved than the original problem and the procedure is highly

  7. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.
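
    A minimal version of the approach, a weighted-sum aggregating function minimized by a simple adaptive random search, looks like the sketch below. The two-objective toy problem and all tuning constants are assumptions for illustration; the paper's actual objectives concern quality retention and processing time in thermal sterilization.

    ```python
    # Sketch: scalarize two objectives with a weighted sum, then minimize
    # by adaptive random search (step size grows on success, shrinks on failure).
    import numpy as np

    f1 = lambda x: x ** 2                # stand-in objective 1
    f2 = lambda x: (x - 2.0) ** 2        # stand-in objective 2

    def aggregate(x, w):                 # weighted-sum aggregating function
        return w * f1(x) + (1.0 - w) * f2(x)

    def adaptive_random_search(w, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        x, step = rng.uniform(-5, 5), 2.0
        for _ in range(iters):
            cand = x + step * rng.standard_normal()
            if aggregate(cand, w) < aggregate(x, w):
                x, step = cand, step * 1.1      # success: expand the step
            else:
                step *= 0.99                    # failure: contract the step
        return x

    # Sweeping the weight traces out the Pareto set x* = 2(1 - w) in [0, 2].
    for w in (0.1, 0.5, 0.9):
        print(w, round(adaptive_random_search(w), 3))
    ```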

  8. Steiner trees and spanning trees in six-pin soap films

    NASA Astrophysics Data System (ADS)

    Dutta, Prasun; Khastgir, S. Pratik; Roy, Anushree

    2010-02-01

    The problem of finding minimum (local as well as absolute) path lengths joining given points (or terminals) on a plane is known as the Steiner problem. The Steiner problem arises in finding the minimum total road length joining several towns and cities. We study the Steiner tree problem using six-pin soap films. Experimentally, we observe spanning trees as well as Steiner trees partly by varying the pin diameter. We propose a possibly exact expression for the length of a spanning tree or a Steiner tree, which fails mysteriously in certain cases.
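
    For three pins the single Steiner point is the classical Fermat point, which Weiszfeld's iteration finds quickly; the sketch below handles only this smallest case, since six-pin films such as those in the paper can contain several Steiner points. The pin coordinates are invented for the demo.

    ```python
    # Sketch: Weiszfeld iteration for the Fermat (Steiner) point of three pins.
    import numpy as np

    def fermat_point(pins, iters=200):
        p = pins.mean(axis=0)                    # start at the centroid
        for _ in range(iters):
            d = np.linalg.norm(pins - p, axis=1)
            w = 1.0 / np.maximum(d, 1e-12)       # inverse-distance weights
            p = (pins * w[:, None]).sum(axis=0) / w.sum()
        return p

    # Unit equilateral triangle: Steiner tree length sqrt(3) ~ 1.732,
    # versus 2.0 for the minimum spanning tree through the pins alone.
    pins = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    p = fermat_point(pins)
    print(p, np.linalg.norm(pins - p, axis=1).sum())
    ```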

  9. Appropriate complexity for the prediction of coastal and estuarine geomorphic behaviour at decadal to centennial scales

    NASA Astrophysics Data System (ADS)

    French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter

    2016-03-01

    Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed 'appropriate complexity modelling' of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.

  10. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  11. Managing auricular haematoma to prevent 'cauliflower ear'.

    PubMed

    Summers, Anthony

    2012-09-01

    This article describes the typical signs of auricular haematoma, how people who have the condition should be treated in emergency departments and the problems that can arise if they are managed inappropriately.

  12. Astrophysical constraints on Planck scale dissipative phenomena.

    PubMed

    Liberati, Stefano; Maccione, Luca

    2014-04-18

    The emergence of a classical spacetime from any quantum gravity model is still a subtle and only partially understood issue. If indeed spacetime is arising as some sort of large scale condensate of more fundamental objects, then it is natural to expect that matter, being a collective excitation of the spacetime constituents, will present modified kinematics at sufficiently high energies. We consider here the phenomenology of the dissipative effects necessarily arising in such a picture. Adopting dissipative hydrodynamics as a general framework for the description of the energy exchange between collective excitations and the spacetime fundamental degrees of freedom, we discuss how rates of energy loss for elementary particles can be derived from dispersion relations and used to provide strong constraints on the base of current astrophysical observations of high-energy particles.

  13. Building Language Through Conflict Resolution: Discussing Problems Enriches Language While Leading to Solutions

    ERIC Educational Resources Information Center

    Church, Ellen Booth

    2005-01-01

    This brief article describes how classroom group time can be "talk central" for children to discuss problems, imagine solutions, even role-play hypothetical situations. It is often in the safety and support of the large group that children develop the tools they need to learn how to resolve the inevitable conflicts that arise throughout life.…

  14. An Improved Memetic Algorithm for Break Scheduling

    NASA Astrophysics Data System (ADS)

    Widl, Magdalena; Musliu, Nysret

    In this paper we consider solving a complex real-life break scheduling problem. This problem of high practical relevance arises in many working areas, e.g. in air traffic control and other fields where supervisory personnel work. The objective is to assign breaks to employees such that various constraints reflecting legal demands or ergonomic criteria are satisfied and staffing requirement violations are minimised.

  15. A New Method for Calibrating Perceptual Salience across Dimensions in Infants: The Case of Color vs. Luminance

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Blaser, Erik A.; Leslie, Alan M.

    2006-01-01

    We report a new method for calibrating differences in perceptual salience across feature dimensions, in infants. The problem of inter-dimensional salience arises in many areas of infant studies, but a general method for addressing the problem has not previously been described. Our method is based on a preferential looking paradigm, adapted to…

  16. Improving Productivity in the Work Force: Implications for Research and Development in Vocational Education. Occasional Paper No. 72.

    ERIC Educational Resources Information Center

    Sullivan, Dennis J.

    Declining productivity is a major problem in the American economy. Gains in productivity, and finally, actual rates of productivity, have been declining since the late 1960s. Specific problems arising as a result of this decline in productivity are the inflationary pressures that we face as a nation, the increased regulatory environment under…

  17. Collection and curation of IDPs in the stratosphere and below. Part 2: The Greenland and Antarctic ice sheets

    NASA Technical Reports Server (NTRS)

    Maurette, Michel; Hammer, C.; Harvey, R.; Immel, G.; Kurat, G.; Taylor, S.

    1994-01-01

    In a companion paper, Zolensky discusses interplanetary dust particles (IDP's) collected in the stratosphere. Here, we describe the recovery of much larger unmelted to partially melted IDP's from the Greenland and Antarctic ice sheets, and discuss problems arising in their collection and curation, as well as future prospects for tackling these problems.

  18. A Jubilant Connection: General Jubal Early's Troops and the Golden Ratio

    ERIC Educational Resources Information Center

    Bolte, Linda A.; Noon, Tim R., Jr.

    2012-01-01

    The golden ratio, one of the most beautiful numbers in all of mathematics, arises in some surprising places. At first glance, we might expect that a General checking his troops' progress would be nothing more than a basic distance-rate-time problem. However, further exploration reveals a multi-faceted problem, one in which the ratio of rates…

  19. Language Problems and the Final Act. Esperanto Documents, New Series No. 11A.

    ERIC Educational Resources Information Center

    Universal Esperanto Association, Rotterdam (Netherlands).

    The Final Act of the Conference on Security and Co-operation in Europe, linguistic problems in the way of cooperation, language differences and the potential for discriminatory practice, and the need for a new linguistic order are discussed. It is suggested that misunderstandings arising from differences of language reduce the ability of the 35…

  20. An Investigation of Starting Point Preferences in Human Performance on Traveling Salesman Problems

    ERIC Educational Resources Information Center

    MacGregor, James N.

    2014-01-01

    Previous studies have shown that people start traveling salesman problem tours significantly more often from boundary than from interior nodes. There are a number of possible reasons for such a tendency: first, it may arise as a direct result of the processes involved in tour construction; second, boundary points may be perceptually more salient than…

  1. Detection of polarization in the cosmic microwave background using DASI. Degree Angular Scale Interferometer.

    PubMed

    Kovac, J M; Leitch, E M; Pryke, C; Carlstrom, J E; Halverson, N W; Holzapfel, W L

    The past several years have seen the emergence of a standard cosmological model, in which small temperature differences in the cosmic microwave background (CMB) radiation on angular scales of the order of a degree are understood to arise from acoustic oscillations in the hot plasma of the early Universe, arising from primordial density fluctuations. Within the context of this model, recent measurements of the temperature fluctuations have led to profound conclusions about the origin, evolution and composition of the Universe. Using the measured temperature fluctuations, the theoretical framework predicts the level of polarization of the CMB with essentially no free parameters. Therefore, a measurement of the polarization is a critical test of the theory and thus of the validity of the cosmological parameters derived from the CMB measurements. Here we report the detection of polarization of the CMB with the Degree Angular Scale Interferometer (DASI). The polarization is detected with high confidence, and its level and spatial distribution are in excellent agreement with the predictions of the standard theory.

  2. Problem-based learning: Using students' questions to drive knowledge construction

    NASA Astrophysics Data System (ADS)

    Chin, Christine; Chia, Li-Gek

    2004-09-01

    This study employed problem-based learning for project work in a year 9 biology class. The purpose of the study was to investigate (a) students' inspirations for their self-generated problems and questions, (b) the kinds of questions that students asked individually and collaboratively, and (c) how students' questions guided them in knowledge construction. Data sources included observation and field notes, students' written documents, audiotapes and videotapes of students working in groups, and student interviews. Sources of inspiration for students' problems and questions included cultural beliefs and folklore; wonderment about information propagated by advertisements and the media; curiosity arising from personal encounters, family members' concerns, or observations of others; and issues arising from previous lessons in the school curriculum. Questions asked individually pertained to validation of common beliefs and misconceptions, basic information, explanations, and imagined scenarios. The findings regarding questions asked collaboratively are presented as two assertions. Assertion 1 maintained that students' course of learning was driven by their questions. Assertion 2 was that the ability to ask the "right" questions, and the extent to which these could be answered, were important in sustaining students' interest in the project. Implications of the findings for instructional practice are discussed.

  3. The Doubting System 1: Evidence for automatic substitution sensitivity.

    PubMed

    Johnson, Eric D; Tubau, Elisabet; De Neys, Wim

    2016-02-01

    A long prevailing view of human reasoning suggests severe limits on our ability to adhere to simple logical or mathematical prescriptions. A key position assumes these failures arise from insufficient monitoring of rapidly produced intuitions. These faulty intuitions are thought to arise from a proposed substitution process, by which reasoners unknowingly interpret more difficult questions as easier ones. Recent work, however, suggests that reasoners are not blind to this substitution process, but in fact detect that their erroneous responses are not warranted. Using the popular bat-and-ball problem, we investigated whether this substitution sensitivity arises out of an automatic System 1 process or whether it depends on the operation of an executive resource demanding System 2 process. Results showed that accuracy on the bat-and-ball problem clearly declined under cognitive load. However, both reduced response confidence and increased response latencies indicated that biased reasoners remained sensitive to their faulty responses under load. Results suggest that a crucial substitution monitoring process is not only successfully engaged, but that it automatically operates as an autonomous System 1 process. By signaling its doubt along with a biased intuition, it appears System 1 is "smarter" than traditionally assumed.

  4. The cross-over to magnetostrophic convection in planetary dynamo systems

    PubMed Central

    King, E. M.

    2017-01-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o²/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10³ in Earth’s core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations. PMID:28413338
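
    As a quick numerical check of the estimate quoted above, the sketch below evaluates L_X = (Λ_o²/Rm_o)·D with the core values cited in the abstract; the shell depth D is our own rough figure, and all variable names are illustrative.

        # Magnetostrophic cross-over scale L_X = (Lambda_o**2 / Rm_o) * D
        Lambda_o = 1.0   # linear (traditional) Elsasser number, from the abstract
        Rm_o = 1.0e3     # system-scale magnetic Reynolds number, from the abstract
        D = 2.26e6       # outer-core shell depth in metres (our rough assumption)

        L_X = (Lambda_o**2 / Rm_o) * D
        print(f"L_X = {L_X:.2e} m, L_X/D = {L_X/D:.0e}")
        # L_X/D = 1e-03: the 'approximately 1/1000 of the system scale'
        # quoted in the abstract.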

  5. The cross-over to magnetostrophic convection in planetary dynamo systems.

    PubMed

    Aurnou, J M; King, E M

    2017-03-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o²/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10³ in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations.

  6. Precision of natural satellite ephemerides from observations of different types

    NASA Astrophysics Data System (ADS)

    Emelyanov, N. V.

    2017-08-01

    Currently, various types of observations of natural planetary satellites are used to refine their ephemerides. A new type of measurement - determining the instants of apparent satellite encounters - has recently been proposed by Morgado and co-workers. The problem that arises is which type of measurement to choose in order to obtain an ephemeris precision that is as high as possible. The answer can be obtained only by modelling the entire process: observations, obtaining the measured values, refining the satellite motion parameters, and generating the ephemeris. The explicit dependence of the ephemeris precision on observational accuracy as well as on the type of observations is unknown. In this paper, such a dependence is investigated using the Monte Carlo statistical method. The relationship between the ephemeris precision for different types of observations is then assessed. The possibility of using the instants of apparent satellite encounters to obtain an ephemeris is investigated. A method is proposed that can be used to fit the satellite orbital parameters to this type of measurement. It is shown that, in the absence of systematic scale errors in the CCD frame, the use of the instants of apparent encounters leads to less precise ephemerides. However, in the presence of significant scale errors, which is often the case, this type of measurement becomes effective because the instants of apparent satellite encounters do not depend on scale errors.
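
    The Monte Carlo strategy described here can be illustrated with a toy model: repeatedly generate synthetic observations at a given accuracy, refit the model parameters, and take the spread of the fitted parameters as the resulting precision. The sketch below does this for a simple linear model standing in for the orbital parameters; all names and values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 50)             # observation epochs
        true_params = np.array([1.0, 0.3])         # toy 'orbital' parameters
        sigma = 0.05                               # assumed observational accuracy

        A = np.column_stack([np.ones_like(t), t])  # design matrix of the toy model

        fits = []
        for _ in range(2000):                      # Monte Carlo trials
            obs = A @ true_params + rng.normal(0.0, sigma, t.size)
            fit, *_ = np.linalg.lstsq(A, obs, rcond=None)
            fits.append(fit)

        # Empirical parameter precision; repeating this for different
        # measurement types lets one compare the resulting ephemeris precision.
        print(np.std(fits, axis=0))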

  7. A new indicator for the measurement of change with ordinal scores.

    PubMed

    Ferreira, Mario Luiz Pinto; Almeida, Renan Moritz V R; Luiz, Ronir Raggio

    2013-10-01

    Studies on how to better measure change have been published at least since the third decade of the last century, but no general indicator or strategy of measurement is currently agreed upon. The aim of this study is to propose a new indicator, the indicator of positive change, as an option for the assessment of change when ordinal scores are used in pretest and posttest designs. The basic idea is to measure the proportion of possible (positive) change inside a group that can be attributed to an intervention. The approach is based on the joint distribution of the before and after scores (differences), represented by the cells (i, j) of a contingency table m × m (m is the number of classes of the ordinal measurement scale; i and j are the lines and columns of the table, respectively). By convention, higher classes are the most unfavorable on the scale such that subjects that improve "migrate" from the higher to the lower classes as a result of an intervention and vice versa. The introduced indicator offers a new strategy for the analysis of change when dealing with repeated measurements of the same subject, assuming that the measured variable is ordinal (e.g., clinician-rating scales). The presented approach is easily interpretable and avoids the problems that arise, for instance, in those cases where a large concentration of high/low scores is present at the baseline.
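
    A minimal sketch of such an indicator is given below, under the assumption (ours, not necessarily the authors' exact definition) that it is the fraction of subjects who move to a lower, i.e. more favourable, class between pretest and posttest:

        import numpy as np

        def positive_change_indicator(table):
            """Fraction of subjects showing positive change in an m x m
            pre/post contingency table.  Rows i index the pretest class,
            columns j the posttest class; higher classes are the most
            unfavourable, so improvement means j < i (cells below the
            main diagonal).  The normalisation is our assumption."""
            table = np.asarray(table, dtype=float)
            improved = np.tril(table, k=-1).sum()   # moved to a lower class
            return improved / table.sum()

        # 3-class example: 23 of 100 subjects moved to a lower class
        table = [[30,  2,  1],
                 [10, 20,  3],
                 [ 5,  8, 21]]
        print(positive_change_indicator(table))     # 0.23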

  8. Asymptotic theory of time varying networks with burstiness and heterogeneous activation patterns

    NASA Astrophysics Data System (ADS)

    Burioni, Raffaella; Ubaldi, Enrico; Vezzani, Alessandro

    2017-05-01

    The recent availability of large-scale, time-resolved and high quality digital datasets has allowed for a deeper understanding of the structure and properties of many real-world networks. The empirical evidence of a temporal dimension prompted the switch of paradigm from a static representation of networks to a time varying one. In this work we briefly review the framework of time-varying-networks in real world social systems, especially focusing on the activity-driven paradigm. We develop a framework that allows for the encoding of three generative mechanisms that seem to play a central role in the social networks’ evolution: the individual’s propensity to engage in social interactions, its strategy in allocating these interactions among its alters, and the burstiness of interactions amongst social actors. The functional forms and probability distributions encoding these mechanisms are typically data driven. A natural question arises whether different classes of strategies and burstiness distributions, with different local scale behavior and analogous asymptotics, can lead to the same long time and large scale structure of the evolving networks. We consider the problem in its full generality, by investigating and solving the system dynamics in the asymptotic limit, for general classes of ties allocation mechanisms and waiting time probability distributions. We show that the asymptotic network evolution is driven by a few characteristics of these functional forms, that can be extracted from direct measurements on large datasets.
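
    As context, the basic activity-driven mechanism on which these generalisations build can be simulated in a few lines (a sketch in the spirit of Perra et al.; the tie-allocation strategies and burstiness distributions studied in the paper are not modelled here, and all parameter values are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)

        N = 1000                       # number of social actors
        m = 2                          # links created per activation
        # heavy-tailed activity potential, a common data-driven choice
        a = np.clip(0.05 * rng.pareto(2.1, N), 0.0, 1.0)

        mean_edges = 0.0
        for t in range(100):           # the network is rebuilt at every step
            active = np.flatnonzero(rng.random(N) < a)
            # each active node wires m links to uniformly chosen partners
            # (self-loops ignored for brevity)
            edges = [(i, int(rng.integers(N))) for i in active for _ in range(m)]
            mean_edges += len(edges) / 100.0

        print(f"mean edges per step: {mean_edges:.1f}")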

  9. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.

  10. Transient behavior of vertical scaling of mesoscale winds in the light of atmospheric turbulence transfer in and between synoptic and mesoscales

    NASA Astrophysics Data System (ADS)

    Barros, A. P.; Eghdami, M.

    2017-12-01

    High-resolution (∼1 km) numerical weather prediction models are capable of producing atmospheric spectra over synoptic and mesoscale ranges. Nogueira and Barros (2015) showed, using high-resolution simulations in the Andes, that the horizontal scale-invariant behavior of atmospheric wind and water fields in the model is a process-dependent transient property that varies with the underlying dynamics. They found a sharp transition in the scaling parameters between non-convective and convective conditions. Spectral slopes around -2 to -2.3 arise under non-convective or very weak convective conditions, whereas in convective situations the transient scaling exponents remain near -5/3. Based on these results, Nogueira and Barros (2015) proposed a new sub-grid scale parameterization of clouds obtained from coarse resolution states alone. High Reynolds number direct numerical simulations of two-dimensional turbulence show that atmospheric flows involve concurrent direct (downscale) enstrophy transfer in the synoptic scales and inverse (upscale) kinetic energy transfer from the meso- to the synoptic scales. In this study we use an analogy to investigate the transient behavior of kinetic energy spectra of winds over the Andes and Southern Appalachian Mountains, representative of high and middle mountains, respectively. In unstable conditions, and particularly in the Planetary Boundary Layer (PBL), the spectral slopes approach -5/3, associated with the upscale KE turbulence transfer. However, in stable conditions and above the planetary boundary layer, the spectral slopes approach steeper values of about -3, associated with the downscale KE transfer. The underlying topography, surface roughness, diurnal heating and cooling, and moist processes add to the complexity of the problem by introducing anisotropy and sources and sinks of energy. A comprehensive analysis and scaling of flow behavior conditional on stability regime for both KE and moist processes (total water, cloud water, rainfall) is necessary to elucidate scale interactions among different processes.
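
    For reference, spectral slopes of the kind discussed above are typically estimated by a straight-line fit to the power spectrum in log-log space; a minimal sketch on a synthetic signal (not the model output used in the study) is:

        import numpy as np

        rng = np.random.default_rng(2)
        u = np.cumsum(rng.normal(size=4096))     # toy red-noise 'wind' series

        spec = np.abs(np.fft.rfft(u)) ** 2       # power spectrum
        k = np.arange(1, spec.size)              # wavenumbers (skip the mean)

        slope, _ = np.polyfit(np.log(k), np.log(spec[1:]), 1)
        print(f"fitted spectral slope: {slope:.2f}")  # near -2 for a random walk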

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.

    We revisit the possibility that the Planck mass is spontaneously generated in scale-invariant scalar-tensor theories of gravity, typically leading to a “dilaton.” The fifth force, arising from the dilaton, is severely constrained by astrophysical measurements. We explore the possibility that nature is fundamentally scale invariant and argue that, as a consequence, the fifth-force effects are dramatically suppressed and such models are viable. Finally, we discuss possible obstructions to maintaining scale invariance and how these might be resolved.

  12. Lightweight and Statistical Techniques for Petascale Debugging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extend previous work on the Stack Trace Analysis Tool (STAT), that has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office Of Science application developers relied either on primitive manual debugging techniques based on printf or they use tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale and substantial effort and computation cycles are wasted in either reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduced the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis to a small set of nodes or by identifying equivalence classes of nodes and sampling our debug targets from them. We implemented these techniques as lightweight tools that efficiently work on the full scale of the target machine. We explored four lightweight debugging refinements: generic classification parameters, such as stack traces, application-specific classification parameters, such as global variables, statistical data acquisition techniques and machine learning based approaches to perform root cause analysis. Work done under this project can be divided into two categories, new algorithms and techniques for scalable debugging, and foundation infrastructure work on our MRNet multicast-reduction framework for scalability, and Dyninst binary analysis and instrumentation toolkits.

  13. Understanding Forces: What's the Problem?

    ERIC Educational Resources Information Center

    Kibble, Bob

    2006-01-01

    Misconceptions about forces are very common and seem to arise from everyday experience and use of words. Ways to improve students' understanding of forces, as used in a recent IOP CD-ROM, are discussed here.

  14. A Hidden Surface Algorithm for Computer Generated Halftone Pictures

    DTIC Science & Technology

    This report describes the conversion of data describing three-dimensional objects into data that can be used to generate two-dimensional halftone images. It deals with some problems that arise in black-and-white and color shading.

  15. Ethical challenges in conducting clinical research in lung cancer

    PubMed Central

    Tod, Angela M.

    2016-01-01

    The article examines ethical challenges that arise in clinical lung cancer research, focusing on design, recruitment, conduct and dissemination. Design: problems related to equipoise can arise in lung cancer studies. Equipoise is an ethics precondition for RCTs and exists where there is insufficient evidence to decide which of two or more treatments is best. Difficulties arise in deciding what level of uncertainty constitutes equipoise and who should be in equipoise; for example, patients might not be in equipoise even where clinicians are. Patient and public involvement (PPI) can reduce but not remove the problems. Recruitment: (I) lung cancer studies can be complex, making it difficult to obtain good quality consent. Some techniques can help, such as continuous consent. But researchers should not expect consent to be the sole protection for participants’ welfare. This protection is primarily done elsewhere in the research process, for example, in ethics review; (II) the problem of desperate volunteers: some patients only consent to a trial because it gives them a 50/50 option of the treatment they want and can be disappointed or upset if randomised to the other arm. This is not necessarily unfair, given clinical equipoise. However, it should be avoided where possible, for example, by using alternative trial designs; (III) the so-called problem of therapeutic misconception: this is the idea that patients are mistaken if they enter trials believing this to be in their clinical best interest. We argue the problem is misconceived and relates only to certain health systems. Conduct: lung cancer trials face standard ethical challenges with regard to trial conduct. PPI could be used in decisions about criteria for stopping rules. Dissemination: as in other trial areas, it is important that all results, including negative ones, are reported. We argue also that the role of PPI with regard to dissemination is currently under-developed. PMID:27413698

  16. Extending substructure based iterative solvers to multiple load and repeated analyses

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1993-01-01

    Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--also often called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
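
    The flavour of the subspace-reuse idea can be conveyed with a small sketch: store the A-orthonormalised search directions generated while solving the first right-hand side, then project each new right-hand side onto that subspace to obtain a good starting point before iterating on the supplementary space. This is only an illustration of the general principle, with illustrative names, not Farhat's actual domain decomposition algorithm.

        import numpy as np

        def seeded_cg(A, b, P=None, tol=1e-10, maxit=500):
            """Conjugate gradients for SPD A.  If P holds A-orthonormal
            search directions from an earlier solve, the minimiser of the
            energy functional over span(P) is simply P @ (P.T @ b), which
            seeds the new solve (a sketch of minimising over K-orthogonal
            and supplementary subspaces)."""
            x = P @ (P.T @ b) if P is not None else np.zeros_like(b)
            r = b - A @ x
            p, rs, dirs = r.copy(), r @ r, []
            for _ in range(maxit):
                if np.sqrt(rs) < tol:
                    break
                Ap = A @ p
                alpha = rs / (p @ Ap)
                dirs.append(p / np.sqrt(p @ Ap))  # A-orthonormalised direction
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x, (np.column_stack(dirs) if dirs else P)

        rng = np.random.default_rng(3)
        A = np.diag(np.linspace(1.0, 10.0, 200))  # toy SPD 'stiffness' matrix
        b1, b2 = rng.normal(size=(2, 200))
        x1, P = seeded_cg(A, b1)                  # first load case
        x2, _ = seeded_cg(A, b2, P=P)             # second load case starts closer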

  17. New Physics Beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Cai, Haiying

    In this thesis we discuss several extensions of the standard model, with an emphasis on the hierarchy problem. The hierarchy problem related to the Higgs boson mass is a strong indication of new physics beyond the Standard Model. In the literature, several mechanisms, e.g., supersymmetry (SUSY), the little Higgs and extra dimensions, are proposed to explain why the Higgs mass can be stabilized to the electroweak scale. In the Standard Model, the largest quadratically divergent contribution to the Higgs mass-squared comes from the top quark loop. We consider a few novel possibilities on how this contribution is cancelled. In the standard SUSY scenario, the quadratic divergence from the fermion loops is cancelled by the scalar superpartners and the SUSY breaking scale determines the masses of the scalars. We propose a new SUSY model, where the superpartner of the top quark is spin-1 rather than spin-0. In little Higgs theories, the Higgs field is realized as a pseudo-Goldstone boson in a nonlinear sigma model. The smallness of its mass is protected by the global symmetry. As a variation, we put the little Higgs into an extra dimensional model where the quadratically divergent top loop contribution to the Higgs mass is cancelled by an uncolored heavy "top quirk" charged under a different SU(3) gauge group. Finally, we consider a supersymmetric warped extra dimensional model where the superpartners have continuum mass spectra. We use the holographic boundary action to study how a mass gap can arise to separate the zero modes from continuum modes. Such extensions of the Standard Model have novel signatures at the Large Hadron Collider.

  18. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.

  19. Using process groups to implement failure detection in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1991-01-01

    Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problem is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution is then presented for this problem.

  20. Compact scheme for systems of equations applied to fundamental problems of mechanics of continua

    NASA Technical Reports Server (NTRS)

    Klimkowski, Jerzy Z.

    1990-01-01

    A compact scheme formulation was used in the treatment of boundary conditions for a system of coupled diffusion and Poisson equations. Models and practical solutions of specific engineering problems arising in solid mechanics, chemical engineering, heat transfer and fluid mechanics are described and analyzed for efficiency and accuracy. Only 2-D cases are discussed, and a new method of numerical treatment of boundary conditions common in the fundamental problems of mechanics of continua is presented.

  1. Method of Reproduction of the Luminous Flux of the LED Light Sources by a Spherical Photometer

    NASA Astrophysics Data System (ADS)

    Huriev, M.; Neyezhmakov, P.

    2018-02-01

    With the transition to energy-efficient, temporally stable light-emitting diode (LED) lighting, a problem arises of ensuring the traceability of results of measurement of the characteristics of light sources. The problem is related to existing measurement standards of luminous flux based on spherical photometers optimized for reference incandescent lamps, whose relative spectral characteristic differs from the spectrum of the LEDs. We propose a method for reproduction of the luminous flux which solves this problem.

  2. Observation of quantum criticality with ultracold atoms in optical lattices

    NASA Astrophysics Data System (ADS)

    Zhang, Xibo

    As biological problems are becoming more complex and data are growing at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two such fields, comparative genomics and epigenomics, and employs a variety of computational techniques to address the problems. One fundamental question in the studies of chromosome evolution is whether the rearrangement breakpoints are happening at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon, and show the analyses that support the more recently proposed fragile breakage model as opposed to the conventional random breakage models for chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architectures, comparative genomics, and evolutionary genomics. The previous synteny block reconstruction algorithms could not be scaled to the large number of mammalian genomes being sequenced; neither did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and the evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm based on the A-Bruijn graph framework that overcomes these shortcomings. In epigenome sequencing, a sample may contain a mixture of epigenomes and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes and metagenomic sequencing, share a similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both combinatorial and statistical angles. First, we describe a theoretical framework. A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach, and apply the model to the detection of allele-specific methylations.

  3. An Artificial Neural Network Controller for Intelligent Transportation Systems Applications

    DOT National Transportation Integrated Search

    1996-01-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example of utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems applications.

  4. Humane Ethics in Veterinary Education

    ERIC Educational Resources Information Center

    Fox, M. W.

    1978-01-01

    This discussion focuses on the problem faced by biomedical students who are learning objective, factual information and techniques without being given the opportunity to consider the many ethical dilemmas and moral questions that will arise after graduation. (LBH)

  5. [Assisted reproductive technologies and ethics].

    PubMed

    Belaisch-Allart, Joëlle

    2014-01-01

    Since the first birth after in vitro fertilization, more than 5 million IVF babies have been born in the world. Assisted reproductive technologies captivate the public; they allow maternity without an ovary (oocyte donation), without a uterus (surrogate mother), paternity without spermatozoids (sperm donation), parentality without limits of age, parentality after death, and homoparentality. These technologies raise a number of ethical questions; the problem is that the answers are not the same all around the world, as laws are based on morals, beliefs, faiths, and convictions. These variations themselves raise questions about the value of such non-universal answers.

  6. Reflections on psychoanalytic treatment of Lubavitch Chassidim couples: working with a culturally divergent population.

    PubMed

    Schulman, Martin A; Kaplan, Ricki S

    2014-08-01

    Chassidic Jews create separate developmental lines for males and females beginning at three years of age. Since early marriages are encouraged and there is minimal contact between the sexes prior to marriage, problems inevitably arise in relationships. This article discusses both newlywed and long-term married Lubavitch Chassidim in couples treatment with secular analysts, parameters necessary for successful treatment, and countertransferences that arise. It is part of an ongoing series of publications based on the authors' decade-long psychoanalytic work with this population.

  7. Optical Asymmetry and Nonlinear Light Scattering from Colloidal Gold Nanorods.

    PubMed

    Lien, Miao-Bin; Kim, Ji-Young; Han, Myung-Geun; Chang, You-Chia; Chang, Yu-Chung; Ferguson, Heather J; Zhu, Yimei; Herzing, Andrew A; Schotland, John C; Kotov, Nicholas A; Norris, Theodore B

    2017-06-27

    A systematic study is presented of the intensity-dependent nonlinear light scattering spectra of gold nanorods under resonant excitation of the longitudinal surface plasmon resonance (SPR). The spectra exhibit features due to coherent second and third harmonic generation as well as a broadband feature that has been previously attributed to multiphoton photoluminescence arising primarily from interband optical transitions in the gold. A detailed study of the spectral dependence of the scaling of the scattered light with excitation intensity shows unexpected scaling behavior of the coherent signals, which is quantitatively accounted for by optically induced damping of the SPR mode through a Fermi liquid model of the electronic scattering. The broadband feature is shown to arise not from luminescence, but from scattering of the second-order longitudinal SPR mode with the electron gas, where efficient excitation of the second order mode arises from an optical asymmetry of the nanorod. The electronic-temperature-dependent plasmon damping and the Fermi-Dirac distribution together determine the intensity dependence of the broadband emission, and the structure-dependent absorption spectrum determines the spectral shape through the fluctuation-dissipation theorem. Hence a complete self-consistent picture of both coherent and incoherent light scattering is obtained with a single set of physical parameters.

  8. Timescale bias in measuring river migration rate

    NASA Astrophysics Data System (ADS)

    Donovan, M.; Belmont, P.; Notebaert, B.

    2016-12-01

    River channel migration plays an important role in sediment routing, water quality, riverine ecology, and infrastructure risk assessment. Migration rates may change in time and space due to systematic changes in hydrology, sediment supply, vegetation, and/or human land and water management actions. The ability to make detailed measurements of lateral migration over a wide range of temporal and spatial scales has been enhanced by the increased availability of historical landscape-scale aerial photography and high-resolution topography (HRT). Despite a surge in the use of historical and contemporary aerial photograph sequences in conjunction with evolving methods to analyze such data for channel change, we found no research considering the biases that may be introduced as a function of the temporal scales of measurement. Unsteady processes (e.g., sedimentation, channel migration, width changes) exhibit extreme discontinuities over time and space, resulting in distortion when measurements are averaged over longer temporal scales, referred to as 'Sadler effects' (Sadler, 1981; Gardner et al., 1987). Using 12 sets of aerial photographs for the Root River (Minnesota), we measure lateral migration over space (110 km) and time (1937-2013), assess whether bias arises from different measurement scales, and assess whether rates shift systematically with increased discharge over time. Results indicate that measurement-scale biases indeed arise from the time elapsed between measurements. We parse the study reach into three distinct reaches and examine if/how recent increases in river discharge translate into changes in migration rate.

  9. Interest Subsidies on Student Loans: A Better Class of Drain. CEE DP 114

    ERIC Educational Resources Information Center

    Barr, Nicholas; Johnston, Alison

    2010-01-01

    The British system of student loans has a zero real rate of interest, less than it costs the government to borrow the money. This paper discusses the problems that arise from interest subsidies in the UK system of student loans; systems in other countries, for example Australia and New Zealand, face similar problems. The topic appears to be narrow…

  10. Two Problems with Table Saws

    ERIC Educational Resources Information Center

    Vautaw, William R.

    2008-01-01

    We solve two problems that arise when constructing picture frames using only a table saw. First, to cut a cove running the length of a board (given the width of the cove and the angle the cove makes with the face of the board) we calculate the height of the blade and the angle the board should be turned as it is passed over the blade. Second, to…

  11. Elucidation of Heterogeneous Processes Controlling Boost Phase Signatures

    DTIC Science & Technology

    1990-09-12

    A three-year research program to develop efficient theoretical methods to study collisional processes involved in radiative signature modeling. For strategic defense, it is important to be able to effectively model radiative signatures arising from... Thus our computational work was on problems or models for which exact results for making comparisons were available. Our key validations were...

  12. Effect of refining variables on the properties and composition of JP-5

    NASA Technical Reports Server (NTRS)

    Lieberman, M.; Taylor, W. F.

    1980-01-01

    Potential future problem areas that could arise from changes in the composition, properties, and potential availability of JP-5 produced in the near future are identified. Potential fuel problems concerning thermal stability, lubricity, low temperature flow, combustion, and the effect of the use of specific additives on fuel properties and performance are discussed. An assessment of available crudes and refinery capabilities is given.

  13. The forest ecosystem of southeast Alaska: 6. Forest diseases.

    Treesearch

    Thomas H. Laurent

    1974-01-01

    The disease problems of old growth are largely being taken care of by cutting. This same cutting is rapidly converting large areas of old growth to reproduction and young growth. It is in these areas of young growth that our disease problems will most probably arise. With a few exceptions the reproduction and young-growth stands appear quite healthy at this time with...

  14. A numerical solution of a singular boundary value problem arising in boundary layer theory.

    PubMed

    Hu, Jiancheng

    2016-01-01

    In this paper, a second-order nonlinear singular boundary value problem is presented, which is equivalent to the well-known Falkner-Skan equation. The one-dimensional third-order boundary value problem on the semi-infinite interval [0, ∞) is equivalently transformed into a second-order boundary value problem on a finite interval. The finite difference method is utilized to solve the singular boundary value problem, in which the amount of computational effort is significantly less than for other numerical methods. The numerical solutions obtained by the finite difference method are in agreement with those obtained by previous authors.
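
    A generic finite-difference treatment of a linear second-order two-point boundary value problem, of the kind underlying the method used here, looks as follows (a sketch for the linear case; the Falkner-Skan reduction is nonlinear and needs an outer iteration around the same machinery):

        import numpy as np

        def fd_bvp(p, q, r, a, b, ua, ub, n=200):
            """Central-difference solution of the linear BVP
                u'' + p(x) u' + q(x) u = r(x),  u(a) = ua,  u(b) = ub,
            on [a, b].  Interior nodes give a tridiagonal system."""
            x = np.linspace(a, b, n + 1)
            h = x[1] - x[0]
            xi = x[1:-1]
            lower = 1.0 / h**2 - p(xi) / (2 * h)   # coefficient of u_{i-1}
            diag = -2.0 / h**2 + q(xi)             # coefficient of u_i
            upper = 1.0 / h**2 + p(xi) / (2 * h)   # coefficient of u_{i+1}
            A = (np.diag(diag) + np.diag(lower[1:], -1)
                 + np.diag(upper[:-1], 1))
            rhs = r(xi)
            rhs[0] -= lower[0] * ua                # fold boundary values in
            rhs[-1] -= upper[-1] * ub
            return x, np.concatenate([[ua], np.linalg.solve(A, rhs), [ub]])

        # Check: u'' = -1 on [0, 1] with u(0) = u(1) = 0 has u = x(1-x)/2
        x, u = fd_bvp(lambda x: 0 * x, lambda x: 0 * x,
                      lambda x: -np.ones_like(x), 0.0, 1.0, 0.0, 0.0)
        print(abs(u - x * (1 - x) / 2).max())      # ~ machine precision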

  15. Crack problems involving nonhomogeneous interfacial regions in bonded materials

    NASA Technical Reports Server (NTRS)

    Erdogan, F.

    1990-01-01

    Consideration is given to two classes of fracture-related solid mechanics problems in which the model leads to some physically anomalous results. The first is the interface crack problem associated with the debonding process, in which the corresponding elasticity solution predicts severe oscillations of the stresses and crack surface displacements near the crack tip. The second deals with cracks intersecting the interface. The nature of the solutions around the crack tips arising in these problems is reviewed. The rationale for introducing a new interfacial zone model is discussed, its analytical consequences within the context of the two crack-problem classes are described, and some examples are presented.

  16. Semiconductor crystal growth and segregation problems on earth and in space

    NASA Technical Reports Server (NTRS)

    Gatos, H. C.

    1982-01-01

    Semiconductor crystal growth and segregation problems are examined in the context of their relationship to material properties, and some of the problems are illustrated with specific experimental results. The compositional and structural defects encountered in semiconductors are largely associated with gravity-induced convective currents in the melt; additional problems are introduced by variations in stoichiometry. It is demonstrated that in a near-zero gravity environment, crystal growth and segregation take place under ideal steady-state conditions with minimum convective interference. A discussion of the advantages of zero-gravity crystal growth is followed by a summary of problems arising from the absence of gravitational forces.

  17. Reactive transport in a partially molten system with binary solid solution

    NASA Astrophysics Data System (ADS)

    Jordan, J.; Hesse, M. A.

    2017-12-01

    Melt extraction from the Earth's mantle through high-porosity channels is required to explain the composition of the oceanic crust. Feedbacks from reactive melt transport are thought to localize melt into a network of high-porosity channels. Recent studies invoke lithological heterogeneities in the Earth's mantle to seed the localization of partial melts. Therefore, it is necessary to understand the reaction fronts that form as melt flows across the lithological interface of a heterogeneity and the background mantle. Simplified melting models of such systems aid in the interpretation and formulation of larger scale mantle models. Motivated by the aforementioned facts, we present a chromatographic analysis of reactive melt transport across lithological boundaries, using theory for hyperbolic conservation laws. This is an extension of well-known linear trace element chromatography to the coupling of major elements and energy transport. Our analysis allows the prediction of the feedbacks that arise in reactive melt transport due to melting, freezing, dissolution and precipitation for frontal reactions. This study considers the simplified case of a rigid, partially molten porous medium with binary solid solution. As melt traverses a lithological contact (modeled as a Riemann problem), a rich set of features arises, including a reacted zone between an advancing reaction front and partial chemical preservation of the initial contact. Reactive instabilities observed in this study originate at the lithological interface rather than along a chemical gradient as in most studies of mantle dynamics. We present a regime diagram that predicts where reaction fronts become unstable, thereby allowing melt localization into high-porosity channels through reactive instabilities. After constructing the regime diagram, we test the one-dimensional hyperbolic theory against two-dimensional numerical experiments. The one-dimensional hyperbolic theory is sufficient for predicting the qualitative behavior of reactive melt transport simulations conducted in two dimensions. The theoretical framework presented can be extended to more complex and realistic phase behavior, and is therefore a useful tool for understanding nonlinear feedbacks in reactive melt transport problems relevant to mantle dynamics.

  18. Perspective Imagery in Synthetic Scenes used to Control and Guide Aircraft during Landing and Taxi: Some Issues and Concerns

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W.; Kaiser, Mary K.

    2003-01-01

    Perspective synthetic displays that supplement, or supplant, the optical windows traditionally used for guidance and control of aircraft are accompanied by potentially significant human factors problems related to the optical geometric conformality of the display. Such geometric conformality is broken when optical features are not in the location they would be if directly viewed through a window. This often occurs when the scene is relayed or generated from a location different from the pilot's eyepoint. However, assuming no large visual/vestibular effects, a pilot can often learn to use such a display very effectively. Important problems may arise, however, when display accuracy or consistency is compromised, and this can usually be related to geometrical discrepancies between how the synthetic visual scene behaves and how the visual scene through a window behaves. In addition to these issues, this paper examines the potentially critical problem of the disorientation that can arise when both a synthetic display and a real window are present in a flight deck, and no consistent visual interpretation is available.

  19. On a numerical method for solving integro-differential equations with variable coefficients with applications in finance

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, O.; Rodochenko, V.

    2018-03-01

    We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution for the processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line in each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte-Carlo simulations and compare our results with the results obtained by the Wiener-Hopf method with closed-form expressions of the factors.

  20. Flavor from the electroweak scale

    DOE PAGES

    Bauer, Martin; Carena, Marcela; Gemmler, Katrin

    2015-11-04

    We discuss the possibility that flavor hierarchies arise from the electroweak scale in a two Higgs doublet model, in which the two Higgs doublets jointly act as the flavon. Quark masses and mixing angles are explained by effective Yukawa couplings, generated by higher dimensional operators involving quarks and Higgs doublets. Modified Higgs couplings yield important effects on the production cross sections and decay rates of the light Standard Model like Higgs. In addition, flavor changing neutral currents arise at tree-level and lead to strong constraints from meson-antimeson mixing. Remarkably, flavor constraints turn out to prefer a region in parameter space that is in excellent agreement with the one preferred by recent Higgs precision measurements at the Large Hadron Collider (LHC). Direct searches for extra scalars at the LHC lead to further constraints. Precise predictions for the production and decay modes of the additional Higgs bosons are derived, and we present benchmark scenarios for searches at the LHC Run II. As a result, flavor breaking at the electroweak scale as well as strong coupling effects demand a UV completion at the scale of a few TeV, possibly within the reach of the LHC.

  1. Fundamental studies of stress distributions and stress relaxation in oxide scales on high temperature alloys. [Final progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shores, D.A.; Stout, J.H.; Gerberich, W.W.

    1993-06-01

    This report summarizes a three-year study of stresses arising in the oxide scale and underlying metal during high temperature oxidation and of scale cracking. In-situ XRD was developed to measure strains during oxidation over 1000°C on pure metals. Acoustic emission was used to observe scale fracture during isothermal oxidation and cooling, and statistical analysis was used to infer mechanical aspects of cracking. A microscratch technique was used to measure the fracture toughness of the scale/metal interface. A theoretical model was evaluated for the development and relaxation of stresses in the scale and metal substrate during oxidation.

  2. Solving multi-leader-common-follower games.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leyffer, S.; Munson, T.; Mathematics and Computer Science

    Multi-leader-common-follower games arise when modelling two or more competitive firms, the leaders, that commit to their decisions prior to another group of competitive firms, the followers, that react to the decisions made by the leaders. These problems lead in a natural way to equilibrium problems with equilibrium constraints (EPECs). We develop a characterization of the solution sets for these problems and examine a variety of nonlinear optimization and nonlinear complementarity formulations of EPECs. We distinguish two broad cases: problems where the leaders can cost-differentiate and problems with price-consistent followers. We demonstrate the practical viability of our approach by solving a range of medium-sized test problems.

  3. Overview of Krylov subspace methods with applications to control problems

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
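
    To make the projection idea concrete, here is a minimal sketch of the Arnoldi process, the standard construction that builds an orthonormal basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} together with the small Hessenberg matrix representing A on it. This is textbook material, not code from the report, and the names are ours.

    ```python
    # Arnoldi iteration: reduce an n x n problem to an m x m one (m << n).
    import numpy as np

    def arnoldi(A, v, m):
        n = len(v)
        V = np.zeros((n, m + 1))          # orthonormal basis vectors (columns)
        H = np.zeros((m + 1, m))          # upper Hessenberg projection of A
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):        # modified Gram-Schmidt orthogonalization
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:       # breakdown: the subspace is A-invariant
                return V[:, :j + 1], H[:j + 1, :j + 1]
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    A = np.random.default_rng(1).standard_normal((50, 50))
    V, H = arnoldi(A, np.ones(50), m=10)
    ```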

  4. About some types of constraints in problems of routing

    NASA Astrophysics Data System (ADS)

    Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.

    2016-12-01

    Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. The latter include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements tied to the distribution of thermal fields during the cutting process are of great importance when developing algorithms for solving this task, and they give rise to some specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during sheet-metal thermal cutting; a corresponding algorithm and its software implementation are considered. A mathematical model that allows such constraints to be taken into account in other routing problems is also discussed.
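
    As a toy illustration of the underlying combinatorial model (made-up data, and without the thermal-field constraints the paper is actually concerned with), a brute-force GTSP solver picks one point per cluster and one cluster order so as to minimize the closed tour length:

    ```python
    # Brute-force GTSP on a tiny hypothetical instance: visit exactly one
    # point from each cluster; minimize the length of the closed tour.
    import itertools
    import math

    points = {0: (0, 0), 1: (1, 5), 2: (2, 1), 3: (6, 2), 4: (5, 6), 5: (8, 0)}
    clusters = [(0, 1), (2, 3), (4, 5)]

    def dist(a, b):
        return math.dist(points[a], points[b])

    def tour_len(tour):
        return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

    best = min(
        (choice
         for order in itertools.permutations(range(len(clusters)))
         for choice in itertools.product(*(clusters[i] for i in order))),
        key=tour_len,
    )
    print(best, tour_len(best))
    ```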

  5. Nanomedicine – challenge and perspectives

    PubMed Central

    Riehemann, Kristina; Schneider, Stefan W.; Luger, Thomas A.; Godin, Biana; Ferrari, Mauro; Fuchs, Harald

    2014-01-01

    Nanomedicine introduces nanotechnology concepts into medicine and thus joins two large cross-disciplinary fields with an unprecedented societal and economic potential arising from the natural combination of specific achievements in the respective fields. The common basis evolves from the molecular-scale properties relevant in the two fields. Nanoanalytical tools, such as local probes and molecular imaging techniques, allow us to characterize surface and interface properties at a nanometer scale at predefined locations, while elaborate chemical approaches offer the opportunity to control and address surfaces, e.g. for targeted drug delivery, enhanced biocompatibility and neuroprosthetic purposes. This commonality opens a wide variety of economic fields of both industrial and clinical interest. However, concerns arise in this cross-disciplinary area about toxicological aspects and ethical implications. This review gives an overview of selected recent developments in nanotechnology applied to medical objectives. PMID:19142939

  6. A Universal Model for Solar Eruptions

    NASA Technical Reports Server (NTRS)

    Wyper, Peter F.; Antiochos, Spiro K.; Devore, C. Richard

    2017-01-01

    Magnetically driven eruptions on the Sun, from stellar-scale coronal mass ejections to small-scale coronal X-ray and extreme-ultraviolet jets, have frequently been observed to involve the ejection of the highly stressed magnetic flux of a filament. Theoretically, these two phenomena have been thought to arise through very different mechanisms: coronal mass ejections from an ideal (non-dissipative) process, whereby the energy release does not require a change in the magnetic topology, as in the kink or torus instability; and coronal jets from a resistive process, involving magnetic reconnection. However, it was recently concluded from new observations that all coronal jets are driven by filament ejection, just like large mass ejections. This suggests that the two phenomena have a physically identical origin and hence that a single mechanism may be responsible: that is, either mass ejections arise from reconnection, or jets arise from an ideal instability. Here we report simulations of a coronal jet driven by filament ejection, whereby a region of highly sheared magnetic field near the solar surface becomes unstable and erupts. The results show that magnetic reconnection causes the energy release via 'magnetic breakout', a positive feedback mechanism between filament ejection and reconnection. We conclude that if coronal mass ejections and jets are indeed of physically identical origin (although on different spatial scales) then magnetic reconnection (rather than an ideal process) must also underlie mass ejections, and that magnetic breakout is a universal model for solar eruptions.

  7. The Cauchy problem for the Pavlov equation

    NASA Astrophysics Data System (ADS)

    Grinevich, P. G.; Santini, P. M.; Wu, D.

    2015-10-01

    Commutation of multidimensional vector fields leads to integrable nonlinear dispersionless PDEs that arise in various problems of mathematical physics and have been intensively studied in the recent literature. This report aims to solve the scattering and inverse scattering problem for integrable dispersionless PDEs, introduced so far only at a formal level, concentrating on the prototypical example of the Pavlov equation, and to justify an existence theorem for global bounded solutions of the associated Cauchy problem with small data. An essential part of this work was done during the visit of the three authors to the Centro Internacional de Ciencias in Cuernavaca, Mexico in November-December 2012.

  8. Decentralized control

    NASA Technical Reports Server (NTRS)

    Steffen, Chris

    1990-01-01

    An overview is presented of the time-delay problem and the reliability problem that arise in trying to perform robotic construction operations at a remote space location. The effects of the time delay on the control system design are itemized. A high-level overview is given of a decentralized method of control that is expected to perform better than the centralized approach in solving the time-delay problem. The lower-level, decentralized, autonomous Troter Move-Bar algorithm is also presented (Troters are coordinated independent robots). The solution of the reliability problem is connected to adding redundancy to the system. One method of adding redundancy is given.

  9. Photovoltaic module hot spot durability design and test methods

    NASA Technical Reports Server (NTRS)

    Arnett, J. C.; Gonzalez, C. C.

    1981-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, the susceptibility of flat-plate modules to hot-spot problems is investigated. Hot-spot problems arise in modules when the cells become back-biased and operate in the negative-voltage quadrant, as a result of short-circuit current mismatch, cell cracking or shadowing. The details of a qualification test for determining the capability of modules to survive field hot-spot problems, and typical results of this test, are presented. In addition, recommended circuit-design techniques for improving module and array reliability with respect to hot-spot problems are presented.

  10. Large-scale 3-D EM modelling with a Block Low-Rank multifrontal direct solver

    NASA Astrophysics Data System (ADS)

    Shantsev, Daniil V.; Jaysaval, Piyoosh; de la Kethulle de Ryhove, Sébastien; Amestoy, Patrick R.; Buttari, Alfredo; L'Excellent, Jean-Yves; Mary, Theo

    2017-06-01

    We put forward the idea of using a Block Low-Rank (BLR) multifrontal direct solver to efficiently solve the linear systems of equations arising from a finite-difference discretization of the frequency-domain Maxwell equations for 3-D electromagnetic (EM) problems. The solver uses a low-rank representation for the off-diagonal blocks of the intermediate dense matrices arising in the multifrontal method to reduce the computational load. A numerical threshold, the so-called BLR threshold, controlling the accuracy of the low-rank representations, was optimized by balancing errors in the computed EM fields against savings in floating point operations (flops). Simulations were carried out over large-scale 3-D resistivity models representing typical scenarios for marine controlled-source EM surveys, in particular the SEG SEAM model, which contains an irregular salt body. The flop count, size of factor matrices and elapsed run time for matrix factorization are reduced dramatically by using BLR representations and can go down to, respectively, 10, 30 and 40 per cent of their full-rank values for our largest system with N = 20.6 million unknowns. The reductions are almost independent of the number of MPI tasks and threads, at least up to 90 × 10 = 900 cores. The BLR savings increase for larger systems, which reduces the factorization flop complexity from O(N^2) for the full-rank solver to O(N^m) with m = 1.4-1.6. The BLR savings are significantly larger for deep-water environments that exclude the highly resistive air layer from the computational domain. A study in a scenario where simulations are required at multiple source locations shows that the BLR solver can become competitive with iterative solvers as an engine for 3-D controlled-source electromagnetic Gauss-Newton inversion, which requires forward modelling for a few thousand right-hand sides.
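
    The essence of the BLR compression can be sketched in a few lines: replace an off-diagonal block by a truncated SVD, keeping only the singular values above a numerical threshold. This is our illustration of the principle under assumed data, not the solver's actual implementation:

    ```python
    # Sketch: compress a numerically low-rank block as X @ Y ~ block, with
    # the truncation controlled by a relative "BLR threshold".
    import numpy as np

    def blr_compress(block, threshold):
        U, s, Vt = np.linalg.svd(block, full_matrices=False)
        rank = int(np.sum(s > threshold * s[0]))   # relative truncation
        return U[:, :rank] * s[:rank], Vt[:rank]

    rng = np.random.default_rng(0)
    # A smooth kernel on two well-separated point sets gives a numerically
    # low-rank block, mimicking far-field interactions inside the fronts.
    x, y = rng.uniform(0, 1, 200), rng.uniform(10, 11, 200)
    block = 1.0 / np.abs(x[:, None] - y[None, :])
    X, Y = blr_compress(block, threshold=1e-7)
    print(X.shape[1], np.linalg.norm(block - X @ Y) / np.linalg.norm(block))
    ```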

  11. THE DEPENDENCE OF STELLAR MASS AND ANGULAR MOMENTUM LOSSES ON LATITUDE AND THE INTERACTION OF ACTIVE REGION AND DIPOLAR MAGNETIC FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garraffo, Cecilia; Drake, Jeremy J.; Cohen, Ofer

    Rotation evolution of late-type stars is dominated by magnetic braking, and the underlying factors that control this angular momentum loss are important for the study of stellar spin-down. In this work, we study angular momentum loss as a function of two different aspects of magnetic activity using a calibrated Alfvén wave-driven magnetohydrodynamic wind model: the strengths of magnetic spots and their distribution in latitude. By driving the model using solar and modified solar surface magnetograms, we show that the topology of the field arising from the net interaction of both small-scale and large-scale field is important for spin-down rates and that angular momentum loss is not a simple function of large-scale magnetic field strength. We find that changing the latitude of magnetic spots can modify mass and angular momentum loss rates by a factor of two. The general effect that causes these differences is the closing down of large-scale open field at mid- and high latitudes by the addition of the small-scale field. These effects might give rise to modulation of mass and angular momentum loss through stellar cycles, and present a problem for ab initio attempts to predict stellar spin-down based on wind models. For all the magnetogram cases considered here, from dipoles to various spotted distributions, we find that angular momentum loss is dominated by the mass loss at mid-latitudes. The spin-down torque applied by magnetized winds therefore acts at specific latitudes and is not evenly distributed over the stellar surface, though this aspect is unlikely to be important for understanding spin-down and surface flows on stars.

  12. Shoreline changes and its impact on archaeological sites in West Greenland

    NASA Astrophysics Data System (ADS)

    Fenger-Nielsen, R.; Kroon, A.; Elberling, B.; Hollesen, J.

    2017-12-01

    Coastal erosion is regarded as a major threat to archaeological sites in the Arctic region. The problem arises because the predominantly marine-focused lifeways of Arctic peoples mean that the majority of archaeological sites are found near the coast. On a pan-Arctic scale, coastal erosion is often explained by long-term processes such as sea level rise, lengthening of open-water periods due to a decline in sea ice, and a predicted increase in the frequency of major storms. However, on a local scale other short-term processes may be important parameters determining the coastal development. In this study, we focus on the Nuuk fjord system in West Greenland, which has been inhabited over the past 4000 years by different cultures and holds around 260 registered archaeological settlements. The fjord is characterized by its large branching of narrow deep-water and well-shaded water bodies, where tidal processes and local sources of sediment supply by rivers are observed to be the dominant factors determining the coastal development. We present a regional model showing the vulnerability of the shoreline and archaeological sites to coastal processes. The model is based on a) levelling surveys and historical aerial photographs of nine specific sites distributed across the region, b) water level measurements at three sites representing the inner, middle and outer fjord system, and c) aerial photographs, satellite images and meteorological data of the entire region, used to upscale our local, settlement-scale information to the regional scale. The model captures spatial and temporal variability in erosion and accumulation patterns along the shores of fjords and open seas.

  13. The EEOC's New Equal Pay Act Guidelines.

    ERIC Educational Resources Information Center

    Greenlaw, Paul S.; Kohl, John P.

    1982-01-01

    Analyzes the new guidelines for enforcement of the Equal Pay Act and their implications for personnel management. Argues that there are key problem areas in the new regulations arising from considerable ambiguity and uncertainty about their interpretation. (SK)

  14. Remain in Your Seats: Crisis Management for the Alumni Travel Director.

    ERIC Educational Resources Information Center

    Bonenberger, Lynne M.

    1991-01-01

    Three alumni travel directors offer advice on taking control when tour crises arise. The cases cited involved irresponsible tour agents, problem travelers, and on-location disasters. Both precautions and creative solutions are emphasized. (MSE)

  15. The case of the missing third.

    PubMed

    Robertson, Robin

    2005-01-01

    How is it that form arises out of chaos? In attempting to deal with this primary question, time and again a "Missing Third" is posited that lies between extremes. The problem of the "Missing Third" can be traced through nearly the entire history of thought. The form it takes, the problems that arise from it, and the solutions suggested for resolving it are each representative of an age. This paper traces the issue from Plato and Parmenides in the 4th-5th centuries B.C.; to Neoplatonism in the 3rd-5th centuries; to Locke and Descartes in the 17th century; on to Berkeley and Kant in the 18th century; Fechner and Wundt in the 19th century; to behaviorism, Gestalt psychology and Jung early in the 20th century; to ethology and cybernetics later in the 20th century; and culminates late in the 20th century with chaos theory.

  16. Is CT angiography of the pulmonary arteries indicated in patients with high clinical probability of pulmonary embolism?

    PubMed

    Martínez Montesinos, L; Plasencia Martínez, J M; García Santos, J M

    When a diagnostic test confirms clinical suspicion, the indicated treatment can be administered. A problem arises when the diagnostic test does not confirm the initially suspected diagnosis; when the suspicion is grounded in clinically validated predictive rules and is high, the problem is even worse. This situation arises in up to 40% of patients with high suspicion for acute pulmonary embolism, raising the question of whether CT angiography of the pulmonary arteries should be done systematically. This paper reviews the literature about this issue and lays out the best evidence about the relevant recommendations for patients with high clinical suspicion of acute pulmonary embolism and negative findings on CT angiography. It also explains the probabilistic concepts derived from Bayes' theorem that can be useful for ascertaining the most appropriate approach in these patients. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
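
    As a hedged numerical illustration of the Bayesian point (the sensitivity and specificity below are hypothetical placeholders, not figures from the paper): with a high pre-test probability, even a negative CT angiogram can leave a clinically relevant residual probability of embolism.

    ```python
    # Post-test probability of PE after a negative test, by Bayes' theorem.
    def post_test_negative(pretest, sensitivity, specificity):
        p_neg_given_pe = 1.0 - sensitivity        # false negative rate
        p_neg_given_no_pe = specificity           # true negative rate
        num = p_neg_given_pe * pretest
        return num / (num + p_neg_given_no_pe * (1.0 - pretest))

    # Assumed 70% pre-test probability, 83% sensitivity, 96% specificity:
    print(post_test_negative(pretest=0.70, sensitivity=0.83, specificity=0.96))
    # -> about 0.29: a negative scan alone may not end the work-up.
    ```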

  17. Survival probability of diffusion with trapping in cellular neurobiology

    NASA Astrophysics Data System (ADS)

    Holcman, David; Marchewka, Avi; Schuss, Zeev

    2005-09-01

    The problem of diffusion with absorption and trapping sites arises in the theory of molecular signaling inside and on the membranes of biological cells. In particular, this problem arises in the case of spine-dendrite communication, where the number of calcium ions, modeled as random particles, is regulated across the spine microstructure by pumps, which play the role of killing sites, while the end of the dendritic shaft is an absorbing boundary. We develop a general mathematical framework for diffusion in the presence of absorption and killing sites and apply it to the computation of the time-dependent survival probability of ions. We also compute the ratio of the number of absorbed particles at a specific location to the number of killed particles. We show that the ratio depends on the distribution of killing sites. The biological consequence is that the position of the pumps regulates the fraction of calcium ions that reach the dendrite.
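
    A toy version of this setting (our own construction, not the authors' framework) already shows how a pump shapes the absorbed fraction: a one-dimensional walk with a reflecting spine head at 0, an absorbing dendrite end at L, and a pump site that kills particles with some probability per visit. All numbers are illustrative.

    ```python
    # Toy 1-D diffusion with a killing site (pump) and an absorbing boundary.
    import numpy as np

    rng = np.random.default_rng(42)
    L, pump_site, p_kill, n_particles = 50, 25, 0.02, 1_000
    absorbed = 0
    for _ in range(n_particles):
        x = 0
        while True:
            x = max(x + rng.integers(0, 2) * 2 - 1, 0)   # reflecting wall at 0
            if x == pump_site and rng.random() < p_kill:
                break                                     # killed by the pump
            if x >= L:
                absorbed += 1                             # reached the dendrite
                break
    print("absorbed fraction:", absorbed / n_particles)
    ```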

  18. Gelfand-type problem for two-phase porous media

    PubMed Central

    Gordon, Peter V.; Moroz, Vitaly

    2014-01-01

    We consider a generalization of the Gelfand problem arising in the Frank-Kamenetskii theory of thermal explosion. This generalization is a natural extension of the Gelfand problem to two-phase materials, where, in contrast to the classical Gelfand problem, which uses a single-temperature approach, the state of the system is described by two different temperatures. We show that, as in the classical Gelfand problem, thermal explosion occurs exclusively owing to the absence of a stationary temperature distribution. We also show that the presence of interphase heat exchange delays a thermal explosion. Moreover, we prove that in the limit of infinite heat exchange between phases the problem of thermal explosion in two-phase porous media reduces to the classical Gelfand problem with renormalized constants. PMID:24611025
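
    For reference, the classical single-temperature problem that this work generalizes is the Gelfand problem of Frank-Kamenetskii theory:

    ```latex
    % Classical Gelfand problem on a bounded domain \Omega:
    \[
      -\Delta u \;=\; \lambda\, e^{u} \quad \text{in } \Omega,
      \qquad u = 0 \quad \text{on } \partial\Omega .
    \]
    ```

    A stationary solution exists only for λ up to a finite critical value λ* depending on the domain; for λ > λ* no steady temperature distribution exists, which is the mathematical signature of thermal explosion.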

  19. Generalized Faxén's theorem: Evaluating first-order (hydrodynamic drag) and second-order (acoustic radiation) forces on finite-sized rigid particles, bubbles and droplets in arbitrary complex flows

    NASA Astrophysics Data System (ADS)

    Annamalai, Subramanian; Balachandar, S.

    2016-11-01

    In recent times, the study of complex dispersed multiphase problems involving several million particles (e.g. volcanic eruptions, spray control) has been gathering momentum. The objective of this work is to present an accurate model (termed the generalized Faxén's theorem) to predict the hydrodynamic forces on such inclusions (particles/bubbles/droplets) without having to solve for the details of the flow around them. The model is developed using acoustic theory, and the force is obtained as a summation of an infinite series (monopole, dipole and higher sources). The first-order force is the time-dependent hydrodynamic drag force arising from the dipole component, due to interaction between the gas and the inclusion at the microscale level. The second-order force, however, is a time-averaged differential force (with contributions from both the monopole and the dipole), also known as the acoustic radiation force, primarily used to levitate particles. In this work, the monopole and dipole strengths are represented in terms of particle surface and volume averages of the incoming flow properties; the model is therefore applicable to particle sizes of the order of the fluid length scale subjected to any arbitrary flow. Moreover, the model can also be used to account for inter-particle coupling due to neighboring particles. U.S. DoE, NNSA, Advanced Simulation and Computing Program, Cooperative Agreement under PSAAP-II, Contract No. DE-NA0002378.
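
    For context, the classical Faxén relation for the quasi-steady Stokes drag on a rigid sphere of radius a, which the expansion described above generalizes to arbitrary complex flows and to the second-order radiation force:

    ```latex
    % Classical Faxen law (rigid sphere, Stokes flow):
    \[
      \mathbf{F} \;=\; 6\pi\mu a \left[\, \mathbf{u}(\mathbf{x}_0)
        \;+\; \frac{a^{2}}{6}\,\nabla^{2}\mathbf{u}(\mathbf{x}_0)
        \;-\; \mathbf{U} \,\right].
    \]
    ```

    Here μ is the fluid viscosity, u the undisturbed ambient flow evaluated at the particle centre x₀, and U the particle velocity; the Laplacian term is the finite-size correction that, as in the abstract, can be recast as surface and volume averages of the incoming flow.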

  20. Challenges for the Modern Science in its Descend Towards Nano Scale

    PubMed Central

    Uskoković, Vuk

    2013-01-01

    The current rise in interest in physical phenomena at the nano spatial scale is described here as a natural consequence of scientific progress in manipulating matter with ever higher sensitivity. The reason behind the emergence of the entirely new field of nanoscience is that the properties of nanostructured materials may differ significantly from those of their bulk counterparts and cannot be predicted by extrapolating the size-dependent properties displayed by materials composed of microsized particles. It is also argued that although a material can comprise critical boundaries at the nano scale, this does not mean that it will inevitably exhibit the properties of a nanomaterial. This implies that the attribute of “nanomaterial” can be used only in relation to a given property of interest. The major challenges faced with the expanding resolution of materials design, in terms of hardly reproducible experiments, are further discussed. It is claimed that, owing to an unavoidable interference between the experimental system and the environment to which the controlling system belongs, increased fineness of the experimental settings will lead to ever more difficulty in rendering them reproducible and controllable. Self-assembly methods, in which part of the preprogrammed scientific design is replaced by letting physical systems spontaneously evolve into attractive and functional structures, are mentioned as one way to overcome the problems inherent in synthetic approaches at the ultrafine scale. The fact that physical systems partly owe their properties to the interaction with their environment implies that each self-assembly process can be considered a co-assembly event. PMID:26491428
