Sample records for optimized Schwarz methods

  1. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz-type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.
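
    To make the additive/multiplicative distinction concrete, here is a minimal sketch (not the authors' code; the problem size, convection speed and overlap are assumed values) of one Schwarz sweep for a 1D upwind-discretized convection-diffusion problem, with the subdomains visited in the flow direction in the multiplicative variant:

    ```python
    # Minimal sketch: additive vs. flow-ordered multiplicative Schwarz sweeps for
    # -u'' + b u' = f on (0,1), u(0) = u(1) = 0, upwind finite differences.
    # All sizes and parameters below are illustrative assumptions.
    import numpy as np

    n, b_conv = 100, 50.0                 # interior points, convection speed (assumed)
    h = 1.0 / (n + 1)
    f = np.ones(n)

    # Upwind tridiagonal matrix for b_conv > 0.
    A = (np.diag((2/h**2 + b_conv/h) * np.ones(n))
         + np.diag((-1/h**2 - b_conv/h) * np.ones(n - 1), -1)
         + np.diag((-1/h**2) * np.ones(n - 1), 1))

    # Two overlapping subdomains, ordered left to right, i.e. with the flow.
    overlap = 10
    subdomains = [np.arange(0, n//2 + overlap), np.arange(n//2 - overlap, n)]

    def schwarz_sweep(u, multiplicative=True):
        r = f - A @ u                     # residual for the additive variant
        du = np.zeros(n)
        for idx in subdomains:
            if multiplicative:
                r = f - A @ u             # multiplicative: refresh with latest iterate
            e = np.zeros(n)
            e[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            if multiplicative:
                u = u + e
            else:
                du += e                   # additive: corrections applied all at once
        # Note: the additive variant may need damping (~0.5) as a stationary iteration.
        return u if multiplicative else u + du

    u = np.zeros(n)
    for _ in range(20):
        u = schwarz_sweep(u)
    print("residual norm:", np.linalg.norm(f - A @ u))
    ```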

  2. Newton-Krylov-Schwarz: An implicit solver for CFD

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.

    1995-01-01

    Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications, emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
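
    A hedged sketch of the Jacobian-free mechanism described above (the toy residual F and all tolerances are illustrative assumptions, and the Schwarz preconditioner is omitted): the Krylov solver only sees Jacobian-vector products formed by directional differencing, so no Jacobian matrix is ever assembled.

    ```python
    # Jacobian-free Newton-Krylov sketch: J(u) v is approximated by
    # (F(u + eps v) - F(u)) / eps, so Newton needs only residual evaluations.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def F(u):
        # Toy nonlinear residual standing in for a discretized PDE system.
        return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])

    def newton_krylov(u, tol=1e-10, eps=1e-7):
        for _ in range(50):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            J = LinearOperator((u.size, u.size), dtype=float,
                               matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
            # A Schwarz preconditioner would be passed here via the M= argument.
            du, _ = gmres(J, -r)
            u = u + du
        return u

    print(newton_krylov(np.array([1.0, 1.0])))   # converges to (1, 2)
    ```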

  3. Domain decomposition in time for PDE-constrained optimization

    DOE PAGES

    Barker, Andrew T.; Stoll, Martin

    2015-08-28

    PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
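
    As a rough illustration of the decompose-in-time idea (not the authors' formulation; the scalar model problem, window count, and implicit Euler discretization are assumptions), the all-at-once system over all time steps can be preconditioned by independent solves on time windows:

    ```python
    # Sketch: GMRES on the all-at-once implicit Euler system for u' = -a u,
    # preconditioned by independent exact solves on time windows. Non-overlapping
    # windows are used for brevity; an additive Schwarz preconditioner in time
    # would extend each window with some overlap.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    N, dt, a = 64, 0.1, 1.0
    K = np.diag((1 + a*dt) * np.ones(N)) + np.diag(-np.ones(N - 1), -1)
    rhs = np.zeros(N)
    rhs[0] = 1.0                          # carries the initial condition u(0) = 1

    windows = np.array_split(np.arange(N), 4)
    def window_solves(r):
        z = np.zeros_like(r)
        for w in windows:                 # each window solve is independent -> parallel
            z[w] = np.linalg.solve(K[np.ix_(w, w)], r[w])
        return z

    M = LinearOperator((N, N), dtype=float, matvec=window_solves)
    u, info = gmres(K, rhs, M=M)
    print("gmres info:", info, " u(T) ~", u[-1])
    ```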

  4. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.

  5. Exact solutions for Hele-Shaw flows with surface tension: The Schwarz-function approach

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Giovani L.

    1993-08-01

    An alternative derivation of the two-parameter family of solutions for a Hele-Shaw flow with surface tension reported previously by Vasconcelos and Kadanoff [Phys. Rev. A 44, 6490 (1991)] is presented. The method of solution given here is based on the formalism of the Schwarz function: an ordinary differential equation for the Schwarz function of the moving interface is obtained and then solved.
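
    For orientation (standard background, not specific to this paper): the Schwarz function of an analytic curve is the function S(z), analytic in a neighborhood of the curve, that equals the complex conjugate of z on the curve, so the motion of the interface can be encoded in an equation for S:

    ```latex
    \[
      \overline{z} = S(z) \quad \text{on the curve};
      \qquad \text{e.g. for the circle } |z| = a:\;\;
      z\overline{z} = a^2 \;\Longrightarrow\; S(z) = \frac{a^2}{z}.
    \]
    ```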

  6. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton’s method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
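
    A hedged sketch of a restricted additive Schwarz (RAS) preconditioner of the kind named above, applied with GMRES to a stand-in sparse system (a 1D Laplacian; the block count and overlap are assumptions, and the paper's solver operates on power-grid Jacobians instead): local solves use overlapping blocks, but each correction is kept only on the block's owned, non-overlapping part.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 200
    A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))
    b = np.ones(n)

    nblocks, overlap = 4, 5
    owners = np.array_split(np.arange(n), nblocks)   # non-overlapping ownership

    def ras(r):
        z = np.zeros_like(r)
        for own in owners:
            lo = max(own[0] - overlap, 0)
            hi = min(own[-1] + overlap + 1, n)
            ext = np.arange(lo, hi)                  # extended (overlapping) block
            e = np.linalg.solve(A[np.ix_(ext, ext)], r[ext])
            z[own] = e[own - lo]                     # restricted: keep owned part only
        return z

    x, info = gmres(A, b, M=LinearOperator((n, n), dtype=float, matvec=ras))
    print("gmres info:", info, " residual:", np.linalg.norm(b - A @ x))
    ```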

  7. On some Aitken-like acceleration of the Schwarz method

    NASA Astrophysics Data System (ADS)

    Garbey, M.; Tromeur-Dervout, D.

    2002-12-01

    In this paper we present a family of domain decomposition methods based on an Aitken-like acceleration of the Schwarz method, viewed as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or multigrid for more general linear and nonlinear elliptic problems. The salient feature of our method, however, is its high tolerance to slow networks in the context of distributed parallel computing, which makes it attractive for computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers based on RISC processor architectures. We illustrate this highly desirable property of our algorithm with large-scale computing experiments.
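
    The acceleration rests on a classical observation: if an interface iterate converges linearly with a constant factor, Aitken's delta-squared formula recovers the limit from three consecutive iterates. A scalar illustration follows (the operator-valued version used in Aitken-Schwarz is the natural generalization; the iteration below is an assumed toy):

    ```python
    # Aitken delta-squared: exact for a purely linear iteration g_{k+1} = rho*g_k + c.
    def aitken(g0, g1, g2):
        return g2 - (g2 - g1)**2 / (g2 - 2.0*g1 + g0)

    g0 = 5.0                      # three iterates of g <- 0.9*g + 0.1 (fixed point 1.0)
    g1 = 0.9*g0 + 0.1
    g2 = 0.9*g1 + 0.1
    print(aitken(g0, g1, g2))     # recovers the fixed point 1.0 (up to rounding)
    ```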

  8. Corrigendum to “The Schwarz alternating method in solid mechanics” [Comput. Methods Appl. Mech. Engrg. 319 (2017) 19–51]

    DOE PAGES

    Mota, Alejandro; Tezaur, Irina; Alleman, Coleman

    2017-12-06

    This corrigendum clarifies the conditions under which the proof of convergence of Theorem 1 from the original article is valid. We erroneously stated, as one of the conditions for the Schwarz alternating method to converge, that the energy functional be strictly convex for the solid mechanics problem. We have relaxed that assumption and changed the corresponding parts of the text. None of the results or other parts of the original article are affected.

  9. Use of various versions of Schwarz method for solving the problem of contact interaction of elastic bodies

    NASA Astrophysics Data System (ADS)

    Galanin, M. P.; Lukin, V. V.; Rodin, A. S.

    2018-04-01

    A formulation of a fairly general problem of mechanical contact interaction in a system of elastic bodies is given. Various implementations of the Schwarz method for the numerical solution of the contact problem are described, and results for a number of problems are presented. Special attention is paid to calculations in which the grids in the contacting bodies differ significantly in mesh size.

  10. A stable and accurate partitioned algorithm for conjugate heat transfer

    NASA Astrophysics Data System (ADS)

    Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2017-09-01

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
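
    The coupling can be summarized schematically as follows (β and β' denote the tunable Robin weights; the specific optimized values in CHAMP come from the local stability analysis mentioned above and are not reproduced here):

    ```latex
    % Interface Gamma between materials 1 and 2 with conductivities k_1, k_2 and a
    % common normal n. The physical conditions T_1 = T_2 and k_1 dT_1/dn = k_2 dT_2/dn
    % are enforced through two independent weighted (Robin) combinations:
    \[
      \kappa_1\,\partial_n T_1 + \beta\, T_1
        \;=\; \kappa_2\,\partial_n T_2 + \beta\, T_2 ,
      \qquad
      \kappa_2\,\partial_n T_2 - \beta'\, T_2
        \;=\; \kappa_1\,\partial_n T_1 - \beta'\, T_1
      \qquad \text{on } \Gamma .
    \]
    % (Any pair with beta + beta' != 0 recovers T_1 = T_2 and flux continuity.)
    ```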

  11. Schwarz maps of algebraic linear ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Sanabria Malagón, Camilo

    2017-12-01

    A linear ordinary differential equation is called algebraic if all its solutions are algebraic over its field of definition. In this paper we solve the problem of finding closed-form solutions to algebraic linear ordinary differential equations in terms of standard equations. Furthermore, we obtain a method to compute all algebraic linear ordinary differential equations with rational coefficients by studying their associated Schwarz map through Picard-Vessiot theory.

  12. The Casalbuoni-Brink-Schwarz superparticle with covariant, reducible constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayi, O.F.

    1992-04-30

    This paper discusses the fermionic constraints of the massless Casalbuoni-Brink-Schwarz superparticle in d = 10, which are separated covariantly into first- and second-class constraints that are infinitely reducible. Although the reducibility conditions of the second-class constraints include the first-class ones, a consistent quantization is possible. The ghost structure of the system for quantizing it in terms of the BFV-BRST method is given and unitarity is shown.

  13. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

    The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide a much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear system can be solved with iterative or direct solvers; however, the runtime of direct methods, or of iterative solvers without a suitable preconditioner, increases tremendously, which is why a more sophisticated method is needed for linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method that splits the normal matrix of the system into several smaller overlapping submatrices. In each iteration step it solves the linear systems with the matrices obtained from the splitting successively, which reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in terms of both accuracy and runtime. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking

  14. Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD

    NASA Technical Reports Server (NTRS)

    Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.

    1998-01-01

    Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
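
    A minimal sketch of the pseudo-transient continuation ingredient (the toy residual, the analytic Jacobian, and the initial step size are assumptions; Psi-NKS itself is matrix-free and preconditioned): the pseudo time step grows as the residual falls, so the iteration gradually turns into Newton's method near the solution.

    ```python
    # Pseudo-transient continuation with the switched evolution relaxation (SER)
    # rule: solve (I/dt + J) du = -F and grow dt by the residual reduction ratio.
    import numpy as np

    def F(u):
        return np.array([np.exp(u[0]) - 2.0, u[0]*u[1] + u[1]**3 - 1.0])

    def J(u):
        return np.array([[np.exp(u[0]), 0.0],
                         [u[1], u[0] + 3.0*u[1]**2]])

    u, dt = np.array([0.0, 0.0]), 0.1     # note: J(u) is singular at this start,
    rprev = np.linalg.norm(F(u))          # but I/dt + J is not -- the point of PTC
    for _ in range(100):
        r = F(u)
        rnorm = np.linalg.norm(r)
        if rnorm < 1e-12:
            break
        dt *= rprev / rnorm               # SER: dt -> infinity as the residual -> 0
        rprev = rnorm
        u = u + np.linalg.solve(np.eye(2)/dt + J(u), -r)
    print(u, F(u))
    ```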

  15. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear reactor core is expensive in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous point of view down to the discrete point of view, and we give some numerical results in a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
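
    A self-contained sketch of the Robin-transmission idea for a non-overlapping two-domain split (a 1D Poisson stand-in, not the mixed RTN discretization of the paper; the mesh size and the Robin parameter p are assumptions). Tuning p is exactly what gives "optimized Schwarz" methods their name; for this 1D toy, p = 2 happens to match the neighboring Dirichlet-to-Neumann value, so the iteration converges essentially in one exchange.

    ```python
    # Non-overlapping Schwarz with Robin interface conditions (Lions) for
    # -u'' = 1 on (0,1), u(0) = u(1) = 0, split at x = 0.5.
    import numpy as np

    m, p = 50, 2.0                 # points per subdomain, Robin parameter (assumed)
    h = 0.5 / m

    def solve_left(g):
        # Unknowns at x = h..0.5; last row imposes u'(0.5) + p*u(0.5) = g.
        A, rhs = np.zeros((m, m)), np.ones(m)
        for i in range(m - 1):
            if i > 0:
                A[i, i-1] = -1/h**2
            A[i, i], A[i, i+1] = 2/h**2, -1/h**2
        A[m-1, m-2], A[m-1, m-1], rhs[m-1] = -1/h, 1/h + p, g
        return np.linalg.solve(A, rhs)

    def solve_right(g):
        # Unknowns at x = 0.5..1-h; first row imposes -u'(0.5) + p*u(0.5) = g.
        A, rhs = np.zeros((m, m)), np.ones(m)
        A[0, 0], A[0, 1], rhs[0] = 1/h + p, -1/h, g
        for i in range(1, m):
            A[i, i-1], A[i, i] = -1/h**2, 2/h**2
            if i < m - 1:
                A[i, i+1] = -1/h**2
        return np.linalg.solve(A, rhs)

    uL, uR = np.zeros(m), np.zeros(m)
    for _ in range(30):
        gL = (uR[1] - uR[0])/h + p*uR[0]        # neighbor's Robin trace for the left
        gR = -(uL[-1] - uL[-2])/h + p*uL[-1]    # and for the right subdomain
        uL, uR = solve_left(gL), solve_right(gR)
    print("interface jump:", abs(uL[-1] - uR[0]))   # tends to 0 at convergence
    ```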

  16. Two new species of Paratrigona Schwarz and the male of Paratrigona ornaticeps (Schwarz) (Hymenoptera, Apidae)

    USDA-ARS's Scientific Manuscript database

    Two distinctive new species of the Neotropical stingless bee genus Paratrigona Schwarz from Ecuador and Paraguay are described and figured. The Ecuadorian species, P. scapisetosa sp. n., belongs to the haeckeli-lineatifrons group and is easily distinguished from its congeners by the unique shape and...

  1. Learning Progressions in Context: Tensions and Insights from a Semester-Long Middle School Modeling Curriculum

    ERIC Educational Resources Information Center

    Pierson, Ashlyn E.; Clark, Douglas B.; Sherard, Max K.

    2017-01-01

    Schwarz and colleagues have proposed and refined a learning progression for modeling that provides a valuable template for envisioning increasingly sophisticated levels of modeling practice at an aggregate level (Fortus, Shwartz, & Rosenfeld, 2016; Schwarz et al., 2009; Schwarz, Reiser, Archer, Kenyon, & Fortus, 2012). Thinking about…

  2. A stale challenge to the philosophy of science: commentary on "Is psychology based on a methodological error?" by Michael Schwarz.

    PubMed

    Ruck, Nora; Slunecko, Thomas

    2010-06-01

    In his article "Is psychology based on a methodological error?" and based on a quite convincing empirical basis, Michael Schwarz offers a methodological critique of one of mainstream psychology's key test theoretical axioms, i.e., that of the in principle normal distribution of personality variables. It is characteristic of this paper--and at first seems to be a strength of it--that the author positions his critique within a frame of philosophy of science, particularly positioning himself in the tradition of Karl Popper's critical rationalism. When scrutinizing Schwarz's arguments, however, we find Schwarz's critique profound only as an immanent critique of test theoretical axioms. We raise doubts, however, as to Schwarz's alleged 'challenge' to the philosophy of science because the author not at all seems to be in touch with the state of the art of contemporary philosophy of science. Above all, we question the universalist undercurrent that Schwarz's 'bio-psycho-social model' of human judgment boils down to. In contrast to such position, we close our commentary with a plea for a context- and culture sensitive philosophy of science.

  3. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    NASA Astrophysics Data System (ADS)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be used directly with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.
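
    For orientation, the classical Schwarz waveform relaxation iteration on two overlapping space-time subdomains (-∞, b) × (0, T) and (a, ∞) × (0, T), a < b, with Dirichlet transmission is sketched below (a schematic form only; the SSWR variants of the paper replace the subdomain propagation and transmission with Herman-Kluk or geometric-optics approximations):

    ```latex
    \[
    \begin{aligned}
      i\varepsilon\,\partial_t u_1^{k+1}
        &= -\tfrac{\varepsilon^2}{2}\,\partial_{xx} u_1^{k+1} + V(x)\,u_1^{k+1}
        &&\text{in } (-\infty,b)\times(0,T),
        & u_1^{k+1}(b,t) &= u_2^{k}(b,t),\\
      i\varepsilon\,\partial_t u_2^{k+1}
        &= -\tfrac{\varepsilon^2}{2}\,\partial_{xx} u_2^{k+1} + V(x)\,u_2^{k+1}
        &&\text{in } (a,\infty)\times(0,T),
        & u_2^{k+1}(a,t) &= u_1^{k}(a,t).
    \end{aligned}
    \]
    ```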

  4. Fashion, time and the consumption of a Renaissance man in Germany: the costume book of Matthaus Schwarz of Augsburg, 1496-1564.

    PubMed

    Mentges, Gabriele

    2002-01-01

    This article uses the perspective of cultural anthropology to consider the construction of an early modern perception of time and its relation to the dress and personal consumption of a male subject. It focuses on a costume book from the Renaissance compiled by Matthäus Schwarz, a member of the bourgeoisie, who lived in Augsburg from 1496 to 1574. The book contains a collection of 137 drawings, portraying Schwarz's personal choice of dress. It is also an account of Schwarz's life, beginning with his parents, then covering his life-stages from birth to old age. The relationships between body and dress and between the male subject and the world run as a major thread through the book. This article shows how closely connected Schwarz's body is with the life of commodities (dress) and consumption. The life-story of this Renaissance man is expressed in terms of changing fashions, which act as his subjective measure of time.

  5. Overlapping clusters for distributed computation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex-partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication-avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap; our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). As the ratio increases, the swapping probability and the PageRank communication volume decrease.

  6. The Cauchy-Schwarz Inequality and the Induced Metrics on Real Vector Spaces Mainly on the Real Line

    ERIC Educational Resources Information Center

    Ramasinghe, W.

    2005-01-01

    It is very well known that the Cauchy-Schwarz inequality is an important property shared by all inner product spaces and the inner product induces a norm on the space. A proof of the Cauchy-Schwarz inequality for real inner product spaces exists, which does not employ the homogeneous property of the inner product. However, it is shown that a real…
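
    For reference, the inequality and the norm it induces, together with the one-line derivation of the triangle inequality that makes the induced metric work (standard material, stated here for a real inner product space):

    ```latex
    \[
      |\langle x, y\rangle| \le \|x\|\,\|y\|, \qquad \|x\| := \sqrt{\langle x, x\rangle},
    \]
    \[
      \|x+y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2
                \le \|x\|^2 + 2\|x\|\,\|y\| + \|y\|^2 = \bigl(\|x\| + \|y\|\bigr)^2 .
    \]
    ```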

  7. Suitability of aero-geophysical methods for generating conceptual soil maps and their use in the modeling of process-related susceptibility maps

    NASA Astrophysics Data System (ADS)

    Tilch, Nils; Römer, Alexander; Jochum, Birgit; Schattauer, Ingrid

    2014-05-01

    In the past years, several large-scale disasters occurred in Austria, characterized not only by flooding but also by numerous shallow landslides and debris flows. For the purpose of risk prevention, national and regional authorities therefore require more objective and realistic maps with information about the spatially variable susceptibility of the geosphere to hazard-relevant gravitational mass movements. Many proven methods and models (e.g. neural networks, logistic regression, heuristic methods) are available to create such process-related susceptibility maps (e.g. for shallow gravitational mass movements in soil). However, numerous national and international studies show that the suitability of a method depends on the quality of the process data and parameter maps (e.g. Tilch & Schwarz 2011; Schwarz & Tilch 2011). It is therefore important that maps with detailed, process-oriented information on the process-relevant geosphere also be considered. One major disadvantage is that area-wide process-relevant information exists only occasionally, and in Austria soil maps are often available only for treeless areas. In almost all previous studies, whatever geological and geotechnical maps happened to exist were used, often specially adapted to other issues and objectives. This is one reason why conceptual soil maps must very often be derived from geological maps containing only hard-rock information, which are often of rather low quality. Based on such maps, for example, adjacent areas of different geological composition and process-relevant physical properties are delineated razor-sharp, which occurs only rarely in nature. In order to obtain more realistic information about the spatial variability of the process-relevant geosphere (soil cover) and its physical properties, aero-geophysical measurements (electromagnetic, radiometric) carried out by helicopter in different regions of Austria were interpreted. Previous studies show that radiometric measurements in particular can determine the two-dimensional spatial variability of the process-relevant soil close to the surface. In addition, the electromagnetic measurements are important for obtaining three-dimensional information on the deeper geological conditions and for improving area-specific geological knowledge and understanding. These measurements are validated with terrestrial geoelectrical measurements. Both aspects, radiometric and electromagnetic measurements, are therefore important, and the interpretation of the geophysical results can subsequently be used as parameter maps in the modeling of more realistic susceptibility maps for various processes. In this presentation, results of the geophysical measurements, the derived parameter maps, and first process-oriented susceptibility maps for gravitational soil mass movements are presented, including, as an example, results obtained with a heuristic method in an area of Vorarlberg (Western Austria). References: Schwarz, L. & Tilch, N. (2011): Why are good process data so important for the modelling of landslide susceptibility maps? EGU poster session "Landslide hazard and risk assessment, and landslide management" (NH 3.6), Vienna. [http://www.geologie.ac.at/fileadmin/user_upload/dokumente/pdf/poster/poster_2011_egu_schwarz_tilch_1.pdf] Tilch, N. & Schwarz, L. (2011): Spatial and scale-dependent variability in data quality and their influence on susceptibility maps for gravitational mass movements in soil, modelled by heuristic method. EGU poster session "Landslide hazard and risk assessment, and landslide management" (NH 3.6), Vienna. [http://www.geologie.ac.at/fileadmin/user_upload/dokumente/pdf/poster/poster_2011_egu_tilch_schwarz.pdf]

  8. [Oswald Schwarz: a pioneer in psychosomatic urology and sexual medicine].

    PubMed

    Berberich, H J; Schultheiss, D; Kieser, B

    2015-01-01

    Oswald Schwarz, a urologist from Vienna, was a scholar of Anton Ritter von Frisch and Hans Rubritius. As a physician during World War I, he was confronted with numerous bullet wounds to the spinal cord. In 1919, he completed his professorial thesis "Bladder dysfunction as a result of bullet wounds to the spinal cord". Oswald Schwarz was known as a committed surgeon. As a urologist he also treated patients with sexual dysfunction. Besides his practical and scientific urological work, he was also interested in psychology and philosophy. He held lectures on both subjects, earning himself the nickname "the Urosoph". In the 1920s, Oswald Schwarz belonged to the inner circle of Alfred Adler, the founder of Individual Psychology, and was editor of the first psychosomatic textbook published in German, "Psychological origin and psychotherapy of physical symptoms" (1925). In addition, Schwarz wrote numerous articles and several books on sexual medicine. He also made many valuable contributions to the development of medical anthropology. Altogether, his work includes over 130 publications. Faced with the rise of fascism and National Socialism in Europe, Oswald Schwarz, who was of Jewish origin, emigrated to England in 1934, where he died in 1949. Unfortunately his scientific work has largely been forgotten. The aim of the following article is to recall his important contributions to the field.

  9. A study of Schwarz converters for nuclear powered spacecraft

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A.; Schwarze, Gene E.

    1987-01-01

    High-power space systems that use low-voltage, high-current dc sources, such as thermoelectric generators, will most likely require high-voltage conversion for transmission purposes. This study considers the Schwarz resonant converter as the basic building block for this low-to-high voltage conversion, for either a dc or an ac spacecraft bus. The Schwarz converter has the important assets of both inherent fault tolerance and resonant operation, and parallel operation in modular form is possible. A regulated dc spacecraft bus requires only a single-stage converter, while a constant-frequency ac bus requires a cascaded Schwarz converter configuration. If the power system requires constant output power from the dc generator, then a second converter is required to route unneeded power to a ballast load.

  10. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfven time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)

  11. A Refined Cauchy-Schwarz Inequality

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2007-01-01

    The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.

  12. A consistent covariant quantization of the Brink-Schwarz superparticle

    NASA Astrophysics Data System (ADS)

    Eisenberg, Yeshayahu

    1992-02-01

    We perform the covariant quantization of the ten-dimensional Brink-Schwarz superparticle by reducing it to a system whose constraints are all first class, covariant and have only two levels of reducibility. Research supported by the Rothschild Fellowship.

  13. World reclassification of the Cardiophorinae (Coleoptera, Elateridae), based on phylogenetic analyses of morphological characters

    PubMed Central

    Douglas, Hume B.

    2017-01-01

    The prior genus-level classification of Cardiophorinae had never been assessed phylogenetically and had not been revised since 1906. A phylogeny for Cardiophorinae and Negastriinae is inferred by Bayesian analyses of 163 adult morphological characters to revise the generic classification. Parsimony analysis is also performed to assess the sensitivity of the Bayesian results to the choice of optimality criterion. Bayesian hypothesis testing rejected monophyly for: Negastriinae; Cardiophorinae (but monophyletic after addition of four taxa); Cardiophorini; cardiophorine genera Aphricus LeConte, 1853; Aptopus Eschscholtz, 1829; Cardiophorus Eschscholtz, 1829; Cardiotarsus Eschscholtz, 1836; Paracardiophorus Schwarz, 1895; Phorocardius Fleutiaux, 1931; Dicronychus sensu Platia, 1994; Dicronychus sensu Méquignon, 1931; Craspedostethus sensu Schwarz, 1906 (i.e., including Tropidiplus Fleutiaux, 1903); Paracardiophorus sensu Cobos, 1970, although well-supported alternative classifications were available for only some. Based on taxonomic interpretation of phylogenetic results: Nyctorini is syn. n. of Cardiophorini; Globothorax Fleutiaux, 1891 (Physodactylinae), Margogastrius Schwarz, 1903 (Physodactylinae), and Pachyelater Lesne, 1897 (Dendrometrinae) are transferred to Cardiophorinae. The following changes are proposed for cardiophorine genera: Aptopus Eschscholtz, 1829 is redefined to exclude Horistonotus-like species; Coptostethus Wollaston, 1854 is a subgenus of Cardiophorus; Dicronychus Brullé, 1832 and Diocarphus Fleutiaux, 1947, Metacardiophorus Gurjeva, 1966, Platynychus Motschulsky, 1858, and Zygocardiophorus Iablokoff-Khnzorian and Mardjanian, 1981 are placed at genus rank; Paracardiophorus Schwarz, 1895 is redefined based on North American and Eurasian species only; Horistonotus Candèze, 1860 is redefined to include species with multiple apices on each side of their tarsal claws; Patriciella Van Zwaluwenburg, 1953 is syn. n. of Aphricus LeConte, 1853; Teslasena Fleutiaux, 1892 (Physodactylinae) is syn. n. of Globothorax Fleutiaux, 1891. The following new genera are described: Austrocardiophorus (type species: Cardiophorus humeralis Fairmaire and Germain, 1860); Chileaphricus (type species: Aphricus chilensis Fleutiaux, 1940); Floridelater (type species: Coptostethus americanus Horn, 1871, transferred from Negastriinae to Cardiophorinae). Paradicronychus (nomen nudum) is syn. n. of Cardiophorus Eschscholtz, 1829. Generic reassignments to make Cardiodontulus, Cardiophorus, Cardiotarsus, Paracardiophorus consistent with phylogenetically revised genus concepts resulted in 84 new combinations. 
Lectotypes are designated for 29 type species to fix generic concepts: Anelastes femoralis Lucas, 1857; Aphricus chilensis Fleutiaux, 1940; Athous argentatus Abeille de Perrin, 1894; Cardiophorus adjutor Candèze, 1875; Cardiophorus florentini Fleutiaux, 1895; Cardiophorus inflatus Candèze, 1882; Cardiophorus luridipes Candèze, 1860; Cardiophorus mirabilis Candèze, 1860; Cardiophorus musculus Erichson, 1840; Cardiotarsus capensis Candèze, 1860; Cardiotarsus vitalisi Fleutiaux, 1918; Craspedostethus rufiventris Schwarz, 1898; Elater cinereus Herbst, 1784; Elater minutissimus Germar, 1817; Elater sputator Linnaeus, 1758; Elater thoracicus Fabricius, 1801; Eniconyx pullatus Horn, 1884; Esthesopus castaneus Eschscholtz, 1829; Gastrimargus schneideri Schwarz, 1902; Globothorax chevrolati Fleutiaux, 1891; Horistonotus flavidus Candèze, 1860; Horistonotus simplex LeConte, 1863; Lesnelater madagascariensis Fleutiaux, 1935; Oedostethus femoralis LeConte, 1853; Phorocardius solitarius Fleutiaux, 1931; Platynychus indicus Motschulsky, 1858; Platynychus mixtus Fleutiaux, 1931; Triplonychus acuminatus Candèze, 1860; Tropidiplus tellinii Fleutiaux, 1903. A key to genera and diagnoses are provided for all genera and subgenera. A bibliographic synonymy includes references for all taxonomic changes to genera and new species through 2015. PMID:28331397

  14. [The Contributions of the East-German Sports Medicine Specialist and Neurologist Bernhard Schwarz (1918-1991) in the Field of Boxing].

    PubMed

    Bart, Katrin; Steinberg, Holger

    2018-03-01

    This study is the first to provide research on the East-German (GDR) sports physician and neurologist Bernhard Schwarz. It summarises Schwarz's publications from 1953 to 1966 regarding the impact of boxing on health, particularly craniocerebral injury. Also, the study analyses his work in the context of current discussions. It shows that Schwarz, who was a tenured professor and director of the Department of Psychiatry at the University Hospital of Leipzig and the physician of the GDR national boxing team, conducted systematic clinical surveys and pointed to the health impacts of boxing at an early point in time. He believed that risk exposure for athletes could be minimised through intensive and trained supervision by the coach and the physician as well as through changes to the conditions of boxing matches. Schwarz opposed a ban on boxing. Instead, he picked up suggestions concerning the prevention of adverse health impacts and added his own recommendations, which are remarkably similar to current practices aimed at minimising risk. For instance, he advised that ring-side physicians be trained to recognise dangerous conditions. Today, physicians must obtain a license to be allowed to care for a boxer. In addition, Schwarz pursued the concept of integral medicine. He called for a diversified training of boxers and argued that injured athletes should be treated holistically. Being a neurologist, he emphasised the important role of psychotherapy in this context. He identified the key role of rehabilitation, and suggested that rehabilitation is complete only with the patient's successful social and professional reintegration.

  15. Natural Scherk-Schwarz theories of the weak scale

    DOE PAGES

    García, Isabel Garcia; Howe, Kiel; March-Russell, John

    2015-12-01

    Natural supersymmetric theories of the weak scale are under growing pressure given present LHC constraints, raising the question of whether untuned supersymmetric (SUSY) solutions to the hierarchy problem are possible. In this paper, we explore a class of 5-dimensional natural SUSY theories in which SUSY is broken by the Scherk-Schwarz mechanism. We pedagogically explain how Scherk-Schwarz elegantly solves the traditional problems of 4-dimensional SUSY theories (based on the MSSM and its many variants) that usually result in an unsettling level of fine-tuning. The minimal Scherk-Schwarz setup possesses novel phenomenology, which we briefly outline. We show that achieving the observed physical Higgs mass motivates extra structure that does not significantly affect the level of tuning (always better than ~10%), and we explore three qualitatively different extensions: the addition of extra matter that couples to the Higgs, an extra U(1)' gauge group under which the Higgs is charged, and an NMSSM-like solution to the Higgs mass problem.
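
    Schematically (the standard textbook twist on a circle of radius R, not the paper's specific 5D construction): Scherk-Schwarz breaking twists the boundary condition of each field by a symmetry phase, which shifts its Kaluza-Klein masses; twisting by an R-symmetry gives bosons and fermions different charges q and thus splits the superpartners.

    ```latex
    \[
      \Phi(x,\, y + 2\pi R) = e^{\,2\pi i\, q\,\alpha}\,\Phi(x,\, y)
      \quad\Longrightarrow\quad
      m_n = \frac{n + q\,\alpha}{R}, \qquad n \in \mathbb{Z}.
    \]
    ```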

  16. Molecular Factors and Biological Pathways Associated with Malaria Fever and the Pathogenesis of Cerebral Malaria

    DTIC Science & Technology

    2007-04-09

    [Snippet of matching cited reference:] Schofield, L., S. Novakovic, P. Gerold, R. T. Schwarz, M. J. McConville, and S. D. Tachado. 1996. Glycosylphosphatidylinositol toxin of Plasmodium up-regulates...

  17. An unsteady aerodynamic formulation for efficient rotor tonal noise prediction

    NASA Astrophysics Data System (ADS)

    Gennaretti, M.; Testa, C.; Bernardini, G.

    2013-12-01

    An aerodynamic/aeroacoustic solution methodology for prediction of tonal noise emitted by helicopter rotors and propellers is presented. It is particularly suited for configurations dominated by localized, high-frequency inflow velocity fields such as those generated by blade-vortex interactions. The unsteady pressure distributions are determined by the sectional, frequency-domain Küssner-Schwarz formulation, with downwash including the wake inflow velocity predicted by a three-dimensional, unsteady, panel-method formulation suited for the analysis of rotors operating in complex aerodynamic environments. The radiated noise is predicted through solution of the Ffowcs Williams-Hawkings equation. The proposed approach yields a computationally efficient solution procedure that may be particularly useful in preliminary design/multidisciplinary optimization applications. It is validated through comparisons with solutions that apply the airloads directly evaluated by the time-marching, panel-method formulation. The results are provided in terms of blade loads, noise signatures and sound pressure level contours. An estimation of the computational efficiency of the proposed solution process is also presented.

  18. Ramond and Neveu-Schwarz paraspinning strings in presence of D-branes

    NASA Astrophysics Data System (ADS)

    Hamam, D.; Belaloui, N.

    2018-03-01

    We investigate the theory of an open parafermionic string stretched between two parallel Dp- and Dq-branes in the Ramond and Neveu-Schwarz sectors. Trilinear commutation relations between the string variables are postulated and the corresponding ones in terms of modes are derived. The analysis of the spectrum shows that one can again obtain a tachyon-free Neveu-Schwarz model for some values of the order of the paraquantization associated with certain values of p and q. The consistency of this model requires the calculation of the partition function and its confrontation with the computed degeneracies. A perfect agreement between the two results is obtained and the closure of the Virasoro superalgebra is confirmed.

  19. A complex analysis approach to the motion of uniform vortices

    NASA Astrophysics Data System (ADS)

    Riccardi, Giorgio

    2018-02-01

    A new mathematical approach to the kinematics and dynamics of planar uniform vortices in an incompressible inviscid fluid is presented. It is based on an integral relation between the Schwarz function of the vortex boundary and the induced velocity. This relation is first used for investigating the kinematics of a vortex whose Schwarz function has two simple poles in a transformed plane. The vortex boundary is the image of the unit circle through the conformal map obtained by conjugating its Schwarz function, and the resulting analysis is based on geometric and algebraic properties of that map. Moreover, it is shown that the steady configurations of a uniform vortex, possibly in the presence of point vortices, can also be investigated by means of the integral relation. The vortex equilibria are divided into two classes, depending on the behavior of the velocity on the boundary, measured in a reference system rotating with this curve. If it vanishes, the analysis is rather simple. However, vortices having nonvanishing relative velocity are also investigated, in the presence of a polygonal symmetry. In order to study the vortex dynamics, the definition of the Schwarz function is then extended to a Lagrangian framework. This Lagrangian Schwarz function solves a nonlinear integrodifferential Cauchy problem, which is transformed into a singular integral equation whose analytical solution is approached here in terms of successive approximations. The self-induced dynamics, as well as the interactions with a point vortex, or between two uniform vortices, are analyzed.

  1. Multilevel Preconditioners for Discontinuous Galerkin Approximations of Elliptic Problems with Jump Coefficients

    DTIC Science & Technology

    2010-12-01

    [Snippet of matching cited references:] ... discontinuous coefficients on geometrically nonconforming substructures. Technical Report Serie A 634, Instituto de Matematica Pura e Aplicada, Brazil, 2009. ... Instituto de Matematica Pura e Aplicada, Brazil, 2010, submitted. [41] M. Dryja, M. V. Sarkis, and O. B. Widlund. Multilevel Schwarz methods for ...

  2. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. In contrast to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth transition of grid cell size. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids can potentially achieve alignment of grid lines with large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the grids are orthogonal and curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling where a complex land-sea distribution is present.
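
    The classical Schwarz-Christoffel formula underlying both methods maps a canonical domain (half-plane or disk) onto a polygon; the prevertices w_k on the canonical boundary go to corners with interior angles α_k π (shown here in its standard form for orientation; the paper's grids rely on generalizations of this idea to the irregular boundaries described above):

    ```latex
    \[
      f(w) = A + C \int^{\,w} \prod_{k=1}^{n} (\zeta - w_k)^{\alpha_k - 1}\, d\zeta .
    \]
    ```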

  3. Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.

    2012-06-21

    We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.

  4. Reserve Component Special Forces Integration and Employment Models for the Operational Continuum

    DTIC Science & Technology

    1992-04-15

    OCONUS OPTEMPO CY87-90: 1987: 20 SFGA WINTEX/CIMEX (NATO) HQ AFSOUTH; 20 SFGA EX SCHWARZES PFERD FRG; 1-20 SFGA (ODB+2 ODA) FTX SCHWARZES PFERD FRG (ODB+3... FTX TRABUCCO SPAIN; 3-20 SFGA (2 ODA) FRENCH COMMANDO JCET MARTINIQUE; (2 ODA) GERMAN AIRBORNE GERMANY; 1-11 SFGA FOB WINTEX/CIMEX UK (ODB+4 ODA) EX

  5. Konrad Adenauer’s Military Advisors

    DTIC Science & Technology

    1989-02-13

    Ausgabe. Hans-Peter Schwarz and Rudolf Morsey, Hg. Vol. 1, Briefe 1945-1947, hg. v. Hans Peter Mensing. Berlin: Siedler Verlag, 1983. Vol. 2, Briefe 1949... Dietrich, Rudolf Morsey and Hans-Peter Schwarz, ed. Quellen zur Geschichte des Parlamentarismus und der politischen Parteien. Bd. 3, Auftakt zur Ara... New York: Penguin, 1982. Steiner, Jürg. European Democracies. New York: Longman, 1986. Taylor, A.J.P. The Origins of the Second World War. 2d ed. New

  6. Design of tissue engineering scaffolds based on hyperbolic surfaces: structural numerical evaluation.

    PubMed

    Almeida, Henrique A; Bártolo, Paulo J

    2014-08-01

    Tissue engineering represents a new field aiming at developing biological substitutes to restore, maintain, or improve tissue functions. In this approach, scaffolds provide temporary mechanical and vascular support for tissue regeneration while tissue in-growth is being formed. These scaffolds must be biocompatible and biodegradable, with appropriate porosity, pore structure and distribution, and optimal vascularization, with both surface and structural compatibility. The challenge is to establish a proper balance between the porosity and the mechanical performance of scaffolds. This work investigates the use of two types of triply periodic minimal surfaces, Schwarz and Schoen, to design better biomimetic scaffolds with high surface-to-volume ratio, high porosity, and good mechanical properties. The mechanical behaviour of these structures is assessed with the finite element software Abaqus. The effect of two design parameters (thickness and surface radius) on porosity and mechanical behaviour is also evaluated. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
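
    A rough feel for the porosity side of this balance can be obtained from the standard nodal approximation of the Schwarz P surface, cos x + cos y + cos z = 0. The Python sketch below voxelizes a sheet scaffold of assumed half-thickness t (a free parameter standing in for the paper's thickness parameter) and estimates porosity as the void fraction of the unit cell.

    import numpy as np

    n = 96
    g = np.linspace(0.0, 2.0*np.pi, n)
    x, y, z = np.meshgrid(g, g, g, indexing="ij")
    phi = np.cos(x) + np.cos(y) + np.cos(z)   # implicit Schwarz P field

    t = 0.4                       # sheet half-thickness in field units (assumed)
    solid = np.abs(phi) <= t      # thin shell around the phi = 0 level set
    print("estimated porosity:", 1.0 - solid.mean())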

  7. Beta Human Chorionic Gonadotropin - Induction of Apoptosis in Breast Cancer

    DTIC Science & Technology

    2006-01-01

    R., Sturzl, M ., Albini, A., Tschachler, E., Zangerle, R., Donini , S., Feichtinger, H., Schwarz, S., 1997. Induction of apoptosis in Kaposi’s...Roth, B., Bock, G., Recheis, H., Sgonc, R., Sturzl, M ., Albini, A., 18 Tschachler, E., Zangerle, R., Donini , S., Feichtinger, H., Schwarz, S., 1997...Biol. Anim. 30A, 4-8. Bièche, I., Lazar, V., Noguès, C., Poynard, T., Giovangrandi, Y., Bellet, D., Lidereau, R., Vidaud, M ., 1998. Prognostic value

  8. A scalable nonlinear fluid-structure interaction solver based on a Schwarz preconditioner with isogeometric unstructured coarse spaces in 3D

    NASA Astrophysics Data System (ADS)

    Kong, Fande; Cai, Xiao-Chuan

    2017-07-01

    Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system, that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.
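
    The solver pattern described here (inexact Newton outside, a Krylov method inside, a Schwarz-type preconditioner around the Krylov solve) can be miniaturized on a toy problem. The sketch below, assuming the 1D model problem u'' = e^u with homogeneous Dirichlet data, uses SciPy's Jacobian-free newton_krylov with a one-level, non-overlapping (block-Jacobi) Schwarz preconditioner built from the linear part; it illustrates the idea only, not the paper's multilevel smoothed preconditioner with isogeometric coarse spaces.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla
    from scipy.optimize import newton_krylov

    n = 200
    h = 1.0/(n + 1)
    A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")/h**2

    def residual(u):
        # F(u) = u'' - exp(u), discretized; we solve F(u) = 0
        return A @ u - np.exp(u)

    # one-level Schwarz (block-Jacobi) preconditioner: exact subdomain solves
    mid = n//2
    lu1 = spla.splu(A[:mid, :mid])
    lu2 = spla.splu(A[mid:, mid:])

    def precond(r):
        out = np.empty_like(r)
        out[:mid] = lu1.solve(r[:mid])
        out[mid:] = lu2.solve(r[mid:])
        return out

    M = spla.LinearOperator((n, n), matvec=precond)
    u = newton_krylov(residual, np.zeros(n), method="lgmres", inner_M=M)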

  9. A scalable nonlinear fluid–structure interaction solver based on a Schwarz preconditioner with isogeometric unstructured coarse spaces in 3D

    DOE PAGES

    Kong, Fande; Cai, Xiao-Chuan

    2017-03-24

    Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system, that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.

  10. A cascaded Schwarz converter for high frequency power distribution

    NASA Technical Reports Server (NTRS)

    Ray, Biswajit; Stuart, Thomas A.

    1988-01-01

    It is shown that two Schwarz converters in cascade provide a very reliable 20-kHz source that features zero current commutation, constant frequency, and fault-tolerant operation, meeting requirements for spacecraft applications. A steady-state analysis of the converter is presented, and equations for the steady-state performance are derived. Fault-current limiting is discussed. Experimental results are presented for a 900-W version, which has been successfully tested under no-load, full-load, and short-circuit conditions.

  11. A Study of Three Phase and Single Phase High Frequency Distribution Systems

    DTIC Science & Technology

    1989-09-20

    single Schwarz converter which operates in a variable frequency mode and acts as a regulated dc power supply. This mode of operation is used to maintain a... conditioning stages. The first stage contains a single Schwarz converter which operates in a variable frequency mode and acts as a regulated dc power supply... dependent upon the amount of current ripple the capacitor must sink. This determines the capacitor heating since the power dissipated is equal to I²R

  12. Another short and elementary proof of strong subadditivity of quantum entropy

    NASA Astrophysics Data System (ADS)

    Ruskai, Mary Beth

    2007-08-01

    A short and elementary proof of the joint convexity of relative entropy is presented, using nothing beyond linear algebra. The key ingredients are an easily verified integral representation and the strategy used to prove the Cauchy-Schwarz inequality in elementary courses. Several consequences are proved in a way which allows an elementary proof of strong subadditivity in a few more lines. Some expository material on Schwarz inequalities for operators and the Holevo bound for partial measurements is also included.
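
    For reference, the elementary strategy alluded to here is the nonnegative-quadratic argument. In LaTeX form, for a real inner-product space,

    \[
      0 \le \|x - t\,y\|^2 = \|x\|^2 - 2t\,\langle x, y\rangle + t^2\,\|y\|^2
      \qquad \text{for all } t \in \mathbb{R},
    \]

    so the discriminant of this quadratic in t must be nonpositive, which gives

    \[
      \langle x, y\rangle^2 \le \|x\|^2\,\|y\|^2 .
    \]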

  13. Electric-magnetic dualities in non-abelian and non-commutative gauge theories

    NASA Astrophysics Data System (ADS)

    Ho, Jun-Kai; Ma, Chen-Te

    2016-08-01

    Electric-magnetic dualities are equivalences between strong and weak coupling constants. A standard example is the exchange of electric and magnetic fields in an abelian gauge theory. We present three methods for performing electric-magnetic dualities in the case of the non-commutative U(1) gauge theory. The first method is to use the covariant field strengths as the electric and magnetic fields. We find an invariant form of the equation of motion after performing the electric-magnetic duality. The second method is to use the Seiberg-Witten map to rewrite the non-commutative U(1) gauge theory in terms of the abelian field strength. The third method is to use the large Neveu-Schwarz-Neveu-Schwarz (NS-NS) background limit (in which the non-commutativity parameter has only one degree of freedom) to consider the non-commutative U(1) gauge theory or the D3-brane. In this limit, we introduce or dualize a new one-form gauge potential to get a D3-brane in a large Ramond-Ramond (R-R) background via field redefinition. We also use perturbation theory to study the equivalence between the two D3-brane theories. Comparing these methods in the non-commutative U(1) gauge theory yields different physical implications. The comparison reflects the differences between the non-abelian and non-commutative gauge theories with respect to electric-magnetic dualities. For a complete study, we also extend our analysis to the simplest abelian and non-abelian p-form gauge theories, and to a non-commutative theory with non-abelian structure.

  14. Alternating method applied to edge and surface crack problems

    NASA Technical Reports Server (NTRS)

    Hartranft, R. J.; Sih, G. C.

    1972-01-01

    The Schwarz-Neumann alternating method is employed to obtain stress intensity solutions to two crack problems of practical importance: a semi-infinite elastic plate containing an edge crack which is subjected to concentrated normal and tangential forces, and an elastic half space containing a semicircular surface crack which is subjected to uniform opening pressure. The solution to the semicircular surface crack is seen to be a significant improvement over existing approximate solutions. Application of the alternating method to other crack problems of current interest is briefly discussed.

  15. Geodesy and Cartography (Selected Articles),

    DTIC Science & Technology

    1979-08-10

    C-OO/b73 GEODESY AND CARTOGRAPHY (SELECTED ARTICLES) English pages: 40. Source: Geodezja i Kartografia, Vol. 27, Nr. 1, 1978, pp. 3-27. Country of... 1976. 14) Sledzinski, J., Zibek, Z., Czarnecki, K., Rogowski, J.B., Problems in Using Satellite Surveys in an Astronomical-Geodesic Network, Geodezja i... Based on Observations of Low-Low Satellites Using Collocation Methods, Geodezja i Kartografia, Vol. XXVI, No. 4, 1977. [7] Krynski, J., Schwarz, K.P.

  16. Differentiability breaking and Schwarz theorem violation in an aging material

    NASA Astrophysics Data System (ADS)

    Doussineau, P.; Levelut, A. L.

    2002-07-01

    Dielectric constant measurements are performed in the frequency range from 1 kHz to 1 MHz on a disordered material with ferroelectric properties (KTa1-xNbxO3 crystals) after isothermal aging at the plateau temperature Tpl≅10 K. They show that the derivatives of the complex capacitance with respect to temperature and time present two very peculiar behaviors. The first point is that the first and second derivatives against temperature are not equal on the two sides of Tpl; this is differentiability breaking. The second point is that the two crossed second derivatives against temperature and time are not equal (indeed they have opposite signs); this is a violation of Schwarz theorem. These results are obtained on both the real part and the imaginary part of the capacitance. A model, initially imagined for aging and memory of aging, attributes the time-dependent properties to the evolution (growth and reconformations) of the polarization domain walls. It is shown that it can also explain the observed differentiability breaking (and in particular its logarithmic increase with the plateau duration tpl) and the violation of Schwarz theorem.
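
    The theorem in question is the classical symmetry of mixed second partial derivatives: if the complex capacitance C(T, t) were twice continuously differentiable, then

    \[
      \frac{\partial^2 C}{\partial T\,\partial t} \;=\; \frac{\partial^2 C}{\partial t\,\partial T},
    \]

    and it is exactly this equality that the reported measurements find broken, with the two crossed derivatives even taking opposite signs.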

  17. The Green-Schwarz mechanism and geometric anomaly relations in 2d (0,2) F-theory vacua

    NASA Astrophysics Data System (ADS)

    Weigand, Timo; Xu, Fengjun

    2018-04-01

    We study the structure of gauge and gravitational anomalies in 2d N = (0 , 2) theories obtained by compactification of F-theory on elliptically fibered Calabi-Yau 5-folds. Abelian gauge anomalies, induced at 1-loop in perturbation theory, are cancelled by a generalized Green-Schwarz mechanism operating at the level of chiral scalar fields in the 2d supergravity theory. We derive closed expressions for the gravitational and the non-abelian and abelian gauge anomalies including the Green-Schwarz counterterms. These expressions involve topological invariants of the underlying elliptic fibration and the gauge background thereon. Cancellation of anomalies in the effective theory predicts intricate topological identities which must hold on every elliptically fibered Calabi-Yau 5-fold. We verify these relations in a non-trivial example, but their proof from a purely mathematical perspective remains as an interesting open problem. Some of the identities we find on elliptic 5-folds are related in an intriguing way to previously studied topological identities governing the structure of anomalies in 6d N = (1 , 0) and 4d N = 1 theories obtained from F-theory.

  18. Clinical evaluation of a new measles-mumps-rubella combined live virus vaccine in the Dominican Republic*

    PubMed Central

    Ehrenkranz, N. Joel; Ventura, Arnoldo K.; Medler, Edward M.; Jackson, Joseph E.; Kenny, Michael T.

    1975-01-01

    Over 900 children were enrolled in a double-blind placebo-controlled clinical study of measles (Schwarz strain), mumps (Jeryl Lynn strain), and rubella (Cendehill strain) trivalent vaccine. The trivalent vaccine caused about the same degree of reactivity as is generally associated with the Schwarz strain measles vaccine. Paired sera from triple-susceptible vaccinees had seroconversion rates of 99% for measles, 94% for mumps, and 93% for rubella. The results of this study show that this trivalent vaccine is as well tolerated and as effective as its component vaccines. PMID:764997

  19. No-Ghost Theorem for Neveu-Schwarz String in 0-Picture

    NASA Astrophysics Data System (ADS)

    Kohriki, M.; Kunitomo, H.; Murata, M.

    2010-12-01

    The no-ghost theorem for Neveu-Schwarz string is directly proved in 0-picture. The one-to-one correspondence between physical states in 0-picture and in the conventional (-1)-picture is confirmed. It is shown that a nontrivial metric consistent with the BRST cohomology is needed to define a positive semidefinite norm in the physical Hilbert space. As a by-product, we find a new inverse picture-changing operator, which is noncovariant but has a nonsingular operator product with itself. A possibility to construct a new gauge-invariant superstring field theory is discussed.

  20. The Multiscale Robin Coupled Method for flows in porous media

    NASA Astrophysics Data System (ADS)

    Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.

    2018-02-01

    A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established, and its connection with the method of Douglas et al. (1993) [12] is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
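
    Since optimized Schwarz methods are central to this reinterpretation, a minimal sketch may help: the Python code below runs a Robin-Robin (optimized Schwarz) iteration for -u'' = f on (0,1), split at x = 0.5, with the Robin parameter p playing the role of the tunable interface parameter discussed above. The discretization, the value of p, and the one-sided interface fluxes are our simplifying assumptions.

    import numpy as np

    m = 50; h = 0.5/m; p = 10.0                 # nodes per subdomain, Robin weight
    x1 = np.linspace(0.0, 0.5, m + 1)           # subdomain 1, interface at x = 0.5
    x2 = np.linspace(0.5, 1.0, m + 1)           # subdomain 2
    f = lambda x: np.pi**2*np.sin(np.pi*x)      # manufactured solution u = sin(pi x)

    def stiffness(n):
        # standard 1D finite-difference Laplacian, Dirichlet data eliminated
        A = np.zeros((n, n))
        i = np.arange(n)
        A[i, i] = 2.0/h**2
        A[i[:-1], i[:-1] + 1] = -1.0/h**2
        A[i[1:], i[1:] - 1] = -1.0/h**2
        return A

    # unknowns: u1 at x1[1:], u2 at x2[:-1]; zero Dirichlet at the outer ends
    A1, A2 = stiffness(m), stiffness(m)
    b1, b2 = f(x1[1:]), f(x2[:-1])

    # replace one row in each system by the Robin condition du/dn + p*u = g
    A1[-1, :] = 0.0; A1[-1, -1] = 1.0/h + p; A1[-1, -2] = -1.0/h
    A2[0, :] = 0.0;  A2[0, 0] = 1.0/h + p;   A2[0, 1] = -1.0/h

    g1 = g2 = 0.0
    for k in range(25):
        b1[-1], b2[0] = g1, g2
        u1 = np.linalg.solve(A1, b1)
        u2 = np.linalg.solve(A2, b2)
        # exchange Robin data across the interface (one-sided flux estimates)
        g1 = (u2[1] - u2[0])/h + p*u2[0]
        g2 = -(u1[-1] - u1[-2])/h + p*u1[-1]

    print("interface mismatch:", abs(u1[-1] - u2[0]))

    Tuning p against the problem coefficients is what makes the method "optimized"; in the Robin Coupled Method above, varying the analogous parameter is what interpolates between MMMFEM-like and MHM-like behavior.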

  1. The Partition Function in the Four-Dimensional Schwarz-Type Topological Half-Flat Two-Form Gravity

    NASA Astrophysics Data System (ADS)

    Abe, Mitsuko

    We derive the partition functions of the Schwarz-type four-dimensional topological half-flat two-form gravity model on a K3 surface or T^4, up to on-shell one-loop corrections. In this model the bosonic moduli spaces describe an equivalence class of a triple of Einstein-Kähler forms (the hyper-Kähler forms). The integrand of the partition function is represented by a product of ∂̄-torsions; the ∂̄-torsion is the extension of the R-torsion from the de Rham complex to the ∂̄-complex of a complex analytic manifold.

  2. Domain decomposition methods for nonconforming finite element spaces of Lagrange-type

    NASA Technical Reports Server (NTRS)

    Cowsar, Lawrence C.

    1993-01-01

    In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, J E; Vassilevski, P S; Woodward, C S

    This paper provides extensions of an element agglomeration AMG method to nonlinear elliptic problems discretized by the finite element method on general unstructured meshes. The method constructs coarse discretization spaces and corresponding coarse nonlinear operators as well as their Jacobians. We introduce both standard (fairly quasi-uniformly coarsened) and non-standard (coarsened away) coarse meshes and respective finite element spaces. We use both kinds of spaces in FAS-type coarse subspace correction (or Schwarz) algorithms. Their performance is illustrated on a number of model problems. The coarsened-away spaces seem to perform better than the standard spaces for problems with nonlinearities in the principal part of the elliptic operator.

  4. Preliminary cone-beam computed tomography study evaluating dental and skeletal changes after treatment with a mandibular Schwarz appliance.

    PubMed

    Tai, Kiyoshi; Hotokezaka, Hitoshi; Park, Jae Hyun; Tai, Hisako; Miyajima, Kuniaki; Choi, Matthew; Kai, Lisa M; Mishima, Katsuaki

    2010-09-01

    The purpose of this study was to evaluate the efficacy of the Schwarz appliance with a new method of superimposing detailed cone-beam computed tomography (CBCT) images. The subjects were 28 patients with Angle Class I molar relationships and crowding; they were randomly divided into 2 groups: 14 expanded and 14 nonexpanded patients. Three-dimensional Rugle CBCT software (Medic Engineering, Kyoto, Japan) was used to measure 10 reference points before treatment (T0) and during the retention period of approximately 9 months after 6 to 12 months of expansion (T1). Cephalometric and cast measurements were used to evaluate the treatments in both groups. Also, the mandibular widths of both groups were measured along an axial plane at 2 levels below the cementoenamel junction from a CBCT scan. Differences between the 2 groups at T0 and T1 were analyzed by using the Mann-Whitney U test. The dental arch (including tooth root apices) had expanded; however, alveolar bone expansion was only up to 2 mm below the cementoenamel junction. There was a statistically significant (P < 0.05) difference between the groups in terms of crown, cementoenamel junction, root, and upper alveolar process. However, no significant (P > 0.05) differences were observed in the interwidths of the mandibular body, zygomatic bones, condylar heads, or mandibular antegonial notches. In the mandibular cast measurements, arch crowding and arch perimeter showed statistically significant changes in the expanded group. The buccal mandibular width and lingual mandibular width values had significant changes as measured from a point 2 mm below the cementoenamel junction. The findings suggest that the Schwarz appliance primarily affected the dentoalveolar complex, but it had little effect on either the mandibular body or any associated structures. In addition, the molar center of rotation was observed to be below the root apex. 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  5. Eddy current loss analysis of open-slot fault-tolerant permanent-magnet machines based on conformal mapping method

    NASA Astrophysics Data System (ADS)

    Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang

    2017-05-01

    This paper proposes an analytical method, based on conformal mapping (CM), for the accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method changes the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method is able to predict the magnetic flux density and EC loss precisely.

  6. Image registration based on subpixel localization and Cauchy-Schwarz divergence

    NASA Astrophysics Data System (ADS)

    Ge, Yongxin; Yang, Dan; Zhang, Xiaohong; Lu, Jiwen

    2010-07-01

    We define a new matching metric, the corner Cauchy-Schwarz divergence (CCSD), and present a new approach to image registration based on the proposed CCSD and subpixel localization. First, we detect the corners in an image with a multiscale Harris operator and take them as initial interest points. Then a subpixel localization technique is applied to determine the locations of the corners and eliminate false and unstable corners. After that, the CCSD is used to obtain the initial matching corners. Finally, we use random sample consensus to robustly estimate the parameters based on the initial matching. The experimental results demonstrate that the proposed algorithm performs well in terms of both accuracy and efficiency.
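
    The divergence underlying the CCSD can be written down directly: it is zero exactly when the two descriptors are proportional, by the Cauchy-Schwarz inequality. The Python sketch below applies it to generic nonnegative feature vectors; the paper's specific corner descriptor is not reproduced here.

    import numpy as np

    def cs_divergence(p, q, eps=1e-12):
        # D_CS(p, q) = -log( <p, q> / (||p|| ||q||) ), >= 0 by Cauchy-Schwarz
        num = np.dot(p, q)
        den = np.linalg.norm(p)*np.linalg.norm(q)
        return -np.log((num + eps)/(den + eps))

    p = np.array([0.20, 0.50, 0.30])
    q = np.array([0.25, 0.45, 0.30])
    print(cs_divergence(p, q))     # small value indicates similar descriptors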

  7. Rholography, black holes and Scherk-Schwarz

    DOE PAGES

    Gaddam, Nava; Gnecchi, Alessandra; Vandoren, Stefan; ...

    2015-06-10

    We present a construction of a class of near-extremal asymptotically flat black hole solutions in four (or five) dimensional gauged supergravity with R-symmetry gaugings obtained from Scherk-Schwarz reductions on a circle. The entropy of these black holes is counted holographically by the well known MSW (or D1/D5) system, with certain twisted boundary conditions labeled by a twist parameter ρ. Here, we find that the corresponding (0, 4) (or (4, 4)) superconformal algebras are exactly those studied by Schwimmer and Seiberg, using a twist on the outer automorphism group. The interplay between R-symmetries, ρ-algebras and holography leads us to name our construction "Rholography".

  8. Rholography, black holes and Scherk-Schwarz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaddam, Nava; Gnecchi, Alessandra; Vandoren, Stefan

    We present a construction of a class of near-extremal asymptotically flat black hole solutions in four (or five) dimensional gauged supergravity with R-symmetry gaugings obtained from Scherk-Schwarz reductions on a circle. The entropy of these black holes is counted holographically by the well known MSW (or D1/D5) system, with certain twisted boundary conditions labeled by a twist parameter ρ. Here, we find that the corresponding (0, 4) (or (4, 4)) superconformal algebras are exactly those studied by Schwimmer and Seiberg, using a twist on the outer automorphism group. The interplay between R-symmetries, ρ-algebras and holography leads us to name our construction "Rholography".

  9. KENNEDY SPACE CENTER, FLA. - United Space Alliance employees Jeremy Schwarz (left) and Chris Keeling install new tiles on the heat shield of main engine 1 for the orbiter Discovery. A heat shield is a protective layer on a spacecraft designed to protect it from the high temperatures, usually those that result from aerobraking during reentry into the Earth’s atmosphere.

    NASA Image and Video Library

    2003-09-23

    KENNEDY SPACE CENTER, FLA. - United Space Alliance employees Jeremy Schwarz (left) and Chris Keeling install new tiles on the heat shield of main engine 1 for the orbiter Discovery. A heat shield is a protective layer on a spacecraft designed to protect it from the high temperatures, usually those that result from aerobraking during reentry into the Earth’s atmosphere.

  10. Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems

    NASA Astrophysics Data System (ADS)

    Arrarás, A.; Portero, L.; Yotov, I.

    2014-01-01

    We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.

  11. Hairy strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahakian, Vatche

    Zero modes of the world-sheet spinors of a closed string can source higher order moments of the bulk supergravity fields. In this work, we analyze various configurations of closed strings focusing on the imprints of the quantized spinor vacuum expectation values onto the tails of bulk fields. We identify supersymmetric arrangements for which all multipole charges vanish; while for others, we find that one is left with Neveu-Schwarz-Neveu-Schwarz, and Ramond-Ramond dipole and quadrupole moments. Our analysis is exhaustive with respect to all the bosonic fields of the bulk and to all higher order moments. We comment on the relevance of these results to entropy computations of hairy black holes of a single charge or more, and to open/closed string duality.

  12. KENNEDY SPACE CENTER, FLA. - While Jay Beason (left), with United Space Alliance, looks on, Jeremy Schwarz (front) and Tom Summers (behind), also with USA, place new tiles on the heat shield of main engine 1 for the orbiter Discovery. A heat shield is a protective layer on a spacecraft designed to protect it from the high temperatures, usually those that result from aerobraking during reentry into the Earth’s atmosphere.

    NASA Image and Video Library

    2003-09-23

    KENNEDY SPACE CENTER, FLA. - While Jay Beason (left), with United Space Alliance, looks on, Jeremy Schwarz (front) and Tom Summers (behind), also with USA, place new tiles on the heat shield of main engine 1 for the orbiter Discovery. A heat shield is a protective layer on a spacecraft designed to protect it from the high temperatures, usually those that result from aerobraking during reentry into the Earth’s atmosphere.

  13. Standardization of Schwarz-Christoffel transformation for engineering design of semiconductor and hybrid integrated-circuit elements

    NASA Astrophysics Data System (ADS)

    Yashin, A. A.

    1985-04-01

    For the engineering design of integrated-circuit elements, a semiconductor or hybrid structure can be mapped into a calculable two-dimensional region by the Schwarz-Christoffel transformation, with a universal algorithm constructed on the basis of Maxwell's electromagnetic-thermal similarity principle. The design procedure involves conformal mapping of the original region into a polygon and then of the latter into a rectangle with uniform field distribution, where conductances and capacitances are calculated using tabulated standard mapping functions. Subsequent synthesis of a device requires inverse conformal mapping. Devices adaptable as integrated-circuit elements are high-resistance film resistors with periodic serration, distributed-resistance film attenuators with high transformation ratio, coplanar microstrip lines, bipolar transistors, directional couplers with distributed coupling to microstrip lines for microwave bulk devices, and quasiregular smooth matching transitions from asymmetric to coplanar microstrip lines.

  14. A 2.5 kW cascaded Schwarz converter for 20 kHz power distribution

    NASA Technical Reports Server (NTRS)

    Shetler, Russell E.; Stuart, Thomas A.

    1989-01-01

    Because it avoids the high currents in a parallel loaded capacitor, the cascaded Schwarz converter should offer better component utilization than converters with sinusoidal output voltages. The circuit is relatively easy to protect, and it provides a predictable trapezoidal voltage waveform that should be satisfactory for 20-kHz distribution systems. Analysis of the system is enhanced by plotting curves of normalized variables vs. γ1, where γ1 is proportional to the variable frequency of the first stage. Light-load operation is greatly improved by the addition of a power recycling rectifier bridge that is back biased at medium to heavy loads. Operation has been verified on a 2.5-kW circuit that uses input and output voltages in the same range as those anticipated for certain future spacecraft power systems.

  15. A bicontinuous tetrahedral structure in a liquid-crystalline lipid

    NASA Astrophysics Data System (ADS)

    Longley, William; McIntosh, Thomas J.

    1983-06-01

    The structure of most lipid-water phases can be visualized as an ordered distribution of two liquid media, water and hydrocarbons, separated by a continuous surface covered by the polar groups of the lipid molecules [1]. In the cubic phases in particular, rod-like elements are linked into three-dimensional networks [1,2]. Two of these phases (space groups Ia3d and Pn3m) contain two such three-dimensional networks mutually inter-woven and unconnected. Under the constraints of energy minimization [3], the interface between the components in certain of these `porous fluids' may well resemble one of the periodic minimal surface structures of the type described mathematically by Schwarz [4,5]. A structure of this sort has been proposed for the viscous isotropic (cubic) form of glycerol monooleate (GMO) by Larsson et al. [6], who suggested that the X-ray diagrams of Lindblom et al. [7] indicated a body-centred crystal structure in which lipid bilayers might be arranged as in Schwarz's octahedral surface [4]. We have now found that at high water contents, a primitive cubic lattice better fits the X-ray evidence, with the material in the crystal arranged in a tetrahedral way. The lipid appears to form a single bilayer, continuous in three dimensions, separating two continuous interlinked networks of water. Each of the water networks has the symmetry of the diamond crystal structure, and the bilayer lies in the space between them, following a surface resembling Schwarz's tetrahedral surface [4].

  16. Professional Books.

    ERIC Educational Resources Information Center

    Gilstrap, Robert L.; And Others

    1993-01-01

    Reviews six books: "Teacher Lore" (Schubert and Ayers), about teachers' accounts of their experience; "America's Best Classrooms" (Seymour and others); "Another Door to Learning," (Schwarz) about learning-disabled children; "Talking with Your Children about a Troubled World" (Dumas); "Our Family, Our…

  17. Genetics Home Reference: rapid-onset dystonia parkinsonism

    MedlinePlus

    Citation on PubMed: Brashear A, Dobyns WB, de Carvalho Aguiar P, Borg M, Frijns CJ, Gollamudi S, ... Kabakci K, Isbruch K, Schilling K, Hedrich K, de Carvalho Aguiar P, Ozelius LJ, Kramer PL, Schwarz ...

  18. On New Proofs of Fundamental Inequalities with Applications

    ERIC Educational Resources Information Center

    Ray, Partha

    2010-01-01

    By using the Cauchy-Schwarz inequality, a new proof of several standard inequalities is given. A new proof of Young's inequality is given by using Hölder's inequality. A new application of the above inequalities is included.

  19. Tree-level disk amplitude of three closed strings

    NASA Astrophysics Data System (ADS)

    Mousavi, Sepideh; Velni, Komeil Babaei

    2018-05-01

    It has been shown that the disk-level S-matrix elements of one Ramond-Ramond (RR) and two Neveu-Schwarz-Neveu-Schwarz (NSNS) states can be found by applying the Ward identity associated with string duality and gauge symmetry to a given component of the S matrix. These amplitudes appear as components of six different T-dual multiplets. It has been predicted in the literature that some nonzero disk-level scattering amplitudes, such as that of one RR (p-1)-form with zero transverse indices and two NSNS states, cannot be captured by the T-dual Ward identity. We explicitly find this amplitude in terms of a minimal set of integral functions by inserting one closed-string RR vertex operator and two NSNS vertex operators. From the invariance of the amplitude under the Ward identity associated with the NSNS gauge transformations and T-duality, we also find some integral identities.

  20. Generalization of the Schwarz-Christoffel mapping to multiply connected polygonal domains.

    PubMed

    Vasconcelos, Giovani L

    2014-06-08

    A generalization of the Schwarz-Christoffel mapping to multiply connected polygonal domains is obtained by making a combined use of two preimage domains, namely, a rectilinear slit domain and a bounded circular domain. The conformal mapping from the circular domain to the polygonal region is written as an indefinite integral whose integrand consists of a product of powers of the Schottky-Klein prime functions, which is the same irrespective of the preimage slit domain, and a prefactor function that depends on the choice of the rectilinear slit domain. A detailed derivation of the mapping formula is given for the case where the preimage slit domain is the upper half-plane with radial slits. Representation formulae for other canonical slit domains are also obtained but they are more cumbersome in that the prefactor function contains arbitrary parameters in the interior of the circular domain.
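
    For comparison, the classical simply connected Schwarz-Christoffel formula mapping the unit disk onto a polygon with interior angles alpha_k pi can be written as

    \[
      f(z) \;=\; A + C \int^{z} \prod_{k=1}^{n}
      \left(1 - \frac{\zeta}{z_k}\right)^{\alpha_k - 1} d\zeta ,
    \]

    with prevertices z_k on the unit circle; the generalization described above replaces the factors (1 - zeta/z_k) by Schottky-Klein prime functions of the circular preimage domain and multiplies in the prefactor tied to the chosen slit domain.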

  1. Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models

    NASA Astrophysics Data System (ADS)

    Xu, Shiming

    2015-04-01

    We propose new grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when a complex land-ocean distribution is present.

  2. Double metric, generalized metric, and α' -deformed double field theory

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2016-03-01

    We relate the unconstrained "double metric" of the "α' -geometry" formulation of double field theory to the constrained generalized metric encoding the spacetime metric and b -field. This is achieved by integrating out auxiliary field components of the double metric in an iterative procedure that induces an infinite number of higher-derivative corrections. As an application, we prove that, to first order in α' and to all orders in fields, the deformed gauge transformations are Green-Schwarz-deformed diffeomorphisms. We also prove that to first order in α' the spacetime action encodes precisely the Green-Schwarz deformation with Chern-Simons forms based on the torsionless gravitational connection. This seems to be in tension with suggestions in the literature that T-duality requires a torsionful connection, but we explain that these assertions are ambiguous since actions that use different connections are related by field redefinitions.

  3. Null hypersurface quantization, electromagnetic duality and asymptotic symmetries of Maxwell theory

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Arpan; Hung, Ling-Yan; Jiang, Yikun

    2018-03-01

    In this paper we introduce careful regularization in the quantization of Maxwell theory at asymptotic null infinity. This allows systematic discussion of the commutators under various boundary conditions, and the controlled application of Dirac brackets accordingly. The method is most useful when we consider asymptotic charges that are not localized at the boundary u → ±∞, like large gauge transformations. We show that our method reproduces the operator algebra in known cases, and that it can be applied to other space-time symmetry charges such as the BMS transformations. We also obtain the asymptotic form of the U(1) charge following from the electromagnetic duality in an explicitly EM-symmetric Schwarz-Sen type action. Using our regularization method, we demonstrate that the charge generates the expected transformation of a helicity operator. Our method promises applications in more generic theories.

  4. Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements

    NASA Astrophysics Data System (ADS)

    Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.

    2000-11-01

    In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
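
    The core of the solver just described, conjugate gradients wrapped around an additive Schwarz preconditioner, fits in a short sketch. In the Python code below a 1D Laplacian stands in for the reduced SPD system, and the subdomain count and overlap are illustrative choices, not the algebraically determined partition used by DOUG.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 400
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    nsub, ov = 4, 4                   # subdomains and overlap width (assumed)
    size = n//nsub
    blocks = [slice(max(0, k*size - ov), min(n, (k + 1)*size + ov))
              for k in range(nsub)]
    lus = [spla.splu(A[s, s].tocsc()) for s in blocks]

    def asm(r):
        # one-level additive Schwarz: sum of overlapping local solves
        z = np.zeros_like(r)
        for s, lu in zip(blocks, lus):
            z[s] += lu.solve(r[s])
        return z

    M = spla.LinearOperator((n, n), matvec=asm)
    x, info = spla.cg(A, b, M=M)      # info == 0 signals convergence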

  5. Topological BF Theories

    NASA Astrophysics Data System (ADS)

    Sǎraru, Silviu-Constantin

    Topological field theories originate in the papers of Schwarz and Witten. Initially, Schwarz showed that one of the topological invariants, namely the Ray-Singer torsion, can be represented as the partition function of a certain quantum field theory. Subsequently, Witten constructed a framework for understanding Morse theory in terms of supersymmetric quantum mechanics. These two constructions represent the prototypes of all topological field theories. The model used by Witten has been applied to classical index theorems and, moreover, suggested some generalizations that led to new mathematical results on holomorphic Morse inequalities. Starting with these results, further developments in the domain of topological field theories have been achieved. The Becchi-Rouet-Stora-Tyutin (BRST) symmetry allowed for a new definition of topological field theories as theories whose BRST-invariant Hamiltonian is also BRST-exact. An important class of topological theories of Schwarz type is the class of BF models. This type of model describes three-dimensional quantum gravity and is useful in the study of four-dimensional quantum gravity in the Ashtekar-Rovelli-Smolin formulation. Two-dimensional BF models are related to Poisson sigma models arising from various two-dimensional gravities. The analysis of Poisson sigma models, including their relationship to two-dimensional gravity and the study of classical solutions, has been intensively pursued in the literature. In this thesis we approach the problem of constructing some classes of interacting BF models in the context of the BRST formalism. In view of this, we use the method of deformation of the BRST charge and of the BRST-invariant Hamiltonian. Both methods rely on specific techniques of local BRST cohomology. The main hypotheses under which we construct the above-mentioned interactions are: space-time locality, Poincare invariance, smoothness of the deformations in the coupling constant, and preservation of the number of derivatives on each field. The first two hypotheses imply that the resulting interacting theory must be local in space-time and Poincare invariant. The smoothness of the deformations means that the deformed objects that contribute to the construction of the interactions must be smooth in the coupling constant and reduce to the objects corresponding to the free theory in the zero limit of the coupling constant. The preservation of the number of derivatives on each field implies two aspects that must be simultaneously fulfilled: (i) the differential order of each free field equation must coincide with that of the corresponding interacting field equation; (ii) the maximum number of space-time derivatives in the interaction vertices cannot exceed the maximum number of derivatives in the free Lagrangian. The main results obtained can be summarized as: obtaining self-interactions for certain classes of BF models; generation of couplings between some classes of BF theories and matter theories; construction of interactions between a class of BF models and a system of massless vector fields.

  6. Tachyons in the Galilean limit

    NASA Astrophysics Data System (ADS)

    Batlle, Carles; Gomis, Joaquim; Mezincescu, Luca; Townsend, Paul K.

    2017-04-01

    The Souriau massless Galilean particle of "colour" k and spin s is shown to be the Galilean limit of the Souriau tachyon of mass m = ik and spin s. We compare and contrast this result with the Galilean limit of the Nambu-Goto string and Green-Schwarz superstring.

  7. Hydrologic Engineering Center: A Quarter Century 1964-1989

    DTIC Science & Technology

    1989-01-01

    consisted of an engineering technician, a mathematician, four hydraulic engineers and a clerk-steno. During the last 25 years, staff members have... McPherson, Jack Dangermond, John Lager, Don Hey, Clarence Korhonen, Harry Schwarz, James Wright, John J. Buckley, Mike Savage, Nicholas Lally, Ralph

  8. High concentrations of anthocyanins in genuine cherry-juice of old local Austrian Prunus avium varieties.

    PubMed

    Schüller, Elisabeth; Halbwirth, Heidi; Mikulic-Petkovsek, Maja; Slatnar, Ana; Veberic, Robert; Forneck, Astrid; Stich, Karl; Spornberger, Andreas

    2015-04-15

    Antioxidant activity and polyphenols were quantified in vapour-extracted juice of nine Austrian, partially endemic varieties of sweet cherry (Prunus avium): cv. 'Spätbraune von Purbach', cv. 'Early Rivers', cv. 'Joiser Einsiedekirsche', cv. 'Große Schwarze Knorpelkirsche' and four unidentified local varieties. Additionally the effect of storage was evaluated for six of the varieties. A variety showing the highest antioxidant capacity (9.64 μmol Trolox equivalents per mL), total polyphenols (2747 mg/L) and total cyanidins (1085 mg/L) was suitable for mechanical harvest and its juice did not show any losses of antioxidant capacity and total anthocyanin concentration during storage. The juice of cv. 'Große Schwarze Knorpelkirsche' had also high concentrations of total anthocyanins (873 mg/L), but showed substantial losses through storage. The local Austrian sweet cherry varieties from the Pannonian climate zone are particularly suitable for the production of processed products like cherry juice with high content of anthocyanins and polyphenols. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-02-01

    In this article we propose two grid generation algorithms, based on conformal mapping, for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues such as a smoothed scaling factor and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where a complex land-ocean distribution is present.

  10. Aplanatic Two-Surface Systems: The Optics Of Our Grandfathers

    NASA Astrophysics Data System (ADS)

    Krautter, Martin

    1986-10-01

    Karl Schwarzschild (1873-1916) [1] set up the 2-mirror systems as a 2-parameter manifold. He constructed them for primary aplanatism with conic-section surfaces, and for finite aplanatism with numerically determined surfaces of revolution. Developing from the still older 2-paraboloid telescopes conceived by Marin Mersenne, the systems designed since fill three domains of existence. The grazing-incidence systems too (the Wolter-Schwarzschild systems) have their loci on this map. Martin Linnemann (born 1880), a student of Karl Schwarzschild, designed the first lenses made aplanatic with two general surfaces of revolution [2]. It remained for later authors only to vary the image scale to non-zero values, and to adapt the design method to computer use.

  11. Deliberate choices or strong motives: Exploring the mechanisms underlying the bias of organic claims on leniency judgments.

    PubMed

    Prada, Marília; Rodrigues, David; Garrido, Margarida V

    2016-08-01

    Organic claims can influence how a product is perceived on dimensions that are unrelated to the food production method (e.g., organic food is perceived as more healthful and less caloric than conventional food). Such claims can also bias how the consumers of organic food are perceived and how other people judge their behavior. Schuldt and Schwarz (2010) have shown that individuals evaluating a target with a weight-loss goal are more lenient in judging the target for forgoing exercise when the target had an organic (vs. conventional) dessert. This impact of organic claims on leniency judgments has been interpreted either as a halo or as a licensing effect. In the current research we aim to replicate and extend Schuldt and Schwarz's (2010) results by examining which mechanisms are more likely to explain the observed leniency judgments. In Experiment 1, we found that leniency towards a target that has consumed an organic meal emerges only when the target intentionally chooses the organic meal (vs. a choice determined by the situation). These findings suggest that the impact of organic claims on leniency judgments is not merely based on a halo effect; instead, a licensing account emerges as the most probable mechanism. In Experiment 2, we further found that stronger (vs. weaker) motives for forgoing exercise influenced leniency judgments to the same extent as having had an organic meal. Understanding the mechanisms that shape consumers' decisions may have important implications for preventing bias in their judgments about food and exercise. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. The Nature of Spontaneity in High Quality Mathematics Learning Experiences

    ERIC Educational Resources Information Center

    Williams, Gaye

    2004-01-01

    Spontaneity has been linked to high quality learning experiences in mathematics (Csikszentmihalyi & Csikszentmihalyi, 1992; Williams, 2002). This paper shows how spontaneity can be identified by attending to the nature of social elements in the process of abstracting (Dreyfus, Hershkowitz, & Schwarz, 2001). This process is elaborated…

  13. Developing + Using Models in Physics

    ERIC Educational Resources Information Center

    Campbell, Todd; Neilson, Drew; Oh, Phil Seok

    2013-01-01

    Of the eight practices of science identified in "A Framework for K-12 Science Education" (NRC 2012), helping students develop and use models has been identified by many as an anchor (Schwarz and Passmore 2012; Windschitl 2012). In instruction, disciplinary core ideas, crosscutting concepts, and scientific practices can be meaningfully…

  14. Modeling Natural Selection

    ERIC Educational Resources Information Center

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  15. T-duality constraints on higher derivatives revisited

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2016-04-01

    We ask to what extent the higher-derivative corrections of string theory are constrained by T-duality. The seminal early work by Meissner tests T-duality by reduction to one dimension using a distinguished choice of field variables in which the bosonic string action takes a Gauss-Bonnet-type form. By analyzing all field redefinitions that may or may not be duality covariant and may or may not be gauge covariant, we extend the procedure to test T-duality starting from an action expressed in arbitrary field variables. We illustrate the method by showing that it determines uniquely the first-order α' corrections of the bosonic string, up to terms that vanish in one dimension. We also use the method to glean information about the O(α'^2) corrections in the double field theory with Green-Schwarz deformation.

  16. Creating and Using VMCAnalytics for Preservice Teachers' Studying of Argumentation

    ERIC Educational Resources Information Center

    Van Ness, Cheryl K.

    2017-01-01

    Teacher recognition of student argumentation has been addressed by many researchers (e.g., Schwarz, 2009; Krummheuer, 1995; Bieda & Lepak, 2014; Whitenack & Yackel, 2002). Further, standards for mathematics learning emphasize the importance of including argumentation in the K-12 classroom (NCTM, 2000; CCSS, 2010). The study reported here…

  17. The Effect of Processing Fluency on Impressions of Familiarity and Liking

    ERIC Educational Resources Information Center

    Westerman, Deanne L.; Lanska, Meredith; Olds, Justin M.

    2015-01-01

    Processing fluency has been shown to have wide-ranging effects on disparate evaluative judgments, including judgments of liking and familiarity. One account of such effects is the hedonic marking hypothesis (Winkielman, Schwarz, Fazendeiro, & Reber, 2003), which posits that fluency is directly linked to affective preferences via a positive…

  18. Using Agent Based Distillation to Explore Issues Related to Asymmetric Warfare

    DTIC Science & Technology

    2009-10-01

    hierarchical model of needs proposed by Abraham Maslow [12]. An interpretation of Maslow's hierarchy of needs can be represented as a pyramid with the more... D. Kahneman, E. Diener, N. Schwarz, "Foundations of Hedonic Psychology", Russell Sage Foundation, 1999. [12] Abraham H. Maslow

  19. Pheromones in White Pine Cone Beetle, Conophthorus coniperda (Schwarz) (Coleoptera: Scolytidae)

    Treesearch

    Goran Birgersson; Gary L. DeBarr; Peter de Groot; Mark J. Dalusky; Harold D. Pierce; John H. Borden; Holger Meyer; Wittko Francke; Karl E. Espelie; C. Wayne Berisford

    1995-01-01

    Female white pine cone beetles, Conophthorus coniperda, attacking second-year cones of eastern white pine, Pinus strobus L., produced a sex-specific pheromone that attracted conspecific males in laboratory bioassays and to field traps. Beetle response was enhanced by host monoterpenes. The female-produced compound was identified in...

  20. Implementation of SEREP Into LLNL Dyna3d for Global/Local Analysis

    DTIC Science & Technology

    2005-08-01

    System Equivalent Reduction Expansion Process (SEREP). Presented at the 7th International Modal Analysis Conference, Las Vegas, NV, February 1989. 7...HUTCHINSON F SCHWARZ WARREN MI 48397-5000 14 BENET LABS AMSTA AR CCB R FISCELLA M SOJA E KATHE M SCAVULO G SPENCER P WHEELER

  1. Violation of Bell's inequalities in quantum optics

    NASA Technical Reports Server (NTRS)

    Reid, M. D.; Walls, D. F.

    1984-01-01

    An optical field produced by intracavity four-wave mixing is shown to exhibit the following nonclassical features: photon antibunching, squeezing, and violation of Cauchy-Schwarz and Bell's inequalities. These intrinsic quantum mechanical effects are shown to be associated with the nonexistence of a positive normalizable Glauber-Sudarshan P function.
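
    For reference, the classical bound violated here is the Cauchy-Schwarz inequality for intensity cross-correlations; a minimal statement (standard textbook form, not quoted from this record) is:

    ```latex
    % Classical Cauchy-Schwarz bound on the normalized second-order correlations
    % of two modes; any field with a positive Glauber-Sudarshan P function obeys it.
    \[
      \left[ g^{(2)}_{12}(0) \right]^{2} \;\le\; g^{(2)}_{11}(0)\, g^{(2)}_{22}(0)
    \]
    % A measured violation therefore certifies a nonclassical state of light.
    ```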

  2. Gauge symmetries of the free supersymmetric string field theories

    NASA Astrophysics Data System (ADS)

    Neveu, A.; West, P. C.

    1985-12-01

    The gauge covariant local formulations of the free supersymmetric strings that contained a finite number of supplementary fields are extended so as to place all the generators of the Ramond-Neveu-Schwarz algebra on a more equal footing.

  3. Figuring the Acceleration of the Simple Pendulum

    ERIC Educational Resources Information Center

    Lieberherr, Martin

    2011-01-01

    The centripetal acceleration has been known since Huygens' (1659) and Newton's (1684) time. The physics to calculate the acceleration of a simple pendulum has been around for more than 300 years, and a fairly complete treatise has been given by C. Schwarz in this journal. But sentences like "the acceleration is always directed towards the…

  4. Measuring Children's Age Stereotyping Using a Modified Piagetian Conservation Task

    ERIC Educational Resources Information Center

    Kwong See, Sheree T.; Rasmussen, Carmen; Pertman, S. Quinn

    2012-01-01

    We examined five-year-old-children's age stereotyping using a modified Piagetian conservation task. Children were asked if two lines of objects were the "same" after one line had been made longer (transformed). A conversational account posits that children's answers reflect assumptions about the asker's motivation for the question (Schwarz, 1996).…

  5. 75 FR 38986 - Grant of Authority for Subzone Status; Schwarz Pharma Manufacturing, Inc. (Pharmaceutical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [Order No. 1686] Grant of Authority for Subzone... expedite and encourage foreign commerce, and for other purposes,'' and authorizes the Foreign-Trade Zones... cannot serve the specific use involved, and when the activity results in a significant public benefit and...

  6. The Influence of Affective States on the Process of Lie Detection

    ERIC Educational Resources Information Center

    Reinhard, Marc-Andre; Schwarz, Norbert

    2012-01-01

    Lay theories about the telltale signs of deception include numerous nonverbal cues; empirically, however, a focus on message content results in better lie detection than a focus on nonverbal elements. Feelings-as-information theory (Schwarz, 1990, 2012) predicts that systematic processing of message content is more likely under sad than happy…

  7. Image of the Wehrmacht in Federal German Society and in the Tradition of the Bundeswehr

    DTIC Science & Technology

    1999-08-01

    politicians in Hans-Peter Schwarz, Die Aera Adenauer, 1957-1963, pp. 204-216; Hannah Arendt, Eichmann in...became unhinged by the behavior of the man in the glass booth. Hannah Arendt's thesis of the banality of evil when applied to Eichmann's biography

  8. The Robotic Hugo E. Schwarz Telescope | CTIO

    Science.gov Websites

    …of a new electronic drive system for the mount, and the second dedicated to re-designing the dome

  9. Making Social Sector Apprenticeships Part of the College Experience

    ERIC Educational Resources Information Center

    Bridgespan Group, 2015

    2015-01-01

    Eric Schwarz cofounded Citizen Schools in 1995 to offer Boston students living in low-income communities the opportunity to participate in apprenticeships in a variety of careers. Twenty years later, Citizen Schools has served more than 50,000 mostly middle-school students in seven states coast-to-coast, engaging some 40,000 volunteer…

  10. The effect of e-learning on the quality of orthodontic appliances

    PubMed Central

    Schorn-Borgmann, Stephanie; Lippold, Carsten; Wiechmann, Dirk; Stamm, Thomas

    2015-01-01

    Purpose The effect of e-learning on practical skills in medicine has not yet been thoroughly investigated. Today’s multimedia learning environment and access to e-books provide students with more knowledge than ever before. The aim of this study is to evaluate the effect of online demonstrations on the quality of orthodontic appliances manufactured by undergraduate dental students. Materials and methods The study design was a parallel-group randomized clinical trial. Fifty-four participants were randomly assigned to one of three groups: 1) conventional lectures, 2) conventional lectures plus written online material, and 3) access to the resources of groups one and two plus access to online video material. Three orthodontic appliances (Schwarz Plate, U-Bow Activator, and Fränkel Regulator) were manufactured during the course and scored by two independent raters blinded to the participants. A 15-point scale index was used to evaluate the outcome quality of the appliances. Results In general, no significant differences were found between the groups. Concerning the appliances, the Schwarz Plate obtained the highest scores, whereas the Fränkel Regulator had the lowest; these results were independent of the groups. Females showed better outcome scores than males in groups two and three, but the difference was not significant. Age of the participants also had no significant effect. Conclusion The offer of additional time- and course-independent e-learning resources did not increase the outcome quality of the orthodontic appliances. The advantages of e-learning observed in the theoretical fields of medicine were not achieved in the educational procedures for manual skills. Factors other than e-learning may have a higher impact on manual skills, and this should be investigated in further studies. PMID:26346485

  11. On habitable Trojan worlds in exoplanetary systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Richard; Eggl, Siegfried; Akos, Bazso; Funk, Barbara

    2016-09-01

    When astronomers look for life on planets in exoplanetary systems (EPS), they use the concept of the habitable zone (HZ) for the search of life in the universe. In many EPS a giant planet moves in the HZ and makes the existence of another habitable planet impossible because of its gravitational interaction. Therefore the Trojan configuration provides another opportunity for an additional habitable planet: when a gas giant (GG, like Jupiter or larger) moves in the HZ, a terrestrial Trojan planet may move in a stable orbit around the Lagrangian equilibrium points L4 or L5. Trojans move either close to 60° ahead of or 60° behind the GG, with nearly the same semi-major axis as the planet. Former studies (Schwarz et al. 2009; Schwarz et al. 2014) showed that this configuration is stable not only for small bodies like asteroids (e.g. Jupiter Trojans) but also for larger ones (Earth-mass). We investigate the stability of possible Trojan planets in several known extrasolar planetary systems, using the planar three- and N-body problems as dynamical models and considering the eccentricity of the planets. For our numerical simulations we use the Lie-integration method with automatic step-size control to solve the equations of motion (Eggl and Dvorak 2010). In our study we have concentrated on the extension of the stability region around the Lagrangian points and the influence of additional outer or inner GGs. Finally we present a list of candidate EPS where a massive GG (3-10 Jupiter masses) moves almost or fully in the HZ and an additional Trojan planet can have stable motion.
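
    As background for why such Trojan companions can be dynamically feasible at all, the circular restricted three-body problem gives a simple linear-stability condition for L4/L5 (the classical Gascheau criterion; a standard result, not derived in this abstract):

    ```latex
    % Linear stability of the L4/L5 points in the circular restricted
    % three-body problem with mass parameter mu = m_2 / (m_1 + m_2):
    \[
      27\,\mu\,(1-\mu) < 1
      \quad\Longleftrightarrow\quad
      \mu \lesssim 0.0385,
    \]
    % comfortably satisfied by a 3-10 Jupiter-mass giant orbiting a solar-mass star.
    ```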

  12. Response of cone and twig beetles (Coleoptera: Scolytidae) and a predator (Coleoptera: Cleridae) to pityol, conophthorin, and verbenone

    Treesearch

    Peter De Groot; Gary L. DeBarr

    2000-01-01

    Field studies were conducted in the United States and Canada to determine the response of the white pine cone beetle, Conophthorus coniperda (Schwarz), and the red pine cone beetle, Conophthorus resinosae Hopkins, to two potential inhibitors, conophthorin and verbenone, of pheromone communication. Trap catches of male C....

  13. Insect-induced crystallization of white pine resins. II. white-pine cone beetle

    Treesearch

    Frank S., Jr. Santamour

    1965-01-01

    The white-pine cone beetle (Conophthorus coniperda (Schwarz)) can cause extensive damage to cones of eastern white pine (Pinus strobus L.) and can severely hamper natural reproduction of this species (Graber 1964). This insect also will be a potential pest of seed orchards for the production of genetically superior seed if and...

  14. Molecular diagnostics of the honey bee parasites Lotmaria passim and Crithidia spp. (Trypanosomatidae) using multiplex PCR

    USDA-ARS?s Scientific Manuscript database

    Lotmaria passim Schwarz is a recently described trypanosome parasite of honey bees in continental United States, Europe, and Japan. We developed a multiplex PCR technique using a PCR primer specific for L. passim to distinguish this species from C. mellificae. We report the presence of L. passim in ...

  15. Influence of seed weight on early development of eastern white pine

    Treesearch

    M. E., Jr. Demeritt; H. W., Jr. Hocker

    1975-01-01

    In the Northeast, eastern white pine (Pinus strobus L.) cannot be relied upon to regenerate naturally on a consistent basis, due to the destruction of the cone crops by the white pine cone beetle (Conophthorus coniperda Schwarz). The white pine cone beetle has been reported to have destroyed the white pine cone crops for nine consecutive...

  16. Sunlight, Sea Ice, and the Ice Albedo Feedback in a Changing Arctic Sea Ice Cover

    DTIC Science & Technology

    2015-09-30

    PUBLICATIONS: Carmack, E.; I. Polyakov; L. Padman; I. Fer; E. Hunke; J. Hutchings; J. Jackson; D. Kelley; R. Kwok; C. Layton; D.K. Perovich; O. Persson; B...Heygster, M. Huntemann, P. Schwarz, G. Birnbaum, C. Polashenski, D. Perovich, E. Zege, A. Malinka and A. Prikchach (2015), The melt pond fraction and

  17. A domain decomposition approach to implementing fault slip in finite-element models of quasi-static and dynamic crustal deformation

    USGS Publications Warehouse

    Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.

    2013-01-01

    We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
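
    Schematically, the Lagrange-multiplier treatment of fault slip leads to a saddle-point system of the familiar form below (our own notation, assuming the standard constrained formulation; PyLith's actual assembly is not reproduced here):

    ```latex
    % Saddle-point system for prescribed fault slip d via Lagrange multipliers:
    % K is the elastic stiffness, L enforces the displacement jump across the
    % fault, u are displacements, and lambda are the fault tractions.
    \[
      \begin{pmatrix} K & L^{T} \\ L & 0 \end{pmatrix}
      \begin{pmatrix} u \\ \lambda \end{pmatrix}
      =
      \begin{pmatrix} f \\ d \end{pmatrix}
    \]
    % The zero block makes the system indefinite, which is one reason a custom
    % preconditioner for the lambda unknowns can outperform a black-box
    % additive Schwarz method applied to the whole system.
    ```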

  18. Obituary: Hugo Schwarz, 1953-2006

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    2007-12-01

    Hugo Schwarz died in a motorcycle accident on 20 October 2006 near his home in La Serena, Chile. At the time of his death he was a staff astronomer at Cerro Tololo Inter-American Observatory and President of IAU Commission 50 (The Protection of Existing and Potential Observatory Sites). After Hugo's half-brother Frans died when Hugo was an infant, he effectively grew up as an only child. One consequence was that Hugo became an avid reader. He once estimated that he had read between 3,000 and 4,000 books. He also moved around a great deal. For most of the first seven years of his life, Hugo lived in Venezuela because his father worked for Shell Oil Company. According to Hugo's count, he had a total of 43 different addresses in his life. This gave him experience with different cultures and a facility with several languages. He was fluent in Dutch, German, Spanish, and English, and knew some French. He was very fond of quoting his father's sayings in Dutch and liked to relate stories filled with Chilean slang to people who understood neither, providing translations that retained the cleverness of the originals. While on holiday in Scotland in 1974, Hugo decided to enroll in the Glasgow College of Technology, as it was then known. A year later he transferred to the University of Glasgow, where he earned his BSc (1979) and PhD (officially in 1984). From 1982 to 1986 he worked on X-ray detectors for X-ray astronomy at Mullard Space Science Laboratory, south of London. In 1986 Hugo, his first wife Catriona (Cat), and their two children departed for Chile, where Hugo worked as a staff astronomer for the European Southern Observatory. Over the next nine years he spent over 1,300 nights at La Silla. A big change occurred in 1995 when Hugo moved to La Palma in the Canary Islands to be Astronomer in Charge of the Nordic Optical Telescope. He was very proud of having organized a team of astronomers and technicians who made the NOT into a valuable research facility with minimal down time. In October of 2000 Hugo returned to Chile to work at CTIO. Having demonstrated his technical, scientific, and social skills in getting the NOT into shape, he was the natural choice to be the CTIO staff member assigned to the 4-m Southern Astrophysical Research (SOAR) Telescope sited at Cerro Pachon. Over the next six years Hugo worked closely with Steve Heathcote and the SOAR technical staff to improve the telescope's operational capacity. Hugo's scientific work dealt with the late stages of stellar evolution, particularly planetary nebulae, and stellar polarimetry. Higher resolution optical and infrared imaging of He 2-104 led to its being known as the Southern Crab Nebula (Schwarz, Aspin, & Lutz, ApJ, 344, L29, 1989). Unlike the northern supernova remnant, this southern object (a nebula surrounding a symbiotic binary) looks very much like a crab. Their images of it appeared in magazines and books around the world. In 1992, along with Romano Corradi (a Ph.D. student of Hugo's) and Jorge Melnick, Schwarz published "A catalogue of narrow band images of planetary nebulae" (A&A, 96, p. 23, 1992). This was the first extensive, and still the largest, CCD image catalogue of PNe. Hugo edited the conference proceedings of a meeting held in La Serena in January 1992 (Mass Loss on the AGB and Beyond).
The talks and published papers strengthened some of Hugo's ideas about the importance of evolution in binary systems, in particular the interaction of compact stellar companions and the formation of accretion disk winds and their precession in the formation of non-symmetrical planetary nebulae. In a highly cited paper, Corradi & Schwarz (A&A, 293, p. 871, 1995) were able to show that bipolar nebulae are produced from higher-mass progenitors than other morphological classes. Hugo knew that PNe should be modeled in three dimensions, not just in two. He went on to make 3-D photoionization models of PNe with his final PhD student Hektor Monteiro (Schwarz & Monteiro, ApJ, 648, p. 430, 2006). One of the projects well along at the time of his death was a collaboration with David Spergel and a number of REU summer students on the measurement of the polarization of 2,000 stars evenly distributed around the sky. This simple set of data, being obtained with the NOT, a telescope Hugo helped make fully functional, will improve by a factor of two the sensitivity of experiments such as WMAP and Planck to the detection of gravitational waves, one of the holy grails of experimental physics. Because of Hugo's sense of humor, enthusiasm, and perspective, he achieved a good balance between work and play. He could play the diplomat and hobnob with politicians and royalty. He also was proud of the fact that his native language, Dutch, is probably the best language for swearing. He often adopted a Glaswegian accent from his time at university, and would ask you a common question of bartenders there: "So, Jimmy, what's yer name?" He loved fine cigars, particularly the flojos (not-so-tightly rolled ones) from his cigar maker in La Palma, which he generously shared with friends. He loved having people over for barbecues, and would often make paella. Which newspaper was used to cover the large pan was important: it had to be left of center politically, but not too far left. On Hugo's fiftieth birthday a temporary addition was built onto the house, carpeting was laid out on the lawn, and there ensued a sit-down dinner for 107 people, complete with live musicians and many broken glasses. Hugo is survived by his wife Claudia Sanhueza, his two children Tamar and Jouke Schwarz, his step-children Maria Josefina and Diego Gomez, and his half-brother James Schwarz. More than anyone I can think of, he also leaves behind many friends who considered him their best friend.

  19. Limonene: attractant kairomone for white pine cone beetles (Coleoptera: Scolytidae) in an Eastern white pine seed orchard in Western North Carolina

    Treesearch

    Daniel R. Miller

    2007-01-01

    I report on the attraction of the white pine cone beetle, Conophthorus coniperda (Schwarz) (Coleoptera: Scolytidae), to traps baited with the host monoterpene limonene in western North Carolina. Both (+)- and (-)-limonene attracted male and female cone beetles to Japanese beetle traps in an eastern white pine, Pinus strobus L. seed...

  20. Factors Affecting Capture of the White Pine Cone Beetle, Conophthorus coniperda (Schwarz) (Col., Scolytidae) in Pheromone Traps

    Treesearch

    Peter de Groot; Gary L. DeBarr

    1998-01-01

    The white pine cone beetle, Conophthorus coniperda, is a serious pest of seed orchards. The sex pheromone (+)-trans-pityol, (2R,5S)-2-(1-hydroxy-1-methylethyl)-5-methyltetrahydrofuran, shows considerable promise for managing cone beetle populations in seed orchards. Our work confirms that pityol is an effective attractant to...

  1. Procedures, Requirements and Challenges Associated with Analysis of Environmental Samples for Chemical Warfare Material (CWM)

    DTIC Science & Technology

    2012-03-29

    DOD Environmental Monitoring Data Quality (EMDQ) Workshop. John Schwarz, Laboratory Manager, Environmental Monitoring Laboratory (EML), March 29, 2012...Center (ECBC), Environmental Monitoring Laboratory (EML), 5183 Blackhawk RD, Aberdeen Proving Ground, MD 21010-5424...Biological Applications and Risk Reduction (CBARR), Environmental Monitoring Laboratory (EML). Approved for Public Release.

  2. Biological Sciences Division 1991 Programs

    DTIC Science & Technology

    1991-08-01

    missing offending polysaccharides and 2) identify monosaccharide peaks in gas chromatography that we know are not holdfast-derived and can ignore. 3-On...ACCOMPLISHMENTS: 1. The polysaccharidic component of the extracellular slime of Flexibacter maritimus is predominantly a glucose polymer. In collaboration...are due to the presence of polypeptide(s), not polysaccharide as predicted. W.H. Schwarz (Johns Hopkins) has performed rheological analysis of this

  3. You can see galaxies from your computer | CTIO

    Science.gov Websites

    Calendar Activities NOAO-S EPO Programs CADIAS Astro Chile Hugo E. Schwarz Telescope Dark Sky Education Preserving the Dark Skies La Oficina de Protección de la Calidad del Cielo del Norte de Chile - OPCC Light Pollution StarLight Universe The World at Night (TWAN) International Dark-Sky Association (IDA) Students REU

  4. Inequalities for frequency-moment sum rules of electron liquids

    NASA Technical Reports Server (NTRS)

    Iwamoto, N.

    1986-01-01

    The relations between the various frequency-moment sum rules of electron liquids, which include even-power moments, are systematically examined by using the Cauchy-Schwarz and Hölder inequalities. A relation involving the isothermal sound velocity and the kinetic and potential energies is obtained from one of the inequalities in the long-wavelength limit, and is generalized to arbitrary spatial dimensions.
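
    To make the mechanism concrete, the Cauchy-Schwarz inequality relates frequency moments of a non-negative spectral function in the following generic way (an illustration of the type of inequality used, not a formula quoted from the record):

    ```latex
    % For a non-negative spectral function S(q, w), define the moments
    % M_n(q) = \int dw\, w^n S(q, w), taking m and n even so that all
    % factors are well defined. Writing w^{(m+n)/2} = w^{m/2} w^{n/2}
    % and applying Cauchy-Schwarz gives
    \[
      \left[ M_{(m+n)/2}(q) \right]^{2} \;\le\; M_{m}(q)\, M_{n}(q).
    \]
    % Hoelder's inequality generalizes this to unequal weights of the two factors.
    ```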

  5. High School Students' Meta-Modeling Knowledge

    NASA Astrophysics Data System (ADS)

    Fortus, David; Shwartz, Yael; Rosenfeld, Sherman

    2016-12-01

    Modeling is a core scientific practice. This study probed the meta-modeling knowledge (MMK) of high school students who study science but had not had any explicit prior exposure to modeling as part of their formal schooling. Our goals were to (A) evaluate the degree to which MMK is dependent on content knowledge and (B) assess whether the upper levels of the modeling learning progression defined by Schwarz et al. (2009) are attainable by Israeli K-12 students. Nine Israeli high school students studying physics, chemistry, biology, or general science were interviewed individually, once using a context related to the science subject that they were learning and once using an unfamiliar context. All the interviewees displayed MMK superior to that of elementary and middle school students, despite the lack of formal instruction on the practice. Their MMK was independent of content area, but their ability to engage in the practice of modeling was content dependent. This study indicates that, given proper support, the upper levels of the learning progression described by Schwarz et al. (2009) may be attainable by K-12 science students. The value of explicitly focusing on MMK as a learning goal in science education is considered.

  6. Linking Automatic Evaluation to Mood and Information Processing Style: Consequences for Experienced Affect, Impression Formation, and Stereotyping

    ERIC Educational Resources Information Center

    Chartrand, Tanya L.; van Baaren, Rick B.; Bargh, John A.

    2006-01-01

    According to the feelings-as-information account, a person's mood state signals to him or her the valence of the current environment (N. Schwarz & G. Clore, 1983). However, the ways in which the environment automatically influences mood in the first place remain to be explored. The authors propose that one mechanism by which the environment…

  7. Critical Encounters in a Middle School English Language Arts Classroom: Using Graphic Novels to Teach Critical Thinking & Reading for Peace Education

    ERIC Educational Resources Information Center

    Sun, Lina

    2017-01-01

    Graphic novels, which tell real and fictional stories using a combination of words and images, are often sophisticated, and involve intriguing topics. There has been an increasing interest in teaching with graphic novels to promote literacy as one alternative to traditional literacy pedagogy (e.g., Gorman, 2003; Schwarz, 2002). A pedagogy of…

  8. L2 Reading Research and Pedagogical Considerations in the Teaching of French and Francophone Theater

    ERIC Educational Resources Information Center

    Edwards, Carole; Taylor, Alan M.

    2012-01-01

    Little research has been conducted on improving second language (L2) reading comprehension of French and francophone theater. This study provides insight into enhancing L2 comprehension of drama by combining L2 research with examples from L'accent grave by Jacques Prévert, Ton beau capitaine by Simone Schwarz-Bart (1987), Un Touareg s'est marié à…

  9. ANOMALY STRUCTURE OF SUPERGRAVITY AND ANOMALY CANCELLATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butter, Daniel; Gaillard, Mary K.

    2009-06-10

    We display the full anomaly structure of supergravity, including new D-term contributions to the conformal anomaly. This expression has the super-Weyl and chiral U(1)_K transformation properties that are required for implementation of the Green-Schwarz mechanism for anomaly cancellation. We outline the procedure for full anomaly cancellation. Our results have implications for effective supergravity theories from the weakly coupled heterotic string theory.

  10. Dental and skeletal changes in the upper and lower jaws after treatment with Schwarz appliances using cone-beam computed tomography.

    PubMed

    Tai, Kiyoshi; Park, Jae Hyun

    2010-01-01

    The purpose of this research was to use cone-beam computed tomography (CBCT) images to evaluate dental and skeletal changes in the upper and lower jaws after treatment with Schwarz appliances. 28 patients with Angle Class I molar relationships and crowding were randomly divided into two groups: 14 non-expanded and 14 expanded patients. 3D-Rugle CBCT software was used to measure various reference points before treatment (T0) and during the retention period of approximately 9 months after 6- to 12-month expansion (T1). Cephalometric and cast measurements were used to evaluate treatment in both groups. To test whether there were any significant differences between the control and treatment groups at T0 and T1, the Mann-Whitney U-test was used. The dental arch (including tooth root apices) had expanded in the upper and lower jaws. Alveolar bone expansion of up to 2 mm apical to the cementoenamel junction (CEJ) was detected. The midpalatal sutures were separated in some cases, and subsequent expansion was observed at the inner surface of the nasal cavity at the inferior turbinates. However, no significant (P > 0.05) difference was observed in the inter-width of the mandibular bodies, zygomatic bones, nasal cavity in the middle turbinate region, condylar heads, or antegonial notches. In mandibular and maxillary cast measurements, arch crowding and arch perimeter showed statistically significant changes in the expansion group. The mandibular width values demonstrated no significant changes as measured from a point 2 mm apical to the CEJ, whereas the maxillary width values demonstrated significant changes as measured from the same point. This study indicates that the Schwarz appliance primarily affects the dento-alveolar complex, while it has little effect on the mandibular bodies or associated structures, including the maxillary midpalatal suture and the inter-width of the nasal cavity in the middle turbinate region. In addition, the center of rotation of the mandibular and maxillary first molars was observed apical to the root apex.
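
    As a small illustration of the statistics used, the group comparison can be reproduced in outline with SciPy's Mann-Whitney U test; the measurements below are hypothetical stand-ins, not the study's data:

    ```python
    # Minimal sketch of a two-group comparison with the Mann-Whitney U test,
    # assuming two hypothetical lists of expansion measurements (mm).
    from scipy.stats import mannwhitneyu

    control = [0.3, 0.5, 0.2, 0.4, 0.6, 0.3, 0.5]   # hypothetical non-expanded group
    treated = [1.8, 2.1, 1.6, 2.0, 1.9, 2.2, 1.7]   # hypothetical Schwarz-appliance group

    # Two-sided test, as used in the study to compare groups at T0 and T1.
    stat, p = mannwhitneyu(control, treated, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.4f}")  # p < 0.05 -> significant group difference
    ```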

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Paul T.; Shadid, John N.; Tsuji, Paul H.

    Here, this study explores the performance and scaling of a GMRES Krylov method employed as a smoother for an algebraic multigrid (AMG) preconditioned Newton-Krylov solution approach applied to a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. In this context a Newton iteration is used for the nonlinear system and a Krylov (GMRES) method is employed for the linear subsystems. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, and the performance of the smoothers plays a critical role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. Three time-dependent resistive MHD test cases are considered to evaluate the method. The results demonstrate that the GMRES smoother can be faster, due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less memory for the global GMRES smoother) than the DD ILU smoother.
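
    A toy version of the idea, a few GMRES iterations serving as the smoother inside a two-grid cycle, can be sketched as follows. This is our own 1-D Poisson miniature, not the VMS resistive-MHD solver of the record:

    ```python
    # Two-grid cycle for 1-D Poisson with GMRES(2) used as the smoother.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def poisson1d(n):
        h = 1.0 / (n + 1)
        return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

    def gmres_smooth(A, b, x, iters=2):
        # A few GMRES steps on the residual equation act as the smoother.
        r = b - A @ x
        dx, _ = spla.gmres(A, r, restart=iters, maxiter=1)
        return x + dx

    def two_grid(A, b, x, Ac, P):
        x = gmres_smooth(A, b, x)            # pre-smoothing
        rc = P.T @ (b - A @ x)               # restrict the residual
        ec = spla.spsolve(Ac, rc)            # coarse-grid correction (direct solve)
        x = x + P @ ec                       # prolongate and correct
        return gmres_smooth(A, b, x)         # post-smoothing

    n = 255                                  # fine grid size (odd, so it coarsens evenly)
    A = poisson1d(n)
    nc = (n - 1) // 2
    P = sp.lil_matrix((n, nc))               # linear-interpolation prolongation
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    P = P.tocsr()
    Ac = (P.T @ A @ P).tocsc()               # Galerkin coarse-grid operator

    b = np.ones(n)
    x = np.zeros(n)
    for cycle in range(10):
        x = two_grid(A, b, x, Ac, P)
        print(cycle, np.linalg.norm(b - A @ x))  # residual should shrink every cycle
    ```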

  12. Emotional reactivity to daily events in major and minor depression.

    PubMed

    Bylsma, Lauren M; Taylor-Clift, April; Rottenberg, Jonathan

    2011-02-01

    Although emotional dysfunction is an important aspect of major depressive disorder (MDD), it has rarely been studied in daily life. Peeters, Nicolson, Berkhof, Delespaul, and deVries (2003) observed a surprising mood-brightening effect when individuals with MDD reported greater reactivity to positive events. To better understand this phenomenon, we conducted a multimethod assessment of emotional reactivity to daily life events, obtaining detailed reports of appraisals and event characteristics using the experience-sampling method and the Day Reconstruction Method (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) in 35 individuals currently experiencing a major depressive episode, 26 in a minor depressive (mD) episode, and 38 never-depressed healthy controls. Relative to healthy controls, both mood-disordered groups reported greater daily negative affect and lower positive affect and reported events as less pleasant, more unpleasant, and more stressful. Importantly, MDD and mD individuals reported greater reductions in negative affect following positive events, an effect that converged across assessment methods and was not explained by differences in prevailing affect, event appraisals, or medications. Implications of this curious mood-brightening effect are discussed. (c) 2010 APA, all rights reserved.

  13. On new physics searches with multidimensional differential shapes

    NASA Astrophysics Data System (ADS)

    Ferreira, Felipe; Fichet, Sylvain; Sanz, Veronica

    2018-03-01

    In the context of upcoming new physics searches at the LHC, we investigate the impact of multidimensional differential rates in typical LHC analyses. We discuss the properties of shape information, and argue that multidimensional rates bring limited information in the scope of a discovery, but can have a large impact on model discrimination. We also point out subtleties about systematic uncertainties cancellations and the Cauchy-Schwarz bound on interference terms.

  14. Military Nutrition Research: Four Tasks to Address Personnel Readiness and Warfighter Performance

    DTIC Science & Technology

    2007-03-01

    insulin, free fatty acids, beta hydroxybutyrate, glucagon, and IGF-1, epinephrine, norepinephrine, urine creatinine, urine total nitrogen, urine urea...project. • Completion of blood testing for Project 4. Specifically, the following tests were completed: AST, beta hydroxybutyrate, blood urea...Minehira, J-M Schwarz, K Acheson, P Schneiter, J Burri, E Jequier, and L Tappy. Mechanisms of action of ß-glucan in postprandial glucose metabolism

  15. Cognitive Adaptability: The Role of Metacognition and Feedback in Entrepreneural Decision Policies

    DTIC Science & Technology

    2005-01-01

    their environments in such a way as to facilitate effective and dynamic cognitive functioning. In this dissertation, I present three complementary studies...the study of metacognition (Jost, Kruglanski, and Nelson, 1998; Mischel, 1998; Schwarz, 1998b). This research has three goals, specifically to...

  16. Potential Hardware and Software Improvements of Inertial Positioning and Gravity Vector Determination,

    DTIC Science & Technology

    1981-08-17

    P. 1979b. Inertial Surveying Systems - Experience and Prognosis. Paper presented at the FIG Symposium on Modern Technology for Cadastre and Land Information Systems, Ottawa, Canada, Oct. 2-5, 1979. Schwarz, K. P. 1980. Gravity Field Approximation Using Inertial Survey System. The Canadian...higher performance gyroscope; and accelerometers in the horizontal channels of Litton's local-level inertial positioning system and the resulting

  17. Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks

    DTIC Science & Technology

    1992-05-30

    course, to on-going changes brought about by learning processes. As research in neurodynamics proceeded, the concept of reverberatory information flows...Microstructure of Cognition. Vol. 1: Foundations, M.I.T. Press, Cambridge, Massachusetts, pp. 354-361, 1986. Schwarz, G., "Estimating the dimension of a...Continually Running Fully Recurrent Neural Networks, ICS Report 8805, Institute of Cognitive Science, University of California at San Diego, 1988.

  18. Catalogue of Exoplanets in Multiple-Star-Systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Richard; Funk, Barbara; Bazsó, Ákos; Pilat-Lohinger, Elke

    2017-07-01

    Cataloguing the data of exoplanetary systems is becoming more and more important, because such catalogues consolidate the observations and support theoretical studies. Since 1995 there has been a database which lists most of the known exoplanets (The Extrasolar Planets Encyclopaedia, available at http://exoplanet.eu/ and described in Schneider et al. 2011). With the growing number of detected exoplanets in binary and multiple star systems, it became more important to mark them and separate them into a new database. Therefore we started to compile a catalogue for binary and multiple star systems. Since 2013 the catalogue can be found at http://www.univie.ac.at/adg/schwarz/multiple.html (a description can be found in Schwarz et al. 2016); it is updated regularly and is linked to the Extrasolar Planets Encyclopaedia. The data of the binary catalogue can be downloaded as a file (.csv) and used for statistical purposes. Our database is divided into two parts: the data of the stars and of the planets, given in separate lists. Every column of the list can be sorted in two directions: ascending, from the lowest value to the highest, or descending. In addition, an introduction and help are given in the menu bar of the catalogue, including an example list.

  19. A Novel Feature Level Fusion for Heart Rate Variability Classification Using Correntropy and Cauchy-Schwarz Divergence.

    PubMed

    Goshvarpour, Ateke; Goshvarpour, Atefeh

    2018-04-30

    Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature-level fusion approach was proposed. First, using information theory, two similarity indicators of the signal were extracted: correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbors (kNN), the performance of each index in classifying the HRV signals of meditators and non-meditators was appraised. Then, three fusion rules, including division, product, and weighted sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm to define the weight of each feature based on statistical p-values. The performance of HRV classification using combined features was compared with that using non-combined features. In total, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability and proficiency of the division and weighted sum rules in improving classifier accuracy.
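
    A minimal sketch of the two similarity measures, under common textbook definitions (correntropy as a mean Gaussian kernel of sample differences; Cauchy-Schwarz divergence estimated from Parzen densities), might look like the following. Signal names and kernel bandwidth are illustrative, not the paper's settings:

    ```python
    # Correntropy and Cauchy-Schwarz divergence between two sample sets.
    import numpy as np

    def gaussian(u, sigma):
        return np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

    def correntropy(x, y, sigma=1.0):
        # V(X, Y) = E[ k_sigma(X - Y) ], estimated by a sample mean.
        return gaussian(x - y, sigma).mean()

    def cs_divergence(x, y, sigma=1.0):
        # D_CS(p, q) = -log( (int p q)^2 / (int p^2 * int q^2) ), with each
        # integral estimated from Parzen windows; Gaussian kernels convolve to
        # Gaussians, so the pairwise sums below are closed-form estimates.
        s = np.sqrt(2) * sigma  # bandwidth of the convolved kernel
        pq = gaussian(x[:, None] - y[None, :], s).mean()
        pp = gaussian(x[:, None] - x[None, :], s).mean()
        qq = gaussian(y[:, None] - y[None, :], s).mean()
        return -np.log(pq**2 / (pp * qq))

    rng = np.random.default_rng(0)
    hrv_a = rng.normal(0.0, 1.0, 500)   # stand-in for one group's HRV features
    hrv_b = rng.normal(0.5, 1.2, 500)   # stand-in for the other group
    print("correntropy:  ", correntropy(hrv_a, hrv_b))
    print("CS divergence:", cs_divergence(hrv_a, hrv_b))
    ```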

  20. MAMAP - a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: retrieval algorithm and first inversions for point source emission rates

    NASA Astrophysics Data System (ADS)

    Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Burrows, J. P.; Bovensmann, H.

    2011-04-01

    MAMAP is an airborne passive remote sensing instrument designed for measuring columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument consists of two optical grating spectrometers: one in the short wave infrared band (SWIR) at 1590-1690 nm to measure CO2 and CH4 absorptions, and another in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an airplane, MAMAP can effectively survey areas on regional to local scales with a ground pixel resolution of about 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h⁻¹. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP can be used to close the gap between satellite data exhibiting global coverage but with a rather coarse resolution on the one hand, and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007 test flights were performed over two coal-fired power plants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr⁻¹) and Schwarze Pumpe (11.9 Mt CO2 yr⁻¹), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with the emissions stated by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods, the Gaussian plume model fit and the Gaussian integral method, are capable of delivering reliable estimates for strong point source emission rates, given appropriate flight patterns and detailed knowledge of wind conditions.
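
    The Gaussian integral method reduces, in essence, to a crosswind mass balance: emission rate equals wind speed times the integral of the column enhancement across the plume. A schematic version with invented numbers (not MAMAP's retrieval code) is:

    ```python
    # Crosswind mass-balance estimate of a point-source emission rate.
    import numpy as np

    x = np.linspace(-2000.0, 2000.0, 81)           # crosswind coordinate [m]
    dx = x[1] - x[0]
    enhancement = 0.02 * np.exp(-(x / 400.0)**2)   # column enhancement [kg CO2 m^-2]
    wind_speed = 4.0                               # mean wind normal to the transect [m/s]

    q = wind_speed * np.sum(enhancement) * dx      # emission rate [kg/s]
    print(f"emission rate ~ {q:.0f} kg/s (~{q * 3.156e7 / 1e9:.1f} Mt CO2 per year)")
    ```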

  1. European Science Notes Information Bulletin Reports on Current European/ Middle Eastern Science

    DTIC Science & Technology

    1991-12-01

    Symposium 89, F.-L. Krause, H. Jansen, eds., held in Berlin, FRG, November 1989. NY: ASME. Hansmann, W. 1985. Interactiver Entwurf und...Nowacki, H...Smoothing of Multipatch Bézier Surfaces - Curvature Approximation and Knot Removal for Handling Scattered Data - Bernd...Wolfgang Schwarz, EDS GmbH, FRG (A)....Physical Oceanography research vessel. The Institute has three CTDs which have been used to obtain a very complete hydrographic series. Dr. Wolfgang F

  2. ["Anxiety glistens on our brows". Dream reports in literary works on the horrors of ghettos and concentration camps].

    PubMed

    Klein, J

    1991-06-01

    Dream reports occupy a special place in literature about confinement in concentration camps and ghettos (Robert Antelme, Charlotte Delbo, Anna Langfus, André Schwarz-Bart). They are central elements in the narrative that relate the anxiety of those threatened with destruction more faithfully than any realistic account could. They disrupt the chronological linearity and rationality and represent in images horror beyond memory or description.

  3. Rosoboroneksport: Arms Sales and the Structure of Russian Defense Industry

    DTIC Science & Technology

    2007-01-01

    comparable with such segments of the global economy as energy and food. Competition here is extremely strong.” Moreover, he also stated that...Similarly, as early as 2004, management changes at key defense industrial firms like...“Military Realities,” Juergen Schwarz, Wilfred A. Herrmann, and Hanns-Frank Seller, eds., Maritime Strategies in Asia, Bangkok: White Lotus Press

  4. KSC-2012-1942

    NASA Image and Video Library

    2012-04-03

    CAPE CANAVERAL, Fla. – Jeremy Schwarz, left, quality assurance technician, and Mike Williams, right, a thermal protection system technician, both with United Space Alliance, affix a section of tile to the right wing of space shuttle Endeavour at NASA's Kennedy Space Center in Florida. Ongoing transition and retirement activities are preparing the spacecraft for public display at the California Science Center in Los Angeles. Endeavour flew 25 missions during its 19-year career. Photo credit: NASA/Cory Huston

  5. The role of familiarity in daily well-being: developmental and cultural variation.

    PubMed

    Oishi, Shigehiro; Kurtz, Jaime L; Miao, Felicity F; Park, Jina; Whitchurch, Erin

    2011-11-01

    The present study examined life stage and cultural differences in the degree to which familiarity of one's physical location and interaction partner is associated with daily well-being. Participants reported all the activities they engaged in and how they felt during these activities on a previous day using the Day Reconstruction Method (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004). Both Korean and American retirees were happier when in a familiar place than in an unfamiliar place, whereas the reverse was true for both Korean and American working adults. In addition, we found cultural differences in the role of familiarity of the interaction partner. Specifically, Koreans (both retirees and working adults) were substantially happier when they interacted with a familiar person than when they interacted with an unfamiliar person. In contrast, Americans (both retirees and working adults) were no happier with a familiar person than with an unfamiliar person.

  6. Bioreactors Drive Advances in Tissue Engineering

    NASA Technical Reports Server (NTRS)

    2012-01-01

    It was an unlikely moment for inspiration. Engineers David Wolf and Ray Schwarz stopped by their lab around midday. Wolf, of Johnson Space Center, and Schwarz, with NASA contractor Krug Life Sciences (now Wyle Laboratories Inc.), were part of a team tasked with developing a unique technology with the potential to enhance medical research. But that wasn't the focus at the moment: the pair was rounding up colleagues interested in grabbing some lunch. One of the lab's other Krug engineers, Tinh Trinh, was doing something that made Wolf forget about food. Trinh was toying with an electric drill. He had stuck the barrel of a syringe on the bit; it spun with a high-pitched whirr when he squeezed the drill's trigger. At the time, a multidisciplinary team of engineers and biologists (including Wolf, Schwarz, Trinh, and project manager Charles D. Anderson, who formerly led the recovery of the Apollo capsules after splashdown and now worked for Krug) was pursuing the development of a technology called a bioreactor, a cylindrical device used to culture human cells. The team's immediate goal was to grow human kidney cells to produce erythropoietin, a hormone that regulates red blood cell production and can be used to treat anemia. But there was a major barrier to the technology's success: moving the liquid growth media to keep it from stagnating resulted in turbulent conditions that damaged the delicate cells, causing them to quickly die. The team was looking forward to testing the bioreactor in space, hoping the device would perform more effectively in microgravity. But on January 28, 1986, the Space Shuttle Challenger broke apart shortly after launch, killing its seven crewmembers. The subsequent grounding of the shuttle fleet had left researchers with no access to space, and thus no way to study the effects of microgravity on human cells. As Wolf looked from Trinh's syringe-capped drill to where the bioreactor sat on a workbench, he suddenly saw a possible solution to both problems. "It dawned on me that rotating the wall of the reactor would solve one of our fundamental fluid mechanical problems, specifically by removing the velocity gradient of the tissue culture fluid media near the reactor's walls," says Wolf. "It looked as though it would allow us to suspend the growing cells within the reactor without introducing turbulent fluid mechanical conditions."

  7. Stepwise magnetic-geochemical approach for efficient assessment of heavy metal polluted sites

    NASA Astrophysics Data System (ADS)

    Appel, E.; Rösler, W.; Ojha, G.

    2012-04-01

    Previous studies have shown that magnetometry can outline the distribution of fly ash deposition in the surroundings of coal-burning power plants and steel industries. In particular, the easy-to-measure magnetic susceptibility (MS) is capable of acting as a proxy for heavy metal (HM) pollution caused by this kind of point-source pollution. Here we present a demonstration project around the coal-burning power plant complex "Schwarze Pumpe" in eastern Germany. Before the reunification of West and East Germany, huge amounts of HM pollutants were emitted from the "Schwarze Pumpe" into the environment, by both fly ash emission and dumped clinker. The project has been conducted as part of the TASK Centre of Competence, which aims at bringing new innovative techniques closer to the market. Our project combines in situ and laboratory MS measurements and HM analyses in order to demonstrate the efficiency of a stepwise approach for site assessment of HM pollution around point sources of fly ash emission and deposition into soil. The following scenario is played through: we assume that the "true" spatial distribution of HM pollution (given by the pollution load index, PLI, comprising Fe, Zn, Pb, and Cu) is represented by our entire set of 85 measured samples (XRF analyses) from forest sites around the "Schwarze Pumpe". Surface MS data (collected with a Bartington MS2D) and in situ vertical MS sections (logged by an SM400 instrument) are used to determine a qualitative overview of potentially higher and lower polluted areas. A suite of spatial HM distribution maps obtained by random selections of 30 out of the 85 analysed sites is compared to the HM map obtained from a targeted 30-site selection based on pre-information from the MS results. The PLI distribution map obtained from the targeted 30-site selection shows all essential details of the "true" pollution map, while the different random 30-site selections miss important features. This comparison shows that, for the same cost investment, a stepwise combined magnetic-geochemical site assessment leads to a clearly more significant characterization of soil pollution than a common approach with exclusively random sampling for geochemical analysis, or alternatively to an equal-quality result for lower costs.
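
    For reference, the pollution load index mentioned above is conventionally defined as the geometric mean of single-metal contamination factors (the usual Tomlinson definition; the study may use a variant):

    ```latex
    % Pollution load index over n metals (here n = 4: Fe, Zn, Pb, Cu),
    % with CF_i the ratio of measured to background concentration:
    \[
      \mathrm{PLI} = \Bigl( \prod_{i=1}^{n} \mathrm{CF}_i \Bigr)^{1/n},
      \qquad
      \mathrm{CF}_i = \frac{C_i}{C_{i,\mathrm{background}}}
    \]
    % PLI > 1 indicates overall enrichment relative to background soil.
    ```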

  8. The Shock and Vibration Digest. Volume 12, Number 12,

    DTIC Science & Technology

    1980-12-01

    accelerations is presented. R.G. Schwarz, Fortschritt-Berichte der VDI-Z., Series 8, No. 30, 188 pp, 22 figs, 7 tables (1980). Summary in VDI-Z. It is shown that while the technique is theoretically correct, it is subject to experimental limitations due to inaccuracies in current accelerometer technology...relationship of the so-called K-value of the proposed standard VDI 2057 to the...better understanding of the fatigue life of wind turbine blades

  9. Evaluation of Auditory Characteristics of Communications and Hearing Protection Systems (C&HPS) Part 3 - Auditory Localization

    DTIC Science & Technology

    2013-08-01

    of ANR is in headphones, such as those marketed to frequent fliers for listening to music on airplanes. ANR is much better at reducing low...1966; Butler, 1987; Hofman and Van Opstal, 2003; Hofman et al., 1998; Javer and Schwarz, 1995; Musicant and Butler, 1980; Van Wanrooij and Van Opstal...improvements in performance over time; further, training sped up the process of learning. Other investigators have demonstrated similar effects with passive

  10. Operations Research Center. Annual Report. Jul 1, 1977 through June 30, 1978.

    DTIC Science & Technology

    1978-06-30

    The Center’s commitment in this area is illustrated, for instance, by a new two-week summer course it is offering for the first time, "Recent...published in the summer of 1978. Some simple indications of the findings are (1) about half of the people presently eligible to donate blood have...(Schwarz and W.H. Hausman), Stanford University Department of Industrial Engineering and Engineering Management Technical Report No. 77-4, September

  11. KSC-2012-1938

    NASA Image and Video Library

    2012-04-03

    CAPE CANAVERAL, Fla. – Jeremy Schwarz, left, quality assurance technician, and Mike Williams, right, a thermal protection system technician, both with United Space Alliance, prepare the right wing of space shuttle Endeavour for tile bonding. Endeavour is inside Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida. Ongoing transition and retirement activities are preparing the spacecraft for public display at the California Science Center in Los Angeles. Endeavour flew 25 missions during its 19-year career. Photo credit: NASA/Cory Huston

  12. Mechanical properties of ceramic structures based on Triply Periodic Minimal Surface (TPMS) processed by 3D printing

    NASA Astrophysics Data System (ADS)

    Restrepo, S.; Ocampo, S.; Ramírez, J. A.; Paucar, C.; García, C.

    2017-12-01

    Repairing tissues and organs has been the main goal of surgical procedures. Since the 1990s, the main goal of tissue engineering has been reparation, using porous scaffolds that serve as a three-dimensional template for initial cell fixation and subsequent tissue formation, both in vitro and in vivo. A scaffold must have specific characteristics of porosity, interconnectivity, surface area, pore volume, surface tortuosity, permeability and mechanical properties, which makes its design, manufacture and characterization a complex process. Inspired by nature, triply periodic minimal surfaces (TPMS) have emerged as an alternative for the manufacture of porous pieces with design requirements, such as scaffolds for tissue repair. In the present work, we used 3D printing to obtain ceramic structures shaped as Gyroid, Schwarz Primitive and Schwarz Diamond surfaces, three TPMS that fulfil the geometric requirements of a bone tissue scaffold. The main objective of this work is to compare the mechanical properties of ceramic pieces of these three TPMS forms, printed in 3D using a commercial ceramic paste. In this way it will be possible to clarify which TPMS has the appropriate characteristics for constructing scaffolds of ceramic materials for bone repair. A dependence of the mechanical properties on the geometry was found, with the Primitive surface showing the highest mechanical properties.
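
    The three surfaces named above have standard trigonometric level-set approximations, which makes a quick numerical sketch easy. The following toy voxelization (our own choice of normalization and shell thickness, not the paper's print geometry) estimates the volume fraction of each thickened surface:

    ```python
    # Level-set approximants of three TPMS; a scaffold solid is sketched as
    # the region |f(x, y, z)| <= t around the zero level set.
    import numpy as np

    def gyroid(x, y, z):
        return np.sin(x)*np.cos(y) + np.sin(y)*np.cos(z) + np.sin(z)*np.cos(x)

    def schwarz_p(x, y, z):
        return np.cos(x) + np.cos(y) + np.cos(z)

    def schwarz_d(x, y, z):
        return (np.sin(x)*np.sin(y)*np.sin(z) + np.sin(x)*np.cos(y)*np.cos(z)
                + np.cos(x)*np.sin(y)*np.cos(z) + np.cos(x)*np.cos(y)*np.sin(z))

    # Voxelize one periodic unit cell and report the shell's volume fraction.
    n, t = 64, 0.3                                   # resolution, shell half-width
    g = np.linspace(0, 2*np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    for name, f in [("gyroid", gyroid), ("Schwarz P", schwarz_p), ("Schwarz D", schwarz_d)]:
        solid = np.abs(f(X, Y, Z)) <= t
        print(f"{name}: volume fraction ~ {solid.mean():.2f}")
    ```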

  13. The Multigrid-Mask Numerical Method for Solution of Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Ku, Hwar-Ching; Popel, Aleksander S.

    1996-01-01

    A multigrid-mask method for the solution of the incompressible Navier-Stokes equations in primitive variable form has been developed. The main objective is to apply this method in conjunction with the pseudospectral element method to solve flow past multiple objects. There are two key steps involved in calculating flow past multiple objects. The first step utilizes only Cartesian grid points. This homogeneous or mask-method step permits flow into the interior rectangular elements contained in objects, but with the restriction that the velocity for those Cartesian elements within and on the surface of an object should be small or zero. This step easily produces an approximate flow field on Cartesian grid points covering the entire flow field. The second or heterogeneous step corrects the approximate flow field to account for the actual shape of the objects by solving the flow field based on the local coordinates surrounding each object and adapted to it. The noise occurring in data communication between the global (low frequency) coordinates and the local (high frequency) coordinates is eliminated by the multigrid method when the Schwarz Alternating Procedure (SAP) is implemented. Two-dimensional flow past circular and elliptic cylinders is presented to demonstrate the versatility of the proposed method. An interesting phenomenon is found: when a second elliptic cylinder is placed in the wake of the first elliptic cylinder, a traction force results in a negative drag coefficient.
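
    The Schwarz Alternating Procedure invoked above can be illustrated in one dimension: alternately solve on two overlapping subdomains, each time taking the latest values from the other subdomain as Dirichlet data. A generic textbook sketch (not the multigrid-mask implementation) follows:

    ```python
    # SAP for -u'' = 1 on (0, 1) with u(0) = u(1) = 0, two overlapping subdomains.
    import numpy as np

    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)

    a_end, b_start = 60, 40        # subdomain 1 = [0, x[60]], subdomain 2 = [x[40], 1]

    def solve_dirichlet(u, lo, hi):
        # Direct solve of -u'' = 1 on interior nodes lo+1..hi-1, with u[lo]
        # and u[hi] held fixed (Dirichlet data taken from the other subdomain).
        m = hi - lo - 1
        A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        b = np.ones(m)
        b[0] += u[lo] / h**2
        b[-1] += u[hi] / h**2
        u[lo + 1:hi] = np.linalg.solve(A, b)

    for sweep in range(20):
        solve_dirichlet(u, 0, a_end)        # subdomain 1, boundary value at x[a_end]
        solve_dirichlet(u, b_start, n - 1)  # subdomain 2, boundary value at x[b_start]

    exact = 0.5 * x * (1.0 - x)             # exact solution of the model problem
    print("max error after 20 sweeps:", np.abs(u - exact).max())
    ```

    The overlap (nodes 40 to 60 here) is what drives convergence: the wider it is, the faster the boundary data exchanged between the two solves settles down.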

  14. Conformable pressure vessel for high pressure gas storage

    DOEpatents

    Simmons, Kevin L.; Johnson, Kenneth I.; Lavender, Curt A.; Newhouse, Norman L.; Yeggy, Brian C.

    2016-01-12

    A non-cylindrical pressure vessel storage tank is disclosed. The storage tank includes an internal structure. The internal structure is coupled to at least one wall of the storage tank. The internal structure shapes and internally supports the storage tank. The pressure vessel storage tank has a conformability of about 0.8 to about 1.0. The internal structure can be, but is not limited to, a Schwarz-P structure, an egg-crate shaped structure, or carbon fiber ligament structure.

  15. Capture Matrices Handbook

    DTIC Science & Technology

    2014-04-01

    192–195. 2. I. Šafařik and M. Šafařikova. 2002. "Detection of Low Concentrations of Malachite Green and Crystal Violet in Water," Water Research 36...Malachite Green and Crystal Violet in Water," Water Research 36:196–200. 5. F. P. Schwarz and S. P. Wasik. 1976. "Fluorescence Measurements of Benzene...Detection of Low Concentration of Malachite Green and Crystal Violet in Water," Water Research 36:196–200. 3. Y. Lee, C.-L. Chang, and L.-M. Fu. 2011

  16. Consequences of an Abelian family symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramond, P.

    1996-01-01

    The addition of an Abelian family symmetry to the Minimal Supersymmetric Standard Model reproduces the observed hierarchies of quark and lepton masses and quark mixing angles only if it is anomalous. Green-Schwarz compensation of its anomalies requires the electroweak mixing angle to be sin²θ_W = 3/8 at the string scale, without any assumed GUT structure, suggesting a superstring origin for the standard model. The analysis is extended to neutrino masses and the lepton mixing matrix.

  17. Civil Defense in Central Europe and its Effects on Political and Military Leadership

    DTIC Science & Technology

    1981-06-05

    Belgique in French (Brussels) 20 November 1970; translated and cited in West Europe Report No 1533 dated 29 Jan 1980 (JPRS No 75021). Carl-Friedrich von...1979; translated and cited in West Europe Report No 1533 dated 29 Jan 1980 (JPRS No 75021). Wolfram von Raven, "(The Hole in the Security...Schwarz, Zivilschutz im Ausland II (Bonn: Bundesamt fuer Zivilschutz, 1977), page 156. Ibid., pages 153 and 155. Hans Sperl, "Strahlenschutz in

  18. Infinite tension limit of the pure spinor superstring

    NASA Astrophysics Data System (ADS)

    Berkovits, Nathan

    2014-03-01

    Mason and Skinner recently constructed a chiral infinite tension limit of the Ramond-Neveu-Schwarz superstring which was shown to compute the Cachazo-He-Yuan formulae for tree-level d = 10 Yang-Mills amplitudes and the NS-NS sector of tree-level d = 10 supergravity amplitudes. In this letter, their chiral infinite tension limit is generalized to the pure spinor superstring which computes a d = 10 superspace version of the Cachazo-He-Yuan formulae for tree-level d = 10 super-Yang-Mills and supergravity amplitudes.

  19. KSC-2012-1940

    NASA Image and Video Library

    2012-04-03

    CAPE CANAVERAL, Fla. – Jeremy Schwarz, left, quality assurance technician, and Mike Williams, right, a thermal protection system technician, both with United Space Alliance, apply adhesive to space shuttle Endeavour's right wing. The work is being done in preparation for tile bonding. Endeavour is inside Orbiter Processing Facility-2 at NASA's Kennedy Space Center in Florida. Ongoing transition and retirement activities are preparing the spacecraft for public display at the California Science Center in Los Angeles. Endeavour flew 25 missions during its 19-year career. Photo credit: NASA/Cory Huston

  20. Semi-automatic sparse preconditioners for high-order finite element methods on non-uniform meshes

    NASA Astrophysics Data System (ADS)

    Austin, Travis M.; Brezina, Marian; Jamroz, Ben; Jhurani, Chetan; Manteuffel, Thomas A.; Ruge, John

    2012-05-01

    High-order finite elements often have a higher accuracy per degree of freedom than the classical low-order finite elements. However, in the context of implicit time-stepping methods, high-order finite elements present challenges to the construction of efficient simulations due to the high cost of inverting the denser finite element matrix. There are many cases where simulations are limited by the memory required to store the matrix and/or the algorithmic components of the linear solver. We are particularly interested in preconditioned Krylov methods for linear systems generated by discretization of elliptic partial differential equations with high-order finite elements. Using a preconditioner like Algebraic Multigrid can be costly in terms of memory due to the need to store matrix information at the various levels. We present a novel method for defining a preconditioner for systems generated by high-order finite elements that is based on a much sparser system than the original high-order finite element system. We investigate the performance for non-uniform meshes on a cube and a cubed sphere mesh, showing that the sparser preconditioner is more efficient and uses significantly less memory. Finally, we explore new methods to construct the sparse preconditioner and examine their effectiveness for non-uniform meshes. We compare results to a direct use of Algebraic Multigrid as a preconditioner and to a two-level additive Schwarz method.
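
    The core idea, preconditioning a denser high-order system with a factorization of a much sparser surrogate, can be sketched as follows (the matrices are toy stand-ins; the authors' actual construction from high-order finite elements is not reproduced here):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-ins: A_high mimics a denser "high-order" operator, A_sparse a
# spectrally similar sparse surrogate used only inside the preconditioner.
n = 200
main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
A_sparse = sp.diags([off, main, off], [-1, 0, 1], format="csc")
far = -0.05 * np.ones(n - 2)                      # weak extra couplings
A_high = (A_sparse + sp.diags([far, far], [-2, 2])).tocsc()

b = np.random.default_rng(0).standard_normal(n)
ilu = spla.spilu(A_sparse)                        # factor only the sparse matrix
M = spla.LinearOperator((n, n), ilu.solve)        # preconditioner action
x, info = spla.gmres(A_high, b, M=M)
print(info, np.linalg.norm(A_high @ x - b))       # info == 0 on convergence
```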

  1. Interplay between morphological and shielding effects in field emission via Schwarz-Christoffel transformation

    NASA Astrophysics Data System (ADS)

    Marcelino, Edgar; de Assis, Thiago A.; de Castilho, Caio M. C.

    2018-03-01

    It is well known that sufficiently strong electrostatic fields are able to change the morphology of Large Area Field Emitters (LAFEs). This phenomenon affects the electrostatic interactions between adjacent sites on a LAFE during field emission and may lead to several consequences, such as the emitter's degradation, diffusion of adsorbed particles on the emitter's surface, deflection due to electrostatic forces, and mechanical stress. These consequences are undesirable for technological applications, since they may significantly affect the macroscopic current density on the LAFE. Despite their technological importance, these processes are not yet completely understood. Moreover, the electrostatic effects due to the proximity between emitters on a LAFE may compete with the morphological ones, and the balance between these effects may lead to nontrivial behavior in the apex Field Enhancement Factor (FEF). The present work studies the interplay between proximity and morphological effects through a model amenable to analytical treatment: a conducting system under an external electrostatic field, with a profile limited by two mirror-reflected triangular protrusions on an infinite line. The FEF near the apex of each emitter is obtained as a function of their shape and the distance between them via a Schwarz-Christoffel transformation. Our results suggest that a tradeoff between morphological and proximity effects on a LAFE may explain the observed reduction of the local FEF and its variation at small distances between the emitter sites.
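
    For reference, the Schwarz-Christoffel transformation invoked here maps the upper half-plane onto a polygonal domain; in its standard textbook form, with prevertices $x_k$ on the real axis and interior angles $\alpha_k \pi$,

```latex
f(z) \;=\; A + C \int^{z} \prod_{k=1}^{n} (\zeta - x_k)^{\alpha_k - 1}\, d\zeta .
```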

  2. Superfluid turbulence in a nonuniform circular channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, P.J.

    The excess dissipation due to the presence of quantized vorticity in flowing helium has been studied extensively. The success of the microscopic theory proposed by Schwarz in describing many properties of this dissipation led to a belief that the major aspects of the problem had been understood at the microscopic level. The experiment of Kafkalidis and Tough demonstrated that a weak one-dimensional nonuniformity in the flow field led to a dramatic departure between the observed behavior and the predictions of the Schwarz theory using the local uniformity approximation (LUA). The research presented in this thesis was undertaken to measure the dissipative states for thermal counterflow with a weak two-dimensional nonuniformity. The experiment of Kafkalidis and Tough used a flow channel with a high aspect ratio. Such channels are known to exhibit only one state of superfluid turbulence. In this research the channel is circular in cross section and shows two distinct turbulent states (T-I and T-II). This experiment demonstrates that there is no difference in the excess dissipation for flows that are either converging or diverging. The T-I state is described by the same parameters as the T-I state in uniform channels. The turbulence exhibits front behavior at the transition between states. These conclusions are consistent with the LUA. The T-II state is at variance with the LUA, but is consistent with the results found in the Kafkalidis and Tough experiment.

  3. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method.

    PubMed

    Gao, Mingzhong; Yu, Bin; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects; the failure mechanism of rectangular cavern wall rock is significantly different as a result of the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method results in a lengthy computational process and multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficients and to determine the rectangular cavern wall rock deformation. With regard to the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. This theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and the numerical solution exhibit identical trends, thereby demonstrating the method's validity. This method greatly improves the computing accuracy and reduces the difficulty in solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure.

  4. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method

    PubMed Central

    Gao, Mingzhong; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects; the failure mechanism of rectangular cavern wall rock is significantly different as a result of the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method results in a lengthy computational process and multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficients and to determine the rectangular cavern wall rock deformation. With regard to the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. This theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and the numerical solution exhibit identical trends, thereby demonstrating the method's validity. This method greatly improves the computing accuracy and reduces the difficulty in solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure. PMID:29155892

  5. A taxonomy and comparison of parallel block multi-level preconditioners for the incompressible Navier-Stokes equations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.

    2007-04-01

    In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two- and three-dimensional steady-state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
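
    As a concrete illustration of the ABF idea (not the MPSalsa implementation; the matrices below are random stand-ins), a block upper-triangular preconditioner with a diagonal-scaled Schur complement approximation can be applied inside GMRES:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy saddle-point system [[F, B^T], [B, 0]] standing in for a discretized
# Navier-Stokes operator; F is the velocity block, B the divergence.
rng = np.random.default_rng(1)
nu, npr = 40, 15
F = sp.eye(nu) * 4 + sp.random(nu, nu, density=0.05, random_state=1) * 0.1
F = (F + F.T).tocsc()
B = sp.random(npr, nu, density=0.3, random_state=2).tocsc()
K = sp.bmat([[F, B.T], [B, None]], format="csc")

Dinv = sp.diags(1.0 / F.diagonal())
S = (B @ Dinv @ B.T + 1e-8 * sp.eye(npr)).tocsc()  # Schur complement approx.
Flu, Slu = spla.splu(F), spla.splu(S)

def apply_prec(r):
    # Block upper-triangular solve: pressure first, then velocity.
    ru, rp = r[:nu], r[nu:]
    p = -Slu.solve(rp)
    u = Flu.solve(ru - B.T @ p)
    return np.concatenate([u, p])

M = spla.LinearOperator(K.shape, apply_prec)
b = rng.standard_normal(nu + npr)
x, info = spla.gmres(K, b, M=M)
print(info, np.linalg.norm(K @ x - b))
```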

  6. Environmental Perturbations, Behavioral Change, and Population Response in a Long-Term Northern Elephant Seal Study

    DTIC Science & Technology

    2013-09-30

    and Physiology A-Molecular & Integrative Physiology 161:388-394. Goldstein, T., I. Mena, S. J. Anthony, R. Medina, P. W. Robinson, D. J. Greig, D. P...behaviour and foraging success in the northern elephant seal. Functional Ecology 27:1055-1063. Lyons, E. T., T. A. Kuzmina, T. R. Spraker, N. Jaggi...Klanjšček, T., Lusseau, D., Kraus, S., McMahon, C.R., Robinson, P. W., Schick, R., Schwarz, L.K., Simmons, S. E., Thomas, L., Tyack, P. and Harwood

  7. KSC-2012-1943

    NASA Image and Video Library

    2012-04-03

    CAPE CANAVERAL, Fla. – Mike Williams, left, a thermal protection system technician, and Jeremy Schwarz, right, quality assurance technician, both with United Space Alliance, set weights atop a newly installed section of tile on the right wing of space shuttle Endeavour at NASA's Kennedy Space Center in Florida. The weights will hold the section in place while the adhesive hardens beneath. Ongoing transition and retirement activities are preparing the spacecraft for public display at the California Science Center in Los Angeles. Endeavour flew 25 missions during its 19-year career. Photo credit: NASA/Cory Huston

  8. A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale

    DOE PAGES

    Cheung, James; Frischknecht, Amalie L.; Perego, Mauro; ...

    2017-07-20

    Here, we develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson–Nernst–Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.
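
    The alternating Schwarz coupling at the heart of the second stage can be illustrated on a toy problem (an admitted simplification: both overlapping subdomains below solve the same 1D Poisson equation by direct inversion, rather than cDFT and PNP):

```python
import numpy as np

# Alternating Schwarz on [0,1] for u'' = -1, u(0) = u(1) = 0, whose exact
# solution is u(x) = x(1-x)/2. Two overlapping subdomains exchange Dirichlet
# data at their interior endpoints.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.ones(n)
u = np.zeros(n)
i1, i2 = 60, 40                  # subdomain 1 = [0, x[i1]], 2 = [x[i2], 1]

def solve_dirichlet(fvals, ua, ub):
    """Direct solve of u'' = f on interior nodes with boundary values ua, ub."""
    m = len(fvals)
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1))
    rhs = fvals * h * h
    rhs[0] -= ua
    rhs[-1] -= ub
    return np.linalg.solve(A, rhs)

for _ in range(20):              # Schwarz sweeps
    u[1:i1] = solve_dirichlet(f[1:i1], u[0], u[i1])
    u[i2 + 1:-1] = solve_dirichlet(f[i2 + 1:-1], u[i2], u[-1])

print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))   # should be near zero
```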

  9. A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale

    NASA Astrophysics Data System (ADS)

    Cheung, James; Frischknecht, Amalie L.; Perego, Mauro; Bochev, Pavel

    2017-11-01

    We develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson-Nernst-Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.

  10. A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, James; Frischknecht, Amalie L.; Perego, Mauro

    Here, we develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson–Nernst–Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.

  11. Cascaded resonant bridge converters

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A. (Inventor)

    1989-01-01

    A converter for converting a low voltage direct current power source to a higher voltage, high frequency alternating current output for use in an electrical system where it is desired to use low weight cables and other circuit elements. The converter has a first stage series resonant (Schwarz) converter which converts the direct current power source to an alternating current by means of switching elements that are operated by a variable frequency voltage regulator, a transformer to step up the voltage of the alternating current, and a rectifier bridge to convert the alternating current to a direct current first stage output. The converter further has a second stage series resonant (Schwarz) converter which is connected in series to the first stage converter to receive its direct current output and convert it to a second stage high frequency alternating current output by means of switching elements that are operated by a fixed frequency oscillator. The voltage of the second stage output is controlled at a relatively constant value by controlling the first stage output voltage, which is accomplished by controlling the frequency of the first stage variable frequency voltage controller in response to the second stage voltage. Fault tolerance in the event of a load short circuit is provided by making the operation of the first stage variable frequency voltage controller responsive to first and second stage current limiting devices. The second stage output is connected to a rectifier bridge whose output is connected to the input of the second stage to provide good regulation of the output voltage waveform at low system loads.

  12. Detection capability of a pulsed Ground Penetrating Radar utilizing an oscilloscope and Radargram Fusion Approach for optimal signal quality

    NASA Astrophysics Data System (ADS)

    Seyfried, Daniel; Schoebel, Joerg

    2015-07-01

    In scientific research, pulsed radars often employ a digital oscilloscope as the sampling unit. The sensitivity of an oscilloscope is determined in general by the number of bits of its analog-to-digital converter and the selected full-scale vertical setting, i.e., the maximum voltage range displayed. Furthermore, oversampling or averaging of the input signal may increase the effective number of bits, and hence the sensitivity. Especially for Ground Penetrating Radar applications, high sensitivity of the radar system is demanded, since reflection amplitudes of buried objects are strongly attenuated in the ground. Hence, in order to achieve high detection capability, this parameter is one of the most crucial ones. In this paper we analyze the detection capability of our pulsed radar system utilizing a Rohde & Schwarz RTO 1024 oscilloscope as the sampling unit for Ground Penetrating Radar applications, such as detection of pipes and cables in the ground. Effects of averaging and low-noise amplification of the received signal prior to sampling are also investigated by means of an appropriate laboratory setup. To underline our findings we then present real-world radar measurements performed on our GPR test site, where we have buried pipes and cables of different types and materials at different depths. The results illustrate the need for a proper choice of oscilloscope settings for optimal data recording. However, as we show, displaying both strong signal contributions, due to, e.g., antenna cross-talk and the direct ground-bounce reflection, and weak reflections from objects buried deeper in the ground requires opposing trends in the oscilloscope's settings. We therefore present our Radargram Fusion Approach: multiple radargrams recorded in parallel, each with a setting optimized for a certain type of contribution, are fused by appropriate digital signal post-processing into a single radargram that displays all contributions, originally of very different strengths, in an equalized and normalized way.
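
    A heavily hedged sketch of the fusion idea (the function name, the sigmoid gate, and the per-radargram normalization are all assumptions; the paper's exact fusion rule is not reproduced) might look like this:

```python
import numpy as np

def fuse(strong, weak, gate):
    """Blend two (n_traces, n_samples) radargrams of the same scene:
    'strong' was recorded with a coarse vertical setting (early arrivals),
    'weak' with a fine one (deep, weak reflections); 'gate' is the sample
    index around which the blend switches."""
    s = strong / (np.max(np.abs(strong)) + 1e-12)   # equalize amplitude scales
    w = weak / (np.max(np.abs(weak)) + 1e-12)
    taper = 1.0 / (1.0 + np.exp(-(np.arange(s.shape[1]) - gate) / 5.0))
    return (1.0 - taper) * s + taper * w            # early: strong, late: weak

rng = np.random.default_rng(0)
strong = rng.standard_normal((10, 512)) * np.exp(-np.arange(512) / 50.0)
weak = rng.standard_normal((10, 512)) * 1e-3
print(fuse(strong, weak, gate=200).shape)           # (10, 512)
```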

  13. Large deviation approach to the generalized random energy model

    NASA Astrophysics Data System (ADS)

    Dorlas, T. C.; Dukes, W. M. B.

    2002-05-01

    The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.

  14. Correlation between cystatin C-based formulas, Schwartz formula and urinary creatinine clearance for glomerular filtration rate estimation in children with kidney disease.

    PubMed

    Safaei-Asl, Afshin; Enshaei, Mercede; Heydarzadeh, Abtin; Maleknejad, Shohreh

    2016-01-01

    Assessment of glomerular filtration rate (GFR) is an important tool for monitoring renal function. Given the limitations of available methods, we set out to calculate GFR with cystatin C (Cys C)-based formulas and determine their correlation with current methods. We studied 72 children (38 boys and 34 girls) with renal disorders. The 24-hour urinary creatinine (Cr) clearance was the gold-standard method. GFR was measured with the Schwartz formula and with Cys C-based formulas (Grubb, Hoek, Larsson and Simple), and the correlations of these formulas were determined. Using the Pearson correlation coefficient, a significant positive correlation between all formulas and the standard method was seen (R² for the Schwartz, Hoek, Larsson, Grubb and Simple formulas was 0.639, 0.722, 0.705, 0.712 and 0.722, respectively) (P < 0.001). Cys C-based formulas could predict the variance of the standard method results with high power. These formulas correlated with the Schwartz formula with R² of 0.62-0.65 (intermediate correlation). Using linear regression and the constant (y-intercept), it was revealed that the Larsson, Hoek and Grubb formulas can estimate GFR with no statistical difference compared with the standard method, whereas the Schwartz and Simple formulas overestimate GFR. This study shows that Cys C-based formulas have a strong relationship with 24-hour urinary Cr clearance. Hence, they can determine GFR in children with kidney injury more easily and with sufficient accuracy. This helps the physician diagnose renal disease in early stages and improves the prognosis.
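
    For reference, the bedside form of the Schwartz formula referred to above (general clinical knowledge, not taken from this record) estimates GFR from height and serum creatinine as

```latex
\mathrm{eGFR}\ (\mathrm{mL/min/1.73\,m^2}) \;=\; \frac{k \times \text{height (cm)}}{S_{\mathrm{Cr}}\ (\mathrm{mg/dL})}, \qquad k \approx 0.413 .
```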

  15. The effect of e-learning on the quality of orthodontic appliances.

    PubMed

    Schorn-Borgmann, Stephanie; Lippold, Carsten; Wiechmann, Dirk; Stamm, Thomas

    2015-01-01

    The effect of e-learning on practical skills in medicine has not yet been thoroughly investigated. Today's multimedia learning environment and access to e-books provide students with more knowledge than ever before. The aim of this study is to evaluate the effect of online demonstrations on the quality of orthodontic appliances manufactured by undergraduate dental students. The study design was a parallel-group randomized clinical trial. Fifty-four participants were randomly assigned to one of three groups: 1) conventional lectures, 2) conventional lectures plus written online material, and 3) access to the resources of groups one and two plus access to online video material. Three orthodontic appliances (Schwarz Plate, U-Bow Activator, and Fränkel Regulator) were manufactured during the course and scored by two independent raters blinded to the participants. A 15-point scale index was used to evaluate the outcome quality of the appliances. In general, no significant differences were found between the groups. Among the appliances, the Schwarz Plate obtained the highest scores, whereas the Fränkel Regulator had the lowest; these results were independent of the groups. Females showed better outcome scores than males in groups two and three, but the difference was not significant. Age of the participants also had no significant effect. Offering students additional time and course-independent e-learning resources did not increase the outcome quality of the orthodontic appliances. The advantages of e-learning observed in the theoretical fields of medicine were not achieved in the educational procedures for manual skills. Factors other than e-learning may have a higher impact on manual skills, and this should be investigated in further studies.

  16. Macroscopic Violation of Three Cauchy-Schwarz Inequalities Using Correlated Light Beams From an Infra-Red Emitting Semiconductor Diode Array

    NASA Technical Reports Server (NTRS)

    Edwards, P. J.; Huang, X.; Li, Y. Q. (Editor); Wang, Y. Z. (Editor)

    1996-01-01

    We briefly review quantum mechanical and semi-classical descriptions of experiments which demonstrate the macroscopic violation of the three Cauchy-Schwarz inequalities: g^(2)_11(0) ≥ 1; g^(2)_11(0) ≥ g^(2)_11(t), (t → ∞); and [g^(2)_12(0)]^2 ≤ g^(2)_11(0) g^(2)_22(0). Our measurements demonstrate the violation, at macroscopic intensities, of each of these inequalities. We show that their violation, although weak, can be demonstrated through photodetector current covariance measurements on correlated sub-Poissonian, Poissonian, and super-Poissonian light beams. Such beams are readily generated by a tandem array of infrared-emitting semiconductor junction diodes. Our measurements utilize an electrically coupled array of one or more infrared-emitting diodes, optically coupled to a detector array. The emitting array is operated in such a way as to generate highly correlated beams of variable photon Fano factor. Because the measurements are made on time scales long compared with the first-order coherence time and with detector areas large compared with the corresponding coherence areas, first-order interference effects are negligible. The first and second inequalities are violated, as expected, when a sub-Poissonian light beam is split and the intensity fluctuations of the two split beams are measured by two photodetectors and subsequently cross-correlated. The third inequality is violated by bunched (as well as anti-bunched) beams of equal intensity provided the measured cross-correlation coefficient exceeds (F - 1)/F, where F is the measured Fano factor of each beam. We also investigate the violation for the case of unequal beams.

  17. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190

    2015-03-15

    We present a numerical algorithm for simulating the spinodal decomposition described by the three-dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
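
    The outer Newton / inner Krylov structure described above is available off the shelf in SciPy's matrix-free newton_krylov solver; the sketch below applies it to a toy 1D nonlinear problem (no Schwarz preconditioner and none of the CHC physics):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Solve u'' = u**3 - 1 on (0,1) with u(0) = u(1) = 0 via Newton-Krylov.
n = 100
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))      # Dirichlet boundaries
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return lap - (u**3 - 1.0)

u = newton_krylov(residual, np.zeros(n), method="lgmres")
print(np.abs(residual(u)).max())                  # should be near zero
```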

  18. Advanced steam power plant concepts with optimized life-cycle costs: A new approach for maximum customer benefit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seiter, C.

    1998-07-01

    The use of coal in power generation applications is currently enjoying a renaissance. New highly efficient and cost-effective plant concepts together with environmental protection technologies are the main factors in this development. In addition, coal is available on the world market at attractive prices and in many places it is more readily available than gas. At the economic leading edge, standard power plant concepts have been developed to meet the requirements of emerging power markets. These concepts incorporate the high technological state-of-the-art and are designed to achieve the lowest life-cycle costs. Low capital cost, fuel costs and operating costs in combination with the shortest lead times are the main assets that make these plants attractive, especially for IPPs and developers. Other aspects of these comprehensive concepts include turnkey construction and the willingness to participate in BOO/BOT projects. One of the various examples of such a concept, the 2 x 610-MW Paiton Private Power Project Phase II in Indonesia, is described in this paper. At the technological leading edge, Siemens has always made a major contribution and has been a pacesetter for new developments in steam power plant technology. Modern coal-fired steam power plants use computer-optimized process and plant design as well as advanced materials, and achieve efficiencies exceeding 45%. One excellent example of this high technology is the world's largest lignite-fired steam power plant, Schwarze Pumpe in Germany, which is equipped with two 800 MW Siemens steam turbine generators with supercritical steam parameters. The world's largest 50-Hz single-shaft turbine generator with supercritical steam parameters, rated at 1025 MW for the Niederaussem lignite-fired steam power plant in Germany, is a further example of sophisticated Siemens steam turbine technology and sets a new benchmark in this field.

  19. A three-dimensional color space from the 13th century

    PubMed Central

    Smithson, Hannah E.; Dinkova-Bruun, Greti; Gasper, Giles E. M.; Huxtable, Mike; McLeish, Tom C. B.; Panti, Cecilia

    2012-01-01

    We present a new commentary on Robert Grosseteste’s De colore, a short treatise that dates from the early 13th century, in which Grosseteste constructs a linguistic combinatorial account of color. In contrast to other commentaries (e.g., Kuehni & Schwarz, Color Ordered: A Survey of Color Order Systems from Antiquity to the Present, 2007, p. 36), we argue that the color space described by Grosseteste is explicitly three-dimensional. We seek the appropriate translation of Grosseteste’s key terms, making reference both to Grosseteste’s other works and the broader intellectual context of the 13th century, and to modern color spaces. PMID:22330399

  20. Weierstrass as a reader of Poincaré's early works

    NASA Astrophysics Data System (ADS)

    Bottazzini, Umberto

    2014-08-01

    From the very beginning of his scientific career Poincaré found an attentive reader in Weierstrass. To support this claim, given the apparent lack of a direct relationship between them, in the present paper I take into account indirect sources such as Mittag-Leffler's letters to Poincaré and Weierstrass, and Weierstrass's letters to S. Kovalevskaya and H. A. Schwarz. These letters provide evidence of Weierstrass's interest in the achievements of the young French mathematician, including in particular his early statement of the uniformisation theorem. In addition, such subjects as gap series, the Poincaré-Volterra theorem and the n-body problem are also discussed.

  1. Computational simulations of supersonic magnetohydrodynamic flow control, power and propulsion systems

    NASA Astrophysics Data System (ADS)

    Wan, Tian

    This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent, chemically reacting Navier-Stokes equations, the electron energy conservation equation and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Four problems are then selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard reentry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first, and the generator section of the HVEPS test facility is computed afterwards. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and its accuracy and efficiency are necessary as flow complexity increases.
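
    The "combined Schwarz and ILU(0)" pattern can be sketched with a one-level additive Schwarz (block-Jacobi, no overlap) preconditioner whose subdomain solves use incomplete factorizations (the matrix is a simple stand-in, and SciPy's spilu is only an approximation of true ILU(0)):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

blocks = [(0, n // 2), (n // 2, n)]                  # two "subdomains"
factors = [spla.spilu(A[i:j, i:j].tocsc()) for i, j in blocks]

def schwarz(r):
    z = np.zeros_like(r)
    for (i, j), fac in zip(blocks, factors):
        z[i:j] = fac.solve(r[i:j])                   # local incomplete solve
    return z

x, info = spla.gmres(A, b, M=spla.LinearOperator((n, n), schwarz))
print(info, np.linalg.norm(A @ x - b))
```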

  2. Residual interference and wind tunnel wall adaption

    NASA Technical Reports Server (NTRS)

    Mokry, Miroslav

    1989-01-01

    Measured flow variables near the test section boundaries, used to guide adjustments of the walls in adaptive wind tunnels, can also be used to quantify the residual interference. Because of the finite number of wall control devices (jacks, plenum compartments), the finite test section length, and the approximate character of adaptation algorithms, unconfined flow conditions are not expected to be precisely attained even in the fully adapted stage. The procedures for the evaluation of residual wall interference are essentially the same as those used for assessing the corrections in conventional, non-adaptive wind tunnels. Depending upon the number of flow variables utilized, one can speak of one- or two-variable methods; in two dimensions, also of Schwarz- or Cauchy-type methods. The one-variable methods use the measured static pressure and normal velocity at the test section boundary, but do not require any model representation. This is a clear advantage for adaptive wall test sections, which are often relatively small with respect to the test model, and for the variety of complex flows commonly encountered in wind tunnel testing. For test sections with flexible walls, the normal component of velocity is given by the shape of the wall, adjusted for the displacement effect of its boundary layer. For ventilated test section walls it has to be measured by the Calspan pipes, laser Doppler velocimetry, or other appropriate techniques. The interface discontinuity method, also described, is a genuine residual interference assessment technique. It is specific to adaptive wall wind tunnels, where computation results for the fictitious flow in the exterior of the test section are provided.

  3. Generation of non-classical correlated photon pairs via a ladder-type atomic configuration: theory and experiment.

    PubMed

    Ding, Dong-Sheng; Zhou, Zhi-Yuan; Shi, Bao-Sen; Zou, Xu-Bo; Guo, Guang-Can

    2012-05-07

    We experimentally generate a non-classical correlated two-color photon pair at 780 and 1529.4 nm in a ladder-type configuration using a hot 85Rb atomic vapor, with a production rate of ~10^7/s. The non-classical correlation between these two photons is demonstrated by a strong violation of the Cauchy-Schwarz inequality, by a factor of R = 48 ± 12. In addition, we experimentally investigate the relations between the correlation and important experimental parameters such as the single-photon detuning and the pump powers. We also present a detailed theoretical analysis, whose predictions are in reasonable agreement with our experimental results.
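
    For reference, one common way to quantify such a violation (conventions differ between papers, so this form is an assumption rather than a quote from the record) compares the signal-idler cross-correlation with the auto-correlations:

```latex
R \;=\; \frac{\big[g^{(2)}_{s,i}(0)\big]^{2}}{g^{(2)}_{s,s}(0)\,g^{(2)}_{i,i}(0)} \;\le\; 1 \quad \text{classically; } R > 1 \text{ signals non-classical correlation.}
```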

  4. Unifying Type-II Strings by Exceptional Groups

    NASA Astrophysics Data System (ADS)

    Arvanitakis, Alex S.; Blair, Chris D. A.

    2018-05-01

    We construct the exceptional sigma model: a two-dimensional sigma model coupled to a supergravity background in a manifestly (formally) E_{D(D)}-covariant manner. This formulation of the background is provided by exceptional field theory (EFT), which unites the metric and form fields of supergravity in E_{D(D)} multiplets before compactification. The realization of the symmetries of EFT on the world sheet uniquely fixes the Weyl-invariant Lagrangian and allows us to relate our action to the usual type-IIA fundamental string action and a form of the type-IIB (m, n) action. This uniqueness "predicts" the correct form of the couplings to gauge fields in both Neveu-Schwarz and Ramond sectors, without invoking supersymmetry.

  5. Background Independence and Duality Invariance in String Theory.

    PubMed

    Hohm, Olaf

    2017-03-31

    Closed string theory exhibits an O(D,D) duality symmetry on tori, which in double field theory is manifest before compactification. I prove that to first order in α' there is no manifestly background independent and duality invariant formulation of bosonic string theory in terms of a metric, b field, and dilaton. To this end I use O(D,D) invariant second order perturbation theory around flat space to show that the unique background independent candidate expression for the gauge algebra at order α' is inconsistent with the Jacobi identity. A background independent formulation exists instead for frame variables subject to α'-deformed frame transformations (generalized Green-Schwarz transformations). Potential applications for curved backgrounds, as in cosmology, are discussed.

  6. Double field theory at order α'

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2014-11-01

    We investigate α' corrections of bosonic strings in the framework of double field theory. The previously introduced "doubled α'-geometry" gives α'-deformed gauge transformations arising in the Green-Schwarz anomaly cancellation mechanism but does not apply to bosonic strings. These require a different deformation of the duality-covariantized Courant bracket which governs the gauge structure. This is revealed by examining the α' corrections in the gauge algebra of closed string field theory. We construct a four-derivative cubic double field theory action invariant under the deformed gauge transformations, giving a first glimpse of the gauge principle underlying bosonic string α' corrections. The usual metric and b-field are related to the duality covariant fields by non-covariant field redefinitions.

  7. Consistent Pauli reduction on group manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baguet, A.; Pope, Christopher N.; Samtleben, H.

    We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NS-NS sector of supergravity (and, more generally, the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G×G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk–Schwarz reduction ansatz in double field theory which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S3×S3 and on similar product spaces. The construction is another example of globally geometric non-toroidal compactifications inducing non-geometric fluxes.

  8. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE PAGES

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; ...

    2017-11-06

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  9. String scattering amplitudes and deformed cubic string field theory

    NASA Astrophysics Data System (ADS)

    Lai, Sheng-Hong; Lee, Jen-Chi; Lee, Taejin; Yang, Yi

    2018-01-01

    We study string scattering amplitudes by using the deformed cubic string field theory, which is equivalent to the string field theory in the proper-time gauge. The four-string scattering amplitudes with three tachyons and an arbitrary string state are calculated. The string field theory yields the string scattering amplitudes evaluated on the world sheet of string scattering, whereas the conventional method, based on the first-quantized theory, yields the string scattering amplitudes defined on the upper half-plane. For the highest-spin states, generated by the primary operators, both calculations are in perfect agreement. In this case, the string scattering amplitudes are invariant under the conformal transformation which maps the string world sheet onto the upper half-plane. If the external string states are general massive states, generated by non-primary field operators, the conformal transformation between the world sheet and the upper half-plane must be taken into account carefully. We show by an explicit calculation that the string scattering amplitudes calculated using the deformed cubic string field theory transform into those of the first-quantized theory on the upper half-plane under the conformal transformation generated by the Schwarz-Christoffel mapping.

  10. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    NASA Astrophysics Data System (ADS)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; Lim, Hojun; Littlewood, David J.

    2018-02-01

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  11. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  12. Quantification of map similarity to magnetic pre-screening for heavy metal pollution assessment in top soil

    NASA Astrophysics Data System (ADS)

    Cao, L.; Appel, E.; Roesler, W.; Ojha, G.

    2013-12-01

    From numerous published results, the link between magnetic concentration and heavy metal (HM) concentrations is well established. However, bivariate correlation analysis does not imply causality, and extreme values, which often appear in magnetic data, can lead to seemingly excellent correlation. It seems clear that site selection for chemical sampling based on magnetic pre-screening can deliver a superior result for outlining HM pollution, but this conclusion has so far been drawn only from qualitative evaluation. In this study, we use map similarity comparison techniques to demonstrate the usefulness of a combined magnetic-chemical approach quantitatively. We chose available data around 'Schwarze Pumpe', a large coal-burning power plant complex located in eastern Germany. The site of 'Schwarze Pumpe' is suitable for a demonstration study, as the soil in its surroundings is heavily polluted by fly ash, the natural magnetic background is very low, and magnetic investigations can be done in undisturbed forest soil. Magnetic susceptibility (MS) of top soil was measured with a Bartington MS2D surface sensor at 180 locations and with an SM400 downhole device in ~0.5 m deep vertical sections at 90 locations. Cores from the 90 downhole sites were also studied for HM analysis. From these results, 85 sites could be used to determine a spatial distribution map of HM contents reflecting the 'True' pollution situation. Different sets comprising 30 sites were chosen by arbitrary selection from the above 85 sample sites (we refer to four such maps here: S1-4). Additionally, we determined a 'Targeted' map from 30 sites selected on the basis of the pre-screening MS results. The map comparison process is as follows: (1) categorization of all absolute values into five classes by the Natural Breaks classification method; (2) connection of the sample locations in the x-y plane by Delaunay triangulation; (3) determination of a distribution map of triangular planes with the classified values as the Z coordinate; (4) calculation of normal vectors for each individual triangular plane; (5) transformation of the TINs into raster data, assigning the same normal vectors to all grid points inside the same TIN; (6) calculation of the root-mean-square of the angles between the normal vectors of two maps at the same grid points. Additionally, we applied the kappa statistics method to assess map similarities and, moreover, developed a fuzzy set approach. Combining both methods using the indices Khisto, Klocation, Kappa and Kfuzzy yields a broad comparison system, which allows determining the degree of similarity, and also the spatial distribution of similarity, between two maps. The results indicate that the similarity between the 'Targeted' and 'True' distribution maps is higher than that between 'S1-4' and the 'True' map. This demonstrates that magnetic pre-screening can provide a reliable basis for targeted selection of chemical sampling sites, showing the superior efficiency of a combined magnetic-chemical site assessment in comparison to a traditional chemical-only approach.
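
    To make the agreement scoring concrete, a minimal sketch using Cohen's kappa on two classified rasters is shown below (synthetic data; the Khisto, Klocation and Kfuzzy indices from the text are not reproduced):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Two rasters of five Natural-Breaks-style classes: a "True" map and a
# perturbed "Targeted" map that disagrees on ~15% of the cells.
rng = np.random.default_rng(0)
true_map = rng.integers(1, 6, size=(50, 50))
targeted = true_map.copy()
noise = rng.random(true_map.shape) < 0.15
targeted[noise] = rng.integers(1, 6, size=int(noise.sum()))

# Cell-by-cell agreement corrected for chance.
print(cohen_kappa_score(true_map.ravel(), targeted.ravel()))
```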

  13. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's information criterion and Schwarz's Bayesian information criterion were used for comparing the various models. Over 50% of the counts were zeros, even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model corrected for overdispersion and the standard negative binomial model provided a better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common phenomena in insect count data. If not properly modelled, these properties can invalidate normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. It is therefore recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
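
    The selection step can be sketched with statsmodels (a hedged stand-in: an intercept-only Poisson versus negative binomial GLM on simulated overdispersed counts; the paper's zero-inflated variants are omitted):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
counts = rng.negative_binomial(1, 0.2, size=300)   # overdispersed, many zeros
X = np.ones((300, 1))                              # intercept-only design

pois = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
nb = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print("Poisson AIC/BIC:", pois.aic, pois.bic)      # expect the worse fit
print("NegBin  AIC/BIC:", nb.aic, nb.bic)          # expect the better fit
```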

  14. Commentary on the special issue on the adolescent brain: Adolescence, trajectories, and the importance of prevention.

    PubMed

    Andersen, Susan L

    2016-11-01

    Adolescence as highlighted in this special issue is a period of tremendous growth, synaptic exuberance, and plasticity, but also a period for the emergence of mental illness and addiction. This commentary aims to stimulate research on prevention science to reduce the impact of early life events that often manifest during adolescence. By promoting a better understanding of what creates a normal and abnormal trajectory, the reviews by van Duijvenvoorde et al., Kilford et al., Lichenstein et al., and Tottenham and Galvan in this special issue comprehensively describe how the adolescent brain develops under typical conditions and how this process can go awry in humans. Preclinical reviews also within this issue describe how adolescents have prolonged extinction periods to maximize learning about their environment (Baker et al.), whereas Schulz and Sisk focus on the importance of puberty and how it interacts with stress (Romeo). Caballero and Tseng then set the stage of describing the neural circuitry that is often central to these changes and psychopathology. Factors that affect the mis-wiring of the brain for illness, including prenatal exposure to anti-mitotic agents (Gomes et al.) and early life stress and inflammation (Schwarz and Brenhouse), are included as examples of how exposure to early adversity manifests. These reviews are synthesized and show how information from the maturational stages that precede or occur during adolescence is likely to hold the key towards optimizing development to produce an adolescent and adult that is resilient and well adapted to their environment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. [Establishing and applying an autoregressive integrated moving average model to predict the incidence rate of dysentery in Shanghai].

    PubMed

    Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An

    2010-01-01

    To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to the criterion of uncorrelated residuals, and the goodness of fit was assessed through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008, and the validity of the model was evaluated by comparing the predicted incidence rate with the actual one. The incidence rate in 2010 was then predicted by the ARIMA model based on the incidence rates from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)_12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806) and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131, respectively, and the prediction error was white noise. The fitted model was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)μ_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rates from January 1990 to June 2009, was 9.390 per 100,000. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a high-precision model for short-term forecasting.
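
    The reported structure can be reproduced in outline with statsmodels' SARIMAX (a sketch on synthetic monthly data, since the Shanghai series itself is not included here):

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(240)                                  # 20 years of monthly data
y = 10 + 0.01 * t + 2 * np.sin(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(240)

# Same (p,d,q)(P,D,Q)_s structure as the ARIMA(1,1,1)(0,1,2)_12 model above.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 2, 12))
res = model.fit(disp=False)
print(res.aic, res.bic)
print(res.forecast(steps=12))                       # one-year-ahead forecast
```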

  16. A rights-based approach to indoor air pollution.

    PubMed

    Lim, Jamie; Petersen, Stephen; Schwarz, Dan; Schwarz, Ryan; Maru, Duncan

    2013-12-12

    Household indoor air pollution from open-fire cookstoves remains a public health and environmental hazard that negatively impacts people's right to health. Technologically improved cookstoves designed to reduce air pollution have demonstrated their efficacy in laboratory studies. Despite the tremendous need for such stoves, in the field they have often failed to be effective, with low rates of long-term adoption by users, mainly due to poor maintenance of the stoves. In poor, rural, isolated communities, there is unlikely to be a single behavioral or technological "fix" to this problem. In this paper, we suggest that improved cookstoves are an important health intervention to which people have a right, as they do to family planning, vaccination, and essential primary care medicines. Like these other necessary elements in the fulfillment of the right to health, access to clean indoor air should be incorporated into state health strategies, policies, and plans. State infrastructure and health systems should support public and private sector delivery of improved cookstove services, and ensure that such services reach all communities, even those that are poor, remotely located, and unlikely to be served by the market. We suggest that community health workers could play a critical role in creating demand for, facilitating the delivery of, and monitoring these cookstoves and related services. Through this approach, improved cookstoves could become an appealing, available, and sustainable option for the rural poor. In this paper, we adopt a human rights-based approach to the problem of indoor air pollution, using Nepal as an example. Copyright © 2013 Lim, Petersen, Schwarz, Schwarz, Maru. This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

  17. Conformal mapping in optical biosensor applications.

    PubMed

    Zumbrum, Matthew E; Edwards, David A

    2015-09-01

    Optical biosensors are devices used to investigate surface-volume reaction kinetics. Current mathematical models for reaction dynamics rely on the assumption of unidirectional flow within these devices. However, new devices, such as the Flexchip, have a geometry that introduces two-dimensional flow, complicating the depletion of the volume reactant. To account for this, a previous mathematical model is extended to include two-dimensional flow, and the Schwarz-Christoffel mapping is used to relate the physical device geometry to that of a device with unidirectional flow. Mappings for several Flexchip dimensions are considered, and the ligand depletion effect is investigated for one of these mappings. Rate constants are estimated from simulated data to quantify the effect of including two-dimensional flow in the mathematical model.
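
    For reference, the Schwarz-Christoffel construction used here (and in several later records) maps the upper half-plane conformally onto a polygon with interior angles \alpha_k \pi:

        f(z) = A + C \int^{z} \prod_{k} (\zeta - x_k)^{\alpha_k - 1} \, d\zeta

    where the prevertices x_k lie on the real axis and the constants A and C fix translation, rotation, and scale.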

  18. On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources

    NASA Astrophysics Data System (ADS)

    Savina, Tatiana; Akinyemi, Lanre; Savin, Avital

    2018-01-01

    A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface separating two fluids sandwiched between two plates. The fluids have different viscosities. In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by the presence of special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves is defined by the initial shape of the free boundary.
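
    For context, the standard definition behind this approach: given a non-singular analytic curve Γ in the plane, its Schwarz function S(z) is the unique function, analytic in a neighbourhood of Γ, satisfying

        S(z) = \bar{z} \quad \text{for } z \in \Gamma,

    which is what allows the free-boundary dynamics to be continued analytically off the interface.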

  19. A comparison of two conformal mapping techniques applied to an aerobrake body

    NASA Technical Reports Server (NTRS)

    Hommel, Mark J.

    1987-01-01

    Conformal mapping is a classical technique which has been utilized for solving problems in aerodynamics and hydrodynamics. Conformal mapping has been successfully applied to the construction of grids around airfoils, engine inlets and other aircraft configurations. Here, conformal mapping techniques were applied to an aerobrake body having an axis of symmetry. Two different approaches were utilized: (1) the Karman-Trefftz transformation; and (2) the pointwise Schwarz-Christoffel transformation. In both cases, the aerobrake body was mapped onto a near circle, and a grid was generated in the mapped plane. The mapped body and grid were then mapped back into physical space and the properties of the associated grids were examined. Advantages and disadvantages of both approaches are discussed.
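
    For reference, the Karman-Trefftz transformation in (1) has the standard form

        \frac{w - n b}{w + n b} = \left( \frac{z - b}{z + b} \right)^{n},

    where the exponent n (slightly below 2) sets a finite trailing-edge angle and n = 2 recovers the Joukowski map; the Schwarz-Christoffel alternative in (2) is the polygon mapping quoted under record 17 above.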

  20. SO(32) heterotic line bundle models

    NASA Astrophysics Data System (ADS)

    Otsuka, Hajime

    2018-05-01

    We search for three-generation standard-like and/or Pati-Salam models from the SO(32) heterotic string theory on smooth, quotient complete intersection Calabi-Yau threefolds with multiple line bundles, each with structure group U(1). These models are S- and T-dual to intersecting D-brane models in type IIA string theory. We find that the stable line bundles and Wilson lines lead to the standard model gauge group with an extra U(1)_(B-L) via a Pati-Salam-like symmetry, and the obtained spectrum consists of three chiral generations of quarks and leptons plus vector-like particles. Green-Schwarz anomalous U(1) symmetries control not only the Yukawa couplings of the quarks and leptons but also the higher-dimensional operators causing proton decay.

  1. A Nonlinear Hyperbolic Volterra Equation in Viscoelasticity.

    DTIC Science & Technology

    1980-06-01

    states that k(t) ∈ L¹(0,∞) if and only if the function P(z) defined in (3.7) does not vanish on the half-plane Re z ≥ 0. For w(t,x) ∈ X(M,T), (2.5) and the Poincaré inequality yield (2.6): w_x²(t,x) + w_tx²(t,x) + w_xx²(t,x) ≤ M, 0 ≤ t ≤ T, 0 ≤ x ≤ 1. Using (2.6), the Poincaré inequality and Schwarz's inequality, every term on the right-hand side of (2.12), (2.15), and (2.16) can be majorized by one of ...

  2. Consistent Pauli reduction on group manifolds

    DOE PAGES

    Baguet, A.; Pope, Christopher N.; Samtleben, H.

    2016-01-01

    We prove an old conjecture by Duff, Nilsson, Pope and Warner asserting that the NSNS sector of supergravity (and more generally of the bosonic string) allows for a consistent Pauli reduction on any d-dimensional group manifold G, keeping the full set of gauge bosons of the G×G isometry group of the bi-invariant metric on G. The main tool of the construction is a particular generalised Scherk-Schwarz reduction ansatz in double field theory which we explicitly construct in terms of the group's Killing vectors. Examples include the consistent reduction from ten dimensions on S3×S3 and on similar product spaces. The construction is another example of a globally geometric non-toroidal compactification inducing non-geometric fluxes.

  3. A New Reynolds Stress Algebraic Equation Model

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    A general turbulent constitutive relation is directly applied to propose a new Reynolds stress algebraic equation model. In the development of this model, constraints based on rapid distortion theory and realizability (i.e. the positivity of the normal Reynolds stresses and the Schwarz inequality between turbulent velocity correlations) are imposed. Model coefficients are calibrated using well-studied basic flows such as homogeneous shear flow and the surface flow in the inertial sublayer. The performance of this model is then tested in complex turbulent flows including the separated flow over a backward-facing step and the flow in a confined jet. The calculation results are encouraging and point to the success of the present model in modeling turbulent flows with complex geometries.
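
    For reference, the realizability constraints invoked here are the standard ones: non-negative normal stresses and the Schwarz inequality between velocity correlations,

        \overline{u_\alpha^2} \ge 0, \qquad \left( \overline{u_\alpha u_\beta} \right)^2 \le \overline{u_\alpha^2} \; \overline{u_\beta^2} \quad (\text{no summation}),

    which any physically admissible closure must satisfy.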

  4. Quality of life after cancer-How the extent of impairment is influenced by patient characteristics.

    PubMed

    Peters, Elisabeth; Mendoza Schulz, Laura; Reuss-Borst, Monika

    2016-10-10

    Cancer and its treatment are known to impair patients' quality of life. Although this effect is well known, tailored treatment methods have not yet been broadly adopted. The aim of this study was to identify those patient characteristics that most influence the impairment of quality of life, and thus to identify those patients who need and can benefit most from specific intervention. 1879 cancer patients were given the EORTC QLQ-C30 questionnaire at the beginning and end of their inpatient rehabilitation. Patients' scores were compared to those of 2081 healthy adults (Schwarz and Hinz, Eur J Cancer 37:1345-1351, 2001). Furthermore, differences in quality of life corresponding to sex, age, tumor site, TNM stage, interval between diagnosis and rehabilitation, and therapy method were examined. Compared to the healthy population, the study group showed a decreased quality of life in all analyzed domains. This difference diminished with increasing age. Women reported a lower quality of life than men in general. Patients with prostate cancer showed the least impairment in several domains. Patients who had undergone both chemotherapy and radiotherapy were the most impaired. Surprisingly, TNM stage and the interval between diagnosis and rehabilitation did not significantly influence quality of life. Global quality of life and all functional domains significantly improved after a 3-week rehabilitation program. Despite individualized and increasingly better tolerated therapy, the quality of life of cancer patients is still considerably impaired. However, systematic screening of psychosocial aspects of cancer, e.g. quality of life, could enable improved intervention.

  5. Distributed Memory Parallel Computing with SEAWAT

    NASA Astrophysics Data System (ADS)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major drawbacks of such models are long run times and large memory requirements, which limit their predictive power. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner such that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing; b) each subdomain uses local memory only and communicates with other subdomains through the Message Passing Interface (MPI) within the linear accelerator; and c) the solver is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation, and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (~10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources. Speed-ups of up to 40 were obtained with the new PKS solver.
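
    To make the solver structure concrete, here is a minimal sketch of the general technique (one-level overlapping additive Schwarz preconditioning of CG) in Python with SciPy; it is not the PKS implementation, and the 1D Poisson matrix is only a placeholder for the groundwater operator.

        # Minimal sketch: overlapping additive Schwarz preconditioner for CG.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, nsub, overlap = 1000, 8, 10
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Contiguous index blocks, extended by `overlap` cells on each side.
        bounds = np.linspace(0, n, nsub + 1, dtype=int)
        subdomains = [np.arange(max(bounds[i] - overlap, 0),
                                min(bounds[i + 1] + overlap, n))
                      for i in range(nsub)]
        # Factor each local problem once (each would live on its own MPI rank).
        solves = [spla.factorized(A[idx, :][:, idx].tocsc()) for idx in subdomains]

        def apply_M(r):
            # Additive Schwarz: sum of local solves on overlapping subdomains.
            z = np.zeros_like(r)
            for idx, solve in zip(subdomains, solves):
                z[idx] += solve(r[idx])
            return z

        M = spla.LinearOperator((n, n), matvec=apply_M, dtype=float)
        x, info = spla.cg(A, b, M=M)
        assert info == 0  # converged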

  6. M theory through the looking glass: Tachyon condensation in the E8 heterotic string

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horava, Petr; Keeler, Cynthia A.

    2008-03-15

    We study the spacetime decay to nothing in string theory and M-theory. First we recall a nonsupersymmetric version of heterotic M-theory, in which bubbles of nothing, connecting the two E8 boundaries by a throat, are expected to be nucleated. We argue that the fate of this system should be addressed at weak string coupling, where the nonperturbative instanton instability is expected to turn into a perturbative tachyonic one. We identify the unique string theory that could describe this process: the heterotic model with one E8 gauge group and a singlet tachyon. We then use world sheet methods to study the tachyon condensation in the Neveu-Schwarz-Ramond formulation of this model, and show that it induces a world sheet super-Higgs effect. The main theme of our analysis is the possibility of making meaningful alternative gauge choices for world sheet supersymmetry, in place of the conventional superconformal gauge. We show in a version of unitary gauge how the world sheet gravitino assimilates the Goldstino and becomes dynamical. This picture clarifies recent results of Hellerman and Swanson. We also present analogs of R_ξ gauges, and note the importance of logarithmic conformal field theories in the context of tachyon condensation.

  7. PIXIE3D: A Parallel, Implicit, eXtended MHD 3D Code.

    NASA Astrophysics Data System (ADS)

    Chacon, L.; Knoll, D. A.

    2004-11-01

    We report on the development of PIXIE3D, a 3D parallel, fully implicit Newton-Krylov extended primitive-variable MHD code in general curvilinear geometry. PIXIE3D employs a second-order, finite-volume-based spatial discretization that satisfies remarkable properties such as being conservative, solenoidal in the magnetic field, non-dissipative, and stable in the absence of physical dissipation (L. Chacón, Comput. Phys. Comm., submitted, 2004). PIXIE3D employs fully-implicit Newton-Krylov methods for the time advance. Currently, first and second-order implicit schemes are available, although higher-order temporal implicit schemes can be effortlessly implemented within the Newton-Krylov framework. A successful, scalable, multigrid physics-based preconditioning strategy, similar in concept to previous 2D MHD efforts (L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); J. Comput. Phys. 188 (2), 573-592 (2003)), has been developed. We are currently in the process of parallelizing the code using the PETSc library, and a Newton-Krylov-Schwarz approach for the parallel treatment of the preconditioner. In this poster, we will report on both the serial and parallel performance of PIXIE3D, focusing primarily on scalability and CPU speedup vs. an explicit approach.

  8. Dye-induced aggregation of single stranded RNA: a mechanistic approach.

    PubMed

    Biver, Tarita; Ciatto, Carlo; Secco, Fernando; Venturini, Marcella

    2006-08-15

    The binding of proflavine (D) to single-stranded poly(A) (P) was investigated at pH 7.0 and 25 degrees C using T-jump, stopped-flow and spectrophotometric methods. Equilibrium measurements show that an external complex PD(I) and an internal complex PD(II) form upon reaction between P and D, and that their concentrations depend on the polymer/dye concentration ratio (C(P)/C(D)). For C(P)/C(D) < 2.5, cooperative formation of stacks external to the polymer strands prevails (PD(I)). Equilibrium and T-jump experiments, performed at I = 0.1 M and analyzed according to the Schwarz theory for cooperative binding, provide the values of the site size (g = 1), the equilibrium constant for the nucleation step (K* = (1.4±0.6)×10^3 M^-1), the equilibrium constant for the growth step (K = (1.2±0.6)×10^5 M^-1), the cooperativity parameter (q = 85) and the rate constants for the growth step (k_r = 1.2×10^7 M^-1 s^-1, k_d = 1.1×10^2 s^-1). Stopped-flow experiments, performed at low ionic strength (I = 0.01 M), indicate that aggregation of stacked poly(A) strands does occur provided that C(P)/C(D) < 2.5.
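
    As a consistency note on the numbers above (an observation about the reported values, not a claim from the paper): in Schwarz's cooperative-binding framework the nucleation constant is the growth constant reduced by the cooperativity parameter, K* ≈ K/q, and indeed

        K/q = 1.2×10^5 / 85 ≈ 1.4×10^3 M^-1,

    matching the reported K*.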

  9. Compulsive buying disorder clustering based on sex, age, onset and personality traits.

    PubMed

    Granero, Roser; Fernández-Aranda, Fernando; Baño, Marta; Steward, Trevor; Mestre-Bach, Gemma; Del Pino-Gutiérrez, Amparo; Moragas, Laura; Mallorquí-Bagué, Núria; Aymamí, Neus; Goméz-Peña, Mónica; Tárrega, Salomé; Menchón, José M; Jiménez-Murcia, Susana

    2016-07-01

    In spite of the revived interest in compulsive buying disorder (CBD), its classification within contemporary nosologic systems continues to be debated, and few studies have addressed heterogeneity in the clinical phenotype through methodologies based on a person-centered approach. The aim was to identify empirical clusters of CBD employing personality traits, as well as patients' sex, age and age of CBD onset, as indicators. An agglomerative hierarchical clustering method combining the Schwarz Bayesian information criterion and the log-likelihood was used. Three clusters were identified in a sample of n = 110 patients attending a specialized CBD unit: a) "male compulsive buyers" reported the highest prevalence of comorbid gambling disorder and the lowest levels of reward dependence; b) "female low-dysfunctional" mainly included employed women, with the highest level of education, the oldest age of onset, the lowest scores in harm avoidance and the highest levels of persistence, self-directedness and cooperativeness; and c) "female highly-dysfunctional", with the youngest age of onset, the highest levels of comorbid psychopathology and harm avoidance, and the lowest score in self-directedness. Sociodemographic characteristics and personality traits can be used to determine CBD clusters which represent different clinical subtypes. These subtypes should be considered when developing assessment instruments, preventive programs and treatment interventions. Copyright © 2016 Elsevier Inc. All rights reserved.
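
    As an illustration of BIC-guided cluster selection (a sketch of the general idea only; the study itself used a two-step agglomerative procedure, and the data below are random placeholders):

        # Choosing the number of clusters by the Schwarz Bayesian
        # information criterion, with a Gaussian mixture as the model.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        X = rng.normal(size=(110, 6))  # placeholder for sex/age/onset/personality scores

        bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
                for k in range(1, 7)}
        best_k = min(bics, key=bics.get)  # lowest BIC wins
        print(best_k, bics[best_k])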

  10. Thermomagnetic Analyses to Test Concrete Stability

    NASA Astrophysics Data System (ADS)

    Geiss, C. E.; Gourley, J. R.

    2017-12-01

    Over the past decades, pyrrhotite-containing aggregate has been used in concrete to build basements and foundations in central Connecticut. The sulphur in the pyrrhotite reacts to form several secondary minerals, and the associated changes in volume lead to a loss of structural integrity. As a result, hundreds of homes have been rendered worthless, as remediation costs often exceed the value of the homes, and the value of many other homes constructed during the same period is in question because concrete provenance and potential future structural issues are unknown. While minor abundances of pyrrhotite are difficult to detect or quantify by traditional means, the mineral is easily identified through its magnetic properties. All concrete samples from affected homes show a clear increase in magnetic susceptibility above 220°C, due to the γ-transition of Fe9S10 [1], and a clearly defined Curie temperature near 320°C for Fe7S8. X-ray analyses confirm the presence of pyrrhotite and ettringite in these samples. Synthetic mixtures of commercially available concrete and pyrrhotite show that the method is semiquantitative but needs to be calibrated for specific pyrrhotite mineralogies. [1] Schwarz, E.J., Magnetic properties of pyrrhotite and their use in applied geology and geophysics. Geological Survey of Canada, Ottawa, ON, 1975.

  11. Biogeography: An interweave of climate, fire, and humans

    USGS Publications Warehouse

    Stambaugh, Michael C.; Varner, J. Morgan; Jackson, Stephen T.

    2017-01-01

    Longleaf pine (Pinus palustris) is an icon of the southeastern United States and has been considered a foundation species in forests, woodlands, and savannas of the region (Schwarz 1907; Platt 1999). Longleaf pine is an avatar for the extensive pine-dominated, fire-dependent ecosystems (Figure 2.1) that provide habitats for thousands of species and have largely vanished from the landscape. Longleaf pine is one of the world's most resilient and fire-adapted trees (Keeley and Zedler 1998), widely perceived as the sole dominant in forests across a large area of the Southeast (Sargent 1884; Mohr 1896; Wahlenberg 1946). Longleaf pine was once a primary natural resource, providing high-quality timber, resins, and naval stores that fueled social changes and economic growth through the 19th and early 20th centuries.

  12. Pinching parameters for open (super) strings

    NASA Astrophysics Data System (ADS)

    Playle, Sam; Sciuto, Stefano

    2018-02-01

    We present an approach to the parametrization of (super) Schottky space obtained by sewing together three-punctured discs with strips. Different cubic ribbon graphs classify distinct sets of pinching parameters; we show how they are mapped onto each other. The parametrization is particularly well-suited to describing the region within (super) moduli space where open bosonic or Neveu-Schwarz string propagators become very long and thin, which dominates the IR behaviour of string theories. We show how worldsheet objects such as the Green's function converge to graph-theoretic objects such as the Symanzik polynomials in the α′ → 0 limit, allowing us to see how string theory reproduces the sum over Feynman graphs. The (super) string measure takes on a simple and elegant form when expressed in terms of these parameters.

  13. Electric field distribution and current emission in a miniaturized geometrical diode

    NASA Astrophysics Data System (ADS)

    Lin, Jinpu; Wong, Patrick Y.; Yang, Penglu; Lau, Y. Y.; Tang, W.; Zhang, Peng

    2017-06-01

    We study the electric field distribution and current emission in a miniaturized geometrical diode. Using the Schwarz-Christoffel transformation, we calculate exactly the electric field inside a finite vacuum cathode-anode (A-K) gap with a single trapezoidal protrusion on one of the electrode surfaces. It is found that there is a strong field enhancement on both electrodes near the protrusion when the ratio of the A-K gap distance to the protrusion height d/h < 2. The calculations are spot-checked against COMSOL simulations. We calculate the effective field enhancement factor for the field emission current by integrating the local Fowler-Nordheim current density along the electrode surfaces. We systematically examine the electric field enhancement and the current rectification of the miniaturized geometrical diode for various geometric dimensions and applied electric fields.
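
    For reference, the Fowler-Nordheim local current density being integrated has the standard functional form

        J(F) = \frac{a F^2}{\phi} \exp\!\left( -\frac{b \, \phi^{3/2}}{F} \right),

    where F is the local surface field, φ the work function, and a, b the usual Fowler-Nordheim constants; the geometry enters through the field F obtained from the conformal map.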

  14. Bäcklund transformation of the Painlevé III(D8) τ function

    NASA Astrophysics Data System (ADS)

    Bershtein, M. A.; Shchechkin, A. I.

    2017-03-01

    We study the explicit formula (suggested by Gamayun, Iorgov and Lisovyy) for the Painlevé III(D8) τ function in terms of Virasoro conformal blocks with central charge 1. The Painlevé equation has two types of bilinear forms, which we call Toda-like and Okamoto-like. We obtain these equations from representation theory, using an embedding of the direct sum of two Virasoro algebras in a certain superalgebra. The two types of bilinear forms correspond to the Neveu-Schwarz sector and the Ramond sector of this algebra. We also obtain the τ functions of the algebraic solutions of Painlevé III(D8) from the special representations of the Virasoro algebra of highest weight (n + 1/4)².

  15. Differential diagnosis of the honey bee trypanosomatids Crithidia mellificae and Lotmaria passim.

    PubMed

    Ravoet, Jorgen; Schwarz, Ryan S; Descamps, Tine; Yañez, Orlando; Tozkar, Cansu Ozge; Martin-Hernandez, Raquel; Bartolomé, Carolina; De Smet, Lina; Higes, Mariano; Wenseleers, Tom; Schmid-Hempel, Regula; Neumann, Peter; Kadowaki, Tatsuhiko; Evans, Jay D; de Graaf, Dirk C

    2015-09-01

    Trypanosomatids infecting honey bees had been poorly studied with molecular methods until recently. After the description of Crithidia mellificae (Langridge and McGhee, 1967) it took about forty years until molecular data for honey bee trypanosomatids became available and were used to identify and describe a new trypanosomatid species from honey bees, Lotmaria passim (Evans and Schwarz, 2014). However, an easy method to distinguish them without sequencing is not yet available. Research on the related bumble bee parasites Crithidia bombi and Crithidia expoeki revealed a fragment length polymorphism in the internal transcribed spacer 1 (ITS1), which enabled species discrimination. In search of fragment length polymorphisms for differential diagnostics in honey bee trypanosomatids, we studied honey bee trypanosomatid cell cultures of C. mellificae and L. passim. This research resulted in the identification of fragment length polymorphisms in ITS1 and ITS1-2 markers, which enabled us to develop a diagnostic method to differentiate both honey bee trypanosomatid species without the need for sequencing. However, the amplification success of the ITS1 marker probably depends on the trypanosomatid infection level. Further investigation confirmed that L. passim is the dominant species in Belgium, Japan and Switzerland. We found C. mellificae only rarely in Belgian honey bee samples, but not in honey bee samples from other countries. C. mellificae was also detected in mason bees (Osmia bicornis and Osmia cornuta) besides honey bees. Further, the characterization and comparison of additional markers from L. passim strain SF (published as C. mellificae strain SF) and a Belgian honey bee sample revealed very low divergence in the 18S rRNA, ITS1-2, 28S rRNA and cytochrome b sequences. Nevertheless, a variable stretch was observed in the gp63 virulence factor. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Measles antibody levels after vaccination with Edmonston-Zagreb and Schwarz measles vaccine at 9 months or at 9 and 18 months of age: a serological study within a randomised trial of different measles vaccines.

    PubMed

    Martins, Cesario; Garly, May-Lill; Bale, Carlitos; Rodrigues, Amabelia; Benn, Christine S; Whittle, Hilton; Aaby, Peter

    2013-11-19

    Standard-titre Schwarz (SW) and Edmonston-Zagreb (EZ) measles vaccines (MV) are both used in the routine immunisation programme. Within a trial of different strains of MV, we examined antibody responses in both one-dose and two-dose schedules when the first dose was administered at 9 months. The trial was conducted in an urban area in Guinea-Bissau, where we have had a health and demographic surveillance system and have studied strategies to prevent measles infection since 1978. In the present study, children were randomised to SW or EZ as the first MV and furthermore randomised to a second dose of the same MV, or no vaccine, at 18 months of age. We obtained blood samples from 996 children at baseline; post-vaccination blood samples were collected at 18 and 24 months of age to assess measles antibody levels after one or two doses of MV. At age 18 months all children had responded to the first dose, and only 1% (8/699) had non-protective antibody levels, irrespective of vaccine type. SW was associated with significantly higher levels of measles antibodies (geometric mean titre (GMT) = 2114 mIU/mL (95% CI 1153-2412)) than EZ (GMT = 807 mIU/mL (722-908)) (p = 0.001). Antibody concentrations were significantly higher in girls than in boys after EZ but not after SW. Antibody levels were higher in the rainy season than in the dry season. There was no clear indication that a booster dose at 18 months increased the antibody level at 24 months of age. Maternal antibody levels have declined significantly in recent years, and 99% of children had protective levels of measles antibody following primary MV at 9 months of age. It is unlikely that measles prevention and child health would be improved by increasing the age of MV as currently recommended. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

    This paper describes two methods of trajectory optimization for obtaining a minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method; the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution, which results in a bang-singular-bang optimal control.
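
    As a toy illustration of the building block of the direct method (a sketch only, not the paper's climb-optimization code; the profile below is hypothetical):

        # Approximating a smooth state/control history by a Chebyshev series.
        import numpy as np
        from numpy.polynomial import chebyshev as C

        t = np.linspace(-1.0, 1.0, 200)      # normalized time
        h = 1.0 - np.exp(-3.0 * (t + 1.0))   # hypothetical climb profile
        coeffs = C.chebfit(t, h, deg=8)      # least-squares Chebyshev fit
        err = np.max(np.abs(C.chebval(t, coeffs) - h))
        print(err)                           # small for smooth profiles

    In the direct method, such coefficients become the decision variables of a finite-dimensional optimization problem.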

  18. Probing soil C metabolism in response to temperature: results from experiments and modeling

    NASA Astrophysics Data System (ADS)

    Dijkstra, P.; Dalder, J.; Blankinship, J.; Selmants, P. C.; Schwartz, E.; Koch, G. W.; Hart, S.; Hungate, B. A.

    2010-12-01

    C use efficiency (CUE) is one of the least understood aspects of soil C cycling, has a very large effect on soil respiration and C sequestration, and decreases with elevated temperature. CUE is directly related to substrate partitioning over energy production and biosynthesis. The production of energy and metabolic precursors occurs in well-known processes such as glycolysis and Krebs cycle. We have developed a new stable isotope approach using position-specific 13C-labeled metabolic tracers to measure these fundamental metabolic processes in intact soil communities (1). We use this new approach, combined with models of soil metabolic flux patterns, to analyze the response of microbial energy production, biosynthesis, and CUE to temperature. The method consists of adding small but precise amounts of position-specific 13C -labeled metabolic tracers to parallel soil incubations, in this case 1-13C and 2,3-13C pyruvate and 1-13C and U-13C glucose. The measurement of CO2 released from the labeled tracers is used to calculate the C flux rates through various metabolic pathways. A simplified metabolic model consisting of 23 reactions is iteratively solved using results of the metabolic tracer experiments and information on microbial precursor demand under different temperatures. This new method enables direct study of fundamental aspects of microbial energy production, C use efficiency, and soil organic matter formation in response to temperature. (1) Dijkstra P, Blankinship JC, Selmants PC, Hart SC, Koch GW, Schwarz E and Hungate BA. Probing metabolic flux patterns of soil microbial communities using parallel position-specific tracer labeling. Soil Biology and Biochemistry (accepted)

  19. Linking automatic evaluation to mood and information processing style: consequences for experienced affect, impression formation, and stereotyping.

    PubMed

    Chartrand, Tanya L; van Baaren, Rick B; Bargh, John A

    2006-02-01

    According to the feelings-as-information account, a person's mood state signals to him or her the valence of the current environment (N. Schwarz & G. Clore, 1983). However, the ways in which the environment automatically influences mood in the first place remain to be explored. The authors propose that one mechanism by which the environment influences affect is automatic evaluation, the nonconscious evaluation of environmental stimuli as good or bad. A first experiment demonstrated that repeated brief exposure to positive or negative stimuli (which leads to automatic evaluation) induces a corresponding mood in participants. In 3 additional studies, the authors showed that automatic evaluation affects information processing style. Experiment 4 showed that participants' mood mediates the effect of valenced brief primes on information processing. ((c) 2006 APA, all rights reserved).

  20. pth moment exponential stability of stochastic memristor-based bidirectional associative memory (BAM) neural networks with time delays.

    PubMed

    Wang, Fen; Chen, Yuanlong; Liu, Meichun

    2018-02-01

    Stochastic memristor-based bidirectional associative memory (BAM) neural networks with time delays play an increasingly important role in the design and implementation of neural network systems. Within the framework of Filippov solutions, the pth moment exponential stability of stochastic memristor-based BAM neural networks is investigated. Using stochastic stability theory, Itô's differential formula and the Young inequality, stability criteria are derived. Meanwhile, with a Lyapunov approach and the Cauchy-Schwarz inequality, we derive some sufficient conditions for the mean square exponential stability of the above systems. The obtained results improve and extend previous work on memristor-based and conventional neural network dynamical systems. Four numerical examples are provided to illustrate the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.
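
    For reference, the Young inequality invoked here is the standard scalar form

        ab \le \frac{a^p}{p} + \frac{b^q}{q}, \qquad a, b \ge 0, \; p, q > 1, \; \frac{1}{p} + \frac{1}{q} = 1,

    whose p = q = 2 case, ab ≤ (a² + b²)/2, is the workhorse estimate in such stability proofs.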

  1. Dataset on the structural characterization of organosolv lignin obtained from ensiled Poaceae grass and load-dependent molecular weight changes during thermoplastic processing.

    PubMed

    Dörrstein, Jörg; Scholz, Ronja; Schwarz, Dominik; Schieder, Doris; Sieber, Volker; Walther, Frank; Zollfrank, Cordt

    2018-04-01

    This article presents experimental data on organosolv lignin from Poaceae grass and its structural changes after compounding and injection molding, as presented in the research article "Effects of high-lignin-loading on thermal, mechanical, and morphological properties of bioplastic composites" [1]. It supplements the article with morphological (SEM), spectroscopic (31P NMR, FT-IR) and chromatographic (GPC, EA) data on the starting lignin, as well as molar mass characteristics (mass-average molar mass (Mw) and polydispersity (D)) of the extracted lignin. Refer to Schwarz et al. [2] for a detailed description of the production of the organosolv residue and for further information on the raw material used for lignin extraction. The dataset is made publicly available and can be useful for extended lignin research and critical analyses.

  2. Solving the Quantum Many-Body Problem via Correlations Measured with a Momentum Microscope

    NASA Astrophysics Data System (ADS)

    Hodgman, S. S.; Khakimov, R. I.; Lewis-Swan, R. J.; Truscott, A. G.; Kheruntsyan, K. V.

    2017-06-01

    In quantum many-body theory, all physical observables are described in terms of correlation functions between particle creation or annihilation operators. Measurement of such correlation functions can therefore be regarded as an operational solution to the quantum many-body problem. Here, we demonstrate this paradigm by measuring multiparticle momentum correlations up to third order between ultracold helium atoms in an s-wave scattering halo of colliding Bose-Einstein condensates, using a quantum many-body momentum microscope. Our measurements allow us to extract a key building block of all higher-order correlations in this system, the pairing field amplitude. In addition, we demonstrate a record violation of the classical Cauchy-Schwarz inequality for correlated atom pairs and triples. Measuring multiparticle momentum correlations could provide new insights into effects such as unconventional superconductivity and many-body localization.
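
    For context, the classical bound being violated is the standard Cauchy-Schwarz inequality for second-order correlation functions,

        \left[ g^{(2)}_{12} \right]^2 \le g^{(2)}_{11} \, g^{(2)}_{22},

    which any classical field must satisfy; measured cross-correlations exceeding this bound certify nonclassical pair correlations.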

  3. Linking Automatic Evaluation to Mood and Information Processing Style: Consequences for Experienced Affect, Impression Formation, and Stereotyping

    PubMed Central

    Chartrand, Tanya L.; van Baaren, Rick B.; Bargh, John A.

    2009-01-01

    According to the feelings-as-information account, a person’s mood state signals to him or her the valence of the current environment (N. Schwarz & G. Clore, 1983). However, the ways in which the environment automatically influences mood in the first place remain to be explored. The authors propose that one mechanism by which the environment influences affect is automatic evaluation, the nonconscious evaluation of environmental stimuli as good or bad. A first experiment demonstrated that repeated brief exposure to positive or negative stimuli (which leads to automatic evaluation) induces a corresponding mood in participants. In 3 additional studies, the authors showed that automatic evaluation affects information processing style. Experiment 4 showed that participants’ mood mediates the effect of valenced brief primes on information processing. PMID:16478316

  4. Temporal Quantum Correlations in Inelastic Light Scattering from Water.

    PubMed

    Kasperczyk, Mark; de Aguiar Júnior, Filomeno S; Rabelo, Cassiano; Saraiva, Andre; Santos, Marcelo F; Novotny, Lukas; Jorio, Ado

    2016-12-09

    Water is one of the most prevalent chemicals on our planet, an integral part of both our environment and our existence as a species. Yet it is also rich in anomalous behaviors. Here we reveal that water is a novel, yet ubiquitous, source of quantum-correlated photon pairs at ambient conditions. The photon pairs are produced through Raman scattering, and the correlations arise from the quantum of a vibrational mode shared between the Stokes and anti-Stokes scattering events. We confirm the nonclassical nature of the produced photon pairs by showing that the cross-correlation and autocorrelations of the signals violate a Cauchy-Schwarz inequality by over 5 orders of magnitude. The unprecedented degree of violation of the inequality in pure water, as well as the well-defined polarization properties of the photon pairs, points to its usefulness in quantum information.

  5. Highly effective action from large N gauge fields

    NASA Astrophysics Data System (ADS)

    Yang, Hyun Seok

    2014-10-01

    Recently Schwarz put forward a conjecture that the world-volume action of a probe D3-brane in an AdS5×S5 background of type IIB superstring theory can be reinterpreted as the highly effective action (HEA) of four-dimensional N = 4 superconformal field theory on the Coulomb branch. We argue that the HEA can be derived from the noncommutative (NC) field theory representation of the AdS/CFT correspondence and the Seiberg-Witten (SW) map defining a spacetime field redefinition between ordinary and NC gauge fields. The argument is based only on the well-known facts that the master fields of large N matrices are higher-dimensional NC U(1) gauge fields and that the SW map is a local coordinate transformation eliminating U(1) gauge fields, known as the Darboux theorem in symplectic geometry.

  6. Historizing epistemology in psychology.

    PubMed

    Jovanović, Gordana

    2010-12-01

    The conflict between the psychometric methodological framework and the particularities of human experience reported in psychotherapeutic contexts led Michael Schwarz to raise the question of whether psychology is based on a methodological error. I take this conflict as a heuristic tool for the reconstruction of the early history of psychology, which bears witness to similar epistemological conflicts, though the dominant historiography of psychology has largely forgotten alternative conceptions and their valuable insights into the complexities of psychic phenomena. In order to work against this historical amnesia in psychology, I suggest looking at the cultural-historical contexts which decisively shaped epistemological choices in psychology. Instead of keeping epistemology and the history of psychology separate, which nurtures individualism and naturalism in psychology, I argue for historizing epistemology and for a historical psychology. From such a historically reflected perspective, psychology in the contemporary world can be approached more critically.

  7. Polynomial interpretation of multipole vectors

    NASA Astrophysics Data System (ADS)

    Katz, Gabriel; Weeks, Jeff

    2004-09-01

    Copi, Huterer, Starkman, and Schwarz introduced multipole vectors in a tensor context and used them to demonstrate that the first-year Wilkinson microwave anisotropy probe (WMAP) quadrupole and octopole planes align at roughly the 99.9% confidence level. In the present article, the language of polynomials provides a new and independent derivation of the multipole vector concept. Bézout’s theorem supports an elementary proof that the multipole vectors exist and are unique (up to rescaling). The constructive nature of the proof leads to a fast, practical algorithm for computing multipole vectors. We illustrate the algorithm by finding exact solutions for some simple toy examples and numerical solutions for the first-year WMAP quadrupole and octopole. We then apply our algorithm to Monte Carlo skies to independently reconfirm the estimate that the WMAP quadrupole and octopole planes align at the 99.9% level.

  8. 6d, N = (1, 0) Coulomb branch anomaly matching

    NASA Astrophysics Data System (ADS)

    Intriligator, Kenneth

    2014-10-01

    6d QFTs are constrained by the analog of 't Hooft anomaly matching: all anomalies for global symmetries and metric backgrounds are constants of RG flows, and for all vacua in moduli spaces. We discuss an anomaly matching mechanism for 6d theories on their Coulomb branch. It is a global-symmetry analog of Green-Schwarz-West-Sagnotti anomaly cancellation, and requires the apparent anomaly mismatch to be a perfect square, ΔI_8 ∝ (X_4)². ΔI_8 is then cancelled by making X_4 an electric/magnetic source for the tensor multiplet, so that background gauge field instantons yield charged strings. This requires the coefficients in X_4 to be integrally quantized. We illustrate this for N = (1, 0) theories. We also consider the SCFTs arising from N small E8 instantons, verifying that the recent result for their anomaly polynomial fits with the anomaly matching mechanism.

  9. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor-air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogeneous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogeneous groundwater sources. By contrast, with the two-layer approach (capillary fringe and vadose zone) employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be up to two orders of magnitude higher than those estimated by the numerical model. In short, the model proposed in this work is an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogeneous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.

  10. A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments

    PubMed Central

    Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J

    2014-01-01

    Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. Recently, however, Huggins et al. (2010) presented a pseudo-likelihood for a multi-sample batch-marking study, in which they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz-Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly-Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie-Manly-Arnason-Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. In terms of precision, gains are made when using unique identifiers and employing the CMAS model; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies so as to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
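
    For context, a Horvitz-Thompson-type abundance estimator of the kind referred to here takes the familiar capture-recapture form (a standard expression, not one quoted from the paper):

        \hat{N}_t = \frac{n_t}{\hat{p}_t},

    where n_t is the number of animals captured on occasion t and \hat{p}_t is the estimated capture probability.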

  11. Dielectric-spectroscopy approach to ferrofluid nanoparticle clustering induced by an external electric field.

    PubMed

    Rajnak, Michal; Kurimsky, Juraj; Dolnik, Bystrik; Kopcansky, Peter; Tomasovicova, Natalia; Taculescu-Moaca, Elena Alina; Timko, Milan

    2014-09-01

    An experimental study of magnetic colloidal particle cluster formation induced by an external electric field in a ferrofluid based on transformer oil is presented. Using frequency-domain isothermal dielectric spectroscopy, we study the influence of the test cell electrode separation distance on a low-frequency relaxation process. We consider the relaxation process to be associated with electric double layer polarization taking place on the particle surface. It has been found that the relaxation maximum shifts considerably towards lower frequencies when the measurements are conducted in test cells with greater electrode separation distances. As the electric field intensity was always kept at a constant value, we propose that particle cluster formation induced by the external ac electric field accounts for this phenomenon. The increase in the relaxation time is in accordance with the Schwarz theory of electric double layer polarization. In addition, we analyze the influence of a static electric field generated by a dc bias voltage on a similar shift in the relaxation maximum position. Varying the dc electric field for hysteresis measurements provides insight into the development of the particle clusters and their decay. Following our results, we emphasize the utility of dielectric spectroscopy as a simple, complementary method for the detection and study of clusters of colloidal particles induced by an external electric field.
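
    For context, the scaling that makes this argument work (the standard result of Schwarz's counterion-polarization model, applied here under the assumption that a cluster behaves as an effective particle of radius a): the relaxation time grows as the square of the particle radius,

        \tau = \frac{a^2}{2D},

    where D is the surface diffusion coefficient of the counterions, so cluster growth naturally pushes the relaxation maximum to lower frequencies.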

  12. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  13. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  14. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for Support Vector Machines (SVMs), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix for indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance relative to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, a suggested optimal λ for the projection method can be found using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise optimal-λ determination method within an unconstrained optimization framework. We developed a perturbed von Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von Neumann divergence, aiming at finding a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ determined by Logdet divergence demonstrates near-optimal performance, and the perturbed von Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.
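
    For orientation, here is a minimal sketch of the spectrum-modification baselines that the projection method generalizes (clipping/"denoising" and flipping), applied to a small symmetric indefinite similarity matrix; the matrix is a made-up example, not data from the paper.

        # Produce a positive semi-definite surrogate of a symmetric
        # indefinite kernel by modifying its eigenvalue spectrum.
        import numpy as np

        def spectrum_fix(K, mode="clip"):
            w, V = np.linalg.eigh(K)
            if mode == "clip":    # denoising: zero out negative eigenvalues
                w = np.maximum(w, 0.0)
            elif mode == "flip":  # flipping: take absolute eigenvalues
                w = np.abs(w)
            return (V * w) @ V.T  # reassemble V diag(w) V^T

        K = np.array([[1.0, 0.9, -0.4],
                      [0.9, 1.0, 0.3],
                      [-0.4, 0.3, 1.0]])
        K_psd = spectrum_fix(K, "clip")
        print(np.linalg.eigvalsh(K_psd))  # all eigenvalues now >= 0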

  15. N=2 Minimal Conformal Field Theories and Matrix Bifactorisations of x^d

    NASA Astrophysics Data System (ADS)

    Davydov, Alexei; Camacho, Ana Ros; Runkel, Ingo

    2018-01-01

    We establish an action of the representations of N = 2 superconformal symmetry on the category of matrix factorisations of the potentials x^d and x^d - y^d, for d odd. More precisely, we prove a tensor equivalence between (a) the category of Neveu-Schwarz-type representations of the N = 2 minimal super vertex operator algebra at central charge 3 - 6/d, and (b) a full subcategory of graded matrix factorisations of the potential x^d - y^d. The subcategory in (b) is given by permutation-type matrix factorisations with consecutive index sets. The physical motivation for this result is the Landau-Ginzburg/conformal field theory correspondence, where it amounts to the equivalence of a subset of defects on both sides of the correspondence. Our work builds on results by Brunner and Roggenkamp [BR], where an isomorphism of fusion rules was established.

  16. Binary catalogue of exoplanets

    NASA Astrophysics Data System (ADS)

    Schwarz, Richard; Bazso, Akos; Zechner, Renate; Funk, Barbara

    2016-02-01

    Since 1995 there has been a database listing most of the known exoplanets (The Extrasolar Planets Encyclopaedia at http://exoplanet.eu/). With the growing number of exoplanets detected in binary and multiple star systems, it became more important to mark them and separate them into a new database, a feature not available in the Extrasolar Planets Encyclopaedia. We therefore established an online database (which can be found at: http://www.univie.ac.at/adg/schwarz/multiple.html) for all known exoplanets in binary star systems and, in addition, for multiple star systems; it will be updated regularly and linked to the Extrasolar Planets Encyclopaedia. The binary catalogue of exoplanets is available online as a data file and can be used for statistical purposes. Our database is divided into two parts: the data of the stars and those of the planets, given in separate lists. We also describe the different parameters of the exoplanetary systems and present some applications.

  17. On proton excitation of forbidden lines in positive ions

    NASA Astrophysics Data System (ADS)

    Burgess, Alan; Tully, John A.

    2005-08-01

    The semi-classical impact parameter approximations used by Bahcall and Wolf and by Bely and Faucher, for proton excitation of electric quadrupole transitions in positive ions, both fail at high energies, giving cross sections which do not fall off correctly as constant/E. This is in contrast with the pioneering examples of Seaton for Fe+13 and of Reid and Schwarz for S+3, both of whom achieve the correct functional form, but do not ensure the correct constant of proportionality. By combining the Born and semi-classical approximations one can obtain cross sections which have the fully correct behaviour as E → ∞, and hence rate coefficients which have the correct high-temperature behaviour (~C/T^(1/2) with the correct value of C). We provide a computer program for calculating these. An error in Faucher's derivation of the Born formula is also discussed.

  18. Elementary Particles and the Universe

    NASA Astrophysics Data System (ADS)

    Schwarz, John H.

    2005-07-01

    1. Excess baggage J. Hartle; 2. Through the clouds E. Witten; 3. Covariant foundations of the superparticle L. Brink; 4. Chiral symmetry and confinement T. Goldman; 5. The original fifth interaction Y. Neeman; 6. The mass hierarchy of leptons and quarks H. Fritzsch; 7. Spacetime duality in string theory J. H. Schwarz; 8. Symmetry and quasi-symmetry Y. Nambu; 9. On an exceptional non-associative superspace M. Gunaydin; 10. Algebra of reparametrization-invariant and normal ordered operators in open string field theory P. Ramond; 11. Superconductivity of an ideal charged boson system T. D. Lee; 12. Some remarks on the symmetry approach to nuclear rotational motion L. C. Biedebharn and P. Truini; 13. Uncomputability, intractability and the efficiency of heat engines S. Lloyd; 14. The new mathematical physics I. Singer; 15. For the birds V. Telegdi; 16. Gell-Mann's approach to physics A. Salam; 17. Remarks M. Goldberger.

  19. Strings, vortex rings, and modes of instability

    DOE PAGES

    Gubser, Steven S.; Nayar, Revant; Parikh, Sarthak

    2015-01-12

    We treat string propagation and interaction in the presence of a background Neveu-Schwarz three-form field strength, suitable for describing vortex rings in a superfluid or low-viscosity normal fluid. A circular vortex ring exhibits instabilities which have been recognized for many years, but whose precise boundaries we determine for the first time analytically in the small core limit. Two circular vortices colliding head-on exhibit stronger instabilities which cause splitting into many small vortices at late times. We provide an approximate analytic treatment of these instabilities and show that the most unstable wavelength is parametrically larger than a dynamically generated length scale which in many hydrodynamic systems is close to the cutoff. We also summarize how the string construction we discuss can be derived from the Gross-Pitaevskii Lagrangian, and how it compares to the action for giant gravitons.

  20. Measles control in developing and developed countries: the case for a two-dose policy.

    PubMed

    Tulchinsky, T H; Ginsberg, G M; Abed, Y; Angeles, M T; Akukwe, C; Bonn, J

    1993-01-01

    Despite major reductions in the incidence of measles and its complications, measles control with a single dose of the currently used Schwarz strain vaccine has failed to eradicate the disease in the developed countries. In developing countries an enormous toll of measles deaths and disability continues, despite considerable efforts and increasing immunization coverage. Empirical evidence from a number of countries suggests that a two-dose measles vaccination programme, by improving individual protection and herd immunity, can make a major contribution to measles control and elimination of local circulation of the disease. Cost-benefit analysis also supports the two-dose schedule in terms of savings in health costs and total costs to society. A two-dose measles vaccination programme is therefore an essential component of preventive health care in developing as well as developed countries for the 1990s.

  1. A superstring field theory for supergravity

    NASA Astrophysics Data System (ADS)

    Reid-Edwards, R. A.; Riccombeni, D. A.

    2017-09-01

    A covariant closed superstring field theory, equivalent to classical ten-dimensional Type II supergravity, is presented. The defining conformal field theory is the ambitwistor string worldsheet theory of Mason and Skinner. This theory is known to reproduce the scattering amplitudes of Cachazo, He and Yuan, in which the scattering equations play an important role, and the string field theory naturally incorporates these results. We investigate the operator formalism description of the ambitwistor string and propose an action for the string field theory of the bosonic and supersymmetric theories. The correct linearised gauge symmetries and spacetime actions are explicitly reproduced and evidence is given that the action is correct to all orders. The focus is on the Neveu-Schwarz sector and the explicit description of tree-level perturbation theory about flat spacetime. Application of the string field theory to general supergravity backgrounds and the inclusion of the Ramond sector are briefly discussed.

  2. Measles inclusion-body encephalitis caused by the vaccine strain of measles virus.

    PubMed

    Bitnun, A; Shannon, P; Durward, A; Rota, P A; Bellini, W J; Graham, C; Wang, E; Ford-Jones, E L; Cox, P; Becker, L; Fearon, M; Petric, M; Tellier, R

    1999-10-01

    We report a case of measles inclusion-body encephalitis (MIBE) occurring in an apparently healthy 21-month-old boy 8.5 months after measles-mumps-rubella vaccination. He had no prior evidence of immune deficiency and no history of measles exposure or clinical disease. During hospitalization, a primary immunodeficiency characterized by a profoundly depressed CD8 cell count and dysgammaglobulinemia was demonstrated. A brain biopsy revealed histopathologic features consistent with MIBE, and measles antigens were detected by immunohistochemical staining. Electron microscopy revealed inclusions characteristic of paramyxovirus nucleocapsids within neurons, oligodendroglia, and astrocytes. The presence of measles virus in the brain tissue was confirmed by reverse transcription polymerase chain reaction. The nucleotide sequence in the nucleoprotein and fusion gene regions was identical to that of the Moraten and Schwarz vaccine strains; the fusion gene differed from known genotype A wild-type viruses.

  3. Sequestered gravity in gauge mediation.

    PubMed

    Antoniadis, Ignatios; Benakli, Karim; Quiros, Mariano

    2016-01-01

    We present a novel mechanism of supersymmetry breaking embeddable in string theory and simultaneously sharing the main advantages of (sequestered) gravity and gauge mediation. It is driven by a Scherk-Schwarz deformation along a compact extra dimension, transverse to a brane stack supporting the supersymmetric extension of the Standard Model. This fixes the magnitude of the gravitino mass, together with that of the gauginos of a bulk gauge group, at a scale as high as [Formula: see text] GeV. Supersymmetry breaking is mediated to the observable sector dominantly by gauge interactions using massive messengers transforming non-trivially under the bulk and Standard Model gauge groups and leading to a neutralino LSP as dark matter candidate. The Higgsino mass μ and soft Higgs-bilinear Bμ term could be generated at the same order of magnitude as the other soft terms by effective supergravity couplings, as in the Giudice-Masiero mechanism.

  4. STUDIES OF THE RADIATION CHEMISTRY OF ORGANIC COMPOUNDS. THE RADIOLYSIS OF METHANOL AND METHANOLIC SOLUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lichtin, N.N.

    1961-02-28

    Installation, equipping, and dosimetry of an 850-curie Schwarz-Allen type Co-60 source were completed. Dose rates are tabulated for six positions in the source. The dissolution of boron from Pyrex by methanol was studied using the curcumin procedure. The results indicated <7 × 10^-6 M of boron in the methanol, independent of irradiation. The gamma radiolysis of methanol resulted in G-values of 4.66 ± 0.07 for H2; 0.27 ± 0.03 for CH4; 1.94 ± 0.06 for CH2O; and 2.86 ± 0.05 for C2H6O2. An improvement in the trapping of methanol resulted in the reduction of the apparent yield of H2 to 3.9. (B.O.G.)

  5. Beyond valence in the perception of likelihood: the role of emotion specificity.

    PubMed

    DeSteno, D; Petty, R E; Wegener, D T; Rucker, D D

    2000-03-01

    Positive and negative moods have been shown to increase likelihood estimates of future events matching these states in valence (e.g., E. J. Johnson & A. Tversky, 1983). In the present article, 4 studies provide evidence that this congruency bias (a) is not limited to valence but functions in an emotion-specific manner, (b) derives from the informational value of emotions, and (c) is not the inevitable outcome of likelihood assessment under heightened emotion. Specifically, Study 1 demonstrates that sadness and anger, 2 distinct, negative emotions, differentially bias likelihood estimates of sad and angering events. Studies 2 and 3 replicate this finding in addition to supporting an emotion-as-information (cf. N. Schwarz & G. L. Clore, 1983), as opposed to a memory-based, mediating process for the bias. Finally, Study 4 shows that when the source of the emotion is salient, a reversal of the bias can occur given greater cognitive effort aimed at accuracy.

  6. T-duality and α'-corrections

    NASA Astrophysics Data System (ADS)

    Marqués, Diego; Nuñez, Carmen A.

    2015-10-01

    We construct an O(d,d) invariant universal formulation of the first-order α'-corrections of the string effective actions involving the dilaton, metric and two-form fields. Two free parameters interpolate between four-derivative terms that are even and odd with respect to a Z2-parity transformation that changes the sign of the two-form field. The Z2-symmetric model reproduces the closed bosonic string, and the heterotic string effective action is obtained through a Z2-parity-breaking choice of parameters. The theory is an extension of the generalized frame formulation of Double Field Theory, in which the gauge transformations are deformed by a first-order generalized Green-Schwarz transformation. This deformation defines a duality covariant gauge principle that requires and fixes the four-derivative terms. We discuss the O(d,d) structure of the theory and the (non-)covariance of the required field redefinitions.

  7. Development of a species-diagnostic marker for identification of the stingless bee Trigona pagdeni in Thailand.

    PubMed

    Thummajitsakul, Sirikul; Klinbunga, Sirawut; Sittipraneed, Siriporn

    2010-04-01

    A species-diagnostic SCAR marker for identification of the stingless bee (Trigona pagdeni Schwarz) was successfully developed. Initially, amplified fragment length polymorphism analysis was carried out across representatives of 12 stingless bee species using 64 primer combinations. A 284 bp band restrictively found in T. pagdeni was cloned and sequenced. A primer pair (CUTP1-F/R) was designed and tested for species-specificity in 15 stingless bees. The expected 163 bp fragment was successfully amplified in all examined individuals of T. pagdeni (129/129). Nevertheless, cross-species amplification was also observed in T. fimbriata (1/3), T. collina (11/112), T. laeviceps (1/12), and T. fuscobalteata (15/15), but not in other species. SSCP analysis of CUTP1 further differentiated T. fuscobalteata and T. collina from T. pagdeni. Although T. laeviceps, T. fimbriata, and T. pagdeni shared an identical SSCP genotype, they are not taxonomically problematic species.

  8. Quantum no-scale regimes in string theory

    NASA Astrophysics Data System (ADS)

    Coudarchet, Thibaut; Fleming, Claude; Partouche, Hervé

    2018-05-01

    We show that in generic no-scale models in string theory, the flat, expanding cosmological evolutions found at the quantum level can be attracted to a "quantum no-scale regime", where the no-scale structure is restored asymptotically. In this regime, the quantum effective potential is dominated by the classical kinetic energies of the no-scale modulus and dilaton. We find that this natural preservation of the classical no-scale structure at the quantum level occurs when the initial conditions of the evolutions sit in a subcritical region of their space. On the contrary, supercritical initial conditions yield solutions that have no analogue at the classical level. The associated intrinsically quantum universes are sentenced to collapse and their histories last finite cosmic times. Our analysis is done at 1-loop, in perturbative heterotic string compactified on tori, with spontaneous supersymmetry breaking implemented by a stringy version of the Scherk-Schwarz mechanism.

  9. Biomimetic block copolymer particles with gated nanopores and ultrahigh protein sorption capacity

    NASA Astrophysics Data System (ADS)

    Yu, Haizhou; Qiu, Xiaoyan; Nunes, Suzana P.; Peinemann, Klaus-Viktor

    2014-06-01

    The design of micro- or nanoparticles that can encapsulate sensitive molecules such as drugs, hormones, proteins or peptides is of increasing importance for applications in biotechnology and medicine. Examples are micelles, liposomes and vesicles. The tiny and, in most cases, hollow spheres are used as vehicles for transport and controlled administration of pharmaceutical drugs or nutrients. Here we report a simple strategy to fabricate microspheres by block copolymer self-assembly. The microsphere particles have monodispersed nanopores that can act as pH-responsive gates. They contain a highly porous internal structure, which is analogous to the Schwarz P structure. The internal porosity of the particles contributes to their high sorption capacity and sustained release behaviour. We successfully separated similarly sized proteins using these particles. The ease of particle fabrication by macrophase separation and self-assembly, and the robustness of the particles makes them ideal for sorption, separation, transport and sustained delivery of pharmaceutical substances.

  10. Application of modified Rosenbrock's method for optimization of nutrient media used in microorganism culturing.

    PubMed

    Votruba, J; Pilát, P; Prokop, A

    1975-12-01

    Rosenbrock's procedure has been modified for optimization of nutrient medium composition and has been found to be less tedious than the Box-Wilson method, especially for larger numbers of optimized parameters. Its merits are particularly obvious with multiparameter optimization, where the gradient method, so far the only one of the many available optimization methods employed in microbiology (e.g., refs. 9 and 10), becomes impractical because of the excessive number of experiments required. The method suggested is also more stable during optimization than the gradient methods, which are very sensitive to the selection of steps in the direction of the gradient and may thus easily shoot out of the optimized region. It is also anticipated that other direct search methods, particularly simplex design, may be easily adapted for optimization of medium composition. It is obvious that direct search methods may find an application in process improvement in the antibiotic and related industries.
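
    Rosenbrock's direct search, in its textbook form, probes along a set of orthonormal directions, expanding the step after a success and contracting and reversing it after a failure, then rotates the direction set (Gram-Schmidt) around the net advance of the stage. The following is a minimal sketch of that textbook scheme, not of the paper's modified variant; the quadratic "medium response" objective and all parameter values are invented for illustration.

    ```python
    import numpy as np

    def rosenbrock_search(f, x0, step=0.1, alpha=3.0, beta=-0.5,
                          n_stages=50, n_sweeps=20, tol=1e-8):
        """Minimize f by Rosenbrock's rotating-directions direct search."""
        x = np.asarray(x0, dtype=float)
        n = x.size
        dirs = np.eye(n)                     # orthonormal search directions
        fx = f(x)
        for _ in range(n_stages):
            steps = np.full(n, step)         # per-direction step lengths
            progress = np.zeros(n)           # net advance along each direction
            for _ in range(n_sweeps):
                for i in range(n):
                    trial = x + steps[i] * dirs[i]
                    ft = f(trial)
                    if ft < fx:              # success: accept and expand step
                        x, fx = trial, ft
                        progress[i] += steps[i]
                        steps[i] *= alpha
                    else:                    # failure: contract and reverse step
                        steps[i] *= beta
                if np.all(np.abs(steps) < tol):
                    return x, fx
            # Gram-Schmidt rotation of the direction set around the net advance
            a = dirs * progress[:, None]
            new_dirs = []
            for i in range(n):
                v = a[i:].sum(axis=0)
                for d in new_dirs:
                    v -= (v @ d) * d
                norm = np.linalg.norm(v)
                new_dirs.append(v / norm if norm > tol else dirs[i])
            dirs = np.array(new_dirs)
        return x, fx

    # Invented stand-in for yield as a function of two medium components
    f = lambda c: (c[0] - 2.0) ** 2 + 5.0 * (c[1] - 0.5) ** 2
    x, fx = rosenbrock_search(f, x0=[0.5, 0.1])
    print(x, fx)   # approaches (2.0, 0.5)
    ```

    In a real medium-optimization setting, f would be a measured yield from a culturing experiment, so every function evaluation is costly and the small number of evaluations per stage is the method's main appeal.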

  11. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or to design new properties, mainly numerical methods are used. That causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determination of a suitable optimization method and of the necessary duration of optimization becomes critical when a high number of combinations of adjustable parameters must be evaluated or when the dynamic models are large. This task is complex due to the variety of optimization methods and software tools and the nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn it is possible to compare different optimization methods in terms of the ability to find the global optimum, or values close to it, as well as the computational time necessary to reach them. It is possible to estimate the optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
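
    A sketch of the kind of convergence analysis described above (the function names are mine, not ConvAn's interface): record the best-so-far objective value of each stochastic run, normalize every curve to [0, 1], and compare methods by the mean and spread of the normalized curves.

    ```python
    import numpy as np

    def best_so_far(history):
        """Monotone best-so-far curve from a raw objective-value history."""
        return np.minimum.accumulate(np.asarray(history, dtype=float))

    def normalize_curve(curve):
        """Rescale a convergence curve to [0, 1] (1 = start, 0 = best found)."""
        lo, hi = curve.min(), curve.max()
        return (curve - lo) / (hi - lo) if hi > lo else np.zeros_like(curve)

    rng = np.random.default_rng(0)
    f = lambda x: float(np.sum((x - 1.0) ** 2))   # toy objective

    # Ten independent random-search runs, 200 evaluations each
    curves = [normalize_curve(best_so_far(
                  [f(rng.uniform(-5.0, 5.0, size=4)) for _ in range(200)]))
              for _ in range(10)]

    mean_curve = np.mean(curves, axis=0)   # average convergence dynamics
    spread = np.std(curves, axis=0)        # repeatability across runs
    print(mean_curve[-1], spread[-1])
    ```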

  12. Integrability in AdS/CFT correspondence: quasi-classical analysis

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay

    2009-06-01

    In this review, we consider a quasi-classical method applicable to integrable field theories which is based on a classical integrable structure, the algebraic curve. We apply it to the Green-Schwarz superstring on the AdS5 × S5 space. We show that the proposed method reproduces perfectly the earlier results obtained by expanding the string action for some simple classical solutions. The construction is explicitly covariant and is not based on a particular parameterization of the fields and as a result is free from ambiguities. On the other hand, the finite size corrections in some particularly important scaling limit are studied in this paper for a system of Bethe equations. For the general superalgebra su(N|K), the result for the 1/L corrections is obtained. We find an integral equation which describes these corrections in a closed form. As an application, we consider the conjectured Beisert-Staudacher (BS) equations with the Hernandez-Lopez dressing factor, where the finite size corrections should reproduce quasi-classical results around a general classical solution. Indeed, we show that our integral equation can be interpreted as a sum of all physical fluctuations and thus prove the complete one-loop consistency of the BS equations. We demonstrate that any local conserved charge (including the AdS energy) computed from the BS equations is indeed given at one loop by the sum of the charges of fluctuations with an exponential precision for large S5 angular momentum of the string. As an independent result, the BS equations in an su(2) sub-sector were derived from Zamolodchikov's S-matrix. The paper is based on the author's PhD thesis.

  13. An improved reaction path optimization method using a chain of conformations

    NASA Astrophysics Data System (ADS)

    Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro

    2018-05-01

    The efficient fast path optimization (FPO) method is proposed to optimize reaction paths on energy surfaces by using chains of conformations. No artificial spring force is used in the FPO method to ensure the equal spacing of adjacent conformations. The FPO method is applied to optimize the reaction path on two model potential surfaces. The use of this method enabled the optimization of the reaction paths with a drastically reduced number of optimization cycles for both potentials. It was also successfully utilized to define the minimum energy path (MEP) of the isomerization of the glycine molecule in water.
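
    The FPO update itself is not specified above; the sketch below is a generic string-method-style chain-of-conformations optimizer that shares the stated property of avoiding spring forces: each interior image takes a small downhill step, and equal spacing is then restored by reparametrizing the chain through interpolation. The two-well potential is invented for illustration.

    ```python
    import numpy as np

    def optimize_path(grad, path, step=0.01, n_iter=500):
        """Relax a chain of conformations on a potential surface.

        Endpoints stay fixed; equal spacing is restored by reparametrizing
        the chain (interpolation) instead of adding spring forces.
        """
        path = np.array(path, dtype=float)
        for _ in range(n_iter):
            # 1) downhill step for the interior images
            path[1:-1] -= step * np.array([grad(p) for p in path[1:-1]])
            # 2) redistribute images at equal arc length along the chain
            seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
            s = np.concatenate(([0.0], np.cumsum(seg)))   # arc length per image
            s_new = np.linspace(0.0, s[-1], len(path))
            path = np.column_stack([np.interp(s_new, s, path[:, d])
                                    for d in range(path.shape[1])])
        return path

    def grad(p):
        """Gradient of an invented two-well surface (double well in x)."""
        x, y = p
        return np.array([4.0 * x * (x ** 2 - 1.0), 2.0 * y])

    init = np.linspace([-1.0, 0.0], [1.0, 0.0], 12)  # straight initial chain
    mep = optimize_path(grad, init)
    print(mep[len(mep) // 2])   # conformation near the saddle at the origin
    ```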

  14. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.

  15. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    NASA Technical Reports Server (NTRS)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and the application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
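
    The reduction described above can be sketched compactly: write the open-loop input as a truncated series with unknown coefficients, treat the simulation as a black box that returns only an output cost, and hand the coefficients to a static optimizer. Everything in the sketch (the first-order lag dynamics, the Fourier basis, the cost weights) is invented for illustration, not taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def simulate(u_coeffs, t):
        """Black-box system: the optimizer sees only the resulting cost.
        The input is a truncated Fourier sine series; the dynamics are an
        illustrative first-order lag x' = -x + u, integrated by forward Euler."""
        k = np.arange(1, len(u_coeffs) + 1)
        u = (u_coeffs[:, None] * np.sin(np.outer(k, t) * np.pi)).sum(axis=0)
        x, dt, xs = 0.0, t[1] - t[0], []
        for ui in u:
            x += dt * (-x + ui)
            xs.append(x)
        return np.array(xs), u

    def cost(u_coeffs, t, x_target):
        xs, u = simulate(u_coeffs, t)
        # path-deviation cost plus a small control-effort penalty
        return np.mean((xs - x_target) ** 2) + 1e-3 * np.mean(u ** 2)

    t = np.linspace(0.0, 1.0, 200)
    x_target = 4.0 * t * (1.0 - t)            # desired output trajectory
    res = minimize(cost, x0=np.zeros(5), args=(t, x_target),
                   method="Nelder-Mead")      # static, gradient-free optimizer
    print(res.x, res.fun)
    ```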

  16. Optimization of the gypsum-based materials by the sequential simplex method

    NASA Astrophysics Data System (ADS)

    Doleželová, Magdalena; Vimmrová, Alena

    2017-11-01

    The application of the sequential simplex optimization method to the design of gypsum-based materials is described. The principles of the simplex method are explained and several examples of its use for the optimization of lightweight gypsum and ternary gypsum-based materials are given. By this method, lightweight gypsum-based materials with the desired properties and a ternary gypsum-based material with higher strength (16 MPa) were successfully developed. The simplex method is a useful tool for optimizing gypsum-based materials, but the objective of the optimization has to be formulated appropriately.
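
    A minimal illustration of simplex-style optimization on an invented two-variable mix-design objective, here using SciPy's Nelder-Mead implementation of the simplex search (the paper's sequential protocol, in which each new vertex is a laboratory experiment rather than a function call, is not reproduced):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_strength(x):
        """Invented stand-in for measured compressive strength (MPa) as a
        function of two mix proportions; strength is maximized by
        minimizing its negative."""
        a, b = x
        return -(16.0 - 8.0 * (a - 0.35) ** 2 - 12.0 * (b - 0.20) ** 2)

    res = minimize(neg_strength, x0=[0.5, 0.5], method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-4})
    print("proportions:", res.x, "strength:", -res.fun, "MPa")
    ```

    In the sequential laboratory version, each vertex the simplex proposes corresponds to preparing and testing an actual mix, which is why the appropriate formulation of the objective matters so much.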

  17. Review of design optimization methods for turbomachinery aerodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and the design of experiment methods, (3) gradient-based optimization methods for compressors and turbines and (4) data mining techniques for Pareto fronts. We also present our own insights regarding the current research trends and the future optimization of turbomachinery designs.

  18. Fate of Organic Matters in a Soil Erosion Context : Qualitative and Quantitative Monitoring in a Karst Hydrosystem

    NASA Astrophysics Data System (ADS)

    Quiers, M.; Gateuille, D.; Perrette, Y.; Naffrechoux, E.; David, B.; Malet, E.

    2017-12-01

    Soils are a key compartment of hydrosystems, especially in karst aquifers, which are characterized by fast hydrologic responses to rainfall. In the steady state, soils are efficient filters protecting karst water from pollution. But agricultural or forestry land uses can alter or even reverse this role, so that soils can act as pollution sources rather than pollution filters. In order to manage water quality together with human activities in karst environments, new tools and procedures designed to monitor the fate of soil organic matter are needed. This study reports two complementary methods applied in a mountain karst system impacted by anthropic activities and environmental stresses. A continuous monitoring of water fluorescence, coupled with punctual sampling, was analyzed by chemometric methods and allowed the type of organic matter transferred through the karst system to be discriminated over the year (winter/summer) and across hydrological stages. As a main result, the modelled organic carbon flux is dominated by a colloidal or particulate fraction during high waters and by a dissolved fraction during low waters, demonstrating a change of organic carbon source. To confirm this result, a second method was used, based on the observation of Polycyclic Aromatic Hydrocarbon (PAH) profiles. Two previous studies (Perrette et al 2013, Schwarz et al 2011) led to opposite conclusions about the fate of PAH from soil to groundwaters. This opposition makes PAH profiles (low-molecular-weight, less hydrophobic compounds versus high-molecular-weight, more hydrophobic ones) a potential indicator of soil erosion. We validate that use by the analysis of these PAH profiles for low and high waters (floods). These results demonstrate the high vulnerability of karst systems to soil erosion, and propose a new proxy to record soil erosion in groundwaters and in natural archives such as stalagmites or sediments.

  19. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  20. Homotopy method for optimization of variable-specific-impulse low-thrust trajectories

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng

    2017-11-01

    The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.

  1. A universality in pp-waves

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Partha

    2007-06-01

    We discuss a universality property of any covariant field theory in space-time expanded around pp-wave backgrounds. According to this property, the space-time lagrangian density evaluated on a restricted set of field configurations, called the universal sector, turns out to be the same around all the pp-waves, even off-shell, with the same transverse space and the same profiles for the background scalars. In this paper we restrict our discussion to tensorial fields only. In the context of bosonic string theory we consider on-shell pp-waves and argue that universality requires the existence of a universal sector of world-sheet operators whose correlation functions are insensitive to the pp-wave nature of the metric and the background gauge flux. Such results can also be reproduced using the world-sheet conformal field theory. We also study such pp-waves in non-polynomial closed string field theory (CSFT). In particular, we argue that for an off-shell pp-wave ansatz with flat transverse space and dilaton independent of transverse coordinates, the field redefinition relating the low energy effective field theory and CSFT with all the massive modes integrated out is at most quadratic in fields. Because of this simplification it is expected that the off-shell pp-waves can be identified on the two sides. Furthermore, given the massless pp-wave field configurations, an iterative method for computing the higher massive modes using the CSFT equations of motion has been discussed. All our bosonic string theory analyses can be generalised to the common Neveu-Schwarz sector of superstrings.

  2. Fast divide-and-conquer algorithm for evaluating polarization in classical force fields

    NASA Astrophysics Data System (ADS)

    Nocito, Dominique; Beran, Gregory J. O.

    2017-03-01

    Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
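
    Stripped of the clustering, fuzzy overlaps, and DIIS extrapolation described above, the core DC-JI idea is a block Jacobi iteration in which each diagonal block is solved exactly. A minimal sketch on a generic symmetric positive-definite system (dense inverses stand in for the per-cluster Cholesky solves; all sizes are illustrative):

    ```python
    import numpy as np

    def block_jacobi(A, b, blocks, tol=1e-10, max_iter=500):
        """Solve A x = b with non-overlapping block Jacobi iterations.

        Each block system is solved directly (cf. DC-JI's per-cluster
        direct solves); the coupling between blocks is iterated.
        """
        x = np.zeros_like(b)
        # Pre-factorize (here: invert) each diagonal block once
        inv_blocks = [np.linalg.inv(A[np.ix_(blk, blk)]) for blk in blocks]
        for _ in range(max_iter):
            x_new = x.copy()
            for blk, Ainv in zip(blocks, inv_blocks):
                # b_i minus the off-block couplings at the old iterate
                r = b[blk] - A[blk, :] @ x + A[np.ix_(blk, blk)] @ x[blk]
                x_new[blk] = Ainv @ r
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    rng = np.random.default_rng(1)
    n = 12
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)            # SPD with a strong diagonal
    b = rng.standard_normal(n)
    blocks = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
    x = block_jacobi(A, b, blocks)
    print(np.linalg.norm(A @ x - b))       # residual near machine precision
    ```

    The overlapping ("fuzzy") variant would solve enlarged subsystems and average the duplicated unknowns with distance-based weights, in the spirit of an additive Schwarz method.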

  3. Immune response to measles vaccine in Peruvian children.

    PubMed Central

    Bautista-López, N. L.; Vaisberg, A.; Kanashiro, R.; Hernández, H.; Ward, B. J.

    2001-01-01

    OBJECTIVE: To evaluate the immune response in Peruvian children following measles vaccination. METHODS: Fifty-five Peruvian children received Schwarz measles vaccine (about 10^3 plaque forming units) at about 9 months of age. Blood samples were taken before vaccination and twice after vaccination: one sample between 1 and 4 weeks after vaccination and the final sample 3 months after vaccination, for evaluation of immune cell phenotype and lymphoproliferative responses to measles and non-measles antigens. Measles-specific antibodies were measured by plaque reduction neutralization. FINDINGS: The humoral response developed rapidly after vaccination; only 4 of the 55 children (7%) had plaque reduction neutralization titres <200 mIU/ml 3 months after vaccination. However, only 8 out of 35 children tested (23%) had lymphoproliferative responses to measles antigens 3-4 weeks after vaccination. Children with poor lymphoproliferative responses to measles antigens had readily detectable lymphoproliferative responses to other antigens. Flow cytometric analysis of peripheral blood mononuclear cells revealed diffuse immune system activation at the time of vaccination in most children. The capacity to mount a lymphoproliferative response to measles antigens was associated with expression of CD45RO on CD4+ T-cells. CONCLUSION: The 55 Peruvian children had excellent antibody responses after measles vaccination, but only 23% (8 out of 35) generated detectable lymphoproliferative responses to measles antigens (compared with 55-67% in children in the industrialized world). This difference may contribute to the less than uniform success of measles vaccination programmes in the developing world. PMID:11731811

  4. Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables

    NASA Astrophysics Data System (ADS)

    Ouyang, Bo; Shang, Weiwei

    2016-03-01

    The solution of tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six-degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.
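
    For contrast with the geometric algorithm above, here is the baseline formulation it is designed to beat: a standard linear program that minimizes total cable tension subject to the wrench balance and tension limits. The structure matrix, wrench, and limits below are invented for a toy planar 3-cable, 2-DOF example.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Invented 2-DOF planar point mass driven by 3 cables (redundant by one)
    JT = np.array([[1.0, -1.0, 0.0],     # x-components of unit cable vectors
                   [0.5,  0.5, 1.0]])    # y-components
    w = np.array([0.0, 9.81])            # external wrench to balance (N)
    t_min, t_max = 1.0, 100.0            # keep cables taut but below rating

    res = linprog(c=np.ones(3),          # minimize total cable tension
                  A_eq=JT, b_eq=w,
                  bounds=[(t_min, t_max)] * 3)
    print(res.x if res.success else "infeasible pose")
    ```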

  5. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method is proposed which aims to stitch images with an overlapping area more seamlessly. Because the traditional gradient-domain optimal seam method measures color differences poorly and fusion algorithms take a long time, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method eliminates the stitching seam better than the traditional gradient optimal seam method and is more efficient than the multi-band blending algorithm.
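
    Whatever the energy function, the optimal seam search itself is a dynamic-programming shortest path over a per-pixel cost map of the overlap region. A minimal vertical-seam version (the quadratic cost and fake image strips are illustrative; the paper's HSV energy is not reproduced):

    ```python
    import numpy as np

    def optimal_seam(energy):
        """Column index of the minimum-cost vertical seam in each row
        of an energy map (dynamic programming plus backtracking)."""
        h, w = energy.shape
        cost = energy.astype(float).copy()
        for i in range(1, h):
            left = np.concatenate(([np.inf], cost[i - 1, :-1]))
            right = np.concatenate((cost[i - 1, 1:], [np.inf]))
            cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
        seam = np.empty(h, dtype=int)
        seam[-1] = int(np.argmin(cost[-1]))   # cheapest bottom pixel
        for i in range(h - 2, -1, -1):        # walk back up the map
            j = seam[i + 1]
            lo, hi = max(j - 1, 0), min(j + 2, w)
            seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
        return seam

    # Toy overlap-region cost: squared difference of two fake image strips
    rng = np.random.default_rng(2)
    a, b = rng.random((60, 40)), rng.random((60, 40))
    b[:, 15:20] = a[:, 15:20]        # a cheap corridor for the seam to follow
    print(optimal_seam((a - b) ** 2))
    ```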

  6. Analysis of neighborhood behavior in lead optimization and array design.

    PubMed

    Papadatos, George; Cooper, Anthony W J; Kadirkamanathan, Visakan; Macdonald, Simon J F; McLay, Iain M; Pickett, Stephen D; Pritchard, John M; Willett, Peter; Gillet, Valerie J

    2009-02-01

    Neighborhood behavior describes the extent to which small structural changes defined by a molecular descriptor are likely to lead to small property changes. This study evaluates two methods for the quantification of neighborhood behavior: the optimal diagonal method of Patterson et al. and the optimality criterion method of Horvath and Jeandenans. The methods are evaluated using twelve different types of fingerprint (both 2D and 3D) with screening data derived from several lead optimization projects at GlaxoSmithKline. The principal focus of the work is the design of chemical arrays during lead optimization, and the study hence considers not only biological activity but also important drug properties such as metabolic stability, permeability, and lipophilicity. Evidence is provided to suggest that the optimality criterion method may provide a better quantitative description of neighborhood behavior than the optimal diagonal method.

  7. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using the high-fidelity, evaluation-based single-fidelity optimization. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.

  8. Use of High Fidelity Methods in Multidisciplinary Optimization-A Preliminary Survey

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Multidisciplinary optimization is a key element of the design process. To date, multidisciplinary optimization methods that use low-fidelity analyses are well advanced. Optimization methods based on simple linear aerodynamic equations and plate structural equations have been applied to complex aerospace configurations. However, the use of high-fidelity methods such as the Euler/Navier-Stokes equations for fluids and 3-D (three-dimensional) finite elements for structures has begun recently. As an activity of the Multidisciplinary Design Optimization Technical Committee (MDO TC) of the AIAA (American Institute of Aeronautics and Astronautics), an effort was initiated to assess the status of the use of high-fidelity methods in multidisciplinary optimization. Contributions were solicited through the members of the MDO TC. This paper provides a summary of that survey.

  9. PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads

    PubMed Central

    Cheng, Wen-Chang

    2012-01-01

    In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-finding behaviour of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
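
    A minimal global-best particle swarm optimizer of the kind used for the refinement stage (the lane-model fitness and the particle-filter front end are omitted; a standard Rastrigin test function stands in for the fitness):

    ```python
    import numpy as np

    def pso(f, bounds, n_particles=30, n_iter=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Global-best particle swarm optimization (minimization)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        dim = len(lo)
        x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
        v = np.zeros_like(x)                               # velocities
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        g = pbest[np.argmin(pbest_f)].copy()               # global best
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            # inertia + cognitive pull to pbest + social pull to gbest
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()

    # Rastrigin function as a stand-in for a lane-model fitness
    f = lambda z: 10 * len(z) + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
    g, fg = pso(f, bounds=(np.array([-5.12] * 2), np.array([5.12] * 2)))
    print(g, fg)   # near the global minimum at the origin
    ```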

  10. A modified multi-objective particle swarm optimization approach and its application to the design of a deepwater composite riser

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Chen, J.

    2017-09-01

    A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Different from traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated with the traditional one. Kriging meta-models are built to match expensive or black-box functions. By applying Kriging meta-models, function evaluation numbers are decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the trapezoid's area formed by the Pareto-optimal solutions and one objective axis. It can serve as a measure whether the Pareto-optimal solutions converge to the Pareto front. Illustrative examples indicate that to obtain Pareto-optimal solutions, the method proposed needs fewer function evaluations than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II method, and both the accuracy and the computational efficiency are improved. The proposed method is also applied to the design of a deepwater composite riser example in which the structural performances are calculated by numerical analysis. The design aim was to enhance the tension strength and minimize the cost. Under the buckling constraint, the optimal trade-off of tensile strength and material volume is obtained. The results demonstrated that the proposed method can effectively deal with multi-objective optimizations with black-box functions.
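
    As described above, for two objectives the trapezoid index reduces to the trapezoid-rule area between the sorted non-dominated points and one objective axis; it stabilizes as the front converges. A sketch, assuming both objectives are minimized (the sample front is invented):

    ```python
    import numpy as np

    def trapezoid_index(f1, f2):
        """Area between the bi-objective front and the f1 axis (trapezoid rule).

        Points are sorted along the first objective; the value stabilizes
        as the non-dominated set converges to the true Pareto front.
        """
        order = np.argsort(f1)
        x = np.asarray(f1, dtype=float)[order]
        y = np.asarray(f2, dtype=float)[order]
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    # Non-dominated points of an illustrative convex front f2 = 1 / f1
    f1 = np.linspace(0.5, 2.0, 8)
    print(trapezoid_index(f1, 1.0 / f1))
    ```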

  11. Structural optimization of large structural systems by optimality criteria methods

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo

    1992-01-01

    The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
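
    For the single-constraint case mentioned above, the optimality criteria recurrence has a particularly clean form: members are resized until the constraint sensitivity per unit weight is uniform across the structure. A sketch for member sizing under one displacement constraint on a statically determinate truss (all coefficients invented; the exponent eta damps the update):

    ```python
    import numpy as np

    def oc_resize(c, w, d_max, eta=0.5, n_iter=20):
        """Optimality-criteria sizing under one displacement constraint.

        Minimize weight W = sum(w_i * a_i) subject to
        d(a) = sum(c_i / a_i) <= d_max (unit-load coefficients c_i).
        Members are resized until the scaled sensitivity ratio
        e_i / (lam * w_i) equals 1 for all i: the optimality criterion.
        """
        a = np.ones_like(c)
        lam = (d_max / np.sum(np.sqrt(c * w))) ** 2  # multiplier giving d = d_max
        for _ in range(n_iter):
            e = c / a ** 2                    # magnitude of d(d)/d(a_i)
            a = a * (e / (lam * w)) ** eta    # damped OC resizing rule
        return a

    c = np.array([4.0, 1.0, 2.25])   # N_i * n_i * L_i / E (invented)
    w = np.array([1.0, 1.0, 1.5])    # rho_i * L_i (invented)
    a = oc_resize(c, w, d_max=3.0)
    print(a, np.sum(c / a), np.sum(w * a))   # areas, displacement (3.0), weight
    ```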

  12. Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method

    NASA Astrophysics Data System (ADS)

    Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin

    2017-12-01

    Edge detection serves to identify the boundaries of an object against an overlapping background. One of the classic methods for edge detection is the Robinson operator, which produces thin, faint, grey edge lines. To overcome these deficiencies, an improved edge detection method is proposed that uses a graph-based approach with the Ant Colony Optimization algorithm. The repairs performed are thickening the edges and reconnecting edges that have been cut off. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and infer the extent to which Ant Colony Optimization can improve unoptimized edge detection results and the accuracy of Robinson edge detection. The parameters used in the performance measurement of edge detection are the morphology of the resulting edge lines, MSE and PSNR. The results showed that the combined Robinson and Ant Colony Optimization method produces images with thicker, more distinct edges. Ant Colony Optimization can thus be used to optimize the Robinson operator, improving the detected edge image by an average of 16.77% over the classic Robinson result.
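
    The classic Robinson step that the paper starts from: convolve the image with the eight compass masks (the north mask and its 45-degree rotations) and keep the maximum magnitude per pixel. A sketch; the ACO repair stage is not shown, and the toy image and threshold are invented.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def rot45(k):
        """Rotate the outer ring of a 3x3 kernel one position (45 degrees)."""
        idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        out = k.copy()
        ring = [k[i] for i in idx]
        for pos, val in zip(idx, ring[-1:] + ring[:-1]):
            out[pos] = val
        return out

    def robinson_edges(img, thresh=None):
        """Robinson compass edge map: maximum response over the 8 masks."""
        m = np.array([[ 1.0,  2.0,  1.0],
                      [ 0.0,  0.0,  0.0],
                      [-1.0, -2.0, -1.0]])   # north mask
        responses = []
        for _ in range(8):                   # all eight compass orientations
            responses.append(convolve(img.astype(float), m))
            m = rot45(m)
        edges = np.max(np.abs(np.stack(responses)), axis=0)
        return edges if thresh is None else (edges >= thresh).astype(np.uint8)

    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # toy square image
    print(robinson_edges(img, thresh=4.0).sum())       # edge pixels detected
    ```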

  13. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.

  14. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel-optimal two-impulse trajectories, which may originate at the earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraints. Davidon's variance method is used both in the accelerated gradient projection method and in the outer loop of a 3-impulse trajectory optimization problem.

  15. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.

  16. INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN

    EPA Science Inventory

    The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...

  17. Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.

    PubMed

    Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan

    2013-01-01

    In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.

  18. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are then compared in the context of the two design problems.

  19. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.

  20. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  1. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment histories to improve performance, hence enjoying the advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.

  2. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  3. Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2003-01-01

    Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.

  4. Guided particle swarm optimization method to solve general nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr

    2018-04-01

    The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.

  5. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that the performance of the method is sufficient for practical use.
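
    The flavour of such a method can be caricatured as a Boltzmann-weighted stochastic average: sample candidates around the current point, weight them by exp(-f/T), and take the expected value as the next iterate while annealing T. The sketch below is that caricature under my own assumptions, not the authors' path-integral formulation; the multimodal objective is invented.

    ```python
    import numpy as np

    def stochastic_average_opt(f, x0, sigma=1.0, T0=1.0, n_iter=60,
                               n_samples=200, seed=0):
        """Estimate a minimizer as a Boltzmann-weighted expected value.

        Candidates are sampled around the current point; each is weighted
        by exp(-f/T) and the next iterate is the weighted mean, so neither
        a gradient nor a single greedy trajectory is involved.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(n_iter):
            T = T0 * (1.0 - k / n_iter) + 1e-3       # annealed temperature
            xs = x + sigma * rng.standard_normal((n_samples, x.size))
            fs = np.apply_along_axis(f, 1, xs)
            wgt = np.exp(-(fs - fs.min()) / T)       # stable Boltzmann weights
            x = (wgt[:, None] * xs).sum(axis=0) / wgt.sum()
            sigma *= 0.95                            # shrink the sampling cloud
        return x, f(x)

    # Multimodal test objective; the stochastic average hops over local minima
    f = lambda z: np.sum(z ** 2) + 3.0 * np.sum(np.sin(3.0 * z) ** 2)
    x, fx = stochastic_average_opt(f, x0=[2.5, -2.0])
    print(x, fx)
    ```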

  6. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
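
    A toy sketch of the mixed-variable polling that generalized pattern search performs; here one categorical variable (the transfer function) and two continuous weights are polled for a hypothetical one-neuron fit (an invented setup, not the paper's ANN problem):

        # Sketch: pattern search with one categorical variable (illustrative).
        import numpy as np

        TRANSFER = {"tanh": np.tanh,
                    "relu": lambda z: np.maximum(z, 0.0),
                    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z))}

        def loss(w, tf_name):                     # toy one-neuron fit of y = sin(x)
            xs = np.linspace(-2.0, 2.0, 50)
            pred = w[1] * TRANSFER[tf_name](w[0] * xs)
            return float(np.mean((pred - np.sin(xs)) ** 2))

        w, tf, step = np.array([0.5, 0.5]), "sigmoid", 0.5
        best = loss(w, tf)
        for it in range(500):
            improved = False
            for d in np.vstack([np.eye(2), -np.eye(2)]):          # continuous poll
                if loss(w + step * d, tf) < best:
                    w, best, improved = w + step * d, loss(w + step * d, tf), True
            for name in TRANSFER:                                  # categorical poll
                if name != tf and loss(w, name) < best:
                    tf, best, improved = name, loss(w, name), True
            if not improved:
                step *= 0.5                                        # refine the mesh
                if step < 1e-6:
                    break
        print("transfer:", tf, "weights:", w, "loss:", best)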

  7. A consistent methodology for optimal shape design of graphene sheets to maximize their fundamental frequencies considering topological defects

    NASA Astrophysics Data System (ADS)

    Shi, Jin-Xing; Ohmura, Keiichiro; Shimoda, Masatoshi; Lei, Xiao-Wen

    2018-07-01

    In recent years, shape design of graphene sheets (GSs) by introducing topological defects to enhance their mechanical behavior has attracted the attention of scholars. In the present work, we propose a consistent methodology for the optimal shape design of GSs using a combination of the molecular mechanics (MM) method, a non-parametric shape optimization method, the phase field crystal (PFC) method, Voronoi tessellation, and molecular dynamics (MD) simulation to maximize their fundamental frequencies. First, we model GSs as continuum frame models using a link between the MM method and continuum mechanics. Then, we carry out the optimal shape design of GSs for the fundamental frequency maximization problem based on a shape optimization method developed for frames. However, the obtained optimal shapes of GSs, consisting only of hexagonal carbon rings, are unstable and do not satisfy the principle of least action, so we relocate carbon atoms on the optimal shapes by introducing topological defects using the PFC method and Voronoi tessellation. Finally, we perform structural relaxation through MD simulation to determine the final optimal shapes of GSs. We design two examples of GSs, and the optimal results show that the fundamental frequencies of GSs can be significantly enhanced by the proposed optimal shape design methodology.

  8. Structural damage detection-oriented multi-type sensor placement with multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong

    2018-05-01

    A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm (NSGA)-II is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensor in the optimization reduces the searching space and makes the proposed method more effective. Moreover, the selection of the most suitable sensor placement from the Pareto solutions via the utility function and the knee-point method is demonstrated in the case study.
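
    The non-dominated sorting at the heart of NSGA-II reduces, for the first front, to the check sketched below (toy objective values standing in for the response covariance sensitivity and response independence objectives):

        # Sketch: first non-dominated (Pareto) front, both objectives minimized.
        import numpy as np

        rng = np.random.default_rng(2)
        objs = rng.random((30, 2))     # toy (f1, f2) for 30 candidate sensor layouts

        def dominates(a, b):           # a dominates b: no worse in all, better in one
            return np.all(a <= b) and np.any(a < b)

        front = [i for i in range(len(objs))
                 if not any(dominates(objs[j], objs[i])
                            for j in range(len(objs)) if j != i)]
        print("indices on the first Pareto front:", front)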

  9. The application of artificial intelligence in the optimal design of mechanical systems

    NASA Astrophysics Data System (ADS)

    Poteralski, A.; Szczepanik, M.

    2016-11-01

    The paper is devoted to new computational techniques in mechanical optimization, where one tries to study, model, analyze and optimize very complex phenomena for which the more precise scientific tools of the past were incapable of giving a low-cost and complete solution. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with the application of bio-inspired methods, such as evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. Structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize the parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants of piezoelectric materials modeled by the BEM, and to identify parameters in acoustics problems modeled by the MFS.

  10. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods suffer from initial-condition dependence and the risk of falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory was optimized. The numerical results showed that the method has sufficient performance.

  11. [Optimized application of nested PCR method for detection of malaria].

    PubMed

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

    Objective To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods A PCR premix solution, internal primers for further amplification, and newly designed primers aimed at the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and specific primers of P. ovale on the basis of routine nested PCR. Then the specificity and sensitivity of the optimized method were analyzed. Positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the results indicated that the PCR products of the two methods had no significant difference, but with the optimized method the non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and the overall specificity increased. The detection results for 111 malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; there was no statistically significant difference between the two methods in sensitivity (P > 0.05), but there was a statistically significant difference in specificity (P < 0.05). Conclusion The optimized PCR can improve the specificity without reducing the sensitivity compared with the routine nested PCR; it can also save costs and increase the efficiency of malaria detection by reducing the number of experimental steps.

  12. On gauged maximal d  =  8 supergravities

    NASA Astrophysics Data System (ADS)

    Lasso Andino, Óscar; Ortín, Tomás

    2018-04-01

    We study the gauging of maximal d = 8 supergravity using the embedding tensor formalism. We focus on SO(3) gaugings, study all the possible choices of gauge fields and construct explicitly the bosonic actions (including the complicated Chern–Simons terms) for all these choices, which are parametrized by a parameter associated with the 8-dimensional SL(2, ℝ) duality group that relates all the possible choices, which are, ultimately, equivalent from the purely 8-dimensional point of view. Our result proves that the theory constructed by Salam and Sezgin by Scherk–Schwarz compactification of d = 11 supergravity and the theory constructed in Alonso-Alberca (2001 Nucl. Phys. B 602 329) by dimensional reduction of the so-called 'massive 11-dimensional supergravity' proposed by Meessen and Ortín (1999 Nucl. Phys. B 541 195) are indeed related by an SL(2, ℝ) duality, even though they have two completely different 11-dimensional origins.

  13. Yang-Baxter deformations of supercoset sigma models with ℤ4m grading

    NASA Astrophysics Data System (ADS)

    Ke, San-Min; Yang, Wen-Li; Jang, Ke-Xia; Wang, Chun; Shuai, Xue-Min; Wang, Zhan-Yun; Shi, Gang

    2017-11-01

    We have studied Yang-Baxter deformations of supercoset sigma models with ℤ4m grading. The deformations are specified by a skew-symmetric classical r-matrix satisfying the classical Yang-Baxter equations. The deformed action is constructed and the Lax pair is also presented. When m=1, our results reduce to those of the type IIB Green-Schwarz superstring on the AdS_5 × S^5 background recently given by Kawaguchi, Matsumoto and Yoshida. Supported by National Natural Science Foundation of China (11375141, 11425522, 11547050), Natural Science Foundation of Shaanxi Province (2013JQ1011, 2017ZDJC-32, 2016JM1027), Special Foundation for Basic Scientific Research of Central Colleges (310812152001, 310812172001, 2013G1121082, CHD2012JC019), Scientific Research Program Funded by Shaanxi Provincial Education Department (2013JK0628), Xi'an Shiyou University Science and Technology Foundation (2010QN018) and partly supported by the Basic Research Foundation of Engineering University of CAPF (WJY-201506)

  14. Coherent exciton transport in dendrimers and continuous-time quantum walks

    NASA Astrophysics Data System (ADS)

    Mülken, Oliver; Bierbaum, Veronika; Blumen, Alexander

    2006-03-01

    We model coherent exciton transport in dendrimers by continuous-time quantum walks. For dendrimers up to the second generation the coherent transport shows perfect recurrences when the initial excitation starts at the central node. For larger dendrimers, the recurrence ceases to be perfect, a fact which resembles results for discrete quantum carpets. Moreover, depending on the initial excitation site, we find that the coherent transport to certain nodes of the dendrimer has a very low probability. When the initial excitation starts from the central node, the problem can be mapped onto a line which simplifies the computational effort. Furthermore, the long time average of the quantum mechanical transition probabilities between pairs of nodes shows characteristic patterns and allows us to classify the nodes into clusters with identical limiting probabilities. For the (space) average of the quantum mechanical probability to be still or to be again at the initial site, we obtain, based on the Cauchy-Schwarz inequality, a simple lower bound which depends only on the eigenvalue spectrum of the Hamiltonian.
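
    The Cauchy-Schwarz step behind that lower bound can be sketched as follows (standard notation assumed: N nodes, Hamiltonian eigenvalues E_n, return amplitudes \alpha_{jj}(t); this is our reconstruction, not the paper's exact statement):

        \frac{1}{N}\sum_{j=1}^{N}|\alpha_{jj}(t)|^{2}
          \;\ge\; \Bigl|\frac{1}{N}\sum_{j=1}^{N}\alpha_{jj}(t)\Bigr|^{2}
          \;=\; \Bigl|\frac{1}{N}\sum_{n=1}^{N}e^{-iE_{n}t}\Bigr|^{2},

    since \sum_j \alpha_{jj}(t) = \mathrm{Tr}\,e^{-iHt} = \sum_n e^{-iE_n t}; the right-hand side depends only on the eigenvalue spectrum, as the abstract states.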

  15. A chimeric measles virus with a lentiviral envelope replicates exclusively in CD4+/CCR5+ cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mourez, Thomas; APHP, GH Saint-Louis-Lariboisiere, Laboratoire de Bacteriologie-Virologie, F-75010 Paris; Universite Paris 7 Denis Diderot, F-75010 Paris

    2011-10-25

    We generated a replicating chimeric measles virus in which the hemagglutinin and fusion surface glycoproteins were replaced with the gp160 envelope glycoprotein of simian immunodeficiency virus (SIVmac239). Based on a previously cloned live-attenuated Schwarz vaccine strain of measles virus (MV), this chimera was rescued at high titers using reverse genetics in CD4+ target cells. The cytopathic effect consisted of large cell aggregates evolving to form syncytia, as observed during SIV infection. The morphology of the chimeric virus was identical to that of the parent MV particles. The presence of SIV gp160 as the only envelope protein on the chimeric particles' surface altered the cell tropism of the new virus from CD46+ to CD4+ cells. Used as an HIV candidate vaccine, this MV/SIVenv chimeric virus would mimic transient HIV-like infection, benefiting both from HIV-like tropism and from the capacity of MV to replicate in dendritic cells, macrophages and lymphocytes.

  16. Open superstring field theory based on the supermoduli space

    NASA Astrophysics Data System (ADS)

    Ohmori, Kantaro; Okawa, Yuji

    2018-04-01

    We present a new approach to formulating open superstring field theory based on the covering of the supermoduli space of super-Riemann surfaces and explicitly construct a gauge-invariant action in the Neveu-Schwarz sector up to quartic interactions. The cubic interaction takes a form of an integral over an odd modulus of disks with three punctures and the associated ghost is inserted. The quartic interaction takes a form of an integral over one even modulus and two odd moduli, and it can be interpreted as the integral over the region of the supermoduli space of disks with four punctures which is not covered by Feynman diagrams with two cubic vertices and one propagator. As our approach is based on the covering of the supermoduli space, the resulting theory naturally realizes an A ∞ structure, and the two-string product and the three-string product used in defining the cubic and quartic interactions are constructed to satisfy the A ∞ relations to this order.

  17. Design study of a high power rotary transformer

    NASA Technical Reports Server (NTRS)

    Weinberger, S. M.

    1982-01-01

    A design study was made of a rotary transformer for transferring electrical power across a rotating spacecraft interface. The analysis was performed for a 100-kW, 20-kHz unit having a "pancake" geometry. The rotary transformer had a radial (vertical) gap and consisted of four 25-kW modules. It was assumed that the power conditioning comprised a Schwarz resonant circuit with a 20-kHz switching frequency. The rotary transformer's mechanical and structural design, heat rejection system and drive mechanism, which together provide a complete power transfer device, were examined. The rotary transformer's losses, efficiency, weight and size were compared with those of an axial (axially symmetric) gap transformer having the same performance requirements and input characteristics, which was designed as part of a previous program. The "pancake" geometry results in a heavier rotary transformer, primarily because of inefficient use of the core material. It is shown that the radial gap rotary transformer is a feasible approach for the transfer of electrical power across a rotating interface and can be implemented using presently available technology.

  18. Electrostatic repulsive out-of-plane actuator using conductive substrate.

    PubMed

    Wang, Weimin; Wang, Qiang; Ren, Hao; Ma, Wenying; Qiu, Chuankai; Chen, Zexiang; Fan, Bin

    2016-10-07

    A pseudo-three-layer electrostatic repulsive out-of-plane actuator is proposed. It combines the advantages of two-layer and three-layer repulsive actuators, i.e., fabrication requirements and fill factor. A theoretical model for the proposed actuator is developed and solved through the numerical calculation of Schwarz-Christoffel mapping. Theoretical and simulated results show that the pseudo-three-layer actuator offers higher performance than the two-layer and three-layer actuators with regard to the two most important characteristics of actuators, namely, driving force and theoretical stroke. Given that the pseudo-three-layer actuator structure is compatible with both the parallel-plate actuators and these two types of repulsive actuators, a 19-element two-layer repulsive actuated deformable mirror is operated in pseudo-three-layer electrical connection mode. Theoretical and experimental results demonstrate that the pseudo-three-layer mode produces a larger displacement of 0-4.5 μm for a dc driving voltage of 0-100 V, when compared with that in two-layer mode.

  19. Electrostatic repulsive out-of-plane actuator using conductive substrate

    PubMed Central

    Wang, Weimin; Wang, Qiang; Ren, Hao; Ma, Wenying; Qiu, Chuankai; Chen, Zexiang; Fan, Bin

    2016-01-01

    A pseudo-three-layer electrostatic repulsive out-of-plane actuator is proposed. It combines the advantages of two-layer and three-layer repulsive actuators, i.e., fabrication requirements and fill factor. A theoretical model for the proposed actuator is developed and solved through the numerical calculation of Schwarz-Christoffel mapping. Theoretical and simulated results show that the pseudo-three-layer actuator offers higher performance than the two-layer and three-layer actuators with regard to the two most important characteristics of actuators, namely, driving force and theoretical stroke. Given that the pseudo-three-layer actuator structure is compatible with both the parallel-plate actuators and these two types of repulsive actuators, a 19-element two-layer repulsive actuated deformable mirror is operated in pseudo-three-layer electrical connection mode. Theoretical and experimental results demonstrate that the pseudo-three-layer mode produces a larger displacement of 0–4.5 μm for a dc driving voltage of 0–100 V, when compared with that in two-layer mode. PMID:27713542

  20. Algebraic cycles and local anomalies in F-theory

    NASA Astrophysics Data System (ADS)

    Bies, Martin; Mayrhofer, Christoph; Weigand, Timo

    2017-11-01

    We introduce a set of identities in the cohomology ring of elliptic fibrations which are equivalent to the cancellation of gauge and mixed gauge-gravitational anomalies in F-theory compactifications to four and six dimensions. The identities consist in (co)homological relations between complex codimension-two cycles. The same set of relations, once evaluated on elliptic Calabi-Yau three-folds and four-folds, is shown to universally govern the structure of anomalies and their Green-Schwarz cancellation in six- and four-dimensional F-theory vacua, respectively. We furthermore conjecture that these relations hold not only within the cohomology ring, but even at the level of the Chow ring, i.e. as relations among codimension-two cycles modulo rational equivalence. We verify this conjecture in non-trivial examples with Abelian and non-Abelian gauge groups factors. Apart from governing the structure of local anomalies, the identities in the Chow ring relate different types of gauge backgrounds on elliptically fibred Calabi-Yau four-folds.

  1. Numerical Conformal Mapping Using Cross-Ratios and Delaunay Triangulation

    NASA Technical Reports Server (NTRS)

    Driscoll, Tobin A.; Vavasis, Stephen A.

    1996-01-01

    We propose a new algorithm for computing the Riemann mapping of the unit disk to a polygon, also known as the Schwarz-Christoffel transformation. The new algorithm, CRDT, is based on cross-ratios of the prevertices, and also on cross-ratios of quadrilaterals in a Delaunay triangulation of the polygon. The CRDT algorithm produces an accurate representation of the Riemann mapping even in the presence of arbitrarily long, thin regions in the polygon, unlike any previous conformal mapping algorithm. We believe that CRDT can never fail to converge to the correct Riemann mapping, but the correctness and convergence proofs depend on conjectures that we have so far not been able to prove. We demonstrate convergence with computational experiments. The Riemann mapping has applications to problems in two-dimensional potential theory and to finite-difference mesh generation. We use CRDT to produce a mapping and to solve a boundary value problem on long, thin regions that no other algorithm can handle.
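
    One geometric ingredient of CRDT is easy to sketch: triangulate the vertex set and take the complex cross-ratio of the quadrilateral around each interior edge. The sketch below uses a hypothetical polygon; the cross-ratio convention is one of several, and the real algorithm triangulates the polygon interior rather than the convex hull of its vertices:

        # Sketch: Delaunay triangulation and cross-ratios of edge quadrilaterals.
        import numpy as np
        from scipy.spatial import Delaunay

        def cross_ratio(z1, z2, z3, z4):          # one common convention
            return (z1 - z3) * (z2 - z4) / ((z1 - z4) * (z2 - z3))

        pts = np.array([[0, 0], [4, 0], [4, 1], [1, 1], [1, 3], [0, 3]], float)
        tri = Delaunay(pts)
        z = pts[:, 0] + 1j * pts[:, 1]

        # Collect, for each edge, the vertices opposite it in adjacent triangles.
        seen = {}
        for simplex in tri.simplices:
            for a, b in [(0, 1), (1, 2), (2, 0)]:
                e = tuple(sorted((simplex[a], simplex[b])))
                seen.setdefault(e, []).append(simplex[3 - a - b])
        for (i, j), opp in seen.items():
            if len(opp) == 2:                     # interior edge: a quadrilateral
                cr = cross_ratio(z[opp[0]], z[i], z[opp[1]], z[j])
                print(f"edge {i}-{j}: cross-ratio {cr:.3f}")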

  2. Lie-Hamilton systems on the plane: Properties, classification and applications

    NASA Astrophysics Data System (ADS)

    Ballesteros, A.; Blasco, A.; Herranz, F. J.; de Lucas, J.; Sardón, C.

    2015-04-01

    We study Lie-Hamilton systems on the plane, i.e. systems of first-order differential equations describing the integral curves of a t-dependent vector field taking values in a finite-dimensional real Lie algebra of planar Hamiltonian vector fields with respect to a Poisson structure. We start with the local classification of finite-dimensional real Lie algebras of vector fields on the plane obtained in González-López, Kamran, and Olver (1992) [23] and we interpret their results as a local classification of Lie systems. By determining which of these real Lie algebras consist of Hamiltonian vector fields relative to a Poisson structure, we provide the complete local classification of Lie-Hamilton systems on the plane. Through our results, we present and study new Lie-Hamilton systems of interest, which are used to investigate relevant non-autonomous differential equations; e.g. we obtain explicit local diffeomorphisms between such systems. We also analyse biomathematical models, the Milne-Pinney equations, second-order Kummer-Schwarz equations, complex Riccati equations and Buchdahl equations.

  3. Computational analysis of amoeboid swimming at low Reynolds number.

    PubMed

    Wang, Qixuan; Othmer, Hans G

    2016-06-01

    Recent experimental work has shown that eukaryotic cells can swim in a fluid as well as crawl on a substrate. We investigate the swimming behavior of Dictyostelium discoideum amoebae, which swim by initiating traveling protrusions at the front that propagate rearward. In our model we prescribe the velocity at the surface of the swimming cell and use techniques of complex analysis to develop 2D models that enable us to study the fluid-cell interaction. Shapes that approximate the protrusions used by Dictyostelium discoideum can be generated via the Schwarz-Christoffel transformation, and the boundary-value problem that results for swimmers in the Stokes flow regime is then reduced to an integral equation on the boundary of the unit disk. We analyze the swimming characteristics of several varieties of swimming Dictyostelium discoideum amoebae, and discuss how the slenderness of the cell body and the shape of the protrusions affect the swimming of these cells. The results may provide guidance in designing low-Reynolds-number swimming models.

  4. A Partitioning Algorithm for Block-Diagonal Matrices With Overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina

    2008-02-02

    We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of the Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions such that every partition Ω_i has at most two neighbors, Ω_{i-1} and Ω_{i+1}. First, an ordering algorithm that reduces the matrix profile, such as the reverse Cuthill-McKee algorithm, is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
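
    A sketch of the first stage under stated assumptions: SciPy's reverse Cuthill-McKee stands in for the profile-reducing ordering, and a naive equal-size overlapped split stands in for the paper's iterative refinement:

        # Sketch: profile-reducing ordering, then a naive overlapped split.
        import numpy as np
        from scipy.sparse import random as sprandom
        from scipy.sparse.csgraph import reverse_cuthill_mckee

        A = sprandom(60, 60, density=0.05, random_state=3, format="csr")
        A = (A + A.T).tocsr()                        # symmetrize for RCM
        perm = reverse_cuthill_mckee(A, symmetric_mode=True)
        A = A[perm, :][:, perm]                      # reordered, profile reduced

        K, overlap, n = 4, 6, A.shape[0]
        size = n // K
        blocks = []
        for k in range(K):                           # consecutive blocks share rows
            lo = max(0, k * size - overlap // 2)
            hi = min(n, (k + 1) * size + overlap // 2)
            blocks.append(range(lo, hi))
        for k, b in enumerate(blocks):
            print(f"block {k}: rows {b[0]}..{b[-1]}")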

  5. LHCb anomalies from a natural perspective

    NASA Astrophysics Data System (ADS)

    García, Isabel García

    2017-03-01

    Tension between the Standard Model (SM) and data concerning b → s processes has become apparent, most notably in the R_K ratio, which probes lepton non-universality in b decays, and in measurements involving the decays B → K*μ+μ- and B_s → φμ+μ-. Careful analysis of a wide range of b → s data shows that certain kinds of new physics can significantly ameliorate agreement with experiment. Here, we show that these b → s anomalies can be naturally accommodated in the context of Natural Scherk-Schwarz Theories of the Weak Scale, a class of models designed to address the hierarchy problem. No extra states beyond those naturally present in the theory need to be introduced in order to accommodate these anomalies, and the assumptions required regarding flavor-violating couplings are very mild. Moreover, the structure of these models makes sharp predictions regarding B meson decays into final states including τ+τ- pairs, which will provide a future test of theories of this type.

  6. Electromagnetic pulse (EMP) radiation by laser interaction with a solid H2 ribbon

    NASA Astrophysics Data System (ADS)

    De Marco, M.; Krása, J.; Cikhardt, J.; Velyhan, A.; Pfeifer, M.; Dudžák, R.; Dostál, J.; Krouský, E.; Limpouch, J.; Pisarczyk, T.; Kalinowska, Z.; Chodukowski, T.; Ullschmied, J.; Giuffrida, L.; Chatain, D.; Perin, J.-P.; Margarone, D.

    2017-08-01

    The electromagnetic pulses (EMPs) generated during the interaction of a focused 1.315-μm sub-nanosecond laser pulse with a solid hydrogen ribbon were measured. The strength and temporal characteristics of the EMPs were found to depend on the target density. If a low-density target is ionized during the interaction with the laser and the plasma does not physically touch the target holder, the EMP is weaker in strength and shorter in duration. It is shown that during the H2 target experiment the EMP does not strongly affect the response of fast electronic devices. The measurements of the EMP were carried out with Rohde & Schwarz B-Probes, which are particularly sensitive in the frequency range from 30 MHz to 1 GHz. Numerical simulations of the resonant frequencies of the target chamber used in the experiment at the Prague Asterix Laser System kJ-class laser facility elucidate the peaked structure of the EMP frequency spectra in the GHz domain.

  7. Editorial--in this issue: innate immunity in normal and pathologic circumstances.

    PubMed

    Bot, Adrian

    2014-01-01

    In this issue of the International Reviews of Immunology, we host several reviews dedicated to innate immunity in normal and diseased states. Tan et al. discuss the molecular nature of the innate immune response as a consequence of co-engagement of distinct Toll-like receptors. Schwarz et al. present a regulatory loop leading to increased myelopoiesis through the engagement of CD137L by CD137+ T cells. Kolandaswamy et al. present transcriptomic evidence that distinguishes between two major subsets of monocytes. In a different review, Minasyan presents an interesting hypothesis that erythrocytes have a dominant role in clearing bacteria within the blood stream, while the leukocytes' role is mostly extravascular. Yan et al. discuss the pivotal role of the liver, and of its pre-existing and associated pathology, in sepsis. Zhang outlines the implications of declining neutrophils and the impact on the long-term management of HIV-associated disease. Finally, Lal et al. discuss the multiple roles of γδT cells in innate and adaptive immunity.

  8. All Chern-Simons invariants of 4D, N = 1 gauged superform hierarchies

    NASA Astrophysics Data System (ADS)

    Becker, Katrin; Becker, Melanie; Linch, William D.; Randall, Stephen; Robbins, Daniel

    2017-04-01

    We give a geometric description of supersymmetric gravity/(non-)abelian p-form hierarchies in superspaces with 4D, N = 1 super-Poincaré invariance. These hierarchies give rise to Chern-Simons-like invariants, such as those of the 5D, N = 1 graviphoton and the eleven-dimensional 3-form but also generalizations such as Green-Schwarz-like/ BF -type couplings. Previous constructions based on prepotential superfields are reinterpreted in terms of p-forms in superspace thereby elucidating the underlying geometry. This vastly simplifies the calculations of superspace field-strengths, Bianchi identities, and Chern-Simons invariants. Using this, we prove the validity of a recursive formula for the conditions defining these actions for any such tensor hierarchy. Solving it at quadratic and cubic orders, we recover the known results for the BF -type and cubic Chern-Simons actions. As an application, we compute the quartic invariant ~ A dA dA dA + ... relevant, for example, to seven-dimensional supergravity compactifications.

  9. Analytical observations on the aerodynamics of a delta wing with leading edge flaps

    NASA Technical Reports Server (NTRS)

    Oh, S.; Tavella, D.

    1986-01-01

    The effect of a leading-edge flap on the aerodynamics of a low-aspect-ratio delta wing is studied analytically. The separated flow field about the wing is represented by a simple vortex model composed of a conical straight vortex sheet and a concentrated vortex. The analysis is carried out in the cross-flow plane by mapping the wing trace, by means of the Schwarz-Christoffel transformation, onto the real axis of the transformed plane. Particular attention is given to the influence of the angle of attack and the flap deflection angle on the lift and drag forces. Both lift and drag decrease with flap deflection, while the lift-to-drag ratio increases. A simple coordinate transformation is used to obtain a closed-form expression for the lift-to-drag ratio as a function of flap deflection. The main effect of leading-edge flap deflection is a partial suppression of the separated flow on the lee side of the wing. Qualitative comparison with experiments is presented, showing agreement in the general trends.

  10. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter (or parameters) able to deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been actively pursued by numerous researchers. A generic model is a model that can be operated to solve any variety of optimization problem. Using an object-oriented method, a generic model for optimization was constructed. Moreover, two optimization methods, simulated annealing and hill climbing, were used in constructing the model and were then compared to find the more effective one. The results showed that both methods gave the same value of the objective function, while the hill-climbing-based model consumed the shortest running time.
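
    A minimal sketch of the comparison the study performs, on a toy one-dimensional objective with several local minima (all step sizes and schedules are arbitrary choices): hill climbing accepts only improvements, while simulated annealing occasionally accepts worse moves and so can escape local minima:

        # Sketch: hill climbing vs. simulated annealing on a multimodal objective.
        import numpy as np

        def f(x):
            return x**2 + 10.0 * np.sin(2.0 * x)   # several local minima

        rng = np.random.default_rng(4)

        def hill_climb(x, steps=2000):
            for _ in range(steps):
                cand = x + 0.1 * rng.standard_normal()
                if f(cand) < f(x):                  # accept improvements only
                    x = cand
            return x

        def sim_anneal(x, steps=2000, T0=5.0):
            for i in range(steps):
                T = T0 * (1.0 - i / steps) + 1e-9   # linear cooling schedule
                cand = x + 0.5 * rng.standard_normal()
                if f(cand) < f(x) or rng.random() < np.exp((f(x) - f(cand)) / T):
                    x = cand                        # sometimes accept worse moves
            return x

        x0 = 4.0
        print("hill climbing:", f(hill_climb(x0)))
        print("annealing    :", f(sim_anneal(x0)))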

  11. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated choice of the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
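
    The wiring of a simplex search around LSQR's damping parameter can be sketched as below; the discrepancy principle, which assumes a known noise norm, stands in here for the paper's automated criterion:

        # Sketch: simplex search for LSQR's damping (regularization) parameter.
        import numpy as np
        from scipy.sparse.linalg import lsqr
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        A = rng.standard_normal((80, 40))
        x_true = rng.standard_normal(40)
        noise = 0.05 * rng.standard_normal(80)
        b = A @ x_true + noise
        delta = np.linalg.norm(noise)              # assumed-known noise level

        def criterion(log_damp):                   # discrepancy-principle stand-in
            x = lsqr(A, b, damp=np.exp(log_damp[0]))[0]
            return (np.linalg.norm(A @ x - b) - delta) ** 2

        res = minimize(criterion, x0=[-3.0], method="Nelder-Mead")
        print("chosen damp:", np.exp(res.x[0]))

    Each criterion evaluation costs one LSQR solve, which is why pairing a cheap bidiagonalization-based solver with a derivative-free simplex search is attractive.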

  12. Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure

    DTIC Science & Technology

    2017-01-05

    Report AFRL-AFOSR-JP-TR-2017-0002 (distribution unlimited, public release) covers the project titled 'Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure'.

  13. Simultaneous optimization method for absorption spectroscopy postprocessing.

    PubMed

    Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T

    2015-05-10

    A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
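
    A toy illustration of simultaneous versus step-wise postprocessing (synthetic two-line spectrum, all parameters invented): the baseline and both line areas are recovered in a single nonlinear least-squares call; in a thermometry application the fitted line-area ratio would then be inverted for temperature:

        # Sketch: fit baseline and two absorption lines simultaneously.
        import numpy as np
        from scipy.optimize import least_squares

        v = np.linspace(0.0, 10.0, 400)            # frequency axis (arbitrary units)

        def gauss(v, c, w):
            return np.exp(-0.5 * ((v - c) / w) ** 2)

        rng = np.random.default_rng(6)
        y = (0.2 + 0.01 * v                        # sloped baseline
             + 1.0 * gauss(v, 3.0, 0.3)            # line 1
             + 0.6 * gauss(v, 7.0, 0.3)            # line 2
             + 0.01 * rng.standard_normal(v.size)) # noise

        def residual(p):                           # p = [a1, a2, b0, b1]
            model = (p[2] + p[3] * v
                     + p[0] * gauss(v, 3.0, 0.3) + p[1] * gauss(v, 7.0, 0.3))
            return model - y

        fit = least_squares(residual, x0=[0.5, 0.5, 0.0, 0.0])
        print("fitted [a1, a2, b0, b1]:", np.round(fit.x, 3))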

  14. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. This method is based on precise ray tracing to calculate the wavefront error, rather than an approximation, for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of the optical elements. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
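
    The optimization layer can be sketched compactly if one assumes, for illustration only, that each element's dominant figure error is astigmatism, which composes as a doubled-angle phasor; in the paper a full ray trace replaces this toy cost:

        # Sketch: simulated annealing over clocking angles (toy astigmatism cost).
        import numpy as np

        rng = np.random.default_rng(7)
        amp = np.array([0.8, 0.5, 0.6])            # per-element astigmatism amplitudes
        phi = np.array([0.3, 1.9, 2.6])            # built-in orientations (rad)

        def cost(theta):                           # astigmatism adds as exp(2j*angle)
            return abs(np.sum(amp * np.exp(2j * (phi + theta))))

        theta = np.zeros(3)
        cur = cost(theta)
        best_theta, best_val = theta.copy(), cur
        T = 1.0
        for _ in range(5000):
            cand = theta + 0.2 * rng.standard_normal(3)
            cv = cost(cand)
            if cv < cur or rng.random() < np.exp((cur - cv) / T):
                theta, cur = cand, cv
                if cv < best_val:
                    best_theta, best_val = cand.copy(), cv
            T *= 0.999                             # geometric cooling
        print("residual error:", best_val,
              "clocking angles:", np.mod(best_theta, 2 * np.pi))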

  15. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  17. Options for Robust Airfoil Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Li, Wu

    2002-01-01

    A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.

  18. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical tool involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
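
    The separable structure that GA-MLR exploits is easy to sketch on a toy biexponential decay (a bare-bones mutation-selection GA stands in for a full GA implementation): the population carries only the nonlinear decay rates, and the linear amplitudes come from a linear least-squares solve at every evaluation:

        # Sketch: GA over nonlinear rates; linear amplitudes by least squares.
        import numpy as np

        rng = np.random.default_rng(8)
        t = np.linspace(0.0, 5.0, 200)
        y = (2.0 * np.exp(-1.3 * t) + 0.7 * np.exp(-0.2 * t)
             + 0.02 * rng.standard_normal(t.size))

        def fitness(rates):
            B = np.exp(-np.outer(t, rates))               # design matrix for rates
            amps, *_ = np.linalg.lstsq(B, y, rcond=None)  # MLR step: linear params
            return np.sum((B @ amps - y) ** 2), amps

        pop = rng.uniform(0.01, 3.0, (40, 2))             # population of rate pairs
        for gen in range(60):
            scores = np.array([fitness(ind)[0] for ind in pop])
            elite = pop[np.argsort(scores)[:10]]          # selection
            children = (elite[rng.integers(0, 10, 30)]
                        + 0.05 * rng.standard_normal((30, 2)))
            pop = np.vstack([elite, np.abs(children) + 1e-4])  # mutate, keep > 0
        best = pop[np.argmin([fitness(ind)[0] for ind in pop])]
        print("rates:", np.sort(best), "amplitudes:", fitness(best)[1])

    Because the amplitudes are solved exactly at each evaluation, the GA searches a two-dimensional space instead of a four-dimensional one, which is the acceleration the abstract describes.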

  19. On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending

    NASA Astrophysics Data System (ADS)

    Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong

    2017-11-01

    A real-time optimization control method is proposed to extend turbofan engine service life. The real-time optimization control is based on an on-board engine model devised with MRR-LSSVR (a multi-input multi-output recursive reduced least-squares support vector regression method). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process. Furthermore, to describe the engine life decay, a thermal mechanical fatigue model of the engine acceleration process is established. The optimization objective function not only contains a sub-item that yields fast engine response, but also includes a sub-item for the total mechanical strain range, which is directly related to engine fatigue life. Finally, simulations of both the conventional optimization control, which considers only engine acceleration performance, and the proposed optimization method have been conducted. The simulations demonstrate that the times of the two control methods from idle to 99.5% of maximum power are equal. However, the engine life using the proposed optimization method is increased by a remarkable 36.17% compared with that using the conventional optimization control.

  20. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need to validate the methods, as a way of effecting a greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, including comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and, finally, experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum-weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum-weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  1. Topology-optimization-based design method of flexures for mounting the primary mirror of a large-aperture space telescope.

    PubMed

    Hu, Rui; Liu, Shutian; Li, Quhao

    2017-05-20

    For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performance of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as dynamic vibration, is considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. A pattern repetition constraint is added, which ensures symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes linked to the mounting flexures, to reduce the computational effort during the optimization iteration process. A candidate optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimensional parameters. Our optimization method yields new mounting structures that significantly enhance the optical performance of the mirror system compared with traditional methods, which only tune the parameters of existing structures. The design results demonstrate the effectiveness of the proposed optimization method.

  2. Optimization of structures to satisfy aeroelastic requirements

    NASA Technical Reports Server (NTRS)

    Rudisill, C. S.

    1975-01-01

    A method for the optimization of structures to satisfy flutter velocity constraints is presented along with a method for determining the flutter velocity. A method for the optimization of structures to satisfy divergence velocity constraints is included.

  3. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ, based on portable near-infrared spectroscopy, the modeling and analysis methods for in-situ detection were researched with 66 rock core samples from well No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and the K-M function) were acquired. With 4 different modeling-data optimization methods, namely principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variable elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD, 2 modeling methods, partial least squares (PLS) and back-propagation artificial neural network (BPANN), and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or the K-M function is the proper spectrum format for the modeling database for both modeling methods. With the two modeling methods and the four data optimization methods, the model precisions for the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of the database using the K-M function spectrum format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of the database using any of the 3 spectrum formats. Apart from the case of reflectance spectra with the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision: the correlation coefficient (Rp) is 0.92 and the standard error of prediction (SEP) is 0.69%.

  4. Fast optimization of glide vehicle reentry trajectory based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jia, Jun; Dong, Ruixing; Yuan, Xuejun; Wang, Chuangwei

    2018-02-01

    An optimization method for reentry trajectories based on a genetic algorithm is presented to meet the need for reentry trajectory optimization of glide vehicles. The dynamic model of the glide vehicle during the reentry period is established. Considering the constraints on heat flux, dynamic pressure, overload, etc., the optimization of the reentry trajectory is investigated using a genetic algorithm. The simulation shows that the method presented in this paper is effective for the optimization of glide vehicle reentry trajectories. The efficiency and speed of this method are comparable with those in the references. The optimization results meet all constraints, and fast online optimization is possible by pre-processing the offline samples.

  5. Optimizing Dynamical Network Structure for Pinning Control

    NASA Astrophysics Data System (ADS)

    Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo

    2016-04-01

    Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.

  6. Clinical feasibility of exercise-based A-V interval optimization for cardiac resynchronization: a pilot study.

    PubMed

    Choudhuri, Indrajit; MacCarter, Dean; Shaw, Rachael; Anderson, Steve; St Cyr, John; Niazi, Imran

    2014-11-01

    One-third of eligible patients fail to respond to cardiac resynchronization therapy (CRT). Current methods to 'optimize' the atrio-ventricular (A-V) interval are performed at rest, which may limit their efficacy during daily activities. We hypothesized that low-intensity cardiopulmonary exercise testing (CPX) could identify the most favorable physiologic combination of specific gas exchange parameters, reflecting pulmonary blood flow or cardiac output, stroke volume, and left atrial pressure, to guide determination of the optimal A-V interval. We assessed the relative feasibility of determining the optimal A-V interval by three methods in 17 patients who underwent optimization of CRT: (1) resting echocardiographic optimization (the Ritter method), (2) resting electrical optimization (intrinsic A-V interval and QRS duration), and (3) low-intensity, steady-state CPX. Five sequential, incremental A-V intervals were programmed in each method. Cardiopulmonary stability and its potential influence on the CPX-based method were assessed. CPX and determination of a physiologically optimal A-V interval were successfully completed in 94.1% of patients, slightly higher than with the resting echo-based approach (88.2%). There was wide variation in the optimal A-V delay determined by each method. There was no observed cardiopulmonary instability, nor any impact of the implant procedure, that affected determination of the CPX-optimized A-V interval. Determining optimized A-V intervals by CPX is feasible. The mechanisms proposed to explain this finding and the long-term impact require further study.

  7. Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method

    NASA Astrophysics Data System (ADS)

    Fitrianingsih, E.; Armellin, R.

    2018-04-01

    One of the objectives of mission design is selecting an optimum orbital transfer, which is often translated as a transfer that requires minimum propellant consumption. In order to assure that the selected trajectory meets the requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives a minimum value, or by evaluating the trajectory according to certain criteria of optimality. The second method is performed by analyzing the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by implementing a correction maneuver to the reference trajectory. In addition to its capability to evaluate the transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimum transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing the results to those from the numerical method. The investigation of the optimality of direct transfers is used as an example of the application of the method. The cases under study are prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.
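    A sketch of the core computation, assuming two-body dynamics in canonical units: the primer vector p obeys p'' = G(r) p with gravity-gradient matrix G = (mu/r^3)(3 rhat rhat^T - I), and the history of |p| between the impulses diagnoses optimality. The initial primer conditions below are illustrative placeholders; in practice they come from the impulse directions (Lawden's conditions).

        import numpy as np
        from scipy.integrate import solve_ivp

        MU = 1.0  # canonical gravitational parameter

        def rhs(t, y):
            r, v, p, pd = y[:3], y[3:6], y[6:9], y[9:12]
            rn = np.linalg.norm(r)
            G = MU / rn**3 * (3 * np.outer(r, r) / rn**2 - np.eye(3))
            return np.concatenate([v, -MU * r / rn**3, pd, G @ p])

        y0 = np.concatenate([[1, 0, 0], [0, 1.1, 0],    # near-circular orbit
                             [0, 1, 0], [0.05, 0, 0]])  # placeholder primer ICs
        sol = solve_ivp(rhs, (0.0, 6.0), y0, rtol=1e-9, atol=1e-9,
                        dense_output=True)

        t = np.linspace(0, 6, 200)
        p_mag = np.linalg.norm(sol.sol(t)[6:9], axis=0)
        # If max |p| exceeds 1 between the impulses, the transfer is non-optimal
        # and an intermediate impulse near the maximum can reduce total Delta-V.
        print("max |p| =", p_mag.max())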

  8. Determining the optimal number of Kanban in multi-products supply chain system

    NASA Astrophysics Data System (ADS)

    Widyadana, G. A.; Wee, H. M.; Chang, Jer-Yuan

    2010-02-01

    Kanban, a key element of the just-in-time system, is a re-order card or signboard giving instructions or triggering the pull system to manufacture or supply a component based on actual usage of material. There are two types of Kanban: production Kanban and withdrawal Kanban. This study uses optimal and meta-heuristic methods to determine the Kanban quantity and withdrawal lot sizes in a supply chain system. Although the mixed integer programming (MIP) method gives an optimal solution, it is not time efficient. For this reason, meta-heuristic methods are suggested. In this study, a genetic algorithm (GA) and a hybrid of genetic algorithm and simulated annealing (GASA) are used. The study compares the performance of GA and GASA with that of the optimal MIP method. The given problems show that both GA and GASA result in near-optimal solutions, and they outdo the optimal method in terms of run time. In addition, the GASA heuristic gives a better performance than the GA heuristic.
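    A mutation-only sketch of the GA/SA hybrid (GASA) idea, assuming a placeholder cost() for the supply-chain cost of a vector of Kanban quantities: children that worsen the cost are still accepted with the Metropolis probability exp(-delta/T), and the temperature is cooled geometrically each generation.

        import numpy as np

        rng = np.random.default_rng(0)

        def cost(k):
            # Placeholder: replace with inventory + setup + shortage cost.
            return float(np.sum((k - np.array([3, 5, 2, 4])) ** 2))

        def gasa(n_items=4, pop_size=20, n_gen=300, T0=10.0, alpha=0.97):
            pop = rng.integers(1, 10, (pop_size, n_items))  # Kanban quantities
            T = T0
            for _ in range(n_gen):
                for i in range(pop_size):
                    child = pop[i].copy()
                    j = rng.integers(n_items)
                    child[j] = max(1, child[j] + rng.choice([-1, 1]))  # mutate one gene
                    delta = cost(child) - cost(pop[i])
                    # SA acceptance: take improvements, sometimes take worse moves.
                    if delta <= 0 or rng.random() < np.exp(-delta / T):
                        pop[i] = child
                T *= alpha  # geometric cooling
            return min(pop, key=cost)

        print(gasa())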

  9. Topology Optimization using the Level Set and eXtended Finite Element Methods: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Villanueva Perez, Carlos Hernan

    Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs and resemble well the results from previous 2D or density-based studies.

  10. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  11. Design of a rotary dielectric elastomer actuator using a topology optimization method based on pairs of curves

    NASA Astrophysics Data System (ADS)

    Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin

    2018-05-01

    Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the field of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators, including bending actuators, linear actuators and rotational actuators, have been designed using experience-based design methods. This paper proposes a new method for the design of DE actuators by using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator is designed using this optimization method. Finally, experiments and comparisons between several DE actuators are made to verify the optimized result.

  12. A Three-Stage Enhanced Reactive Power and Voltage Optimization Method for High Penetration of Solar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Xinda; Huang, Renke; Vallem, Mallikarjuna R.

    This paper presents a three-stage enhanced volt/var optimization method to stabilize voltage fluctuations in transmission networks by optimizing the usage of reactive power control devices. In contrast with existing volt/var optimization algorithms, the proposed method optimizes the voltage profiles of the system, while keeping the voltage and real power output of the generators as close to the original scheduling values as possible. This allows the method to accommodate realistic power system operation and market scenarios, in which the original generation dispatch schedule will not be affected. The proposed method was tested and validated on a modified IEEE 118-bus system with photovoltaic data.

  13. A temperature match based optimization method for daily load prediction considering DLC effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.

    This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
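    A sketch of the template-match idea under simplifying assumptions (synthetic history arrays, RMS hourly temperature mismatch as the match criterion, inverse-error weighting for the prediction):

        import numpy as np

        rng = np.random.default_rng(0)
        hist_temp = rng.normal(20, 5, (365, 24))                  # past hourly temps
        hist_load = 100 + 3 * hist_temp + rng.normal(0, 5, (365, 24))  # past loads
        forecast_temp = rng.normal(22, 5, 24)                     # tomorrow's temps

        def predict_load(forecast_temp, hist_temp, hist_load, k=5):
            # Rank past days by RMS temperature mismatch (the "template match").
            err = np.sqrt(np.mean((hist_temp - forecast_temp) ** 2, axis=1))
            best = np.argsort(err)[:k]
            # Inverse-error weights give closer matches more influence.
            w = 1.0 / (err[best] + 1e-9)
            return (w[:, None] * hist_load[best]).sum(axis=0) / w.sum()

        print(predict_load(forecast_temp, hist_temp, hist_load)[:6])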

  14. Application’s Method of Quadratic Programming for Optimization of Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro

    Investors and fund managers face the problem of optimal portfolio selection, that is, determining the kinds and quantities of investments among several brands. We have developed a method that obtains an optimal stock portfolio two to three times faster than the conventional method through efficient universal optimization. The method is characterized by the quadratic matrix of the utility function and by constraint matrices divided into several sub-matrices, obtained by focusing on the structure of these matrices.
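    The underlying quadratic program, in a hedged sketch with illustrative data: minimize portfolio variance x'Sx less a risk-tolerance-weighted expected return mu'x, subject to full investment and no short sales. The paper's speedup comes from exploiting the block structure of these matrices; the sketch just states the QP with a generic SLSQP solve.

        import numpy as np
        from scipy.optimize import minimize

        mu = np.array([0.08, 0.12, 0.10, 0.07])            # expected returns
        S = np.array([[0.10, 0.02, 0.01, 0.00],            # covariance matrix
                      [0.02, 0.12, 0.03, 0.01],
                      [0.01, 0.03, 0.11, 0.02],
                      [0.00, 0.01, 0.02, 0.09]])
        lam = 1.0                                          # risk tolerance

        obj = lambda x: x @ S @ x - lam * (mu @ x)         # variance minus return
        res = minimize(obj, x0=np.full(4, 0.25), method="SLSQP",
                       bounds=[(0, 1)] * 4,                # no short sales
                       constraints=[{"type": "eq",
                                     "fun": lambda x: x.sum() - 1}])
        print("weights:", res.x.round(3))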

  15. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp; Aoki, Yuriko; Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  16. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA.

    PubMed

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-07-14

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  17. Twin-Telescope Wettzell (TTW)

    NASA Astrophysics Data System (ADS)

    Hase, H.; Dassing, R.; Kronschnabl, G.; Schlüter, W.; Schwarz, W.; Lauber, P.; Kilger, R.

    2007-07-01

    Following the recommendations made by the VLBI2010 vision report of the IVS, a proposal has been made to construct a Twin Telescope for the Fundamental Station Wettzell in order to meet the future requirements of the next VLBI generation. The Twin Telescope consists of two identical radiotelescopes. It is a project of the Federal Agency for Cartography and Geodesy (BKG). This article summarizes the project and some design ideas for the Twin-Telescope.

  18. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed methods are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  19. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki

    To support agile decision making in a rational manner, the role of optimization engineering has become increasingly important under diversified customer demand. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique within the paradigm of optimization engineering. The developed method has the prospect of solving globally various complicated problems appearing in real-world applications. It evolves from the conventional Nelder and Mead simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. After describing an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods on several benchmark problems.
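    One way to picture the hybrid, as an illustration of the concept rather than the paper's exact algorithm: run Nelder-Mead simplex searches from a population of starting points, then re-seed the population around the best point found so far, borrowing PSO's attraction to the global best.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        def rastrigin(x):  # a standard multimodal benchmark
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def evolutionary_simplex(n_dim=4, pop=10, rounds=5, shrink=0.5):
            starts = rng.uniform(-5, 5, (pop, n_dim))
            best_x, best_f = None, np.inf
            for _ in range(rounds):
                for s in starts:
                    r = minimize(rastrigin, s, method="Nelder-Mead")
                    if r.fun < best_f:
                        best_x, best_f = r.x, r.fun
                # Re-seed around the incumbent best, shrinking the search radius.
                starts = best_x + shrink * rng.uniform(-5, 5, (pop, n_dim))
                shrink *= 0.5
            return best_x, best_f

        print(evolutionary_simplex())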

  20. The role of the optimization process in illumination design

    NASA Astrophysics Data System (ADS)

    Gauvin, Michael A.; Jacobsen, David; Byrne, David J.

    2015-07-01

    This paper examines the role of the optimization process in illumination design. We discuss why the starting point of the optimization process is crucial to a better design, and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method are used to demonstrate optimization methods, with a focus on using interactive design tools to create better starting points to streamline the optimization process.
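    A toy sketch of the two approaches named above: a brute-force grid scan of a merit function locates a promising basin, and its best cell seeds a Downhill Simplex (Nelder-Mead) refinement. Here merit() is a hypothetical stand-in for an illumination merit function over two design parameters.

        import numpy as np
        from scipy.optimize import minimize

        def merit(p):
            # Placeholder: e.g. non-uniformity of illuminance on a target plane.
            x, y = p
            return (x - 1.3) ** 2 + (y + 0.7) ** 2 + 0.3 * np.sin(5 * x) * np.sin(5 * y)

        # Brute force: coarse scan to locate a promising basin.
        grid = np.linspace(-3, 3, 31)
        X, Y = np.meshgrid(grid, grid)
        vals = merit((X, Y))
        i, j = np.unravel_index(np.argmin(vals), vals.shape)
        start = np.array([X[i, j], Y[i, j]])

        # Downhill Simplex refinement from the good starting point.
        res = minimize(merit, start, method="Nelder-Mead")
        print("start:", start, "-> optimum:", res.x, res.fun)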

  1. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of the micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model, and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.

  2. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y.; Borland, Michael

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives is compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  3. Implementation of a dose gradient method into optimization of dose distribution in prostate cancer 3D-CRT plans

    PubMed Central

    Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł

    2014-01-01

    Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial and error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields, and maximum dose to the femoral heads. The method was tested on 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. The dose standard deviation in the target volume and the minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithms was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners when optimizing beam weights and wedge angles. Introducing a correction factor for the patient body outline into the dose gradient at the ICRU point improved dose distribution homogeneity. On average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose–volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time: the average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411

  4. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, weather and climate models, etc. Traditional optimization algorithms usually cost a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundreds, making it possible to calibrate very expensive dynamic models, such as regional high resolution land surface models, weather forecast models such as WRF, and intermediate complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on the potential applications of surrogate-based optimization methods.
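    A generic sketch of the adaptive-surrogate loop shared by these methods (not the ASMO code itself): fit a cheap surrogate to a handful of expensive model runs, minimize the surrogate to propose the next run, and refit. Here expensive_model() is a placeholder for, e.g., a land surface model skill score.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(0)

        def expensive_model(x):
            # Placeholder objective (pretend each call costs hours of CPU time).
            return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x[0]))

        bounds = [(-1, 1), (-1, 1)]
        X = rng.uniform(-1, 1, (8, 2))                     # initial design
        y = np.array([expensive_model(x) for x in X])

        for _ in range(15):                                # adaptive refinement
            # Small smoothing guards against near-duplicate sample points.
            surrogate = RBFInterpolator(X, y, smoothing=1e-8)
            res = differential_evolution(lambda x: surrogate(x[None])[0],
                                         bounds, seed=0, tol=1e-8)
            X = np.vstack([X, res.x])                      # run the real model once
            y = np.append(y, expensive_model(res.x))

        print("best point:", X[np.argmin(y)], "value:", y.min())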

  5. Performance index and meta-optimization of a direct search optimization method

    NASA Astrophysics Data System (ADS)

    Krus, P.; Ölvander, J.

    2013-10-01

    Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a singular performance criterion, the entropy rate index based on Shannon's information theory, taking both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used for optimization of the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.

  6. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
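    A sketch of one forward block Gauss-Seidel sweep on a block tridiagonal system Ax = b of the kind that arises from the DTOC optimality conditions, with diagonal blocks D[i] and off-diagonal blocks Lo[i], Up[i]. In practice the sweep would be wrapped as a preconditioner inside a Krylov solver such as GMRES; the example data here are synthetic and diagonally dominant so that even the plain iteration converges.

        import numpy as np

        def block_gs_sweep(D, Lo, Up, b, x):
            """One forward sweep; D: n blocks (m,m), Lo/Up: n-1 blocks (m,m)."""
            n = len(D)
            x = [xi.copy() for xi in x]
            for i in range(n):
                r = b[i].copy()
                if i > 0:
                    r -= Lo[i - 1] @ x[i - 1]   # uses the already-updated neighbor
                if i < n - 1:
                    r -= Up[i] @ x[i + 1]       # uses the old value of the next block
                x[i] = np.linalg.solve(D[i], r) # diagonal blocks are invertible
            return x

        # Tiny synthetic example: 3 blocks of size 2.
        rng = np.random.default_rng(0)
        m, n = 2, 3
        D = [np.eye(m) * 4 + rng.normal(0, 0.1, (m, m)) for _ in range(n)]
        Lo = [rng.normal(0, 0.1, (m, m)) for _ in range(n - 1)]
        Up = [rng.normal(0, 0.1, (m, m)) for _ in range(n - 1)]
        b = [rng.normal(size=m) for _ in range(n)]
        x = [np.zeros(m) for _ in range(n)]
        for _ in range(20):
            x = block_gs_sweep(D, Lo, Up, b, x)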

  7. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than traditional methods.

  8. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to the equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.

  9. Development of optimized segmentation map in dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Yamakawa, Keisuke; Ueki, Hironori

    2012-03-01

    Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between two attenuation coefficients acquired with two kinds of X-ray energy enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but unfortunately we cannot consider the tube current. Secondly, the X-ray condition is not optimized. The condition can be set empirically, but this means that the optimized condition is not used correctly. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors. The distribution of the attenuation coefficient is modeled by considering the tube current. In Method-2, the optimized condition is chosen to minimize segmentation errors over tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 by performing a phantom experiment under a fixed condition, and of Method-2 by performing a phantom experiment under different combinations calculated from the total exposure constraint. When Method-1 was followed by Method-2, the segmentation error was reduced from 37.8% to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.

  10. The relationship between personality traits and sexual self-esteem and its components

    PubMed Central

    Firoozi, Mahbobe; Azmoude, Elham; Asgharipoor, Negar

    2016-01-01

    Background: Women's sexual self-esteem is one of the most important factors affecting women's sexual satisfaction and sexual anxiety. Various aspects of sexual life are blended with the entire personality. Determining the relationship between personality traits and self-concept aspects such as sexual self-esteem leads to a better understanding of sexual behavior in people with different personality traits and helps in identifying the psychological variables affecting their sexual performance. The aim of this study was to determine the relationship between personality traits and sexual self-esteem. Materials and Methods: This correlational study was performed on 127 married women who attended selected health care centers of Mashhad in 2014–2015. Data collection tools included the NEO personality inventory dimensions and the Zeanah and Schwarz sexual self-esteem questionnaire. Data were analyzed through the Pearson correlation coefficient test and a stepwise regression model. Results: The results of the Pearson correlation test showed a significant relationship between the neuroticism personality dimension (r = −0.414), extroversion (r = 0.363), agreeableness (r = 0.420), and conscientiousness (r = 0.364) and sexual self-esteem (P < 0.05). The relationship between openness and sexual self-esteem was not significant (P > 0.05). In addition, based on the results of the stepwise regression model, the three dimensions of agreeableness, neuroticism, and extraversion could predict 27% of the variance in women's sexual self-esteem. Conclusions: The results showed a correlation between women's personality characteristics and their sexual self-esteem. Paying attention to personality characteristics may be important to identify at-risk groups or women having low sexual self-esteem in premarital and family counseling. PMID:27186198

  11. Consumer acceptable risk: how cigarette companies have responded to accusations that their products are defective

    PubMed Central

    Cummings, K Michael; Brown, Anthony; Douglas, Clifford E

    2006-01-01

    Objective To describe arguments used by cigarette companies to defend themselves against charges that their cigarettes were defective and that they could and should have done more to make cigarettes less hazardous. Methods The data for this paper come from the opening statements made by defendants in four court cases: two class action lawsuits (Engle 1999, and Blankenship 2001) and two individual cases (Boeken 2001, and Schwarz 2002). The transcripts of opening statements were reviewed, and statements about product defect claims, product testing, and safe cigarette research were excerpted and coded. Results The responses by cigarette companies to charges that their products were defective have been presented consistently across different cases and by different companies. Essentially, the arguments made by cigarette companies boil down to three claims: (1) smoking is risky, but nothing the companies have done has made cigarettes more dangerous than might otherwise be the case; (2) nothing the companies have done or said has kept anyone from stopping smoking; and (3) the companies have spent a great deal of money to make the safest cigarette acceptable to the smoker. Conclusions Cigarette companies have argued that their products are inherently dangerous but not defective, and that they have worked hard to make their products safer by lowering the tar and nicotine content of cigarettes as recommended by members of the public health community. As a counter-argument, plaintiff attorneys should focus on how cigarette design changes have actually made smoking more acceptable to smokers, thereby discouraging smoking cessation. PMID:17130628

  12. GUT Model Hierarchies from Intersecting Branes

    NASA Astrophysics Data System (ADS)

    Kokorelis, Christos

    2002-08-01

    By employing D6-branes intersecting at angles in D = 4 type I strings, we construct the first examples of three-generation string GUT models (PS-A class) that contain at low energy exactly the standard model spectrum with no extra matter and/or extra gauge group factors. They are based on the group SU(4)C × SU(2)L × SU(2)R. The models are non-supersymmetric, even though SUSY is unbroken in the bulk. Baryon number is gauged and its anomalies are cancelled through a generalized Green-Schwarz mechanism. We also discuss models (PS-B class) which at low energy have the standard model augmented by an anomaly-free U(1) symmetry, and show that multibrane wrappings correspond to a trivial redefinition of the surviving global U(1) at low energies. There are no colour triplet couplings to mediate proton decay, and the proton is stable. The models are compatible with a low string scale of less than 650 GeV and are directly testable at present or future accelerators, as they predict the existence of light left-handed weak fermion doublets at energies between 90 and 246 GeV. The neutrinos get a mass through an unconventional see-saw mechanism. The mass relation me = md at the GUT scale is recovered. Imposing supersymmetry at particular intersections generates non-zero Majorana masses for right-handed neutrinos as well as providing the necessary singlets needed to break the surviving anomaly-free U(1), thus suggesting a gauge symmetry breaking method that can be applied in general left-right symmetric models.

  13. A structural topological optimization method for multi-displacement constraints and any initial topology configuration

    NASA Astrophysics Data System (ADS)

    Rong, J. H.; Yi, J. H.

    2010-10-01

    In density-based topological design, one expects that the final result consists of elements that are either black (solid material) or white (void), without any grey areas. Moreover, one also expects that the optimal topology can be obtained by starting from any initial topology configuration. An improved structural topological optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase transferring step. Firstly, an optimization model is built to deal with the varied displacement limits, design space adjustments, and reasonable relations between the element stiffness matrix and mass and its element topology variable. Secondly, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, by starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, and the convergence of the proposed method is not affected. The final topology obtained by the proposed procedure in the first optimization phase can approach the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve the efficiency and make the designed structural topology black/white in both the phase transferring step and the second optimization adjustment phase, and the optimum topology is finally obtained by the second-phase optimization adjustments. Two examples are presented to show that the topologies obtained by the proposed method have a very good 0/1 design distribution property, and that the computational efficiency is enhanced by reducing the number of elements in the structural finite element model during the two optimization adjustment phases. The examples also show that this method is robust and practicable.

  14. Evolution of Query Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hameurlain, Abdelkader; Morvan, Franck

    Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).

  15. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  16. Optimel: Software for selecting the optimal method

    NASA Astrophysics Data System (ADS)

    Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina

    Optimel, software for selecting the optimal method, automates the process of selecting a solution method from the domain of optimization methods. Optimel offers practical novelty: it saves time and money in exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. It also offers theoretical novelty, because a new method of knowledge structuring was used to obtain the domain. The Optimel domain covers an extended set of methods and their properties, which makes it possible to identify the level of scientific studies, enhance the user's expertise, expand the prospects the user faces, and open up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.

  17. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under certain polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under certain polishing conditions. Comparative analysis with the previous method shows that the optimized method is simpler in form and achieves results of the same accuracy with less calculation time.

  18. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization into several subtask optimizations that may be executed concurrently, and a system optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of the compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  19. Optimal Price Decision Problem for Simultaneous Multi-article Auction and Its Optimal Price Searching Method by Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Masuda, Kazuaki; Aiyoshi, Eitaro

    We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines both whether each seller sells his/her article and which article(s) each buyer buys, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
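    A plain PSO searcher of the kind applied to the dual problem, as a hedged sketch: particles explore the space of transaction prices (the Lagrange multipliers), and dual() is a hypothetical stand-in for the auction's dual objective, which in the paper is evaluated by solving each participant's subproblem at the given prices.

        import numpy as np

        rng = np.random.default_rng(0)

        def dual(lam):
            # Placeholder dual function of the prices, to be minimized.
            return float(np.sum((lam - np.array([4.0, 7.0, 5.5])) ** 2))

        def pso(f, dim=3, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=10.0):
            x = rng.uniform(lo, hi, (n, dim))          # particle positions (prices)
            v = np.zeros((n, dim))                     # particle velocities
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pval)]                 # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[np.argmin(pval)]
            return g, f(g)

        print(pso(dual))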

  20. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  1. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    PubMed

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
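    A numerical version of the report's graphical example, as a sketch with illustrative coefficients: choose how many "regular" (x1) and "severe" (x2) patients to treat to maximize total health benefit subject to clinician-time and budget constraints.

        import numpy as np
        from scipy.optimize import linprog

        # Maximize 2*x1 + 5*x2 (benefit per regular / severe patient treated).
        c = [-2.0, -5.0]                 # linprog minimizes, so negate the benefit
        A = [[1.0, 3.0],                 # clinician hours per patient
             [100.0, 400.0]]             # cost per patient
        b = [40.0,                       # hours available
             6000.0]                     # budget available
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
        print("treat:", res.x, "benefit:", -res.fun)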

  2. Optimizing some 3-stage W-methods for the time integration of PDEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. Besides, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)) was carried out. Also, some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.

  3. Optimization of Thick, Large Area YBCO Film Growth Through Response Surface Methods

    NASA Astrophysics Data System (ADS)

    Porzio, J.; Mahoney, C. H.; Sullivan, M. C.

    2014-03-01

    We present our work on the optimization of thick, large-area YBa2Cu3O7-δ (YBCO) film growth through response surface methods. Thick, large-area films have commercial uses and have recently been used in dramatic demonstrations of levitation and suspension. Our films are grown via pulsed laser deposition (PLD), and we have optimized the growth parameters via response surface methods, a statistical tool for optimizing selected quantities with respect to a set of variables. We optimized our YBCO films' critical temperatures, thicknesses, and structures with respect to three PLD growth parameters: deposition temperature, laser energy, and deposition pressure. We will present an overview of YBCO growth via pulsed laser deposition, the statistical theory behind response surface methods, and the application of response surface methods to pulsed laser deposition growth of YBCO. Results from the experiment will be presented in a discussion of the optimized film quality. Supported by NSF grant DMR-1305637.
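    A sketch of the response-surface workflow under stated assumptions (synthetic data in coded factor units, a full quadratic model): fit the surface to measured critical temperatures, then maximize the fitted model over the factor ranges.

        import numpy as np
        from scipy.optimize import minimize

        def quad_features(X):
            # Full quadratic model in three factors: intercept, linear,
            # two-factor interactions, and pure quadratic terms.
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (20, 3))              # coded factor levels
        tc = (90 - 3 * (X[:, 0] - 0.2) ** 2 - 2 * (X[:, 1] + 0.1) ** 2
              - 4 * X[:, 2] ** 2 + rng.normal(0, 0.2, 20))  # synthetic Tc data

        beta, *_ = np.linalg.lstsq(quad_features(X), tc, rcond=None)
        neg_surface = lambda x: float(-(quad_features(x[None]) @ beta)[0])
        opt = minimize(neg_surface, np.zeros(3), bounds=[(-1, 1)] * 3)
        print("predicted optimum (coded units):", opt.x)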

  4. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements as the number of design variables increases, as well as methods for predicting the model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.

  5. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, technology from this academic/industrial project has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.

  6. Starting geometry creation and design method for freeform optics.

    PubMed

    Bauer, Aaron; Schiesser, Eric M; Rolland, Jannick P

    2018-05-01

    We describe a method for designing freeform optics based on the aberration theory of freeform surfaces that guides the development of a taxonomy of starting-point geometries with an emphasis on manufacturability. An unconventional approach to the optimization of these starting designs wherein the rotationally invariant 3rd-order aberrations are left uncorrected prior to unobscuring the system is shown to be effective. The optimal starting-point geometry is created for an F/3, 200 mm aperture-class three-mirror imager and is fully optimized using a novel step-by-step method over a 4 × 4 degree field-of-view to exemplify the design method. We then optimize an alternative starting-point geometry that is common in the literature but was quantified here as a sub-optimal candidate for optimization with freeform surfaces. A comparison of the optimized geometries shows the performance of the optimal geometry is at least 16× better, which underscores the importance of the geometry when designing freeform optics.

  7. Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints.

    PubMed

    Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan

    2016-08-22

    Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The fact that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers which can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods which are currently employed in systems and computational biology.
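
    To make the "continuous analogue" idea concrete, here is a minimal sketch under invented assumptions (a two-parameter toy objective and constraint, not the paper's ODE models): the constrained problem is replaced by a projected gradient flow whose equilibria are constrained optima, which an adaptive ODE integrator can then follow.

```python
# Projected gradient flow: an ODE whose equilibria are constrained optima.
# The objective f, constraint g, and starting point are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x):                    # gradient of 0.5*||x - (1, 2)||^2
    return x - np.array([1.0, 2.0])

def g(x):                         # steady-state-like constraint g(x) = 0
    return np.array([x[0] * x[1] - 1.0])

def jac_g(x):                     # constraint Jacobian, shape (1, 2)
    return np.array([[x[1], x[0]]])

def flow(t, x):
    # Project the negative gradient onto the tangent space of g(x) = 0,
    # so trajectories stay on the constraint manifold.
    J = jac_g(x)
    P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
    return -(P @ grad_f(x))

x0 = np.array([2.0, 0.5])                       # starts on the manifold
sol = solve_ivp(flow, (0.0, 50.0), x0, rtol=1e-8)
print(sol.y[:, -1], g(sol.y[:, -1]))            # equilibrium, residual ~ 0
```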

  8. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm-2 or at any depth of interest. The doses at 7 mg·cm-2 in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.

  9. [Study on baking processing technology of hui medicine Aconitum flavum].

    PubMed

    Fu, Xue-yan; Zhang, Bai-tong; Li, Ting-ting; Dong, Lin; Hao, Wen-jing; Yu, Liang

    2013-12-01

    To screen and optimize the processing technology of Aconitum flavum. The acute-toxicity, anti-inflammatory and analgesic experiments were used as indexes. Four processing methods, including decoction, steaming, baking and processing with Chebulae Fructus decoction, were compared to screen the optimum processing method for Aconitum flavum. The baking time was also optimized. The optimal baking process was to bake 1-2 mm decoction pieces at 105 degrees C for 3 hours. The baking method proved to be the optimal processing method for Aconitum flavum. It is shown that this method is simple and stable.

  10. Phase-Division-Based Dynamic Optimization of Linkages for Drawing Servo Presses

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Gang; Wang, Li-Ping; Cao, Yan-Ke

    2017-11-01

    Existing linkage-optimization methods are designed for mechanical presses; few can be directly used for servo presses, so development of the servo press is limited. Based on the complementarity of linkage optimization and motion planning, a phase-division-based linkage-optimization model for a drawing servo press is established. Considering the motion-planning principles of a drawing servo press, and taking account of work rating and efficiency, the constraints of the optimization model are constructed. Linkage is optimized in two modes: use of either constant eccentric speed or constant slide speed in the work segments. The performances of optimized linkages are compared with those of a mature linkage SL4-2000A, which is optimized by a traditional method. The results show that the work rating of a drawing servo press equipped with linkages optimized by this new method improved and the root-mean-square torque of the servo motors is reduced by more than 10%. This research provides a promising method for designing energy-saving drawing servo presses with high work ratings.

  11. A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2001-01-01

    An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.

  12. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
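
    The cost contrast the abstract draws can be seen in a few lines: a finite-difference ("black-box") gradient needs extra solves per design variable, while an implicitly differentiated gradient reuses one adjoint-type linear solve. The model problem below is invented for illustration and is not the authors' duct-flow problem.

```python
# Finite-difference vs implicit (adjoint-style) gradient on a toy problem.
import numpy as np

def solve_state(a):
    # "Flow solver": the state u satisfies A(a) u = b for a design scalar a.
    A = np.array([[2.0 + a, -1.0], [-1.0, 2.0]])
    b = np.array([1.0, 0.0])
    return np.linalg.solve(A, b), A

def objective(a):
    u, _ = solve_state(a)
    return 0.5 * np.sum(u**2)

def grad_fd(a, h=1e-6):
    # Black-box gradient: two extra solves per design variable
    return (objective(a + h) - objective(a - h)) / (2 * h)

def grad_implicit(a):
    # dJ/da = -lambda^T (dA/da) u, with the adjoint solve A^T lambda = u
    u, A = solve_state(a)
    lam = np.linalg.solve(A.T, u)
    dA_da = np.array([[1.0, 0.0], [0.0, 0.0]])
    return -lam @ (dA_da @ u)

print(grad_fd(0.3), grad_implicit(0.3))   # should agree to ~1e-8
```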

  13. A seismic optimization procedure for reinforced concrete framed buildings based on eigenfrequency optimization

    NASA Astrophysics Data System (ADS)

    Arroyo, Orlando; Gutiérrez, Sergio

    2017-07-01

    Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted among practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, simple to implement and efficient. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time history analyses, which show important reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example for an irregular six-storey building demonstrates that the method provides benefits to a wide range of RCF structures and supports the applicability of the proposed method.

  14. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods that are suited for gradient/nongradient-based optimization, large scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  15. An Optimal Control Method for Maximizing the Efficiency of Direct Drive Ocean Wave Energy Extraction System

    PubMed Central

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

    The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of the direct drive ocean wave energy extraction system. An optimal control method based on internal model proportion integration differentiation (IM-PID) is proposed in this paper, whereas most ocean wave energy extraction systems are optimized through their structure, weight, and material. With this control method, the heave speed of the outer heavy buoy of the energy extraction system is in resonance with the incident wave, and the system efficiency is largely improved. Validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that the IM-PID control method has good robustness, high precision, and strong anti-interference ability. PMID:25152913

  16. Application of Multi-Objective Human Learning Optimization Method to Solve AC/DC Multi-Objective Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Yan, Zheng; He, Guangyu

    2016-06-01

    This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). Firstly, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived, involving an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGAII) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to find optimal solutions. Above all, the MOHLO method can obtain a more complete Pareto front than the NSGAII method. However, how to choose the optimal solution from the Pareto front depends mainly on whether the decision makers take the economic point of view or the energy-saving and emission-reduction point of view.

  17. An optimal control method for maximizing the efficiency of direct drive ocean wave energy extraction system.

    PubMed

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

    The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of the direct drive ocean wave energy extraction system. An optimal control method based on internal model proportion integration differentiation (IM-PID) is proposed in this paper, whereas most ocean wave energy extraction systems are optimized through their structure, weight, and material. With this control method, the heave speed of the outer heavy buoy of the energy extraction system is in resonance with the incident wave, and the system efficiency is largely improved. Validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that the IM-PID control method has good robustness, high precision, and strong anti-interference ability.

  18. The Improvement of Particle Swarm Optimization: a Case Study of Optimal Operation in Goupitan Reservoir

    NASA Astrophysics Data System (ADS)

    Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui

    2018-02-01

    Due to its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To solve this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to Goupitan reservoir optimal operation proves that the improved method has better accuracy and higher reliability at a small additional cost.
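
    The hybrid the abstract describes, a global particle-swarm search whose best candidate is polished by the downhill simplex (Nelder-Mead) method, can be sketched as follows; the test function and all parameter choices are generic illustrations, not the reservoir operation model.

```python
# Particle swarm search followed by a Nelder-Mead local polish.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)  # Rastrigin

rng = np.random.default_rng(1)
n, dim = 20, 2
x = rng.uniform(-5.0, 5.0, (n, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.apply_along_axis(f, 1, x)

for _ in range(200):
    g = pbest[np.argmin(pbest_f)]                   # global best so far
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    fx = np.apply_along_axis(f, 1, x)
    better = fx < pbest_f
    pbest[better], pbest_f[better] = x[better], fx[better]

# Downhill simplex refinement of the swarm's best point
res = minimize(f, pbest[np.argmin(pbest_f)], method="Nelder-Mead")
print(res.x, res.fun)
```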

  19. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    PubMed

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis and Euclidean distance, and the Hun River was taken as a method validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized monitoring network could correctly represent the original monitoring network. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimization method was identified as feasible, efficient, and economical.

  20. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces computation time wasted on beamlets that contribute weakly to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
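
    The numerical ingredient named here, gradient descent with momentum on noisy, few-history gradient estimates, can be sketched with a toy quadratic dose model; the operator, noise level, and step sizes below are invented stand-ins, not the authors' MC transport platform.

```python
# Gradient descent with momentum on deliberately noisy gradients.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 0.5], [0.5, 1.0]])     # toy fluence-to-dose operator
target = np.array([1.0, 2.0])              # desired dose

w = np.zeros(2)                            # fluence weights to optimize
m = np.zeros(2)                            # momentum accumulator
lr, beta = 0.05, 0.9

for step in range(500):
    # Stochastic gradient of 0.5*||A w - target||^2, as if estimated
    # from a handful of particle histories.
    noise = rng.normal(0.0, 0.5, size=2)
    grad = A.T @ (A @ w - target) + noise
    m = beta * m + (1.0 - beta) * grad     # momentum smooths the noise
    w = w - lr * m

print("optimized weights:", w, "exact:", np.linalg.solve(A, target))
```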

  1. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then yields either a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  2. Optimal Spatial Design of Capacity and Quantity of Rainwater Catchment Systems for Urban Flood Mitigation

    NASA Astrophysics Data System (ADS)

    Huang, C.; Hsu, N.

    2013-12-01

    This study incorporates Low-Impact Development (LID) technology for rainwater catchment systems into a Storm-Water runoff Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation. This study proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of the capacity and quantity of rainwater catchment facilities, so that the reduction in flooding under a variety of design forms can be simulated by SWMM. Moreover, we calculate the net benefit, equal to the decrease in inundation loss minus the facility cost, and the best solution of the simulation method becomes the initial search solution of the optimization model. In the optimization method, we first apply the outcome of the simulation method and a Back-Propagation Neural Network (BPNN) to develop a water level simulation model of the urban drainage system, in order to replace SWMM, whose operation is based on a graphical user interface and which is hard to couple with an optimization model and method. After that, we embed the BPNN-based simulation model into the developed optimization model, whose objective function minimizes the negative net benefit. Finally, we establish a tabu search-based algorithm to optimize the planning solution. This study applies the developed method in Zhonghe Dist., Taiwan. Results showed that applying tabu search and the BPNN-based simulation model in the optimization model not only finds solutions 12.75% better than the simulation method, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% for historical flood events.

  3. Finite burn maneuver modeling for a generalized spacecraft trajectory design and optimization system.

    PubMed

    Ocampo, Cesar

    2004-05-01

    The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system is presented. A generalized trajectory design and optimization system is a system that uses a single unified framework that facilitates the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with the use of controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that are generally position-, velocity-, mass-, and time-dependent. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid parameter optimization that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin.

  4. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
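
    As a rough illustration of the metaheuristic stage only, the sketch below implements a textbook firefly algorithm on a generic test function; the knot precomputation, De Boor refinement, and singular-value-decomposition steps of the paper are not reproduced, and all parameter values are illustrative.

```python
# Compact firefly algorithm: dimmer fireflies move toward brighter ones.
import numpy as np

def f(x):                                  # objective (lower = brighter)
    return np.sum((x - 0.5)**2)

rng = np.random.default_rng(2)
n, dim = 15, 2
alpha, beta0, gamma = 0.2, 1.0, 1.0        # randomness, attraction, absorption
x = rng.uniform(-2.0, 2.0, (n, dim))

for _ in range(100):
    val = np.apply_along_axis(f, 1, x)
    for i in range(n):
        for j in range(n):
            if val[j] < val[i]:            # move firefly i toward brighter j
                r2 = np.sum((x[i] - x[j])**2)
                beta = beta0 * np.exp(-gamma * r2)
                x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.97                          # shrink the random walk over time

best = x[np.argmin(np.apply_along_axis(f, 1, x))]
print(best, f(best))
```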

  5. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  6. A new rational-based optimal design strategy of ship structure based on multi-level analysis and super-element modeling method

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Deyu

    2011-09-01

    A new multi-level analysis method introducing the super-element modeling method, derived from the multi-level analysis method first proposed by O. F. Hughes, is proposed in this paper to address the high time cost of adopting a rational-based optimal design method in ship structural design. Furthermore, the method was verified by its effective application to the optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship, under static and quasi-static loads, was used as the analysis object for evaluating the structural performance of the mid-ship module, including static strength and buckling performance. Research results reveal that this new method can substantially reduce the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.

  7. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1994-01-01

    This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers with large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers with small numbers of burns; there seems to be a trend that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are obviously of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions with large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones. An orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.

  8. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings rooted in an optimal feature space is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. The new information extraction method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and has good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be used for information extraction from images of damaged buildings at different resolutions, and thus to seek the optimal observation scale of damaged buildings through accuracy evaluation. It is estimated that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  9. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
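
    As a concrete illustration of the two method classes named here, the sketch below runs a first-order conjugate-gradient pass and then second-order Newton steps on a generic smooth objective; the function, gradient, Hessian, and starting point are invented stand-ins for a control-parameter functional.

```python
# First-order (conjugate gradient) pass, then Newton refinement.
import numpy as np
from scipy.optimize import minimize

def f(p):
    return (p[0] - 1.0)**4 + (p[0] - 1.0)**2 + (p[1] - 2.0)**2

def grad(p):
    return np.array([4.0 * (p[0] - 1.0)**3 + 2.0 * (p[0] - 1.0),
                     2.0 * (p[1] - 2.0)])

def hess(p):
    # Diagonal Hessian of f above; always positive definite here
    return np.array([[12.0 * (p[0] - 1.0)**2 + 2.0, 0.0],
                     [0.0, 2.0]])

# First-order stage: conjugate gradient
p = minimize(f, np.array([3.0, 0.0]), jac=grad, method="CG").x

# Second-order refinement: full Newton steps
for _ in range(10):
    g = grad(p)
    if np.linalg.norm(g) < 1e-12:
        break
    p = p - np.linalg.solve(hess(p), g)
print(p)   # converges to (1, 2)
```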

  10. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  11. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    NASA Astrophysics Data System (ADS)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.

  12. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least squares finite element method is proposed for two dimensional and three dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least squares finite element method. In the usual least squares finite element method, the second order equation -∇·(∇u) + u = f is recast as a first order system -∇·p + u = f, ∇u - p = 0. The error analysis and numerical experiment show that, in this usual least squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇×p = 0 should be included in the first order system.

  13. Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted

    DTIC Science & Technology

    2016-09-16

    Simulation results, using both an optimized coil and a conventional coil, are generated using a finite element method (FEM) model. A 3D finite element model is used to analyze the performance of the optimized coil.

  14. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in better performance at stall conditions, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed +0.28 points improvement of isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed: First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation of the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41, +0.56 and +0.9 points improvement in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choking margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single and multi-point optimizations.

  15. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James L.

    2010-01-01

    NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.

  16. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed optimization methods family. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. 
The developed optimization scenarios and tools can be used to approach similar problems.

  17. A Non-hydrostatic Atmospheric Model for Global High-resolution Simulation

    NASA Astrophysics Data System (ADS)

    Peng, X.; Li, X.

    2017-12-01

    A three-dimensional non-hydrostatic atmosphere model, GRAPES_YY, is developed on the spherical Yin-Yang grid system to enable global high-resolution weather simulation and forecasting at the CAMS/CMA. The quasi-uniform grid makes the computation highly efficient and free of the pole problem. Full representation of the three-dimensional Coriolis force is considered in the governing equations. Under the constraint of third-order boundary interpolation, the model is integrated with the semi-implicit semi-Lagrangian method using the same code on both zones. A static halo region is set to ensure computation of cross-boundary transport and the updating of Dirichlet-type boundary conditions in the solution of elliptic equations with the Schwarz method. A series of dynamical test cases, including solid-body advection, balanced geostrophic flow, zonal flow over an isolated mountain, and the development of the Rossby-Haurwitz wave and a baroclinic wave, is carried out, and excellent computational stability and accuracy of the dynamic core are confirmed. After implementation of the physical processes of long- and short-wave radiation, cumulus convection, micro-physical transformation of water substances, and turbulent processes in the planetary boundary layer, including surface-layer vertical flux parameterization, a long-term run of the model is carried out under an idealized aqua-planet configuration to test the model physics and model ability in both short-term and long-term integrations. In the aqua-planet experiment, the model shows an Earth-like structure of circulation. The time-zonal mean temperature, wind components and humidity illustrate a reasonable subtropical zonal westerly jet, meridional three-cell circulation, tropical convection and thermodynamic structures. The prescribed SST and solar insolation, symmetric about the equator, enhance the ITCZ and tropical precipitation, which is concentrated in the tropical region. Additional analysis and tuning of the model is still going on, and preliminary results have demonstrated the potential for high-resolution application of the model to global weather prediction and even seasonal climate projection.
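
    The Schwarz iteration used here for the elliptic solves between the two grid zones has a very small-scale analogue; the sketch below runs an overlapping Schwarz iteration on a 1-D Poisson problem, with the grid size, overlap, and iteration count chosen purely for illustration, not taken from the model.

```python
# Toy overlapping Schwarz iteration on -u'' = 1, u(0) = u(1) = 0.
import numpy as np

n = 101
h = 1.0 / (n - 1)
fvals = np.ones(n)
u = np.zeros(n)

left = slice(0, 60)                        # two overlapping subdomains
right = slice(40, n)                       # overlap of 20 points

def solve_dirichlet(f, ua, ub):
    # Direct solve of -u'' = f on a subdomain with Dirichlet ends ua, ub
    m = f.size
    A = (np.diag(2.0 * np.ones(m - 2)) - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / h**2
    rhs = f[1:-1].copy()
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return np.concatenate([[ua], np.linalg.solve(A, rhs), [ub]])

for it in range(30):
    # Each subdomain solve takes its interface value from the current
    # global iterate (the "halo"), then overwrites its part of u.
    u[left] = solve_dirichlet(fvals[left], 0.0, u[left][-1])
    u[right] = solve_dirichlet(fvals[right], u[right][0], 0.0)

xg = np.linspace(0.0, 1.0, n)
print("max error:", np.max(np.abs(u - 0.5 * xg * (1 - xg))))
```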

  18. [Revealing three psychological states before an acting out in 32 patients hospitalized for suicide attempt].

    PubMed

    Vandevoorde, J

    2013-09-01

    The purpose of this study was to reconstruct the psychological state of suicidal subjects at the time of the execution of the gesture according to their thoughts, their emotions, their actions, their fantasy life and consciousness. Thirty-three adult subjects agreed, just days after their suicide attempt, to answer the Interview Method for Suicidal Acts (IMSA). The object of this semi-structured interview is to invite the suicidal subject to reconstruct mentally and chronologically his or her suicide attempt. The IMSA can follow the thoughts, behavior, consciousness, emotions and activity of the suicidal scenario by helping the patient to reconstruct the phenomenology of his/her actions until the final suicidal gesture. The data were processed using the TwoStep classification method in SPSS, based on the Schwarz Bayesian criterion. The results highlight three main types of psychological state: (1) a "kinesthetic" psychological state (called "type K") is characterized by a rupture between the subjective sensation of motor movement and effective motility (motor automatism), the presence of a dissociative state, an "empty" feeling of thought and the absence of an external triggering factor; (2) a "cognitive" psychological state (called "type C") is characterized by a significant reflection on the decision to die and infiltration of the morbid thought, an intense fantasy life around the suicidal scenario, a clear state of consciousness, and an absence of loss of motor control; (3) an "emotional" psychological state (called "type E") is characterized by confusing and chaotic emotional processes, the emergence of a dissociative state, and a significant impact of external events on the onset of the suicide attempt. This classification of suicide attempts allows us to identify the different combinations of the suicidal process and opens up new therapeutic strategies. Copyright © 2013 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  19. A novel method for overlapping community detection using Multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa

    2018-09-01

    The problem of community detection, as one of the most important applications of network science, can be addressed effectively by multi-objective optimization. In this paper, we aim to present a novel efficient method based on this approach. Also, in this study the idea of using all Pareto fronts to detect overlapping communities is introduced. The proposed method has two main advantages compared to other multi-objective optimization based approaches. The first advantage is scalability, and the second is the ability to find overlapping communities. Unlike most previous works, the proposed method is able to find overlapping communities effectively. The new algorithm works by extracting appropriate communities from all the Pareto optimal solutions, instead of choosing a single optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed method performs better in comparison with other methods.
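
    The filtering step such methods rely on, keeping only the non-dominated candidate solutions from which communities are then extracted, fits in a few lines; the objective values below are random stand-ins for real community-quality scores, not output of the paper's algorithm.

```python
# Non-dominated (Pareto) filtering of candidate solutions.
import numpy as np

rng = np.random.default_rng(4)
scores = rng.random((30, 2))     # two quality objectives, both maximized

def pareto_front(points):
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other point is >= in all objectives
        # and strictly > in at least one.
        dominated = np.any(np.all(points >= p, axis=1) &
                           np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

print("Pareto-optimal candidates:", pareto_front(scores))
```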

  20. A method to optimize the shield compact and lightweight combining the structure with components together by genetic algorithm and MCNP code.

    PubMed

    Cai, Yao; Hu, Huasi; Pan, Ziheng; Hu, Guang; Zhang, Tao

    2018-05-17

    To make the shield for neutrons and gamma rays compact and lightweight, a method combining the structure and components together was established, employing genetic algorithms and the MCNP code. As a typical case, the fission energy spectrum of 235U, which mixes neutrons and gamma rays, was adopted in this study. Six types of materials were presented and optimized by the method. Spherical geometry was adopted in the optimization after checking the geometry effect. Simulations were made to verify the reliability of the optimization method and the efficiency of the optimized materials. To compare the materials visually and conveniently, the volume and weight needed to build a shield are employed. The results showed that the composite multilayer material has the best performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
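
    To make the optimization loop concrete, here is a toy genetic algorithm over layer thicknesses with an invented analytic fitness standing in for the MCNP transport calculation; the materials, attenuation coefficients, densities, and weighting are all hypothetical.

```python
# Toy GA: chromosomes encode layer thicknesses; the fitness trades
# (invented) attenuation against weight. No elitism, for brevity.
import numpy as np

rng = np.random.default_rng(3)
pop, n_layers = 40, 3
mu = np.array([0.06, 0.11, 0.02])         # made-up attenuation coeffs [1/cm]
rho = np.array([7.8, 0.94, 11.3])         # made-up densities [g/cm^3]

x = rng.uniform(1.0, 20.0, (pop, n_layers))   # layer thicknesses [cm]

def fitness(layers):
    transmission = np.exp(-np.sum(mu * layers))
    weight = np.sum(rho * layers)
    return -(transmission * 1e3 + 0.1 * weight)   # penalize both

for gen in range(100):
    f = np.array([fitness(ind) for ind in x])
    parents = x[np.argsort(-f)[: pop // 2]]       # keep the better half
    # Uniform crossover between random parent pairs, plus Gaussian mutation
    a = parents[rng.integers(0, pop // 2, pop)]
    b = parents[rng.integers(0, pop // 2, pop)]
    mask = rng.random((pop, n_layers)) < 0.5
    x = np.where(mask, a, b) + rng.normal(0.0, 0.3, (pop, n_layers))
    x = np.clip(x, 0.5, 30.0)

best = x[np.argmax([fitness(ind) for ind in x])]
print("layer thicknesses [cm]:", best)
```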

  1. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  2. Low thrust spacecraft transfers optimization method with the stepwise control structure in the Earth-Moon system in terms of the L1-L2 transfer

    NASA Astrophysics Data System (ADS)

    Fain, M. K.; Starinova, O. L.

    2016-04-01

    The paper outlines a method for determining the locally optimal stepwise control structure in the problem of low-thrust spacecraft transfer optimization in the Earth-Moon system, including the L1-L2 transfer. Total flight time is considered as the optimization criterion. The optimal control programs were obtained by using Pontryagin's maximum principle. As a result of the optimization, optimal control programs, corresponding trajectories, and minimal total flight times were determined.

  3. Vecteurs Singuliers des Theories des Champs Conformes Minimales

    NASA Astrophysics Data System (ADS)

    Benoit, Louis

    In 1984, Belavin, Polyakov and Zamolodchikov revolutionized field theory by exhibiting a new class of theories: two-dimensional quantum field theories invariant under conformal transformations. The algebra of conformal transformations of space-time has a remarkable feature: in two dimensions it possesses an infinite number of generators. This property imposes such strong conditions on the correlation functions that they can be evaluated without any approximation. The fields of conformal theories belong to highest-weight representations of the Virasoro algebra, a central extension of the conformal algebra of the plane. These representations are labeled by h, the conformal weight of their highest-weight vector, and by the central charge c, the factor of the central extension, common to all representations of a given theory. Minimal conformal theories are built from a finite number of representations. Among them are unitary theories whose representations form the discrete series of the Virasoro algebra; their weights have the form h_{p,q}(m) = [(p(m+1) - qm)^2 - 1] / (4m(m+1)), where p, q and m are positive integers with p + q <= m + 1. The integer m parametrizes the central charge: c(m) = 1 - 6/(m(m+1)) with m >= 2. These representations possess an invariant subspace generated by two subrepresentations with h_1 = h_{p,q} + pq and h_2 = h_{p,q} + (m-p)(m+1-q); their highest-weight vectors are called singular vectors and are denoted |Psi_{p,q}> and |Psi_{m-p,m+1-q}>, respectively. Superconformal theories are a supersymmetric version of conformal theories. Their fields belong to highest-weight representations of the Neveu-Schwarz algebra, one of the two supersymmetric extensions of the Virasoro algebra. Minimal superconformal theories have the same structure as minimal conformal theories. Their representations are members of the series h_{p,q} = [(p(m+2) - qm)^2 - 4] / (8m(m+2)), where p, q and m are positive integers, p and q having the same parity, and p + q <= m + 2. The central charge is given by c(m) = 3/2 - 12/(m(m+2)) with m >= 2. The singular vectors |Psi_{p,q}> and |Psi_{m-p,m+2-q}> have weights h_{p,q} + pq/2 and h_{p,q} + (m-p)(m+2-q)/2, respectively. Singular vectors have zero norm and must be eliminated from the representations for these to be unitary. This elimination generates (super-)differential equations that depend directly on the explicit form of the singular vectors and that the correlation functions of the theory must obey. Knowledge of these singular vectors is thus intimately tied to the computation of correlation functions. The equations defining the singular vectors form an overdetermined linear system whose number of equations is of the order of N(pq), the number of partitions of the integer pq. Since singular vectors play a central role in conformal theory, it is natural to look for explicit forms of these vectors (or of infinite families of them). We give here the explicit form of the infinite family of singular vectors having one of their indices equal to 1, for the Virasoro and Neveu-Schwarz algebras.
    Since these discoveries, other techniques for constructing singular vectors have been developed, including that of Bauer, Di Francesco, Itzykson and Zuber for the Virasoro algebra, which directly reproduces the explicit expression of the singular vectors |Psi_{1,q}> and |Psi_{p,1}>. They used the operator product algebra and fusion between irreducible representations to generate recursion relations producing the singular vectors. In the last chapter of this thesis we adapt this algorithm to the construction of the singular vectors of the Neveu-Schwarz algebra.

  4. A three-dimensional topology optimization model for tooth-root morphology.

    PubMed

    Seitz, K-F; Grabe, J; Köhne, T

    2018-02-01

    To obtain the root form of a lower incisor through structural optimization, we used two methods: optimization with Solid Isotropic Material with Penalization (SIMP) and the Soft-Kill Option (SKO). The optimization was carried out in combination with a finite element analysis in Abaqus/Standard. The model geometry was based on cone-beam tomography scans of 10 adult males with a healthy bone-tooth interface. Our results demonstrate that the optimization method using SIMP for minimum compliance could not adequately predict the actual root shape. The SKO method, however, provided optimization results that were comparable to the natural root form and is therefore suitable for setting up the basic topology of a dental root.

  5. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, in order to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model that represents a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. Minimization of the objective function by the candidate optimization methods reveals failures in some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Still others yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming and the stochastic Bayesian inference methods give the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent, were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms and one clinical patient geometry to examine the capability of this platform to generate conformal plans, as well as to assess its computational scaling and efficiency. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
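
    The optimization ingredient described above can be sketched in a few lines. The following Python fragment is illustrative only (the gradient model, learning rate, momentum coefficient, and beamlet count are invented stand-ins, not the authors' code): gradient descent on fluence weights with momentum, gradient rescaling, and renormalization, driven by deliberately noisy gradient estimates that mimic few-history MC scoring.

      import numpy as np

      rng = np.random.default_rng(0)
      n_beamlets = 36
      true_grad_dir = rng.normal(size=n_beamlets)     # fixed "signal" direction

      def noisy_gradient(w):
          """Stand-in for a gradient scored from a handful of MC histories:
          correct on average, but dominated by noise on each call."""
          return true_grad_dir + 10.0 * rng.normal(size=n_beamlets)

      w = np.ones(n_beamlets) / n_beamlets            # fluence weights
      v = np.zeros(n_beamlets)                        # momentum buffer
      lr, mu = 1e-3, 0.9

      for step in range(2000):
          g = noisy_gradient(w)
          g = g / (np.linalg.norm(g) + 1e-12)         # rescale: keep direction only
          v = mu * v + (1 - mu) * g                   # momentum averages out noise
          w = np.clip(w - lr * v, 0.0, None)          # keep fluence non-negative
          w = w / w.sum()                             # renormalize the weights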

  7. RJMCMC based Text Placement to Optimize Label Placement and Quantity

    NASA Astrophysics Data System (ADS)

    Touya, Guillaume; Chassin, Thibaud

    2018-05-01

    Label placement is a tedious task in map design, and its automation has long been a goal for researchers in cartography, but also in computational geometry. Methods that search for an optimal or nearly optimal solution satisfying a set of constraints, such as avoiding label overlaps, have been proposed in the literature. Most of these methods focus on finding the optimal positions for a given set of labels, but rarely allow the removal of labels as part of the optimization. This paper proposes to apply an optimization technique called Reversible-Jump Markov Chain Monte Carlo (RJMCMC) that makes it easy to model the removal or addition of labels during the optimization iterations. The method, though quite preliminary for now, is tested on a real dataset, and the first results are encouraging.
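
    The add/remove mechanism can be sketched with a simplified birth/death chain in the spirit of RJMCMC (the full reversible-jump acceptance ratio, with its proposal and dimension-matching terms, is omitted for brevity). The candidate positions, overlap test, and energy below are toy stand-ins, not the paper's model.

      import math, random

      random.seed(1)
      candidates = [(random.random(), random.random()) for _ in range(40)]

      def overlaps(a, b, r=0.08):
          return abs(a[0] - b[0]) < r and abs(a[1] - b[1]) < r

      def energy(placed, reward=1.0, penalty=5.0):
          """Lower is better: reward each placed label, penalize overlaps."""
          e = -reward * len(placed)
          pts = [candidates[i] for i in placed]
          e += penalty * sum(overlaps(p, q) for i, p in enumerate(pts)
                             for q in pts[i + 1:])
          return e

      placed, T = set(), 0.5
      for it in range(20000):
          proposal = set(placed)
          if not placed or random.random() < 0.5:     # birth: add a label
              proposal.add(random.randrange(len(candidates)))
          else:                                       # death: drop a label
              proposal.discard(random.choice(list(placed)))
          dE = energy(proposal) - energy(placed)
          if dE < 0 or random.random() < math.exp(-dE / T):
              placed = proposal                       # Metropolis-style accept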

  8. Extreme Trust Region Policy Optimization for Active Object Recognition.

    PubMed

    Liu, Huaping; Wu, Yupei; Sun, Fuchun

    2018-06-01

    In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is realized by an extreme learning machine and therefore leads to an efficient optimization algorithm. Experimental results on a publicly available data set show the advantages of the developed extreme trust region optimization method.

  9. Review of dynamic optimization methods in renewable natural resource management

    USGS Publications Warehouse

    Williams, B.K.

    1989-01-01

    In recent years, applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.

  10. Optimal Item Selection with Credentialing Examinations.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…

  11. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  12. Efficient Optimization of Low-Thrust Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul

    2007-01-01

    A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. These algorithms are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves the primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.
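
    The direct approach paired with a stochastic global search can be sketched as follows. The 'propagator' behind the cost function is a toy stand-in, not an orbit model, and only the structure, discretized thrust arcs perturbed under a cooling schedule, mirrors the description above.

      import math, random

      random.seed(2)
      N = 20                                  # number of discretized thrust arcs

      def cost(profile, w_time=1.0, w_fuel=0.5):
          """Hypothetical scalarized objective: 'flight time' grows when thrust
          is low, 'fuel' is the integral of thrust magnitude."""
          fuel = sum(abs(u) for u in profile)
          time = sum(1.0 / (0.1 + abs(u)) for u in profile)
          return w_time * time + w_fuel * fuel

      x = [random.uniform(0.0, 1.0) for _ in range(N)]
      best = x[:]
      for k in range(20000):
          T = 1.0 * 0.9995 ** k                       # geometric cooling schedule
          y = x[:]
          i = random.randrange(N)                     # perturb one thrust arc
          y[i] = min(1.0, max(0.0, y[i] + random.gauss(0.0, 0.1)))
          if cost(y) < cost(x) or random.random() < math.exp((cost(x) - cost(y)) / T):
              x = y                                   # annealing acceptance rule
          if cost(x) < cost(best):
              best = x[:]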

  13. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  14. The optimal design support system for shell components of vehicles using the methods of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Szczepanik, M.; Poteralski, A.

    2016-11-01

    The paper is devoted to an application of evolutionary methods and the finite element method to the optimization of shell structures. Optimization of the thickness of a car wheel (shell) by minimization of a stress functional is considered. The car wheel geometry is built from three surfaces of revolution: the central surface with the holes for the fastening bolts, the surface of the ring of the wheel, and the surface connecting the two mentioned earlier. The last one is subjected to the optimization process. The structures are discretized by triangular finite elements and subjected to volume constraints. Using the proposed method, the material properties or thicknesses of the finite elements are changed evolutionarily and some of them are eliminated. As a result, the optimal shape, topology and material or thickness of the structures are obtained. The numerical examples demonstrate that the method based on evolutionary computation is an effective technique for solving computer-aided optimal design problems.

  15. An optimization method for condition based maintenance of aircraft fleet considering prognostics uncertainty.

    PubMed

    Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie

    2014-01-01

    An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method of the costs and risks for a mission, based on the health status matrix and the maintenance matrix, is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that the method can realize optimization and control of an aircraft fleet oriented to mission success.

  16. An Optimization Method for Condition Based Maintenance of Aircraft Fleet Considering Prognostics Uncertainty

    PubMed Central

    Chen, Yiran; Sun, Bo; Li, Songjie

    2014-01-01

    An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method of the costs and risks for a mission, based on the health status matrix and the maintenance matrix, is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that the method can realize optimization and control of an aircraft fleet oriented to mission success. PMID:24892046

  17. Empty tracks optimization based on Z-Map model

    NASA Astrophysics Data System (ADS)

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, there are many empty (non-cutting) tool tracks during machining. If these tracks are not optimized, machining efficiency is seriously affected. In this paper, the characteristics of the empty tracks are studied in detail. Combining with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and Z-Map model simulation is then used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, which is then solved by a genetic algorithm. This optimization method can optimize not only simple structural parts but also complex structural parts, so as to effectively plan the empty tracks and greatly improve processing efficiency.
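
    The ordering step can be illustrated with a greedy baseline that respects precedence constraints of the kind extracted from the Z-Map simulation; the segments and constraints below are invented, and in the paper a genetic algorithm refines such an ordering rather than this nearest-neighbor heuristic.

      import math

      segments = {                  # id -> (entry point, exit point), made up
          "A": ((0, 0), (1, 0)), "B": ((3, 1), (4, 1)),
          "C": ((1, 2), (2, 2)), "D": ((4, 3), (5, 3)),
      }
      must_precede = {"C": {"A"}, "D": {"B", "C"}}   # e.g. D only after B and C

      def dist(p, q):
          return math.hypot(p[0] - q[0], p[1] - q[1])

      def greedy_order(start=(0, 0)):
          done, order, pos = set(), [], start
          while len(done) < len(segments):
              # only segments whose predecessors are all machined are eligible
              ready = [s for s in segments if s not in done
                       and must_precede.get(s, set()) <= done]
              nxt = min(ready, key=lambda s: dist(pos, segments[s][0]))
              order.append(nxt)
              done.add(nxt)
              pos = segments[nxt][1]                  # rapid-traverse from exit
          return order

      print(greedy_order())   # -> ['A', 'C', 'B', 'D'] for the data above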

  18. Honey Bees Inspired Optimization Method: The Bees Algorithm.

    PubMed

    Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo

    2013-11-06

    Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, satisfying one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex: the collective behavior of a swarm of social organisms emerges from the behaviors of the individuals of that swarm. Researchers have developed biologically inspired computational optimization methods such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees. The algorithm combines an exploitative neighborhood search with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and are implemented to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers advantages over other optimization methods, depending on the nature of the problem.
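
    The basic algorithm admits a very small sketch. The Python fragment below (parameter names and values are generic choices, not those of the paper) shows the two ingredients named above: exploitative neighborhood search around the best sites and random explorative scouting, with a shrinking neighborhood.

      import random

      def sphere(x):                       # benchmark function to minimize
          return sum(xi * xi for xi in x)

      DIM, LO, HI = 5, -5.0, 5.0
      n_scouts, n_best, n_recruits, ngh = 20, 5, 8, 0.5

      def rand_site():
          return [random.uniform(LO, HI) for _ in range(DIM)]

      sites = [rand_site() for _ in range(n_scouts)]
      for it in range(200):
          sites.sort(key=sphere)
          new_sites = []
          for site in sites[:n_best]:                  # exploit the best sites
              recruits = [[min(HI, max(LO, xi + random.uniform(-ngh, ngh)))
                           for xi in site] for _ in range(n_recruits)]
              new_sites.append(min(recruits + [site], key=sphere))
          new_sites += [rand_site() for _ in range(n_scouts - n_best)]  # explore
          sites = new_sites
          ngh *= 0.99                                  # shrink the neighborhood

      print(min(sites, key=sphere))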

  19. Optimization of radial-type superconducting magnetic bearing using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Ai, Liwang; Zhang, Guomin; Li, Wanjie; Liu, Guole; Liu, Qi

    2018-07-01

    It is important and complicated to model and optimize the levitation behavior of a superconducting magnetic bearing (SMB). The difficulty is due to the nonlinear constitutive relationships of the superconductor and ferromagnetic materials, the relative movement between the superconducting stator and the PM rotor, and the multiple parameters (e.g., air gap, critical current density, and remanent flux density) affecting the levitation behavior. In this paper, we present a theoretical calculation and optimization method for the levitation behavior of a radial-type SMB. A simplified model of the levitation force is established using a 2D finite element method with the H-formulation. In the model, the boundary condition of the superconducting stator is imposed by harmonic series expressions to describe the traveling magnetic field generated by the moving PM rotor. Experimental measurements of the levitation force are also performed and validate the model. A statistical method, the Taguchi method, is adopted to optimize the load capacity of the SMB. The effects of six optimization parameters on the target characteristics are discussed, and the optimum parameter combination is determined. The results show that the levitation behavior of the SMB is greatly improved and that the Taguchi method is suitable for optimizing the SMB.
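
    The Taguchi step lends itself to a compact illustration. The sketch below uses the standard L4(2^3) orthogonal array, but the factor names, levels, and response function are invented stand-ins for the FEM levitation-force model; it ranks factor main effects by the difference of level means.

      import numpy as np

      L4 = np.array([[0, 0, 0],        # rows: trials; cols: factors at 2 levels
                     [0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])
      levels = {                        # hypothetical factor levels
          "air_gap_mm":   [1.0, 2.0],
          "Jc_MA_per_m2": [100.0, 300.0],
          "Br_T":         [1.0, 1.4],
      }
      names = list(levels)

      def response(trial):
          """Stand-in for the H-formulation FEM levitation-force model."""
          gap, jc, br = (levels[n][trial[i]] for i, n in enumerate(names))
          return br * jc / gap          # toy monotone surrogate

      y = np.array([response(t) for t in L4])
      for i, n in enumerate(names):
          # Taguchi main effect: mean response at level 2 minus level 1
          effect = y[L4[:, i] == 1].mean() - y[L4[:, i] == 0].mean()
          print(f"{n}: main effect = {effect:.1f}")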

  20. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (the Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of the various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Optimization of Interior Permanent Magnet Motor by Quality Engineering and Multivariate Analysis

    NASA Astrophysics Data System (ADS)

    Okada, Yukihiro; Kawase, Yoshihiro

    This paper describes a method of optimization based on the finite element method, using quality engineering and multivariate analysis as the optimization techniques. The method consists of two steps. In Step 1, the influence of the parameters on the output is obtained quantitatively; in Step 2, the number of FEM calculations is cut down. That is, the optimal combination of design parameters that satisfies the required characteristic can be searched for efficiently. In addition, the method is applied to the design of an IPM motor to reduce torque ripple. The final shape maintains the average torque and cuts the torque ripple by 65%. Furthermore, the amount of permanent magnet material can be reduced.

  2. Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control

    NASA Astrophysics Data System (ADS)

    Hu, Juju; Ke, Qiang; Ji, Yinghua

    2018-02-01

    The optimization of control time for quantum systems has been an important field of control science for decades, being beneficial for efficiency improvement and for suppressing decoherence caused by the environment. After analyzing the advantages and disadvantages of the existing Lyapunov control, we use a bang-bang optimal control technique to investigate fast state control in a closed two-qubit quantum system, and give three optimized control-field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared with the standard Lyapunov control or the standard bang-bang control method, the optimized control-field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.

  3. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, and some variation was noticed in the designs; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of loads and material properties remains a challenge.

  4. Chemical Safety Assessment Using Read-Across: Assessing the Use of Novel Testing Methods to Strengthen the Evidence Base for Decision Making

    PubMed Central

    Amcoff, Patric; Benigni, Romualdo; Blackburn, Karen; Carney, Edward; Cronin, Mark; Deluyker, Hubert; Gautier, Francoise; Judson, Richard S.; Kass, Georges E.N.; Keller, Detlef; Knight, Derek; Lilienblum, Werner; Mahony, Catherine; Rusyn, Ivan; Schultz, Terry; Schwarz, Michael; Schüürmann, Gerrit; White, Andrew; Burton, Julien; Lostia, Alfonso M.; Munn, Sharon; Worth, Andrew

    2015-01-01

    Background Safety assessment for repeated dose toxicity is one of the largest challenges in the process to replace animal testing. This is also one of the proof-of-concept ambitions of SEURAT-1, the largest ever European Union research initiative on alternative testing, co-funded by the European Commission and Cosmetics Europe. This review is based on the discussion and outcome of a workshop organized on the initiative of the SEURAT-1 consortium, joined by a group of international experts with complementary knowledge to further develop traditional read-across and include new approach data. Objectives The aim of the suggested strategy for chemical read-across is to show how a traditional read-across based on structural similarities between source and target substance can be strengthened with additional evidence from new approach data (for example, information from in vitro molecular screening, "-omics" assays and computational models) to reach regulatory acceptance. Methods We identified four read-across scenarios that cover typical human health assessment situations. For each such decision context, we suggested several chemical groups as examples to prove when read-across between group members is possible, considering both chemical and biological similarities. Conclusions We agreed to carry out the complete read-across exercise for at least one chemical category per read-across scenario in the context of SEURAT-1, and the results of this exercise will be completed and presented by the end of the research initiative in December 2015. Citation Berggren E, Amcoff P, Benigni R, Blackburn K, Carney E, Cronin M, Deluyker H, Gautier F, Judson RS, Kass GE, Keller D, Knight D, Lilienblum W, Mahony C, Rusyn I, Schultz T, Schwarz M, Schüürmann G, White A, Burton J, Lostia AM, Munn S, Worth A. 2015. Chemical safety assessment using read-across: assessing the use of novel testing methods to strengthen the evidence base for decision making. Environ Health Perspect 123:1232–1240; http://dx.doi.org/10.1289/ehp.1409342 PMID:25956009

  5. Fuel-optimal low-thrust formation reconfiguration via Radau pseudospectral method

    NASA Astrophysics Data System (ADS)

    Li, Jing

    2016-07-01

    This paper investigates fuel-optimal low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary optimality conditions are derived from Pontryagin's maximum principle. The fuel-optimal impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as optimization variables, the fuel-optimal low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using the multiple-phase Radau pseudospectral method (RPM), which is then solved by the sparse nonlinear optimization software SNOPT. To facilitate optimality verification and, if necessary, further refinement of the optimized solution of the NLP, formulas for mass costate estimation and initial costate scaling are presented. Numerical examples are given to show the application of the proposed optimization method. To simplify the problem, generic fuel-optimal low-thrust formation reconfiguration can be reduced to reconfiguration without any initial and terminal coast arcs, whose optimal solutions can be efficiently obtained from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and maximum thrust magnitude on the fuel-optimal low-thrust formation reconfiguration is analyzed. Numerical results show the links and differences between the fuel-optimal impulsive and low-thrust solutions.

  6. Air data system optimization using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of air data parameters, namely, angle of attack, sideslip angle and freestream dynamic pressure. The optimization method is applied to the design of the Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to those obtained by a conventional gradient search method.

  7. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate within a database generated offline via a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization problem, with a relative mean error of 4% on cost function evaluation.

  8. Multi-objective optimization of a continuous bio-dissimilation process of glycerol to 1, 3-propanediol.

    PubMed

    Xu, Gongxian; Liu, Ying; Gao, Qunwang

    2016-02-10

    This paper deals with the multi-objective optimization of the continuous bio-dissimilation process of glycerol to 1,3-propanediol. In order to maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of the by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. These multi-objective optimization problems are then solved using the weighted-sum and normal-boundary intersection methods, respectively. Both the Pareto filter algorithm and removal criteria are used to remove the non-Pareto-optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto-optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto-optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
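
    The weighted-sum step is easy to sketch. The fragment below (Python with SciPy; the two quadratic objectives are toy stand-ins for the kinetic-model objectives) scans the weight and collects candidate Pareto points; on non-convex fronts such a scan misses solutions, which is the known weakness motivating the normal-boundary intersection method.

      import numpy as np
      from scipy.optimize import minimize

      def f1(x):    # stand-in for, e.g., "maximize production" as a minimization
          return (x[0] - 1.0) ** 2 + x[1] ** 2

      def f2(x):    # stand-in for, e.g., "minimize ethanol by-product"
          return x[0] ** 2 + (x[1] - 1.0) ** 2

      front = []
      for w in np.linspace(0.0, 1.0, 11):
          # scalarize the two objectives with weight w and solve
          res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.5, 0.5])
          front.append((f1(res.x), f2(res.x)))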

  9. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    NASA Astrophysics Data System (ADS)

    Deufel, Christopher L.; Furutani, Keith M.

    2014-02-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.

  10. Integrative systems modeling and multi-objective optimization

    EPA Science Inventory

    This presentation presents a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...

  11. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the layers involved exhibit fundamentally different acoustic behavior. Thus, an optimization formulation using a so-called unified transfer matrix is newly proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method.

  12. Constrained Multi-Level Algorithm for Trajectory Optimization

    NASA Astrophysics Data System (ADS)

    Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi

    The emphasis on low-cost access to space has inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach to optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are gradually introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve by direct and indirect shooting methods. The problem becomes more complex when different phases of the trajectory have different optimization objectives and different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables, and the process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in the framework of an augmented Lagrangian. Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref. 5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner-loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints and the multiple objectives of the different trajectory phases.
    Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286.
    Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile Using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669.
    Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304.
    Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207.
    Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.

  13. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied the primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit and achieved great success. Extending Prussing's work, this paper employs the primer vector theory to study trajectory optimization problems of elliptical-orbit rendezvous with arbitrary eccentricity. Based on the linearized equations of relative motion on an elliptical reference orbit (referred to as the T-H equations), the primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary conditions for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a combined method of a multiplier penalty function with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, with initial values from the primer vector theory, the local optimization algorithm can improve the rendezvous accuracy effectively with fast convergence, because the optimal results obtained by the primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.

  14. A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour

    NASA Technical Reports Server (NTRS)

    Leyland, Jane Anne

    1996-01-01

    Three methods to optimize rotorcraft aeromechanical behavior, for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix, were identified and implemented in a stand-alone code. These methods determine the optimal control vector that minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint-penalty methods, such as those employed by conventional controllers, in that the constraints are handled as actual constraints of an optimization problem rather than as additional terms in the performance index. The first method is to use a nonlinear programming algorithm to solve the problem directly. The second method is to solve the full set of nonlinear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then simply to select the best solution. The effects of maneuvers and aeroelasticity on the system matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the system matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it is superior to the conventional controllers considered.

  15. Adaptive sparsest narrow-band decomposition method and its applications to rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao

    2017-02-01

    Inspired by the ASTFA method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is established first. The parameters of the filter are determined by solving a nonlinear optimization problem, with a regulated differential operator used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed to solve problems that exist in ASTFA. The Gauss-Newton type method applied to solve the optimization problem in ASTFA is irreplaceable and very sensitive to initial values, whereas a more appropriate optimization method, such as a genetic algorithm (GA), can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality, and diagnosing rolling element bearing faults.

  16. Investigation of another approach in topology optimization

    NASA Astrophysics Data System (ADS)

    Krotkikh, A. A.; Maximov, P. V.

    2018-05-01

    The paper presents an investigation of another approach to topology optimization. The authors implemented a topology optimization method using ideas of the SIMP method created by Martin P. Bendsøe. There are many ways to formulate the objective function of topology optimization methods; in terms of elasticity theory, the objective function of the SIMP method is the compliance of an object, which should be minimized. The main idea of this paper was to avoid the filtering procedure of the SIMP method. Reformulating the problem statement in terms of function minimization allows it to be solved by a wide variety of methods; the authors chose the interior point method as implemented in Wolfram Mathematica. This approach can generate side effects, which should be investigated to prevent their appearance in the future. A comparison of the results of the SIMP method and the suggested method is presented in the paper and analyzed.
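
    The SIMP ingredients named above (penalized stiffness, compliance objective, volume constraint) can be illustrated on a one-dimensional toy problem. The following Python sketch is not the authors' formulation: it applies a standard optimality-criteria update, with bisection on the volume multiplier, to a series-spring bar loaded at every node, where the element compliance is known in closed form.

      import numpy as np

      n, p, vol_frac = 20, 3.0, 0.5
      loads = np.ones(n)                      # unit load at every node
      N = loads[::-1].cumsum()[::-1]          # internal force in each element

      x = np.full(n, vol_frac)                # densities, start at volume fraction
      for it in range(100):
          # sensitivity of C = sum(N^2 / x^p) w.r.t. each density x_e
          comp_sens = -p * N**2 / x**(p + 1)
          lo, hi = 1e-9, 1e9                  # bisection on Lagrange multiplier
          while hi - lo > 1e-8 * hi:
              lam = 0.5 * (lo + hi)
              # optimality-criteria update with damping exponent 1/2
              x_new = np.clip(x * np.sqrt(-comp_sens / lam), 1e-3, 1.0)
              if x_new.mean() > vol_frac:
                  lo = lam
              else:
                  hi = lam
          x = x_new

      # heavily loaded elements (near the support) end up densest
      print(np.round(x, 2))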

  17. Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD)

    PubMed Central

    Rhoades, Seth D.

    2017-01-01

    Introduction Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and array of tunable analytical parameters. Objective Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). Methods We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. Results LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. Conclusions The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method. PMID:28348510

  18. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Computational study of engine external aerodynamics as a part of multidisciplinary optimization procedure

    NASA Astrophysics Data System (ADS)

    Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr

    2016-10-01

    The paper is devoted to the development of a methodology to optimize the external aerodynamics of the engine. The optimization procedure is based on the numerical solution of the Reynolds-averaged Navier-Stokes equations, and a surrogate-based method is used for the optimization. As a test problem, the optimal shape design of a turbofan nacelle is considered. The results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure is considered in the context of the 3rd-generation multidisciplinary optimization developed in the AGILE project.
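
    A generic surrogate-based loop can be sketched as follows; the quadratic surrogate is a stand-in for whatever response-surface model the project actually uses, and the 'drag' function is a hypothetical placeholder for a RANS evaluation of one nacelle shape parameter.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def drag(t):
          """Hypothetical expensive objective (one shape parameter)."""
          return (t - 0.3) ** 2 + 0.05 * np.sin(8 * t)

      X = list(np.linspace(0.0, 1.0, 5))      # initial design of experiments
      Y = [drag(t) for t in X]
      for it in range(10):
          coef = np.polyfit(X, Y, 2)          # fit a cheap quadratic surrogate
          res = minimize_scalar(lambda t: np.polyval(coef, t),
                                bounds=(0, 1), method="bounded")
          X.append(res.x)                     # infill at the surrogate optimum
          Y.append(drag(res.x))               # one new "expensive" evaluation

      print(min(zip(Y, X)))                   # best (objective, parameter) found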

  20. Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA

    PubMed Central

    Ma, Xiaoqi

    2015-01-01

    A novel method is proposed to establish a pancreatic cancer classifier. First, the concepts of quantum computing and the fruit fly optimization algorithm (FOA) are introduced. The FOA is then improved by quantum coding and quantum operations, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of a support vector machine (SVM), and the classifier is built from the optimized SVM. To verify the effectiveness of the proposed method, SVM and other classification methods were chosen for comparison. The experimental results show that the proposed method improves classifier performance and requires less time. PMID:26543867

  1. Extremal Optimization: Methods Derived from Co-Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.G.

    1999-07-13

    We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than ''breeding'' better components. In contrast to Genetic Algorithms which operate on an entire ''gene-pool'' of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
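
    The move set of Extremal Optimization is simple enough to sketch directly from this description: rank the components of a single candidate solution by fitness and preferentially mutate the worst-ranked ones. Below is a minimal Python sketch of the one-parameter (tau) variant applied to unconstrained graph bipartitioning; the power-law rank selection and the per-vertex fitness are standard choices rather than details taken from the paper, and the balanced-partition bookkeeping of the original application is omitted.

        import random

        def eo_bipartition(edges, n, tau=1.4, steps=20000, seed=0):
            # tau-EO sketch: each vertex is a "species" whose fitness is the
            # fraction of its edges kept inside its own part; low-ranked
            # (unfit) vertices are mutated preferentially.
            rng = random.Random(seed)
            adj = [[] for _ in range(n)]
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            side = [rng.randint(0, 1) for _ in range(n)]
            cut = lambda s: sum(1 for u, v in edges if s[u] != s[v])
            best, best_cut = side[:], cut(side)
            for _ in range(steps):
                ranked = sorted(range(n), key=lambda u: sum(side[u] == side[v]
                                for v in adj[u]) / max(len(adj[u]), 1))
                x = 1.0 - rng.random()                      # x in (0, 1]
                k = min(int(x ** (-1.0 / (tau - 1.0))), n)  # P(rank k) ~ k^(-tau)
                side[ranked[k - 1]] ^= 1                    # mutate the selected vertex
                c = cut(side)
                if c < best_cut:
                    best, best_cut = side[:], c
            return best, best_cut

        # usage: random graph on 40 vertices
        g = random.Random(1)
        edges = [(u, v) for u in range(40) for v in range(u + 1, 40) if g.random() < 0.1]
        part, cut_value = eo_bipartition(edges, 40)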

  2. A Method of Dynamic Extended Reactive Power Optimization in Distribution Network Containing Photovoltaic-Storage System

    NASA Astrophysics Data System (ADS)

    Wang, Wu; Huang, Wei; Zhang, Yongjun

    2018-03-01

    The grid integration of a Photovoltaic-Storage System introduces uncertain factors into the network. In order to make full use of the adjusting capability of the Photovoltaic-Storage System (PSS), this paper puts forward a reactive power optimization model whose objective function is based on power loss and device adjusting cost, including the energy storage adjusting cost. A Cataclysmic Genetic Algorithm is used to solve the optimization problem. Comparison with other optimization methods shows that the proposed dynamic extended reactive power optimization method enhances the effect of reactive power optimization, reducing both power loss and device adjusting cost, while also giving consideration to voltage safety.

  3. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

    A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1×9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the nine outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than for the optimal-efficiency grating. PMID:25969268

  4. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  5. Robust Dynamic Multi-objective Vehicle Routing Optimization Method.

    PubMed

    Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei

    2017-03-21

    For dynamic multi-objective vehicle routing problems, the vehicle waiting time, the number of serving vehicles, and the total route distance are normally considered as the optimization objectives. Beyond these objectives, this paper focuses on fuel consumption, which drives both environmental pollution and energy consumption. Considering vehicle load and driving distance, a corresponding carbon emission model was built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were then modeled. In existing planning methods, whenever a new service demand arrives, a global vehicle routing optimization is triggered to find optimal routes for the not-yet-served customers, which is time-consuming. Therefore, a robust two-phase dynamic multi-objective vehicle routing method is proposed. Three highlights of the novel method are: (i) after finding optimal robust virtual routes for all customers by multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed in the next phase by removing all dynamic customers from the robust virtual routes; (ii) dynamically appearing customers are appended for service according to their service time and the vehicles' status, and global vehicle routing optimization is triggered only when no suitable locations can be found for dynamic customers; (iii) a metric measuring the algorithm's robustness is given. The statistical results indicate that the routes obtained by the proposed method have better stability and robustness, though they may be sub-optimal. Moreover, time-consuming global vehicle routing optimization is avoided as dynamic customers appear.

  6. Fuel-Optimal Altitude Maintenance of Low-Earth-Orbit Spacecrafts by Combined Direct/Indirect Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Ha; Park, Chandeok; Park, Sang-Young

    2015-12-01

    This work presents fuel-optimal altitude maintenance of Low-Earth-Orbit (LEO) spacecraft experiencing non-negligible air drag and J2 perturbation. A pseudospectral (direct) method is first applied to roughly estimate an optimal fuel consumption strategy, which is then employed as an initial guess for the precise solution. Based on the physical specifications of KOrea Multi-Purpose SATellite-2 (KOMPSAT-2), a Korean artificial satellite, numerical simulations show that a satellite ascends with full thrust at the early stage of the maneuver period and then descends with null thrust. While the thrust profile is presumably bang-off, it is difficult to precisely determine the switching time by using a pseudospectral method only. This is expected, since the optimal switching epoch does not, in general, coincide with one of the collocation points prescribed by the pseudospectral method. As an attempt to precisely determine the switching time and the associated optimal thrust history, a shooting (indirect) method is then employed with the initial guess obtained through the pseudospectral method. This hybrid process allows the optimal fuel consumption for LEO spacecraft and their thrust profiles to be determined efficiently and precisely.

  7. Optimal lattice-structured materials

    DOE PAGES

    Messner, Mark C.

    2016-07-09

    This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.

  8. Taguchi optimization of bismuth-telluride based thermoelectric cooler

    NASA Astrophysics Data System (ADS)

    Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank

    2017-07-01

    In the last few decades, considerable effort has been made to enhance the figure-of-merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices still remains low due to the fact that the module figure-of-merit not only depends on the material ZT, but also on the operating conditions and configuration of TE modules. This study takes into account a comprehensive set of parameters to conduct a numerical performance analysis of the thermoelectric cooler (TEC) using a Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 experiments needed by the conventional optimization method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.
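
    The main-effects bookkeeping behind a Taguchi study is compact enough to illustrate. The sketch below uses the small L9(3^4) orthogonal array and a generic larger-is-better signal-to-noise ratio; the paper's actual L25 array, factor list, and TEC response model are not reproduced, so every name and number here is illustrative.

        import numpy as np

        # standard L9(3^4) orthogonal array, levels coded 0..2
        L9 = np.array([
            [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
            [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
            [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
        ])

        def snr_larger_is_better(y):
            # Taguchi signal-to-noise ratio for a "larger is better" response
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        def taguchi_best_levels(responses):
            # responses[i] = replicated measurements for run i of the array
            snr = np.array([snr_larger_is_better(r) for r in responses])
            best = []
            for f in range(L9.shape[1]):
                means = [snr[L9[:, f] == lvl].mean() for lvl in range(3)]
                best.append(int(np.argmax(means)))  # level maximizing mean S/N
            return best

        # usage with fabricated data: 9 runs x 3 replicates
        fake = np.random.default_rng(0).uniform(1.0, 5.0, size=(9, 3))
        print(taguchi_best_levels(fake))  # best level index for each factor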

  9. Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2006-01-01

    Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding global optima of multi-modal functions and searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources since multiple function evaluations (flow simulations in aerodynamics design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture will focus on an evolutionary method that is a relatively new member of the general class of evolutionary methods, called differential evolution (DE). This method is easy to use and program and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems. Fine-tuning the constants will of course yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single and multiple-objective optimization will be included in the presentation and lecture notes. A method for aerodynamic design optimization that is based on neural networks will also be included as a part of this lecture. The method offers advantages over traditional optimization methods. It is more flexible than other methods in dealing with design in the context of both steady and unsteady flows, partial and complete data sets, combined experimental and numerical data, inclusion of various constraints and rules of thumb, and other issues that characterize the aerodynamic design process. Neural networks provide a natural framework within which a succession of numerical solutions of increasing fidelity, incorporating more realistic flow physics, can be represented and utilized for optimization. Neural networks also offer an excellent framework for multiple-objective and multi-disciplinary design optimization. Simulation tools from various disciplines can be integrated within this framework and rapid trade-off studies involving one or many disciplines can be performed. The prospect of combining neural network based optimization methods and evolutionary algorithms to obtain a hybrid method with the best properties of both methods will be included in this presentation. Achieving solution diversity and accurate convergence to the exact Pareto front in multiple objective optimization usually requires a significant computational effort with evolutionary algorithms. In this lecture we will also explore the possibility of using neural networks to obtain estimates of the Pareto optimal front using non-dominated solutions generated by DE as training data. Neural network estimators have the potential advantage of reducing the number of function evaluations required to obtain solution accuracy and diversity, thus reducing the cost of design.
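
    Since the lecture centres on differential evolution, a concrete sketch helps fix ideas. The following is a minimal DE/rand/1/bin loop in Python; the population size, F, and CR are the few user-specified constants the abstract refers to, and the Rosenbrock usage example is ours, not from the lecture.

        import numpy as np

        def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                                   gens=200, seed=0):
            # minimal DE/rand/1/bin: mutate with a scaled difference of two
            # random members, binomially cross with the target, keep the better
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            d = len(lo)
            pop = lo + rng.random((pop_size, d)) * (hi - lo)
            cost = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                         3, replace=False)
                    mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                    cross = rng.random(d) < CR
                    cross[rng.integers(d)] = True  # guarantee one mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    tc = f(trial)
                    if tc <= cost[i]:
                        pop[i], cost[i] = trial, tc
            k = int(np.argmin(cost))
            return pop[k], cost[k]

        # usage: minimize the 2-D Rosenbrock function
        rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        x_best, f_best = differential_evolution(rosen, [(-2, 2), (-2, 2)])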

  10. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    NASA Astrophysics Data System (ADS)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method for the active and reactive power of a distribution network with a distributed PV cluster, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear, and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.

  11. Topology optimization of unsteady flow problems using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Nørgaard, Sebastian; Sigmund, Ole; Lazarov, Boyan

    2016-02-01

    This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems. The optimization problem is solved with a gradient-based method, and the design sensitivities are computed by solving the discrete adjoint problem. For moderate Reynolds number flows, it is demonstrated that topology optimization can successfully account for unsteady effects such as vortex shedding and time-varying boundary conditions. Such effects are relevant in several engineering applications, e.g. fluid pumps and control valves.

  12. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  13. Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single discipline analysis), the method, as implemented here, may not show a significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.

  14. Simulation Research on Vehicle Active Suspension Controller Based on G1 Method

    NASA Astrophysics Data System (ADS)

    Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui

    2017-09-01

    Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. The active and passive suspension system of a single-wheel vehicle is modeled and the system input signal model is determined. Secondly, the system motion state-space equation is established using kinetic principles and the optimal linear controller design is completed with optimal control theory. The weighting coefficients of the suspension performance index are determined by the order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the order relation analysis method under the given road conditions, the vehicle body acceleration, suspension stroke and tire motion displacement are optimized, improving the comprehensive performance of the vehicle, while the active control is kept within the requirements.
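
    The controller described is a standard optimal linear (LQR) state feedback; the G1 method's role is to supply the performance-index weights. A minimal sketch follows, with an assumed single-wheel suspension model solved via the continuous algebraic Riccati equation; every numerical value, including the weights that the order-relation analysis would actually determine, is a placeholder, and the road input is omitted.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # assumed single-wheel suspension parameters (all values invented)
        ms, mu = 320.0, 40.0                     # sprung / unsprung mass, kg
        ks, kt, cs = 18000.0, 200000.0, 1000.0   # spring, tyre stiffness, damper

        # state x = [suspension stroke, body velocity, tyre deflection, wheel velocity]
        A = np.array([
            [0.0,      1.0,    0.0,     -1.0],
            [-ks/ms,  -cs/ms,  0.0,      cs/ms],
            [0.0,      0.0,    0.0,      1.0],
            [ks/mu,    cs/mu, -kt/mu,   -cs/mu],
        ])
        B = np.array([[0.0], [1.0/ms], [0.0], [-1.0/mu]])

        # performance-index weights; in the paper these come from the G1
        # order-relation analysis, here they are placeholders
        Q = np.diag([1e4, 10.0, 1e5, 1.0])
        R = np.array([[1e-4]])

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)  # optimal state feedback u = -K x
        print("LQR gain K:", K)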

  15. Optimization with artificial neural network systems - A mapping principle and a comparison to gradient based methods

    NASA Technical Reports Server (NTRS)

    Leong, Harrison Monfook

    1988-01-01

    General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient-search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.

  16. Development of a turbomachinery design optimization procedure using a multiple-parameter nonlinear perturbation method

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.

    1984-01-01

    An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.

  17. An efficient multilevel optimization method for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system level and subsystem level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  18. Optimization of cell seeding in a 2D bio-scaffold system using computational models.

    PubMed

    Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong

    2017-05-01

    The cell expansion process is a crucial part of generating cells on a large-scale level in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models; the first model represents cell growth patterns whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can be a representation of various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are the most optimal configurations. The results have also illustrated the effectiveness of the proposed optimization method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
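
    ASMO is a published algorithm whose details are not reproduced here; the sketch below only illustrates the generic adaptive-surrogate loop it is built around, with a Gaussian-process surrogate standing in for the expensive WRF-run-to-skill-score mapping. The candidate-pool search and all names are assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def surrogate_opt(f, lo, hi, n_init=20, n_iter=30, seed=0):
            # generic adaptive-surrogate loop: fit a GP to evaluated points,
            # minimize the surrogate over a random candidate pool, then spend
            # one true (expensive) evaluation at the surrogate minimizer
            rng = np.random.default_rng(seed)
            d = len(lo)
            X = lo + rng.random((n_init, d)) * (hi - lo)
            y = np.array([f(x) for x in X])
            for _ in range(n_iter):
                gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                              normalize_y=True).fit(X, y)
                cand = lo + rng.random((2000, d)) * (hi - lo)
                x_new = cand[np.argmin(gp.predict(cand))]
                X = np.vstack([X, x_new])
                y = np.append(y, f(x_new))
            k = int(np.argmin(y))
            return X[k], y[k]

        # usage: f would wrap a WRF run and return a precipitation skill error;
        # a cheap analytic stand-in is used here for nine tunable parameters
        f = lambda x: float(np.sum((x - 0.3)**2))
        x_opt, y_opt = surrogate_opt(f, lo=np.zeros(9), hi=np.ones(9))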

  20. Local Approximation and Hierarchical Methods for Stochastic Optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.

  1. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
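
    The paper's exact WKPCA formulation is not reproduced here, but one natural reading, kernel PCA under a weighted empirical measure, is easy to sketch: centre an RBF kernel matrix with the weighted mean and scale it symmetrically by the snapshot weights before the eigendecomposition. Every name below is an assumption.

        import numpy as np

        def weighted_kernel_pca(X, weights, n_components=2, gamma=1.0):
            # kernel PCA with per-realization weights: centre the RBF kernel
            # with the weighted mean, scale symmetrically by sqrt(w), then
            # eigendecompose (one plausible WKPCA reading, assumed here)
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            sq = np.sum(X**2, axis=1)
            K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
            mw = K @ w
            Kc = K - mw[:, None] - mw[None, :] + w @ K @ w
            Kw = np.sqrt(w)[:, None] * Kc * np.sqrt(w)[None, :]
            vals, vecs = np.linalg.eigh(Kw)
            order = np.argsort(vals)[::-1][:n_components]
            return vals[order], vecs[:, order]

        # usage: 100 snapshots of a 50-dimensional field, weighted unequally
        rng = np.random.default_rng(0)
        snapshots = rng.normal(size=(100, 50))
        w = rng.uniform(0.1, 1.0, size=100)  # snapshot significance levels
        eigvals, components = weighted_kernel_pca(snapshots, w)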

  2. Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD).

    PubMed

    Rhoades, Seth D; Weljie, Aalim M

    2016-12-01

    Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses; however, HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and the array of tunable analytical parameters. Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05, respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method.
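
    COLMeD keeps each DoE round to at most 20 injections; the snippet below merely shows how a small factorial round can be enumerated and checked against such a budget. The factor names and levels are hypothetical, not those tuned in the paper.

        from itertools import product

        # hypothetical LC-MS factors and levels (not those used by COLMeD)
        factors = {
            "column_temp_C": [25, 35, 45],
            "flow_mL_per_min": [0.2, 0.4],
            "gradient_length_min": [15, 25],
        }

        runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
        assert len(runs) <= 20, "round exceeds the 20-injection budget"
        print(len(runs), "injections in this round")  # 3 * 2 * 2 = 12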

  3. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by ''self-organized criticality,'' a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called ''avalanches,'' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.

  4. Dynamic Optimization

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.

  5. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
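
    The abstract names its two ingredients exactly: a fluence-map data-fit term plus a group-sparsity penalty that switches whole candidate beams off, minimized with FISTA. A minimal sketch of that combination follows; the dose matrix, beam groupings, and penalty weight are random placeholders, and clinical details such as nonnegative fluence constraints are omitted.

        import numpy as np

        def fista_group_sparse(A, b, groups, lam, n_iter=500):
            # minimize 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 with FISTA;
            # the l2-norm penalty over each beam's beamlets drives whole
            # groups (candidate beams) exactly to zero
            L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            z, t = x.copy(), 1.0
            for _ in range(n_iter):
                v = z - A.T @ (A @ z - b) / L
                x_new = v.copy()
                for g in groups:  # group soft-thresholding (proximal step)
                    ng = np.linalg.norm(v[g])
                    x_new[g] = 0.0 if ng <= lam / L else (1.0 - lam / (L * ng)) * v[g]
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)
                x, t = x_new, t_new
            return x

        # usage: 5 candidate beams of 10 beamlets each on a random test problem
        rng = np.random.default_rng(0)
        A = rng.random((80, 50))
        b = rng.random(80)
        groups = [np.arange(g * 10, (g + 1) * 10) for g in range(5)]
        x = fista_group_sparse(A, b, groups, lam=2.0)
        active = [g for g, idx in enumerate(groups) if np.linalg.norm(x[idx]) > 1e-8]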

  6. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    PubMed

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
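
    The consensus-based PSO and Trust-Tech stages are the paper's contribution and are not reproduced here; for orientation, a plain global-best PSO, the kind of building block the second stage refines, is sketched below.

        import numpy as np

        def pso(f, lo, hi, n_particles=30, n_iter=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            # plain global-best PSO: blend inertia, pull toward each
            # particle's personal best, and pull toward the swarm best
            rng = np.random.default_rng(seed)
            d = len(lo)
            x = lo + rng.random((n_particles, d)) * (hi - lo)
            v = np.zeros_like(x)
            pbest, pcost = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pcost)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, d))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                cost = np.array([f(p) for p in x])
                better = cost < pcost
                pbest[better], pcost[better] = x[better], cost[better]
                g = pbest[np.argmin(pcost)].copy()
            return g, float(pcost.min())

        # usage: minimize a simple quadratic bowl in 5-D
        sphere = lambda p: float(np.sum(p**2))
        x_best, f_best = pso(sphere, lo=-5.0 * np.ones(5), hi=5.0 * np.ones(5))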

  7. Accessible Information Without Disturbing Partially Known Quantum States on a von Neumann Algebra

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui

    2018-04-01

    This paper addresses the problem of how much information we can extract without disturbing a statistical experiment, which is a family of partially known normal states on a von Neumann algebra. We define the classical part of a statistical experiment as the restriction of the equivalent minimal sufficient statistical experiment to the center of the outcome space, which, in the case of density operators on a Hilbert space, corresponds to the classical probability distributions appearing in the maximal decomposition by Koashi and Imoto (Phys. Rev. A 66, 022318, 2002). We show that we can access by a Schwarz or completely positive channel at most the classical part of a statistical experiment if we do not disturb the states. We apply this result to the broadcasting problem of a statistical experiment. We also show that the classical part of the direct product of statistical experiments is the direct product of the classical parts of the statistical experiments. The proof of the latter result is based on the theorem that the direct product of minimal sufficient statistical experiments is also minimal sufficient.

  8. The stability analysis of magnetohydrodynamic equilibria - Comparing the thermodynamic approach with the energy principle

    NASA Technical Reports Server (NTRS)

    Brinkmann, R. P.

    1989-01-01

    This paper is a contribution to the stability analysis of current-carrying plasmas, i.e., plasma systems that are forced by external mechanisms to carry a nonrelaxing electrical current. Under restriction to translationally invariant configurations, the thermodynamic stability criterion for a multicomponent plasma is rederived within the framework of nonideal MHD. The chosen dynamics neglects scalar resistivity, but allows for other types of dissipation effects both in Ohm's law and in the equation of motion. In the second section of the paper, the thermodynamic stability criterion is compared with the ideal-MHD-based energy principle of Bernstein et al. With the help of Schwarz's inequality, it is shown that the former criterion is always more 'pessimistic' than the latter, i.e., that thermodynamic stability implies stability according to the MHD principle, but not vice versa. This result confirms the physically plausible idea that dissipational effects tend to weaken the stability properties of current-carrying plasma equilibria by breaking the constraints of ideal MHD and allowing for possibly destabilizing effects such as magnetic field line reconfiguration.

  9. Approaches for the direct estimation of lambda, and demographic contributions to lambda, using capture-recapture data

    USGS Publications Warehouse

    Nichols, James D.; Hines, James E.

    2002-01-01

    We first consider the estimation of the finite rate of population increase or population growth rate, λ_i, using capture-recapture data from open populations. We review estimation and modelling of λ_i under three main approaches to modelling open-population data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to λ_i using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, γ_i, of Pradel (1996). We review estimation of γ_i under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for λ_i and γ_i with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.
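
    The quantities in this record are tied together by the temporal-symmetry identity λ_i = φ_i / γ_{i+1} (Pradel 1996), where φ_i is apparent survival; the seniority parameter is the time-reversed analogue of survival. A toy numeric illustration with made-up estimates follows; it is not an analysis of real capture-recapture data.

        # temporal-symmetry identity: lambda_i = phi_i / gamma_{i+1}
        phi = [0.80, 0.75]          # apparent survival between occasions i and i+1
        gamma = [None, 0.64, 0.60]  # seniority gamma_i; undefined for occasion 1
        lam = [phi[i] / gamma[i + 1] for i in range(2)]
        print(lam)  # both entries come out to 1.25: 25% growth per interval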

  10. Approaches for the direct estimation of lambda, and demographic contributions to lambda, using capture-recapture data

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.

    2002-01-01

    We first consider the estimation of the finite rate of population increase or population growth rate, λ_i, using capture-recapture data from open populations. We review estimation and modelling of λ_i under three main approaches to modelling open-population data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to λ_i using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, γ_i, of Pradel (1996). We review estimation of γ_i under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for λ_i and γ_i with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.

  11. Stabilizing all geometric moduli in heterotic Calabi-Yau vacua

    DOE PAGES

    Anderson, Lara B.; Gray, James; Lukas, Andre; ...

    2011-05-27

    We propose a scenario to stabilize all geometric moduli - that is, the complex structure, Kähler moduli and the dilaton - in smooth heterotic Calabi-Yau compactifications without Neveu-Schwarz three-form flux. This is accomplished using the gauge bundle required in any heterotic compactification, whose perturbative effects on the moduli are combined with non-perturbative corrections. We argue that, for appropriate gauge bundles, all complex structure and a large number of other moduli can be perturbatively stabilized - in the most restrictive case, leaving only one combination of Kähler moduli and the dilaton as a flat direction. At this stage, the remaining moduli space consists of Minkowski vacua. That is, the perturbative superpotential vanishes in the vacuum without the necessity to fine-tune flux. Finally, we incorporate non-perturbative effects such as gaugino condensation and/or instantons. These are strongly constrained by the anomalous U(1) symmetries which arise from the required bundle constructions. We present a specific example, with a consistent choice of non-perturbative effects, where all remaining flat directions are stabilized in an AdS vacuum.

  12. Laplace-transformed atomic orbital-based Møller–Plesset perturbation theory for relativistic two-component Hamiltonians

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl; Repisky, Michal, E-mail: michal.repisky@uit.no

    2016-07-07

    We present a formulation of Laplace-transformed atomic orbital-based second-order Møller–Plesset perturbation theory (MP2) energies for two-component Hamiltonians in the Kramers-restricted formalism. This low-order scaling technique can be used to enable correlated relativistic calculations for large molecular systems. We show that the working equations to compute the relativistic MP2 energy differ by merely a change of algebra (quaternion instead of real) from their non-relativistic counterparts. With a proof-of-principle implementation we study the effect of the nuclear charge on the magnitude of half-transformed integrals and show that for light elements spin-free and spin-orbit MP2 energies are almost identical. Furthermore, we investigate the effect of separation of charge distributions on the Coulomb and exchange energy contributions, which show the same long-range decay with the inter-electronic/atomic distance as for non-relativistic MP2. A linearly scaling implementation is possible if the proper distance behavior is introduced to the quaternion Schwarz-type estimates as for non-relativistic MP2.

  13. Anomalies, renormalization group flows, and the a-theorem in six-dimensional (1, 0) theories

    DOE PAGES

    Córdova, Clay; Dumitrescu, Thomas T.; Intriligator, Kenneth

    2016-10-17

    We establish a linear relation between the a-type Weyl anomaly and the ’t Hooft anomaly coefficients for the R-symmetry and gravitational anomalies in six-dimensional (1,0) superconformal field theories. For RG flows onto the tensor branch, where conformal symmetry is spontaneously broken, supersymmetry relates the anomaly mismatch Δa to the square of a four-derivative interaction for the dilaton. This establishes the a-theorem for all such flows. The four-derivative dilaton interaction is in turn related to the Green-Schwarz-like terms that are needed to match the ’t Hooft anomalies on the tensor branch, thus fixing their relation to Δa. We use our formula to obtain exact expressions for the a-anomaly of N small E8 instantons, as well as N M5-branes probing an orbifold singularity, and verify the a-theorem for RG flows onto their Higgs branches. We also discuss aspects of supersymmetric RG flows that terminate in scale but not conformally invariant theories with massless gauge fields.

  14. Seroconversion of a trivalent measles, mumps, and rubella vaccine in children aged 9 and 15 months.

    PubMed

    Forleo-Neto, E; Carvalho, E S; Fuentes, I C; Precivale, M S; Forleo, L H; Farhat, C K

    1997-12-01

    The serological response to MMR vaccine was evaluated in 109 9-month-old infants having no history of measles vaccination, and in 98 15-month-old children who had received monocomponent measles immunisation at 9 months. The combined vaccine contained Schwarz, Urabe Am9, and Wistar RA 27/3 live attenuated virus strains. Preimmunisation antibody levels were extremely low for the 9-month-old children, indicating that maternally-transmitted antibodies do not persist at this age. In the case of mumps, preimmunisation antibody levels were significantly higher in the 15-month-old than in the 9-month-old group. A difference between groups in terms of postimmunisation antibody titres was observed only for rubella, with titres being significantly higher in the older group. Seroconversion rates were high in both groups and no serious events attributable to vaccination were observed. The MMR vaccine can thus be administered to children as young as 9 months of age. Evidence for the efficacy of a two-dose schedule, i.e. at 9 and 15 months, is presented.

  15. It felt fluent, and I liked it: subjective feeling of fluency rather than objective fluency determines liking.

    PubMed

    Forster, Michael; Leder, Helmut; Ansorge, Ulrich

    2013-04-01

    According to the processing-fluency explanation of aesthetics, more fluently processed stimuli are preferred (R. Reber, N. Schwarz, & P. Winkielman, 2004, Processing fluency and aesthetic pleasure: Is beauty in the perceiver's processing experience? Personality and Social Psychology Review, Vol. 8, pp. 364-382.). In this view, the subjective feeling of ease of processing is considered important, but this has not been directly tested in perceptual processing. In two experiments, we therefore objectively manipulated fluency (ease of processing) with subliminal perceptual priming (Study 1) and variations in presentation durations (Study 2). We assessed the impact of objective fluency on feelings of fluency and liking, as well as their interdependence. In line with the processing-fluency account, we found that objectively more fluent images were indeed judged as more fluent and were also liked more. Moreover, differences in liking were even stronger when data were analyzed according to felt fluency. These findings demonstrate that perceptual fluency is not only explicitly felt, it can also be reported and is an important determinant of liking. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  16. Foam morphology, frustration and topological defects in a Negatively curved Hele-Shaw geometry

    NASA Astrophysics Data System (ADS)

    Mughal, Adil; Schroeder-Turk, Gerd; Evans, Myfanwy

    2014-03-01

    We present preliminary simulations of foams and single bubbles confined in a narrow gap between parallel surfaces. Unlike previous work, in which the bounding surfaces are flat (the so called Hele-Shaw geometry), we consider surfaces with non-vanishing Gaussian curvature. We demonstrate that the curvature of the bounding surfaces induce a geometric frustration in the preferred order of the foam. This frustration can be relieved by the introduction of topological defects (disclinations, dislocations and complex scar arrangements). We give a detailed analysis of these defects for foams confined in curved Hele-Shaw cells and compare our results with exotic honeycombs, built by bees on surfaces of varying Gaussian curvature. Our simulations, while encompassing surfaces of constant Gaussian curvature (such as the sphere and the cylinder), focus on surfaces with negative Gaussian curvature and in particular triply periodic minimal surfaces (such as the Schwarz P-surface and the Schoen's Gyroid surface). We use the results from a sphere-packing algorithm to generate a Voronoi partition that forms the basis of a Surface Evolver simulation, which yields a realistic foam morphology.

  17. Genetic differentiation of the stingless bee Tetragonula pagdeni in Thailand using SSCP analysis of a large subunit of mitochondrial ribosomal DNA.

    PubMed

    Thummajitsakul, Sirikul; Klinbunga, Sirawut; Sittipraneed, Siriporn

    2011-08-01

    Genetic diversity and population differentiation of the stingless bee Tetragonula pagdeni (Schwarz) was assessed using single-strand conformational polymorphism (SSCP) analysis of a large subunit of the ribosomal RNA gene (16S rRNA). High levels of genetic variation among individuals within each population (North, Northeast, Central, Prachuap Khiri Khan, Chumphon, and Peninsular Thailand) of T. pagdeni were observed. Analysis of molecular variance indicated significant genetic differentiation among the six geographic populations (Φ (PT) = 0.28, P < 0.001) and between samples collected from north and south of the Isthmus of Kra (Φ (PT) = 0.18, P < 0.001). In addition, Φ (PT) values between all pairwise comparisons were statistically significant (P < 0.01), indicating strong degrees of intraspecific population differentiation. Therefore, PCR-SSCP is a simple and cost-effective technique applicable for routine population genetic analyses in T. pagdeni and other stingless bees. The results also provide an important baseline for the conservation and management of this ecologically important species.

  18. Chikungunya Virus Vaccines: Viral Vector-Based Approaches.

    PubMed

    Ramsauer, Katrin; Tangy, Frédéric

    2016-12-15

    In 2013, a major chikungunya virus (CHIKV) epidemic reached the Americas. In the past 2 years, >1.7 million people have been infected. In light of the current epidemic, with millions of people in North and South America at risk, efforts to rapidly develop effective vaccines have increased. Here, we focus on CHIKV vaccines that use viral-vector technologies. This group of vaccine candidates shares an ability to potently induce humoral and cellular immune responses by use of highly attenuated and safe vaccine backbones. So far, well-described vectors such as modified vaccinia virus Ankara, complex adenovirus, vesicular stomatitis virus, alphavirus-based chimeras, and measles vaccine Schwarz strain (MV/Schw) have been described as potential vaccines. We summarize here the recent data on these experimental vaccines, with a focus on the preclinical and clinical activities on the MV/Schw-based candidate, which is the first CHIKV-vectored vaccine that has completed a clinical trial. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  19. Trivial solutions of generalized supergravity vs non-abelian T-duality anomaly

    NASA Astrophysics Data System (ADS)

    Wulff, Linus

    2018-06-01

    The equations that follow from kappa symmetry of the type II Green-Schwarz string are a certain deformation, by a Killing vector field K, of the type II supergravity equations. We analyze under what conditions solutions of these 'generalized' supergravity equations are trivial in the sense that they solve also the standard supergravity equations. We argue that for this to happen K must be null and satisfy dK = i_K H, with H = dB the NSNS three-form field strength. Non-trivial examples are provided by symmetric pp-wave solutions. We then analyze the consequences for non-abelian T-duality and the closely related homogeneous Yang-Baxter sigma models. When one performs non-abelian T-duality of a string sigma model on a non-unimodular (sub)algebra one generates a non-vanishing K proportional to the trace of the structure constants. This is expected to lead to an anomaly but we show that when K satisfies the same conditions the anomaly in fact goes away, leading to more possibilities for non-anomalous non-abelian T-duality.

  20. Advances in modeling the pressure correlation terms in the second moment equations

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Shabbir, Aamir; Lumley, John L.

    1991-01-01

    In developing turbulence models, various model constraints have been proposed in an attempt to make the model equations more general (or universal). The most recent of these are the realizability principle, the linearity principle, rapid distortion theory, and the material indifference principle. Several issues concerning these principles are discussed, with special attention paid to the realizability principle. Realizability (defined as the requirement of non-negative energy and the Schwarz inequality between any fluctuating quantities) is the basic physical and mathematical principle that any modeled equation should obey; it is the most universal and important requirement, and also the minimal one, for preventing a model equation from producing unphysical results. The principle of realizability is described in detail, realizability conditions are derived for various turbulence models, and model forms are proposed for the pressure correlation terms in the second moment equations. Detailed comparisons of various turbulence models with experiments and direct numerical simulations are presented. As a special case of turbulence, two-dimensional two-component turbulence modeling is also discussed.
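    The two requirements named in the definition above are commonly written, for fluctuating velocity components u_α (no summation implied; this rendering is ours):

```latex
\langle u_\alpha^2 \rangle \ge 0, \qquad
\langle u_\alpha u_\beta \rangle^2 \le \langle u_\alpha^2 \rangle \, \langle u_\beta^2 \rangle
```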

  1. Periodic minimal surfaces

    NASA Astrophysics Data System (ADS)

    Mackay, Alan L.

    1985-04-01

    A minimal surface is one for which, like a soap film with the same pressure on each side, the mean curvature is zero and, thus, is one where the two principal curvatures are equal and opposite at every point. For every closed circuit in the surface, the area is a minimum. Schwarz [1] and Neovius [2] showed that elements of such surfaces could be put together to give surfaces periodic in three dimensions. These periodic minimal surfaces are geometrical invariants, as are the regular polyhedra, but the former are curved. Minimal surfaces are appropriate for the description of various structures where internal surfaces are prominent and seek to adopt a minimum area or a zero mean curvature subject to their topology; thus they merit more complete numerical characterization. There seem to be at least 18 such surfaces [3], with various symmetries and topologies, related to the crystallographic space groups. Recently, glyceryl mono-oleate (GMO) was shown by Longley and McIntosh [4] to take the shape of the F-surface. The structure postulated is shown here to be in good agreement with an analysis of the fundamental geometry of periodic minimal surfaces.
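    In symbols, with κ₁ and κ₂ the principal curvatures, the defining property described above is:

```latex
H = \tfrac{1}{2}\left(\kappa_1 + \kappa_2\right) = 0
\quad\Longleftrightarrow\quad \kappa_1 = -\kappa_2
```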

  2. A kriging metamodel-assisted robust optimization method based on a reverse model

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimal and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures, which require a large amount of computational effort because the robustness of each candidate solution delivered from the outer level must be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced to a single-loop optimization structure to ease the computational burden. Because it ignores the interpolation uncertainty of the kriging metamodel, however, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of the kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner-level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of an individual can be changed by the interpolation uncertainty of the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.
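    As a rough illustration of the single-loop idea, the sketch below fits a kriging (Gaussian process) surrogate to noisy samples of an expensive objective and then minimizes a robust criterion that penalizes both the worst predicted value over the uncertainty set and the surrogate's own interpolation uncertainty. This is a minimal sketch under assumed toy functions and bounds, not the K-RMRO or IK-RMRO formulation itself.

```python
# Surrogate-assisted robust optimization sketch (illustrative only).
# Assumes an expensive objective f(x) with an uncertain shift delta in
# [-0.1, 0.1]; the function, bounds, and weights are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x, delta=0.0):
    # stand-in for an expensive simulation
    return (x - 1.0 + delta) ** 2 + 0.1 * np.sin(5 * x)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 3, size=(30, 1))
y = np.array([f(x[0], rng.uniform(-0.1, 0.1)) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3).fit(X, y)

def robust_objective(x):
    # worst predicted value over the uncertainty set, penalized by the
    # kriging interpolation uncertainty (predictive standard deviation)
    deltas = np.linspace(-0.1, 0.1, 11)
    mean, std = gp.predict((x + deltas).reshape(-1, 1), return_std=True)
    return np.max(mean) + 2.0 * np.max(std)

grid = np.linspace(-2, 3, 400)
x_rob = grid[np.argmin([robust_objective(x) for x in grid])]
print("robust optimum (surrogate estimate):", x_rob)
```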

  3. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out by converting the orbital manoeuvre into a parameter optimization problem, assigning inverse tangent functions to the changes in the direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values of the optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm offers good optimality and fast convergence. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.
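    A minimal sketch of the parameterization idea, under a toy planar burn and a hypothetical target delta-v (the article's manoeuvre model and modified algorithm are not reproduced here): the thrust direction angle is expressed through an inverse tangent function of time, and its parameters are searched by an off-the-shelf evolutionary optimizer.

```python
# Arctan parameterization of a thrust-angle profile, optimized by an
# evolutionary algorithm. Dynamics and cost are toy stand-ins.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 1.0, 200)           # normalized burn time
dt = t[1] - t[0]

def angle_profile(p, t):
    a, b, c = p                           # arctan parameterization of the angle
    return a * np.arctan(b * t + c)

def cost(p):
    theta = angle_profile(p, t)
    # hypothetical terminal-state error: accumulated planar velocity change
    dvx = np.sum(np.cos(theta)) * dt
    dvy = np.sum(np.sin(theta)) * dt
    target = np.array([0.6, 0.4])         # hypothetical required delta-v vector
    return np.hypot(dvx - target[0], dvy - target[1])

result = differential_evolution(cost, bounds=[(-2, 2), (-10, 10), (-5, 5)], seed=1)
print("best parameters:", result.x, "residual:", result.fun)
```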

  4. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
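    As a minimal illustration of the mathematical-programming formulations surveyed here, the sketch below solves a toy conjunctive-use problem with linear programming: meet a fixed water demand from two sources at minimum cost, subject to capacity limits. All costs, capacities, and the demand are hypothetical.

```python
# Toy conjunctive-use linear program: minimize cost of meeting demand
# from surface water (q_sw) and groundwater (q_gw).
from scipy.optimize import linprog

cost = [1.0, 1.5]             # unit costs: surface water, groundwater (assumed)
# demand constraint q_sw + q_gw >= 100, written as -q_sw - q_gw <= -100
A_ub = [[-1.0, -1.0]]
b_ub = [-100.0]
bounds = [(0, 80), (0, 60)]   # source capacities (assumed)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)         # uses the cheap source to capacity, then the other
```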

  5. Optimization design of multiphase pump impeller based on combined genetic algorithm and boundary vortex flux diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-ya; Cai, Shu-jie; Li, Yong-jiang; Li, Yong-jiang; Zhang, Yong-xue

    2017-12-01

    A novel optimization design method for the multiphase pump impeller is proposed by combining quasi-3D hydraulic design (Q3DHD), boundary vortex flux (BVF) diagnosis, and a genetic algorithm (GA). The BVF diagnosis, based on the Q3DHD, is used to evaluate the objective function. Numerical simulations and hydraulic performance tests are carried out to compare the impeller designed only by the Q3DHD method with that optimized by the presented method. Comparisons of the flow fields simulated under the same conditions show that (1) the pressure distribution in the optimized impeller is more reasonable and gas-liquid separation is more effectively inhibited, (2) the scales of the gas pocket and the vortex decrease remarkably for the optimized impeller, and (3) the unevenness of the BVF distribution near the shroud of the original impeller is effectively eliminated in the optimized impeller. The experimental results show that the differential pressure and the maximum efficiency of the optimized impeller are increased by 4% and 2.5%, respectively. Overall, the study indicates that the proposed optimization design method is feasible.

  6. Flexible operation strategy for environment control system in abnormal supply power condition

    NASA Astrophysics Data System (ADS)

    Liping, Pang; Guoxiang, Li; Hongquan, Qu; Yufeng, Fang

    2017-04-01

    This paper establishes an optimization method that can be applied to the flexible operation of the environment control system under an abnormal supply power condition. A proposed concept of lifespan is used to evaluate the depletion time of the non-regenerative substances. The optimization objective is to maximize these lifespans, and the optimization variables are the powers allocated to the subsystems. An improved Non-dominated Sorting Genetic Algorithm is adopted to obtain the Pareto frontier under constraints on the cabin environmental parameters and the adjustable operating parameters of the subsystems. Treating the objective functions as equally important, the preferred power allocation among subsystems can be optimized, and the corresponding running parameters of the subsystems can then be determined to ensure maximum lifespans. A long-duration space station with three astronauts is used to illustrate the proposed optimization method, with three different CO2 partial pressure levels taken into consideration. The optimization results show that the proposed method obtains the preferred power allocation for the subsystems when the supply power is below its nominal value, and that the method can be applied to autonomous control for the emergency response of the environment control system.
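    The core operation of any non-dominated sorting method is the Pareto filter. A minimal sketch, independent of the paper's subsystem model (the objective values below are arbitrary placeholders, with all objectives minimized):

```python
# Pareto (non-dominated) filter of the kind used inside NSGA-style algorithms.
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # row j dominates row i if it is <= everywhere and < somewhere
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominated.any()
    return mask

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(F[pareto_front(F)])   # [3, 3] is dominated by [2, 2] and is dropped
```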

  7. Optimization and evaluation of a method to detect adenoviruses in river water

    EPA Pesticide Factsheets

    This dataset includes the recoveries of spiked adenovirus through various stages of experimental optimization procedures. This dataset is associated with the following publication: McMinn, B., A. Korajkic, and A. Grimm. Optimization and evaluation of a method to detect adenoviruses in river water. Journal of Virological Methods 231(1): 8-13 (2016).

  8. Optimal fractional order PID design via Tabu Search based algorithm.

    PubMed

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) for designing a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All FOPID parameters are computed from random initial conditions using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method.
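    A minimal sketch of a tabu search loop over the five FOPID parameters (Kp, Ki, Kd, λ, μ), starting from random initial conditions as described above. The cost function is a hypothetical stand-in; evaluating a real fractional-order closed loop would require a fractional-calculus simulation, which is out of scope here.

```python
# Tabu search over FOPID parameters with a stand-in cost function.
import numpy as np

rng = np.random.default_rng(0)

def cost(p):
    # hypothetical performance index with an arbitrary "best" parameter vector
    target = np.array([2.0, 1.0, 0.5, 0.9, 1.1])
    return float(np.sum((p - target) ** 2))

x = rng.uniform(0, 3, size=5)          # random initial (Kp, Ki, Kd, lambda, mu)
best, best_cost = x.copy(), cost(x)
tabu = []                              # short-term memory of visited points

for _ in range(200):
    # neighborhood: random perturbations, excluding near-tabu candidates
    candidates = [x + rng.normal(0, 0.1, size=5) for _ in range(20)]
    candidates = [c for c in candidates
                  if all(np.linalg.norm(c - t) > 0.05 for t in tabu)]
    if not candidates:
        continue
    x = min(candidates, key=cost)      # best admissible neighbor, even if worse
    tabu.append(x.copy())
    if len(tabu) > 15:                 # bounded tabu tenure
        tabu.pop(0)
    if cost(x) < best_cost:
        best, best_cost = x.copy(), cost(x)

print("best FOPID parameters:", np.round(best, 3), "cost:", round(best_cost, 4))
```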

  9. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber-physical systems (CPS) have recently emerged as a technology that provides promising approaches to demand side management (DSM), an important capability in industrial power systems. The manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices that have complex physical dynamics, and DSM integrated with CPS is an effective methodology for solving its energy optimization problems. This paper presents a prediction-based self-adaptive energy optimization method for demand side management of a manufacturing center in cyber-physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China, and a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment: an AMI sensing system measures the demand side energy consumption of the manufacturing center, and the load prediction-based energy optimization scheme is implemented on the collected data. The DSM problem for the manufacturing center is solved with both the PSO and the CPSO methods, and the results show that the self-adaptive CPSO method improves optimization results by 5% compared with the traditional PSO method.
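    As an illustration of the forecasting step only, the sketch below fits a sparse Bayesian regression (scikit-learn's ARD implementation) to lagged hourly loads and predicts a held-out day. The synthetic daily-cycle data and the 24-lag feature window are assumptions, not the paper's componential method.

```python
# Sparse Bayesian (ARD) short-term load forecasting on synthetic hourly data.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)                       # 30 days of hourly loads
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

lags = 24
X = np.array([load[i - lags:i] for i in range(lags, load.size)])
y = load[lags:]

model = ARDRegression().fit(X[:-24], y[:-24])    # hold out the last day
pred = model.predict(X[-24:])
print("mean absolute error on held-out day:", np.abs(pred - y[-24:]).mean())
```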

  10. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool, widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, and many problems remain to be solved. One such problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for a practical problem; this greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered according to distance so as to divide the solution samples into several parts; the solution space and the "good enough" set can then be decomposed based on the clustering results. Finally, using standard results from statistics, the evaluation is obtained. To validate the proposed method, several intelligent algorithms, namely ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS), were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  11. A sensor network based virtual beam-like structure method for fault diagnosis and monitoring of complex structures with Improved Bacterial Optimization

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-02-01

    This paper proposes a novel method for the fault diagnosis of complex structures based on an optimized virtual beam-like structure approach. A complex structure can be regarded as a combination of numerous virtual beam-like structures, considering the vibration transmission path from the vibration sources to each sensor. Each structural 'virtual beam' consists of a sensor chain automatically obtained by an Improved Bacterial Optimization Algorithm (IBOA), a biologically inspired optimization method proposed here for solving the discrete optimization problem of selecting the optimal virtual beam for fault diagnosis. The virtual beam-like-structure approach requires little prior knowledge: it neither requires stationary response data nor is confined to a specific structural design, and it is easy to implement within a sensor network attached to the monitored structure. The proposed fault diagnosis method has been tested on the detection of loosening screws located at varying positions in a real satellite-like model. Compared with empirical methods, the proposed virtual beam-like structure method has proved to be very effective and more reliable for fault localization.

  12. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    NASA Astrophysics Data System (ADS)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy, and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented: it combines a discrete particle swarm optimization with a genetic algorithm, providing both local and global search ability. A CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that the optimized arrangement reduces the amplitude and localization of the forced vibration response of the bladed-disc system, while optimization based on the CUDA framework improves the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.

  13. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
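    The empirical optimization described above is easy to reproduce in miniature: simulate lognormal rain-rate snapshots whose parent distribution varies from snapshot to snapshot, then scan thresholds for the one maximizing the correlation between the area-average rain rate and the fractional coverage exceeding the threshold. The distribution parameters below are assumed for illustration, not fitted to GATE data.

```python
# Threshold-method illustration: find tau maximizing corr(area mean, coverage).
import numpy as np

rng = np.random.default_rng(0)
n_snapshots, n_pixels = 500, 1000
# snapshot-to-snapshot variation of the parent lognormal distribution
mu = rng.normal(0.5, 0.5, n_snapshots)
rain = np.exp(mu[:, None] + 1.2 * rng.standard_normal((n_snapshots, n_pixels)))

area_mean = rain.mean(axis=1)                    # first moment per snapshot
taus = np.linspace(0.5, 40, 80)
corrs = [np.corrcoef(area_mean, (rain > tau).mean(axis=1))[0, 1] for tau in taus]
print("optimal threshold ~", taus[int(np.argmax(corrs))], "mm/h")
```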

  14. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal density variable into an element-wise physical density variable.
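    In symbols, the two divergence-free conditions named above read, for the source-free case (with B the magnetic flux density and D the electric displacement; this rendering of the notation is ours):

```latex
\nabla \cdot \mathbf{B} = 0 \quad \text{(magnetic-field formulation)},
\qquad
\nabla \cdot \mathbf{D} = 0 \quad \text{(electric-field formulation)}
```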

  16. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  17. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of the composite cure process. Preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived using the direct differentiation method and are also solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized, and various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  18. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbations on the parameters, in a manner similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear, and compared with a prior method based on multiple Monte Carlo simulation runs; the comparison shows that the approach presented in this paper achieves better performance.
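    The regularization idea in the third method can be made concrete with a Tikhonov-regularized least-squares solve; the sketch below uses synthetic, nearly collinear data to show how the penalty damps the solution's sensitivity. The problem data and λ values are assumptions, not the paper's test cases.

```python
# Tikhonov-regularized least squares: x = (A^T A + lam*I)^{-1} A^T b.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
A[:, 1] = A[:, 0] + 1e-4 * rng.standard_normal(50)   # near-collinear columns
b = A @ np.ones(10) + 0.01 * rng.standard_normal(50)

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

for lam in (0.0, 1e-2):
    x = tikhonov(A, b, lam)
    # with lam = 0 the ill-conditioning inflates ||x||; the penalty tames it
    print(f"lambda={lam:g}  ||x||={np.linalg.norm(x):.2f}")
```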

  19. Shuffle Optimizer: A Program to Optimize DNA Shuffling for Protein Engineering.

    PubMed

    Milligan, John N; Garry, Daniel J

    2017-01-01

    DNA shuffling is a powerful tool for developing libraries of variants for protein engineering. Here, we present a protocol for using our freely available and easy-to-use computer program, Shuffle Optimizer. Shuffle Optimizer is written in the Python language and increases the nucleotide homology between two pieces of DNA to be shuffled together without changing the amino acid sequence. In addition, we include sections on optimal primer design for DNA shuffling and library construction, a small-volume ultrasonicator method for creating sheared DNA, and finally a method for reassembling the sheared fragments and recovering and cloning the library. The Shuffle Optimizer program and these protocols will be useful to anyone wishing to perform any of the nucleotide homology-dependent shuffling methods.
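    A minimal sketch of the underlying recoding idea, not the Shuffle Optimizer program itself: for each codon of one sequence, greedily choose the synonymous codon that best matches the other sequence at the nucleotide level, leaving the encoded protein unchanged. Biopython's standard codon table is used; the toy sequences are assumptions.

```python
# Greedy synonymous recoding to raise nucleotide identity between two CDSs.
from Bio.Data import CodonTable

table = CodonTable.unambiguous_dna_by_id[1].forward_table   # codon -> amino acid
synonyms = {}
for codon, aa in table.items():
    synonyms.setdefault(aa, []).append(codon)

def matches(a, b):
    return sum(x == y for x, y in zip(a, b))

def shuffle_optimize(seq_a, seq_b):
    """Recode seq_a codon-by-codon to best match seq_b, protein unchanged."""
    out = []
    for i in range(0, len(seq_a), 3):
        codon_a, codon_b = seq_a[i:i + 3], seq_b[i:i + 3]
        aa = table[codon_a]
        out.append(max(synonyms[aa], key=lambda c: matches(c, codon_b)))
    return "".join(out)

# toy sequences encoding the same tripeptide (Leu-Ala-Arg) with different codons
a = "CTGGCTAGA"
b = "TTAGCGCGT"
print(shuffle_optimize(a, b))   # recoded 'a', now identical to 'b' here
```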

  20. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    To make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear multi-class classifier, a PSO-based parameter optimization model for the multi-class SVM is established, and transformer fault diagnosis is conducted in combination with cross-validation. The fault diagnosis results show that the average accuracy of the proposed method is higher than that of the standard support vector machine and the genetic-algorithm-optimized support vector machine, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
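    A minimal sketch of the PSO-tuned SVM idea, with cross-validated accuracy as the fitness. A scikit-learn toy dataset stands in for the DGA-ratio features, and the swarm settings and search bounds are assumptions, not those of the paper.

```python
# PSO over (log10 C, log10 gamma) for an RBF SVM, scored by 5-fold CV.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(p):                      # p = (log10 C, log10 gamma)
    clf = SVC(C=10.0 ** p[0], gamma=10.0 ** p[1])
    return cross_val_score(clf, X, y, cv=5).mean()

n, iters = 12, 25
lo, hi = np.array([-1.0, -6.0]), np.array([3.0, 0.0])
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()]

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_f.max())
```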
