Sample records for fundamental optimal relation

  1. Modeling and Error Analysis of a Superconducting Gravity Gradiometer.

    DTIC Science & Technology

    1979-08-01

    The fundamental limit to instrument sensitivity is the thermal noise of the sensor. For the gradiometer design outlined above, the best sensitivity...Mapoles at Stanford. Chapter IV determines the relation between dynamic range, the sensor Q, and the thermal noise of the cryogenic accelerometer. An...C.1 Accelerometer Optimization: (1) development and optimization of the loaded diaphragm sensor; (2) determination of the optimal values of the

  2. Tuning quadratic nonlinear photonic crystal fibers for zero group-velocity mismatch.

    PubMed

    Bache, Morten; Nielsen, Hanne; Laegsgaard, Jesper; Bang, Ole

    2006-06-01

    We consider an index-guiding silica photonic crystal fiber with a triangular hole pattern and a periodically poled quadratic nonlinearity. By tuning the pitch and the relative hole size, second-harmonic generation with zero group-velocity mismatch is found for any fundamental wavelength above 780 nm. The nonlinear strength is optimized when the fundamental has maximum confinement in the core. The conversion bandwidth allows for femtosecond-pulse conversion, and relative efficiencies of 4%-180% W⁻¹cm⁻² were found.

  3. Influence of fundamental mode fill factor on disk laser output power and laser beam quality

    NASA Astrophysics Data System (ADS)

    Cheng, Zhiyong; Yang, Zhuo; Shao, Xichun; Li, Wei; Zhu, Mengzhen

    2017-11-01

    A three-dimensional numerical model based on the finite element method and the Fox-Li method with angular spectrum diffraction theory is developed to calculate the output power and power density distribution of a Yb:YAG disk laser. We investigate the influence of the fundamental mode fill factor (the ratio of fundamental mode size to pump spot size) on the output power and laser beam quality. Owing to aspherical aberration and the soft aperture effect in the laser disk, high beam quality can be achieved only at relatively lower efficiency. The highest output power of the fundamental laser mode is influenced by the fundamental mode fill factor. In addition, we find that the optimal mode fill factor increases with pump spot size.

  4. Differences in Optimal Growth Equations For White Oak in the Interior Highlands

    Treesearch

    Don C. Bragg; James M. Guldin

    2003-01-01

    Optimal growth equations are fundamental to many ecological simulators, but few have been critically examined. This paper reviews some of the behavior of the Potential Relative Increment (PRI) approach. Models for white oak were compared for Arkansas River Valley (ARV), Boston Mountains (BoM), Ouachita Mountains (OM), and Ozark Highlands (OH) ecological sections of the...

  5. Optimal lay-up design of variable stiffness laminated composite plates by a layer-wise optimization technique

    NASA Astrophysics Data System (ADS)

    Houmat, A.

    2018-02-01

    The optimal lay-up design for the maximum fundamental frequency of variable stiffness laminated composite plates is investigated using a layer-wise optimization technique. The design variables are two fibre orientation angles per ply. Thin plate theory is used in conjunction with a p-element to calculate the fundamental frequencies of symmetrically and antisymmetrically laminated composite plates. Comparisons with existing optimal solutions for constant stiffness symmetrically laminated composite plates show excellent agreement. It is observed that the maximum fundamental frequency can be increased considerably using variable stiffness design as compared to constant stiffness design. In addition, optimal lay-ups for the maximum fundamental frequency of variable stiffness symmetrically and antisymmetrically laminated composite plates with different aspect ratios and various combinations of free, simply supported and clamped edge conditions are presented. These should prove a useful benchmark for optimal lay-ups of variable stiffness laminated composite plates.
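
    The layer-wise idea above, improving one ply's fibre orientation at a time while the other plies are held fixed, can be sketched generically. The "frequency" function below is a hypothetical separable surrogate for illustration, not the plate vibration model of the paper:

```python
import math

def layerwise_maximize(freq, n_plies, candidates, sweeps=10):
    # Layer-wise optimization: improve one ply's angle at a time with
    # the other plies held fixed, sweeping over plies until stable.
    angles = [candidates[0]] * n_plies
    for _ in range(sweeps):
        changed = False
        for i in range(n_plies):
            best = max(candidates,
                       key=lambda a: freq(angles[:i] + [a] + angles[i + 1:]))
            if best != angles[i]:
                angles[i] = best
                changed = True
        if not changed:
            break
    return angles, freq(angles)

# Hypothetical separable "frequency" surrogate peaking at 45-degree plies;
# a real objective would come from a laminated plate model.
def surrogate_freq(angles):
    return sum(math.cos(math.radians(2 * (a - 45.0))) for a in angles)

candidates = list(range(-90, 91, 15))
best_angles, best_f = layerwise_maximize(surrogate_freq, 4, candidates)
```

    Because each sweep only ever improves the objective, the iteration terminates at a layer-wise optimum, which for this separable surrogate is also the global one.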

  6. Numerical Device Modeling, Analysis, and Optimization of Extended-SWIR HgCdTe Infrared Detectors

    NASA Astrophysics Data System (ADS)

    Schuster, J.; DeWames, R. E.; DeCuir, E. A.; Bellotti, E.; Dhar, N.; Wijewarnasuriya, P. S.

    2016-09-01

    Imaging in the extended short-wavelength infrared (eSWIR) spectral band (1.7-3.0 μm) for astronomy applications is an area of significant interest. However, these applications require infrared detectors with extremely low dark current (less than 0.01 electrons per pixel per second for certain applications). In these detectors, the sources of dark current that may limit overall system performance are fundamental and/or defect-related mechanisms. Non-optimized growth or device processing may introduce material point defects within the HgCdTe bandgap, leading to Shockley-Read-Hall-dominated dark current. While realizing contributions to the dark current from only fundamental mechanisms should be the goal for attaining optimal device performance, it may not be readily feasible with current technology and/or resources. In this regard, the U.S. Army Research Laboratory performed physics-based, two- and three-dimensional numerical modeling of HgCdTe photovoltaic infrared detectors designed for operation in the eSWIR spectral band. The underlying impetus for this capability and study originates with a desire to reach fundamental performance limits via intelligent device design.

  7. Long-Run Savings and Investment Strategy Optimization

    PubMed Central

    Gerrard, Russell; Guillén, Montserrat; Pérez-Marín, Ana M.

    2014-01-01

    We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained which follows from constant absolute rather than constant relative risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on the downside risk-adjusted equivalence that is used in our illustration. PMID:24711728

  8. Long-run savings and investment strategy optimization.

    PubMed

    Gerrard, Russell; Guillén, Montserrat; Nielsen, Jens Perch; Pérez-Marín, Ana M

    2014-01-01

    We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained which follows from constant absolute rather than constant relative risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on the downside risk-adjusted equivalence that is used in our illustration.

  9. A consistent methodology for optimal shape design of graphene sheets to maximize their fundamental frequencies considering topological defects

    NASA Astrophysics Data System (ADS)

    Shi, Jin-Xing; Ohmura, Keiichiro; Shimoda, Masatoshi; Lei, Xiao-Wen

    2018-07-01

    In recent years, the shape design of graphene sheets (GSs) by introducing topological defects to enhance their mechanical behavior has attracted the attention of scholars. In the present work, we propose a consistent methodology for the optimal shape design of GSs using a combination of the molecular mechanics (MM) method, a non-parametric shape optimization method, the phase field crystal (PFC) method, Voronoi tessellation, and molecular dynamics (MD) simulation to maximize their fundamental frequencies. First, we model GSs as continuum frame models using a link between the MM method and continuum mechanics. Then, we carry out the optimal shape design of GSs in a fundamental frequency maximization problem based on a shape optimization method developed for frames. However, the obtained optimal shapes of GSs consisting only of hexagonal carbon rings are unstable, as they do not satisfy the principle of least action, so we relocate carbon atoms on the optimal shapes by introducing topological defects using the PFC method and Voronoi tessellation. Finally, we perform structural relaxation through MD simulation to determine the final optimal shapes of GSs. We design two examples of GSs, and the optimal results show that the fundamental frequencies of GSs can be significantly enhanced by the optimal shape design methodology.

  10. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
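
    The effect described here is easy to reproduce: rounding the objective to a few digits emulates a Class Two function, and a derivative-free pattern search (a minimal Hooke-Jeeves-style sketch, not one of the codes tested in the paper) still makes progress because it only compares function values:

```python
def f_low_precision(x, digits=2):
    # Class Two objective: the underlying quadratic can only be
    # evaluated to 'digits' decimal places (limited analysis precision).
    true_val = sum((xi - 1.0) ** 2 for xi in x)
    return round(true_val, digits)

def pattern_search(f, x0, step=0.5, tol=1e-3, max_iter=200):
    # Minimal Hooke-Jeeves-style direct search: probe each coordinate
    # in both directions; halve the step when no probe improves.
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

xmin, fmin = pattern_search(f_low_precision, [0.0, 0.0])
```

    Gradient-based codes, by contrast, would differentiate through the rounding plateau and see a zero (or wild) derivative, which is the behavior the two examples in the paper illustrate.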

  11. Milestones of mathematical model for business process management related to cost estimate documentation in petroleum industry

    NASA Astrophysics Data System (ADS)

    Khamidullin, R. I.

    2018-05-01

    The paper is devoted to milestones of an optimal mathematical model for a business process related to cost estimate documentation compiled during the construction and reconstruction of oil and gas facilities. It describes the study and analysis of fundamental issues in the petroleum industry caused by economic instability and deterioration of business strategy. Business process management is presented as business process modeling aimed at improving the studied business process, namely the main optimization criteria and recommendations for improving the above-mentioned business model.

  12. The influence of optimism and pessimism on student achievement in mathematics

    NASA Astrophysics Data System (ADS)

    Yates, Shirley M.

    2002-11-01

    Students' causal attributions are not only fundamental motivational variables but are also critical motivators of their persistence in learning. Optimism, pessimism, and achievement in mathematics were measured in a sample of primary and lower secondary students on two occasions. Although achievement in mathematics was most strongly related to prior achievement and grade level, optimism and pessimism were significant factors. In particular, students with a more generally pessimistic outlook on life had a lower level of achievement in mathematics over time. Gender was not a significant factor in achievement. The implications of these findings are discussed.

  13. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomics of manual work under correct conditions. The model includes a schematic and systematic method for analyzing the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of the operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  14. Optimization and experimental validation of stiff porous phononic plates for widest complete bandgap of mixed fundamental guided wave modes

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Kersemans, Mathias; Abhary, Kazem; Uddin, Mohammad; Van Paepegem, Wim

    2018-01-01

    Phononic crystal plates (PhPs) have promising application in manipulation of guided waves for design of low-loss acoustic devices and built-in acoustic metamaterial lenses in plate structures. The prominent feature of phononic crystals is the existence of frequency bandgaps over which the waves are stopped, or are resonated and guided within appropriate defects. Therefore, maximized bandgaps of PhPs are desirable to enhance their phononic controllability. Porous PhPs produced through perforation of a uniform background plate, in which the porous interfaces act as strong reflectors of wave energy, are relatively easy to produce. However, the research in optimization of porous PhPs and experimental validation of achieved topologies has been very limited and particularly focused on bandgaps of flexural (asymmetric) wave modes. In this paper, porous PhPs are optimized through an efficient multiobjective genetic algorithm for widest complete bandgap of mixed fundamental guided wave modes (symmetric and asymmetric) and maximized stiffness. The Pareto front of optimization is analyzed and variation of bandgap efficiency with respect to stiffness is presented for various optimized topologies. Selected optimized topologies from the stiff and compliant regimes of Pareto front are manufactured by water-jetting an aluminum plate and their promising bandgap efficiency is experimentally observed. An optimized Pareto topology is also chosen and manufactured by laser cutting a Plexiglas (PMMA) plate, and its performance in self-collimation and focusing of guided waves is verified as compared to calculated dispersion properties.

  15. Thermodynamics fundamentals of energy conversion

    NASA Astrophysics Data System (ADS)

    Dan, Nicolae

    The work reported in Chapters 1-5 focuses on the fundamentals of heat transfer, fluid dynamics, thermodynamics and electrical phenomena related to the conversion of one form of energy to another. Chapter 6 is a re-examination of the fundamental heat transfer problem of how to connect a finite-size heat-generating volume to a concentrated sink. Chapter 1 extends to electrical machines the combined thermodynamics and heat transfer optimization approach that has been developed for heat engines. The conversion efficiency at maximum power is 1/2. When, as in specific applications, the operating temperature of the windings must not exceed a specified level, the power output is lower and the efficiency higher. Chapter 2 addresses the fundamental problem of determining the optimal history (regime of operation) of a battery so that the work output is maximum. Chapters 3 and 4 report the energy conversion aspects of an expanding mixture of hot particles, steam and liquid water. At the elemental level, steam annuli develop around the spherical drops as time increases. At the mixture level, the density decreases while the pressure and velocity increase. Chapter 4 describes numerically, based on the finite element method, the time evolution of the expanding mixture of hot spherical particles, steam and water. The fluid particles are moved in time in a Lagrangian manner to simulate the change of the domain configuration. Chapter 5 describes the process of thermal interaction between the molten material and water. In the second part of the chapter the model accounts for the irreversibility due to the flow of the mixture through the cracks of the mixing vessel. The approach presented in this chapter is based on exergy analysis and represents a departure from the line of inquiry that was followed in Chapters 3-4. Chapter 6 shows that the geometry of the heat flow path between a volume and one point can be optimized in two fundamentally different ways. In the "growth" method the structure is optimized starting from the smallest volume element of fixed size. In the "design" method the overall volume is fixed, and the designer works "inward" by increasing the internal complexity of the paths for heat flow.

  16. An effective model for ergonomic optimization applied to a new automotive assembly line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomics of manual work under correct conditions. The model includes a schematic and systematic method for analyzing the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of the operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  17. Criticism of generally accepted fundamentals and methodologies of traffic and transportation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerner, Boris S.

    It is explained why the set of fundamental empirical features of traffic breakdown (a transition from free flow to congested traffic) should be the empirical basis for any traffic and transportation theory that can be reliably used for control and optimization in traffic networks. It is shown that generally accepted fundamentals and methodologies of traffic and transportation theory are not consistent with this set of fundamental empirical features of traffic breakdown at a highway bottleneck. These fundamentals and methodologies include (i) the Lighthill-Whitham-Richards (LWR) theory, (ii) the General Motors (GM) model class (for example, the Herman, Gazis et al. GM model, Gipps's model, Payne's model, Newell's optimal velocity (OV) model, Wiedemann's model, the Bando et al. OV model, Treiber's IDM, and Krauß's model), (iii) the understanding of highway capacity as a particular stochastic value, and (iv) principles for traffic and transportation network optimization and control (for example, Wardrop's user equilibrium (UE) and system optimum (SO) principles). As an alternative to these generally accepted fundamentals and methodologies, we discuss three-phase traffic theory as the basis for traffic flow modeling, and briefly consider the network breakdown minimization (BM) principle for the optimization of traffic and transportation networks with road bottlenecks.

  18. Coexistence of positive and negative refractive index sensitivity in the liquid-core photonic crystal fiber based plasmonic sensor.

    PubMed

    Shuai, Binbin; Xia, Li; Liu, Deming

    2012-11-05

    We present and numerically characterize a liquid-core photonic crystal fiber based plasmonic sensor. The coupling properties and sensing performance are investigated by the finite element method. It is found that not only the plasmonic mode dispersion relation but also the fundamental mode dispersion relation is rather sensitive to the analyte refractive index (RI). Positive and negative RI sensitivity coexist in the proposed design. The sensor features a positive RI sensitivity when the increment of the SPP mode effective index is larger than that of the fundamental mode, but a negative RI sensitivity once the increment of the fundamental mode becomes larger. A maximum negative RI sensitivity of -5500 nm/RIU (Refractive Index Unit) is achieved in the sensing range of 1.50-1.53. The effects of the structural parameters on the plasmonic excitations are also studied, with a view to tuning and optimizing the resonant spectrum.

  19. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. The commonly-used material interpolation technique alone is difficult to employ because the involved layers exhibit fundamentally different acoustic behavior. Thus, an optimization formulation using a so-called unified transfer matrix is newly proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can represent those of air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method.
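
    The transfer-matrix idea, each layer contributing a 2x2 matrix that relates pressure and normal velocity across its faces, can be sketched for the simplest case. The sanity check below (a plain air gap between air half-spaces, which should transmit fully) is illustrative only and is not the paper's unified interpolated matrix:

```python
import numpy as np

def fluid_layer_matrix(freq, d, rho=1.21, c=343.0):
    # Normal-incidence transfer matrix of a fluid layer of thickness d:
    # relates (pressure, normal velocity) on its two faces.
    k = 2 * np.pi * freq / c
    Z = rho * c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission_loss(layers, freq, Z0=1.21 * 343.0):
    # Chain the per-layer matrices, then convert to a transmission
    # coefficient between two identical air half-spaces.
    M = np.eye(2, dtype=complex)
    for layer in layers:
        M = M @ layer(freq)
    t = 2.0 / (M[0, 0] + M[0, 1] / Z0 + M[1, 0] * Z0 + M[1, 1])
    return -20.0 * np.log10(abs(t))

# Sanity check: an air gap between air half-spaces gives 0 dB loss.
tl_air = transmission_loss([lambda f: fluid_layer_matrix(f, 0.05)], 1000.0)
```

    A layered design is then just a longer list of such matrices, which is what makes interpolating matrix elements between air, perforated and solid behavior a natural design parameterization.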

  20. The Optimal Size for Discussion Groups. Exchange Bibliography No. 378.

    ERIC Educational Resources Information Center

    Petty, Robert M.

    Many variables relate to the successful functioning of groups, but one that is fundamental is the size of the group. Part 1 of this bibliography includes a selection of studies from small-group research in experimental social psychology. Part 2 of this report represents an attempt at a rigorous review of the feelings of clinicians and counselors…

  1. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
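
    The stabilizing role of the L2 penalty can be illustrated on the simpler minimum-variance problem (the paper works with expected shortfall and its link to support vector regression; the closed-form ridge solution below is a sketch of the general mechanism, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)
# A covariance matrix estimated from few observations is noisy;
# unregularized minimum-variance weights then overfit the sample.
T, N = 60, 20
returns = rng.normal(0.0, 0.01, size=(T, N))
sigma = np.cov(returns, rowvar=False)

def min_var_weights(cov, ridge=0.0):
    # Minimize w' C w + ridge * ||w||^2  subject to sum(w) = 1.
    # Closed form: w is proportional to (C + ridge*I)^{-1} 1,
    # rescaled to satisfy the budget constraint.
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov + ridge * np.eye(cov.shape[0]), ones)
    return w / w.sum()

w_raw = min_var_weights(sigma)
w_reg = min_var_weights(sigma, ridge=0.05)
# The ridge term pulls the weights toward the diversified 1/N
# portfolio, shrinking the extreme positions the noisy estimate
# would otherwise produce.
```

    The ridge strength plays exactly the role of the diversification "pressure" discussed above: larger values trade sample-optimal risk for stability of the solution.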

  2. The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
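
    Because the Clohessy-Wiltshire equations are linear with constant coefficients, their solution is expressible in elementary functions, which is what makes the primer-vector analysis tractable. A sketch of the standard in-plane closed-form solution and the well-known drift-free condition vy0 = -2*n*x0 (the mean motion value below is illustrative):

```python
import math

def cw_position(t, n, x0, y0, vx0, vy0):
    # Closed-form in-plane solution of the Clohessy-Wiltshire (Hill)
    # equations; n is the target's mean motion [rad/s], x radial,
    # y along-track.
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t)) * x0 + y0 - (2 / n) * (1 - c) * vx0 \
        + (1 / n) * (4 * s - 3 * n * t) * vy0
    return x, y

# Drift-free initial condition: vy0 = -2 n x0 cancels the secular
# along-track term, giving a closed (periodic) relative orbit.
n = 0.0011  # mean motion for a roughly 95-minute orbit (illustrative)
x0, y0 = 100.0, 0.0
x, y = cw_position(2 * math.pi / n, n, x0, y0, 0.0, -2 * n * x0)
```

    Propagating over one full period returns the chaser to its initial relative position, confirming that the secular term has been removed.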

  3. Remediation Optimization: Definition, Scope and Approach

    EPA Pesticide Factsheets

    This document provides a general definition, scope and approach for conducting optimization reviews within the Superfund Program and includes the fundamental principles and themes common to optimization.

  4. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution that predicts various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, as it is an efficient and compact direct search method and does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It is demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
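
    Using Nelder-Mead for such a one-parameter fit is straightforward with SciPy; the objective below is a smooth stand-in for the variational merit function of the modal field, which the paper derives:

```python
from scipy.optimize import minimize

# Stand-in objective for the core parameter U: in the paper the merit
# function comes from a variational expression for the fundamental
# modal field; here a simple smooth function is used for illustration.
def merit(u):
    return (u[0] - 2.0) ** 2 + 1.0

# Nelder-Mead is a derivative-free direct search, which is the point:
# the objective may be uncertain, noisy or even discontinuous in U.
res = minimize(merit, x0=[0.5], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
```

    The same call works unchanged if `merit` is replaced by a noisy or non-differentiable functional, since the simplex only compares function values.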

  5. Multi-Mode Cavity Accelerator Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yong; Hirshfield, Jay Leonard

    2016-11-10

    This project aimed to develop a prototype for a novel accelerator structure comprising coupled cavities that are tuned to support modes with harmonically-related eigenfrequencies, with the goal of reaching an acceleration gradient >200 MeV/m and a breakdown rate <10^(-7)/pulse/meter. Phase I involved computations, design, and preliminary engineering of a prototype multi-harmonic cavity accelerator structure, plus tests of a bimodal cavity. A computational procedure was used to design an optimized profile for a bimodal cavity with high shunt impedance and low surface fields, to maximize the reduction in temperature rise ΔT. This cavity supports the TM010 mode and its 2nd-harmonic TM011 mode. Its fundamental frequency is 12 GHz, to benchmark against the empirical criteria proposed within the worldwide High Gradient collaboration for X-band copper structures; namely, a maximum surface electric field E_sur,max < 260 MV/m and pulsed surface heating ΔT_max < 56 K. With optimized geometry and optimized amplitude and relative phase of the two modes, reductions are found in surface pulsed heating, modified Poynting vector, and total RF power, as compared with operation at the same acceleration gradient using only the fundamental mode.

  6. Power system modeling and optimization methods vis-a-vis integrated resource planning (IRP)

    NASA Astrophysics Data System (ADS)

    Arsali, Mohammad H.

    1998-12-01

    The ongoing restructuring of the power industry is changing the fundamental nature of the retail electricity business. As a result, the so-called Integrated Resource Planning (IRP) strategies implemented by electric utilities are also undergoing modification. Such modifications evolve from the imminent considerations of minimizing revenue requirements and maximizing electrical system reliability vis-a-vis capacity additions (viewed as potential investments). IRP modifications also provide service-design bases to meet customer needs profitably. The purpose of this research, as presented in this dissertation, is to propose procedures for optimal IRP intended to expand the generation facilities of a power system over an extended period of time. Relevant topics addressed in this research towards IRP optimization are as follows: (1) historical perspective and evolutionary aspects of power system production-costing models and optimization techniques; (2) a survey of major U.S. electric utilities adopting IRP under a changing socioeconomic environment; (3) a new technique, designated the Segmentation Method, for production costing via IRP optimization; (4) construction of a fuzzy relational database of a typical electric power utility system for IRP purposes; (5) a genetic-algorithm-based approach for IRP optimization using the fuzzy relational database.
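
    A genetic algorithm over candidate capacity additions can be sketched with a toy fitness of capital cost plus a penalty for unmet demand. The unit data, penalty weight and GA settings below are all illustrative, not drawn from the dissertation:

```python
import random

random.seed(42)

# Toy capacity-expansion instance: choose a build plan (bitstring over
# five candidate units) minimizing capital cost + unmet-demand penalty.
COST = [5, 7, 3, 8, 4]       # capital cost per candidate unit (illustrative)
CAP = [30, 45, 20, 50, 25]   # capacity per candidate unit (illustrative)
DEMAND = 90

def fitness(plan):
    capacity = sum(c for c, b in zip(CAP, plan) if b)
    capital = sum(c for c, b in zip(COST, plan) if b)
    shortfall = max(0, DEMAND - capacity)
    return -(capital + 100 * shortfall)  # higher is better

def evolve(pop_size=30, generations=60, mut=0.1):
    # Elitist GA: keep the top half, refill with one-point crossover
    # children subject to per-bit mutation.
    pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 5)
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    In a realistic IRP setting the fitness would come from a production-costing model (here it is a closed-form toy), but the encode-evaluate-select loop is the same.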

  7. A scaling law derived from optimal dendritic wiring

    PubMed Central

    Cuntz, Hermann; Mathy, Alexandre; Häusser, Michael

    2012-01-01

    The wide diversity of dendritic trees is one of the most striking features of neural circuits. Here we develop a general quantitative theory relating the total length of dendritic wiring to the number of branch points and synapses. We show that optimal wiring predicts a 2/3 power law between these measures. We demonstrate that the theory is consistent with data from a wide variety of neurons across many different species and helps define the computational compartments in dendritic trees. Our results imply fundamentally distinct design principles for dendritic arbors compared with vascular, bronchial, and botanical trees. PMID:22715290
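
    A 2/3 power law of this kind is easy to test on data with a straight-line fit in log-log coordinates; the points below are synthetic values generated exactly on the law (with an arbitrary prefactor), not the neuron data of the paper:

```python
import numpy as np

# Synthetic check of a power law L = c * m^(2/3), where m stands for
# the branch-point/synapse count measure and c = 3.7 is arbitrary.
m = np.array([10, 50, 100, 500, 1000, 5000], dtype=float)
c = 3.7
length = c * m ** (2.0 / 3.0)

# Fit log L = slope * log m + intercept; the slope recovers the
# exponent and exp(intercept) recovers the prefactor.
slope, intercept = np.polyfit(np.log(m), np.log(length), 1)
```

    On real data the fitted slope would scatter around 2/3, and the quality of the straight-line fit in log-log space is the empirical test of the scaling law.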

  8. Governing Laws of Complex System Predictability under Co-evolving Uncertainty Sources: Theory and Nonlinear Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Perdigão, R. A. P.

    2017-12-01

    Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.

  9. Optimal joint measurements of complementary observables by a single trapped ion

    NASA Astrophysics Data System (ADS)

    Xiong, T. P.; Yan, L. L.; Ma, Z. H.; Zhou, F.; Chen, L.; Yang, W. L.; Feng, M.; Busch, P.

    2017-06-01

    The uncertainty relations, pioneered by Werner Heisenberg nearly 90 years ago, set a fundamental limitation on the joint measurability of complementary observables. This limitation has long been a subject of debate, which has been reignited recently due to new proposed forms of measurement uncertainty relations. The present work is associated with a new error trade-off relation for compatible observables approximating two incompatible observables, in keeping with the spirit of Heisenberg’s original ideas of 1927. We report the first direct test and confirmation of the tight bounds prescribed by such an error trade-off relation, based on an experimental realisation of optimal joint measurements of complementary observables using a single ultracold ⁴⁰Ca⁺ ion trapped in a harmonic potential. Our work provides a prototypical determination of ultimate joint measurement error bounds with potential applications in quantum information science for high-precision measurement and information security.

  10. The relative entropy is fundamental to adaptive resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreis, Karsten; Potestio, Raffaello

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results do not only shed light on the what and how of adaptive resolution techniques but will also help setting up such simulations in an optimal manner.

  11. The relative entropy is fundamental to adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results do not only shed light on the what and how of adaptive resolution techniques but will also help setting up such simulations in an optimal manner.
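
    The role of the relative entropy can be illustrated with a toy coarse-graining problem: fit a single Gaussian (the "CG model") to a bimodal reference (the "atomistic" distribution) by minimizing S_rel = Σ p ln(p/q). The distributions below are hypothetical stand-ins, not the paper's systems:

    ```python
    import numpy as np

    x = np.linspace(-5.0, 5.0, 2001)
    dx = x[1] - x[0]

    def gaussian(x, sigma):
        return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

    # "Atomistic" reference: a bimodal distribution with modes at +/-1, width 0.6.
    p = 0.5 * gaussian(x - 1.0, 0.6) + 0.5 * gaussian(x + 1.0, 0.6)
    p /= p.sum() * dx

    def relative_entropy(sigma):
        """S_rel = sum p*log(p/q)*dx: penalizes CG models that miss reference weight."""
        q = gaussian(x, sigma)
        q /= q.sum() * dx
        return np.sum(p * np.log(p / q)) * dx

    # Pick the CG width that minimizes the relative entropy.
    sigmas = np.linspace(0.5, 3.0, 251)
    s_best = sigmas[np.argmin([relative_entropy(s) for s in sigmas])]
    print(s_best)
    ```

    Minimizing KL(p‖q) over a Gaussian family reproduces moment matching, so the selected width is close to sqrt(1 + 0.6²) ≈ 1.17, the standard deviation of the reference.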

  12. Experimental and Modeling Studies of the Combustion Characteristics of Conventional and Alternative Jet Fuels. Final Report

    NASA Technical Reports Server (NTRS)

    Meeks, Ellen; Naik, Chitral V.; Puduppakkam, Karthik V.; Modak, Abhijit; Egolfopoulos, Fokion N.; Tsotsis, Theo; Westbrook, Charles K.

    2011-01-01

    The objectives of this project have been to develop a comprehensive set of fundamental data regarding the combustion behavior of jet fuels and appropriately associated model fuels. Based on the fundamental study results, an auxiliary objective was to identify differentiating characteristics of molecular fuel components that can be used to explain different fuel behavior and that may ultimately be used in the planning and design of optimal fuel-production processes. The fuels studied in this project were Fischer-Tropsch (F-T) fuels and biomass-derived jet fuels that meet certain specifications of currently used jet propulsion applications. Prior to this project, there were no systematic experimental flame data available for such fuels. One of the key goals has been to generate such data, and to use this data in developing and verifying effective kinetic models. The models have then been reduced through automated means to enable multidimensional simulation of the combustion characteristics of such fuels in real combustors. Such reliable kinetic models, validated against fundamental data derived from laminar flames using idealized flow models, are key to the development and design of optimal combustors and fuels. The models provide direct information about the relative contribution of different molecular constituents to the fuel performance and can be used to assess both combustion and emissions characteristics.

  13. Optimization of the magnetic dynamo.

    PubMed

    Willis, Ashley P

    2012-12-21

    In stars and planets, magnetic fields are believed to originate from the motion of electrically conducting fluids in their interior, through a process known as the dynamo mechanism. In this Letter, an optimization procedure is used to simultaneously address two fundamental questions of dynamo theory: "Which velocity field leads to the most magnetic energy growth?" and "How large does the velocity need to be relative to magnetic diffusion?" In general, this requires optimization over the full space of continuous solenoidal velocity fields possible within the geometry. Here the case of a periodic box is considered. Measuring the strength of the flow with the root-mean-square amplitude, an optimal velocity field is shown to exist, but without limitation on the strain rate, optimization is prone to divergence. Measuring the flow in terms of its associated dissipation leads to the identification of a single optimum at the critical magnetic Reynolds number necessary for a dynamo. This magnetic Reynolds number is found to be only 15% higher than that necessary for transient growth of the magnetic field.

  14. Methods, systems and apparatus for optimization of third harmonic current injection in a multi-phase machine

    DOEpatents

    Gallegos-Lopez, Gabriel

    2012-10-02

    Methods, systems and apparatus are provided for increasing voltage utilization in a five-phase vector controlled machine drive system that employs third harmonic current injection to increase torque and power output by a five-phase machine. To do so, a fundamental current angle of a fundamental current vector is optimized for each torque-speed operating point of the five-phase machine.
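
    The voltage-utilization benefit of harmonic injection can be seen in a simple scalar sketch: adding a third-harmonic component flattens the peak of a sinusoid, so a larger fundamental fits under the same DC-bus limit. This classic single-phase illustration (with a hypothetical amplitude sweep) is not the patented five-phase current-angle optimization itself:

    ```python
    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 4001)

    def peak(k):
        """Peak of a unit-amplitude fundamental plus a k-scaled third harmonic."""
        return np.max(np.abs(np.sin(theta) + k * np.sin(3.0 * theta)))

    # Sweep the injection ratio and find the flattest waveform.
    ks = np.linspace(0.05, 0.30, 251)
    peaks = np.array([peak(k) for k in ks])
    k_best = ks[np.argmin(peaks)]
    gain = 1.0 / peaks.min()  # extra fundamental that fits under the same bus voltage
    print(k_best, gain)       # k near 1/6 yields roughly 15% more usable fundamental
    ```

    The minimum peak occurs at an injection ratio of 1/6, where the waveform peak drops to sqrt(3)/2, the same mechanism that motivates harmonic injection in multi-phase drives.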

  15. Fundamental limits of repeaterless quantum communications

    PubMed Central

    Pirandola, Stefano; Laurenza, Riccardo; Ottaviani, Carlo; Banchi, Leonardo

    2017-01-01

    Quantum communications promises reliable transmission of quantum information, efficient distribution of entanglement and generation of completely secure keys. For all these tasks, we need to determine the optimal point-to-point rates that are achievable by two remote parties at the ends of a quantum channel, without restrictions on their local operations and classical communication, which can be unlimited and two-way. These two-way assisted capacities represent the ultimate rates that are reachable without quantum repeaters. Here, by constructing an upper bound based on the relative entropy of entanglement and devising a dimension-independent technique dubbed ‘teleportation stretching', we establish these capacities for many fundamental channels, namely bosonic lossy channels, quantum-limited amplifiers, dephasing and erasure channels in arbitrary dimension. In particular, we exactly determine the fundamental rate-loss tradeoff affecting any protocol of quantum key distribution. Our findings set the limits of point-to-point quantum communications and provide precise and general benchmarks for quantum repeaters. PMID:28443624

  16. Fundamental limits of repeaterless quantum communications.

    PubMed

    Pirandola, Stefano; Laurenza, Riccardo; Ottaviani, Carlo; Banchi, Leonardo

    2017-04-26

    Quantum communications promises reliable transmission of quantum information, efficient distribution of entanglement and generation of completely secure keys. For all these tasks, we need to determine the optimal point-to-point rates that are achievable by two remote parties at the ends of a quantum channel, without restrictions on their local operations and classical communication, which can be unlimited and two-way. These two-way assisted capacities represent the ultimate rates that are reachable without quantum repeaters. Here, by constructing an upper bound based on the relative entropy of entanglement and devising a dimension-independent technique dubbed 'teleportation stretching', we establish these capacities for many fundamental channels, namely bosonic lossy channels, quantum-limited amplifiers, dephasing and erasure channels in arbitrary dimension. In particular, we exactly determine the fundamental rate-loss tradeoff affecting any protocol of quantum key distribution. Our findings set the limits of point-to-point quantum communications and provide precise and general benchmarks for quantum repeaters.

  17. Motor control for a brushless DC motor

    NASA Technical Reports Server (NTRS)

    Peterson, William J. (Inventor); Faulkner, Dennis T. (Inventor)

    1985-01-01

    This invention relates to a motor control system for a brushless DC motor having an inverter responsively coupled to the motor control system and in power transmitting relationship to the motor. The motor control system includes a motor rotor speed detecting unit that provides a pulsed waveform signal proportional to rotor speed. This pulsed waveform signal is delivered to the inverter to thereby cause an inverter fundamental current waveform output to the motor to be switched at a rate proportional to said rotor speed. In addition, the fundamental current waveform is also pulse width modulated at a rate proportional to the rotor speed. A fundamental current waveform phase advance circuit is controllingly coupled to the inverter. The phase advance circuit is coupled to receive the pulsed waveform signal from the motor rotor speed detecting unit and phase advance the pulsed waveform signal as a predetermined function of motor speed. This advances the fundamental current waveform, compensating for the waveform lag due to motor winding reactance, and allows the motor to operate at speeds above its rating while providing optimal torque and therefore increased efficiency.

  18. Skill-Based and Planned Active Play Versus Free-Play Effects on Fundamental Movement Skills in Preschoolers.

    PubMed

    Roach, Lindsay; Keats, Melanie

    2018-01-01

    Fundamental movement skill interventions are important for promoting physical activity, but the optimal intervention model for preschool children remains unclear. We compared two 8-week interventions, a structured skill-station and a planned active play approach, to a free-play control condition on pre- and postintervention fundamental movement skills. We also collected data regarding program attendance and perceived enjoyment. We found a significant interaction effect between intervention type and time. A Tukey honest significant difference analysis supported a positive intervention effect showing a significant difference between both interventions and the free-play control condition. There was a significant between-group difference in group attendance such that mean attendance was higher for both the free-play and planned active play groups relative to the structured skill-based approach. There were no differences in attendance between free-play and planned active play groups, and there were no differences in enjoyment ratings between the two intervention groups. In sum, while both interventions led to improved fundamental movement skills, the active play approach offered several logistical advantages. Although these findings should be replicated, they can guide feasible and sustainable fundamental movement skill programs within day care settings.

  19. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
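
    For a concrete sense of FIM-based design criteria, the sketch below compares a D-optimal choice (maximize det FIM) with an E-optimal choice (maximize the smallest eigenvalue of the FIM) of three sampling times for a toy exponential-decay model; the model and nominal parameter values are hypothetical, not those of the paper's examples:

    ```python
    import numpy as np
    from itertools import combinations

    a, b = 2.0, 0.5  # nominal parameter values for the decay model y = a*exp(-b*t)

    def fim(times):
        """Fisher information matrix for y = a*exp(-b*t) with unit noise variance."""
        F = np.zeros((2, 2))
        for t in times:
            s = np.array([np.exp(-b * t), -a * t * np.exp(-b * t)])  # [dy/da, dy/db]
            F += np.outer(s, s)
        return F

    candidates = np.linspace(0.1, 8.0, 20)
    designs = list(combinations(candidates, 3))  # all ways to choose 3 sampling times
    d_best = max(designs, key=lambda d: np.linalg.det(fim(d)))        # D-optimal
    e_best = max(designs, key=lambda d: np.linalg.eigvalsh(fim(d))[0])  # E-optimal
    print("D-optimal times:", d_best)
    print("E-optimal times:", e_best)
    ```

    The two criteria generally select different sampling schedules, which is exactly the kind of comparison (together with resulting standard errors) that the paper formalizes within its Prohorov metric framework.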

  20. Optimized two- and three-colour laser pulses for the intense terahertz wave generation

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Guo-Li; Zhou, Xiao-Xin

    2016-11-01

    Based on the photocurrent model, we perform a theoretical study on the optimization of terahertz (THz) wave emission from argon gas irradiated by the two- and three-colour laser fields. To obtain stronger THz radiation for the given conditions, a genetic algorithm method is applied to search for the optimum laser parameters. For the two-colour field, our optimizations reveal two types of optimal scheme, and each one dominates the THz generation in different regions of intensity ratio for a given total laser intensity. One scheme is the combination of a fundamental laser pulse and its second harmonic, while the other is the fundamental pulse with its fourth harmonic. For each scheme, the optimal intensity ratio and phase delay are obtained. For the three-colour case, our optimization shows that the excellent waveform for the strongest THz radiation is composed of a fundamental laser pulse, and its second, third harmonics, with appropriate intensity ratio and carrier-envelope phase. Such a 3-colour field can generate strong THz radiation comparable with a 10-colour sawtooth wave [Martínez et al., Phys. Rev. Lett. 114, 183901 (2015)]. The physical mechanisms for the enhancement of THz wave emission in gases are also discussed in detail. Our results give helpful guidance for intense THz generation with tabletop femtosecond laser device in experiment.

  1. Optimal observables for multiparameter seismic tomography

    NASA Astrophysics Data System (ADS)

    Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner

    2014-08-01

    We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.
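
    The weighting idea can be sketched in miniature: given hypothetical sensitivity values for a few fundamental observables, weights that null the nuisance-parameter sensitivities while retaining target sensitivity follow from a simple projection (the paper uses a global search over weights instead; all numbers below are invented for illustration):

    ```python
    import numpy as np

    # Hypothetical scalar sensitivities: rows = fundamental observables,
    # columns = parameter classes (say density, S velocity, P velocity).
    S = np.array([
        [0.9, 0.5, 0.3],
        [0.7, 0.8, 0.2],
        [0.4, 0.6, 0.7],
        [0.2, 0.3, 0.9],
    ])

    target, nuisance = 0, [1, 2]

    # Project the target-sensitivity column onto the null space of the nuisance
    # columns: the weighted observable then has zero nuisance sensitivity.
    N = S[:, nuisance]                          # nuisance sensitivities (4x2)
    P = np.eye(len(S)) - N @ np.linalg.pinv(N)  # projector onto null(N^T)
    w = P @ S[:, target]
    w /= np.linalg.norm(w)

    combined = w @ S  # sensitivity of the optimally weighted observable
    print(combined)   # nuisance entries are numerically zero
    ```

    The combined observable keeps an order-one sensitivity to the target class while the nuisance entries vanish, mirroring how the density observable in the paper suppresses S- and P-velocity sensitivity.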

  2. Financial fluctuations anchored to economic fundamentals: A mesoscopic network approach.

    PubMed

    Sharma, Kiran; Gopalakrishnan, Balagopal; Chakrabarti, Anindya S; Chakraborti, Anirban

    2017-08-14

    We demonstrate the existence of an empirical linkage between nominal financial networks and the underlying economic fundamentals, across countries. We construct the nominal return correlation networks from daily data to encapsulate sector-level dynamics and infer the relative importance of the sectors in the nominal network through measures of centrality and clustering algorithms. Eigenvector centrality robustly identifies the backbone of the minimum spanning tree defined on the return networks as well as the primary cluster in the multidimensional scaling map. We show that the sectors that are relatively large in size, defined with three metrics, viz., market capitalization, revenue and number of employees, constitute the core of the return networks, whereas the periphery is mostly populated by relatively smaller sectors. Therefore, sector-level nominal return dynamics are anchored to the real size effect, which ultimately shapes the optimal portfolios for risk management. Our results are reasonably robust across 27 countries of varying degrees of prosperity and across periods of market turbulence (2008-09) as well as periods of relative calmness (2012-13 and 2015-16).
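
    The pipeline of mapping correlations to distances, extracting a minimum spanning tree, and ranking sectors by eigenvector centrality can be sketched on a hypothetical 4-sector correlation matrix (not the paper's data):

    ```python
    import numpy as np

    # Hypothetical sector return-correlation matrix (symmetric, unit diagonal).
    C = np.array([
        [1.0, 0.8, 0.7, 0.4],
        [0.8, 1.0, 0.5, 0.3],
        [0.7, 0.5, 1.0, 0.2],
        [0.4, 0.3, 0.2, 1.0],
    ])

    # Standard mapping from correlation to a metric distance.
    D = np.sqrt(2.0 * (1.0 - C))

    # Prim's algorithm for the minimum spanning tree over D.
    n = len(D)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: D[e])
        edges.append((i, j))
        in_tree.add(j)

    # Eigenvector centrality: leading eigenvector of the correlation matrix.
    vals, vecs = np.linalg.eigh(C)
    centrality = np.abs(vecs[:, -1])
    hub = int(np.argmax(centrality))
    print(edges, hub)  # the most-connected sector sits at the MST backbone
    ```

    Here sector 0, the most strongly correlated with the others, emerges both as the MST hub and as the most central node, the kind of core/periphery structure the paper ties to sector size.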

  3. Structural optimization of large structural systems by optimality criteria methods

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo

    1992-01-01

    The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.

  4. Basic Knowledge for Market Principle: Approaches to the Price Coordination Mechanism by Using Optimization Theory and Algorithm

    NASA Astrophysics Data System (ADS)

    Aiyoshi, Eitaro; Masuda, Kazuaki

    On the basis of market fundamentalism, new types of social systems with the market mechanism such as electricity trading markets and carbon dioxide (CO2) emission trading markets have been developed. However, there are few textbooks in science and technology which present the explanation that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge on optimization theory and algorithms related to economics and to utilize them for designing the mechanism of more complicated markets.
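
    The interpretation of the Lagrange multiplier as a market price can be made concrete with a two-producer toy market cleared by the dual steepest-ascent iteration the paper describes (cost coefficients and demand are hypothetical):

    ```python
    # Two producers with quadratic costs c_i(q) = a_i * q^2 must supply a fixed
    # demand. The Lagrange multiplier of the balance constraint acts as the
    # market price: the dual (steepest-ascent) iteration raises the price while
    # demand is unmet and lowers it when supply overshoots.
    a = [1.0, 2.0]   # hypothetical cost coefficients
    demand = 6.0
    price, step = 0.0, 0.5

    for _ in range(200):
        # Each producer maximizes profit price*q - a*q^2, giving q = price/(2a).
        supply = [price / (2.0 * ai) for ai in a]
        price += step * (demand - sum(supply))  # price rises if demand is unmet

    # Analytic clearing price: demand = lam/(2*a1) + lam/(2*a2) = 0.75*lam,
    # so lam = 8 and the producers supply 4 and 2 units.
    print(price, supply)
    ```

    The iteration converges to the clearing price 8, at which the producers' individually optimal outputs exactly meet demand, which is the market-pricing mechanism the tutorial derives from the dual problem.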

  5. Inbreeding parents should invest more resources in fewer offspring.

    PubMed

    Duthie, A Bradley; Lee, Aline M; Reid, Jane M

    2016-11-30

    Inbreeding increases parent-offspring relatedness and commonly reduces offspring viability, shaping selection on reproductive interactions involving relatives and associated parental investment (PI). Nevertheless, theories predicting selection for inbreeding versus inbreeding avoidance and selection for optimal PI have only been considered separately, precluding prediction of optimal PI and associated reproductive strategy given inbreeding. We unify inbreeding and PI theory, demonstrating that optimal PI increases when a female's inbreeding decreases the viability of her offspring. Inbreeding females should therefore produce fewer offspring due to the fundamental trade-off between offspring number and PI. Accordingly, selection for inbreeding versus inbreeding avoidance changes when females can adjust PI with the degree that they inbreed. By contrast, optimal PI does not depend on whether a focal female is herself inbred. However, inbreeding causes optimal PI to increase given strict monogamy and associated biparental investment compared with female-only investment. Our model implies that understanding evolutionary dynamics of inbreeding strategy, inbreeding depression, and PI requires joint consideration of the expression of each in relation to the other. Overall, we demonstrate that existing PI and inbreeding theories represent special cases of a more general theory, implying that intrinsic links between inbreeding and PI affect evolution of behaviour and intrafamilial conflict. © 2016 The Authors.
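
    The offspring number versus investment trade-off can be sketched in a Smith-Fretwell-style toy calculation. The functional forms below are hypothetical stand-ins, not the paper's model: inbreeding depression d is assumed to raise the investment an offspring needs to survive, and fitness is offspring number times per-offspring viability.

    ```python
    import numpy as np

    R = 10.0  # total parental resources (hypothetical)

    def fitness(I, d):
        """Offspring number (R/I) times viability; d shifts the survival threshold."""
        viability = 1.0 - np.exp(-2.0 * (I - 0.5 - d))
        return (R / I) * np.maximum(viability, 0.0)

    I_grid = np.linspace(0.51, 5.0, 2000)
    opt_I = {d: I_grid[np.argmax(fitness(I_grid, d))] for d in (0.0, 0.3)}
    print(opt_I)  # optimal per-offspring investment increases with d
    ```

    Under these assumptions the optimal per-offspring investment rises with the severity of inbreeding depression, so the parent produces fewer, better-provisioned offspring, in line with the paper's central prediction.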

  6. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters are estimated using particle swarm optimization (PSO) to find the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts, which include extraction and fine matching of interest points in the images, establishment of a cost function based on Kruppa equations, and optimization by PSO using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method of maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To overcome the issue of optimization pushed to a local optimum, LiDAR data was used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
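
    As a sketch of the optimizer (not of the paper's Kruppa-equation cost function), a minimal particle swarm over box bounds, applied here to a toy quadratic stand-in with a known minimum:

    ```python
    import numpy as np

    def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm minimizer over box bounds (a sketch only)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        dim = len(lo)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([cost(p) for p in x])
        g = pbest[np.argmin(pval)].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # Velocity blends inertia, personal best, and global best attraction.
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([cost(p) for p in x])
            better = fx < pval
            pbest[better], pval[better] = x[better], fx[better]
            g = pbest[np.argmin(pval)].copy()
        return g, pval.min()

    # Toy stand-in for a calibration cost with a known minimum at (1, 2, 3, 4).
    target = np.array([1.0, 2.0, 3.0, 4.0])
    best, best_val = pso(lambda p: np.sum((p - target) ** 2), [(-10, 10)] * 4)
    print(best, best_val)
    ```

    The paper's LiDAR-derived initialization corresponds to shrinking the `bounds` box around a physically plausible focal length, which reduces the risk of the swarm settling in a local optimum.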

  7. Automotive Exterior Noise Optimization Using Grey Relational Analysis Coupled with Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shuming; Wang, Dengfeng; Liu, Bo

    This paper investigates the optimal design of sound-package thickness for a passenger automobile. The major performance characteristics selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package; the corresponding design parameters are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. Because the process involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as a performance index, is employed to determine the optimal combination of layer thicknesses for the designed sound package. Additionally, in order to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to represent their relative importance properly and objectively. The confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of thicknesses for each layer of the sound-package material. The presented method can therefore be an effective tool to improve vehicle exterior noise and lower the weight of the sound package. It will also be helpful for other applications in the automotive industry, such as the First Automobile Works in China, Changan Automobile in China, etc.
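
    The grey relational grade computation can be sketched on hypothetical responses (not the paper's measured data), using the standard smaller-the-better normalization and a distinguishing coefficient of 0.5:

    ```python
    import numpy as np

    # Hypothetical responses for four thickness combinations:
    # columns = [exterior SPL (dB), sound-package weight (kg)], both smaller-the-better.
    y = np.array([
        [72.0, 4.1],
        [70.5, 4.6],
        [69.8, 5.2],
        [71.2, 3.9],
    ])

    # Smaller-the-better normalization to [0, 1].
    norm = (y.max(axis=0) - y) / (y.max(axis=0) - y.min(axis=0))

    # Grey relational coefficients with distinguishing coefficient zeta = 0.5.
    delta = 1.0 - norm
    zeta = 0.5
    xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # Equal weights here; the paper derives the weights from PCA instead.
    grade = xi.mean(axis=1)
    best = int(np.argmax(grade))
    print(grade, best)
    ```

    The combination with the highest grade is the multi-response optimum; replacing the equal weights in the mean with PCA-derived weights yields the coupled method the paper proposes.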

  8. The implications of fundamental cause theory for priority setting.

    PubMed

    Goldberg, Daniel S

    2014-10-01

    Application of fundamental cause theory to Powers and Faden's model of social justice highlights the ethical superiority of upstream public health interventions. In this article, I assess the ramifications of fundamental cause theory specifically in context of public health priority setting. Ethically optimal public health policy simultaneously maximizes overall population health and compresses health inequalities. The fundamental cause theory is an important framework in helping to identify which categories of public health interventions are most likely to advance these twin goals.

  9. Emotion: The Self-regulatory Sense

    PubMed Central

    2014-01-01

    While emotion is a central component of human health and well-being, traditional approaches to understanding its biological function have been wanting. A dynamic systems model, however, broadly redefines and recasts emotion as a primary sensory system—perhaps the first sensory system to have emerged, serving the ancient autopoietic function of “self-regulation.” Drawing upon molecular biology and revelations from the field of epigenetics, the model suggests that human emotional perceptions provide an ongoing stream of “self-relevant” sensory information concerning optimally adaptive states between the organism and its immediate environment, along with coupled behavioral corrections that honor a universal self-regulatory logic, one still encoded within cellular signaling and immune functions. Exemplified by the fundamental molecular circuitry of sensorimotor control in the E. coli bacterium, the model suggests that the hedonic (affective) categories emerge directly from positive and negative feedback processes, their good/bad binary appraisals relating to dual self-regulatory behavioral regimes—evolutionary purposes, through which organisms actively participate in natural selection, and through which humans can interpret optimal or deficit states of balanced being and becoming. The self-regulatory sensory paradigm transcends anthropomorphism, unites divergent theoretical perspectives and isolated bodies of literature, while challenging time-honored assumptions. While suppressive regulatory strategies abound, it suggests that emotions are better understood as regulating us, providing a service crucial to all semantic language, learning systems, evaluative decision-making, and fundamental to optimal physical, mental, and social health. PMID:24808986

  10. Effective Teaching of Economics: A Constrained Optimization Problem?

    ERIC Educational Resources Information Center

    Hultberg, Patrik T.; Calonge, David Santandreu

    2017-01-01

    One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…

  11. Considerations on the Optimal and Efficient Processing of Information-Bearing Signals

    ERIC Educational Resources Information Center

    Harms, Herbert Andrew

    2013-01-01

    Noise is a fundamental hurdle that impedes the processing of information-bearing signals, specifically the extraction of salient information. Processing that is both optimal and efficient is desired; optimality ensures the extracted information has the highest fidelity allowed by the noise, while efficiency ensures limited resource usage. Optimal…

  12. Routh's algorithm - A centennial survey

    NASA Technical Reports Server (NTRS)

    Barnett, S.; Siljak, D. D.

    1977-01-01

One hundred years have passed since the publication of Routh's fundamental work on determining the stability of constant linear systems. The paper presents an outline of the algorithm and considers aspects such as the distribution of zeros, as well as applications relating to the greatest common divisor, the abscissa of stability, continued fractions, canonical forms, the nonnegativity of polynomials and polynomial matrices, the absolute stability, optimality, and passivity of dynamic systems, and the stability of two-dimensional circuits.
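The algorithm itself is compact. A minimal sketch (covering only the non-degenerate case, with hypothetical example polynomials) counts right-half-plane roots from sign changes in the first column of Routh's array:

```python
def routh_rhp_roots(coeffs):
    """Count right-half-plane roots of a real polynomial from the sign
    changes in the first column of Routh's array.

    coeffs: descending-order coefficients [a_n, ..., a_0]. Assumes the
    non-degenerate case (no zero appears in the first column).
    """
    n = len(coeffs)                          # degree + 1 = number of rows
    rows = [list(map(float, coeffs[0::2])),
            list(map(float, coeffs[1::2]))]
    while len(rows[1]) < len(rows[0]):       # pad the second row for even degrees
        rows[1].append(0.0)
    for i in range(2, n):
        above, two_above = rows[i - 1], rows[i - 2]
        # each new entry is a 2x2 determinant divided by the pivot above
        row = [(above[0] * two_above[j + 1] - two_above[0] * above[j + 1])
               / above[0]
               for j in range(len(above) - 1)]
        row.append(0.0)
        rows.append(row)
    first_col = [r[0] for r in rows]
    return sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)
```

The sign-change count equals the number of roots with positive real part, so a stable polynomial returns zero.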

  13. Medicinal Chemical Properties of Successful Central Nervous System Drugs

    PubMed Central

    Pajouhesh, Hassan; Lenz, George R.

    2005-01-01

Summary: Fundamental physicochemical features of CNS drugs are related to their ability to penetrate the blood-brain barrier and exhibit CNS activity. Factors relevant to the success of CNS drugs are reviewed. CNS drugs show values of molecular weight, lipophilicity, and hydrogen-bond donor and acceptor counts that in general span a smaller range than general therapeutics. Pharmacokinetic properties can be manipulated by the medicinal chemist to a significant extent. The solubility, permeability, metabolic stability, protein binding, and human ether-a-go-go-related gene (hERG) inhibition of CNS compounds need to be optimized simultaneously with potency, selectivity, and other biological parameters. The balance between optimizing the physicochemical and pharmacokinetic properties to make the best compromises in properties is critical for designing new drugs likely to penetrate the blood-brain barrier and affect relevant biological systems. This review is intended as a guide to designing CNS therapeutic agents with better drug-like properties. PMID:16489364

  14. Volatile decision dynamics: experiments, stochastic description, intermittency control and traffic optimization

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Schönhof, Martin; Kern, Daniel

    2002-06-01

The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.

  15. How biochemical constraints of cellular growth shape evolutionary adaptations in metabolism.

    PubMed

    Berkhout, Jan; Bosdriesz, Evert; Nikerel, Emrah; Molenaar, Douwe; de Ridder, Dick; Teusink, Bas; Bruggeman, Frank J

    2013-06-01

    Evolutionary adaptations in metabolic networks are fundamental to evolution of microbial growth. Studies on unneeded-protein synthesis indicate reductions in fitness upon nonfunctional protein synthesis, showing that cell growth is limited by constraints acting on cellular protein content. Here, we present a theory for optimal metabolic enzyme activity when cells are selected for maximal growth rate given such growth-limiting biochemical constraints. We show how optimal enzyme levels can be understood to result from an enzyme benefit minus cost optimization. The constraints we consider originate from different biochemical aspects of microbial growth, such as competition for limiting amounts of ribosomes or RNA polymerases, or limitations in available energy. Enzyme benefit is related to its kinetics and its importance for fitness, while enzyme cost expresses to what extent resource consumption reduces fitness through constraint-induced reductions of other enzyme levels. A metabolic fitness landscape is introduced to define the fitness potential of an enzyme. This concept is related to the selection coefficient of the enzyme and can be expressed in terms of its fitness benefit and cost.
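The benefit-minus-cost optimization can be illustrated with a deliberately toy fitness function; the Monod-type benefit and the (1 - e) constraint factor below are illustrative assumptions, not the paper's model:

```python
def growth_rate(e, vmax=1.0, km=0.2):
    """Toy fitness for an enzyme level e in [0, 1]: a saturating (Monod)
    benefit times a (1 - e) factor standing in for the proteome
    constraint, i.e. resources the enzyme takes from everything else."""
    return vmax * e / (km + e) * (1.0 - e)

def optimal_enzyme_level(lo=0.0, hi=1.0, iters=200):
    """Ternary search for the maximum of the unimodal benefit-cost
    trade-off; at the optimum, marginal benefit equals marginal cost."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if growth_rate(m1) < growth_rate(m2):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)
```

For these parameters the interior optimum sits at e* = sqrt(km^2 + km) - km, i.e. neither zero nor maximal expression, which is the qualitative point of a cost-benefit theory of enzyme levels.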

  16. Harmonic Optimization in Voltage Source Inverter for PV Application using Heuristic Algorithms

    NASA Astrophysics Data System (ADS)

    Kandil, Shaimaa A.; Ali, A. A.; El Samahy, Adel; Wasfi, Sherif M.; Malik, O. P.

    2016-12-01

Selective Harmonic Elimination (SHE) is a fundamental switching-frequency scheme used to eliminate specific harmonic orders. Its application to minimizing low-order harmonics in a three-level inverter is proposed in this paper. The modulation strategy used here is SHEPWM, and the nonlinear equations that characterize the low-order harmonics are solved using the Harmony Search Algorithm (HSA) to obtain the optimal switching angles that minimize the targeted harmonics while maintaining the fundamental at the desired value. Total Harmonic Distortion (THD) of the output voltage is minimized while keeping selected harmonics within allowable limits. A comparison is drawn between HSA, the Genetic Algorithm (GA), and the Newton-Raphson (NR) technique using MATLAB software to assess their effectiveness at obtaining optimized switching angles.
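As a hedged sketch of the underlying SHE equations (reduced to two angles and the 5th harmonic only, not the paper's full three-level problem), the Newton-Raphson branch of such a comparison looks like:

```python
import math

def she_two_angles(m, a1=0.2, a2=1.3, tol=1e-12, max_iter=50):
    """Newton-Raphson for two quarter-wave-symmetric switching angles
    (radians) that set the normalized fundamental to m while cancelling
    the 5th harmonic; an illustrative stand-in for a full SHEPWM set:
        f1 = cos(a1) - cos(a2) - m = 0      (fundamental)
        f2 = cos(5*a1) - cos(5*a2) = 0      (5th harmonic)
    """
    for _ in range(max_iter):
        f1 = math.cos(a1) - math.cos(a2) - m
        f2 = math.cos(5 * a1) - math.cos(5 * a2)
        if abs(f1) < tol and abs(f2) < tol:
            break
        # analytic 2x2 Jacobian, inverted with Cramer's rule
        j11, j12 = -math.sin(a1), math.sin(a2)
        j21, j22 = -5 * math.sin(5 * a1), 5 * math.sin(5 * a2)
        det = j11 * j22 - j12 * j21
        a1 -= (f1 * j22 - f2 * j12) / det
        a2 -= (j11 * f2 - j21 * f1) / det
    return a1, a2
```

NR converges quadratically but needs a good initial guess, which is exactly why heuristic solvers such as HSA and GA are attractive for the larger angle sets.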

  17. Using Monte Carlo Simulations to Develop an Understanding of the Hyperpolarizability Near the Fundamental Limit

    NASA Astrophysics Data System (ADS)

    Shafei, Shoresh; Kuzyk, Mark C.; Kuzyk, Mark G.

    2010-03-01

The hyperpolarizability governs all light-matter interactions. In recent years, quantum mechanical calculations have shown that there is a fundamental limit to the hyperpolarizability of all materials. The fundamental limits are calculated only under the assumption that the Thomas-Kuhn sum rules and the three-level ansatz hold. (The three-level ansatz states that for an optimized hyperpolarizability, only two excited states contribute.) All molecules ever characterized have hyperpolarizabilities that fall well below the limits. However, Monte Carlo simulations of the nonlinear polarizability have shown that attaining values close to the fundamental limit is theoretically possible; but the calculations do not provide guidance as to which potentials are optimal. The focus of our work is to use Monte Carlo techniques to determine sets of energies and transition moments that are consistent with the sum rules, and to study the constraints on their signs. This analysis will be used to work toward a numerical proof of the three-level ansatz.
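A minimal sketch of the Monte Carlo starting point, assuming dimensionless units and only the simplest ground-state sum rule (the full study constrains the whole family of S(m,n) rules):

```python
import math
import random

def sample_states(n_excited=2, rng=None):
    """Draw excited-state energies and ground-state transition moments
    that satisfy the ground-state (Thomas-)Kuhn sum rule
        sum_n E_n |x_0n|^2 = 1/2
    in dimensionless units (hbar = m = e = 1, one electron)."""
    rng = rng or random.Random()
    energies = sorted(rng.uniform(0.5, 2.0) for _ in range(n_excited))
    weights = [rng.random() for _ in range(n_excited)]
    total = sum(weights)
    # split the oscillator strength randomly, then solve for |x_0n|
    moments = [math.sqrt(0.5 * w / (total * e))
               for w, e in zip(weights, energies)]
    return energies, moments

def sum_rule_residual(energies, moments):
    return sum(e * x * x for e, x in zip(energies, moments)) - 0.5
```

Ensembles built this way satisfy the constraint by construction, so the subsequent analysis can concentrate on the allowed magnitudes and signs of the remaining matrix elements.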

  18. A Bayesian Account of Vocal Adaptation to Pitch-Shifted Auditory Feedback

    PubMed Central

    Hahnloser, Richard H. R.

    2017-01-01

    Motor systems are highly adaptive. Both birds and humans compensate for synthetically induced shifts in the pitch (fundamental frequency) of auditory feedback stemming from their vocalizations. Pitch-shift compensation is partial in the sense that large shifts lead to smaller relative compensatory adjustments of vocal pitch than small shifts. Also, compensation is larger in subjects with high motor variability. To formulate a mechanistic description of these findings, we adapt a Bayesian model of error relevance. We assume that vocal-auditory feedback loops in the brain cope optimally with known sensory and motor variability. Based on measurements of motor variability, optimal compensatory responses in our model provide accurate fits to published experimental data. Optimal compensation correctly predicts sensory acuity, which has been estimated in psychophysical experiments as just-noticeable pitch differences. Our model extends the utility of Bayesian approaches to adaptive vocal behaviors. PMID:28135267
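The error-relevance idea can be caricatured in a few lines; the two Gaussian causes and all variances below are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def gauss(x, var):
    """Gaussian density with mean zero and the given variance."""
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def compensation(shift, motor_var, sensory_var, external_var):
    """Posterior-mean vocal correction for a perceived pitch shift that is
    either self-caused (motor variability) or externally imposed, with
    equal prior odds on the two causes."""
    like_self = gauss(shift, motor_var + sensory_var)
    like_ext = gauss(shift, external_var + sensory_var)
    p_self = like_self / (like_self + like_ext)   # P(self-caused | shift)
    gain = motor_var / (motor_var + sensory_var)  # usual Bayesian shrinkage
    return p_self * gain * shift
```

This reproduces both signatures in the abstract: the relative compensation compensation(s, ...)/s falls as the shift grows (large shifts are more plausibly external), and it rises with motor variability.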

  19. High-frequency AC/DC converter with unity power factor and minimum harmonic distortion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wernekinch, E.R.

    1987-01-01

The power factor is controlled by adjusting the relative position of the fundamental component of an optimized PWM-type voltage with respect to the supply voltage. Current harmonic distortion is minimized by the use of optimized firing angles for the converter at a frequency where GTOs can be used. This feature makes this approach very attractive at power levels of 100 to 600 kW. To obtain the optimized PWM pattern, a steepest-descent digital computer algorithm is used. Digital-computer simulations are performed, and a low-power model is constructed and tested to verify the concepts and the behavior of the model. Experimental results show that unity power factor is achieved and that the distortion in the phase currents is 10.4% at 90% of full load. This is less than achievable with sinusoidal PWM, harmonic elimination, hysteresis control, and deadbeat control for the same switching frequency.

  20. Fundamental Principles of Tremor Propagation in the Upper Limb.

    PubMed

    Davidson, Andrew D; Charles, Steven K

    2017-04-01

    Although tremor is the most common movement disorder, there exist few effective tremor-suppressing devices, in part because the characteristics of tremor throughout the upper limb are unknown. To clarify, optimally suppressing tremor requires a knowledge of the mechanical origin, propagation, and distribution of tremor throughout the upper limb. Here we present the first systematic investigation of how tremor propagates between the shoulder, elbow, forearm, and wrist. We simulated tremor propagation using a linear, time-invariant, lumped-parameter model relating joint torques and the resulting joint displacements. The model focused on the seven main degrees of freedom from the shoulder to the wrist and included coupled joint inertia, damping, and stiffness. We deliberately implemented a simple model to focus first on the most basic effects. Simulating tremorogenic joint torque as a sinusoidal input, we used the model to establish fundamental principles describing how input parameters (torque location and frequency) and joint impedance (inertia, damping, and stiffness) affect tremor propagation. We expect that the methods and principles presented here will serve as the groundwork for future refining studies to understand the origin, propagation, and distribution of tremor throughout the upper limb in order to enable the future development of optimal tremor-suppressing devices.

  1. Fundamental Principles of Tremor Propagation in the Upper Limb

    PubMed Central

    Davidson, Andrew D.; Charles, Steven K.

    2017-01-01

    Although tremor is the most common movement disorder, there exist few effective tremor-suppressing devices, in part because the characteristics of tremor throughout the upper limb are unknown. To clarify, optimally suppressing tremor requires a knowledge of the mechanical origin, propagation, and distribution of tremor throughout the upper limb. Here we present the first systematic investigation of how tremor propagates between the shoulder, elbow, forearm, and wrist. We simulated tremor propagation using a linear, time-invariant, lumped-parameter model relating joint torques and the resulting joint displacements. The model focused on the seven main degrees of freedom from the shoulder to the wrist and included coupled joint inertia, damping, and stiffness. We deliberately implemented a simple model to focus first on the most basic effects. Simulating tremorogenic joint torque as a sinusoidal input, we used the model to establish fundamental principles describing how input parameters (torque location and frequency) and joint impedance (inertia, damping, and stiffness) affect tremor propagation. We expect that the methods and principles presented here will serve as the groundwork for future refining studies to understand the origin, propagation, and distribution of tremor throughout the upper limb in order to enable the future development of optimal tremor-suppressing devices. PMID:27957608

  2. From Finite Time to Finite Physical Dimensions Thermodynamics: The Carnot Engine and Onsager's Relations Revisited

    NASA Astrophysics Data System (ADS)

    Feidt, Michel; Costea, Monica

    2018-04-01

Many works have been devoted to finite time thermodynamics since the contribution of Curzon and Ahlborn [1], which is generally considered its origin. Nevertheless, earlier works in this domain have since been brought to light [2], [3], and recently, results of an attempt to correlate Finite Time Thermodynamics with Linear Irreversible Thermodynamics according to Onsager's theory were reported [4]. The aim of the present paper is to extend and improve the approach to thermodynamic optimization of generic objective functions of a Carnot engine in the linear response regime presented in [4]. The case study of the Carnot engine is revisited under the steady-state hypothesis, with non-adiabaticity of the system considered and heat loss accounted for by an overall heat leak between the engine heat reservoirs. The optimization is focused on the main objective functions connected to engineering conditions, namely maximum efficiency or power output, apart from the one relative to entropy, which is more fundamental. Results given in reference [4] for maximum power output and minimum entropy production as objective functions are reconsidered and clarified, and the change from finite time to finite physical dimensions is shown to be effected through the heat flow rate at the source. Our modeling has led to new results for Carnot engine optimization and shows that the primary interest for an engineer lies mainly in what we call Finite Physical Dimensions Optimal Thermodynamics.

  3. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  4. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
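A one-parameter toy version of the variational principle, assuming a discrete state space and a Boltzmann coarse-grained family (both illustrative, far simpler than a molecular system):

```python
import math

def srel(p, q):
    """Relative entropy (KL divergence) between discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def boltzmann(theta, xs):
    """Coarse-grained family q_theta(x) ~ exp(-theta * x)."""
    w = [math.exp(-theta * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

def fit_by_srel(p, xs, theta=0.0, lr=0.5, steps=500):
    """Gradient descent on S_rel. For this family the gradient reduces to
    a moment mismatch, dS_rel/dtheta = <x>_p - <x>_q, so the optimum is
    moment matching -- a discrete analogue of the force-matching link
    mentioned in the abstract."""
    mean_p = sum(pi * x for pi, x in zip(p, xs))
    for _ in range(steps):
        q = boltzmann(theta, xs)
        mean_q = sum(qi * x for qi, x in zip(q, xs))
        theta -= lr * (mean_p - mean_q)
    return theta
```

When the reference distribution lies inside the coarse-grained family, the minimizer recovers it exactly and S_rel vanishes; otherwise the residual S_rel quantifies the information lost upon coarse-graining.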

  5. Use of combined radar and radiometer systems in space for precipitation measurement: Some ideas

    NASA Technical Reports Server (NTRS)

    Moore, R. K.

    1981-01-01

A brief survey is given of some fundamental physical concepts of optimal polarization characteristics of a transmission path or a scattering ensemble of hydrometeors. It is argued that, based on this optimization concept, definite advances in remote atmospheric sensing are to be expected. Basic properties of Kennaugh's optimal polarization theory are identified.

  6. The design and networking of dynamic satellite constellations for global mobile communication systems

    NASA Technical Reports Server (NTRS)

    Cullen, Cionaith J.; Benedicto, Xavier; Tafazolli, Rahim; Evans, Barry

    1993-01-01

    Various design factors for mobile satellite systems, whose aim is to provide worldwide voice and data communications to users with hand-held terminals, are examined. Two network segments are identified - the ground segment (GS) and the space segment (SS) - and are seen to be highly dependent on each other. The overall architecture must therefore be adapted to both of these segments, rather than each being optimized according to its own criteria. Terrestrial networks are grouped and called the terrestrial segment (TS). In the SS, of fundamental importance is the constellation altitude. The effect of the altitude on decisions such as constellation design choice and on network aspects like call handover statistics are fundamental. Orbit resonance is introduced and referred to throughout. It is specifically examined for its useful properties relating to GS/SS connectivities.

  7. Humans make efficient use of natural image statistics when performing spatial interpolation.

    PubMed

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors, and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system efficiently exploits the statistical structure of natural images.
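The contrast between an optimal linear observer and the local-mean heuristic can be sketched on a synthetic 1-D signal; the AR(1) statistics below are a crude stand-in for natural-image correlations, not the paper's data:

```python
import random

def train_weights(signal):
    """Least-squares weights (w1, w2) for predicting s[i] from its two
    neighbours s[i-1], s[i+1] -- the optimal *linear* observer for
    stationary statistics, fit from the signal itself (2x2 normal
    equations solved by Cramer's rule)."""
    saa = sab = sbb = say = sby = 0.0
    for i in range(1, len(signal) - 1):
        a, b, y = signal[i - 1], signal[i + 1], signal[i]
        saa += a * a; sab += a * b; sbb += b * b
        say += a * y; sby += b * y
    det = saa * sbb - sab * sab
    return (say * sbb - sby * sab) / det, (saa * sby - sab * say) / det

def mse(signal, w1, w2):
    """Mean squared interpolation error for given neighbour weights."""
    err = [(signal[i] - w1 * signal[i - 1] - w2 * signal[i + 1]) ** 2
           for i in range(1, len(signal) - 1)]
    return sum(err) / len(err)

# synthetic 1-D "image row" with AR(1) correlations
rng = random.Random(0)
signal = [0.0]
for _ in range(5000):
    signal.append(0.9 * signal[-1] + rng.gauss(0.0, 1.0))
w1, w2 = train_weights(signal)
```

By construction the fitted observer is at least as accurate in-sample as the local-mean heuristic (weights 0.5, 0.5), which is one member of the linear family it searches over.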

  8. Lightweight structure design for supporting plate of primary mirror

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Wang, Wei; Liu, Bei; Qu, Yan Jun; Li, Xu Peng

    2017-10-01

    A topological optimization design for the lightweight technology of supporting plate of the primary mirror is presented in this paper. The supporting plate of the primary mirror is topologically optimized under the condition of determined shape, loads and environment. And the optimal structure is obtained. The diameter of the primary mirror in this paper is 450mm, and the material is SiC1 . It is better to select SiC/Al as the supporting material. Six points of axial relative displacement can be used as constraints in optimization2 . Establishing the supporting plate model and setting up the model parameters. After analyzing the force of the main mirror on the supporting plate, the model is applied with force and constraints. Modal analysis and static analysis of supporting plates are calculated. The continuum structure topological optimization mathematical model is created with the variable-density method. The maximum deformation of the surface of supporting plate under the gravity of the mirror and the first model frequency are assigned to response variable, and the entire volume of supporting structure is converted to object function. The structures before and after optimization are analyzed using the finite element method. Results show that the optimized fundamental frequency increases 29.85Hz and has a less displacement compared with the traditional structure.

  9. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model, where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
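For the noiseless variant of the group testing problem, the classical COMP decoder illustrates the setting; the pool sizes and counts below are arbitrary illustrative choices:

```python
import random

def comp_decode(pools, outcomes, n_items):
    """COMP decoder for noiseless group testing: every item appearing in
    a negative pool is cleared; whatever remains is declared defective.
    The result is always a superset of the true defective set."""
    suspects = set(range(n_items))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            suspects -= set(pool)
    return suspects

def run_tests(pools, defectives):
    """A pool tests positive iff it contains at least one defective."""
    return [bool(set(pool) & defectives) for pool in pools]

rng = random.Random(3)
n_items, defectives = 20, {4, 13}
pools = [rng.sample(range(n_items), 5) for _ in range(60)]
decoded = comp_decode(pools, run_tests(pools, defectives), n_items)
```

With enough random pools the spurious candidates are eliminated with high probability; handling unreliable outcomes and threshold queries, as in the thesis, requires the more robust constructions it describes.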

  10. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
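A scalar caricature of the idea, assuming a linear forward model y = a*u with Gaussian noise (PIAF proper couples such learning with state filtering; all numbers below are illustrative):

```python
import random

def learn_gain(a_true=1.8, noise_sd=0.3, steps=200, seed=0):
    """Bayes-optimal (conjugate Gaussian) estimation of an unknown scalar
    forward-model gain a in  y = a*u + noise, keeping an explicit
    posterior variance over a so the filter never over-trusts the model
    -- the over-trust is what creates the delusional loop."""
    rng = random.Random(seed)
    mean, var = 0.0, 10.0                          # prior: a ~ N(0, 10)
    for _ in range(steps):
        u = rng.uniform(-1.0, 1.0)                 # motor command
        y = a_true * u + rng.gauss(0.0, noise_sd)  # sensed consequence
        k = var * u / (var * u * u + noise_sd ** 2)  # Kalman/RLS-style gain
        mean += k * (y - mean * u)
        var *= 1.0 - k * u                         # uncertainty shrinks with data
    return mean, var

mean, var = learn_gain()
```

Because the posterior variance is carried along, early predictions are weighted weakly and learning does not stall on an overconfident wrong model, which is the behavior the abstract attributes to PIAF.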

  11. Comparison of fundamental, second harmonic, and superharmonic imaging: a simulation study.

    PubMed

    van Neer, Paul L M J; Danilouchkine, Mikhail G; Verweij, Martin D; Demi, Libertario; Voormolen, Marco M; van der Steen, Anton F W; de Jong, Nico

    2011-11-01

    In medical ultrasound, fundamental imaging (FI) uses the reflected echoes from the same spectral band as that of the emitted pulse. The transmission frequency determines the trade-off between penetration depth and spatial resolution. Tissue harmonic imaging (THI) employs the second harmonic of the emitted frequency band to construct images. Recently, superharmonic imaging (SHI) has been introduced, which uses the third to the fifth (super) harmonics. The harmonic level is determined by two competing phenomena: nonlinear propagation and frequency dependent attenuation. Thus, the transmission frequency yielding the optimal trade-off between the spatial resolution and the penetration depth differs for THI and SHI. This paper quantitatively compares the concepts of fundamental, second harmonic, and superharmonic echocardiography at their optimal transmission frequencies. Forward propagation is modeled using a 3D-KZK implementation and the iterative nonlinear contrast source (INCS) method. Backpropagation is assumed to be linear. Results show that the fundamental lateral beamwidth is the narrowest at focus, while the superharmonic one is narrower outside the focus. The lateral superharmonic roll-off exceeds the fundamental and second harmonic roll-off. Also, the axial resolution of SHI exceeds that of FI and THI. The far-field pulse-echo superharmonic pressure is lower than that of the fundamental and second harmonic. SHI appears suited for echocardiography and is expected to improve its image quality at the cost of a slight reduction in depth-of-field.

  12. Optimality conditions for the numerical solution of optimization problems with PDE constraints :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.

  13. Planning Under Uncertainty: Methods and Applications

    DTIC Science & Technology

    2010-06-09

Research has begun into fundamental algorithms for optimization and re-optimization of continuous optimization problems (such as linear and quadratic... algorithm yields a 14.3% improvement over the original design while saving 68.2% of the simulation evaluations compared to standard sample-path... They provide tools for building and justifying computational algorithms for such problems. (Year: 2010, Month: 03; final research under this grant.)

  14. Hall thruster with grooved walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Hong; Ning Zhongxi; Yu Daren

    2013-02-28

Axial-oriented and azimuthally distributed grooves form on the channel walls of a Hall thruster after the engine undergoes long-term operation. Existing studies have demonstrated the relation between the grooves and near-wall physics, such as the sheath and electron near-wall transport. The idea of optimizing thruster performance with such grooves has also been proposed. Therefore, this paper is devoted to exploring the effects of wall grooves on the discharge characteristics of a Hall thruster. With experimental measurements, the variations in electron conductivity, ionization distribution, and integrated performance are obtained. The involved physical mechanisms are then analyzed and discussed. The findings help not only to better understand the working principle of Hall thruster discharge but also to establish a physical foundation for subsequent optimization with artificial grooves.

  15. [A modern Kaspar Hauser].

    PubMed

    Etzersdorfer, Elmar; Schäfer, Monika; Becker-Pfaff, Johannes

    2002-07-01

The case of a mentally disturbed young man is described whose identity could not be established for more than four months. It is compared to that of Kaspar Hauser, and similarities and differences are highlighted. The "Kaspar Hauser of Stuttgart" was retrospectively revealed to be a young man with experiences of severe deprivation. The missing case history and collateral history were an obstacle in the treatment, albeit not a fundamental one, progress being achieved with a relation-oriented approach. Information obtained after clarification of the young man's identity suggests a multiplication of unfavourable circumstances leading to sub-optimal treatment and finally the "Kaspar Hauser situation". Contributing factors were his belonging to a less integrated group of migrants in Germany, severe deprivation in the family, and a generally sub-optimal treatment of mentally disturbed young patients.

  16. Can market-based policies accomplish the optimal floodplain management? A gap between static and dynamic models.

    PubMed

    Mori, Koichiro

    2009-02-01

    The purpose of this short article is to set static and dynamic models for optimal floodplain management and to compare policy implications from the models. River floodplains are important multiple resources in that they provide various ecosystem services. It is fundamentally significant to consider environmental externalities that accrue from ecosystem services of natural floodplains. There is an interesting gap between static and dynamic models about policy implications for floodplain management, although they are based on the same assumptions. Essentially, we can derive the same optimal conditions, which imply that the marginal benefits must equal the sum of the marginal costs and the social external costs related to ecosystem services. Thus, we have to internalise the external costs by market-based policies. In this respect, market-based policies seem to be effective in a static model. However, they are not sufficient in the context of a dynamic model because the optimal steady state turns out to be unstable. Based on a dynamic model, we need more coercive regulation policies.
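    The optimality condition summarized above can be written schematically (the notation here is illustrative, not the article's):

```latex
% At the optimal level of floodplain development d*, the marginal
% benefit MB equals the marginal cost MC plus the marginal social
% external cost MEC of lost ecosystem services (symbols illustrative).
MB(d^{*}) = MC(d^{*}) + MEC(d^{*})
```

    Market-based instruments internalise the MEC term in a static setting; the article's point is that the steady state satisfying this condition can be dynamically unstable.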

  17. Estimating the relative utility of screening mammography.

    PubMed

    Abbey, Craig K; Eckstein, Miguel P; Boone, John M

    2013-05-01

    The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
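    The stated relation between the ROC slope at the optimal operating point, prevalence, and relative utility can be sketched numerically. This is a minimal illustration assuming the simple form slope = (1 - p)/(p · U_rel); the function name and inputs are illustrative, not the study's code.

```python
# Hedged sketch of the stated relation between the ROC slope at the
# optimal operating point, disease prevalence p, and relative utility:
#   slope_opt = (1 - p) / (p * U_rel)  =>  U_rel = (1 - p) / (p * slope_opt)

def relative_utility(slope: float, prevalence: float) -> float:
    """Relative utility implied by the ROC slope at the operating point."""
    return (1.0 - prevalence) / (prevalence * slope)

# DMIST-like numbers from the abstract: prevalence 0.59%, slope near unity.
print(round(relative_utility(slope=1.0, prevalence=0.0059)))  # 168
```

    The result is of the same order as the estimate of 162 reported in the abstract, consistent with near-unity optimal slopes at screening prevalence.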

  18. Small Changes: Using Assessment to Direct Instructional Practices in Large-Enrollment Biochemistry Courses

    PubMed Central

    Xu, Xiaoying; Lewis, Jennifer E.; Loertscher, Jennifer; Minderhout, Vicky; Tienson, Heather L.

    2017-01-01

    Multiple-choice assessments provide a straightforward way for instructors of large classes to collect data related to student understanding of key concepts at the beginning and end of a course. By tracking student performance over time, instructors receive formative feedback about their teaching and can assess the impact of instructional changes. The evidence of instructional effectiveness can in turn inform future instruction, and vice versa. In this study, we analyzed student responses on an optimized pretest and posttest administered during four different quarters in a large-enrollment biochemistry course. Student performance and the effect of instructional interventions related to three fundamental concepts—hydrogen bonding, bond energy, and pKa—were analyzed. After instructional interventions, a larger proportion of students demonstrated knowledge of these concepts compared with data collected before instructional interventions. Student responses trended from inconsistent to consistent and from incorrect to correct. The instructional effect was particularly remarkable for the later three quarters related to hydrogen bonding and bond energy. This study supports the use of multiple-choice instruments to assess the effectiveness of instructional interventions, especially in large classes, by providing instructors with quick and reliable feedback on student knowledge of each specific fundamental concept. PMID:28188280

  19. BaTiO3-based piezoelectrics: Fundamentals, current status, and perspectives

    NASA Astrophysics Data System (ADS)

    Acosta, M.; Novak, N.; Rojas, V.; Patel, S.; Vaish, R.; Koruza, J.; Rossetti, G. A.; Rödel, J.

    2017-12-01

    We present a critical review that encompasses the fundamentals and state-of-the-art knowledge of barium titanate-based piezoelectrics. First, the essential crystallography, thermodynamic relations, and concepts necessary to understand piezoelectricity and ferroelectricity in barium titanate are discussed. Strategies to optimize piezoelectric properties through microstructure control and chemical modification are also introduced. Thereafter, we systematically review the synthesis, microstructure, and phase diagrams of barium titanate-based piezoelectrics and provide a detailed compilation of their functional and mechanical properties. The most salient materials treated include the (Ba,Ca)(Zr,Ti)O3, (Ba,Ca)(Sn,Ti)O3, and (Ba,Ca)(Hf,Ti)O3 solid solution systems. The technological relevance of barium titanate-based piezoelectrics is also discussed and some potential market indicators are outlined. Finally, perspectives on productive lines of future research and promising areas for the applications of these materials are presented.

  20. Enhancing Electrochemical Water-Splitting Kinetics by Polarization-Driven Formation of Near-Surface Iron(0): An In Situ XPS Study on Perovskite-Type Electrodes**

    PubMed Central

    Opitz, Alexander K; Nenning, Andreas; Rameshan, Christoph; Rameshan, Raffael; Blume, Raoul; Hävecker, Michael; Knop-Gericke, Axel; Rupprechter, Günther; Fleig, Jürgen; Klötzer, Bernhard

    2015-01-01

    In the search for optimized cathode materials for high-temperature electrolysis, mixed conducting oxides are highly promising candidates. This study deals with fundamentally novel insights into the relation between surface chemistry and electrocatalytic activity of lanthanum ferrite based electrolysis cathodes. To this end, near-ambient-pressure X-ray photoelectron spectroscopy (NAP-XPS) and impedance spectroscopy experiments were performed simultaneously on electrochemically polarized La0.6Sr0.4FeO3−δ (LSF) thin film electrodes. Under cathodic polarization the formation of Fe0 on the LSF surface could be observed, which was accompanied by a strong improvement of the electrochemical water splitting activity of the electrodes. This correlation suggests a fundamentally different water splitting mechanism in the presence of the metallic iron species and may open novel paths in the search for electrodes with increased water splitting activity. PMID:25557533

  1. Phase retrieval from intensity-only data by relative entropy minimization.

    PubMed

    Deming, Ross W

    2007-11-01

    A recursive algorithm, which appears to be new, is presented for estimating the amplitude and phase of a wave field from intensity-only measurements on two or more scan planes at different axial positions. The problem is framed as a nonlinear optimization, in which the angular spectrum of the complex field model is adjusted in order to minimize the relative entropy, or Kullback-Leibler divergence, between the measured and reconstructed intensities. The most common approach to this so-called phase retrieval problem is a variation of the well-known Gerchberg-Saxton algorithm devised by Misell (J. Phys. D6, L6, 1973), which is efficient and extremely simple to implement. The new algorithm has a computational structure that is very similar to Misell's approach, despite the fundamental difference in the optimization criteria used for each. Based upon results from noisy simulated data, the new algorithm appears to be more robust than Misell's approach and to produce better results from low signal-to-noise ratio data. The convergence of the new algorithm is examined.
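    The relative-entropy criterion the algorithm minimizes can be sketched for discrete intensity samples. This is a minimal illustration of the optimization objective only (not the recursive algorithm itself); the names are illustrative, and both patterns are normalized to unit total power before comparison.

```python
import math

def kl_divergence(measured, reconstructed):
    """Kullback-Leibler divergence between two intensity patterns, each
    normalized to unit total power. A sketch of the optimization
    criterion; variable names are illustrative."""
    sm, sr = sum(measured), sum(reconstructed)
    p = [x / sm for x in measured]
    q = [x / sr for x in reconstructed]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical patterns give zero divergence; a mismatch gives a positive value.
print(kl_divergence([1, 2, 3], [1, 2, 3]))      # 0.0
print(kl_divergence([1, 2, 3], [3, 2, 1]) > 0)  # True
```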

  2. Optimal design of high-rise buildings with respect to fundamental eigenfrequency

    NASA Astrophysics Data System (ADS)

    Alavi, Arsalan; Rahgozar, Reza; Torkzadeh, Peyman; Hajabasi, Mohamad Ali

    2017-12-01

    In modern tall and slender structures, dynamic responses, rather than strength criteria, are usually the dominant design requirements. Resonance is often a threatening phenomenon for such structures. To avoid this problem, the fundamental eigenfrequency, or an eigenfrequency of higher order, should be maximized. An optimization problem with this objective is constructed in this paper and is applied to a high-rise building. Using a variational method, the objective function is maximized, yielding a particular profile for the first mode shape. Based on this preselected profile, a parametric formulation for flexural stiffness is calculated. Because some stiffness values are near zero, the obtained formulation is modified by adding a lower bound constraint. To handle this constraint some new parameters are introduced, thereby allowing construction of a model relating the unknown parameters. Based on this mathematical model, a design algorithmic procedure is presented. For the sake of convenience, a single-input design graph is presented as well. The main merit of the proposed method, compared to previous research, is its hand-calculation aspect, suitable for parametric studies and sensitivity analysis. As the presented formulations are dimensionless, they are applicable in any dimensional system. Accuracy and practicality of the proposed method are illustrated at the end by applying it to a real-life structure.

  3. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.

  4. Optimization of landscape pattern [Chapter 8

    Treesearch

    John Hof; Curtis Flather

    2007-01-01

    A fundamental assumption in landscape ecology is that spatial patterns have significant influences on the flows of materials, energy, and information while processes create, modify, and maintain spatial patterns. Thus, it is of paramount importance in both theory and practice to address the questions of landscape pattern optimization ... For example, can landscape...

  5. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
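    In discrete form, the variational objective described above can be sketched as follows (notation illustrative; M denotes the coarse-graining map, and the mapping-entropy term of the full functional is omitted here):

```latex
% Relative entropy between the atomistic (AA) distribution and the
% coarse-grained (CG) model distribution, summed over configurations i;
% M(i) is the coarse-grained image of configuration i.
S_{\mathrm{rel}} = \sum_{i} p_{\mathrm{AA}}(i)\,
\ln\!\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\bigl(M(i)\bigr)}
```

    Minimizing this quantity over the coarse-grained model's parameters is the variational step the abstract refers to.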

  6. Measurement uncertainty relations: characterising optimal error bounds for qubits

    NASA Astrophysics Data System (ADS)

    Bullock, T.; Busch, P.

    2018-07-01

    In standard formulations of the uncertainty principle, two fundamental features are typically cast as impossibility statements: two noncommuting observables cannot in general both be sharply defined (for the same state), nor can they be measured jointly. The pioneers of quantum mechanics were acutely aware and puzzled by this fact, and it motivated Heisenberg to seek a mitigation, which he formulated in his seminal paper of 1927. He provided intuitive arguments to show that the values of, say, the position and momentum of a particle can at least be unsharply defined, and they can be measured together provided some approximation errors are allowed. Only now, nine decades later, a working theory of approximate joint measurements is taking shape, leading to rigorous and experimentally testable formulations of associated error tradeoff relations. Here we briefly review this new development, explaining the concepts and steps taken in the construction of optimal joint approximations of pairs of incompatible observables. As a case study, we deduce measurement uncertainty relations for qubit observables using two distinct error measures. We provide an operational interpretation of the error bounds and discuss some of the first experimental tests of such relations.

  7. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
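    A minimal sketch of the genetic operators named above (selection/reproduction, crossover, mutation), applied to a toy one-variable maximization rather than the paper's truss or actuator problems; all parameters are illustrative.

```python
import random

# Toy continuous design problem: maximize f(x) = -(x - 3)^2 on [0, 10].

def fitness(x):
    return -(x - 3.0) ** 2

def evolve(generations=200, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Reproduction: tournament selection of parents.
        def pick():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            # Crossover: random blend of the two parents.
            w = rng.random()
            child = w * p1 + (1.0 - w) * p2
            # Mutation: occasional small Gaussian perturbation.
            if rng.random() < 0.1:
                child += rng.gauss(0.0, 0.5)
            children.append(min(10.0, max(0.0, child)))
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near the optimum x = 3
```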

  8. Portfolio theory of optimal isometric force production: Variability predictions and nonequilibrium fluctuation dissipation theorem

    NASA Astrophysics Data System (ADS)

    Frank, T. D.; Patanarapeelert, K.; Beek, P. J.

    2008-05-01

    We derive a fundamental relationship between the mean and the variability of isometric force. The relationship arises from an optimal collection of active motor units such that the force variability assumes a minimum (optimal isometric force). The relationship is shown to be independent of the explicit motor unit properties and of the dynamical features of isometric force production. A constant coefficient of variation in the asymptotic regime and a nonequilibrium fluctuation-dissipation theorem for optimal isometric force are predicted.

  9. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  10. An effective and comprehensive model for optimal rehabilitation of separate sanitary sewer systems.

    PubMed

    Diogo, António Freire; Barros, Luís Tiago; Santos, Joana; Temido, Jorge Santos

    2018-01-15

    In the field of rehabilitation of separate sanitary sewer systems, a large number of technical, environmental, and economic aspects are often relevant in the decision-making process, which may be modelled as a multi-objective optimization problem. Examples are those related to the operation and assessment of networks, optimization of structural, hydraulic, sanitary, and environmental performance, rehabilitation programmes, and execution works. In particular, the cost of investment, operation and maintenance needed to reduce or eliminate Infiltration from the underground water table and Inflows of storm water surface runoff (I/I) using rehabilitation techniques or related methods can be significantly lower than the cost of transporting and treating these flows throughout the lifespan of the systems or period studied. This paper presents a comprehensive I/I cost-benefit approach for rehabilitation that explicitly considers all elements of the systems and shows how the approximation is incorporated as an objective function in a general evolutionary multi-objective optimization model. It takes into account network performance and wastewater treatment costs, average values of several input variables, and rates that can reflect the adoption of different predictable or limiting scenarios. The approach can be used as a practical and fast tool to support decision-making in sewer network rehabilitation in any phase of a project. The fundamental aspects, modelling, implementation details and preliminary results of a two-objective optimization rehabilitation model using a genetic algorithm, with a second objective function related to the structural condition of the network and the service failure risk, are presented. The basic approach is applied to three real-world case studies of sanitary sewerage systems in Coimbra, and the results show the simplicity, suitability, effectiveness, and usefulness of the approximation implemented and of the objective function proposed.

  11. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
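    The Fisher-information-based comparison of candidate sampling distributions can be illustrated with a toy D-optimality computation on the logistic model. The model, finite-difference sensitivities, parameter values, and candidate designs below are illustrative sketches, not the paper's framework.

```python
import math

# Compare candidate sampling-time sets by the determinant of a Fisher
# information matrix built from finite-difference sensitivities of a
# logistic growth model x(t) = K / (1 + (K/x0 - 1) e^{-rt}).

def logistic(t, r=1.0, K=10.0, x0=0.1):
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

def fisher_det(times, r=1.0, K=10.0, h=1e-6):
    # Sensitivities dx/dr and dx/dK at each sampling time.
    S = []
    for t in times:
        dr = (logistic(t, r + h, K) - logistic(t, r - h, K)) / (2 * h)
        dK = (logistic(t, r, K + h) - logistic(t, r, K - h)) / (2 * h)
        S.append((dr, dK))
    # F = S^T S for unit noise variance; determinant of the 2x2 matrix.
    a = sum(s[0] * s[0] for s in S)
    b = sum(s[0] * s[1] for s in S)
    d = sum(s[1] * s[1] for s in S)
    return a * d - b * b

# Samples spread across the growth curve are more informative (larger
# det F) than samples clustered at early times.
print(fisher_det([1, 3, 5, 7, 9]) > fisher_det([0.1, 0.2, 0.3, 0.4, 0.5]))  # True
```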

  12. Theoretical Foundation of Copernicus: A Unified System for Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Ocampo, Cesar; Senent, Juan S.; Williams, Jacob

    2010-01-01

    The fundamental methods are described for the general spacecraft trajectory design and optimization software system called Copernicus. The methods rely on a unified framework that is used to model, design, and optimize spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The trajectory model, with its associated equations of motion and maneuver models, are discussed.

  13. Optimizing atomic force microscopy for characterization of diamond-protein interfaces

    NASA Astrophysics Data System (ADS)

    Rezek, Bohuslav; Ukraintsev, Egor; Kromka, Alexander

    2011-12-01

    Atomic force microscopy (AFM) in contact mode and tapping mode is employed for high resolution studies of soft organic molecules (fetal bovine serum proteins) on hard inorganic diamond substrates in solution and air. Various effects in morphology and phase measurements related to the cantilever spring constant, amplitude of tip oscillations, surface approach, tip shape and condition are demonstrated and discussed based on the proposed schematic models. We show that both diamond and proteins can be mechanically modified by a Si AFM cantilever. We propose how to choose a suitable cantilever type, optimize scanning parameters, recognize and minimize various artifacts, and obtain reliable AFM data both in solution and in air to reveal microscopic characteristics of protein-diamond interfaces. We also suggest that monocrystalline diamond is a well-defined substrate that is applicable to fundamental studies of molecules on surfaces in general.

  14. Partial discharge localization in power transformers based on the sequential quadratic programming-genetic algorithm adopting acoustic emission techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hua-Long; Liu, Hua-Dong

    2014-10-01

    Partial discharge (PD) in power transformers is one of the prime causes of insulation degradation and power faults. Hence, it is of great importance to study techniques for the detection and localization of PD in theory and practice. The detection and localization of PD using acoustic emission (AE) techniques, a form of non-destructive testing, have attracted increasing attention owing to their powerful localization capability and high precision. The localization algorithm is the key factor determining localization accuracy in AE localization of PD. Many localization algorithms, both intelligent and non-intelligent, exist for PD source localization with AE techniques. However, existing algorithms suffer from defects such as premature convergence, poor local optimization ability, and unsuitability for field applications. To overcome the poor local optimization ability and the easily induced premature convergence of the fundamental genetic algorithm (GA), an improved GA is proposed, namely the sequential quadratic programming-genetic algorithm (SQP-GA). In this hybrid optimization algorithm, the sequential quadratic programming (SQP) algorithm is integrated into the fundamental GA as a basic operator, so the local search ability of the fundamental GA is improved effectively and premature convergence is overcome. Experimental results from numerical simulations of benchmark functions show that the hybrid SQP-GA is better than the fundamental GA in convergence speed and optimization precision, and the proposed algorithm has an outstanding optimization effect. The SQP-GA is then applied to the ultrasonic localization problem of PD in transformers, and an ultrasonic localization method for PD in transformers based on the SQP-GA is proposed.
Localization results based on the SQP-GA are compared with those of the GA and other intelligent and non-intelligent algorithms. Results from both simulated examples and field experiments demonstrate that the localization method based on the SQP-GA effectively prevents the results from being trapped in local optima, is highly feasible and well suited to field applications, and achieves enhanced localization precision and satisfactory effectiveness.
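    The hybrid idea (global GA search followed by local refinement of the best individual) can be sketched on a toy problem. In this illustration a simple coordinate descent stands in for the SQP step (the paper uses true sequential quadratic programming), and the objective is a generic test function, not a PD-localization model.

```python
import random

def sphere(x):
    """Toy objective to minimize; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def ga(pop_size=20, gens=100, dim=3, seed=2):
    """Plain GA: truncation selection, average crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sphere)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]   # crossover
            if rng.random() < 0.2:                          # mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.3)
            children.append(child)
        pop = children
    return min(pop, key=sphere)

def local_polish(x, step=0.1, iters=200):
    """Stand-in for the SQP refinement: shrinking-step coordinate descent."""
    x = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] += d
                if sphere(trial) < sphere(x):
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5
    return x

best = local_polish(ga())
print(sphere(best) < 1e-3)  # True: the local step tightens the GA result
```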

  15. HUMAN SPEECH: A RESTRICTED USE OF THE MAMMALIAN LARYNX

    PubMed Central

    Titze, Ingo R.

    2016-01-01

    Purpose Speech has been hailed as unique to human evolution. While the inventory of distinct sounds producible with vocal tract articulators is a great advantage in human oral communication, it is argued here that the larynx as a sound source in speech is limited in its range and capability because a low fundamental frequency is ideal for phonemic intelligibility and source-filter independence. Method Four existing data sets were combined to make an argument regarding exclusive use of the larynx for speech: (1) range of fundamental frequency, (2) laryngeal muscle activation, (3) vocal fold length in relation to sarcomere length of the major laryngeal muscles, and (4) vocal fold morphological development. Results Limited data support the notion that speech tends to produce a contracture of the larynx. The morphological design of the human vocal folds, like that of primates and other mammals, is optimized for vocal communication over distances for which higher fundamental frequency, higher intensity, and fewer unvoiced segments are utilized than in conversational speech. Conclusion The positive message is that raising one’s voice to call, shout, or sing, or executing pitch glides to stretch the vocal folds, can counteract this trend toward a contracted state. PMID:27397113

  16. Applications of artificial neural nets in structural mechanics

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hajela, Prabhat

    1990-01-01

    A brief introduction to the fundamentals of neural nets is given, followed by two applications in structural optimization. In the first case, the feasibility of using neural nets to simulate the many structural analyses performed during optimization iterations was studied. In the second case, the concept of using neural nets to capture design expertise was studied.

  17. Applications of artificial neural nets in structural mechanics

    NASA Technical Reports Server (NTRS)

    Berke, L.; Hajela, P.

    1992-01-01

    A brief introduction to the fundamentals of neural nets is given, followed by two applications in structural optimization. In the first case, the feasibility of using neural nets to simulate the many structural analyses performed during optimization iterations was studied. In the second case, the concept of using neural nets to capture design expertise was studied.

  18. Fundamental role of bistability in optimal homeostatic control

    NASA Astrophysics Data System (ADS)

    Wang, Guanyu

    2013-03-01

    Bistability is a fundamental phenomenon in nature and has a number of fine properties. However, these properties are consequences of bistability at the physiological level, which do not explain why it had to emerge during evolution. Using optimal homeostasis as the first principle and Pontryagin's Maximum Principle as the optimization approach, I find that bistability emerges as an indispensable control mechanism. Because the mathematical model is general and the result is independent of parameters, it is likely that most biological systems use bistability to control homeostasis. Glucose homeostasis represents a good example. It turns out that bistability is the only solution to a dilemma in glucose homeostasis: high insulin efficiency is required for rapid plasma glucose clearance, whereas an insulin-sparing state is required to guarantee the brain's safety during fasting. This new perspective can illuminate studies on the twin epidemics of obesity and diabetes and the corresponding intervention strategies. For example, overnutrition and a sedentary lifestyle may represent sudden environmental changes that cause the loss of optimality, which may contribute to the marked rise of obesity and diabetes in our generation.

  19. Diffusion and surface alloying of gradient nanostructured metals

    PubMed Central

    Lu, Ke

    2017-01-01

    Gradient nanostructures (GNSs) have been optimized in recent years for desired performance. The diffusion behavior in GNS metals is crucial for understanding the diffusion mechanism and relative characteristics of different interfaces that provide fundamental understanding for advancing the traditional surface alloying processes. In this paper, atomic diffusion, reactive diffusion, and surface alloying processes are reviewed for various metals with a preformed GNS surface layer. We emphasize the promoted atomic diffusion and reactive diffusion in the GNS surface layer that are related to a higher interfacial energy state with respect to those in relaxed coarse-grained samples. Accordingly, different surface alloying processes, such as nitriding and chromizing, have been modified significantly, and some diffusion-related properties have been enhanced. Finally, the perspectives on current research in this field are discussed. PMID:28382244

  20. Fundamental Limits of Delay and Security in Device-to-Device Communication

    DTIC Science & Technology

    2013-01-01

    systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff. The erasure MD...file, and a coding scheme based on erasure compression and Slepian-Wolf binning is presented. The coding scheme is shown to provide a Pareto optimal...(maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff. The erasure MD setup is then used to propose a

  1. Assessing the antecedents and consequences of threat appraisal of an acute psychosocial stressor: the role of optimism, displacement behavior, and physiological responses.

    PubMed

    Zandara, Martina; Villada, Carolina; Hidalgo, Vanesa; Salvador, Alicia

    2018-03-13

    The feeling of stress is increasing in today's societies, particularly in young adults subjected to social evaluative situations in highly competitive academic and work contexts. Threat appraisal is a primary and fundamental reaction when people face a stressful situation. The aim of this study was to investigate the role of dispositional optimism as an antecedent and displacement behavior as a consequence of threat appraisal of a social-evaluative situation of stress. A second objective was to verify the moderating role of physiological responses to stress (heart rate and cortisol reactivity) in the relationship between threat appraisal and displacement behavior. To do this, we combined the Trier Social Stress Test (TSST) with ethological analysis, self-report questionnaires, and physiological data. As expected, people who scored higher on dispositional optimism perceived stress as less threatening, and a higher perception of threat was positively related to displacement behavior patterns. Moreover, the results showed that threat appraisal fully mediates the relationship between dispositional optimism and displacement behavior, and that only heart rate reactivity (not cortisol) moderates the relationship between threat appraisal and displacement behavior.

  2. Optimal and Approximately Optimal Control Policies for Queues in Heavy Traffic,

    DTIC Science & Technology

    1987-03-01

    optimal and 'nearly optimal' control problems for the open queueing networks in heavy traffic of the type dealt with in the fundamental papers of Reiman ...then the covariance is precisely that obtained by Reiman [1] (with a different notation used there). It is evident from (4.4) and the cited... References: [1] M.I. Reiman, "Open queueing networks in heavy traffic", Math. of Operations Research, 9, 1984, p. 441-458. [2] J

  3. Observations on the Proper Orthogonal Decomposition

    NASA Technical Reports Server (NTRS)

    Berkooz, Gal

    1992-01-01

    The Proper Orthogonal Decomposition (P.O.D.), also known as the Karhunen-Loeve expansion, is a procedure for decomposing a stochastic field in an L(2)-optimal sense. It is used in diverse disciplines, from image processing to turbulence. Recently the P.O.D. has been receiving much attention as a tool for studying the dynamics of systems in infinite-dimensional space. This paper reviews the mathematical fundamentals of this theory. Also included are results on the span of the eigenfunction basis, a geometric corollary due to Chebyshev's inequality, and a relation between P.O.D. symmetry and ergodicity.
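
    The L(2)-optimal decomposition described above can be computed from snapshot data by a singular value decomposition; a minimal sketch on a synthetic two-mode field (the field itself is an illustrative assumption):

```python
import numpy as np

# Snapshot matrix: two coherent space-time structures plus weak noise
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
t = np.linspace(0, 1, 50)
U = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
     + 0.5 * np.outer(np.sin(3 * np.pi * x), np.sin(4 * np.pi * t))
     + 0.01 * rng.standard_normal((200, 50)))

# P.O.D. modes are the left singular vectors; squared singular values
# give the L2 "energy" captured by each mode
modes, s, _ = np.linalg.svd(U, full_matrices=False)
energy = s**2 / np.sum(s**2)

print(energy[:3])   # first two modes carry nearly all of the energy
```

    The optimality property is exactly this: no other basis of the same size captures more of the ensemble-averaged L2 energy than the leading P.O.D. modes.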

  4. Rotational strength of dye-helix complexes as studied by a potential model theory

    NASA Astrophysics Data System (ADS)

    Kamiya, Mamoru

    1988-03-01

    The fundamental features of the induced optical activity in dye-helix complexes are clarified by the trap potential model. The effect of the potential depth on the induced rotational strength is explained in terms of the relative magnitudes of the wave-phase and helix-phase variations in the path of an electron moving along a restricted helix segment just like an exciton trapped around a dye intercalation site. The potential parameters have been optimized so as to reproduce the ionic strength effect upon the rotational strengths induced in proflavine-DNA intercalation complexes.

  5. A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Rosu, Serban

    2011-09-01

    This article presents a new approach to some fundamental techniques for solving dynamic programming problems using functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.
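
    The functional-equation technique amounts to a backward Bellman recursion over the finite horizon. A toy sketch (the severity states, treat/wait actions, costs, and dynamics below are illustrative assumptions, not the article's model):

```python
# Backward recursion for V_t(s) = min_a [ cost(s,a) + V_{t+1}(next(s,a)) ]
T = 4                                  # number of decision stages
states, actions = range(3), (0, 1)     # severity levels 0..2; action 1 = treat

def step(s, a):                        # deterministic severity dynamics
    return max(s - 1, 0) if a else min(s + 1, 2)

def cost(s, a):                        # stage cost: severity plus unit treatment cost
    return s + a

V = [s for s in states]                # terminal cost V_T(s) = s
for t in range(T):                     # backward sweep: stages T-1, ..., 0
    newV, pol = [], []
    for s in states:
        q = {a: cost(s, a) + V[step(s, a)] for a in actions}
        a_star = min(q, key=q.get)
        newV.append(q[a_star])
        pol.append(a_star)
    V = newV

print(V, pol)   # -> [4, 5, 7] [0, 1, 1]: treat when severity is 1 or 2
```

    The same recursion structure carries over when stage costs and transitions encode actual treatment cash flows.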

  6. A charge optimized many-body potential for titanium nitride (TiN).

    PubMed

    Cheng, Y-T; Liang, T; Martinez, J A; Phillpot, S R; Sinnott, S B

    2014-07-02

    This work presents a new empirical, variable charge potential for TiN systems in the charge-optimized many-body potential framework. The potential parameters were determined by fitting them to experimental data for the enthalpy of formation, lattice parameters, and elastic constants of rocksalt structured TiN. The potential does a good job of describing the fundamental physical properties (defect formation and surface energies) of TiN relative to the predictions of first-principles calculations. This potential is used in classical molecular dynamics simulations to examine the interface of fcc-Ti(0 0 1)/TiN(0 0 1) and to characterize the adsorption of oxygen atoms and molecules on the TiN(0 0 1) surface. The results indicate that the potential is well suited to model TiN thin films and to explore the chemistry associated with their oxidation.

  7. Fundamentals of Coherent Synchrotron Radiation in Storage Rings

    NASA Astrophysics Data System (ADS)

    Sannibale, F.; Byrd, J. M.; Loftsdottir, A.; Martin, M. C.; Venturini, M.

    2004-05-01

    We present the fundamental concepts for producing stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The analysis includes distortion of bunch shape from the synchrotron radiation (SR), enhancing higher frequency coherent emission and limits to stable emission due to a microbunching instability excited by the SR. We use these concepts to optimize the performance of a source for CSR emission.

  8. Real-Time Mapping Using Stereoscopic Vision Optimization

    DTIC Science & Technology

    2005-03-01

    2.2.1 The Fundamental Matrix. The fundamental matrix (F) describes the relationship between a pair of 2D pictures of a 3D scene. This is...eight CCD cameras to compute a mesh model of the environment from a large number of overlapped 3D images. In [1,17], a range scanner is combined with a
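
    The fundamental matrix relating the two views can be estimated linearly from point correspondences via the classic eight-point algorithm; a sketch on synthetic, noise-free data (the camera geometry below is an illustrative assumption):

```python
import numpy as np

# Synthetic stereo pair: camera 1 is [I | 0]; camera 2 is rotated and translated
rng = np.random.default_rng(1)
theta = 0.3
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.1])

X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # points in front of both cameras
x1 = X / X[:, 2:3]                                     # homogeneous image points, view 1
Xc2 = X @ R.T + t
x2 = Xc2 / Xc2[:, 2:3]                                 # homogeneous image points, view 2

# Each correspondence gives one linear constraint x2^T F x1 = 0
A = np.array([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
_, _, vt = np.linalg.svd(A)
F = vt[-1].reshape(3, 3)

# Enforce the rank-2 property of a valid fundamental matrix
u, s, v = np.linalg.svd(F)
F = u @ np.diag([s[0], s[1], 0]) @ v

residual = np.abs(np.einsum('ij,jk,ik->i', x2, F, x1)).max()
```

    With real, noisy correspondences one would add Hartley normalization and a robust outlier step, but the epipolar constraint being solved is the same.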

  9. Particle Engineering of Excipients for Direct Compression: Understanding the Role of Material Properties.

    PubMed

    Mangal, Sharad; Meiser, Felix; Morton, David; Larson, Ian

    2015-01-01

    Tablets represent the preferred and most commonly dispensed pharmaceutical dosage form for administering active pharmaceutical ingredients (APIs). Minimizing the cost of goods and improving manufacturing output efficiency has motivated companies to use direct compression as a preferred method of tablet manufacturing. Excipients dictate the success of direct compression, notably by optimizing powder formulation compactability and flow, thus there has been a surge in creating excipients specifically designed to meet these needs for direct compression. Greater scientific understanding of tablet manufacturing coupled with effective application of the principles of material science and particle engineering has resulted in a number of improved direct compression excipients. Despite this, significant practical disadvantages of direct compression remain relative to granulation, and this is partly due to the limitations of direct compression excipients. For instance, in formulating high-dose APIs, a much higher level of excipient is required relative to wet or dry granulation and so tablets are much bigger. Creating excipients to enable direct compression of high-dose APIs requires the knowledge of the relationship between fundamental material properties and excipient functionalities. In this paper, we review the current understanding of the relationship between fundamental material properties and excipient functionality for direct compression.

  10. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
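
    The matrix-free idea can be illustrated on a toy problem (this sketch is not the thesis's optimizer): an augmented-Lagrangian loop that needs only gradient evaluations, with no Hessian or constraint-Jacobian factorizations.

```python
import numpy as np

# Toy problem: minimize f(x) = ||x||^2 subject to c(x) = x0 + x1 - 1 = 0.
# Only gradient evaluations are used, mimicking the matrix-free setting.
def f_grad(x):
    return 2 * x

def c(x):
    return x[0] + x[1] - 1.0

def c_grad(x):
    return np.array([1.0, 1.0])

x, lam, rho = np.zeros(2), 0.0, 10.0
for outer in range(20):
    for inner in range(200):                       # matrix-free inner minimization
        g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
        x -= 0.01 * g
    lam += rho * c(x)                              # first-order multiplier update

# This problem's solution is x = (0.5, 0.5) with multiplier lambda = -1
```

    In the thesis setting, the inner gradients come from the adjoint method, so each iteration touches only matrix-vector products rather than assembled Jacobians.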

  12. Rationally optimized cryopreservation of multiple mouse embryonic stem cell lines: I--Comparative fundamental cryobiology of multiple mouse embryonic stem cell lines and the implications for embryonic stem cell cryopreservation protocols.

    PubMed

    Kashuba, Corinna M; Benson, James D; Critser, John K

    2014-04-01

    The post-thaw recovery of mouse embryonic stem cells (mESCs) is often assumed to be adequate with current methods. However, as this publication shows, the recovery of viable cells actually varies significantly with genetic background. There is therefore a need to improve the efficiency and reduce the variability of current mESC cryopreservation methods. To address this need, we employed the principles of fundamental cryobiology to improve the cryopreservation protocol of four mESC lines from different genetic backgrounds (BALB/c, CBA, FVB, and 129R1 mESCs) through a comparative study characterizing the membrane permeability characteristics and membrane integrity osmotic tolerance limits of each cell line. In the companion paper, these values were used to predict optimal cryoprotectants, cooling rates, warming rates, and plunge temperatures, and these predicted optimal protocols were then validated against standard freezing protocols. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
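
    For a small alphabet, the DIB objective H(T) - beta * I(T;Y) can be minimized by brute force over all deterministic encoders, illustrating how the optimal encoder becomes a hard clustering (the joint distribution below is an illustrative assumption; the paper itself proposes an iterative algorithm):

```python
import numpy as np
from itertools import product

def H(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Tiny joint distribution: uniform p(x), binary y with p(y=1|x) as below
p_x = np.full(4, 0.25)
p_y1_given_x = np.array([0.9, 0.8, 0.2, 0.1])
p_xy = np.stack([p_x * (1 - p_y1_given_x), p_x * p_y1_given_x], axis=1)

beta, best = 5.0, None
for f in product(range(2), repeat=4):            # all deterministic encoders X -> T
    f = np.array(f)
    p_ty = np.zeros((2, 2))
    for t in range(2):
        p_ty[t] = p_xy[f == t].sum(axis=0)       # p(t, y)
    p_t = p_ty.sum(axis=1)
    I_ty = H(p_t) + H(p_ty.sum(axis=0)) - H(p_ty.ravel())
    cost = H(p_t) - beta * I_ty                  # the DIB objective
    if best is None or cost < best[0]:
        best = (cost, tuple(f))

print(best)   # optimal encoder groups the two y-likely inputs apart from the others
```

    The winning encoder merges inputs with similar predictive distributions p(y|x), which is exactly the compression-for-prediction behavior the DIB cost rewards.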

  14. Use of CFD for static sampling hood design: An example for methane flux assessment on landfill surfaces.

    PubMed

    Lucernoni, Federico; Rizzotto, Matteo; Tapparo, Federica; Capelli, Laura; Sironi, Selena; Busini, Valentina

    2016-11-01

    The work focuses on the principles for the design of a specific static hood and on the definition of an optimal sampling procedure for the assessment of landfill gas (LFG) surface emissions. This is carried out by means of computational fluid dynamics (CFD) simulations to investigate the fluid dynamics conditions of the hood. The study proves that understanding the fluid dynamic conditions is fundamental in order to understand the sampling results and correctly interpret the measured concentration values by relating them to a suitable LFG emission model, and therefore to estimate emission rates. For this reason, CFD is a useful tool for the design and evaluation of sampling systems, among others, to verify the fundamental hypotheses on which the mass balance for the sampling hood is defined. The procedure here discussed, which is specific for the case of the investigated landfill, can be generalized to be applied also to different scenarios, where hood sampling is involved. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Direct measurement and characterization of active photosynthesis zones inside wastewater remediating and biofuel producing microalgal biofilms.

    PubMed

    Bernstein, Hans C; Kesaano, Maureen; Moll, Karen; Smith, Terence; Gerlach, Robin; Carlson, Ross P; Miller, Charles D; Peyton, Brent M; Cooksey, Keith E; Gardner, Robert D; Sims, Ronald C

    2014-03-01

    Microalgal biofilm based technologies are of keen interest due to their high biomass concentrations and ability to utilize light and CO2. While photoautotrophic biofilms have long been used for wastewater remediation, biofuel production represents a relatively new and under-represented focus area. However, the direct measurement and characterization of fundamental parameters required for industrial control are challenging due to biofilm heterogeneity. This study evaluated oxygenic photosynthesis and respiration on two distinct microalgal biofilms cultured using a novel rotating algal biofilm reactor operated at field- and laboratory-scales. Clear differences in oxygenic photosynthesis and respiration were observed based on different culturing conditions, microalgal composition, light intensity and nitrogen availability. The cultures were also evaluated as potential biofuel synthesis strategies. Nitrogen depletion was not found to have the same effect on lipid accumulation compared to traditional planktonic microalgal studies. Physiological characterizations of these microalgal biofilms identify fundamental parameters needed to understand and control process optimization. Published by Elsevier Ltd.

  16. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  17. Optimal design strategy of switching converters employing current injected control

    NASA Astrophysics Data System (ADS)

    Lee, F. C.; Fang, Z. D.; Lee, T. H.

    1985-01-01

    This paper analyzes a buck/boost regulator employing current-injected control (CIC). It reveals the complex interactions between the dc loop and the current-injected loop and underlines the fundamental principle that governs the loop gain determination. Three commonly used compensation techniques are compared. The integral and lead/lag compensation are shown to be most desirable for performance optimization and stability.

  18. Age grouping to optimize augmentation success.

    PubMed

    Gordon, Robert W

    2010-05-01

    This article has described the different age groups that present for noninvasive injectable lip and perioral augmentation, as well as the breakdown of 3 subgroups within the 4 general age groups. With a fundamental understanding of these presenting groups and subgroups, the practicing augmenter will be better able to plan treatment and educate the patient on realistic and optimal aesthetic outcomes.

  19. Fundamental Study of the Delivery of Nanoiron to DNAPL Source Zones in Naturally Heterogeneous Field Systems

    DTIC Science & Technology

    2012-09-01

    Published textbooks, book chapters, and theses...optimize the rate and method of injection (e.g. direct push, hydraulic fracture), or to optimize the nanoiron properties for specific site geology...expected that higher injection rates will increase the radius of influence by decreasing the efficiency of all three attachment mechanisms (diffusion

  20. A look at ligand binding thermodynamics in drug discovery.

    PubMed

    Claveria-Gimeno, Rafael; Vega, Sonia; Abian, Olga; Velazquez-Campoy, Adrian

    2017-04-01

    Drug discovery is a challenging endeavor requiring the interplay of many different research areas. Gathering information on ligand binding thermodynamics may help considerably in reducing risk within a high-uncertainty scenario, allowing early rejection of flawed compounds and pushing forward optimal candidates. In particular, the free energy, the enthalpy, and the entropy of binding provide fundamental information on the intermolecular forces driving the interaction. Areas covered: The authors review the current status and recent developments in the application of ligand binding thermodynamics in drug discovery. The thermodynamic binding profile (Gibbs energy, enthalpy, and entropy of binding) can be used for lead selection and optimization (binding enthalpy, selectivity, and adaptability). Expert opinion: Binding thermodynamics provides fundamental information on the forces driving the formation of the drug-target complex. It has been widely accepted that binding thermodynamics may be used as a decision criterion along the ligand optimization process in drug discovery and development. In particular, the binding enthalpy may be used as a guide when selecting and optimizing compounds over a set of potential candidates. However, this has recently been called into question on the grounds of certain difficulties and in the light of certain experimental examples.

  1. Achieving Optimal Quantum Acceleration of Frequency Estimation Using Adaptive Coherent Control.

    PubMed

    Naghiloo, M; Jordan, A N; Murch, K W

    2017-11-03

    Precision measurements of frequency are critical to accurate time keeping and are fundamentally limited by quantum measurement uncertainties. While for time-independent quantum Hamiltonians the uncertainty of any parameter scales at best as 1/T, where T is the duration of the experiment, recent theoretical works have predicted that explicitly time-dependent Hamiltonians can yield a 1/T^{2} scaling of the uncertainty for an oscillation frequency. This quantum acceleration in precision requires coherent control, which is generally adaptive. We experimentally realize this quantum improvement in frequency sensitivity with superconducting circuits, using a single transmon qubit. With optimal control pulses, the theoretically ideal frequency precision scaling is reached for times shorter than the decoherence time. This result demonstrates a fundamental quantum advantage for frequency estimation.
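
    A back-of-envelope illustration of the two scalings quoted above (all numbers are illustrative): with a fixed phase-readout uncertainty, a phase that accumulates linearly in T gives a frequency error falling as 1/T, while a control scheme whose accumulated phase grows quadratically in T gives an error falling as 1/T^2.

```python
import numpy as np

dphi = 0.01                         # assumed fixed phase-readout uncertainty
T = np.array([1.0, 2.0, 4.0, 8.0])
domega_linear = dphi / T            # phi = omega*T  =>  domega ~ 1/T
domega_quad = 2.0 * dphi / T**2     # phase growing as T^2  =>  domega ~ 1/T^2

# Doubling T halves the error in the first case but quarters it in the second
ratios_linear = domega_linear[:-1] / domega_linear[1:]
ratios_quad = domega_quad[:-1] / domega_quad[1:]
print(ratios_linear, ratios_quad)   # [2. 2. 2.] [4. 4. 4.]
```

    The experiment's contribution is reaching the quadratic case physically, via adaptive coherent control of a transmon qubit, up to the decoherence time.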

  2. Real-time in situ study of femtosecond-laser-induced periodic structures on metals by linear and nonlinear optics.

    PubMed

    Zhang, Jihua; He, Yizhuo; Lam, Billy; Guo, Chunlei

    2017-08-21

    Femtosecond-laser surface structuring on metals is investigated in real time by both fundamental and second-harmonic generation (SHG) signals. The onset of surface modification and its progress can be monitored by both the fundamental and SHG probes. However, the dynamics of femtosecond-laser-induced periodic surface structure (FLIPSS) formation can only be revealed by the SHG probe, not the fundamental, because of the higher sensitivity of SHG to structural geometry on metal. Our technique provides a simple and effective way to monitor the surface modification and FLIPSS formation thresholds and allows us to obtain the optimal FLIPSS for SHG enhancement.

  3. Multiobjective optimization of combinatorial libraries.

    PubMed

    Agrafiotis, D K

    2002-01-01

    Combinatorial chemistry and high-throughput screening have caused a fundamental shift in the way chemists contemplate experiments. Designing a combinatorial library is a controversial art that involves a heterogeneous mix of chemistry, mathematics, economics, experience, and intuition. Although there seems to be little agreement as to what constitutes an ideal library, one thing is certain: only one property or measure seldom defines the quality of the design. In most real-world applications, a good experiment requires the simultaneous optimization of several, often conflicting, design objectives, some of which may be vague and uncertain. In this paper, we discuss a class of algorithms for subset selection rooted in the principles of multiobjective optimization. Our approach is to employ an objective function that encodes all of the desired selection criteria, and then use a simulated annealing or evolutionary approach to identify the optimal (or a nearly optimal) subset from among the vast number of possibilities. Many design criteria can be accommodated, including diversity, similarity to known actives, predicted activity and/or selectivity determined by quantitative structure-activity relationship (QSAR) models or receptor binding models, enforcement of certain property distributions, reagent cost and availability, and many others. The method is robust, convergent, and extensible, offers the user full control over the relative significance of the various objectives in the final design, and permits the simultaneous selection of compounds from multiple libraries in full- or sparse-array format.
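
    A minimal sketch of the simulated-annealing subset selection described above, with a weighted-sum objective (the two scoring terms below, a "predicted activity" to maximize and a "reagent cost" to penalize, are illustrative stand-ins for the many criteria the paper lists):

```python
import math
import random

random.seed(0)
N, k = 100, 10                                   # candidate pool size, subset size
value = [random.random() for _ in range(N)]      # e.g. predicted activity
cost = [random.random() for _ in range(N)]       # e.g. reagent cost

def score(subset):
    # Weighted-sum scalarization of the competing design objectives
    return sum(value[i] - 0.5 * cost[i] for i in subset)

current = set(random.sample(range(N), k))
best, best_s = set(current), score(current)
temp = 1.0
for _ in range(5000):
    # Propose swapping one member for one non-member (keeps |subset| = k)
    out = random.choice(sorted(current))
    inn = random.choice(sorted(set(range(N)) - current))
    cand = (current - {out}) | {inn}
    delta = score(cand) - score(current)
    if delta > 0 or random.random() < math.exp(delta / temp):
        current = cand
        if score(current) > best_s:
            best, best_s = set(current), score(current)
    temp *= 0.999                                # geometric cooling schedule
```

    Additional objectives (diversity, property distributions, multi-library sparse arrays) enter simply as extra terms in the scalarized score, which is what makes the annealing formulation extensible.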

  4. The application of defaults to optimize parents' health-based choices for children.

    PubMed

    Loeb, Katharine L; Radnitz, Cynthia; Keller, Kathleen; Schwartz, Marlene B; Marcus, Sue; Pierson, Richard N; Shannon, Michael; DeLaurentis, Danielle

    2017-06-01

    Optimal defaults is a compelling model from behavioral economics and the psychology of human decision-making, designed to shape or "nudge" choices in a positive direction without fundamentally restricting options. The current study aimed to test the effectiveness of optimal (less obesogenic) defaults and parent empowerment priming on health-based decisions with parent-child (ages 3-8) dyads in a community-based setting. Two proof-of-concept experiments (one on breakfast food selections and one on activity choice) were conducted comparing the main and interactive effects of optimal versus suboptimal defaults, and parent empowerment priming versus neutral priming, on parents' health-related choices for their children. We hypothesized that in each experiment, making the default option more optimal will lead to more frequent health-oriented choices, and that priming parents to be the ultimate decision-makers on behalf of their child's health will potentiate this effect. Results show that in both studies, default condition, but not priming condition or the interaction between default and priming, significantly predicted choice (healthier vs. less healthy option). There was also a significant main effect for default condition (and no effect for priming condition or the interaction term) on the quantity of healthier food children consumed in the breakfast experiment. These pilot studies demonstrate that optimal defaults can be practicably implemented to improve parents' food and activity choices for young children. Results can inform policies and practices pertaining to obesogenic environmental factors in school, restaurant, and home environments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Multi-Parameter Aerosol Scattering Sensor

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Fischer, David G.

    2011-01-01

    This work relates to the development of sensors that measure specific aerosol properties. These properties are in the form of integrated moment distributions, i.e., total surface area, total mass, etc., or mathematical combinations of these moment distributions. Specifically, the innovation involves two fundamental features: a computational tool to design and optimize such sensors and the embodiment of these sensors in actual practice. The measurement of aerosol properties is a problem of general interest. Applications include, but are not limited to, environmental monitoring, assessment of human respiratory health, fire detection, emission characterization and control, and pollutant monitoring. The objectives for sensor development include increased accuracy and/or dynamic range, the inclusion in a single sensor of the ability to measure multiple aerosol properties, and developing an overall physical package that is rugged, compact, and low in power consumption, so as to enable deployment in harsh or confined field applications, and as distributed sensor networks. Existing instruments for this purpose include scattering photometers, direct-reading mass instruments, Beta absorption devices, differential mobility analyzers, and gravitational samplers. The family of sensors reported here is predicated on the interaction of light and matter; specifically, the scattering of light from distributions of aerosol particles. The particular arrangement of the sensor, e.g. the wavelength(s) of incident radiation, the number and location of optical detectors, etc., can be derived so as to optimize the sensor response to aerosol properties of practical interest. A key feature of the design is the potential embodiment as an extremely compact, integrated microsensor package. This is of fundamental importance, as it enables numerous previously inaccessible applications. The embodiment of these sensors is inherently low maintenance and high reliability by design. 
The novel features include the computational framework that allows optimization for specific applications, and a physical embodiment that affords the construction of a compact, durable, and reliable integrated package. The advantages are increased accuracy relative to existing instruments, and the new applications enabled by the physical attributes of the resulting configuration.

  6. Relativities of fundamentality

    NASA Astrophysics Data System (ADS)

    McKenzie, Kerry

    2017-08-01

    S-dualities have been held to have radical implications for our metaphysics of fundamentality. In particular, it has been claimed that they make the fundamentality status of a physical object theory-relative in an important new way. But what physicists have had to say on the issue has not been clear or consistent, and in particular seems to be ambiguous between whether S-dualities demand an anti-realist interpretation of fundamentality talk or merely a revised realism. This paper is an attempt to bring some clarity to the matter. After showing that even antecedently familiar fundamentality claims are true only relative to a raft of metaphysical, physical, and mathematical assumptions, I argue that the relativity of fundamentality inherent in S-duality nevertheless represents something new, and that part of the reason for this is that it has both realist and anti-realist implications for fundamentality talk. I close by discussing the broader significance that S-dualities have for structuralist metaphysics and for fundamentality metaphysics more generally.

  7. A Scientific Rationale to Improve Resistance Training Prescription in Exercise Oncology.

    PubMed

    Fairman, Ciaran M; Zourdos, Michael C; Helms, Eric R; Focht, Brian C

    2017-08-01

To date, the prevailing evidence in the field of exercise oncology supports the safety and efficacy of resistance training to attenuate many oncology treatment-related adverse effects, such as risk for cardiovascular disease, increased fatigue, and diminished physical functioning and quality of life. Moreover, findings in the extant literature supporting the benefits of exercise for survivors of and patients with cancer have resulted in the release of exercise guidelines from several international agencies. However, despite research progression and international recognition, current exercise oncology-based exercise prescriptions remain relatively basic and underdeveloped, particularly in regard to resistance training. Recent publications have called for a more precise manipulation of training variables such as volume, intensity, and frequency (i.e., periodization), given the large heterogeneity of a cancer population, to truly optimize clinically relevant patient-reported outcomes. Indeed, increased attention to integrating fundamental principles of exercise physiology into the exercise prescription process could optimize the safety and efficacy of resistance training during cancer care. The purpose of this article is to give an overview of the current state of resistance training prescription and discuss novel methods that can contribute to improving approaches to exercise prescription. We hope this article may facilitate further evaluation of best practice regarding resistance training prescription, monitoring, and modification to ultimately optimize the efficacy of integrating resistance training as a supportive care intervention for survivors of and patients with cancer.

  8. Temperature Scaling Law for Quantum Annealing Optimizers.

    PubMed

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-15

Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner, but possibly as a power law, with problem size. We corroborate our results by experiment and simulations and discuss the implications for practical annealers.
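The need for a shrinking temperature can be illustrated with a toy two-level calculation (the polynomial gap model and target occupation below are assumptions of this sketch, not the paper's derivation): if the relevant energy gap closes with problem size, the largest temperature that keeps the Boltzmann ground-state occupation above a threshold must shrink in proportion to the gap.

```python
import math

def max_temperature(gap, p_ground=0.9):
    """Largest temperature (units of k_B = 1) at which a two-level system
    with energy gap `gap` keeps Boltzmann ground-state occupation
    p = 1/(1 + exp(-gap/T)) at or above p_ground."""
    # p >= p_ground  <=>  T <= gap / ln(p_ground / (1 - p_ground))
    return gap / math.log(p_ground / (1.0 - p_ground))

def gap_poly(n, c=1.0, a=1.0):
    """Toy spectral gap closing polynomially with problem size n (assumed)."""
    return c * n ** (-a)

# the admissible temperature budget shrinks as problems grow
for n in (10, 100, 1000):
    print(n, max_temperature(gap_poly(n)))
```

Under this toy model the admissible temperature tracks the gap, consistent with the abstract's conclusion that a fixed-temperature annealer cannot remain a competitive optimizer as problem size scales.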

  9. Multi-Disciplinary Analysis and Optimization Frameworks

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2009-01-01

Since July 2008, the Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed one major milestone, the "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08), and is completing the Generation 1 Framework validation milestone, which is due December 2008. Included in the presentation are: details of progress on developing the Open MDAO framework, modeling and testing the Generation 1 Framework, progress toward establishing partnerships with external parties, and discussion of additional potential collaborations.

  10. Multidisciplinary Analysis and Optimization Generation 1 and Next Steps

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2008-01-01

The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained including documentation available, potential partner collaborations, and next steps in FY09.

  11. Thermodynamic metrics and optimal paths.

    PubMed

    Sivak, David A; Crooks, Gavin E

    2012-05-11

A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
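The optimal-protocol property can be sketched numerically (the scalar friction below is a made-up toy, not a derived tensor): in the linear-response regime the excess dissipation behaves like the integral of ζ(λ)·(dλ/dt)² over the protocol, which for a fixed duration is minimized by traversing the control path at constant thermodynamic velocity, i.e., covering equal increments of thermodynamic length per unit time.

```python
import numpy as np

def optimal_schedule(zeta, lam0, lam1, n_steps=1000):
    """Optimal control schedule in the linear-response picture: excess work
    ~ integral of zeta(lam) * (dlam/dt)^2 dt is minimized (fixed duration)
    by moving so that dlam/dt is proportional to 1/sqrt(zeta(lam))."""
    lam = np.linspace(lam0, lam1, n_steps)
    # cumulative thermodynamic length s(lam) = integral of sqrt(zeta) dlam
    ds = np.sqrt(zeta(lam))
    s = np.concatenate(
        ([0.0], np.cumsum(0.5 * (ds[1:] + ds[:-1]) * np.diff(lam))))
    # re-sample lambda at equally spaced arc length -> optimal lam(t)
    s_uniform = np.linspace(0.0, s[-1], n_steps)
    return np.interp(s_uniform, s, lam)

# toy friction that grows with lambda: the optimal protocol slows down
# (takes smaller control steps) where the friction is large
sched = optimal_schedule(lambda x: 1.0 + 10.0 * x**2, 0.0, 1.0)
```

For the assumed friction the schedule still runs monotonically from 0 to 1 but takes visibly smaller steps near λ = 1, where ζ is largest.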

  12. 3D Data Acquisition Platform for Human Activity Understanding

    DTIC Science & Technology

    2016-03-02

address fundamental research problems of representation and invariant description of 3D data, human motion modeling and applications of human activity analysis, and computational optimization of large-scale 3D data.

  14. A quantitative theory of gamma synchronization in macaque V1.

    PubMed

    Lowet, Eric; Roberts, Mark J; Peter, Alina; Gips, Bart; De Weerd, Peter

    2017-08-31

    Gamma-band synchronization coordinates brief periods of excitability in oscillating neuronal populations to optimize information transmission during sensation and cognition. Commonly, a stable, shared frequency over time is considered a condition for functional neural synchronization. Here, we demonstrate the opposite: instantaneous frequency modulations are critical to regulate phase relations and synchronization. In monkey visual area V1, nearby local populations driven by different visual stimulation showed different gamma frequencies. When similar enough, these frequencies continually attracted and repulsed each other, which enabled preferred phase relations to be maintained in periods of minimized frequency difference. Crucially, the precise dynamics of frequencies and phases across a wide range of stimulus conditions was predicted from a physics theory that describes how weakly coupled oscillators influence each other's phase relations. Hence, the fundamental mathematical principle of synchronization through instantaneous frequency modulations applies to gamma in V1 and is likely generalizable to other brain regions and rhythms.
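The weakly coupled oscillator physics referenced here is commonly reduced to the Adler equation for the phase difference θ between two oscillators with detuning Δω and coupling ε; the sketch below (a generic textbook model, not the specific model fitted in the paper) shows phase locking for |Δω| < ε and phase drift otherwise.

```python
import numpy as np

def adler_phase(delta_omega, eps, theta0=0.0, dt=1e-3, t_max=50.0):
    """Euler integration of the Adler equation
    d(theta)/dt = delta_omega - eps * sin(theta),
    the canonical model for the phase difference of two weakly coupled
    oscillators with frequency detuning delta_omega and coupling eps."""
    n = int(t_max / dt)
    theta = np.empty(n)
    theta[0] = theta0
    for i in range(1, n):
        theta[i] = theta[i - 1] + dt * (delta_omega - eps * np.sin(theta[i - 1]))
    return theta

# |detuning| < coupling: phases lock at sin(theta*) = delta_omega / eps
locked = adler_phase(delta_omega=0.5, eps=1.0)
# |detuning| > coupling: the phase difference drifts without bound
drifting = adler_phase(delta_omega=2.0, eps=1.0)
```

In the locked regime the phase relation settles to a fixed preferred value, while for larger detuning the phase difference precesses, mirroring the attract/repulse dynamics the abstract describes for nearby gamma frequencies.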

  15. A quantitative theory of gamma synchronization in macaque V1

    PubMed Central

    Roberts, Mark J; Peter, Alina; Gips, Bart; De Weerd, Peter

    2017-01-01

    Gamma-band synchronization coordinates brief periods of excitability in oscillating neuronal populations to optimize information transmission during sensation and cognition. Commonly, a stable, shared frequency over time is considered a condition for functional neural synchronization. Here, we demonstrate the opposite: instantaneous frequency modulations are critical to regulate phase relations and synchronization. In monkey visual area V1, nearby local populations driven by different visual stimulation showed different gamma frequencies. When similar enough, these frequencies continually attracted and repulsed each other, which enabled preferred phase relations to be maintained in periods of minimized frequency difference. Crucially, the precise dynamics of frequencies and phases across a wide range of stimulus conditions was predicted from a physics theory that describes how weakly coupled oscillators influence each other’s phase relations. Hence, the fundamental mathematical principle of synchronization through instantaneous frequency modulations applies to gamma in V1 and is likely generalizable to other brain regions and rhythms. PMID:28857743

  16. I-SWOT as instrument to individually optimize therapy of thoracoabdominal aortic aneurysms: Effective, norm-compliant and meeting the needs.

    PubMed

    Sachweh, A; von Kodolitsch, Y; Kölbel, T; Larena-Avellaneda, A; Wipper, S; Bernhardt, A M; Girdauskas, E; Detter, C; Reichenspurner, H; Blankart, C R; Debus, E S

    2017-01-01

Guidelines summarize medical evidence; they identify the most efficient therapy under study conditions and recommend this therapy for use. The physician then faces the challenge of translating a therapy that is efficient under laboratory conditions to a patient who is an individual person. To accomplish this task the physician has to make sure that (I) the ideal typical therapy is applicable and effective in this individual patient, taking the patient's special features into consideration, that (II) the therapy is compliant with the norm, including guidelines, laws and ethical requirements (conformity), and that (III) the therapy meets the patient's needs. How can physicians, together with their patients, translate the medical evidence into an individually optimized therapy? At the German Aortic Center in Hamburg we use I-SWOT as an instrument to identify such an individually optimized therapy. With I-SWOT, we present an instrument with which we have developed an (I) efficient, (II) conform and (III) needs-oriented therapeutic strategy for individual patients. I-SWOT cross-tabulates strengths (S) and weaknesses (W) related to therapy with opportunities (O) and threats (T) related to individual patients. This I-SWOT matrix identifies four fundamental types of strategy: "SO", maximizing strengths and opportunities; "WT", minimizing weaknesses and threats; "WO", minimizing weaknesses and maximizing opportunities; and "ST", maximizing strengths and minimizing threats. We discuss the case of a patient with an asymptomatic thoracoabdominal aneurysm to show how I-SWOT is used to identify an individually optimized therapy strategy.

  17. Shifts in growth strategies reflect tradeoffs in cellular economics

    PubMed Central

    Molenaar, Douwe; van Berlo, Rogier; de Ridder, Dick; Teusink, Bas

    2009-01-01

    The growth rate-dependent regulation of cell size, ribosomal content, and metabolic efficiency follows a common pattern in unicellular organisms: with increasing growth rates, cell size and ribosomal content increase and a shift to energetically inefficient metabolism takes place. The latter two phenomena are also observed in fast growing tumour cells and cell lines. These patterns suggest a fundamental principle of design. In biology such designs can often be understood as the result of the optimization of fitness. Here we show that in basic models of self-replicating systems these patterns are the consequence of maximizing the growth rate. Whereas most models of cellular growth consider a part of physiology, for instance only metabolism, the approach presented here integrates several subsystems to a complete self-replicating system. Such models can yield fundamentally different optimal strategies. In particular, it is shown how the shift in metabolic efficiency originates from a tradeoff between investments in enzyme synthesis and metabolic yields for alternative catabolic pathways. The models elucidate how the optimization of growth by natural selection shapes growth strategies. PMID:19888218

  18. Algorithms for Data Intensive Applications on Intelligent and Smart Memories

    DTIC Science & Technology

    2003-03-01

editors). Parallel Algorithms and Architectures. North Holland, 1986. [8] P. Diniz. USC ISI, Personal Communication, March 2001. [9] M. Frigo, C. E. ...hierarchy as well as the Translation Lookaside Buffer (TLB) affect the effectiveness of cache-friendly optimizations. These penalties vary among...processors and cause large variations in the effectiveness of cache performance optimizations. The area of graph problems is fundamental in a wide variety of

  19. Space Shuttle Main Engine performance analysis

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1993-01-01

    For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' 
By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both measurement and balance uncertainty estimates. The reconciler attempts to select operational parameters that minimize the difference between theoretical prediction and observation. Selected values are further constrained to fall within measurement uncertainty limits and to satisfy fundamental physical relations (mass conservation, energy conservation, pressure drop relations, etc.) within uncertainty estimates for all SSME subsystems. The parameter selection problem described above is a traditional nonlinear programming problem. The reconciler employs a mixed penalty method to determine optimum values of SSME operating parameters associated with this problem formulation.
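A minimal sketch of the reconciliation idea, with invented flow measurements, uncertainties, and a single linear mass-balance relation (the real reconciler spans all SSME subsystems and imposes hard uncertainty limits rather than only a soft penalty):

```python
import numpy as np
from scipy.optimize import minimize

def reconcile(measured, sigma, balance, mu=1e4):
    """Toy reconciliation: choose operating parameters x that stay close to
    the measurements (weighted by measurement uncertainty sigma) while
    satisfying a linear balance relation balance @ x = 0 (e.g. mass
    conservation at a junction), enforced here with a quadratic penalty."""
    measured = np.asarray(measured, float)
    sigma = np.asarray(sigma, float)
    balance = np.asarray(balance, float)

    def objective(x):
        misfit = np.sum(((x - measured) / sigma) ** 2)  # stay near data
        violation = balance @ x                          # physics residual
        return misfit + mu * violation ** 2              # mixed penalty

    return minimize(objective, measured, method="BFGS").x

# two inlet flows and one outlet flow; mass balance: x0 + x1 - x2 = 0
x = reconcile(measured=[10.2, 5.1, 14.9], sigma=[0.1, 0.1, 0.1],
              balance=[1.0, 1.0, -1.0])
```

The raw measurements are inconsistent by 0.4 flow units; the reconciled values spread that discrepancy across the three measurements in proportion to their uncertainties while driving the balance residual nearly to zero.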

  20. Fundamental Quantum 1/F Noise in Ultrasmall Semiconductor Devices and Their Optimal Design Principles

    DTIC Science & Technology

    1988-05-31

Hooge parameter. 2. 1/f Noise of the Recombination Current Generated in the Depletion Region. The quantum 1/f ...theory. There are two forms of quantum 1/f noise. In the first place C~ and Cn4 p n to quantum 1/f noise theory. This would yield Hooge parameters S... Fundamental Quantum 1/f Noise in Ultrasmall Semiconductor Devices and Their Optimal Design Principles. 12. PERSONAL AUTHOR(S): Handel, Peter H. (Principal investigator) 13a. TYPE

  1. Dose optimization of total or partial skin electron irradiation by thermoluminescent dosimetry.

    PubMed

    Schüttrumpf, Lars; Neumaier, Klement; Maihoefer, Cornelius; Niyazi, Maximilian; Ganswindt, Ute; Li, Minglun; Lang, Peter; Reiner, Michael; Belka, Claus; Corradini, Stefanie

    2018-05-01

Due to the complex surface of the human body, total or partial skin irradiation using large electron fields is challenging. The aim of the present study was to quantify the magnitude of dose optimization required after the application of standard fields. Total skin electron irradiation (TSEI) was applied using the Stanford technique with six dual-fields. Patients presenting with localized lesions were treated with partial skin electron irradiation (PSEI) using large electron fields, which were individually adapted. In order to verify and validate the dose distribution, in vivo dosimetry with thermoluminescent dosimeters (TLD) was performed during the first treatment fraction to detect potential dose heterogeneity and to allow for an individual dose optimization with adjustment of the monitor units (MU). Between 1984 and 2017, a total of 58 patients were treated: 31 patients received TSEI using 12 treatment fields, while 27 patients underwent PSEI and were treated with 4-8 treatment fields. After evaluation of the dosimetric results, an individual dose optimization was necessary in 21 patients. Of these, 7 patients received TSEI (7/31). Monitor units needed to be corrected by a mean value of 117 MU (±105, range 18-290) uniformly for all 12 treatment fields, corresponding to a mean relative change of 12% of the prescribed MU. In comparison, the other 14 patients received PSEI (14/27) and the mean adjustment of monitor units was 282 MU (±144, range 59-500) to single or multiple fields, corresponding to a mean relative change of 22% of the prescribed MU. A second dose optimization to obtain a satisfying dose at the prescription point was needed in 5 patients. Thermoluminescent dosimetry allows an individual dose optimization in TSEI and PSEI to enable a reliable adjustment of the MUs to obtain the prescription dose. Especially in PSEI, in vivo dosimetry is of fundamental importance.

  2. Guidelines on CV networking information flow optimization for Texas.

    DOT National Transportation Integrated Search

    2017-03-01

    Recognizing the fundamental role of information flow in future transportation applications, the research team investigated the quality and security of information flow in the connected vehicle (CV) environment. The research team identified key challe...

  3. Tunable Catalysis of Water to Peroxide with Anionic, Cationic, and Neutral Atomic Au, Ag, Pd, Rh, and Os

    NASA Astrophysics Data System (ADS)

    Suggs, K.; Kiros, F.; Tesfamichael, A.; Felfli, Z.; Msezane, A. Z.

    2015-05-01

    Fundamental anionic, cationic, and neutral atomic metal predictions utilizing density functional theory calculations validate the recent discovery identifying the interplay between Regge resonances and Ramsauer-Townsend minima obtained through complex angular momentum analysis as the fundamental atomic mechanism underlying nanoscale catalysis. Here we investigate the optimization of the catalytic behavior of Au, Ag, Pd, Rh, and Os atomic systems via polarization effects and conclude that anionic atomic systems are optimal and therefore ideal for catalyzing the oxidation of water to peroxide, with anionic Os being the best candidate. The discovery that cationic systems increase the transition energy barrier in the synthesis of peroxide could be important as inhibitors in controlling and regulating catalysis. These findings usher in a fundamental and comprehensive atomic theoretical framework for the generation of tunable catalytic systems. The ultimate aim is to design giant atomic catalysts and sensors, in the context of the recently synthesized tri-metal Ag@Au@Pt and bimetal Ag@Au nanoparticles for greatly enhanced plasmonic properties and improved chemical stability for chemical and biological sensing. Research was supported by U.S. DOE Office of Basic Energy Sciences.

  4. Fundamental principles in periodontal plastic surgery and mucosal augmentation--a narrative review.

    PubMed

    Burkhardt, Rino; Lang, Niklaus P

    2014-04-01

To provide a narrative review of the current literature elaborating on fundamental principles of periodontal plastic surgical procedures. Based on a presumptive outline of the narrative review, MeSH terms have been used to search the relevant literature electronically in the PubMed and Cochrane Collaboration databases. Where possible, systematic reviews were included. The review is divided into three phases associated with periodontal plastic surgery: a) pre-operative phase, b) surgical procedures and c) post-surgical care. The surgical procedures were discussed in the light of a) flap design and preparation, b) flap mobilization and c) flap adaptation and stabilization. Pre-operative paradigms include optimal plaque control and smoking counselling. Fundamental principles in surgical procedures address basic knowledge in anatomy and vascularity, leading to novel appropriate flap designs with papilla preservation. Flap mobilization based on releasing incisions can be performed up to 5 mm. Flap adaptation and stabilization depend on appropriate wound bed characteristics, undisturbed blood clot formation, revascularization and wound stability through adequate suturing. Delicate tissue handling and tension-free wound closure represent prerequisites for optimal healing outcomes. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Multidimensional protein identification technology (MudPIT): technical overview of a profiling method optimized for the comprehensive proteomic investigation of normal and diseased heart tissue.

    PubMed

    Kislinger, Thomas; Gramolini, Anthony O; MacLennan, David H; Emili, Andrew

    2005-08-01

    An optimized analytical expression profiling strategy based on gel-free multidimensional protein identification technology (MudPIT) is reported for the systematic investigation of biochemical (mal)-adaptations associated with healthy and diseased heart tissue. Enhanced shotgun proteomic detection coverage and improved biological inference is achieved by pre-fractionation of excised mouse cardiac muscle into subcellular components, with each organellar fraction investigated exhaustively using multiple repeat MudPIT analyses. Functional-enrichment, high-confidence identification, and relative quantification of hundreds of organelle- and tissue-specific proteins are achieved readily, including detection of low abundance transcriptional regulators, signaling factors, and proteins linked to cardiac disease. Important technical issues relating to data validation, including minimization of artifacts stemming from biased under-sampling and spurious false discovery, together with suggestions for further fine-tuning of sample preparation, are discussed. A framework for follow-up bioinformatic examination, pattern recognition, and data mining is also presented in the context of a stringent application of MudPIT for probing fundamental aspects of heart muscle physiology as well as the discovery of perturbations associated with heart failure.

  6. Unleashing elastic energy: dynamics of energy release in rubber bands and impulsive biological systems

    NASA Astrophysics Data System (ADS)

    Ilton, Mark; Cox, Suzanne; Egelmeers, Thijs; Patek, S. N.; Crosby, Alfred J.

Impulsive biological systems - which include mantis shrimp, trap-jaw ants, and Venus flytraps - can reach high speeds by using elastic elements to store and rapidly release energy. The material behavior and shape changes critical to achieving rapid energy release in these systems are largely unknown due to limitations of materials testing instruments operating at high speed and large displacement. In this work, we perform fundamental, proof-of-concept measurements on the tensile retraction of elastomers. Using high speed imaging, the kinematics of retraction are measured for elastomers with varying mechanical properties and geometry. Based on the kinematics, the rate of energy dissipation in the material is determined as a function of strain and strain rate, along with a scaling relation which describes the dependence of maximum velocity on material properties. Understanding this scaling relation along with the material failure limits of the elastomer allows the prediction of material properties required for optimal performance. We demonstrate this concept experimentally by optimizing for maximum velocity in our synthetic model system, and achieve retraction velocities that exceed those in biological impulsive systems. This model system provides a foundation for future work connecting continuum performance to molecular architecture in impulsive systems.

  7. Propagation speed of a starting wave in a queue of pedestrians.

    PubMed

    Tomoeda, Akiyasu; Yanagisawa, Daichi; Imamura, Takashi; Nishinari, Katsuhiro

    2012-09-01

The propagation speed of a starting wave, which is a wave of people's successive reactions in the relaxation process of a queue, has an essential role for pedestrians and vehicles to achieve smooth movement. For example, a queue of vehicles with appropriate headway (or density) alleviates traffic jams, since the delay of reaction to start is minimized. In this paper, we have investigated the fundamental relation between the propagation speed of a starting wave and the initial density by both our mathematical model built on stochastic cellular automata and experimental measurements. Analysis of our mathematical model implies that the relation is characterized by the power law αρ^(-β) (β≠1), and the experimental results verify this feature. Moreover, when the starting wave is characterized by the power law (β>1), we have revealed the existence of an optimal density, where the required time, i.e., the sum of the waiting time until the starting wave reaches the last pedestrian in a queue and his/her travel time to pass the head position of the initial queue, is minimized. This optimal density inevitably plays a significant role in achieving smooth movement of crowds and vehicles in a queue.
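The existence of an optimal density for β > 1 can be made concrete with a toy calculation (the values of α, β, v, and n below are illustrative assumptions, not the paper's measured parameters): with wave speed αρ^(-β), a queue of n pedestrians at density ρ has length n/ρ, so the last pedestrian waits (n/ρ)/(αρ^(-β)) = nρ^(β-1)/α for the wave and then needs n/(ρv) to walk past the head position at speed v. Setting the derivative of the sum to zero gives ρ* = (α/(v(β-1)))^(1/β).

```python
import numpy as np

def required_time(rho, n=50, alpha=1.0, beta=2.0, v=1.3):
    """Toy total time for a queue of n pedestrians at initial density rho:
    waiting until the starting wave (speed alpha * rho**(-beta)) reaches
    the last pedestrian, plus walking (speed v) past the queue head."""
    queue_length = n / rho
    waiting = queue_length / (alpha * rho ** (-beta))
    travel = queue_length / v
    return waiting + travel

def optimal_density(alpha=1.0, beta=2.0, v=1.3):
    """Closed-form minimizer of required_time, valid for beta > 1."""
    return (alpha / (v * (beta - 1.0))) ** (1.0 / beta)

# numerical check: grid minimum agrees with the closed form
rho = np.linspace(0.2, 3.0, 2801)
rho_num = rho[np.argmin(required_time(rho))]
```

Waiting time grows with density while travel time shrinks with it, so the two terms trade off and produce an interior minimum whenever β > 1, exactly as the abstract states.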

  8. The fundamental problem of treating light incoherence in photovoltaics and its practical consequences

    NASA Astrophysics Data System (ADS)

    Herman, Aline; Sarrazin, Michaël; Deparis, Olivier

    2014-01-01

    The incoherence of sunlight has long been suspected to have an impact on solar cell energy conversion efficiency, although the extent of this is unclear. Existing computational methods used to optimize solar cell efficiency under incoherent light are based on multiple time-consuming runs and statistical averaging. These indirect methods show limitations related to the complexity of the solar cell structure. As a consequence, complex corrugated cells, which exploit light trapping for enhancing the efficiency, have not yet been accessible for optimization under incoherent light. To overcome this bottleneck, we developed an original direct method which has the key advantage that the treatment of incoherence can be totally decoupled from the complexity of the cell. As an illustration, surface-corrugated GaAs and c-Si thin-films are considered. The spectrally integrated absorption in these devices is found to depend strongly on the degree of light coherence and, accordingly, the maximum achievable photocurrent can be higher under incoherent light than under coherent light. These results show the importance of taking into account sunlight incoherence in solar cell optimization and point out the ability of our direct method to deal with complex solar cell structures.

  9. Analytical display design for flight tasks conducted under instrument meteorological conditions. [human factors engineering of pilot performance for display device design in instrument landing systems

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1976-01-01

    Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.

  10. Hypothesis-driven methods to augment human cognition by optimizing cortical oscillations

    PubMed Central

    Horschig, Jörn M.; Zumer, Johanna M.; Bahramisharif, Ali

    2014-01-01

    Cortical oscillations have been shown to represent fundamental functions of a working brain, e.g., communication, stimulus binding, error monitoring, and inhibition, and are directly linked to behavior. Recent studies intervening with these oscillations have demonstrated effective modulation of both the oscillations and behavior. In this review, we collect evidence in favor of how hypothesis-driven methods can be used to augment cognition by optimizing cortical oscillations. We elaborate their potential usefulness for three target groups: healthy elderly, patients with attention deficit/hyperactivity disorder, and healthy young adults. We discuss the relevance of neuronal oscillations in each group and show how each of them can benefit from the manipulation of functionally-related oscillations. Further, we describe methods for manipulation of neuronal oscillations including direct brain stimulation as well as indirect task alterations. We also discuss practical considerations about the proposed techniques. In conclusion, we propose that insights from neuroscience should guide techniques to augment human cognition, which in turn can provide a better understanding of how the human brain works. PMID:25018706

  11. ABCluster: the artificial bee colony algorithm for cluster global optimization.

    PubMed

    Zhang, Jun; Dolg, Michael

    2015-10-07

Global optimization of cluster geometries is of fundamental importance in chemistry and an interesting problem in applied mathematics. In this work, we introduce a relatively new swarm intelligence algorithm, i.e., the artificial bee colony (ABC) algorithm proposed in 2005, to this field. It is inspired by the foraging behavior of a bee colony, and only three parameters are needed to control it. We applied it to several potential functions of quite different nature, i.e., the Coulomb-Born-Mayer, Lennard-Jones, Morse, Z and Gupta potentials. The benchmarks reveal that for long-ranged potentials the ABC algorithm is very efficient in locating the global minimum, while for short-ranged ones it is sometimes trapped in a local minimum funnel on the potential energy surface of large clusters. We have released an efficient, user-friendly, and free program "ABCluster" to realize the ABC algorithm. It is a black-box program for non-experts as well as experts and might become a useful tool for chemists to study clusters.
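A minimal sketch of the ABC loop described above (employed, onlooker, and scout phases, with colony size, abandonment limit, and cycle count as the control parameters), applied here to a simple sphere function rather than a real cluster potential; details such as the fitness transform are simplified assumptions, not ABCluster's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(f, dim, bounds, n_food=20, limit=30, n_cycles=300):
    """Minimal artificial bee colony (ABC) sketch: employed bees perturb one
    coordinate of their food source toward a random partner, onlookers
    reinforce good sources, scouts replace sources stuck for `limit` trials."""
    lo, hi = bounds
    food = rng.uniform(lo, hi, (n_food, dim))
    cost = np.apply_along_axis(f, 1, food)
    trials = np.zeros(n_food, int)

    def try_move(i):
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1          # random partner != i
        j = rng.integers(dim)              # perturb one coordinate
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
        cand = np.clip(cand, lo, hi)
        c = f(cand)
        if c < cost[i]:                    # greedy selection
            food[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(n_cycles):
        for i in range(n_food):                     # employed bee phase
            try_move(i)
        fit = 1.0 / (1.0 + cost - cost.min())       # simple fitness transform
        for i in rng.choice(n_food, n_food, p=fit / fit.sum()):  # onlookers
            try_move(i)
        for i in np.where(trials > limit)[0]:       # scout phase
            food[i] = rng.uniform(lo, hi, dim)
            cost[i] = f(food[i])
            trials[i] = 0
    best = cost.argmin()
    return food[best], cost[best]

# toy "cluster energy": sphere function with global minimum 0 at the origin
x_best, f_best = abc_minimize(lambda x: np.sum(x**2), dim=3, bounds=(-5, 5))
```

Even this stripped-down loop reliably locates the global minimum of a smooth single-funnel landscape; the short-ranged potentials mentioned in the abstract are harder precisely because their landscapes contain many competing funnels.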

  12. Controlling Hyperhydricity in Date Palm In Vitro Culture by Reduced Concentration of Nitrate Nutrients.

    PubMed

    El-Dawayati, Maiada M; Zayed, Zeinab E

    2017-01-01

Hyperhydricity (or vitrification) is a fundamental physiological disorder in date palm micropropagation. Several factors have been ascribed as being responsible for hyperhydricity, related to the explant, medium, culture vessel, and environment. The optimization of inorganic nutrients in the culture medium improves in vitro growth and morphogenesis, in addition to controlling hyperhydricity. This chapter describes a protocol for controlling hyperhydricity during the embryogenic callus stage by optimizing the ratio of nitrogen salts in the Murashige and Skoog (MS) nutrient culture medium. The best results of differentiation from cured hyperhydric callus are obtained by modifying the NH4+/NO3- ratio of the MS culture medium to 10:15 (825:1425 mg/L), which remedies hyperhydric date palm callus and achieves the recovery of normal embryogenic callus and subsequent regeneration of plantlets. Based on the results of this study, nutrient medium composition has an important role in avoiding hyperhydricity problems during date palm micropropagation.

  13. Studies in organic and physical photochemistry - an interdisciplinary approach.

    PubMed

    Oelgemöller, Michael; Hoffmann, Norbert

    2016-08-21

    Traditionally, organic photochemistry applied to synthesis interacts strongly with physical chemistry. The aim of this review is to illustrate this very fruitful interdisciplinary approach and cooperation. A profound understanding of photochemical reactivity and reaction mechanisms is particularly helpful for the optimization and application of these reactions. Typical reactions are reported, such as the Norrish type II reaction, the Yang cyclization and related transformations, [2 + 2] photocycloadditions (particularly the Paternò-Büchi reaction), transformations induced by photochemical electron transfer, and different kinds of catalytic reactions, such as photoredox catalysis for organic synthesis and photooxygenation. Particular aspects are discussed, such as the structure and reactivity of aryl cations, photochemical reactions in the crystalline state, chiral memory, different mechanisms of hydrogen transfer in photochemical reactions, and fundamental aspects of stereoselectivity. Photochemical reactions are also investigated in the context of chemical engineering; continuous flow reactors are of particular interest. Novel reactor systems are being developed, and the modeling of photochemical transformations and of different reactors plays a key role in such studies. This research domain builds a bridge between fundamental studies of organic photochemical reactions and their industrial application.

  14. Spectroscopic analysis of cinnamic acid using quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Vinod, K. S.; Periandy, S.; Govindarajan, M.

    2015-02-01

    In the present study, FT-IR, FT-Raman, 13C NMR and 1H NMR spectra of cinnamic acid have been recorded for vibrational and spectroscopic analysis. The observed fundamental frequencies (IR and Raman) were assigned according to their characteristic regions. The computed frequencies and optimized parameters were calculated using HF and DFT (B3LYP) methods, and the corresponding results are tabulated. Assignments of the fundamental vibrational modes are examined on the basis of the comparison between computed and experimental results. A study of the electronic and optical properties (absorption wavelengths, excitation energy, dipole moment and frontier molecular orbital energies) was performed by HF and DFT methods. The alteration of the vibration pattern of the parent molecule related to the substitutions was analyzed. The 13C and 1H NMR spectra have been recorded and the chemical shifts calculated using the gauge-independent atomic orbital (GIAO) method. The Mulliken charges, UV spectral analysis and HOMO-LUMO analysis have been calculated and reported. The molecular electrostatic potential (MEP) was constructed.

  15. Decision-theoretic saliency: computational principles, biological plausibility, and implications for neurophysiology and psychophysics.

    PubMed

    Gao, Dashan; Vasconcelos, Nuno

    2009-01-01

    A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense, and the optimal saliency detector is derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
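
A toy version of the filtering, divisive normalization, rectification and spatial pooling cascade identified above can be written directly in NumPy. The 3x3 derivative filters, the normalization constant, and the pooling window below are illustrative stand-ins for a real V1 filter bank, not the detector derived in the paper:

```python
import numpy as np

def conv2d(img, k):
    """'Same' 2-D correlation with zero padding (no SciPy dependency)."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def saliency(img, eps=0.1):
    """Linear filtering -> divisive normalization -> rectification -> pooling."""
    # two oriented derivative filters standing in for a V1 filter bank
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    bank = [kx, kx.T]
    resp = np.stack([conv2d(img, k) for k in bank])        # linear filtering
    norm = resp / (eps + np.sqrt((resp ** 2).sum(0)))      # divisive normalization
    rect = np.maximum(norm, 0.0) ** 2                      # half-wave rectification
    pooled = np.stack([conv2d(r, np.ones((5, 5)) / 25) for r in rect])  # pooling
    return pooled.max(0)                                   # combine channels into a map
```

Run on an image containing a single vertical bar, the map concentrates its response around the bar, a crude analogue of pop-out.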

  16. Analysis of temporal gene expression profiles: clustering by simulated annealing and determining the optimal number of clusters.

    PubMed

    Lukashin, A V; Fuchs, R

    2001-05-01

    Cluster analysis of genome-wide expression data from DNA microarray hybridization studies has proved to be a useful tool for identifying biologically relevant groupings of genes and samples. In the present paper, we focus on several important issues related to clustering algorithms that have not yet been fully studied. We describe a simple and robust algorithm for the clustering of temporal gene expression profiles that is based on the simulated annealing procedure. In general, this algorithm is guaranteed to eventually find the globally optimal distribution of genes over clusters. We introduce an iterative scheme that serves to evaluate quantitatively the optimal number of clusters for each specific data set. The scheme is based on standard approaches used in regular statistical tests. The basic idea is to organize the search for the optimal number of clusters simultaneously with the optimization of the distribution of genes over clusters. The efficiency of the proposed algorithm has been evaluated by means of a reverse engineering experiment, that is, a situation in which the correct distribution of genes over clusters is known a priori. The employment of this statistically rigorous test has shown that our algorithm places greater than 90% of genes into correct clusters. Finally, the algorithm has been tested on real gene expression data (expression changes during yeast cell cycle) for which the fundamental patterns of gene expression and the assignment of genes to clusters are well understood from numerous previous studies.
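
A minimal sketch of simulated-annealing clustering as described above — single-gene reassignment moves, Metropolis acceptance, geometric cooling — might look as follows. The move set, cooling schedule, and parameter values are illustrative, not the authors' exact procedure:

```python
import numpy as np

def wss(X, labels, k):
    """Within-cluster sum of squares for a given assignment."""
    cost = 0.0
    for c in range(k):
        pts = X[labels == c]
        if len(pts):
            cost += ((pts - pts.mean(0)) ** 2).sum()
    return cost

def anneal_cluster(X, k, t0=5.0, cool=0.999, steps=6000, seed=1):
    """Simulated-annealing clustering: random reassignments with Metropolis acceptance."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))
    cost = wss(X, labels, k)
    t = t0
    for _ in range(steps):
        i = int(rng.integers(len(X)))
        new = int(rng.integers(k - 1))
        new += new >= labels[i]                  # propose a different cluster
        old = labels[i]
        labels[i] = new
        new_cost = wss(X, labels, k)
        d = new_cost - cost
        if d < 0 or rng.random() < np.exp(-d / t):
            cost = new_cost                      # accept (always if downhill)
        else:
            labels[i] = old                      # revert
        t *= cool                                # geometric cooling schedule
    return labels, cost
```

On well-separated data the chain settles into the globally optimal partition; the paper's contribution is to run this search while simultaneously estimating the optimal number of clusters.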

  17. CHP Fundamentals, NFMT High Performance Buildings (Presentation) – June 3, 2015

    EPA Pesticide Factsheets

    This presentation discusses how CHP can improve energy efficiency at a building or facility, and play a major role in reducing carbon emissions, optimizing fuel flexibility, lowering operating costs, and earning LEED points.

  18. Technology for Educational Change

    ERIC Educational Resources Information Center

    Mitchell, P. David

    1973-01-01

    Five fundamental manifestations of technology for educational change are examined with particular reference to Canadian activities. These foci are: psychotechnology, information and communications technology, organizational technology, cybernetic systems technology and educational planning. Each is vitally concerned with the optimal organization…

  19. Lower bound buckling loads for design of laminate composite cylinders

    NASA Astrophysics Data System (ADS)

    Croll, James G. A.; Wang, Hongtao

    2017-01-01

    Over a period of more than 45 years, an extensive research program has allowed a series of very simple propositions, relating to the safe design of shells experiencing imperfection sensitive buckling, to be recast in the form of a series of lemmas. These are briefly summarized and their practical use is illustrated in relation to the prediction of safe lower bounds to the imperfection sensitive buckling of axially loaded, fiber reinforced polymeric, laminated cylinders. A fundamental aspect of the approach, sometimes referred to as the reduced stiffness method, is the delineation of the various shell membrane and bending stiffness (or, perhaps more appropriately, energy) components contributing to buckling resistance; the method will be shown to also provide a powerful way of making rational design decisions to optimize the use of fiber reinforcement.

  20. JET DT Scenario Extrapolation and Optimization with METIS

    NASA Astrophysics Data System (ADS)

    Urban, Jakub; Jaulmes, Fabien; Artaud, Jean-Francois

    2017-10-01

    Prospective JET (Joint European Torus) DT operation scenarios are modelled by the fast integrated code METIS. METIS combines scaling laws, e.g. for global and pedestal energy or density peaking, with simplified transport and source models, while retaining fundamental nonlinear couplings, in particular in the fusion power. We have tuned METIS parameters to match JET-ILW high performance experiments, including baseline and hybrid. Based on recent observations, we assume a weaker input power scaling than IPB98 and a 10% confinement improvement due to the higher ion mass. The rapidity of METIS is utilized to scan the performance of JET DT scenarios with respect to fundamental parameters, such as plasma current, magnetic field, density or heating power. Simplified, easily parameterized waveforms are used to study the effect of ramp-up speed or heating timing. Finally, an efficient Bayesian optimizer is employed to seek the best-performing scenarios in terms of fusion power or gain.

  1. Research using qualitative, quantitative or mixed methods and choice based on the research.

    PubMed

    McCusker, K; Gunaydin, S

    2015-10-01

    Research is fundamental to the advancement of medicine and critical to identifying the optimal therapies unique to particular societies. This is easily observed in the evolution of pharmacology, surgical technique and medical equipment compared with only a few years ago. Advancements in knowledge synthesis and reporting guidelines enhance the quality, scope and applicability of results, thus improving health science and clinical practice and advancing health policy. While such advancements are critical to the progression of optimal health care, the high cost associated with these endeavors cannot be ignored. Research itself therefore needs to be evaluated to identify the most efficient methods of evaluation. The primary objective of this paper is to examine a specific research methodology applied to clinical research, especially extracorporeal circulation, and its prognosis for the future. © The Author(s) 2014.

  2. Navy Strategy for Achieving Information Dominance, 2013-2017. Optimizing Navy’s Primacy in the Maritime and Information Domains

    DTIC Science & Technology

    2013-01-01

    and resources to optimize decision making and maximize warfighting effects, Navy Information Dominance has become a leading Service priority. In 2009...This Strategy for Achieving Information Dominance provides the framework through which the Navy's information capabilities will be mainstreamed into...the Navy's culture as a distinct warfighting discipline. The strategy focuses on the three fundamental Information Dominance capabilities of Assured

  3. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.
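
For reference, the tight error-tradeoff relation derived in this paper is commonly cited in the following form (stated here as it is usually quoted in the literature, with $\epsilon_A, \epsilon_B$ the rms errors of the approximate joint measurement, $\Delta A, \Delta B$ the standard deviations in the measured state, and $C_{AB}$ quantifying the incompatibility of the two observables):

```latex
\epsilon_A^2\,\Delta B^2 + \epsilon_B^2\,\Delta A^2
  + 2\,\epsilon_A\,\epsilon_B\,\sqrt{\Delta A^2\,\Delta B^2 - C_{AB}^2}
  \;\ge\; C_{AB}^2,
\qquad
C_{AB} = \tfrac{1}{2}\bigl|\langle[\hat A,\hat B]\rangle\bigr|.
```

Setting $\epsilon_A = 0$ (a perfect measurement of $A$) forces $\epsilon_B \ge C_{AB}/\Delta A$, which is the error-disturbance reading of the relation.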

  4. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA are attractive since they use only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GA is underway for a cost optimization study for a launch-vehicle fuel tank and structural design of a wing. The strengths and limitations of GA for launch vehicle design optimization are studied.
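
The GA mechanics summarized above (fitness-based selection, crossover, and mutation over discrete genes) can be illustrated with a deliberately small sketch. The design space and objective below are hypothetical placeholders, not the launch-vehicle models of the study:

```python
import random

# Hypothetical discrete design space (illustration only): engine count 1-8,
# material choice among three alternatives.
ENGINES = list(range(1, 9))
MATERIALS = [0, 1, 2]

def fitness(ind):
    """Toy objective standing in for a vehicle performance model."""
    n, m = ind
    return -(n - 5) ** 2 + (3 if m == 2 else 0)

def ga(pop_size=20, gens=40, p_mut=0.2, seed=7):
    rng = random.Random(seed)
    pop = [(rng.choice(ENGINES), rng.choice(MATERIALS)) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 2), key=fitness)
            child = tuple(rng.choice(genes) for genes in zip(p1, p2))  # uniform crossover
            if rng.random() < p_mut:                    # mutation: resample one gene
                child = (rng.choice(ENGINES), child[1]) if rng.random() < 0.5 \
                    else (child[0], rng.choice(MATERIALS))
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)            # memorize best-so-far
    return best
```

Because the search uses only fitness values, integer and categorical genes need no special treatment — the property that motivates GA for this class of MDO problem.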

  5. The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space

    PubMed Central

    Szekely, Pablo; Korem, Yael; Moran, Uri; Mayo, Avi; Alon, Uri

    2015-01-01

    When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes—phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lie in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies that found animal and plant phenotypes lying in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass. PMID:26465336
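
Placing a species "in a coordinate system according to its distance from the archetypes" amounts, for a triangle, to computing barycentric coordinates with respect to the three vertices. A small sketch follows; the vertex positions are invented placeholders, not the fitted archetypes of the paper:

```python
import numpy as np

def archetype_coords(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c);
    each coordinate measures proximity to one archetype (they sum to 1)."""
    a, b, c, p = (np.asarray(v, float) for v in (a, b, c, p))
    T = np.column_stack([a - c, b - c])   # 2x2 edge matrix
    w_ab = np.linalg.solve(T, p - c)      # weights on the first two archetypes
    return np.array([w_ab[0], w_ab[1], 1.0 - w_ab.sum()])
```

A species sitting exactly on an archetype gets weight 1 on that vertex; a generalist in the middle of the triangle gets roughly equal weights.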

  6. Fundamental limitations on V/STOL terminal guidance due to aircraft characteristics

    NASA Technical Reports Server (NTRS)

    Wolkovitch, J.; Lamont, C. W.; Lochtie, D. W.

    1971-01-01

    A review is given of limitations on approach flight paths of V/STOL aircraft, including limits on descent angle due to maximum drag/lift ratio. A method of calculating maximum drag/lift ratio of tilt-wing and deflected slipstream aircraft is presented. Derivatives and transfer functions for the CL-84 tilt-wing and X-22A tilt-duct aircraft are presented. For the unaugmented CL-84 in steep descents the transfer function relating descent angle to thrust contains a right-half plane zero. Using optimal control theory, it is shown that this zero causes a serious degradation in the accuracy with which steep flight paths can be followed in the presence of gusts.

  7. Characterization of quantum interference control of injected currents in LT-GaAs for carrier-envelope phase measurements.

    PubMed

    Roos, Peter; Quraishi, Qudsia; Cundiff, Steven; Bhat, Ravi; Sipe, J

    2003-08-25

    We use two mutually coherent, harmonically related pulse trains to experimentally characterize quantum interference control (QIC) of injected currents in low-temperature-grown gallium arsenide. We observe real-time QIC interference fringes, optimize the QIC signal fidelity, uncover critical signal dependences regarding beam spatial position on the sample, measure signal dependences on the fundamental and second harmonic average optical powers, and demonstrate signal characteristics that depend on the focused beam spot sizes. Following directly from our motivation for this study, we propose an initial experiment to measure and ultimately control the carrier-envelope phase evolution of a single octave-spanning pulse train using the QIC phenomenon.

  8. Pattern separation: a common function for new neurons in hippocampus and olfactory bulb.

    PubMed

    Sahay, Amar; Wilson, Donald A; Hen, René

    2011-05-26

    While adult-born neurons in the olfactory bulb (OB) and the dentate gyrus (DG) subregion of the hippocampus have fundamentally different properties, they may have more in common than meets the eye. Here, we propose that new granule cells in the OB and DG may function as modulators of principal neurons to influence pattern separation and that adult neurogenesis constitutes an adaptive mechanism to optimally encode contextual or olfactory information. See the related Perspective from Aimone, Deng, and Gage, "Resolving New Memories: A Critical Look at the Dentate Gyrus, Adult Neurogenesis, and Pattern Separation," in this issue of Neuron. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Precision engineering: an evolutionary perspective.

    PubMed

    Evans, Chris J

    2012-08-28

    Precision engineering is a relatively new name for a technology with roots going back over a thousand years; those roots span astronomy, metrology, fundamental standards, manufacturing and money-making (literally). Throughout that history, precision engineers have created links across disparate disciplines to generate innovative responses to society's needs and wants. This review combines historical and technological perspectives to illuminate precision engineering's current character and directions. It first provides a working definition of precision engineering and then reviews the subject's roots. Examples will be given showing the contributions of the technology to society, while simultaneously showing the creative tension between the technological convergence that spurs new directions and the vertical disintegration that optimizes manufacturing economics.

  10. Optimal and robust control of quantum state transfer by shaping the spectral phase of ultrafast laser pulses.

    PubMed

    Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun

    2018-04-04

    Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry and quantum information science. However, the successful implementation of the perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how an optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency domain quantum optimal control theory. Our numerical simulations of the single dibenzoterrylene molecule as well as in atomic rubidium show that optimal and robust quantum state transfer via spectral phase modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.

  11. Predicting Achievable Fundamental Frequency Ranges in Vocalization Across Species

    PubMed Central

    Titze, Ingo; Riede, Tobias; Mau, Ted

    2016-01-01

    Vocal folds are used as sound sources in various species, but it is unknown how vocal fold morphologies are optimized for different acoustic objectives. Here we identify two main variables affecting range of vocal fold vibration frequency, namely vocal fold elongation and tissue fiber stress. A simple vibrating string model is used to predict fundamental frequency ranges across species of different vocal fold sizes. While average fundamental frequency is predominantly determined by vocal fold length (larynx size), range of fundamental frequency is facilitated by (1) laryngeal muscles that control elongation and by (2) nonlinearity in tissue fiber tension. One adaptation that would increase fundamental frequency range is greater freedom in joint rotation or gliding of two cartilages (thyroid and cricoid), so that vocal fold length change is maximized. Alternatively, tissue layers can develop to bear a disproportionate fiber tension (i.e., a ligament with high density collagen fibers), increasing the fundamental frequency range and thereby vocal versatility. The range of fundamental frequency across species is thus not simply one-dimensional, but can be conceptualized as the dependent variable in a multi-dimensional morphospace. In humans, this could allow for variations that could be clinically important for voice therapy and vocal fold repair. Alternative solutions could also have importance in vocal training for singing and other highly-skilled vocalizations. PMID:27309543
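
The vibrating-string prediction is compact enough to state in code: F0 = (1/2L)·sqrt(σ/ρ). The sketch below adds a hypothetical exponential fiber stress-strain law to show the competition the abstract describes between elongation (which lowers F0) and fiber tension (which raises it); all constants are illustrative, not fitted tissue values:

```python
import math

def f0_string(length_m, fiber_stress_pa, density=1040.0):
    """Ideal-string fundamental frequency: F0 = (1/(2L)) * sqrt(sigma/rho).
    Default density is roughly that of soft tissue (kg/m^3)."""
    return math.sqrt(fiber_stress_pa / density) / (2.0 * length_m)

def f0_elongated(length0_m, strain, a=4000.0, b=7.0):
    """F0 after elongating the fold by `strain`, with a hypothetical
    exponential stress-strain law sigma = a * (exp(b * strain) - 1):
    length grows linearly, but fiber stress grows exponentially."""
    return f0_string(length0_m * (1.0 + strain),
                     a * (math.exp(b * strain) - 1.0))
```

With these placeholder constants, a 16 mm fold under 10 kPa fiber stress lands near a typical adult male speaking F0, and increasing strain raises F0 because the exponential stress term dominates the linear length term — the nonlinearity the abstract identifies as the second range-extending mechanism.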

  12. An Investigation of Grade 12 Students' Misconceptions Relating to Fundamental Characteristics of Molecules and Atoms.

    ERIC Educational Resources Information Center

    Griffiths, Alan Keith; Preston, Kirk R.

    An understanding of the concepts of atoms and molecules is fundamental to the learning of chemistry. Any misconceptions and alternative conceptions related to these concepts which students harbor will impede much further learning. This paper identifies misconceptions related to the fundamental characteristics of atoms and molecules which Grade 12…

  13. Shortcomings of adherence counselling provided to caregivers of children receiving antiretroviral therapy in rural South Africa.

    PubMed

    Coetzee, Bronwyne; Kagee, Ashraf; Bland, Ruth

    2016-03-01

    In order to achieve optimal benefits of antiretroviral therapy (ART), caregivers of children receiving ART are required to attend routine clinic visits monthly and administer medication to the child as prescribed. Yet, the level of adherence to these behaviours varies considerably in many settings. As a way to achieve optimal adherence in rural KwaZulu-Natal, caregivers are required to attend routine counselling sessions at HIV treatment clinics that are centred on imparting information, motivation, and behavioural skills related to medication administration. According to the information-motivation-behavioural skills model, information related to adherence, motivation, and behavioural skills are necessary and fundamental determinants of adherence to ART. The purpose of the study was to observe and document the content of adherence counselling sessions that caregivers attending rural clinics in KwaZulu-Natal receive. We observed 25 adherence counselling sessions, which lasted on average 8.1 minutes. Counselling typically consisted of counsellors recording patient attendance, reporting CD4 count and viral load results to caregivers, emphasising dose times, and asking caregivers to name their medications and dosage amounts. Patients were seldom asked to demonstrate how they measure the medication. They were also not probed for problems regarding treatment, even when an unsuppressed VL was reported to a caregiver. This paper calls attention to the sub-optimal level of counselling provided to patients on ART and the urgent need to standardise and improve the training, support, and debriefing provided to counsellors.

  14. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    PubMed

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
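
The stability ranking at the heart of the MSTD procedure — matching each component from one ICA run to its best correlate in another run — can be sketched as follows. This is a simplified reproducibility index, not the authors' full protocol:

```python
import numpy as np

def component_stability(run_a, run_b):
    """For each component (row) of run_a, the best absolute Pearson correlation
    with any component of run_b. Absolute value is used because ICA components
    are defined only up to sign; high values indicate reproducible components."""
    A = run_a - run_a.mean(axis=1, keepdims=True)
    B = run_b - run_b.mean(axis=1, keepdims=True)
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    return np.abs(A @ B.T).max(axis=1)     # best match per run_a component
```

Ranking components by this index across many runs, and looking for the point where the stability profile drops qualitatively, is the idea behind the Most Stable Transcriptome Dimension.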

  15. Overview: MURI Center on spectroscopic and time domain detection of trace explosives in condensed and vapor phases

    NASA Astrophysics Data System (ADS)

    Spicer, James B.; Dagdigian, Paul; Osiander, Robert; Miragliotta, Joseph A.; Zhang, Xi-Cheng; Kersting, Roland; Crosley, David R.; Hanson, Ronald K.; Jeffries, Jay

    2003-09-01

    The research center established by Army Research Office under the Multidisciplinary University Research Initiative program pursues a multidisciplinary approach to investigate and advance the use of complementary analytical techniques for sensing of explosives and/or explosive-related compounds as they occur in the environment. The techniques being investigated include Terahertz (THz) imaging and spectroscopy, Laser-Induced Breakdown Spectroscopy (LIBS), Cavity Ring Down Spectroscopy (CRDS) and Resonance Enhanced Multiphoton Ionization (REMPI). This suite of techniques encompasses a diversity of sensing approaches that can be applied to detection of explosives in condensed phases such as adsorbed species in soil or can be used for vapor phase detection above the source. Some techniques allow for remote detection while others have highly specific and sensitive analysis capabilities. This program is addressing a range of fundamental, technical issues associated with trace detection of explosive-related compounds using these techniques. For example, while both LIBS and THz can be used to carry out remote analysis of condensed phase analyte at distances in excess of several meters, the sensitivities of these techniques to surface adsorbed explosive-related compounds are not currently known. In current implementations, both CRDS and REMPI require sample collection techniques that have not been optimized for environmental applications. Early program elements will pursue the fundamental advances required for these techniques, including signature identification for explosive-related compounds/interferents and trace analyte extraction. Later program tasks will explore simultaneous application of two or more techniques to assess the benefits of sensor fusion.

  16. Fundamental resource-allocating model in colleges and universities based on Immune Clone Algorithms

    NASA Astrophysics Data System (ADS)

    Ye, Mengdie

    2017-05-01

    In this thesis, the course-arrangement problem is encoded as antibodies and antigens, in analogy with Immune Clone Algorithms. Following the character of the algorithm, we apply cloning, clone mutation and clone selection to arrange courses. The clone operator combines evolutionary search with random search, and global search with local search. By cloning and mutating candidate solutions, the global optimal solution can be found quickly.
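
The clone/mutate/select loop described above can be sketched generically for a 1-D objective (the course-arrangement encoding of the thesis is omitted, and all parameters are invented for illustration):

```python
import random

def clonal_selection(f, lo, hi, pop=10, n_clones=5, gens=80, seed=3):
    """Minimal clonal-selection sketch: each antibody is cloned and
    hypermutated (better-ranked antibodies mutate less, giving local search;
    worse-ranked ones mutate more, giving global search), and the best of
    each clone family survives to the next generation."""
    rng = random.Random(seed)
    antibodies = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        antibodies.sort(key=f)                       # rank 0 = current best
        survivors = []
        for rank, a in enumerate(antibodies):
            sigma = (hi - lo) * (rank + 1) / (10.0 * pop)  # rank-dependent step
            clones = [min(hi, max(lo, a + rng.gauss(0.0, sigma)))
                      for _ in range(n_clones)]
            survivors.append(min(clones + [a], key=f))     # keep best of family
        antibodies = survivors
    return min(antibodies, key=f)
```

Keeping the parent in each clone family makes the best objective value monotonically non-increasing, which is what lets the algorithm converge quickly on simple landscapes.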

  17. Optimization of PECVD Chamber Cleans Through Fundamental Studies of Electronegative Fluorinated Gas Discharges.

    NASA Astrophysics Data System (ADS)

    Langan, John

    1996-10-01

    The predominance of multi-level metalization schemes in advanced integrated circuit manufacturing has greatly increased the importance of plasma enhanced chemical vapor deposition (PECVD) and in turn in-situ plasma chamber cleaning. In order to maintain the highest throughput for these processes the clean step must be as short as possible. In addition, there is an increasing desire to minimize the fluorinated gas usage during the clean, while maximizing its efficiency, not only to achieve lower costs, but also because many of the gases used in this process are global warming compounds. We have studied the fundamental properties of discharges of NF3, CF4, and C2F6 under conditions relevant to chamber cleaning in the GEC rf reference cell. Using electrical impedance analysis and optical emission spectroscopy we have determined that the electronegative nature of these discharges defines the optimal processing conditions by controlling the power coupling efficiency and mechanisms of power dissipation in the discharge. Examples will be presented where strategies identified by these studies have been used to optimize actual manufacturing chamber clean processes. (This work was performed in collaboration with Mark Sobolewski, National Institute of Standards and Technology, and Brian Felker, Air Products and Chemicals, Inc.)

  18. Theory of the fundamental vibration-rotation-translation spectrum of H2 in a C60 lattice

    NASA Astrophysics Data System (ADS)

    Herman, Roger M.; Lewis, John Courtenay

    2006-04-01

    Calculations are presented for the fundamental vibration-rotation spectrum of H2 in fcc C60 (fullerite) lattices. The principal features are identified as lattice-shifted “vibration-rotation-translation” state absorption transitions. The level spacings of the H2 modes are calculated numerically for the potential function resulting from the summation of the individual C-H2 potentials for all C atoms in the six nearest neighbor C60 molecules. The potential is approximately separable in Cartesian coordinates, giving a very good approximation to exactly calculated translational energies for the lower levels. The positions and relative strengths of the individual transitions are calculated from the eigenfunctions for this separable potential. The line shapes are assumed to be Lorentzian, and the widths are chosen so as to give good fits to the DRIFT spectrum of FitzGerald [Phys. Rev. B 65, 140302(R) (2002)]. A theory of the C-H2 induced dipole moment is developed with which to calculate intensities. In order to fit the observed DRIFTS transition frequencies it is found necessary to take the overlap part of the C-H2 potential to be about 13% longer in range than the C-H2 potential in graphene. Furthermore, differences in the theoretical spectra obtained with a near-optimal exp-6 potential and near-optimal Lennard-Jones 12-6 potential are clearly evident, with the exp-6 potential giving a better fit to observation than the Lennard-Jones potential. Similarly, Lorentzian line shapes assumed for the individual transitions yield better agreement with observation than Gaussian line shapes.

  19. Promoting patient-centred fundamental care in acute healthcare systems.

    PubMed

    Feo, Rebecca; Kitson, Alison

    2016-05-01

    Meeting patients' fundamental care needs is essential for optimal safety and recovery and positive experiences within any healthcare setting. There is growing international evidence, however, that these fundamentals are often poorly executed in acute care settings, resulting in patient safety threats, poorer and costly care outcomes, and dehumanising experiences for patients and families. Whilst care standards and policy initiatives are attempting to address these issues, their impact has been limited. This discussion paper explores, through a series of propositions, why fundamental care can be overlooked in sophisticated, high technology acute care settings. We argue that the central problem lies in the invisibility and subsequent devaluing of fundamental care. Such care is perceived to involve simple tasks that require little skill to execute and have minimal impact on patient outcomes. The propositions explore the potential origins of this prevailing perception, focusing upon the impact of the biomedical model, the consequences of managerial approaches that drive healthcare cultures, and the devaluing of fundamental care by nurses themselves. These multiple sources of invisibility and devaluing surrounding fundamental care have rendered the concept underdeveloped and misunderstood both conceptually and theoretically. Likewise, there remains minimal role clarification around who should be responsible for and deliver such care, and a dearth of empirical evidence and evidence-based metrics. In explicating these propositions, we argue that key to transforming the delivery of acute healthcare is a substantial shift in the conceptualisation of fundamental care. The propositions present a cogent argument that counters the prevailing perception that fundamental care is basic and does not require systematic investigation. We conclude by calling for the explicit valuing and embedding of fundamental care in healthcare education, research, practice and policy. Without this re-conceptualisation and subsequent action, poor quality, depersonalised fundamental care will prevail. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. The ENGAGE study: Integrating neuroimaging, virtual reality and smartphone sensing to understand self-regulation for managing depression and obesity in a precision medicine model.

    PubMed

    Williams, Leanne M; Pines, Adam; Goldstein-Piekarski, Andrea N; Rosas, Lisa G; Kullar, Monica; Sacchet, Matthew D; Gevaert, Olivier; Bailenson, Jeremy; Lavori, Philip W; Dagum, Paul; Wandell, Brian; Correa, Carlos; Greenleaf, Walter; Suppes, Trisha; Perry, L Michael; Smyth, Joshua M; Lewis, Megan A; Venditti, Elizabeth M; Snowden, Mark; Simmons, Janine M; Ma, Jun

    2018-02-01

    Precision medicine models for personalizing the achievement of sustained behavior change are largely outside of current clinical practice. Yet, changing self-regulatory behaviors is fundamental to the self-management of complex lifestyle-related chronic conditions such as depression and obesity - two top contributors to the global burden of disease and disability. To optimize treatments and address these burdens, behavior change and self-regulation must be better understood in relation to their neurobiological underpinnings. Here, we present the conceptual framework and protocol for a novel study, "Engaging self-regulation targets to understand the mechanisms of behavior change and improve mood and weight outcomes (ENGAGE)". The ENGAGE study integrates neuroscience with behavioral science to better understand the self-regulation related mechanisms of behavior change for improving mood and weight outcomes among adults with comorbid depression and obesity. We collect assays of three self-regulation targets (emotion, cognition, and self-reflection) in multiple settings: neuroimaging and behavioral lab-based measures, virtual reality, and passive smartphone sampling. By connecting human neuroscience and behavioral science in this manner within the ENGAGE study, we develop a prototype for elucidating the underlying self-regulation mechanisms of behavior change outcomes and their application in optimizing intervention strategies for multiple chronic diseases. Copyright © 2017. Published by Elsevier Ltd.

  1. Pneumatic oscillator circuits for timing and control of integrated microfluidics.

    PubMed

    Duncan, Philip N; Nguyen, Transon V; Hui, Elliot E

    2013-11-05

    Frequency references are fundamental to most digital systems, providing the basis for process synchronization, timing of outputs, and waveform synthesis. Recently, there has been growing interest in digital logic systems that are constructed out of microfluidics rather than electronics, as a possible means toward fully integrated laboratory-on-a-chip systems that do not require any external control apparatus. However, the full realization of this goal has not been possible due to the lack of on-chip frequency references, thus requiring timing signals to be provided from off-chip. Although microfluidic oscillators have been demonstrated, there have been no reported efforts to characterize, model, or optimize timing accuracy, which is the fundamental metric of a clock. Here, we report pneumatic ring oscillator circuits built from microfluidic valves and channels. Further, we present a compressible-flow analysis that differs fundamentally from conventional circuit theory, and we show the utility of this physically based model for the optimization of oscillator stability. Finally, we leverage microfluidic clocks to demonstrate circuits for the generation of phase-shifted waveforms, self-driving peristaltic pumps, and frequency division. Thus, pneumatic oscillators can serve as on-chip frequency references for microfluidic digital logic circuits. On-chip clocks and pumps both constitute critical building blocks on the path toward achieving autonomous laboratory-on-a-chip devices.

  2. Fundamental insights into interfacial catalysis.

    PubMed

    Gong, Jinlong; Bao, Xinhe

    2017-04-03

    Surface and interfacial catalysis plays a vital role in chemical industries, electrochemistry and photochemical reactions. The challenges of modern chemistry are to optimize the chemical reaction processes and understand the detailed mechanism of chemical reactions. Since the early 1960s, the foundation of surface science systems has allowed the study of surface and interfacial phenomena at the atomic/molecular level, and thus brought a number of significant developments to fundamental and technological processes, such as catalysis, materials science and biochemistry, just to name a few. This themed issue describes the recent advances and developments in the fundamental understanding of surface and interfacial catalysis, encompassing areas of knowledge from metal to metal oxide, carbide, graphene, hexagonal boron nitride, and transition metal dichalcogenides under ultrahigh vacuum conditions, as well as under realistic reaction conditions.

  3. Unlocking Flexibility: Integrated Optimization and Control of Multienergy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Mancarella, Pierluigi; Monti, Antonello

    Electricity, natural gas, water, and district heating/cooling systems are predominantly planned and operated independently. However, it is increasingly recognized that integrated optimization and control of such systems at multiple spatiotemporal scales can bring significant socioeconomic, operational efficiency, and environmental benefits. Accordingly, the concept of the multi-energy system is gaining considerable attention, with the overarching objectives of 1) uncovering fundamental gains (and potential drawbacks) that emerge from the integrated operation of multiple systems and 2) developing holistic yet computationally affordable optimization and control methods that maximize operational benefits, while 3) acknowledging intrinsic interdependencies and quality-of-service requirements for each provider.

  4. Optimizing Oxygenation in the Mechanically Ventilated Patient: Nursing Practice Implications.

    PubMed

    Barton, Glenn; Vanderspank-Wright, Brandi; Shea, Jacqueline

    2016-12-01

    Critical care nurses constitute front-line care provision for patients in the intensive care unit (ICU). Hypoxemic respiratory compromise/failure is a primary reason that patients require ICU admission and mechanical ventilation. Critical care nurses must possess advanced knowledge, skill, and judgment when caring for these patients to ensure that interventions aimed at optimizing oxygenation are both effective and safe. This article discusses fundamental aspects of respiratory physiology and clinical indices used to describe oxygenation status. Key nursing interventions including patient assessment, positioning, pharmacology, and managing hemodynamic parameters are discussed, emphasizing their effects toward mitigating ventilation-perfusion mismatch and optimizing oxygenation. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. The Empowerment of Plasma Modeling by Fundamental Electron Scattering Data

    NASA Astrophysics Data System (ADS)

    Kushner, Mark J.

    2015-09-01

    Modeling of low temperature plasmas addresses at least 3 goals - investigation of fundamental processes, analysis and optimization of current technologies, and prediction of performance of as yet unbuilt systems for new applications. The former modeling may be performed on somewhat idealized systems in simple gases, while the latter will likely address geometrically and electromagnetically intricate systems with complex gas mixtures, and now gases in contact with liquids. The variety of fundamental electron and ion scattering data (FSD) required for these activities increases from the former to the latter, while the accuracy required of that data probably decreases. In each case, the fidelity, depth and impact of the modeling depends on the availability of FSD. Modeling is, in fact, empowered by the availability and robustness of FSD. In this talk, examples of the impact of and requirements for FSD in plasma modeling will be discussed from each of these three perspectives using results from multidimensional and global models. The fundamental studies will focus on modeling of inductively coupled plasmas sustained in Ar/Cl2 where the electron scattering from feed gases and their fragments ultimately determine gas temperatures. Examples of the optimization of current technologies will focus on modeling of remote plasma etching of Si and Si3N4 in Ar/NF3/N2/O2 mixtures. Modeling of systems as yet unbuilt will address the interaction of atmospheric pressure plasmas with liquids. Work was supported by the US Dept. of Energy (DE-SC0001939), National Science Foundation (CHE-124752), and the Semiconductor Research Corp.

  6. Clinical modeling--a critical analysis.

    PubMed

    Blobel, Bernd; Goossen, William; Brochhausen, Mathias

    2014-01-01

    Modeling clinical processes (and their informational representation) is a prerequisite for optimally enabling and supporting high quality and safe care through information and communication technology and meaningful use of gathered information. The paper investigates existing approaches to clinical modeling, thereby systematically analyzing the underlying principles, the consistency with and the integration opportunity to other existing or emerging projects, as well as the correctness of representing the reality of health and health services. The analysis is performed using an architectural framework for modeling real-world systems. In addition, fundamental work on the representation of facts, relations, and processes in the clinical domain by ontologies is applied, thereby including the integration of advanced methodologies such as translational and system medicine. The paper demonstrates fundamental weaknesses and different maturity as well as evolutionary potential in the approaches considered. It offers a development process starting with the business domain and its ontologies, continuing with the Reference Model-Open Distributed Processing (RM-ODP) related conceptual models in the ICT ontology space, the information and the computational view, and concluding with the implementation details represented as engineering and technology view, respectively. The existing approaches reflect the clinical domain at different levels and focus on different phases of the development process, rather than first establishing a representation of the real business process; as a result, they enable domain experts' involvement to quite different, and partially limited, degrees. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Quantum speed limits: from Heisenberg’s uncertainty principle to optimal quantum control

    NASA Astrophysics Data System (ADS)

    Deffner, Sebastian; Campbell, Steve

    2017-11-01

    One of the most widely known building blocks of modern physics is Heisenberg’s indeterminacy principle. Among the different statements of this fundamental property of the full quantum mechanical nature of physical reality, the uncertainty relation for energy and time has a special place. Its interpretation and its consequences have inspired continued research efforts for almost a century. In its modern formulation, the uncertainty relation is understood as setting a fundamental bound on how fast any quantum system can evolve. In this topical review we describe important milestones, such as the Mandelstam-Tamm and the Margolus-Levitin bounds on the quantum speed limit, and summarise recent applications in a variety of current research fields—including quantum information theory, quantum computing, and quantum thermodynamics amongst several others. To bring order and to provide an access point into the many different notions and concepts, we have grouped the various approaches into the minimal time approach and the geometric approach, where the former relies on quantum control theory, and the latter arises from measuring the distinguishability of quantum states. Due to the volume of the literature, this topical review can only present a snapshot of the current state-of-the-art and can never be fully comprehensive. Therefore, we highlight but a few works hoping that our selection can serve as a representative starting point for the interested reader.
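    For reference, the two bounds named above admit a compact statement. For evolution to an orthogonal state, with ΔE the energy variance and E the mean energy above the ground state, the Mandelstam-Tamm and Margolus-Levitin bounds combine into the unified quantum speed limit:

    ```latex
    \tau \;\ge\; \tau_{\mathrm{QSL}}
      = \max\!\left\{ \frac{\pi\hbar}{2\,\Delta E},\; \frac{\pi\hbar}{2\,E} \right\},
    \qquad
    \Delta E = \sqrt{\langle \hat{H}^{2}\rangle - \langle \hat{H}\rangle^{2}},
    \qquad
    E = \langle \hat{H}\rangle - E_{0}.
    ```

    The first term is the geometric (Mandelstam-Tamm) bound, the second the minimal-time (Margolus-Levitin) bound discussed in the review.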

  8. Vibrational spectrum, ab initio calculations, conformational stabilities and assignment of fundamentals of 1,2-dibromopropane

    NASA Astrophysics Data System (ADS)

    LaPlante, Arthur J.; Stidham, Howard D.

    2009-10-01

    The mid and far infrared and the Raman spectrum of 1,2-dibromopropane is reported in solid, liquid and gas. Several bands reported by earlier workers are not present in the spectrum of the purified material. Ab initio calculations of optimized geometry, energy, dipole moment, molar volume, vibrational spectrum and normal coordinate calculation were performed using the density functional B3LYP/6-311++g(3df,2pd), and the results used to assist a complete assignment of the 81 fundamental modes of vibration of the three conformers of 1,2-dibromopropane. Relative energies found conformer A the lowest, with G and G' at 815.6 and 871.4 cm-1 higher. The temperature dependence of the Raman spectrum of the liquid was investigated in the CCC bending region and the relative energies determined. It was found that the G' and G conformers lie 236 ± 11 and 327 ± 11 cm-1, respectively, above the A conformer, leading to the room temperature composition of the liquid as A, 65 ± 1; G', 21 ± 1; G, 14 ± 1%. It is apparent that the calculated highest energy conformer G' is stabilized more than the G conformer in the liquid. The G' conformer has the lowest molar volume effectively changing the interaction distance between conformers in the liquid, and enhancing the effect of its dipole moment.

  9. Vibrational spectrum, ab initio calculations, conformational stabilities and assignment of fundamentals of 1,2-dibromopropane.

    PubMed

    LaPlante, Arthur J; Stidham, Howard D

    2009-10-15

    The mid and far infrared and the Raman spectrum of 1,2-dibromopropane is reported in solid, liquid and gas. Several bands reported by earlier workers are not present in the spectrum of the purified material. Ab initio calculations of optimized geometry, energy, dipole moment, molar volume, vibrational spectrum and normal coordinate calculation were performed using the density functional B3LYP/6-311++g(3df,2pd), and the results used to assist a complete assignment of the 81 fundamental modes of vibration of the three conformers of 1,2-dibromopropane. Relative energies found conformer A the lowest, with G and G' at 815.6 and 871.4 cm(-1) higher. The temperature dependence of the Raman spectrum of the liquid was investigated in the CCC bending region and the relative energies determined. It was found that the G' and G conformers lie 236+/-11 and 327+/-11 cm(-1), respectively, above the A conformer, leading to the room temperature composition of the liquid as A, 65+/-1; G', 21+/-1; G, 14+/-1%. It is apparent that the calculated highest energy conformer G' is stabilized more than the G conformer in the liquid. The G' conformer has the lowest molar volume effectively changing the interaction distance between conformers in the liquid, and enhancing the effect of its dipole moment.

  10. Parametric Characterization of TES Detectors Under DC Bias

    NASA Technical Reports Server (NTRS)

    Chiao, Meng P.; Smith, Stephen James; Kilbourne, Caroline A.; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Ewin, Audrey J.; et al.

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) in the European Space Agency's (ESA's) Athena mission will be the first high-resolution X-ray spectrometer in space using a large-format transition-edge sensor microcalorimeter array. Motivated by optimization of detector performance for X-IFU, we have conducted an extensive campaign of parametric characterization on transition-edge sensor (TES) detectors with nominal geometries and physical properties in order to establish sensitivity trends relative to magnetic field, dc bias on detectors, operating temperature, and to improve our understanding of detector behavior relative to its fundamental properties such as thermal conductivity, heat capacity, and transition temperature. These results were used for validation of a simple linear detector model in which a small perturbation can be introduced to one or multiple parameters to estimate the error budget for X-IFU. We will show here results of our parametric characterization of TES detectors and briefly discuss the comparison with the TES model.

  11. Empowerment through education and science: three intersecting strands in the career of Griffith Edwards.

    PubMed

    Crome, Ilana

    2015-07-01

    This paper describes three important strands in the career of Griffith Edwards that define him as a leader and an innovator. Believing that education and science were critical for the development of addiction as a profession and as a field of inquiry, his approach was multi-faceted: educating all doctors to appreciate the fundamental issues in addiction; training psychiatrists in the complexity of 'dual diagnosis' and specific specialist intervention; and teaching that addiction could be a chronic condition which required care management over the life course. These three inter-related areas are directly related to the need for a range of practitioners to have an understanding of addiction so that patients can be properly managed. The greater our understanding of the nature of addiction behaviour, the more likely the potential to optimize treatment and train practitioners from different professional disciplines. © 2015 Society for the Study of Addiction.

  12. Exploring the complexity of quantum control optimization trajectories.

    PubMed

    Nanduri, Arun; Shir, Ofer M; Donovan, Ashley; Ho, Tak-San; Rabitz, Herschel

    2015-01-07

    The control of quantum system dynamics is generally performed by seeking a suitable applied field. The physical objective as a functional of the field forms the quantum control landscape, whose topology, under certain conditions, has been shown to contain no critical point suboptimal traps, thereby enabling effective searches for fields that give the global maximum of the objective. This paper addresses the structure of the landscape as a complement to topological critical point features. Recent work showed that landscape structure is highly favorable for optimization of state-to-state transition probabilities, in that gradient-based control trajectories to the global maximum value are nearly straight paths. The landscape structure is codified in the metric R ≥ 1.0, defined as the ratio of the length of the control trajectory to the Euclidean distance between the initial and optimal controls. A value of R = 1 would indicate an exactly straight trajectory to the optimal observable value. This paper extends the state-to-state transition probability results to the quantum ensemble and unitary transformation control landscapes. Again, nearly straight trajectories predominate, and we demonstrate that R can take values approaching 1.0 with high precision. However, the interplay of optimization trajectories with critical saddle submanifolds is found to influence landscape structure. A fundamental relationship necessary for perfectly straight gradient-based control trajectories is derived, wherein the gradient on the quantum control landscape must be an eigenfunction of the Hessian. This relation is an indicator of landscape structure and may provide a means to identify physical conditions when control trajectories can achieve perfect linearity. The collective favorable landscape topology and structure provide a foundation to understand why optimal quantum control can be readily achieved.
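    The straightness metric R defined above is straightforward to compute from a discretized control trajectory: the path length along the trajectory divided by the Euclidean distance between its endpoints. The sketch below (with made-up two-parameter trajectories) is ours, not the paper's code.

    ```python
    import math

    def straightness_ratio(trajectory):
        """R = (path length along the optimization trajectory) /
               (Euclidean distance from initial to final control).
        R >= 1, with R == 1 only for a perfectly straight trajectory."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        path_length = sum(dist(p, q) for p, q in zip(trajectory, trajectory[1:]))
        return path_length / dist(trajectory[0], trajectory[-1])

    straight = [(t, 2 * t) for t in (0.0, 0.5, 1.0)]   # collinear points: R = 1
    curved = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]      # a bend: R = sqrt(2)
    ```

    In practice each trajectory point would be a sampled control field during a gradient ascent run; here they are just 2-D points for illustration.
    
    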

  13. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  14. Thermal/structural Tailoring of Engine Blades (T/STAEBL) User's Manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.; Clevenger, W. B.; Arel, J. D.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual contains an overview of the system, fundamentals of the data block structure, and detailed descriptions of the inputs required by the optimizer. Additionally, the thermal analysis input requirements are described as well as the inputs required to perform a finite element blade vibrations analysis.

  15. Game Intelligence in Team Sports

    PubMed Central

    Lennartsson, Jan; Lidström, Nicklas; Lindberg, Carl

    2015-01-01

    We set up a game theoretic framework to analyze a wide range of situations from team sports. A fundamental idea is the concept of potential: the probability of the offense scoring the next goal minus the probability that the next goal is made by the defense. We develop categorical as well as continuous models, and obtain optimal strategies for both offense and defense. A main result is that the optimal defensive strategy is to minimize the maximum potential of all offensive strategies. PMID:25970581
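    In the categorical (finite-strategy) case, the stated main result is a minimax computation over a payoff matrix of potentials: the offense best-responds by maximizing within each row, and the defense picks the row with the smallest such maximum. The matrix values below are invented for illustration; this is a sketch of the principle, not the paper's model.

    ```python
    def best_defensive_strategy(potential):
        """potential[d][o] = P(offense scores next) - P(defense scores next)
        when defense plays strategy d and offense plays strategy o.
        The optimal pure defensive strategy minimizes the maximum potential."""
        worst_case = [max(row) for row in potential]  # offense best-responds per row
        d = min(range(len(potential)), key=lambda i: worst_case[i])
        return d, worst_case[d]

    # rows: defensive strategies, columns: offensive strategies (illustrative values)
    potential = [
        [0.30, 0.10, 0.25],
        [0.15, 0.20, 0.18],
        [0.05, 0.40, 0.22],
    ]
    d, value = best_defensive_strategy(potential)
    ```

    Here the row maxima are 0.30, 0.20, 0.40, so the minimax defense is strategy 1 with guaranteed potential at most 0.20.
    
    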

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivak, David; Crooks, Gavin

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.

  17. Particle-gas dynamics in the protoplanetary nebula

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.

    1991-01-01

    In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.

  18. Diversity in Biological Molecules

    ERIC Educational Resources Information Center

    Newbury, H. John

    2010-01-01

    One of the striking characteristics of fundamental biological processes, such as genetic inheritance, development and primary metabolism, is the limited amount of variation in the molecules involved. Natural selective pressures act strongly on these core processes and individuals carrying mutations and producing slightly sub-optimal versions of…

  19. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929
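    The core operation behind any Markov motif model — scoring a candidate site with position-specific transition probabilities — can be sketched as below. This is a plain first-order inhomogeneous Markov model with pseudocounts, not the OMiMa mixture itself, and the training sites are invented.

    ```python
    from collections import defaultdict

    def train_first_order(sites, alphabet="ACGT", pseudo=0.5):
        """Position-specific first-order Markov model: P(x_i | x_{i-1}) at each
        motif position i, estimated from aligned sites with pseudocounts."""
        L = len(sites[0])
        counts = [defaultdict(float) for _ in range(L)]
        for s in sites:
            prev = ""
            for i, c in enumerate(s):
                counts[i][(prev, c)] += 1
                prev = c
        def prob(i, prev, c):
            num = counts[i][(prev, c)] + pseudo
            den = sum(counts[i][(prev, a)] for a in alphabet) + pseudo * len(alphabet)
            return num / den
        return prob

    def score(prob, seq):
        """Likelihood of a candidate site under the trained motif model."""
        p, prev = 1.0, ""
        for i, c in enumerate(seq):
            p *= prob(i, prev, c)
            prev = c
        return p

    prob = train_first_order(["ACGT", "ACGA", "ACGT", "TCGT"])
    ```

    A motif finder would compare such scores against a background model; a mixture model like OMiMa additionally weights several component models of different order.
    
    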

  20. Optimizing Approximate Weighted Matching on Nvidia Kepler K40

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh

    Matching is a fundamental graph problem with numerous applications in science and engineering. While algorithms for computing optimal matchings are difficult to parallelize, approximation algorithms, on the other hand, generally compute high quality solutions and are amenable to parallelization. In this paper, we present efficient implementations of the current best algorithm for half-approximate weighted matching, the Suitor algorithm, on the Nvidia Kepler K-40 platform. We develop four variants of the algorithm that exploit hardware features to address key challenges for a GPU implementation. We also experiment with different combinations of work assigned to a warp. Using an exhaustive set of 269 inputs, we demonstrate that the new implementation outperforms the previous best GPU algorithm by 10 to 100× for over 100 instances, and from 100 to 1000× for 15 instances. We also demonstrate up to 20× speedup relative to 2 threads, and up to 5× relative to 16 threads, on an Intel Xeon platform with 16 cores for the same algorithm. The new algorithms and implementations provided in this paper will have a direct impact on several applications that repeatedly use matching as a key compute kernel. Further, algorithm designs and insights provided in this paper will benefit other researchers implementing graph algorithms on modern GPU architectures.
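    The half-approximation guarantee mentioned above can be illustrated with the classic sequential greedy algorithm (scan edges in decreasing weight order, keep an edge when both endpoints are free), which attains the same 1/2 bound. This sketch is the simple serial variant with a made-up graph, not the paper's parallel Suitor implementation, in which vertices instead "propose" to their heaviest eligible neighbors.

    ```python
    def greedy_matching(edges):
        """Half-approximate maximum-weight matching: take edges in decreasing
        weight order whenever both endpoints are still unmatched.
        The resulting weight is >= 1/2 of the optimal matching weight."""
        matched, matching, total = set(), [], 0.0
        for u, v, w in sorted(edges, key=lambda e: -e[2]):
            if u not in matched and v not in matched:
                matched |= {u, v}
                matching.append((u, v))
                total += w
        return matching, total

    # illustrative graph: path a-b-c-d with weights 3, 4, 3
    edges = [("a", "b", 3.0), ("b", "c", 4.0), ("c", "d", 3.0)]
    matching, total = greedy_matching(edges)
    ```

    On this path the greedy picks only (b, c) for weight 4, while the optimum {(a, b), (c, d)} has weight 6 — exactly the kind of instance that makes the 1/2 bound tight in spirit.
    
    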

  1. New Insights into the Surgical Management of Tetralogy of Fallot: Physiological Fundamentals and Clinical Relevance.

    PubMed

    Bove, Thierry; François, Katrien; De Wolf, Daniel

    2015-01-01

    The surgical treatment of tetralogy of Fallot can be considered a success story in the history of congenital heart disease. Since early outcome is no longer the main issue, the focus has moved to the late sequelae of TOF repair, i.e. pulmonary insufficiency and the secondary adaptation of the right ventricle. This review provides recent insights into the pathophysiological alterations of the right ventricle in relation to the reconstruction of the right ventricular outflow tract after repair of tetralogy of Fallot. Its clinical relevance is documented by addressing the policy changes regarding optimal management at the time of surgical repair, as well as by properly defining criteria and timing for late pulmonary valve implantation.

  2. Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.

  3. Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.

    2013-01-01

    Collinear Earth-Moon libration points have emerged as locations with immediate applications. These libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to the relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the Acceleration Reconnection and Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission orbit as a platform, the fundamental behavior of the trajectories is explored using Poincare maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.

  4. Quantum interference of position and momentum: A particle propagation paradox

    NASA Astrophysics Data System (ADS)

    Hofmann, Holger F.

    2017-08-01

    Optimal simultaneous control of position and momentum can be achieved by maximizing the probabilities of finding their experimentally observed values within two well-defined intervals. The assumption that particles move along straight lines in free space can then be tested by deriving a lower limit for the probability of finding the particle in a corresponding spatial interval at any intermediate time t. Here, it is shown that this lower limit can be violated by quantum superpositions of states confined within the respective position and momentum intervals. These violations of the particle propagation inequality show that quantum mechanics changes the laws of motion at a fundamental level, providing a different perspective on causality relations and time evolution in quantum mechanics.

  5. REVIEWS OF TOPICAL PROBLEMS: Experimental tests of general relativity: recent progress and future directions

    NASA Astrophysics Data System (ADS)

    Turyshev, S. G.

    2009-01-01

    Einstein's general theory of relativity is the standard theory of gravity, especially where the needs of astronomy, astrophysics, cosmology, and fundamental physics are concerned. As such, this theory is used for many practical purposes involving spacecraft navigation, geodesy, and time transfer. We review the foundations of general relativity, discuss recent progress in tests of relativistic gravity, and present motivations for the new generation of high-accuracy tests of new physics beyond general relativity. Space-based experiments in fundamental physics are presently capable of uniquely addressing important questions related to the fundamental laws of nature. We discuss the advances in our understanding of fundamental physics that are anticipated in the near future and evaluate the discovery potential of a number of recently proposed space-based gravitational experiments.

  6. WASTE-TO-RESOURCE: NOVEL MEMBRANE SYSTEMS FOR SAFE AND SUSTAINABLE BRINE MANAGEMENT

    EPA Science Inventory

    Decentralized waste-to-reuse systems will be optimized to maximize resource and energy recovery and minimize chemicals and energy use. This research will enhance fundamental knowledge on simultaneous heat and mass transport through membranes, lower process costs, and furthe...

  7. Early recognition of growth abnormalities permitting early intervention

    USDA-ARS?s Scientific Manuscript database

    Normal growth is a sign of good health. Monitoring for growth disturbances is fundamental to children's health care. Early detection and diagnosis of the causes of short stature allows management of underlying medical conditions, optimizing attainment of good health and normal adult height. This rev...

  8. A Novel Platform for Evaluating the Environmental Impacts on Bacterial Cellulose Production.

    PubMed

    Basu, Anindya; Vadanan, Sundaravadanam Vishnu; Lim, Sierin

    2018-04-10

    Bacterial cellulose (BC) is a biocompatible material with versatile applications. However, its large-scale production is challenged by the limited biological knowledge of the bacteria. The advent of synthetic biology has led the way to the development of BC-producing microbes as a novel chassis. Hence, investigation of optimal growth conditions for BC production and understanding of the fundamental biological processes are imperative. In this study, we report a novel analytical platform that can be used for studying the biology and optimizing the growth conditions of cellulose-producing bacteria. The platform is based on the surface growth pattern of the organism and allows us to confirm that the cellulose fibrils produced by the bacteria play a pivotal role in their chemotaxis. The platform efficiently determines the impacts of different growth conditions on cellulose production and is translatable to static culture conditions. It provides a means for fundamental biological studies of bacterial chemotaxis as well as a systematic approach to the rational design and development of scalable bioprocessing strategies for industrial production of bacterial cellulose.

  9. Theory of the mode stabilization mechanism in concave-micromirror-capped vertical-cavity surface-emitting lasers

    NASA Astrophysics Data System (ADS)

    Park, Si-Hyun; Park, Yeonsang; Jeon, Heonsu

    2003-08-01

    We have investigated theoretically the transverse mode stabilization mechanism in oxide-confined concave-micromirror-capped vertical-cavity surface-emitting lasers (CMC-VCSELs) as reported by Park et al. [Appl. Phys. Lett. 80, 183 (2002)]. From detailed numerical calculations on a model CMC-VCSEL structure, we found that mode discrimination factors appear to be periodic in the micromirror layer thickness with a periodicity of λ/2. We also found that there are two possible concave micromirror structures for the fundamental transverse mode laser operation. These structures can be grouped according to the thickness of the concave micromirror layer: whether it is an integer or a half-integer multiple of λ/2. The optimal micromirror curvature radius differs accordingly for each case. In an optimally designed CMC-VCSEL model structure, the fundamental transverse mode can be favored as much as 4, 8, and 13 times more strongly than the first, second, and third excited modes, respectively.

  10. Maximizing the return on taxpayers' investments in fundamental biomedical research.

    PubMed

    Lorsch, Jon R

    2015-05-01

    The National Institute of General Medical Sciences (NIGMS) at the U.S. National Institutes of Health has an annual budget of more than $2.3 billion. The institute uses these funds to support fundamental biomedical research and training at universities, medical schools, and other institutions across the country. My job as director of NIGMS is to work to maximize the scientific returns on the taxpayers' investments. I describe how we are optimizing our investment strategies and funding mechanisms, and how, in the process, we hope to create a more efficient and sustainable biomedical research enterprise.

  11. Maximizing the return on taxpayers' investments in fundamental biomedical research

    PubMed Central

    Lorsch, Jon R.

    2015-01-01

    The National Institute of General Medical Sciences (NIGMS) at the U.S. National Institutes of Health has an annual budget of more than $2.3 billion. The institute uses these funds to support fundamental biomedical research and training at universities, medical schools, and other institutions across the country. My job as director of NIGMS is to work to maximize the scientific returns on the taxpayers' investments. I describe how we are optimizing our investment strategies and funding mechanisms, and how, in the process, we hope to create a more efficient and sustainable biomedical research enterprise. PMID:25926703

  12. Mesoscale Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    Schlick, Tamar

    2009-03-01

    Eukaryotic chromatin is the fundamental protein/nucleic acid unit that stores the genetic material. Understanding how chromatin fibers fold and unfold under physiological conditions is important for interpreting fundamental biological processes like DNA replication and transcription regulation. Using a mesoscopic model of oligonucleosome chains and tailored sampling protocols, we elucidate the energetics of oligonucleosome folding/unfolding and the role of each histone tail, linker histones, and divalent ions in regulating chromatin structure. The resulting compact topologies reconcile features of the zigzag model (straight linker DNAs) with those of the solenoid model (bent linker DNAs) for optimal fiber organization and reveal the dynamic and energetic aspects involved.

  13. Demystifying the Millennial student: a reassessment in measures of character and engagement in professional education.

    PubMed

    DiLullo, Camille; McGee, Patricia; Kriebel, Richard M

    2011-01-01

    The characteristic profile of Millennial Generation students, driving many educational reforms, can be challenged by research in a number of fields including cognition, learning style, neurology, and psychology. This evidence suggests that the current aggregate view of the Millennial student may be less than accurate. Statistics show that Millennial students are considerably diverse in backgrounds, personalities, and learning styles. Data are presented regarding technological predilection, multitasking, reading, critical thinking, professional behaviors, and learning styles, which indicate that students in the Millennial Generation may not be as homogenous in fundamental learning strategies and attitudes as is regularly proposed. Although their common character traits have implications for instruction, no available evidence demonstrates that these traits impact their fundamental process of learning. Many curricular strategies have been implemented to address alleged changes in the manner by which Millennial students learn. None has clearly shown superior outcomes in academic accomplishments or developing expertise for graduating students and concerns persist related to the successful engagement of Millennial students in the process of learning. Four factors for consideration in general curricular design are proposed to address student engagement and optimal knowledge acquisition for 21st century learners. Copyright © 2011 American Association of Anatomists.

  14. Extracorporeal CO2 removal: Technical and physiological fundaments and principal indications.

    PubMed

    Romay, E; Ferrer, R

    2016-01-01

    In recent years, technological improvements have reduced the complexity of extracorporeal membrane oxygenation devices. This has enabled the development of specific devices for the extracorporeal removal of CO2. These devices have a simpler configuration than extracorporeal membrane oxygenation devices and use lower blood flows, which could reduce the potential complications. Experimental studies have demonstrated the feasibility, efficacy, and safety of extracorporeal removal of CO2 and some of its effects in humans. This technique was initially conceived as an adjunct therapy in patients with severe acute respiratory distress syndrome, as a tool to optimize protective ventilation. More recently, the use of this technique has allowed the emergence of a relatively new concept called "ultra-protective ventilation," whose effects are still to be determined. In addition, the extracorporeal removal of CO2 has been used in patients with exacerbated hypercapnic respiratory failure, with promising results. In this review we describe the physiological and technical fundamentals of this therapy and its variants, as well as an overview of the available clinical evidence, focused on its current potential. Copyright © 2015 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.

  15. Enhancing electrochemical water-splitting kinetics by polarization-driven formation of near-surface iron(0): an in situ XPS study on perovskite-type electrodes.

    PubMed

    Opitz, Alexander K; Nenning, Andreas; Rameshan, Christoph; Rameshan, Raffael; Blume, Raoul; Hävecker, Michael; Knop-Gericke, Axel; Rupprechter, Günther; Fleig, Jürgen; Klötzer, Bernhard

    2015-02-23

    In the search for optimized cathode materials for high-temperature electrolysis, mixed conducting oxides are highly promising candidates. This study provides fundamentally new insights into the relation between surface chemistry and electrocatalytic activity of lanthanum ferrite based electrolysis cathodes. To this end, near-ambient-pressure X-ray photoelectron spectroscopy (NAP-XPS) and impedance spectroscopy experiments were performed simultaneously on electrochemically polarized La0.6 Sr0.4 FeO3-δ (LSF) thin film electrodes. Under cathodic polarization the formation of Fe(0) on the LSF surface could be observed, which was accompanied by a strong improvement in the electrochemical water-splitting activity of the electrodes. This correlation suggests a fundamentally different water-splitting mechanism in the presence of the metallic iron species and may open novel paths in the search for electrodes with increased water-splitting activity. © 2014 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  16. Higgs varieties and fundamental groups

    NASA Astrophysics Data System (ADS)

    Bruzzo, Ugo; Graña Otero, Beatriz

    2018-06-01

    After reviewing some "fundamental group schemes" that can be attached to a variety by means of Tannaka duality, we consider the example of the Higgs fundamental group scheme, surveying its main properties and relations with the other fundamental groups, and giving some examples.

  17. Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology

    PubMed Central

    Palaparthi, Anil; Riede, Tobias

    2017-01-01

    Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563
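
    The core machinery of such a multiobjective search is the dominance relation over objective vectors (here, e.g., deviations from a target fundamental frequency and target sound pressure level). A minimal sketch of that relation and the resulting Pareto front, assuming both objectives are minimized; names are illustrative, not the study's NSGA-II code:

```python
# Pareto dominance and front extraction for minimization objectives.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the non-dominated objective vectors (NSGA-II's rank-0 set)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    NSGA-II builds on exactly this test, adding non-dominated sorting into successive fronts and crowding-distance selection to keep the front well spread.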

  18. An efficient method of reducing glass dispersion tolerance sensitivity

    NASA Astrophysics Data System (ADS)

    Sparrold, Scott W.; Shepard, R. Hamilton

    2014-12-01

    Constraining the Seidel aberrations of optical surfaces is a common technique for relaxing tolerance sensitivities in the optimization process. We observe that a lens's Abbe number tolerance is directly related to the magnitude by which its longitudinal and transverse color are permitted to vary in production. Based on this observation, we propose a computationally efficient and easy-to-use merit function constraint for relaxing dispersion tolerance sensitivity. Using the relationship between an element's chromatic aberration and dispersion sensitivity, we derive a fundamental limit for lens scale and power that is capable of achieving high production yield for a given performance specification, which provides insight into the point at which lens splitting or melt fitting becomes necessary. The theory is validated by comparing its predictions to a formal tolerance analysis of a Cooke triplet, and then applied to the design of a 1.5x visible linescan lens to illustrate optimization for reduced dispersion sensitivity. A selection of lenses in high-volume production is then used to corroborate the proposed method of dispersion tolerance allocation.
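
    The observation can be made concrete with the first-order thin-lens relation (an assumption here, not the paper's merit-function formulation): the longitudinal color of a thin lens of focal length f and Abbe number V is approximately f/V, so a tolerance dV perturbs the color by about -(f/V^2)·dV. A hedged numerical sketch:

```python
# First-order thin-lens chromatic relations (illustrative names).
def longitudinal_color(f_mm, abbe):
    """Longitudinal chromatic aberration of a thin lens: LCA ~ f / V."""
    return f_mm / abbe

def color_sensitivity(f_mm, abbe, d_abbe):
    """Change in longitudinal color for an Abbe-number deviation d_abbe:
    d(LCA) = d(f/V) = -(f / V**2) * dV."""
    return -(f_mm / abbe ** 2) * d_abbe
```

    For example, a 100 mm lens at V = 60 carries about 1.67 mm of longitudinal color, and a melt deviation of dV = 0.5 shifts it by only about 0.014 mm; a strongly powered element at low V is proportionally far more sensitive, which is the scale/power limit the paper quantifies.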

  19. Implementation and Development of the Incremental Hole Drilling Method for the Measurement of Residual Stress in Thermal Spray Coatings

    NASA Astrophysics Data System (ADS)

    Valente, T.; Bartuli, C.; Sebastiani, M.; Loreto, A.

    2005-12-01

    The experimental measurement of residual stresses originating within thick coatings deposited by thermal spray on solid substrates plays a role of fundamental relevance in the preliminary stages of coating design and process parameters optimization. The hole-drilling method is a versatile and widely used technique for the experimental determination of residual stress in the most superficial layers of a solid body. The consolidated procedure, however, can only be implemented for metallic bulk materials or for homogeneous, linear elastic, and isotropic materials. The main objective of the present investigation was to adapt the experimental method to the measurement of stress fields built up in ceramic coatings/metallic bonding layers structures manufactured by plasma spray deposition. A finite element calculation procedure was implemented to identify the calibration coefficients necessary to take into account the elastic modulus discontinuities that characterize the layered structure through its thickness. Experimental adjustments were then proposed to overcome problems related to the low thermal conductivity of the coatings. The number of calculation steps and experimental drilling steps were finally optimized.
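
    For reference, the uniform-stress hole-drilling relations (the ASTM E837 form for a 0°/45°/90° strain-gauge rosette) that such calibration coefficients feed into can be sketched as follows. For the layered ceramic-coating/bond-coat structures studied here, the constants A and B must come from the finite element model rather than the standard's homogeneous-material values; the sketch below is the generic formula, not the study's procedure:

```python
import math

def principal_stresses(e1, e2, e3, A, B):
    """Uniform-stress hole-drilling relations for relieved strains
    e1, e2, e3 from a 0/45/90-degree rosette. A and B are the material-
    and geometry-dependent calibration constants (FEM-derived for
    layered coatings). Returned pair is (p - r, p + r); which element is
    sigma_max depends on the sign convention of A and B (both are
    negative in the ASTM formulation)."""
    p = (e1 + e3) / (4.0 * A)
    r = math.hypot(e3 - e1, e1 + e3 - 2.0 * e2) / (4.0 * B)
    return p - r, p + r
```

    The principal direction follows from the same strains via beta = 0.5 * atan2(e1 + e3 - 2*e2, e3 - e1), measured from gauge 1.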

  20. Optimizing health system response to patient's needs: an argument for the importance of functioning information.

    PubMed

    Hopfe, Maren; Prodinger, Birgit; Bickenbach, Jerome E; Stucki, Gerold

    2017-06-06

    Current health systems are increasingly challenged to meet the needs of a growing number of patients living with chronic and often multiple health conditions. The primary outcome of care, it is argued, is not merely curing disease but also optimizing functioning over a person's life span. According to the World Health Organization, functioning can serve as the foundation for a comprehensive picture of health and augment the biomedical perspective with a broader picture of health as it plays out in people's lives. The crucial importance of information about patients' functioning for a well-performing health system, however, has yet to be sufficiently appreciated. This paper argues that functioning information is fundamental in all components of health systems and enhances the capacity of health systems to optimize patients' health and health-related needs. Beyond making sense of biomedical disease patterns, health systems can profit from using functioning information to improve interprofessional collaboration and achieve cross-cutting disease treatment outcomes. Implications for rehabilitation: Functioning is a key health outcome for rehabilitation within health systems. Information on restoring, maintaining, and optimizing human functioning can strengthen health system response to patients' health and rehabilitative needs. Functioning information guides health systems to achieve cross-cutting health outcomes that respond to the needs of the growing number of individuals living with chronic and multiple health conditions. Accounting for individuals' functioning helps to overcome fragmentation of care and to improve interprofessional collaboration across settings.

  1. Development and innovation of system resources to optimize patient care.

    PubMed

    Johnson, Thomas J; Brownlee, Michael J

    2018-04-01

    Various incremental and disruptive healthcare innovations that are occurring or may occur are discussed, with insights on how multihospital health systems can prepare for the future and optimize the continuity of patient care provided. Innovation in patient care is occurring at an ever-increasing rate, and this is especially true relative to the transition of patients through the care continuum. Health systems must leverage their ability to standardize and develop electronic health record (EHR) systems and other infrastructure necessary to support patient care and optimize outcomes; examples include 3D printing of patient-specific medication dosage forms to enhance precision medicine, the use of drones for medication delivery, and the expansion of telehealth capabilities to improve patient access to the services of pharmacists and other healthcare team members. Disruptive innovations in pharmacy services and delivery will alter how medications are prescribed and delivered to patients now and in the future. Further, technology may also fundamentally alter how and where pharmacists and pharmacy technicians care for patients. This article explores the various innovations that are occurring and that will likely occur in the future, particularly as they apply to multihospital health systems and patient continuity of care. Pharmacy departments that anticipate and are prepared to adapt to incremental and disruptive innovations can demonstrate value in the multihospital health system through strategies such as optimizing the EHR, identifying telehealth opportunities, supporting infrastructure, and integrating services. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  2. Aerodynamic shape optimization directed toward a supersonic transport using sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    This investigation was conducted from March 1994 to August 1995, primarily to extend and implement the previously developed aerodynamic design optimization methodologies for problems related to supersonic transport design. These methods had demonstrated promise to improve the designs (more specifically, the shapes) of aerodynamic surfaces by coupling optimization algorithms (OA) with Computational Fluid Dynamics (CFD) algorithms via sensitivity analyses (SA), together with surface definition methods from Computer Aided Design (CAD). The present extensions of this method and their supersonic implementations have produced wing section designs, delta wing designs, cranked-delta wing designs, and nacelle designs, all of which have been reported in the open literature. Although these configurations were too simplified to be of practical or commercial use, they served the algorithmic and proof-of-concept objectives of the study very well. The primary cause of the configurational simplifications, other than the usual simplify-to-study-the-fundamentals rationale, was the premature closing of the project. After only the first year of the originally intended three-year term, both the funds and the computer resources supporting the project were abruptly cut due to severe shortages at the funding agency. Nonetheless, it was shown that the extended methodologies could be viable options in optimizing the design of not only an isolated single-component configuration but also a multiple-component configuration in supersonic and viscous flow. This allowed designing with the mutual interference of the components as one of the constraints throughout the evolution of the shapes.

  3. Learning-Based Adaptive Optimal Tracking Control of Strict-Feedback Nonlinear Systems.

    PubMed

    Gao, Weinan; Jiang, Zhong-Ping

    2018-06-01

    This paper proposes a novel data-driven control approach to address the problem of adaptive optimal tracking for a class of nonlinear systems taking the strict-feedback form. Adaptive dynamic programming (ADP) and nonlinear output regulation theories are integrated for the first time to compute an adaptive near-optimal tracker without any a priori knowledge of the system dynamics. Fundamentally different from adaptive optimal stabilization problems, the solution to a Hamilton-Jacobi-Bellman (HJB) equation, not necessarily a positive definite function, cannot be approximated through the existing iterative methods. This paper proposes a novel policy iteration technique for solving positive semidefinite HJB equations with rigorous convergence analysis. A two-phase data-driven learning method is developed and implemented online by ADP. The efficacy of the proposed adaptive optimal tracking control methodology is demonstrated via a Van der Pol oscillator with time-varying exogenous signals.
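
    For the special case of linear dynamics with quadratic cost, the policy-iteration idea underlying such ADP schemes reduces to Kleinman's classical algorithm: evaluate the current feedback gain by solving a Lyapunov equation, then improve it. A model-based numpy sketch of that classical iteration (the paper's method is data-driven and needs no model; this only illustrates the evaluation/improvement loop it builds on):

```python
import numpy as np

def lyap(Ac, Qk):
    # Solve Ac.T @ P + P @ Ac = -Qk via Kronecker vectorization
    # (column-major vec, hence order="F").
    n = Ac.shape[0]
    M = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
    p = np.linalg.solve(M, -Qk.ravel(order="F"))
    return p.reshape(n, n, order="F")

def kleinman(A, B, Q, R, K0, iters=25):
    """Policy iteration for continuous-time LQR; K0 must stabilize A - B K0."""
    K = K0
    for _ in range(iters):
        Ac = A - B @ K                    # closed loop under current policy
        P = lyap(Ac, Q + K.T @ R @ K)     # policy evaluation (Lyapunov solve)
        K = np.linalg.solve(R, B.T @ P)   # policy improvement
    return K, P
```

    For the double integrator with Q = I and R = 1, the iteration converges to the known optimal gain K = [1, sqrt(3)]; the data-driven ADP variant in the paper replaces the Lyapunov solve with least squares on measured trajectory data.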

  4. Estimation of fundamental kinetic parameters of polyhydroxybutyrate fermentation process of Azohydromonas australica using statistical approach of media optimization.

    PubMed

    Gahlawat, Geeta; Srivastava, Ashok K

    2012-11-01

    Polyhydroxybutyrate or PHB is a biodegradable and biocompatible thermoplastic with many interesting applications in medicine, food packaging, and tissue engineering materials. The present study deals with the enhanced production of PHB by Azohydromonas australica using sucrose and the estimation of fundamental kinetic parameters of PHB fermentation process. The preliminary culture growth inhibition studies were followed by statistical optimization of medium recipe using response surface methodology to increase the PHB production. Later on batch cultivation in a 7-L bioreactor was attempted using optimum concentration of medium components (process variables) obtained from statistical design to identify the batch growth and product kinetics parameters of PHB fermentation. A. australica exhibited a maximum biomass and PHB concentration of 8.71 and 6.24 g/L, respectively in bioreactor with an overall PHB production rate of 0.75 g/h. Bioreactor cultivation studies demonstrated that the specific biomass and PHB yield on sucrose was 0.37 and 0.29 g/g, respectively. The kinetic parameters obtained in the present investigation would be used in the development of a batch kinetic mathematical model for PHB production which will serve as launching pad for further process optimization studies, e.g., design of several bioreactor cultivation strategies to further enhance the biopolymer production.
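
    The kind of batch kinetic model such parameters feed into can be sketched generically: logistic biomass growth combined with Luedeking-Piret (growth- plus non-growth-associated) product formation. The parameter values below are illustrative placeholders, not the fitted values from this study; only x_max echoes the reported 8.71 g/L maximum biomass.

```python
# Generic batch kinetics sketch: logistic growth + Luedeking-Piret
# product formation, integrated with a simple forward-Euler step.
def simulate_batch(mu_max=0.2, x_max=8.71, alpha=0.6, beta=0.01,
                   x0=0.1, p0=0.0, dt=0.01, hours=48.0):
    x, p = x0, p0
    for _ in range(int(hours / dt)):
        dx = mu_max * x * (1.0 - x / x_max)  # logistic biomass growth (g/L/h)
        dp = alpha * dx + beta * x           # growth- and non-growth-associated
        x += dx * dt
        p += dp * dt
    return x, p
```

    Fitting mu_max, alpha, and beta to batch bioreactor data is precisely the parameter-identification step the abstract describes, and the resulting model is the launching pad for fed-batch and other cultivation-strategy designs.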

  5. Deriving high-resolution protein backbone structure propensities from all crystal data using the information maximization device.

    PubMed

    Solis, Armando D

    2014-01-01

    The most informative probability distribution functions (PDFs) describing the Ramachandran phi-psi dihedral angle pair, a fundamental descriptor of backbone conformation of protein molecules, are derived from high-resolution X-ray crystal structures using an information-theoretic approach. The Information Maximization Device (IMD) is established, based on fundamental information-theoretic concepts, and then applied specifically to derive highly resolved phi-psi maps for all 20 single amino acid and all 8000 triplet sequences at an optimal resolution determined by the volume of current data. The paper shows that utilizing the latent information contained in all viable high-resolution crystal structures found in the Protein Data Bank (PDB), totaling more than 77,000 chains, permits the derivation of a large number of optimized sequence-dependent PDFs. This work demonstrates the effectiveness of the IMD and the superiority of the resulting PDFs by extensive fold recognition experiments and rigorous comparisons with previously published triplet PDFs. Because it automatically optimizes PDFs, IMD results in improved performance of knowledge-based potentials, which rely on such PDFs. Furthermore, it provides an easy computational recipe for empirically deriving other kinds of sequence-dependent structural PDFs with greater detail and precision. The high-resolution phi-psi maps derived in this work are available for download.
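
    The basic object being optimized, a discretized phi-psi PDF with smoothing, is simple to sketch; the IMD's contribution is choosing the grid resolution and smoothing to maximize information given the available data volume, which this toy version fixes by hand. All names are illustrative, not the paper's implementation:

```python
from collections import Counter

def phi_psi_pdf(angle_pairs, bins=36, pseudo=1.0):
    """Histogram estimate of a phi-psi PDF on a bins x bins grid over
    [-180, 180)^2 degrees, with additive (pseudocount) smoothing so no
    cell has zero probability. Returns a dict (i, j) -> probability."""
    width = 360.0 / bins
    counts = Counter()
    for phi, psi in angle_pairs:
        i = int((phi + 180.0) // width) % bins
        j = int((psi + 180.0) // width) % bins
        counts[(i, j)] += 1
    total = len(angle_pairs) + pseudo * bins * bins
    return {(i, j): (counts[(i, j)] + pseudo) / total
            for i in range(bins) for j in range(bins)}
```

    Sequence-dependent maps (per-residue or per-triplet) are just this estimate conditioned on the sequence context, which is why data volume limits the achievable resolution for the 8000 triplet classes.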

  6. Dynamical modeling and experiment for an intra-cavity optical parametric oscillator pumped by a Q-switched self-mode-locking laser

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Liu, Nianqiao; Song, Peng; Zhang, Haikun

    2016-11-01

    A rate-equation-based model for the Q-switched mode-locking (QML) intra-cavity OPO (IOPO) is developed which includes the behavior of the fundamental laser. The intensity fluctuation mechanism of the fundamental laser is introduced into the dynamics of a mode-locking OPO for the first time. In the derived model, the OPO nonlinear conversion is treated as a loss for the fundamental laser, so the QML signal profile originates from the QML fundamental laser. The rate equations are solved numerically for the case of an IOPO pumped by an electro-optic (EO) Q-switched self-mode-locking fundamental laser. For a 20 kHz EO repetition rate and 11.25 W pump power, the simulated temporal shape, signal average power, Q-switched pulsewidth, and Q-switched pulse energy are obtained from the rate equations. The signal trace and output power from an EO QML Nd3+: GdVO4/KTA IOPO are experimentally measured. The theoretical values from the rate equations agree well with the experimental results. The developed model explains the observed behavior, which is helpful for system optimization.

  7. Small Changes: Using Assessment to Direct Instructional Practices in Large-Enrollment Biochemistry Courses.

    PubMed

    Xu, Xiaoying; Lewis, Jennifer E; Loertscher, Jennifer; Minderhout, Vicky; Tienson, Heather L

    2017-01-01

    Multiple-choice assessments provide a straightforward way for instructors of large classes to collect data related to student understanding of key concepts at the beginning and end of a course. By tracking student performance over time, instructors receive formative feedback about their teaching and can assess the impact of instructional changes. The evidence of instructional effectiveness can in turn inform future instruction, and vice versa. In this study, we analyzed student responses on an optimized pretest and posttest administered during four different quarters in a large-enrollment biochemistry course. Student performance and the effect of instructional interventions related to three fundamental concepts (hydrogen bonding, bond energy, and pKa) were analyzed. After instructional interventions, a larger proportion of students demonstrated knowledge of these concepts compared with data collected before instructional interventions. Student responses trended from inconsistent to consistent and from incorrect to correct. The instructional effect was particularly remarkable for the later three quarters related to hydrogen bonding and bond energy. This study supports the use of multiple-choice instruments to assess the effectiveness of instructional interventions, especially in large classes, by providing instructors with quick and reliable feedback on student knowledge of each specific fundamental concept. © 2017 X. Xu et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  8. Housing Arrays Following Disasters: Social Vulnerability Considerations in Designing Transitional Communities

    ERIC Educational Resources Information Center

    Spokane, Arnold R.; Mori, Yoko; Martinez, Frank

    2013-01-01

    Displacement and dislocation from homes disrupt fundamental social processes necessary for optimal community functioning. Neighborhood and community social capital, collective efficacy and place attachment are social processes that may be compromised following disaster, conflict, and upheaval. A collaborative approach to the preplanning, design,…

  9. Computers and the Thought-Producing Self of the Young Child.

    ERIC Educational Resources Information Center

    Fomichova, Olga; Fomichov, Vladimir

    2000-01-01

    Discusses a new, informational-based cybernetic conception of the early development of child consciousness. Suggests a solution to the fundamental problem of formulating and creating the optimal cognitive preconditions of successful child-computer interaction, and analyzes some negative aspects of using intelligent computer and communications…

  10. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect of diatomic differential overlap approximation, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
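    As an illustration of how such a scale factor is applied in practice, here is a minimal sketch. The B3LYP zero-point-energy factor 0.986 is taken from the abstract; the harmonic frequencies are made-up example values, not data from the paper.

```python
# Sketch: applying an optimized ZPE scale factor to computed harmonic
# frequencies. The factor 0.986 (B3LYP) is from the abstract; the
# frequencies below are illustrative water-like values, not from the paper.
H_PLANCK = 6.62607015e-34  # J s
C_LIGHT = 2.99792458e10    # cm/s
N_A = 6.02214076e23        # 1/mol

def zero_point_energy_kj_mol(freqs_cm1, scale=1.0):
    """ZPE = (1/2) h c sum(nu_i), with each harmonic frequency scaled."""
    zpe_j = 0.5 * H_PLANCK * C_LIGHT * sum(scale * f for f in freqs_cm1)
    return zpe_j * N_A / 1000.0

freqs = [1595.0, 3657.0, 3756.0]   # example harmonic frequencies, cm^-1
print(zero_point_energy_kj_mol(freqs, scale=0.986))   # ~53 kJ/mol
```

    The scaled value is systematically below the raw harmonic estimate, which is the direction of correction the paper's factor-of-2.3 error reduction refers to.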

  11. Physical parameter estimation from porcine ex vivo vocal fold dynamics in an inverse problem framework.

    PubMed

    Gómez, Pablo; Schützenberger, Anne; Kniesburges, Stefan; Bohr, Christopher; Döllinger, Michael

    2018-06-01

This study presents a framework for a direct comparison of experimental vocal fold dynamics data to a numerical two-mass-model (2MM) by solving the corresponding inverse problem of which parameters lead to similar model behavior. The introduced 2MM features improvements such as a variable stiffness and a modified collision force. A set of physiologically sensible degrees of freedom is presented, and three optimization algorithms are compared on synthetic vocal fold trajectories. Finally, a total of 288 high-speed video recordings of six excised porcine larynges were optimized to validate the proposed framework. Particular focus lay on the subglottal pressure, as the experimental subglottal pressure is directly comparable to the model subglottal pressure. Fundamental frequency, amplitude and objective function values were also investigated. The employed 2MM is able to replicate the behavior of the porcine vocal folds very well. The model trajectories' fundamental frequency matches that of the experimental trajectories in [Formula: see text] of the recordings. The relative error of the model trajectory amplitudes is on average [Formula: see text]. The experiments feature a mean subglottal pressure of 10.16 (SD [Formula: see text]) [Formula: see text]; in the model, it was on average 7.61 (SD [Formula: see text]) [Formula: see text]. A tendency of the model to underestimate the subglottal pressure is found, but the model is capable of inferring trends in the subglottal pressure. The average absolute error between the subglottal pressure in the model and the experiment is 2.90 (SD [Formula: see text]) [Formula: see text] or [Formula: see text]. A detailed analysis of the factors affecting the accuracy in matching the subglottal pressure is presented.
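    The inverse-problem framework, finding parameters whose model output matches the measured trajectory, can be sketched with a deliberately simple one-parameter example (a synthetic sinusoidal trajectory and a brute-force parameter scan, not the 2MM itself):

```python
# Toy inverse problem: recover the fundamental frequency of a synthetic
# "experimental" trajectory by minimizing a trajectory-matching objective
# over candidate parameter values. Purely illustrative, not the 2MM.
import numpy as np

t = np.linspace(0.0, 0.1, 1000)
f_true = 220.0                            # synthetic "experimental" fundamental, Hz
observed = np.sin(2 * np.pi * f_true * t)

def objective(f):
    """Sum of squared residuals between model and observed trajectories."""
    model = np.sin(2 * np.pi * f * t)
    return np.sum((model - observed) ** 2)

candidates = np.arange(100.0, 400.0, 1.0)
f_est = candidates[np.argmin([objective(f) for f in candidates])]
print(f_est)   # 220.0
```

    The paper's setting replaces the scan with proper optimization algorithms and the sine model with the 2MM, but the structure (simulate, compare, update parameters) is the same.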

  12. Optimizing point-of-care testing in clinical systems management.

    PubMed

    Kost, G J

    1998-01-01

    The goal of improving medical and economic outcomes calls for leadership based on fundamental principles. The manager of clinical systems works collaboratively within the acute care center to optimize point-of-care testing through systematic approaches such as integrative strategies, algorithms, and performance maps. These approaches are effective and efficacious for critically ill patients. Optimizing point-of-care testing throughout the entire health-care system is inherently more difficult. There is potential to achieve high-quality testing, integrated disease management, and equitable health-care delivery. Despite rapid change and economic uncertainty, a macro-strategic, information-integrated, feedback-systems, outcomes-oriented approach is timely, challenging, effective, and uplifting to the creative human spirit.

  13. Interpreting Measures of Fundamental Movement Skills and Their Relationship with Health-Related Physical Activity and Self-Concept

    ERIC Educational Resources Information Center

    Jarvis, Stuart; Williams, Morgan; Rainer, Paul; Jones, Eleri Sian; Saunders, John; Mullen, Richard

    2018-01-01

    The aims of this study were to determine proficiency levels of fundamental movement skills using cluster analysis in a cohort of U.K. primary school children; and to further examine the relationships between fundamental movement skills proficiency and other key aspects of health-related physical activity behavior. Participants were 553 primary…

  14. Optimized multi-electrode stimulation increases focality and intensity at target

    NASA Astrophysics Data System (ADS)

    Dmochowski, Jacek P.; Datta, Abhishek; Bikson, Marom; Su, Yuzhuo; Parra, Lucas C.

    2011-08-01

Transcranial direct current stimulation (tDCS) provides a non-invasive tool to elicit neuromodulation by delivering current through electrodes placed on the scalp. The present clinical paradigm uses two relatively large electrodes to inject current through the head resulting in electric fields that are broadly distributed over large regions of the brain. In this paper, we present a method that uses multiple small electrodes (i.e. 1.2 cm diameter) and systematically optimize the applied currents to achieve effective and targeted stimulation while ensuring safety of stimulation. We found a fundamental trade-off between achievable intensity (at the target) and focality, and algorithms to optimize both measures are presented. When compared with large pad-electrodes (approximated here by a set of small electrodes covering 25 cm²), the proposed approach achieves electric fields which exhibit simultaneously greater focality (80% improvement) and higher target intensity (98% improvement) at cortical targets using the same total current applied. These improvements illustrate the previously unrecognized and non-trivial dependence of the optimal electrode configuration on the desired electric field orientation and the maximum total current (due to safety). Similarly, by exploiting idiosyncratic details of brain anatomy, the optimization approach significantly improves upon prior un-optimized approaches using small electrodes. The analysis also reveals the optimal use of conventional bipolar montages: maximally intense tangential fields are attained with the two electrodes placed at a considerable distance from the target along the direction of the desired field; when radial fields are desired, the maximum-intensity configuration consists of an electrode placed directly over the target with a distant return electrode.
To summarize, if a target location and stimulation orientation can be defined by the clinician, then the proposed technique is superior in terms of both focality and intensity as compared to previous solutions and is thus expected to translate into improved patient safety and increased clinical efficacy.
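    The bipolar character of the maximum-intensity solution can be sketched directly, assuming a precomputed linear forward model (the vector a below is random and purely illustrative; in practice it comes from a finite-element head model):

```python
# Sketch of the max-intensity montage under a total-current safety bound,
# for a linear forward model mapping electrode currents to the field at a
# target. The model vector here is random, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes = 16
a = rng.normal(size=n_electrodes)   # field at target per unit current, per electrode

def max_intensity_montage(a, i_max=2.0):
    """Currents x maximizing a @ x subject to sum(x) = 0 (current conservation)
    and sum(|x|) <= 2*i_max (safety). For a linear objective the optimum is a
    bipolar montage: +i_max at the best anode, -i_max at the best cathode."""
    x = np.zeros_like(a)
    x[np.argmax(a)] = i_max
    x[np.argmin(a)] = -i_max
    return x

x = max_intensity_montage(a)
print(a @ x, x.sum())
```

    Trading intensity for focality, as the paper does, replaces this linear objective with a constrained least-squares fit to a desired field map; the safety constraint stays the same.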

  15. 78 FR 48331 - Defense Federal Acquisition Regulation Supplement: Release of Fundamental Research Information...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-08

    ... Federal Acquisition Regulation Supplement: Release of Fundamental Research Information (DFARS Case 2012...) to provide guidance relating to the release of fundamental research information. This rule was... release of information on fundamental research projects and not safeguarding. This rule was initiated to...

  16. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
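    The core idea, shaping the input to increase Fisher information about the parameters, can be sketched on a toy first-order equivalent-circuit model. All parameter values and input shapes below are illustrative, not taken from the dissertation.

```python
# Sketch: quantifying parameter identifiability via the Fisher information
# matrix (FIM) for a toy R0 + (R1 || C1) equivalent-circuit model.
import numpy as np

def voltage(params, current, dt=1.0):
    """Terminal voltage of the toy circuit, forward-Euler integration."""
    r0, r1, c1 = params
    v1, out = 0.0, []
    for i in current:
        v1 += dt * (i / c1 - v1 / (r1 * c1))
        out.append(i * r0 + v1)
    return np.array(out)

def fisher_info(params, current, sigma=1e-3, eps=1e-6):
    """FIM = S^T S / sigma^2, sensitivities S by central differences."""
    cols = []
    for k in range(len(params)):
        h = eps * max(1.0, abs(params[k]))
        p_hi, p_lo = list(params), list(params)
        p_hi[k] += h
        p_lo[k] -= h
        cols.append((voltage(p_hi, current) - voltage(p_lo, current)) / (2 * h))
    s = np.stack(cols, axis=1)
    return s.T @ s / sigma**2

params = (0.05, 0.02, 500.0)                  # R0, R1 [ohm], C1 [F]; illustrative
t = np.arange(200.0)
constant = np.ones(200)
shaped = 1.0 + np.sin(2 * np.pi * t / 20.0)   # richer excitation, same mean
# D-optimality summary: larger log-determinant, better joint identifiability.
for u in (constant, shaped):
    print(np.linalg.slogdet(fisher_info(params, u))[1])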

  17. Context-Specific Effects on Reciprocity in Mentoring Relationships: Ethical Implications

    ERIC Educational Resources Information Center

    Shore, Wendelyn J.; Toyokawa, Teru; Anderson, Dana D.

    2008-01-01

    Reciprocity is fundamental to effective mentoring relationships. However, we argue that it is inappropriate, and perhaps unethical, to expect comparable levels of reciprocity in all mentoring relationships. Instead, contextual factors influence optimal levels of reciprocity. Foremost is the developmental stage of the protege, with less mature,…

  18. Using the "Zone" to Help Reach Every Learner

    ERIC Educational Resources Information Center

    Silver, Debbie

    2011-01-01

    Basically everything associated with maximizing student engagement, achievement, optimal learning environment, learning zone, and the like can be attributed to the work of Lev Vygotsky (1978). A Russian psychologist and social constructivist, Vygotsky (1896-1934) proposed a concept so fundamental to the theory of motivation that it undergirds…

  19. Strategies for Creating Supportive School Nutrition Environments

    ERIC Educational Resources Information Center

    Centers for Disease Control and Prevention, 2014

    2014-01-01

    Good nutrition is vital to optimal health. The school environment plays a fundamental role in shaping lifelong healthy behaviors and can have a powerful influence on students' eating habits. A supportive school nutrition environment includes multiple elements: access to healthy and appealing foods and beverages available to students in school…

  20. Farmer Brown v. Rancher Wyatt: Teaching the Coase Theorem

    ERIC Educational Resources Information Center

    Gourley, Patrick

    2018-01-01

    The Coase Theorem is a fundamental tenet of environmental economics and is taught to thousands of principles of microeconomics students each year. Its counterintuitive conclusion, that a Pareto optimal solution can result between private parties regardless of the initial allocation of property rights over a scarce resource, is difficult for…

  1. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  2. Fundamental Studies in Blow-Down and Cryogenic Cooling

    DTIC Science & Technology

    1993-09-01

Mudawar, I. and Anderson, T.M., "High Flux Electronic Cooling by Means of Pool Boiling - Part I: Parametric Investigation of the Effects of Coolant...Electronics, pp. 25-34, 1989. Mudawar, I. and Anderson, T.M., "High Flux Electronic Cooling by Means of Pool Boiling - Part II: Optimization of

  3. Spreadsheet Design: An Optimal Checklist for Accountants

    ERIC Educational Resources Information Center

    Barnes, Jeffrey N.; Tufte, David; Christensen, David

    2009-01-01

    Just as good grammar, punctuation, style, and content organization are important to well-written documents, basic fundamentals of spreadsheet design are essential to clear communication. In fact, the very principles of good writing should be integrated into spreadsheet workpaper design and organization. The unique contributions of this paper are…

  4. Fundamental Studies on Crashworthiness Design with Uncertainties in the System

    DTIC Science & Technology

    2005-01-01

    studied; examples include using the Response Surface Methods (RSM) and Design of Experiment (DOE) [2-4]. Space Mapping (SM) is another practical...Exposed to Impact Load Using a Space Mapping Technique,” Struct. Multidisc. Optim., Vol. 27, pp. 411-420 (2004). 6. Mayer, R. R., Kikuchi, N. and Scott

  6. Proof of concept : GTFS data as a basis for optimization of Oregon's regional and statewide transit networks.

    DOT National Transportation Integrated Search

    2014-05-01

    Assessing the current "state of health" of individual transit networks is a fundamental part of studies aimed at planning changes and/or upgrades to the transportation network serving a region. To be able to effect changes that benefit both the indiv...

  7. Active material, optical mode and cavity impact on nanoscale electro-optic modulation performance

    NASA Astrophysics Data System (ADS)

    Amin, Rubab; Suer, Can; Ma, Zhizhen; Sarpkaya, Ibrahim; Khurgin, Jacob B.; Agarwal, Ritesh; Sorger, Volker J.

    2017-10-01

Electro-optic modulation is a key function in optical data communication and possible future optical compute engines. The performance of modulators intricately depends on the interaction between the actively modulated material and the propagating waveguide mode. While a variety of high-performance modulators have been demonstrated, no comprehensive picture of what factors are most responsible for high performance has emerged so far. Here we report the first systematic and comprehensive analytical and computational investigation for high-performance compact on-chip electro-optic modulators by considering emerging active materials, modal considerations and cavity feedback at the nanoscale. We discover that the delicate interplay between the material characteristics and the optical mode properties plays a key role in defining the modulator performance. Based on physical tradeoffs between index modulation, loss, optical confinement factors and slow-light effects, we find that there exist combinations of bias, material and optical mode that yield efficient phase or amplitude modulation with acceptable insertion loss. Furthermore, we show how material properties in the epsilon-near-zero regime enable reduction of device length by as much as 15 times. Lastly, we introduce and apply a cavity-based electro-optic modulator figure of merit, Δλ/Δα, relating obtainable resonance tuning via phase shifting to the losses incurred through the fundamental Kramers-Kronig relations, suggesting device operating regions with optimized modulation-to-loss tradeoffs. This work paves the way for a holistic design rule of electro-optic modulators for high-density on-chip integration.

  8. Fundamental monogamy relation between contextuality and nonlocality.

    PubMed

    Kurzyński, Paweł; Cabello, Adán; Kaszlikowski, Dagomir

    2014-03-14

We show that the no-disturbance principle imposes a tradeoff between locally contextual correlations violating the Klyachko-Can-Binicioğlu-Shumovski inequality and spatially separated correlations violating the Clauser-Horne-Shimony-Holt inequality. The violation of one inequality forbids the violation of the other. We also obtain the corresponding monogamy relation imposed by quantum theory for a qutrit-qubit system. Our results show the existence of fundamental monogamy relations between contextuality and nonlocality that suggest that entanglement might be a particular form of a more fundamental resource.

  9. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. 
By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is developed. The theoretical underpinnings of the TSMC method and the flow of the algorithm are explained. Its performance is compared to other existing methods for financial option valuation. In the third, and final, problem, TSMC method is used to determine the conditions of feasibility for hybrid electric vehicles and fuel cell vehicles. There are many uncertainties related to the technologies and markets associated with new generation passenger vehicles. These uncertainties are analyzed in order to determine the conditions in which new generation vehicles can compete with established technologies.
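    The tabu search heuristic underlying all three applications can be sketched in a few lines: a local search with a short-term memory (the tabu list) and an aspiration criterion. The objective below is a toy weighted max-cut on a hand-made four-node graph, not any of the thesis's problems.

```python
# Minimal tabu search: flip-one-vertex neighborhood, tabu list of recently
# flipped vertices, aspiration when a tabu move beats the global best.
# The graph is a hand-made toy instance, purely illustrative.
from collections import deque

edges = [(0, 1, 3), (0, 2, 1), (1, 2, 2), (1, 3, 4), (2, 3, 5)]

def cut_value(assign):
    return sum(w for u, v, w in edges if assign[u] != assign[v])

def tabu_search(n=4, tenure=2, iters=50):
    current = [0] * n
    best, best_val = current[:], cut_value(current)
    tabu = deque(maxlen=tenure)          # recently flipped vertices are tabu
    for _ in range(iters):
        candidates = []
        for i in range(n):
            nb = current[:]
            nb[i] ^= 1
            val = cut_value(nb)
            # Aspiration: a tabu move is allowed if it beats the global best.
            if i not in tabu or val > best_val:
                candidates.append((val, i, nb))
        val, i, current = max(candidates)
        tabu.append(i)
        if val > best_val:
            best, best_val = current[:], val
    return best, best_val

best, best_val = tabu_search()
print(best, best_val)
```

    The tabu list is what lets the search climb out of local optima that would trap plain hill climbing; tenure and aspiration rules are the main tuning knobs.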

  10. The application of artificial intelligence in the optimal design of mechanical systems

    NASA Astrophysics Data System (ADS)

    Poteralski, A.; Szczepanik, M.

    2016-11-01

The paper is devoted to new computational techniques in mechanical optimization, where one tries to study, model, analyze and optimize very complex phenomena for which the more precise scientific tools of the past could not provide a complete, low-cost solution. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with an application of bio-inspired methods, namely evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. Structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants of piezoelectric materials modeled by the BEM, and to identify parameters in an acoustics problem modeled by the MFS.
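    Of the bio-inspired methods named above, the particle swarm optimizer is the simplest to sketch. The objective here is a stand-in sphere function, not one of the paper's structural problems, and the hyperparameters are conventional textbook choices.

```python
# Minimal particle swarm optimizer: each particle is pulled toward its own
# best position (cognitive term) and the swarm's best (social term).
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
print(best_val)
```

    In the structural setting of the paper, evaluating f means running an FEM, BEM or MFS analysis, so the number of objective evaluations is the dominant cost.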

  11. Exchange Rates and Fundamentals.

    ERIC Educational Resources Information Center

    Engel, Charles; West, Kenneth D.

    2005-01-01

    We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I (1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…

  12. The integrated manual and automatic control of complex flight systems

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1991-01-01

    Research dealt with the general area of optimal flight control synthesis for manned flight vehicles. The work was generic; no specific vehicle was the focus of study. However, the class of vehicles generally considered were those for which high authority, multivariable control systems might be considered, for the purpose of stabilization and the achievement of optimal handling characteristics. Within this scope, the topics of study included several optimal control synthesis techniques, control-theoretic modeling of the human operator in flight control tasks, and the development of possible handling qualities metrics and/or measures of merit. Basic contributions were made in all these topics, including human operator (pilot) models for multi-loop tasks, optimal output feedback flight control synthesis techniques; experimental validations of the methods developed, and fundamental modeling studies of the air-to-air tracking and flared landing tasks.

  13. Design and optimization of mixed flow pump impeller blades by varying semi-cone angle

    NASA Astrophysics Data System (ADS)

    Dash, Nehal; Roy, Apurba Kumar; Kumar, Kaushik

    2018-03-01

    The mixed flow pump is a cross between the axial and radial flow pump. These pumps are used in a large number of applications in modern fields. For the designing of these mixed flow pump impeller blades, a lot number of design parameters are needed to be considered which makes this a tedious task for which fundamentals of turbo-machinery and fluid mechanics are always prerequisites. The semi-cone angle of mixed flow pump impeller blade has a specified range of variations generally between 45o to 60o. From the literature review done related to this topic researchers have considered only a particular semi-cone angle and all the calculations are based on this very same semi-cone angle. By varying this semi-cone angle in the specified range, it can be verified if that affects the designing of the impeller blades for a mixed flow pump. Although a lot of methods are available for designing of mixed flow pump impeller blades like inverse time marching method, the pseudo-stream function method, Fourier expansion singularity method, free vortex method, mean stream line theory method etc. still the optimized design of the mixed flow pump impeller blade has been a cumbersome work. As stated above since all the available research works suggest or propose the blade designs with constant semi-cone angle, here the authors have designed the impeller blades by varying the semi-cone angle in a particular range with regular intervals for a Mixed-Flow pump. Henceforth several relevant impeller blade designs are obtained and optimization is carried out to obtain the optimized design (blade with optimal geometry) of impeller blade.

  14. Strategies for the Optimization of Natural Leads to Anticancer Drugs or Drug Candidates

    PubMed Central

    Xiao, Zhiyan; Morris-Natschke, Susan L.; Lee, Kuo-Hsiung

    2015-01-01

    Natural products have made significant contribution to cancer chemotherapy over the past decades and remain an indispensable source of molecular and mechanistic diversity for anticancer drug discovery. More often than not, natural products may serve as leads for further drug development rather than as effective anticancer drugs by themselves. Generally, optimization of natural leads into anticancer drugs or drug candidates should not only address drug efficacy, but also improve ADMET profiles and chemical accessibility associated with the natural leads. Optimization strategies involve direct chemical manipulation of functional groups, structure-activity relationship-directed optimization and pharmacophore-oriented molecular design based on the natural templates. Both fundamental medicinal chemistry principles (e.g., bio-isosterism) and state-of-the-art computer-aided drug design techniques (e.g., structure-based design) can be applied to facilitate optimization efforts. In this review, the strategies to optimize natural leads to anticancer drugs or drug candidates are illustrated with examples and described according to their purposes. Furthermore, successful case studies on lead optimization of bioactive compounds performed in the Natural Products Research Laboratories at UNC are highlighted. PMID:26359649

  15. The Relationships among Fundamental Motor Skills, Health-Related Physical Fitness, and Body Fatness in South Korean Adolescents with Mental Retardation

    ERIC Educational Resources Information Center

    Foley, John T.; Harvey, Stephen; Chun, Hae-Ja; Kim, So-Yeun

    2008-01-01

    The purpose of this study was to examine the following: (a) the relationships among the latent constructs of fundamental motor skills (FMS), health-related physical fitness (HRF), and observed body fatness in South Korean adolescents with mental retardation (MR); (b) the indirect effect of fundamental motor skills on body fatness when mediated by…

  16. Development of a diode laser heterodyne spectrometer and observations of silicon monoxide in sunspots. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Glenar, D. A.

    1981-01-01

A state-of-the-art tunable diode laser infrared heterodyne spectrometer was designed and constructed for ground based observations throughout the 8 to 12 micron atmospheric window. The instrument was optimized for use with presently available tunable diode lasers, and was designed as a flexible field system for use with large reflecting telescopes. The instrument was aligned and calibrated using laboratory and astronomical sources. Observations of SiO fundamental (v = 1-0) and hot band (v = 2-1) absorption features were made in sunspots near 8 microns using the spectrometer. The data permit an unambiguous determination of the temperature-pressure relation in the upper layers of the umbral atmosphere, and support the sunspot model suggested by Stellmacher and Wiehr.

  17. Review of image processing fundamentals

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1985-01-01

    Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation is considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit and a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies introduced by the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
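
    The convolution-multiplication duality stated in this abstract is straightforward to verify numerically. A minimal pure-Python sketch (a naive O(n²) DFT for illustration only, not the paper's methods):

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def circular_convolve(x, h):
    """Direct circular convolution in the real (spatial) domain."""
    n = len(x)
    return [sum(x[m] * h[(t - m) % n] for m in range(n)) for t in range(n)]

# Convolution in one domain equals pointwise multiplication in the other:
x = [1.0, 2.0, 3.0, 4.0]
h = [0.25, 0.25, 0.25, 0.25]          # 4-point moving average
direct = circular_convolve(x, h)
via_frequency = [c.real for c in idft([a * b for a, b in zip(dft(x), dft(h))])]
# both give [2.5, 2.5, 2.5, 2.5] up to rounding (the mean of x at every sample)
```

    The same identity is what makes FFT-based fast convolution practical in real image-processing pipelines.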

  18. Acidic and basic drugs in medicinal chemistry: a perspective.

    PubMed

    Charifson, Paul S; Walters, W Patrick

    2014-12-11

    The acid/base properties of a molecule are among the most fundamental for drug action. However, they are often overlooked in a prospective design manner unless it has been established that a certain ionization state (e.g., quaternary base or presence of a carboxylic acid) appears to be required for activity. In medicinal chemistry optimization programs it is relatively common to attenuate basicity to circumvent undesired effects such as lack of biological selectivity or safety risks such as hERG or phospholipidosis. However, teams may not prospectively explore a range of carefully chosen compound pKa values as part of an overall chemistry strategy or design hypothesis. This review summarizes the potential advantages and disadvantages of both acidic and basic drugs and provides some new analyses based on recently available public data.

  19. Early Childhood Physical Education. The Essential Elements.

    ERIC Educational Resources Information Center

    Gabbard, Carl

    1988-01-01

    Details are presented regarding the essential elements of an effective early childhood physical education curriculum. Components include movement awareness, fundamental locomotor skills, fundamental nonlocomotor skills, fundamental manipulative skills, and health-related fitness. (CB)

  20. 13C-based metabolic flux analysis: fundamentals and practice.

    PubMed

    Yang, Tae Hoon

    2013-01-01

    Isotope-based metabolic flux analysis is one of the emerging technologies applied to system level metabolic phenotype characterization in metabolic engineering. Among the developed approaches, 13C-based metabolic flux analysis has been established as a standard tool and has been widely applied to quantitative pathway characterization of diverse biological systems. To implement 13C-based metabolic flux analysis in practice, comprehending the underlying mathematical and computational modeling fundamentals is of importance along with carefully conducted experiments and analytical measurements. Such knowledge is also crucial when designing 13C-labeling experiments and properly acquiring key data sets essential for in vivo flux analysis implementation. In this regard, the modeling fundamentals of 13C-labeling systems and analytical data processing are the main topics we will deal with in this chapter. Along with this, the relevant numerical optimization techniques are addressed to help implementation of the entire computational procedures aiming at 13C-based metabolic flux analysis in vivo.

  1. Fundamental bounds on the operation of Fano nonlinear isolators

    NASA Astrophysics Data System (ADS)

    Sounas, Dimitrios L.; Alù, Andrea

    2018-03-01

    Nonlinear isolators have attracted significant attention for their ability to break reciprocity and provide isolation without the need of an external bias. A popular approach for the design of such devices is based on Fano resonators, which, due to their sharp frequency response, can lead to very large isolation for moderate input intensities. Here, we show that, independent of their specific implementation, these devices are subject to fundamental bounds on the transmission coefficient in the forward direction versus their quality factor, input power, and nonreciprocal intensity range. Our analysis quantifies a general tradeoff between forward transmission and these metrics, stemming directly from time-reversal symmetry, and shows that unitary transmission is possible only for vanishing nonreciprocity. Our results also shed light on the operation of resonant nonlinear isolators, reveal their fundamental limitations, and provide indications on how it is possible to design nonlinear isolators with optimal performance.

  2. Double-trap measurement of the proton magnetic moment at 0.3 parts per billion precision.

    PubMed

    Schneider, Georg; Mooser, Andreas; Bohman, Matthew; Schön, Natalie; Harrington, James; Higuchi, Takashi; Nagahama, Hiroki; Sellner, Stefan; Smorra, Christian; Blaum, Klaus; Matsuda, Yasuyuki; Quint, Wolfgang; Walz, Jochen; Ulmer, Stefan

    2017-11-24

    Precise knowledge of the fundamental properties of the proton is essential for our understanding of atomic structure as well as for precise tests of fundamental symmetries. We report on a direct high-precision measurement of the magnetic moment μ_p of the proton in units of the nuclear magneton μ_N. The result, μ_p = 2.79284734462(±0.00000000082) μ_N, has a fractional precision of 0.3 parts per billion, improves the previous best measurement by a factor of 11, and is consistent with the currently accepted value. This was achieved with the use of an optimized double-Penning-trap technique. Provided a similar measurement of the antiproton magnetic moment can be performed, this result will enable a test of the fundamental symmetry between matter and antimatter in the baryonic sector at the 10^-10 level.

  3. C-band fundamental/first-order mode converter based on multimode interference coupler on InP substrate

    NASA Astrophysics Data System (ADS)

    Limeng, Zhang; Dan, Lu; Zhaosong, Li; Biwei, Pan; Lingjuan, Zhao

    2016-12-01

    The design, fabrication, and characterization of a fundamental/first-order mode converter based on a multimode interference coupler on an InP substrate are reported. Detailed optimization of the device parameters was carried out using the 3D beam propagation method. In the experiments, the fabricated mode converter realized mode conversion from the fundamental mode to the first-order mode in the wavelength range of 1530-1565 nm with an excess loss of less than 3 dB. Moreover, the LP01 and LP11 fiber modes were successfully excited from a few-mode fiber by using the device. This InP-based mode converter is a possible candidate for integrated transceivers for future mode-division multiplexing systems. Project supported by the National Basic Research Program of China (No. 2014CB340102) and in part by the National Natural Science Foundation of China (Nos. 61274045, 61335009).

  4. Fundamental Design Principles for Transcription-Factor-Based Metabolite Biosensors.

    PubMed

    Mannan, Ahmad A; Liu, Di; Zhang, Fuzhong; Oyarzún, Diego A

    2017-10-20

    Metabolite biosensors are central to current efforts toward precision engineering of metabolism. Although most research has focused on building new biosensors, their tunability remains poorly understood and is fundamental for their broad applicability. Here we asked how genetic modifications shape the dose-response curve of biosensors based on metabolite-responsive transcription factors. Using the lac system in Escherichia coli as a model system, we built promoter libraries with variable operator sites that reveal interdependencies between biosensor dynamic range and response threshold. We developed a phenomenological theory to quantify such design constraints in biosensors with various architectures and tunable parameters. Our theory reveals a maximal achievable dynamic range and exposes tunable parameters for orthogonal control of dynamic range and response threshold. Our work sheds light on fundamental limits of synthetic biology designs and provides quantitative guidelines for biosensor design in applications such as dynamic pathway control, strain optimization, and real-time monitoring of metabolism.
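
    The interplay between dynamic range and response threshold that this record describes can be pictured with a generic dose-response curve. A hypothetical Hill-type sketch (an illustration with assumed parameter names, not the paper's phenomenological theory):

```python
def hill_response(inducer, basal, dyn_range, K, n):
    """Hypothetical Hill-type biosensor dose-response sketch: basal output
    plus a saturating activation term. K sets the response threshold
    (half-maximal inducer concentration) and dyn_range sets the output
    span, so the two tunable quantities discussed in the abstract appear
    as separate parameters."""
    activation = inducer**n / (K**n + inducer**n)
    return basal + dyn_range * activation

# at inducer == K the sensor sits exactly halfway through its dynamic range
half_max = hill_response(10.0, basal=1.0, dyn_range=99.0, K=10.0, n=2)  # 50.5
```

    In a real transcription-factor biosensor these parameters are not independent; the record's point is precisely that genetic modifications move them in coupled ways.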

  5. FPGA Techniques Based New Hybrid Modulation Strategies for Voltage Source Inverters

    PubMed Central

    Sudha, L. U.; Baskaran, J.; Elankurisil, S. A.

    2015-01-01

    This paper corroborates three different hybrid modulation strategies suitable for a single-phase voltage source inverter. The proposed method is formulated using fundamental switching and carrier-based pulse width modulation methods. Its main aim is to optimize a specific performance criterion, such as minimization of the total harmonic distortion (THD), lower-order harmonics, switching losses, and heat losses. Thus, the harmonic pollution in the power system is reduced and the power quality is augmented, with a better harmonic profile for a target fundamental output voltage. The proposed modulation strategies are simulated in MATLAB r2010a and implemented in a Xilinx Spartan 3E-500 FG 320 FPGA processor. The feasibility of these modulation strategies is authenticated through simulation and experimental results. PMID:25821852
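
    As a toy illustration of the THD metric this record minimizes (a generic textbook computation, not the authors' FPGA implementation), consider the ideal square wave, whose Fourier amplitudes are 4/(πk) for odd harmonics k and zero for even ones:

```python
import math

def thd(harmonic_amps):
    """Total harmonic distortion: RMS of the harmonics above the
    fundamental, divided by the fundamental's amplitude.
    harmonic_amps[0] is the fundamental, harmonic_amps[1:] the rest."""
    fundamental, harmonics = harmonic_amps[0], harmonic_amps[1:]
    return math.sqrt(sum(a * a for a in harmonics)) / fundamental

# Fourier amplitudes of an ideal square wave: 4/(pi*k) for odd k, 0 for even k.
amps = [4 / (math.pi * k) if k % 2 == 1 else 0.0 for k in range(1, 20002)]
square_wave_thd = thd(amps)  # approaches sqrt(pi**2/8 - 1), about 0.4834
```

    A hybrid modulation scheme earns its keep by pushing this figure far below the raw square-wave value for the same fundamental output voltage.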

  6. Ontic structural realism and quantum field theory: Are there intrinsic properties at the most fundamental level of reality?

    NASA Astrophysics Data System (ADS)

    Berghofer, Philipp

    2018-05-01

    Ontic structural realism refers to the novel, exciting, and widely discussed basic idea that the structure of physical reality is genuinely relational. In its radical form, the doctrine claims that there are, in fact, no objects but only structure, i.e., relations. More moderate approaches state that objects have only relational but no intrinsic properties. In its most moderate and most tenable form, ontic structural realism assumes that at the most fundamental level of physical reality there are only relational properties. This means that the most fundamental objects possess only relational but no non-reducible intrinsic properties. The present paper will argue that our currently best physics refutes even this most moderate form of ontic structural realism. More precisely, I will claim (1) that, according to quantum field theory, the most fundamental objects of matter are quantum fields and not particles, and show (2) that, according to the Standard Model, quantum fields have intrinsic non-relational properties.

  7. Individual Differences in the Frequency-Following Response: Relation to Pitch Perception

    PubMed Central

    Coffey, Emily B. J.; Colagrosso, Emilia M. G.; Lehmann, Alexandre; Schönwiesner, Marc; Zatorre, Robert J.

    2016-01-01

    The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system’s representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. 
The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system. PMID:27015271

  8. Mapping Optimal Charge Density and Length of ROMP-Based PTDMs for siRNA Internalization.

    PubMed

    Caffrey, Leah M; deRonde, Brittany M; Minter, Lisa M; Tew, Gregory N

    2016-10-10

    A fundamental understanding of how polymer structure impacts internalization and delivery of biologically relevant cargoes, particularly small interfering ribonucleic acid (siRNA), is of critical importance to the successful design of improved delivery reagents. Herein we report the use of ring-opening metathesis polymerization (ROMP) methods to synthesize two series of guanidinium-rich protein transduction domain mimics (PTDMs): one based on an imide scaffold that contains one guanidinium moiety per repeat unit, and another based on a diester scaffold that contains two guanidinium moieties per repeat unit. By varying both the degree of polymerization and, in effect, the relative number of cationic charges in each PTDM, the performances of the two ROMP backbones for siRNA internalization were evaluated and compared. Internalization of fluorescently labeled siRNA into Jurkat T cells demonstrated that fluorescein isothiocyanate (FITC)-siRNA internalization had a charge content dependence, with PTDMs containing approximately 40 to 60 cationic charges facilitating the most internalization. Despite this charge content dependence, the imide scaffold yielded much lower viabilities in Jurkat T cells than the corresponding diester PTDMs with similar numbers of cationic charges, suggesting that the diester scaffold is preferred for siRNA internalization and delivery applications. These developments will not only improve our understanding of the structural factors necessary for optimal siRNA internalization, but will also guide the future development of optimized PTDMs for siRNA internalization and delivery.

  9. Optimizing the Anti-VEGF Treatment Strategy for Neovascular Age-Related Macular Degeneration: From Clinical Trials to Real-Life Requirements.

    PubMed

    Mantel, Irmela

    2015-06-01

    This Perspective discusses the pertinence of variable dosing regimens with anti-vascular endothelial growth factor (VEGF) for neovascular age-related macular degeneration (nAMD) with regard to real-life requirements. After the initial pivotal trials of anti-VEGF therapy, the variable dosing regimens pro re nata (PRN), Treat-and-Extend, and Observe-and-Plan, a recently introduced regimen, aimed to optimize the anti-VEGF treatment strategy for nAMD. The PRN regimen showed good visual results but requires monthly monitoring visits and can therefore be difficult to implement. Moreover, application of the PRN regimen revealed inferior results in real-life circumstances due to problems with resource allocation. The Treat-and-Extend regimen uses an interval based approach and has become widely accepted for its ease of preplanning and the reduced number of office visits required. The parallel development of the Observe-and-Plan regimen demonstrated that the future need for retreatment (interval) could be reliably predicted. Studies investigating the observe-and-plan regimen also showed that this could be used in individualized fixed treatment plans, allowing for dramatically reduced clinical burden and good outcomes, thus meeting the real life requirements. This progressive development of variable dosing regimens is a response to the real-life circumstances of limited human, technical, and financial resources. This includes an individualized treatment approach, optimization of the number of retreatments, a minimal number of monitoring visits, and ease of planning ahead. The Observe-and-Plan regimen achieves this goal with good functional results. Translational Relevance: This perspective reviews the process from the pivotal clinical trials to the development of treatment regimens which are adjusted to real life requirements. 
The article discusses this translational process, which, although not the classical translation from fundamental to clinical research but rather a subsequent process following the pivotal clinical trials, represents an important translational step from the clinical proof of efficacy to optimization in terms of patients' and clinics' needs. The related scientific procedure includes the exploration of the concept, evaluation of safety, and finally proof of efficacy.

  10. Optimum structural design with plate bending elements - A survey

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Prasad, B.

    1981-01-01

    A survey is presented of recently published papers in the field of optimum structural design of plates, largely with respect to the minimum-weight design of plates subject to such design requirements as fundamental frequency maximization. It is shown that, due to the availability of powerful computers, the trend in optimum plate design is away from methods tailored to specific geometry and loads and toward methods that can be easily programmed for any kind of plate, such as finite element methods. A corresponding shift is seen in optimization from variational techniques to numerical optimization algorithms. Among the topics covered are fully stressed design and optimality criteria, mathematical programming, smooth and ribbed designs, design against plastic collapse, buckling constraints, and vibration constraints.

  11. Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)

    NASA Astrophysics Data System (ADS)

    Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi

    2017-03-01

    The formation of an optimal portfolio is a method that can help investors to minimize risk and optimize profitability. One model for the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction of portfolio return as a basis for constructing the asset-weighting model. The BL model has two fundamental problems: the assumption of normality, and the estimation of the parameters in the Bayesian market prior framework when returns do not follow a normal distribution. This study provides an alternative solution in which the stock returns and investor views of the BL model are modelled with non-normal distributions.
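
    The core Black-Litterman mechanic that the paper builds on, blending an equilibrium prior with investor views, reduces in the single-asset case to a precision-weighted average. A minimal scalar sketch (a hypothetical illustration of the standard normal-case formula, not the paper's non-normal extension):

```python
def bl_posterior_mean(pi, tau_sigma2, q, omega):
    """Single-asset Black-Litterman posterior mean with one absolute view:
    a precision-weighted blend of the equilibrium (prior) return pi, whose
    prior variance is tau*sigma^2, and the investor's view q, whose
    uncertainty (view variance) is omega."""
    w_prior = 1.0 / tau_sigma2   # prior precision
    w_view = 1.0 / omega         # view precision
    return (w_prior * pi + w_view * q) / (w_prior + w_view)

# equal confidence in prior and view -> posterior is the midpoint, ~0.07
blended = bl_posterior_mean(pi=0.05, tau_sigma2=0.01, q=0.09, omega=0.01)
```

    In the full multi-asset model the same blend is carried out with covariance matrices; the non-normal variant studied here replaces the Gaussian prior and view distributions.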

  12. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results, in search of an optimal path, to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g., cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
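
    The checkpoint-and-recompute idea can be sketched for global pairwise alignment. This is a hypothetical Python illustration with naive evenly spaced checkpoints, not the paper's optimal strategy or its C++ implementation:

```python
def align_checkpointed(a, b, k=4, match=1, mismatch=-1, gap=-1):
    """Global alignment backtrace that stores only every k-th DP row
    (plus row 0) during the forward pass, then recomputes the rows
    between checkpoints on demand during the backtrace."""
    n, m = len(a), len(b)

    def next_row(prev, i):
        # DP row i (1-based over a) from row i-1
        row = [prev[0] + gap]
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            row.append(max(prev[j - 1] + s, prev[j] + gap, row[j - 1] + gap))
        return row

    checkpoints = {0: [gap * j for j in range(m + 1)]}
    row = checkpoints[0]
    for i in range(1, n + 1):          # forward pass, O(m*n/k) memory
        row = next_row(row, i)
        if i % k == 0:
            checkpoints[i] = row
    score = row[m]

    def block(lo, hi):
        # recompute rows lo..hi starting from the checkpoint at lo
        rows, r = {lo: checkpoints[lo]}, checkpoints[lo]
        for i in range(lo + 1, hi + 1):
            r = next_row(r, i)
            rows[i] = r
        return rows

    i, j, ops = n, m, []
    while i > 0 or j > 0:              # backtrace with recomputation
        rows = block((max(i - 1, 0) // k) * k, i)
        cur = rows[i]
        if i > 0 and j > 0 and cur[j] == rows[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            ops.append(("sub", a[i - 1], b[j - 1])); i -= 1; j -= 1
        elif i > 0 and cur[j] == rows[i - 1][j] + gap:
            ops.append(("del", a[i - 1])); i -= 1
        else:
            ops.append(("ins", b[j - 1])); j -= 1
    ops.reverse()
    return score, ops
```

    The paper's contribution is choosing the checkpoint placement optimally rather than at fixed intervals, which this sketch does not attempt.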

  13. A forced damped oscillation framework for undulatory swimming provides new insights into how propulsion arises in active and passive swimming.

    PubMed

    Bhalla, Amneet Pal Singh; Griffith, Boyce E; Patankar, Neelesh A

    2013-01-01

    A fundamental issue in locomotion is to understand how muscle forcing produces apparently complex deformation kinematics leading to movement of animals like undulatory swimmers. The question of whether complicated muscle forcing is required to create the observed deformation kinematics is central to the understanding of how animals control movement. In this work, a forced damped oscillation framework is applied to a chain-link model for undulatory swimming to understand how forcing leads to deformation and movement. A unified understanding of swimming, caused by muscle contractions ("active" swimming) or by forces imparted by the surrounding fluid ("passive" swimming), is obtained. We show that the forcing triggers the first few deformation modes of the body, which in turn cause the translational motion. We show that relatively simple forcing patterns can trigger seemingly complex deformation kinematics that lead to movement. For given muscle activation, the forcing frequency relative to the natural frequency of the damped oscillator is important for the emergent deformation characteristics of the body. The proposed approach also leads to a qualitative understanding of optimal deformation kinematics for fast swimming. These results, based on a chain-link model of swimming, are confirmed by fully resolved computational fluid dynamics (CFD) simulations. Prior results from the literature on the optimal value of stiffness for maximum speed are explained.
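
    The point about forcing frequency relative to the natural frequency of the damped oscillator can be illustrated with the textbook forced damped oscillator response (a generic sketch, not the paper's chain-link model or CFD simulations):

```python
import math

def steady_state_amplitude(F, omega_n, zeta, omega_f):
    """Steady-state amplitude of the forced damped oscillator
    x'' + 2*zeta*omega_n*x' + omega_n**2 * x = F*cos(omega_f*t)."""
    return F / math.sqrt((omega_n**2 - omega_f**2)**2
                         + (2 * zeta * omega_n * omega_f)**2)

# forcing near the natural frequency excites the mode far more strongly
near = steady_state_amplitude(F=1.0, omega_n=2.0, zeta=0.1, omega_f=2.0)
far = steady_state_amplitude(F=1.0, omega_n=2.0, zeta=0.1, omega_f=6.0)
```

    In the paper's framework, each deformation mode of the body behaves like such an oscillator, so the forcing frequency selects which modes dominate the emergent kinematics.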

  15. Optimization of 3D Field Design

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal "arrays" containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.

  16. Prospects and fundamental limitations of room temperature, non-avalanche, semiconductor photon-counting sensors (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ma, Jiaju; Zhang, Yang; Wang, Xiaoxin; Ying, Lei; Masoodian, Saleh; Wang, Zhiyuan; Starkey, Dakota A.; Deng, Wei; Kumar, Rahul; Wu, Yang; Ghetmiri, Seyed Amir; Yu, Zongfu; Yu, Shui-Qing; Salamo, Gregory J.; Fossum, Eric R.; Liu, Jifeng

    2017-05-01

    This research investigates the fundamental limits and trade-space of quantum semiconductor photodetectors using the Schrödinger equation and the laws of thermodynamics. We envision that, to optimize the metrics of single photon detection, it is critical to maximize the optical absorption in the minimal volume and minimize the carrier transit process simultaneously. Integration of photon management with quantum charge transport/redistribution upon optical excitation can be engineered to maximize the quantum efficiency (QE) and data rate and minimize timing jitter at the same time. Due to the ultra-low capacitance of these quantum devices, even a single photoelectron transfer can induce a notable change in the voltage, enabling non-avalanche single photon detection at room temperature as has been recently demonstrated in Si quanta image sensors (QIS). In this research, uniform III-V quantum dots (QDs) and Si QIS are used as model systems to test the theory experimentally. Based on the fundamental understanding, we also propose proof-of-concept, photon-managed quantum capacitance photodetectors. Built upon the concepts of QIS and single electron transistor (SET), this novel device structure provides a model system to synergistically test the fundamental limits and trade-space predicted by the theory for semiconductor detectors. This project is sponsored under DARPA/ARO's DETECT Program: Fundamental Limits of Quantum Semiconductor Photodetectors.

  17. Effects of pressing schedule on formation of vertical density profile for MDF panels

    Treesearch

    Zhiyong Cai; James H. Muehl; Jerrold E. Winandy

    2006-01-01

    A fundamental understanding of mat consolidation during hot pressing will help to optimize the medium-density fiberboard (MDF) manufacturing process by increasing productivity, improving product quality, and enhancing durability. Effects of panel density, fiber moisture content (MC), and pressing schedule on formation of vertical density profile (VDP) during hot...

  18. Useful Material Efficiency Green Metrics Problem Set Exercises for Lecture and Laboratory

    ERIC Educational Resources Information Center

    Andraos, John

    2015-01-01

    A series of pedagogical problem set exercises are posed that illustrate the principles behind material efficiency green metrics and their application in developing a deeper understanding of reaction and synthesis plan analysis and strategies to optimize them. Rigorous, yet simple, mathematical proofs are given for some of the fundamental concepts,…

  19. Student-Centered Learning: Functional Requirements for Integrated Systems to Optimize Learning

    ERIC Educational Resources Information Center

    Glowa, Liz; Goodell, Jim

    2016-01-01

    The realities of the 21st-century learner require that schools and educators fundamentally change their practice. "Educators must produce college- and career-ready graduates that reflect the future these students will face. And, they must facilitate learning through means that align with the defining attributes of this generation of…

  20. Characterization of Gas Chromatographic Liquid Phases Using McReynolds Constants. An Experiment for Instrumental Analysis Laboratory.

    ERIC Educational Resources Information Center

    Erskine, Steven R.; And Others

    1986-01-01

    Describes a laboratory experiment that is designed to aid in the understanding of the fundamental process involved in gas chromatographic separations. Introduces the Kovats retention index system for use by chemistry students to establish criteria for the optimal selection of gas chromatographic stationary phases. (TW)

  1. Fundamental research in the area of high temperature fuel cells in Russia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyomin, A.K.

    1996-04-01

    Research in the area of molten carbonate and solid oxide fuel cells has been conducted in Russia since the late 1960s. The Institute of High Temperature Electrochemistry is the lead organisation in this area. Research in the area of materials used in fuel cells has allowed us to identify compositions of electrolytes, electrodes, current paths and transmitting, sealing and structural materials appropriate for long-term fuel cell applications. Studies of electrode processes resulted in better understanding of basic patterns of electrode reactions and in the development of a foundation for electrode structure optimization. We have developed methods to increase electrode activity levels that allowed us to reach current density levels of up to 1 A/cm². Development of mathematical models of processes in high temperature fuel cells has allowed us to optimize their structure. The results of fundamental studies have been tested on laboratory mockups. MCFC mockups with up to 100 W capacity and SOFC mockups with up to 1 kW capacity have been manufactured and tested at IHTE. There are three SOFC structural options: tube, plate and modular.

  2. A Method for Combining Experimentation and Molecular Dynamics Simulation to Improve Cohesive Zone Models for Metallic Microstructures

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Glaessgen, E. H.; Ingraffea, A. R.; Aquino, W. A.

    2009-01-01

    Fracture processes within a material begin at the nanometer length scale at which the formation, propagation, and interaction of fundamental damage mechanisms occur. Physics-based modeling of these atomic processes quickly becomes computationally intractable as the system size increases. Thus, a multiscale modeling method, based on the aggregation of fundamental damage processes occurring at the nanoscale within a cohesive zone model, is under development and will enable computationally feasible and physically meaningful microscale fracture simulation in polycrystalline metals. This method employs atomistic simulation to provide an optimization loop with an initial prediction of a cohesive zone model (CZM). This initial CZM is then applied at the crack front region within a finite element model. The optimization procedure iterates upon the CZM until the finite element model acceptably reproduces the near-crack-front displacement fields obtained from experimental observation. With this approach, a comparison can be made between the original CZM predicted by atomistic simulation and the converged CZM that is based on experimental observation. Comparison of the two CZMs gives insight into how atomistic simulation scales.

  3. Exploring the effect of nested capillaries on core-cladding mode resonances in hollow-core antiresonant fibers

    NASA Astrophysics Data System (ADS)

    Provino, Laurent; Taunay, Thierry

    2018-02-01

    Optimal suppression of higher-order modes (HOMs) in hollow-core antiresonant fibers comprising a single ring of thin-walled capillaries was previously studied, and can be achieved when the condition on the capillary-to-core diameter ratio is satisfied (d/D ≈ 0.68). Here we report on the conditions for maximizing the leakage losses of HOMs in hollow-core nested antiresonant nodeless fibers, while preserving low confinement loss for the fundamental mode. Using an analytical model based on coupled capillary waveguides, as well as full-vector finite element modeling, we show that the optimal d/D value leading to high leakage losses of HOMs is strongly correlated with the size of the nested capillaries. We also show that the extremely high degree of HOM suppression (~1200) at the resonant coupling is almost unchanged over a wide range of nested capillary diameters dNested. These results thus suggest the possibility of designing antiresonant fibers with nested elements that show optimal guiding performance, in terms of HOM loss relative to that of the fundamental mode, for clearly defined paired values of the ratios dNested/d and d/D. Such fibers can also tend towards single-mode behavior, but only when the dimensionless parameter dNested/d is less than 0.30, with identical wall thicknesses for all of the capillaries.

  4. Development of Junior High School Students' Fundamental Movement Skills and Physical Activity in a Naturalistic Physical Education Setting

    ERIC Educational Resources Information Center

    Kalaja, Sami Pekka; Jaakkola, Timo Tapio; Liukkonen, Jarmo Olavi; Digelidis, Nikolaos

    2012-01-01

    Background: There is evidence showing that fundamental movement skills and physical activity are related with each other. The ability to perform a variety of fundamental movement skills increases the likelihood of children participating in different physical activities throughout their lives. However, no fundamental movement skill interventions…

  5. Device-independent randomness generation from several Bell estimators

    NASA Astrophysics Data System (ADS)

    Nieto-Silleras, Olmo; Bamps, Cédric; Silman, Jonathan; Pironio, Stefano

    2018-02-01

    Device-independent randomness generation and quantum key distribution protocols rely on a fundamental relation between the non-locality of quantum theory and its random character. This relation is usually expressed in terms of a trade-off between the probability of guessing correctly the outcomes of measurements performed on quantum systems and the amount of violation of a given Bell inequality. However, a more accurate assessment of the randomness produced in Bell experiments can be obtained if the value of several Bell expressions is simultaneously taken into account, or if the full set of probabilities characterizing the behavior of the device is considered. We introduce protocols for device-independent randomness generation secure against classical side information, that rely on the estimation of an arbitrary number of Bell expressions or even directly on the experimental frequencies of measurement outcomes. Asymptotically, this results in an optimal generation of randomness from experimental data (as measured by the min-entropy), without having to assume beforehand that the devices violate a specific Bell inequality.
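
    The min-entropy mentioned above relates an adversary's optimal guessing probability to the number of extractable random bits, H_min = -log2(p_guess). A minimal numerical illustration (the probabilities below are hypothetical, not experimental values from the paper):

```python
import math

def min_entropy(p_guess):
    """Min-entropy in bits, H_min = -log2(p_guess), where p_guess is the
    adversary's optimal probability of guessing the device's outcomes."""
    return -math.log2(p_guess)

# Illustrative guessing probabilities (hypothetical, not experimental):
print(min_entropy(0.5))   # 1.0 bit of certifiable randomness
print(min_entropy(0.25))  # 2.0 bits
```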

  6. Micro/Nanostructured Materials for Sodium Ion Batteries and Capacitors.

    PubMed

    Li, Feng; Zhou, Zhen

    2018-02-01

    High-efficiency energy storage technologies and devices have received considerable attention due to their ever-increasing demand. Na-related energy storage systems, sodium ion batteries (SIBs) and sodium ion capacitors (SICs), are regarded as promising candidates for large-scale energy storage because of the abundant sources and low cost of sodium. In the last decade, many efforts, including structural and compositional optimization, effective modification of available materials, and design and exploration of new materials, have been made to promote the development of Na-related energy storage systems. In this Review, the latest developments of micro/nanostructured electrode materials for advanced SIBs and SICs, especially the rational design of unique composites with high thermodynamic stabilities and fast kinetics during charge/discharge, are summarized. In addition to the recent achievements, the remaining challenges with respect to fundamental investigations and commercialized applications are discussed in detail. Finally, the prospects of sodium-based energy storage systems are also described. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Multivariate Copula Analysis Toolbox (MvCAT): Describing dependence and underlying uncertainty using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Sadegh, Mojtaba; Ragno, Elisa; AghaKouchak, Amir

    2017-06-01

    We present a newly developed Multivariate Copula Analysis Toolbox (MvCAT) which includes a wide range of copula families with different levels of complexity. MvCAT employs a Bayesian framework with a residual-based Gaussian likelihood function for inferring copula parameters and estimating the underlying uncertainties. The contribution of this paper is threefold: (a) providing a Bayesian framework to approximate the predictive uncertainties of fitted copulas, (b) introducing a hybrid-evolution Markov Chain Monte Carlo (MCMC) approach designed for numerical estimation of the posterior distribution of copula parameters, and (c) enabling the community to explore a wide range of copulas and evaluate them relative to the fitting uncertainties. We show that the commonly used local optimization methods for copula parameter estimation often get trapped in local minima. The proposed method, however, addresses this limitation and improves describing the dependence structure. MvCAT also enables evaluation of uncertainties relative to the length of record, which is fundamental to a wide range of applications such as multivariate frequency analysis.
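
    MvCAT itself is a MATLAB toolbox; as a language-neutral sketch of one elementary step such toolboxes automate, the moment (inversion-of-tau) estimate of a Clayton copula parameter follows from Kendall's tau via tau = theta/(theta + 2). The data below are invented for illustration:

```python
def kendall_tau(x, y):
    """Kendall's rank correlation from concordant/discordant pair counts."""
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

def clayton_theta(tau):
    # Moment (inversion-of-tau) estimator: tau = theta / (theta + 2)
    return 2 * tau / (1 - tau)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.3, 1.8, 3.9, 5.1]   # toy, strongly concordant data
tau = kendall_tau(x, y)
print(tau, clayton_theta(tau))  # 0.8 -> theta = 8.0
```

    A full Bayesian treatment as in MvCAT would instead sample theta's posterior with MCMC; the moment estimate above is only a cheap starting point.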

  8. Circadian Rhythms in Fear Conditioning: An Overview of Behavioral, Brain System, and Molecular Interactions

    PubMed Central

    Stork, Oliver

    2017-01-01

    The formation of fear memories is a powerful and highly evolutionary conserved mechanism that serves the behavioral adaptation to environmental threats. Accordingly, classical fear conditioning paradigms have been employed to investigate fundamental molecular processes of memory formation. Evidence suggests that a circadian regulation mechanism allows for a timestamping of such fear memories and controlling memory salience during both their acquisition and their modification after retrieval. These mechanisms include an expression of molecular clocks in neurons of the amygdala, hippocampus, and medial prefrontal cortex and their tight interaction with the intracellular signaling pathways that mediate neural plasticity and information storage. The cellular activities are coordinated across different brain regions and neural circuits through the release of glucocorticoids and neuromodulators such as acetylcholine, which integrate circadian and memory-related activation. Disturbance of this interplay by circadian phase shifts or traumatic experience appears to be an important factor in the development of stress-related psychopathology; these circadian components are therefore of critical importance for optimizing therapeutic approaches to these disorders. PMID:28698810

  9. Communication Needs Assessment for Distributed Turbine Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Behbahani, Alireza R.

    2008-01-01

    Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness about system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and shows how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed, to fully distributed control systems.

  10. New Look at Social Support: A Theoretical Perspective on Thriving through Relationships

    PubMed Central

    Feeney, Brooke C.; Collins, Nancy L.

    2017-01-01

    Close and caring relationships are undeniably linked to health and well-being at all stages in the lifespan. Yet the specific pathways through which close relationships promote optimal well-being are not well understood. In this article, we present a model of thriving through relationships to provide a theoretical foundation for identifying the specific interpersonal processes that underlie the effects of close relationships on thriving. This model highlights two life contexts through which people may potentially thrive (coping successfully with life’s adversities and actively pursuing life opportunities for growth and development), it proposes two relational support functions that are fundamental to the experience of thriving in each life context, and it identifies mediators through which relational support is likely to have long-term effects on thriving. This perspective highlights the need for researchers to take a new look at social support by conceptualizing it as an interpersonal process with a focus on thriving. PMID:25125368

  11. Optimal adaptive control for quantum metrology with time-dependent Hamiltonians.

    PubMed

    Pang, Shengshi; Jordan, Andrew N

    2017-03-09

    Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, which reaches T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case.
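
    The T^2 baseline can be checked directly for the uncontrolled case. For a qubit precessing as |psi(T)> = exp(-i w T sigma_z / 2)|+>, the quantum Fisher information QFI = 4(<dpsi|dpsi> - |<psi|dpsi>|^2) is exactly T^2. This sketch verifies only that uncontrolled limit, not the paper's T^4 controlled scheme:

```python
import cmath

def qfi(T, w=1.3):
    """QFI for frequency estimation with the state exp(-i w T sz/2)|+>."""
    # state amplitudes in the sigma_z basis: (e^{-i w T/2}, e^{+i w T/2})/sqrt(2)
    a = cmath.exp(-1j * w * T / 2) / 2**0.5
    b = cmath.exp(+1j * w * T / 2) / 2**0.5
    # derivatives of the amplitudes with respect to w
    da = (-1j * T / 2) * a
    db = (+1j * T / 2) * b
    dd = (da.conjugate() * da + db.conjugate() * db).real  # <dpsi|dpsi>
    pd = a.conjugate() * da + b.conjugate() * db           # <psi|dpsi>
    return 4 * (dd - abs(pd) ** 2)

for T in (1.0, 2.0, 4.0):
    print(T, qfi(T))  # grows as T^2
```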

  13. High speed civil transport aerodynamic optimization

    NASA Technical Reports Server (NTRS)

    Ryan, James S.

    1994-01-01

    This is a report of work in support of the Computational Aerosciences (CAS) element of the Federal HPCC program. Specifically, CFD and aerodynamic optimization are being performed on parallel computers. The long-range goal of this work is to facilitate teraflops-rate multidisciplinary optimization of aerospace vehicles. This year's work is targeted for application to the High Speed Civil Transport (HSCT), one of four CAS grand challenges identified in the HPCC FY 1995 Blue Book. This vehicle is to be a passenger aircraft, with the promise of cutting overseas flight time by more than half. To meet fuel economy, operational costs, environmental impact, noise production, and range requirements, improved design tools are required, and these tools must eventually integrate optimization, external aerodynamics, propulsion, structures, heat transfer, controls, and perhaps other disciplines. The fundamental goal of this project is to contribute to improved design tools for U.S. industry, and thus to the nation's economic competitiveness.

  14. Wideband Scattering Diffusion by using Diffraction of Periodic Surfaces and Optimized Unit Cell Geometries

    PubMed Central

    Costa, Filippo; Monorchio, Agostino; Manara, Giuliano

    2016-01-01

    A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process. PMID:27181841

  15. New algorithms for optimal reduction of technical risks

    NASA Astrophysics Data System (ADS)

    Todinov, M. T.

    2013-06-01

    The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
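
    For case (1), where the total potential loss is a sum over independent sources, a simple greedy scheme is exact whenever each source's marginal risk reductions are non-increasing in the amount of resource spent. A sketch with invented numbers (not the article's algorithm verbatim):

```python
import heapq

def allocate(budget, marginal_reductions):
    """Greedily assign discrete budget units to independent risk sources.

    marginal_reductions[k] lists the loss reduction from the 1st, 2nd, ...
    unit of resource at source k (assumed non-increasing, which makes the
    greedy allocation optimal for a sum-of-losses objective).
    """
    heap = [(-red[0], k, 0) for k, red in enumerate(marginal_reductions) if red]
    heapq.heapify(heap)
    alloc = [0] * len(marginal_reductions)
    total = 0.0
    for _ in range(budget):
        if not heap:
            break
        gain, k, i = heapq.heappop(heap)   # most valuable next unit
        total -= gain                      # gain is stored negated
        alloc[k] += 1
        if i + 1 < len(marginal_reductions[k]):
            heapq.heappush(heap, (-marginal_reductions[k][i + 1], k, i + 1))
    return alloc, total

alloc, saved = allocate(3, [[5.0, 2.0], [4.0, 1.0], [3.0]])
print(alloc, saved)  # the 3 units buy the reductions 5, 4 and 3
```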

  16. Design issues for optimum solar cell configuration

    NASA Astrophysics Data System (ADS)

    Kumar, Atul; Thakur, Ajay D.

    2018-05-01

    A computer-based simulation of solar cell structure is performed to study the optimization of the pn junction configuration for photovoltaic action. The fundamental aspects of photovoltaic action, viz. absorption, separation, and collection, and their dependence on material properties and details of device structure, are discussed. Using SCAPS-1D we have simulated the ideal pn junction and shown the effect of band offset and carrier densities on solar cell performance. The optimum configuration can be achieved by optimizing the transport of carriers in the pn junction under the effect of field-dependent recombination (tunneling) and density-dependent recombination (SRH, Auger) mechanisms.
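
    A far simpler companion to a SCAPS-1D simulation is the ideal single-diode model, which already shows how the illuminated J-V curve yields the open-circuit voltage and maximum power point. The parameter values below are typical silicon-like numbers assumed for illustration, not taken from the paper:

```python
import math

def iv(v, i_ph=0.035, i_0=1e-12, n=1.0, vt=0.02585):
    """Single-diode current density (A/cm^2) at forward voltage v (volts):
    J = J_ph - J_0 * (exp(V / (n * Vt)) - 1)."""
    return i_ph - i_0 * (math.exp(v / (n * vt)) - 1.0)

# Scan the J-V curve in 1 mV steps for Voc and the maximum power point.
voc = next(v / 1000 for v in range(0, 800) if iv(v / 1000) <= 0.0)
pmax, vmp = max((v / 1000 * iv(v / 1000), v / 1000) for v in range(0, 800))
print(voc, vmp, pmax)  # Voc ~0.63 V, Vmp a little below it
```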

  17. Overcoming the Fundamental Bottlenecks to a new world-record silicon solar cell. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, Ajeet; Zimbardi, Francesco; Rounsaville, Brian

    The objective of the work performed within this contract is to reveal the materials and device physics that currently limit the experimental world record efficiency to 25% for single junction Si (2013), and to demonstrate 26.5% efficiency. The starting efficiency for this project was 23.9% in 2013. Four strategies are being combined throughout the project to achieve 26.5% cell efficiency: (1) passivated contacts via tunnel dielectrics, (2) emitter optimization and passivation through dopant profile engineering, (3) enhanced light trapping through development of photonic crystals and (4) base optimization.

  18. Scheduler Design Criteria: Requirements and Considerations

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    This presentation covers fundamental requirements and considerations for developing schedulers in airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among various optimization problems in airport operations, we focus on airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling such as node-link network model and scheduling algorithms previously developed. Next, we explain how to design a mathematical formulation in more details, which consists of objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.
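
    A node-link network model reduces surface routing to a shortest-path computation over taxiway segments. A minimal sketch with Dijkstra's algorithm (node names and taxi times are invented, not from the presentation):

```python
import heapq

def dijkstra(links, src, dst):
    """Shortest transit time on a node-link airport surface graph.

    links: {node: [(neighbor, taxi_time_seconds), ...]} -- hypothetical layout.
    Returns (path, total_time).
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in links.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node in prev:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

links = {  # toy gate-to-runway network
    "gate": [("A", 30.0), ("B", 45.0)],
    "A": [("runway", 60.0)],
    "B": [("runway", 20.0)],
}
print(dijkstra(links, "gate", "runway"))  # (['gate', 'B', 'runway'], 65.0)
```

    A full surface scheduler layers sequencing and separation constraints on top of such a network, typically as a mixed-integer program.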

  19. Swarm intelligence in bioinformatics: methods and implementations for discovering patterns of multiple sequences.

    PubMed

    Cui, Zhihua; Zhang, Yi

    2014-02-01

    As a promising and innovative research field, bioinformatics has attracted increasing attention recently. Among the enormous number of open problems in this field, one fundamental issue is the accurate and efficient computational methodology that can deal with tremendous amounts of data. In this paper, we survey some applications of swarm intelligence to discovering patterns in multiple sequences. To provide a deep insight, ant colony optimization, particle swarm optimization, artificial bee colony, and the artificial fish swarm algorithm are selected, and their applications to multiple sequence alignment and motif detection problems are discussed.
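
    Of the four algorithms surveyed, particle swarm optimization is the easiest to sketch. A minimal global-best variant minimizing a toy continuous objective (not a sequence-alignment scoring function; all coefficients are conventional defaults, not from the paper):

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal particle swarm optimization with a global-best topology."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:          # update personal best
                pval[i], pbest[i] = fx, x[i][:]
                if fx < gval:         # update global best
                    gval, gbest = fx, x[i][:]
    return gbest, gval

best, val = pso(lambda p: sum(t * t for t in p), dim=3)
print(best, val)  # converges near the origin
```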

  20. Optimizing root system architecture in biofuel crops for sustainable energy production and soil carbon sequestration.

    PubMed

    To, Jennifer Pc; Zhu, Jinming; Benfey, Philip N; Elich, Tedd

    2010-09-08

    Root system architecture (RSA) describes the dynamic spatial configuration of different types and ages of roots in a plant, which allows adaptation to different environments. Modifications in RSA enhance agronomic traits in crops and have been implicated in soil organic carbon content. Together, these fundamental properties of RSA contribute to the net carbon balance and overall sustainability of biofuels. In this article, we will review recent data supporting carbon sequestration by biofuel crops, highlight current progress in studying RSA, and discuss future opportunities for optimizing RSA for biofuel production and soil carbon sequestration.

  1. Quantifying noise in optical tweezers by allan variance.

    PubMed

    Czerwinski, Fabian; Richardson, Andrew C; Oddershede, Lene B

    2009-07-20

    Much effort is put into minimizing noise in optical tweezers experiments because noise and drift can mask fundamental behaviours of, e.g., single-molecule assays. Various initiatives have been taken to reduce or eliminate noise, but it has been difficult to quantify their effect. We propose to use Allan variance as a simple and efficient method to quantify noise in optical tweezers setups. We apply the method to determine the optimal measurement time, frequency, and detection scheme, and quantify the effect of acoustic noise in the lab. The method can also be used on-the-fly for determining optimal parameters of running experiments.
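
    A sketch of the overlapping Allan-variance estimator on synthetic data (the paper applies it to real position time series): for pure white noise the Allan variance falls off as 1/tau, so drift and acoustic peaks show up as departures from that slope.

```python
import random

def allan_variance(x, dt, m_list):
    """Overlapping Allan variance of an evenly sampled signal x (spacing dt)
    at averaging times tau = m * dt for each m in m_list."""
    n = len(x)
    c = [0.0]                 # cumulative sum for O(1) block means
    for v in x:
        c.append(c[-1] + v)
    result = []
    for m in m_list:
        if 2 * m > n:
            break
        diffs = []
        for k in range(n - 2 * m + 1):
            y1 = (c[k + m] - c[k]) / m          # mean of first block
            y2 = (c[k + 2 * m] - c[k + m]) / m  # mean of adjacent block
            diffs.append((y2 - y1) ** 2)
        result.append((m * dt, 0.5 * sum(diffs) / len(diffs)))
    return result

# White noise with unit variance: expect AVAR(tau = m*dt) ~ 1/m.
rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(20000)]
for tau, av in allan_variance(x, dt=1.0, m_list=[1, 4, 16, 64]):
    print(tau, av)
```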

  2. Optimal control of parametric oscillations of compressed flexible bars

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    In this paper, the problem of damping linear system oscillations with piecewise-constant control is solved. The motion of the bar construction is reduced to the form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are presented.
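
    The fundamental matrix that the paper approximates by trigonometric series can also be obtained numerically. A sketch integrating Hill's equation x'' + (a + b cos t) x = 0 column-by-column with classical RK4 over one period (by Floquet theory, |tr M| < 2 means bounded oscillations; parameter values are invented):

```python
import math

def monodromy(a, b, steps=4000):
    """Fundamental (monodromy) matrix of x'' + (a + b*cos(t)) x = 0 over one
    period 2*pi, built by integrating the two canonical initial conditions."""
    T = 2 * math.pi

    def rhs(t, y):
        x, v = y
        return (v, -(a + b * math.cos(t)) * x)

    def integrate(y):
        t, h = 0.0, T / steps
        for _ in range(steps):  # classical RK4 step
            k1 = rhs(t, y)
            k2 = rhs(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
            k3 = rhs(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
            k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(2)])
            y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2)]
            t += h
        return y

    col1 = integrate([1.0, 0.0])
    col2 = integrate([0.0, 1.0])
    return [[col1[0], col2[0]], [col1[1], col2[1]]]

M = monodromy(a=1.21, b=0.0)  # b=0: plain harmonic oscillator, known answer
print(M[0][0] + M[1][1])      # trace = 2*cos(1.1 * 2*pi) for b = 0
```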

  3. [Lyme-Arthritis--a case report].

    PubMed

    von Ameln-Mayerhofer, Andreas

    2016-05-01

    Lyme disease is a serious infectious disease which, if untreated, does not resolve and can lead to severe complications. This exemplary case report describes a possible secondary Borrelia infection. It underlines that early antibiotic therapy at the correct dosage is essential. Furthermore, problems are discussed that might occur in the context of deciding on the best antibiotic substance and the optimal application route. Last but not least, possible problems associated with discharge from hospital are discussed. In conclusion, early diagnosis together with timely, optimal antibiotic therapy is fundamental in the clinical management of Lyme disease.

  4. Dynamical modelling of haematopoiesis: an integrated view over the system in homeostasis and under perturbation.

    PubMed

    Manesso, Erica; Teles, José; Bryder, David; Peterson, Carsten

    2013-03-06

    A very high number of different types of blood cells must be generated daily through a process called haematopoiesis in order to meet the physiological requirements of the organism. All blood cells originate from a population of relatively few haematopoietic stem cells residing in the bone marrow, which give rise to specific progenitors through different lineages. Steady-state dynamics are governed by cell division and commitment rates as well as by population sizes, while feedback components guarantee the restoration of steady-state conditions. In this study, all parameters governing these processes were estimated in a computational model to describe the haematopoietic hierarchy in adult mice. The model consisted of ordinary differential equations and included negative feedback regulation. A combination of literature data, a novel divide et impera approach for steady-state calculations, and stochastic optimization made it possible to narrow down the possible configurations of the system. The model was able to recapitulate the fundamental steady-state features of haematopoiesis and simulate the re-establishment of steady-state conditions after haemorrhage and bone marrow transplantation. This computational approach to the haematopoietic system is novel and provides insight into the dynamics and the nature of possible solutions, with potential applications in both fundamental and clinical research.
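
    A drastically reduced sketch of the modelling idea: two compartments (stem cells S, mature cells M) and one negative feedback on self-renewal, integrated with explicit Euler. This is not the paper's full hierarchy, and all rates are invented:

```python
def simulate(days=500.0, dt=0.01):
    """Toy two-compartment haematopoiesis model: stem cells S self-renew at a
    rate damped by the mature-cell count M (negative feedback), commit to M
    at rate c, and M is cleared at rate d. Returns (S, M) after `days`."""
    p, c, d, K = 0.5, 0.4, 0.1, 1000.0  # illustrative rates and feedback scale
    S, M = 10.0, 0.0
    t = 0.0
    while t < days:
        fb = 1.0 / (1.0 + M / K)  # feedback factor in (0, 1]
        dS = (p * fb - c) * S
        dM = c * S - d * M
        S += dt * dS
        M += dt * dM
        t += dt
    return S, M

S, M = simulate()
print(S, M)  # settles to the steady state where p*fb == c, i.e. (62.5, 250)
```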

  5. Effect of Co-Production of Renewable Biomaterials on the Performance of Asphalt Binder in Macro and Micro Perspectives.

    PubMed

    Qu, Xin; Liu, Quan; Wang, Chao; Wang, Dawei; Oeser, Markus

    2018-02-06

    Conventional asphalt binder derived from the petroleum refining process is widely used in pavement engineering. However, asphalt binder is a non-renewable material. Therefore, the use of a co-production of renewable bio-oil as a modifier for petroleum asphalt has recently been getting more attention in the pavement field due to its renewability and its ability to improve conventional petroleum-based asphalt binder. Significant research efforts have been made that mainly focus on the mechanical properties of bio-asphalt binder. However, there is still a lack of studies describing the effects of the co-production on the performance of asphalt binders from a micro-scale perspective, which would allow a better understanding of the fundamental modification mechanism. In this study, a reasonable molecular structure for the co-production of renewable bio-oils is created based on previous research findings and the functional groups observed in Fourier-transform infrared spectroscopy tests, which are fundamental and critical for establishing the molecular model of bio-asphalt binder with various biomaterial contents. Molecular simulation shows that increasing the biomaterial content decreases the cohesion energy density, which can be related to the observed decrease in dynamic modulus. Additionally, a Flexibility Index parameter is employed to accurately characterize the ability of asphalt binder to resist deformation under oscillatory loading.

  6. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
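
    The core loop can be sketched in a few lines: sample the expensive model at a small design of experiments, fit a cheap surrogate, and optimize the surrogate analytically. The quadratic least-squares fit below is the simplest possible surrogate; the objective function and sample points are invented:

```python
def expensive(x):  # stand-in for a costly high-fidelity model
    return (x - 1.7) ** 2 + 3.0

xs = [0.0, 1.0, 2.0, 3.0, 4.0]          # design of experiments (5 samples)
ys = [expensive(x) for x in xs]

# Least-squares quadratic surrogate y ~ a*x^2 + b*x + c via normal equations.
S = [sum(x ** k for x in xs) for k in range(5)]              # power sums S0..S4
T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
A = [[S[4], S[3], S[2]], [S[3], S[2], S[1]], [S[2], S[1], S[0]]]
r = [T[2], T[1], T[0]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Solve the 3x3 system by Cramer's rule.
D = det3(A)
coef = []
for col in range(3):
    Ac = [row[:] for row in A]
    for i in range(3):
        Ac[i][col] = r[i]
    coef.append(det3(Ac) / D)
a, b, c = coef
x_star = -b / (2 * a)   # analytic minimizer of the quadratic surrogate
print(x_star)           # recovers the true minimizer 1.7
```

    In a real SBAO loop the surrogate minimizer would be evaluated with the high-fidelity model and the surrogate refit, iterating until convergence.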

  7. Recent advances in integrated multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.

    1992-01-01

    A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.

  9. The Impact of Vocal Hyperfunction on Relative Fundamental Frequency during Voicing Offset and Onset

    ERIC Educational Resources Information Center

    Stepp, Cara E.; Hillman, Robert E.; Heaton, James T.

    2010-01-01

    Purpose: This study tested the hypothesis that individuals with vocal hyperfunction would show decreases in relative fundamental frequency (RFF) surrounding a voiceless consonant. Method: This retrospective study of 2 clinical databases used speech samples from 15 control participants and women with hyperfunction-related voice disorders: 82 prior…

  10. Physics architecture

    NASA Astrophysics Data System (ADS)

    Konopleva, Nelly

    2017-03-01

    Fundamental physical theory axiomatics is closely connected with the methods of experimental measurement. The difference between theories using global and local symmetries is explained. It is shown that symmetry group localization leads not only to a change of the relativity principle, but to a fundamental modification of the experimental programs testing physical theory predictions. It is noted that any fundamental physical theory must be consistent with the measurement procedures employed for its testing. These ideas are illustrated by events from my biography connected with the transformation of Yang-Mills theory from an ordinary phenomenological model to a fundamental physical theory based on local symmetry principles, like Einsteinian General Relativity. Baldin's position in this situation is demonstrated.

  11. Intraoperative Detection of Cell Injury and Cell Death with an 800 nm Near-Infrared Fluorescent Annexin V Derivative

    PubMed Central

    Ohnishi, Shunsuke; Vanderheyden, Jean-Luc; Tanaka, Eiichi; Patel, Bhavesh; De Grand, Alec; Laurence, Rita G.; Yamashita, Kenichiro; Frangioni, John V.

    2008-01-01

    The intraoperative detection of cell injury and cell death is fundamental to human surgeries such as organ transplantation and resection. Because of low autofluorescence background and relatively high tissue penetration, invisible light in the 800 nm region provides sensitive detection of disease pathology without changing the appearance of the surgical field. In order to provide surgeons with real-time intraoperative detection of cell injury and death after ischemia/reperfusion (I/R), we have developed a bioactive derivative of human annexin V (annexin800), which fluoresces at 800 nm. Total fluorescence yield, as a function of bioactivity, was optimized in vitro, and final performance was assessed in vivo. In liver, intestine and heart animal models of I/R, an optimal signal to background ratio was obtained 30 min after intravenous injection of annexin800, and histology confirmed concordance between planar reflectance images and actual deep tissue injury. In summary, annexin800 permits sensitive, real-time detection of cell injury and cell death after I/R in the intraoperative setting, and can be used during a variety of surgeries for rapid assessment of tissue and organ status. PMID:16869796

  12. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. We also explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs), both of which effectively integrate the tangent space intrinsic manifold regularization. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. Experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
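    The local tangent space estimates at the core of this regularizer come from local principal component analysis. A minimal stdlib-only sketch of that building block, under illustrative assumptions (synthetic unit-circle data, a neighborhood of 20 points, and all names are invented here, not taken from the paper):

```python
import math
import random

# Illustrative sketch only: estimate the local tangent direction of a
# 1-D manifold (the unit circle) at a point via local PCA, i.e. the
# leading principal axis of the point's neighborhood.
random.seed(0)
pts = [(math.cos(t), math.sin(t))
       for t in (random.uniform(0.0, 2.0 * math.pi) for _ in range(200))]

x0 = (1.0, 0.0)                      # query point on the circle
pts.sort(key=lambda p: (p[0] - x0[0]) ** 2 + (p[1] - x0[1]) ** 2)
nbrs = pts[:20]                      # 20 nearest neighbors

mx = sum(p[0] for p in nbrs) / len(nbrs)
my = sum(p[1] for p in nbrs) / len(nbrs)
a = sum((p[0] - mx) ** 2 for p in nbrs)          # unnormalized covariance entries
c = sum((p[1] - my) ** 2 for p in nbrs)
b = sum((p[0] - mx) * (p[1] - my) for p in nbrs)

theta = 0.5 * math.atan2(2.0 * b, a - c)         # orientation of leading principal axis
tangent = (math.cos(theta), math.sin(theta))
print(abs(tangent[1]) > abs(tangent[0]))         # True: at (1, 0) the circle's tangent is vertical
```

    The connections between adjacent tangent spaces that the abstract mentions would then relate such per-point estimates; that machinery is omitted here.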

  13. Rational Design of Glucose-Responsive Insulin Using Pharmacokinetic Modeling.

    PubMed

    Bakh, Naveed A; Bisker, Gili; Lee, Michael A; Gong, Xun; Strano, Michael S

    2017-11-01

    A glucose responsive insulin (GRI) is a therapeutic that modulates its potency, concentration, or dosing in relation to a patient's dynamic glucose concentration, thereby approximating aspects of a normally functioning pancreas. Current GRI design lacks a theoretical basis on which to ground fundamental design parameters such as glucose reactivity, dissociation constant or potency, and in vivo efficacy. In this work, an approach to mathematically model the relevant parameter space for effective GRIs is introduced, and design rules for linking GRI performance to therapeutic benefit are developed. Well-developed pharmacokinetic models of human glucose and insulin metabolism, coupled to a kinetic model representation of a freely circulating GRI, are used to determine the desired kinetic parameters and dosing for optimal glycemic control. The model identifies a subcutaneous dose of GRI with kinetic parameters in an optimal range that results in successful glycemic control within prescribed constraints over a 24 h period. Additionally, it is demonstrated that the modeling approach can find GRI parameters that enable stable glucose levels to persist through a skipped meal. The results provide a framework for exploring the parameter space of GRIs, potentially without extensive, iterative in vivo animal testing. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
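    A toy sketch in the spirit of this abstract (all parameters and the Hill-type glucose activation below are invented for illustration; this is not the paper's calibrated pharmacokinetic model): glucose follows production minus GRI-mediated clearance, where the GRI's activity depends on glucose through a dissociation constant Kd.

```python
def simulate_gri(kd=100.0, k_clear=0.04, production=2.0,
                 g0=180.0, hours=24.0, dt=1.0):
    """Euler-integrate dG/dt = P - k * a(G) * G with a(G) = G / (Kd + G).
    Units are nominal (mg/dL and minutes); numbers are illustrative only."""
    g = g0
    for _ in range(int(hours * 60 / dt)):
        activity = g / (kd + g)          # fraction of GRI "switched on"
        g += dt * (production - k_clear * activity * g)
    return g

g_final = simulate_gri()
# Glucose settles at the steady state P = k * G^2 / (Kd + G), i.e. G = 100 here.
print(round(g_final, 1))
```

    Sweeping `kd` and `k_clear` in such a loop is the kind of parameter-space exploration the abstract describes, with the real model's physiology in place of this caricature.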

  14. A trap potential model investigation of the optical activity induced in dye-DNA intercalation complexes

    NASA Astrophysics Data System (ADS)

    Kamiya, Mamoru

    1988-02-01

    The fundamental features of the optical activity induced in dye-DNA intercalation complexes are studied by application of the trap potential model, which is useful for evaluating the induced rotational strength without reference to detailed geometrical information about the intercalation complexes. The specific effect of the potential depth on the induced optical activity is explained in terms of the relative magnitudes of the wave-phase and helix-phase variations in the path of an electron moving on a restricted helical segment, like an exciton trapped around the dye intercalation site. The parallel and perpendicular components of the induced rotational strength well reflect basic properties of the helicity effects about the longitudinal and tangential axes of the DNA helical cylinder. The trap potential model is applied to optimize the potential parameters so as to reproduce the ionic strength effect on the optical activity induced in proflavine-DNA intercalation complexes. From the relationships between the optimized potential parameters and ionic strengths, it is inferred that an increase in ionic strength contributes to the optical activity induced by the nearest-neighbour interaction between intercalated proflavine and DNA base pairs.

  15. Biologically Relevant Heterogeneity: Metrics and Practical Insights.

    PubMed

    Gough, Albert; Stern, Andrew M; Maier, John; Lezon, Timothy; Shun, Tong-Ying; Chennubhotla, Chakra; Schurdak, Mark E; Haney, Steven A; Taylor, D Lansing

    2017-03-01

    Heterogeneity is a fundamental property of biological systems at all scales that must be addressed in a wide range of biomedical applications, including basic biomedical research, drug discovery, diagnostics, and the implementation of precision medicine. There are a number of published approaches to characterizing heterogeneity in cells in vitro and in tissue sections. However, there are no generally accepted approaches for the detection and quantitation of heterogeneity that can be applied in a relatively high-throughput workflow. This review and perspective emphasizes the experimental methods that capture multiplexed cell-level data, as well as the need for standard metrics of the spatial, temporal, and population components of heterogeneity. A recommendation is made for the adoption of a set of three heterogeneity indices that can be implemented in any high-throughput workflow to optimize the decision-making process. In addition, a pairwise mutual information method is suggested as an approach to characterizing the spatial features of heterogeneity, especially in tissue-based imaging. Furthermore, metrics for temporal heterogeneity are in the early stages of development. Example studies indicate that the analysis of functional phenotypic heterogeneity can be exploited to guide decisions in the interpretation of biomedical experiments, drug discovery, diagnostics, and the design of optimal therapeutic strategies for individual patients.

  16. Research progress on reconstruction of meniscus in tissue engineering.

    PubMed

    Zhang, Yu; Li, Pengsong; Wang, Hai; Wang, Yiwei; Song, Kedong; Li, Tianqing

    2017-05-01

    Meniscus damage is most common in sports injuries and aged knees. One third of meniscus lesions lie in the so-called white-white or nonvascular zone, which is composed of chondrocytes and extracellular matrix only. Owing to low vascularization, the regenerative capacity of such zones is inherently limited, making self-regeneration after damage impossible. Tissue engineering is an emerging approach to treating meniscus damage, but open questions remain, including the choice of an optimal and suitable cell source, the utility of growth factors, the selection of optimal biomaterial scaffolds, and techniques for improving partial reconstruction of meniscus tears. This review focuses on current research on the in vitro reconstruction of the meniscus using tissue engineering methods, with the expectation of developing a series of tissue-engineered meniscus products for the benefit of sports injuries. With rapidly growing clinical demand, the need for fundamental breakthroughs in meniscus tissue engineering has grown considerably. This review discusses aspects of meniscus tissue engineering relevant to the clinical treatment of meniscus injuries, in further support of fundamental and clinical studies.

  17. Aspects of skeletal muscle modelling.

    PubMed

    Epstein, Marcelo; Herzog, Walter

    2003-09-29

    The modelling of skeletal muscle raises a number of philosophical questions, particularly in the realm of the relationship between different possible levels of representation and explanation. After a brief incursion into this area, a list of desiderata is proposed as a guiding principle for the construction of a viable model, including: comprehensiveness, soundness, experimental consistency, predictive ability and refinability. Each of these principles is illustrated by means of simple examples. The presence of internal constraints, such as incompressibility, may lead to counterintuitive results. A one-panel example is exploited to advocate the use of the principle of virtual work as the ideal tool to deal with these situations. The question of stability in the descending limb of the force-length relation is addressed and a purely mechanical analogue is suggested. New experimental results confirm the assumption that fibre stiffness is positive even in the descending limb. The indeterminacy of the force-sharing problem is traditionally resolved by optimizing a, presumably, physically meaningful target function. After presenting some new results in this area, based on a separation theorem, it is suggested that a more fundamental approach to the problem is the abandoning of optimization criteria in favour of an explicit implementation of activation criteria.
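    The force-sharing indeterminacy mentioned above is classically resolved by minimizing a target function subject to a joint-moment constraint. A toy sketch of that standard formulation (the quadratic stress cost, muscle areas, moment arms, and joint moment are illustrative choices, not values from this paper):

```python
# Minimize sum((F_i / A_i)**2) subject to sum(r_i * F_i) = M, with
# A_i a muscle's cross-sectional area and r_i its moment arm.
# The Lagrange condition 2*F_i/A_i**2 = lam*r_i gives the closed form
#     F_i = M * r_i * A_i**2 / sum_j(r_j**2 * A_j**2).

def share_forces(areas, arms, moment):
    denom = sum(r * r * a * a for r, a in zip(arms, areas))
    return [moment * r * a * a / denom for r, a in zip(arms, areas)]

# Two hypothetical muscles with equal moment arms but different areas:
forces = share_forces(areas=[2.0, 1.0], arms=[1.0, 1.0], moment=10.0)
print(forces)  # [8.0, 2.0] -> the larger muscle carries more of the moment
```

    The abstract's point is precisely that such optimization criteria, however convenient, may be better abandoned in favor of explicit activation criteria.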

  18. Analysis of the structure of complex networks at different resolution levels

    NASA Astrophysics Data System (ADS)

    Arenas, A.; Fernández, A.; Gómez, S.

    2008-05-01

    Modular structure is ubiquitous in real-world complex networks, and its detection is important because it gives insights into the structure-functionality relationship. The standard approach is based on the optimization of a quality function, modularity, which is a relative quality measure for the partition of a network into modules. Recently, some authors (Fortunato and Barthélemy 2007 Proc. Natl Acad. Sci. USA 104 36 and Kumpula et al 2007 Eur. Phys. J. B 56 41) have pointed out that the optimization of modularity has a fundamental drawback: the existence of a resolution limit beyond which no modular structure can be detected even though these modules might have their own entity. The reason is that several topological descriptions of the network coexist at different scales, which is, in general, a fingerprint of complex systems. Here, we propose a method that allows for multiple resolution screening of the modular structure. The method has been validated using synthetic networks, discovering the predefined structures at all scales. Its application to two real social networks allows us to find the exact splits reported in the literature, as well as the substructure beyond the actual split.
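    As a concrete illustration of the quality function being optimized, the sketch below computes modularity with a resolution parameter, one common way to screen multiple scales (the resolution-parameter variant and the toy graph are illustrative; they are not necessarily the authors' exact method):

```python
# Modularity Q = (1/2m) * sum_ij (A_ij - gamma * k_i * k_j / 2m) * delta(c_i, c_j);
# gamma = 1 recovers standard modularity, and varying gamma rescreens the
# partition at different resolutions.

def modularity(adj, communities, gamma=1.0):
    """adj: symmetric 0/1 adjacency matrix (list of lists);
    communities: a community label per node."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)
    degree = [sum(row) for row in adj]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - gamma * degree[i] * degree[j] / two_m
    return q / two_m

# Two triangles joined by a single edge: the natural split is one triangle per module.
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
split = [0, 0, 0, 1, 1, 1]
merged = [0, 0, 0, 0, 0, 0]
print(round(modularity(adj, split), 3))   # 0.357: modular structure detected
print(round(modularity(adj, merged), 3))  # 0.0 by construction
```

    The resolution limit the abstract discusses shows up when small modules like these are embedded in a much larger network: maximizing Q at a single resolution can then merge them even though each has its own entity.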

  19. Experimental and theoretical studies on vibrational spectra of 4-(2-furanylmethyleneamino)antipyrine, 4-benzylideneaminoantipyrine and 4-cinnamilideneaminoantipyrine

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Xi; Hao, Qing-Li; Yu, Zong-Xue; Jiang, Wen-Jun; Lu, Lu-De; Wang, Xin

    2009-09-01

    This work deals with the IR and Raman spectroscopy of 4-(2-furanylmethyleneamino)antipyrine (FAP), 4-benzylideneaminoantipyrine (BAP) and 4-cinnamilideneaminoantipyrine (CAP) by means of experiments and quantum chemical calculations. The equilibrium geometries, harmonic frequencies, infrared intensities and Raman scattering activities were calculated by the density functional B3LYP method with the 6-31G(d) basis set. Comparisons between the calculated and experimental results covering molecular structures, assignments of fundamental vibrational modes and thermodynamic properties were investigated. The optimized molecular geometries have been compared with experimental XRD data, which indicates that the theoretical results agree well with the corresponding experimental values. For the three compounds, comparisons and assignments of the vibrational frequencies indicate that the calculated frequencies are close to the experimental data and the IR spectra are comparable with some slight differences, whereas the Raman spectra differ clearly, and the strongest Raman scattering activities are related closely to the molecular conjugative moieties linked through their Schiff base imines. The thermodynamic properties (heat capacities, entropies and enthalpy changes) and their correlations with temperature were also obtained from the harmonic frequencies of the optimized structures.
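    Thermodynamic properties follow from harmonic frequencies via standard statistical mechanics. A small sketch of the per-mode harmonic-oscillator contribution to the heat capacity (the 1000 cm⁻¹ wavenumber and the temperature are arbitrary examples, not values from this paper):

```python
import math

# Per-mode vibrational heat capacity of a harmonic oscillator:
#     Cv = R * x**2 * exp(x) / (exp(x) - 1)**2,  x = theta_vib / T,
# with theta_vib = h * c * nu~ / kB the vibrational temperature.

def cv_mode(wavenumber_cm, temp_k):
    h = 6.62607015e-34        # J s
    c = 2.99792458e10         # cm/s (matches wavenumbers in cm^-1)
    kb = 1.380649e-23         # J/K
    r = 8.31446               # J/(mol K)
    theta = h * c * wavenumber_cm / kb
    x = theta / temp_k
    return r * x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2

print(round(cv_mode(1000.0, 298.15), 2))  # roughly 1.6 J/(mol K) for this mode
```

    Summing such terms over all computed normal modes (plus translational and rotational contributions) yields the heat capacities and, by analogous formulas, entropies and enthalpy changes versus temperature.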

  20. Evidence-based prosthodontics: fundamental considerations, limitations, and guidelines.

    PubMed

    Bidra, Avinash S

    2014-01-01

    Evidence-based dentistry is rapidly emerging to become an integral part of patient care, dental education, and research. Prosthodontics is a unique dental specialty that encompasses art, philosophy, and science and includes reversible and irreversible treatments. It not only affords good applicability of many principles of evidence-based dentistry but also poses numerous limitations. This article describes the epidemiologic background, fundamental considerations, scrutiny of levels of evidence, limitations, guidelines, and future perspectives of evidence-based prosthodontics. Understanding these principles can aid clinicians in appropriate appraisal of the prosthodontics literature and use the best available evidence for making confident clinical decisions and optimizing patient care. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Information-Constrained Optima with Retrading: An Externality and Its Market-Based Solution☆

    PubMed Central

    Kilenthong, Weerachart T.; Townsend, Robert M.

    2010-01-01

    This paper studies the efficiency of competitive equilibria in environments with a moral hazard problem and unobserved states, both with retrading in ex post spot markets. The interaction between private information problems and the possibility of retrade creates an externality, unless preferences have special, restrictive properties. The externality is internalized by allowing agents to contract ex ante on market fundamentals determining the spot price or interest rate, over and above contracting on actions and outputs. Then competitive equilibria are equivalent to the appropriate notion of constrained Pareto optimality. Examples show that it is possible to have multiple market fundamentals or price-islands, created endogenously in equilibrium. PMID:21765540

  2. Factors influencing the delivery of the fundamentals of care: Perceptions of nurses, nursing leaders and healthcare consumers.

    PubMed

    Conroy, Tiffany

    2017-11-17

    To explore the factors described by nurses and consumer representatives as influencing the delivery of the fundamentals of care. An ongoing challenge facing nursing is ensuring the "basics" or fundamentals of care are delivered optimally. The way nurses and patients perceive the delivery of the fundamentals of care has not been explored. Once identified, the factors that promote the delivery of the fundamentals of care may be facilitated. Inductive content analysis of scenario-based focus groups. A qualitative approach was taken using three stages, including direct observation, focus groups and interviews. This paper reports the second stage. Focus groups discussed four patient care scenarios derived from the observational data. Focus groups were conducted separately for registered nurses, nurses in leadership roles and consumer representatives. Content analysis was used. The analysis of the focus group data resulted in three themes: organisational factors; individual nurse or patient factors; and interpersonal factors. Organisational factors include nursing leadership, the context of care delivery and the availability of time. Individual nurse and patient factors include the specific care needs of the patient and the individual nurse and patient characteristics. Interpersonal factors include the nurse-patient relationship; involving the patient in their care, ensuring understanding and respecting choices; communication; and setting care priorities. Seeking the perspective of the people involved in delivering and receiving the fundamentals of care showed a shared understanding of the factors influencing the delivery of the fundamentals of care. The influence of nursing leadership and the quality of the nurse-patient relationship were perceived as important factors. Nurses and consumers share a common perspective of the factors influencing the delivery of the fundamentals of care, and both value a therapeutic nurse-patient relationship.
Clinical nursing leaders must understand the impact of their role in shaping the delivery of the fundamentals of care. © 2017 John Wiley & Sons Ltd.

  3. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance, the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress, as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
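    A hedged sketch of the "linear" idea the abstract alludes to, in the linearly solvable MDP style: with state cost q(x) and passive dynamics p(x'|x), the desirability z(x) = exp(-v(x)) satisfies the linear fixed-point equation z(x) = exp(-q(x)) · Σ p(x'|x) z(x'), so no exhaustive search over actions is needed. The chain, costs, and goal below are illustrative, not from the paper:

```python
import math

n = 5                                   # states 0..4 on a chain; state 4 is the goal
q = [1.0, 1.0, 1.0, 1.0, 0.0]           # per-state costs (goal costs nothing)

def passive(x):
    """Passive dynamics: uniform random walk to a neighboring state."""
    nbrs = [y for y in (x - 1, x + 1) if 0 <= y < n]
    return {y: 1.0 / len(nbrs) for y in nbrs}

z = [0.0] * (n - 1) + [1.0]             # boundary condition z(goal) = 1
for _ in range(200):                    # fixed-point sweeps of the linear equation
    for x in range(n - 1):
        z[x] = math.exp(-q[x]) * sum(p * z[y] for y, p in passive(x).items())

# Optimal controlled transition: u*(x'|x) proportional to p(x'|x) * z(x').
x = 2
u = {y: p * z[y] for y, p in passive(x).items()}
total = sum(u.values())
u = {y: w / total for y, w in u.items()}
print(u[3] > u[1])                      # True: the policy shifts mass toward the goal
```

    The key design choice is that reading the optimal policy off z requires only reweighting the passive dynamics, which is what makes composition and convex inverse problems tractable in this framework.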

  4. Optimal control of fast and high-fidelity quantum state transfer in spin-1/2 chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiong-Peng; Shao, Bin, E-mail: sbin610@bit.edu.cn; Hu, Shuai

    Spin chains are promising candidates for quantum communication and computation. Using quantum optimal control (OC) theory based on the Krotov method, we present a protocol to perform quantum state transfer with high speed and fidelity by manipulating only the boundary spins in a quantum spin-1/2 chain. The achieved speed is about one order of magnitude faster than is possible with Lyapunov control at comparable fidelities. Additionally, there is a fundamental limit for OC beyond which optimization is not possible. The controls are exerted only on the couplings between the boundary spins and their neighbors, so that the scheme has good scalability. We also demonstrate that the resulting OC scheme is robust against disorder in the chain.

  5. Effects of panel density and mat moisture content on processing medium density fiberboard

    Treesearch

    Zhiyong Cai; James H. Muehl; Jerrold E. Winandy

    2006-01-01

    Development of a fundamental understanding of heat transfer and resin curing during hot- pressing will help to optimize the manufacturing process of medium density fiberboard (MDF) allowing increased productivity, improved product quality, and enhanced durability. Effect of mat moisture content (MC) and panel density on performance of MDF panels, heat transfer,...

  6. Introducing the Concept of Biocatalysis in the Classroom: The Conversion of Cholesterol to Provitamin D[subscript 3]

    ERIC Educational Resources Information Center

    De Luca, Belén M.; Nudel, Clara B.; Gonzalez, Rodrigo H.; Nusblat, Alejandro D.

    2017-01-01

    Biocatalysis is a fundamental concept in biotechnology. The topic integrates knowledge from several disciplines; therefore, it was included in the course "design and optimization of biological systems," which is offered in the biochemistry curriculum. We selected the ciliate Tetrahymena as an example of a eukaryotic system with potential for…

  7. Wideband tunable 140 GHz second-harmonic InP-TED oscillator

    NASA Astrophysics Data System (ADS)

    Rydberg, A.; Kollberg, E.

    1986-07-01

    A second-harmonic InP-TED oscillator, with an output power of more than 3 dBm at 144 GHz and tunable over a 10 percent frequency range, has been developed. The design incorporates two waveguide resonators. One resonator determines the fundamental frequency of oscillation and the other optimizes the second-harmonic output power.

  8. Human Albumin Fragments Nanoparticles as PTX Carrier for Improved Anti-cancer Efficacy

    PubMed Central

    Ge, Liang; You, Xinru; Huang, Jun; Chen, Yuejian; Chen, Li; Zhu, Ying; Zhang, Yuan; Liu, Xiqiang; Wu, Jun; Hai, Qian

    2018-01-01

    For enhanced anti-cancer performance, human serum albumin fragment (HSAF) nanoparticles (NPs) were developed as a paclitaxel (PTX) carrier in this paper. Human albumin was broken into fragments via degradation and crosslinked by genipin to form HSAF NPs for better biocompatibility, improved PTX drug loading and sustained drug release. Compared with crosslinked human serum albumin NPs, the HSAF-NPs showed a relatively smaller particle size, higher drug loading, and improved sustained release. Cellular and animal results both indicated that the PTX-encapsulated HSAF-NPs show good anti-cancer performance, and the anticancer results confirmed that NPs with fast cellular internalization showed better tumor inhibition. These findings will not only provide a safe and robust drug delivery NP platform for cancer therapy, but also offer fundamental information for the optimal design of albumin-based NPs. PMID:29946256

  9. Pharmaceutical solvates, hydrates and amorphous forms: A special emphasis on cocrystals.

    PubMed

    Healy, Anne Marie; Worku, Zelalem Ayenew; Kumar, Dinesh; Madi, Atif M

    2017-08-01

    Active pharmaceutical ingredients (APIs) may exist in various solid forms, which can lead to differences in the intermolecular interactions, affecting the internal energy and enthalpy, and the degree of disorder, affecting the entropy. Differences in solid forms often lead to differences in thermodynamic parameters and physicochemical properties, for example solubility, dissolution rate, stability and mechanical properties, of APIs and excipients. Hence, solid forms of APIs play a vital role in drug discovery and development in the context of optimization of bioavailability, filing intellectual property rights and developing suitable manufacturing methods. In this review, the fundamental characteristics and trends observed for pharmaceutical hydrates, solvates and amorphous forms are presented, with special emphasis, due to their relative abundance, on pharmaceutical hydrates with single and two-component (i.e. cocrystal) host molecules. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The Flow Engine Framework: A Cognitive Model of Optimal Human Experience

    PubMed Central

    Šimleša, Milija; Guegan, Jérôme; Blanchard, Edouard; Tarpin-Bernard, Franck; Buisine, Stéphanie

    2018-01-01

    Flow is a well-known concept in the fields of positive and applied psychology. Examination of a large body of flow literature suggests there is a need for a conceptual model rooted in a cognitive approach to explain how this psychological phenomenon works. In this paper, we propose the Flow Engine Framework, a theoretical model explaining dynamic interactions between rearranged flow components and fundamental cognitive processes. Using an IPO framework (Inputs – Processes – Outputs) including a feedback process, we organize flow characteristics into three logically related categories describing the flow process: inputs (requirements for flow), mediating and moderating cognitive processes (attentional and motivational mechanisms) and outputs (subjective and objective outcomes). Comparing flow with an engine, inputs are depicted as the fuel, core processes as the cylinder strokes, and outputs as the power created to provide motion. PMID:29899807

  11. Developmental stage related patterns of codon usage and genomic GC content: searching for evolutionary fingerprints with models of stem cell differentiation

    PubMed Central

    2007-01-01

    Background The usage of synonymous codons shows considerable variation among mammalian genes. How and why this usage is non-random are fundamental biological questions and remain controversial. It is also important to explore whether mammalian genes that are selectively expressed at different developmental stages bear different molecular features. Results In two models of mouse stem cell differentiation, we established correlations between codon usage and the patterns of gene expression. We found that the optimal codons exhibited variation (AT- or GC-ending codons) in different cell types within the developmental hierarchy. We also found that genes that were enriched (developmental-pivotal genes) or specifically expressed (developmental-specific genes) at different developmental stages had different patterns of codon usage and local genomic GC (GCg) content. Moreover, at the same developmental stage, developmental-specific genes generally used more GC-ending codons and had higher GCg content compared with developmental-pivotal genes. Further analyses suggest that the model of translational selection might be consistent with the developmental stage-related patterns of codon usage, especially for the AT-ending optimal codons. In addition, our data show that after human-mouse divergence, the influence of selective constraints is still detectable. Conclusion Our findings suggest that developmental stage-related patterns of gene expression are correlated with codon usage (GC3) and GCg content in stem cell hierarchies. Moreover, this paper provides evidence for the influence of natural selection at synonymous sites in the mouse genome and novel clues for linking the molecular features of genes to their patterns of expression during mammalian ontogenesis. PMID:17349061
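    The GC3 measure underlying such analyses is simply the GC content at third codon positions. A minimal sketch (the coding sequence here is made up for illustration):

```python
# GC3: fraction of G or C at the third position of each codon, a common
# proxy for AT- vs GC-ending (optimal) codon usage in a coding sequence.

def gc3(cds):
    cds = cds.upper()
    thirds = cds[2::3]                     # every codon's third base
    return sum(b in "GC" for b in thirds) / len(thirds)

print(gc3("ATGGCCGATTTA"))  # codons ATG GCC GAT TTA -> thirds G, C, T, A -> 0.5
```

    Correlating such per-gene GC3 values with expression stage and local genomic GC content is the kind of analysis the abstract describes.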

  12. Design of transcranial magnetic stimulation coils with optimal trade-off between depth, focality, and energy.

    PubMed

    Gomez, Luis J; Goetz, Stefan M; Peterchev, Angel V

    2018-08-01

    Transcranial magnetic stimulation (TMS) is a noninvasive brain stimulation technique used for research and clinical applications. Existent TMS coils are limited in their precision of spatial targeting (focality), especially for deeper targets. This paper presents a methodology for designing TMS coils to achieve optimal trade-off between the depth and focality of the induced electric field (E-field), as well as the energy required by the coil. A multi-objective optimization technique is used for computationally designing TMS coils that achieve optimal trade-offs between E-field focality, depth, and energy (fdTMS coils). The fdTMS coil winding(s) maximize focality (minimize the volume of the brain region with E-field above a given threshold) while reaching a target at a specified depth and not exceeding predefined peak E-field strength and required coil energy. Spherical and MRI-derived head models are used to compute the fundamental depth-focality trade-off as well as focality-energy trade-offs for specific target depths. Across stimulation target depths of 1.0-3.4 cm from the brain surface, the suprathreshold volume can be theoretically decreased by 42%-55% compared to existing TMS coil designs. The suprathreshold volume of a figure-8 coil can be decreased by 36%, 44%, or 46%, for matched, doubled, or quadrupled energy. For matched focality and energy, the depth of a figure-8 coil can be increased by 22%. Computational design of TMS coils could enable more selective targeting of the induced E-field. The presented results appear to be the first significant advancement in the depth-focality trade-off of TMS coils since the introduction of the figure-8 coil three decades ago, and likely represent the fundamental physical limit.

  13. Exploring the Clinical Utility of Relative Fundamental Frequency as an Objective Measure of Vocal Hyperfunction

    ERIC Educational Resources Information Center

    Roy, Nelson; Fetrow, Rebecca A.; Merrill, Ray M.; Dromey, Christopher

    2016-01-01

    Purpose: Vocal hyperfunction, related to abnormal laryngeal muscle activity, is considered the proximal cause of primary muscle tension dysphonia (pMTD). Relative fundamental frequency (RFF) has been proposed as an objective acoustic marker of vocal hyperfunction. This study examined (a) the ability of RFF to track changes in vocal hyperfunction…

  14. Effects of Voice Therapy on Relative Fundamental Frequency during Voicing Offset and Onset in Patients with Vocal Hyperfunction

    ERIC Educational Resources Information Center

    Stepp, Cara E.; Merchant, Gabrielle R.; Heaton, James T.; Hillman, Robert E.

    2011-01-01

    Purpose: The purpose of this study was to determine whether the relative fundamental frequency (RFF) surrounding a voiceless consonant in patients with hyperfunctionally related voice disorders would normalize after a successful course of voice therapy. Method: Pre- and posttherapy measurements of RFF were compared in 16 subjects undergoing voice…
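    In the RFF literature, each voicing cycle adjacent to the voiceless consonant is expressed in semitones relative to a steady-state reference fundamental frequency. A minimal sketch of that normalization (the frequency values are made up for illustration):

```python
import math

# Relative fundamental frequency of one cycle, in semitones:
#     RFF = 12 * log2(f0_cycle / f0_ref)

def rff_semitones(f0_cycle, f0_ref):
    return 12.0 * math.log2(f0_cycle / f0_ref)

# An offset cycle drifting from a 200 Hz steady state down to 190 Hz:
print(round(rff_semitones(190.0, 200.0), 2))  # about -0.89 semitones
```

    Averaging such values over the cycles at voicing offset and onset yields the RFF measures these studies track before and after therapy.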

  15. Fundamentals of Embouchure in Brass Players: Towards a Definition and Clinical Assessment.

    PubMed

    Woldendorp, Kees H; Boschma, Hans; Boonstra, Anne M; Arendzen, Hans J; Reneman, Michiel F

    2016-12-01

    Brass players may experience problems producing an optimal sound (or range of sounds) in their instrument. Assessing and treating dysfunctional embouchure requires knowledge of functional embouchure, but peer-reviewed literature on dysfunctional and functional embouchure is scarce. This study aimed to provide a narrative overview of embouchure based on information from different scientific and clinical fields. This should be regarded as a first step in constructing a reliable, valid, and practical multi-item method to assess embouchure for brass players. Literature reviews were conducted concerning: 1) the definition of embouchure, 2) physics and acoustics of embouchure, 3) functioning of embouchure-related structures, and 4) instruments to assess embouchure. Also, embouchure experts (clinicians, scientists, and elite wind players) were consulted for information and discussion. A proposal for a new definition of embouchure, an overview of the relevant physics and acoustics, functions of embouchure-related body structures, and the main methods to measure embouchure in brass playing are presented. Peer-reviewed information about the fundamentals of dysfunctional embouchure is scarce and sometimes contradictory. A new definition for embouchure is proposed: embouchure is the process needed to adjust the amount, pressure, and direction of the air flow (generated by the breath support) as it travels through the mouth cavity and between the lips, by the position and/or movements of the tongue, teeth, jaws, cheeks, and lips, to produce a tone in a wind instrument. An integrative overview is presented which can serve as a transparent foundation for the present understanding of functional and dysfunctional embouchure and for developing an evidence-based multi-item assessment instrument.

  16. Searching for quantum optimal controls under severe constraints

    DOE PAGES

    Riviello, Gregory; Tibbetts, Katharine Moore; Brif, Constantin; ...

    2015-04-06

    The success of quantum optimal control for both experimental and theoretical objectives is connected to the topology of the corresponding control landscapes, which are free from local traps if three conditions are met: (1) the quantum system is controllable, (2) the Jacobian of the map from the control field to the evolution operator is of full rank, and (3) there are no constraints on the control field. This paper investigates how the violation of assumption (3) affects gradient searches for globally optimal control fields. The satisfaction of assumptions (1) and (2) ensures that the control landscape lacks fundamental traps, but certain control constraints can still prevent successful optimization of the objective. Using optimal control simulations, we show that the most severe field constraints are those that limit essential control resources, such as the number of control variables, the control duration, and the field strength. Proper management of these resources is an issue of great practical importance for optimization in the laboratory. For each resource, we show that constraints exceeding quantifiable limits can introduce artificial traps to the control landscape and prevent gradient searches from reaching a globally optimal solution. These results demonstrate that careful choice of relevant control parameters helps to eliminate artificial traps and facilitate successful optimization.
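
    As a toy illustration of the idea in this record (not the authors' simulations; the landscape, dimensions, and step sizes below are invented), gradient ascent on a trap-free quadratic landscape reaches the global optimum when all control variables are free, but freezing some of them, a crude stand-in for constraining the control field, leaves the search stuck at a suboptimal value, an "artificial trap":

```python
import numpy as np

def landscape(c, target):
    """Trap-free toy objective: maximal (value 0) only at c == target."""
    return -np.sum((c - target) ** 2)

def gradient_ascent(n_free, target, steps=500, lr=0.1):
    """Ascend the landscape, but only the first n_free variables are
    adjustable -- a crude stand-in for a constrained control field."""
    c = np.zeros_like(target)
    for _ in range(steps):
        grad = -2.0 * (c - target)   # analytic gradient of the objective
        grad[n_free:] = 0.0          # frozen variables cannot move
        c += lr * grad
    return landscape(c, target)

target = np.array([1.0, -0.5, 2.0, 0.8])

J_free = gradient_ascent(n_free=4, target=target)         # all variables free
J_constrained = gradient_ascent(n_free=2, target=target)  # half frozen
```

    With all four variables free, the ascent attains the global maximum (J = 0); with two variables frozen it plateaus at J = -4.64, the best value reachable inside the constrained subspace.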

  17. Canada-U.S. Relations

    DTIC Science & Technology

    2009-05-12

    56 RBC Financial Group, Daily Forex Fundamentals, February 27, 2009. [ http...www.actionforex.com/fundamental- analysis/daily- forex -fundamentals/canada%27s-fourth%11quarter-current-account-moves-into-deficit-after-nine-years- of-surpluses...sharing, infrastructure improvements, improvement of compatible immigration databases , visa policy coordination, common biometric identifiers in

  18. Vibrational spectra, NLO analysis, and HOMO-LUMO studies of 2-chloro-6-fluorobenzoic acid and 3,4-dichlorobenzoic acid by density functional method

    NASA Astrophysics Data System (ADS)

    Senthil kumar, J.; Arivazhagan, M.; Thangaraju, P.

    2015-08-01

    The FTIR and FT-Raman spectra of 2-chloro-6-fluorobenzoic acid and 3,4-dichlorobenzoic acid have been recorded in the regions 4000-400 cm⁻¹ and 3500-50 cm⁻¹, respectively. Utilizing the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compounds were carried out. The optimized molecular geometries, vibrational frequencies, thermodynamic properties, and atomic charges of the compounds were calculated using the density functional theory (B3LYP) method with 6-311+G and 6-311++G basis sets. The difference between the observed and scaled wavenumber values of most of the fundamentals is very small. Unambiguous vibrational assignment of all the fundamentals was made using the total energy distribution (TED). The calculated HOMO and LUMO energies show that charge transfer occurs within the molecules. In addition, molecular electrostatic potential (MESP) mapping, Mulliken charge analysis, and calculations of the first-order hyperpolarizability and several thermodynamic properties were performed by the DFT method.

  19. Dissemination of original NMR data enhances reproducibility and integrity in chemical research.

    PubMed

    Bisson, Jonathan; Simmler, Charlotte; Chen, Shao-Nong; Friesen, J Brent; Lankin, David C; McAlpine, James B; Pauli, Guido F

    2016-08-25

    The notion of data transparency is gaining broad awareness within the scientific community. The availability of raw data is now regarded as a fundamental way to advance science by promoting both integrity and reproducibility of research outcomes. In particular, in the field of natural product and chemical research, NMR spectroscopy is a fundamental tool for structural elucidation and quantification (qNMR). As such, the accessibility of original NMR data, i.e., Free Induction Decays (FIDs), fosters transparency in chemical research and optimizes both peer review and reproducibility of reports by offering the fundamental tools to perform efficient structural verification. Although original NMR data are known to contain a wealth of information, they are rarely made accessible along with published results. This viewpoint discusses the relevance of the availability of original NMR data as part of good research practices, not only to promote structural correctness but also to enhance traceability and reproducibility of both chemical and biological results.

  20. Glottal open quotient in singing: Measurements and correlation with laryngeal mechanisms, vocal intensity, and fundamental frequency

    NASA Astrophysics Data System (ADS)

    Henrich, Nathalie; D'Alessandro, Christophe; Doval, Boris; Castellengo, Michèle

    2005-03-01

    This article presents the results of glottal open-quotient measurements in the case of singing voice production. It explores the relationship between open quotient and laryngeal mechanisms, vocal intensity, and fundamental frequency. The audio and electroglottographic signals of 18 classically trained male and female singers were recorded and analyzed with regard to vocal intensity, fundamental frequency, and open quotient. Fundamental frequency and open quotient are derived from the differentiated electroglottographic signal, using the DECOM (DEgg Correlation-based Open quotient Measurement) method. As male and female phonation may differ with respect to vocal-fold vibratory properties, a distinction is made between two different glottal configurations, which are called laryngeal mechanisms: mechanism 1 (related to chest, modal, and male head register) and mechanism 2 (related to falsetto for male and head register for female). The results show that open quotient depends on the laryngeal mechanism. It ranges from 0.3 to 0.8 in mechanism 1 and from 0.5 to 0.95 in mechanism 2. The open quotient is strongly related to vocal intensity in mechanism 1 and to fundamental frequency in mechanism 2.
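
    A minimal sketch of the open-quotient computation described above, applied to a synthetic signal rather than a real DEGG recording (the spike-train model, sampling rate, and peak conventions below are assumptions for illustration, not the DECOM implementation):

```python
import numpy as np

fs = 44100          # sampling rate (Hz)
f0 = 100.0          # fundamental frequency (Hz)
oq_true = 0.6       # open quotient used to build the synthetic signal

# Synthetic DEGG-like signal: one positive spike per glottal closure and
# one negative spike per glottal opening; the open phase runs from the
# opening instant to the next closure.
period = int(fs / f0)
n_cycles = 20
signal = np.zeros(period * n_cycles)
for k in range(n_cycles):
    t0 = k * period
    signal[t0] = 1.0                                   # closure instant
    signal[t0 + int((1.0 - oq_true) * period)] = -1.0  # opening instant

closures = np.flatnonzero(signal > 0.5)
openings = np.flatnonzero(signal < -0.5)

T = np.mean(np.diff(closures))           # fundamental period (samples)
# Open quotient: open-phase duration (opening -> next closure) over period.
open_phase = closures[1:] - openings[:-1]
oq_est = np.mean(open_phase) / T
```

    On this idealized signal, the estimate recovers the open quotient (approximately 0.6) and the fundamental frequency (fs / T = 100 Hz) used to construct it.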

  1. Integrating fundamental movement skills in late childhood.

    PubMed

    Gimenez, Roberto; Manoel, Edison de J; de Oliveira, Dalton Lustosa; Dantas, Luiz; Marques, Inara

    2012-04-01

    The study examined how children of different ages integrate fundamental movement skills, such as running and throwing, and whether their developmental status was related to the combination of these skills. Thirty children were divided into three groups (G1 = 6-year-olds, G2 = 9-year-olds, and G3 = 12-year-olds) and filmed performing three tasks: running, overarm throwing, and the combined task. Patterns were identified and described, and the efficiency of integration was calculated (distance differences of the ball thrown in two tasks, overarm throwing and combined task). Differences in integration were related to age: the 6-year-olds were less efficient in combining the two skills than the 9- and 12-year-olds. These differences may be indicative of a phase of integrating fundamental movement skills in the developmental sequence. This developmental status, particularly throwing, seems to be related to the competence to integrate skills, which suggests that fundamental movement skills may be developmental modules.

  2. The role of environment in the observed Fundamental Plane of radio Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Shabala, Stanislav S.

    2018-05-01

    The optical Fundamental Plane of black hole activity relates radio continuum luminosity of Active Galactic Nuclei to [O III] luminosity and black hole mass. We examine the environments of low redshift (z < 0.2) radio-selected AGN, quantified through galaxy clustering, and find that halo mass provides similar mass scalings to black hole mass in the Fundamental Plane relations. AGN properties are strongly environment-dependent: massive haloes are more likely to host radiatively inefficient (low-excitation) radio AGN, as well as a higher fraction of radio luminous, extended sources. These AGN populations have different radio-optical luminosity scaling relations, and the observed mass scalings in the parent AGN sample are built up by combining populations preferentially residing in different environments. Accounting for environment-driven selection effects, the optical Fundamental Plane of supermassive black holes is likely to be mass-independent, as predicted by models.

  3. Beyond Low-Rank Representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering.

    PubMed

    Wang, Yang; Wu, Lin

    2018-07-01

    Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for multi-view spectral clustering: it elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, to yield a better graph partition than single-view counterparts. In this paper we revisit LRR from a fundamentally different perspective, showing that it is essentially a latent clustered orthogonal projection-based representation coupled with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis orthogonal to the others, indicating its members, which intuitively projects the view-specific feature representation onto the span of all orthogonal bases to characterize the cluster structures. Building on this finding, we propose the following: (1) we decompose LRR into a latent clustered orthogonal representation via low-rank matrix factorization, to encode more flexible cluster structures than LRR over primal data objects; (2) we convert the problem of LRR into that of simultaneously learning the orthogonal clustered representation and an optimized local graph structure for each view; (3) the learned orthogonal clustered representations and local graph structures have the same magnitude across views, so that the ideal multi-view consensus can be readily achieved. Experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dholabhai, Pratik P., E-mail: pratik.dholabhai@asu.ed; Anwar, Shahriar, E-mail: anwar@asu.ed; Adams, James B., E-mail: jim.adams@asu.ed

    A kinetic lattice Monte Carlo (KLMC) model is developed for investigating oxygen vacancy diffusion in praseodymium-doped ceria. The approach uses a database of activation energies for oxygen vacancy migration, calculated from first principles, for various migration pathways in praseodymium-doped ceria. Since the first-principles calculations revealed significant vacancy-vacancy repulsion, we investigate the importance of that effect by conducting simulations with and without a repulsive interaction. Initially, as dopant concentrations increase, vacancy concentration and thus conductivity increases. However, at higher concentrations, vacancies interfere and repel one another, and dopants trap vacancies, creating a 'traffic jam' that decreases conductivity, which is consistent with the experimental findings. The modeled effective activation energy for vacancy migration increased slightly with increasing dopant concentration, in qualitative agreement with experiment. The current methodology, comprising a blend of first-principles calculations and the KLMC model, provides a very powerful fundamental tool for predicting the optimal dopant concentration in ceria-related materials. Graphical abstract: Ionic conductivity in praseodymium-doped ceria as a function of dopant concentration calculated using the kinetic lattice Monte Carlo vacancy-repelling model, which predicts the optimal composition for achieving maximum conductivity. Research highlights: KLMC method calculates the accurate time-dependent diffusion of oxygen vacancies. KLMC-VR model predicts a dopant concentration of ~15-20% to be optimal in PDC. At higher dopant concentration, vacancies interfere and repel one another, and dopants trap vacancies. Activation energy for vacancy migration increases as a function of dopant content.
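
    The record does not include code; the following is a generic sketch of the rejection-free (residence-time) step at the heart of kinetic Monte Carlo schemes like KLMC, with invented rates: an event is selected with probability proportional to its rate, and the clock advances by an exponentially distributed waiting time.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates, rng):
    """One rejection-free KMC step: choose an event with probability
    proportional to its rate, then advance the clock by an exponentially
    distributed residence time with mean 1 / R_total."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total))
    dt = -np.log(rng.uniform()) / total
    return event, dt

# Toy check 1: a single hop channel with rate 2.0 -> mean residence time 0.5.
times = [kmc_step([2.0], rng)[1] for _ in range(20000)]
mean_dt = float(np.mean(times))

# Toy check 2: with rates [1, 3], the faster channel fires ~75% of the time.
events = [kmc_step([1.0, 3.0], rng)[0] for _ in range(20000)]
frac_fast = sum(e == 1 for e in events) / 20000.0
```

    A real KLMC model would update the lattice state after each event and draw the rate table from first-principles activation energies, as the record describes.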

  5. Environmental statistics and optimal regulation.

    PubMed

    Sivak, David A; Thomson, Matt

    2014-09-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and the measurement apparatus, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
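
    A hedged toy of the Bayesian decision rule discussed above, with Gaussian measurement noise and two environmental states (all means, noise levels, costs, and benefits are invented for illustration, not taken from the paper): express the enzyme only when the posterior-weighted benefit exceeds the cost, or respond in a graded fashion proportional to the posterior.

```python
import numpy as np

def posterior_high(m, mu_lo=0.0, mu_hi=1.0, sigma=0.5, prior_hi=0.5):
    """P(env = high | measurement m) under Gaussian measurement noise."""
    def lik(mu):
        return np.exp(-0.5 * ((m - mu) / sigma) ** 2)
    num = lik(mu_hi) * prior_hi
    return num / (num + lik(mu_lo) * (1.0 - prior_hi))

def bayesian_expression(m, benefit=2.0, cost=1.0, **kw):
    """Thresholding strategy: express iff expected benefit exceeds cost."""
    return 1.0 if posterior_high(m, **kw) * benefit > cost else 0.0

def graded_expression(m, **kw):
    """Graded alternative: expression level proportional to the posterior."""
    return posterior_high(m, **kw)
```

    Which of the two strategies is fitter depends, as the record argues, on the convexity of expression costs and benefits and on the measurement uncertainty sigma.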

  6. Efficient Parameter Searches for Colloidal Materials Design with Digital Alchemy

    NASA Astrophysics Data System (ADS)

    Dodd, Paul M.; Geng, Yina; van Anders, Greg; Glotzer, Sharon C.

    Optimal colloidal materials design is challenging, even for high-throughput or genomic approaches, because the design space provided by modern colloid synthesis techniques can easily have dozens of dimensions. In this talk we present the methodology of an inverse approach we term "digital alchemy" to perform rapid searches of design-parameter spaces with up to 188 dimensions that yield thermodynamically optimal colloid parameters for target crystal structures with up to 20 particles in a unit cell. The method relies only on fundamental principles of statistical mechanics and Metropolis Monte Carlo techniques, and yields particle attribute tolerances via analogues of familiar stress-strain relationships.
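
    A minimal sketch of the extended-ensemble idea behind digital alchemy (not the authors' code; the potential, inverse temperature, and step sizes are invented): a design parameter alpha is sampled by Metropolis Monte Carlo alongside the configuration x, so that its marginal distribution concentrates on the thermodynamically optimal value.

```python
import numpy as np

rng = np.random.default_rng(42)

def energy(x, alpha):
    """Toy 'alchemical' potential: particle coordinate x couples to a
    design parameter alpha whose preferred value is 2.0."""
    return (x - alpha) ** 2 + (alpha - 2.0) ** 2

def metropolis_alchemy(beta=4.0, steps=40000, step_size=0.5):
    """Metropolis MC in the extended ensemble (x, alpha): the design
    parameter is sampled alongside the configuration, so its marginal
    distribution peaks at the thermodynamically optimal value."""
    x, alpha = 0.0, 0.0
    samples = []
    for _ in range(steps):
        xn = x + step_size * rng.standard_normal()
        an = alpha + step_size * rng.standard_normal()
        if rng.uniform() < np.exp(-beta * (energy(xn, an) - energy(x, alpha))):
            x, alpha = xn, an
        samples.append(alpha)
    return np.array(samples[steps // 2:])   # discard burn-in

alphas = metropolis_alchemy()
alpha_mean = float(np.mean(alphas))
```

    In this toy the marginal of alpha is a Gaussian centered on 2.0, so the sampled mean identifies the optimal design parameter; the real method does the same with particle shape parameters and hard-particle free energies.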

  7. A Tutorial on Adaptive Design Optimization

    PubMed Central

    Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.

    2013-01-01

    Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275
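
    A small sketch of the core computation in adaptive design optimization, choosing the design that maximizes the expected information gain about which model is true; the two competing models of a binary outcome below are invented stand-ins, not from the tutorial:

```python
import numpy as np

def entropy(p):
    """Binary entropy in bits; clipped to stay finite at p in {0, 1}."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def utility(d, prior=0.5):
    """Expected information gain of design d for discriminating two
    hypothetical models: model A predicts success probability d,
    model B predicts 1 - d (mutual information between model and outcome)."""
    pA, pB = d, 1.0 - d
    p_mix = prior * pA + (1.0 - prior) * pB      # predictive distribution
    return entropy(p_mix) - (prior * entropy(pA) + (1.0 - prior) * entropy(pB))

designs = np.linspace(0.0, 1.0, 101)
best = designs[int(np.argmax([utility(d) for d in designs]))]
```

    The most informative designs are the extremes, where the two models disagree maximally, while d = 0.5, where they agree, carries no information; ADO repeats this selection adaptively as the model posterior is updated after each observation.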

  8. Optimization of a hybrid exchange-correlation functional for silicon carbides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oda, Takuji; Zhang, Yanwen; Weber, William J

    2013-01-01

    A hybrid exchange-correlation functional is optimized to accurately describe the nature of silicon carbide (SiC) in the framework of ab initio calculations based on density functional theory (DFT), especially with an aim toward future applications in defect studies. It is shown that the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional with a screening parameter of 0.15 Å⁻¹ outperforms conventional exchange-correlation functionals and other popular hybrid functionals in describing the band structures of SiC. High transferability is demonstrated through assessment over various SiC polytypes, silicon, and diamond. Excellent performance is also confirmed for other fundamental material properties, including elastic constants and phonon frequencies.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquez, Andres; Manzano Franco, Joseph B.; Song, Shuaiwen

    With Exascale performance and its challenges in mind, one ubiquitous concern among architects is energy efficiency. Petascale systems projected to Exascale systems are unsustainable at current power consumption rates. One major contributor to system-wide power consumption is the number of memory operations leading to data movement and management techniques applied by the runtime system. To address this problem, we present the concept of the Architected Composite Data Types (ACDT) framework. The framework is made aware of data composites, assigning them a specific layout, transformations, and operators. Data manipulation overhead is amortized over a larger number of elements, and program performance and power efficiency can be significantly improved. We developed the fundamentals of an ACDT framework on a massively multithreaded adaptive runtime system geared towards Exascale clusters. Showcasing the capability of ACDT, we exercised the framework with two representative processing kernels, Matrix Vector Multiply and Cholesky Decomposition, applied to sparse matrices. As transformation modules, we applied optimized compress/decompress engines and configured invariant operators for maximum energy/performance efficiency. Additionally, we explored two different approaches based on transformation opaqueness in relation to the application. Under the first approach, the application is agnostic to compression and decompression activity. Such an approach entails minimal changes to the original application code, but leaves out potential application-specific optimizations. The second approach exposes the decompression process to the application, thereby exposing optimization opportunities that can only be exploited with application knowledge. The experimental results show that the two approaches have their respective strengths in hardware and software, where the software approach can yield performance and power improvements that are an order of magnitude better than ACDT-oblivious, hand-optimized implementations. We consider the ACDT runtime framework an important component of compute nodes that will lead towards power-efficient Exascale clusters.

  10. A method to predict different mechanisms for blood-brain barrier permeability of CNS activity compounds in Chinese herbs using support vector machine.

    PubMed

    Jiang, Ludi; Chen, Jiahua; He, Yusu; Zhang, Yanling; Li, Gongyu

    2016-02-01

    The blood-brain barrier (BBB), a highly selective barrier between the central nervous system (CNS) and the blood stream, restricts and regulates the penetration of compounds from the blood into the brain. Drugs that affect the CNS interact with the BBB prior to reaching their target site, so prediction of BBB permeability is a fundamental and significant research direction in neuropharmacology. In this study, we combed through the available data and, with the help of a support vector machine (SVM), established an experimental process for discovering potential CNS compounds and investigating the mechanisms of their BBB permeability. Four types of prediction models, referring to CNS activity, BBB permeability, passive diffusion, and efflux transport, were obtained in the experimental process. The first two models were used to discover compounds which may have CNS activity and also cross the BBB; the latter two were used to elucidate the mechanism of BBB permeability of those compounds. Three parameter-optimization methods, Grid Search, Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), were used to optimize the SVM models. Four optimal models were then selected with excellent evaluation indexes (the accuracy, sensitivity, and specificity of each model were all above 85%). Furthermore, the discrimination models were utilized to study the BBB properties of known CNS-active compounds in Chinese herbs, which may guide CNS drug development. With this relatively systematic and rapid approach, the rationality of applying traditional Chinese medicines to treat nervous system diseases in clinical practice will be improved.
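
    The record compares Grid Search, GA, and PSO for tuning SVM hyperparameters. As a self-contained sketch (the score surface below is a synthetic stand-in for cross-validated SVM accuracy, not the authors' data), here are a grid search and a basic particle swarm over log-scaled (C, gamma):

```python
import numpy as np

rng = np.random.default_rng(7)

def cv_score(logC, logGamma):
    """Stand-in for a cross-validated SVM accuracy surface, peaked at
    C = 10 (logC = 1) and gamma = 1e-2 (logGamma = -2)."""
    return 1.0 - 0.1 * ((logC - 1.0) ** 2 + (logGamma + 2.0) ** 2)

# Grid search: exhaustive evaluation over a log-spaced parameter grid.
grid = [(lc, lg) for lc in range(-3, 4) for lg in range(-5, 2)]
grid_best = max(grid, key=lambda p: cv_score(*p))

# Particle swarm optimization: particles track personal and global bests.
def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5, 5, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cv_score(*p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([cv_score(*p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest

pso_best = pso()
```

    Grid search lands exactly on the grid point nearest the optimum, while PSO homes in on the continuous optimum; in practice the score function would be replaced by actual cross-validation of an SVM.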

  11. Elevated depressive symptoms enhance reflexive but not reflective auditory category learning.

    PubMed

    Maddox, W Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G

    2014-09-01

    In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially-dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization and even less is known about the effects of individual difference in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. 
The implications of this work for a dual-process approach to auditory learning and depression are discussed.

  12. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability, and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the poor scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature, in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over 20× reduction in run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  13. Elevated Depressive Symptoms Enhance Reflexive but not Reflective Auditory Category Learning

    PubMed Central

    Maddox, W. Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G.

    2014-01-01

    In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially-dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization and even less is known about the effects of individual difference in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. 
The implications of this work for a dual-process approach to auditory learning and depression are discussed. PMID:25041936

  14. Nanotechnology inspired advanced engineering fundamentals for optimizing drug delivery.

    PubMed

    Kassem, Ahmed Alaa

    2018-02-06

    Drug toxicity and inefficacy are commonly experienced problems with drug therapy failure. To face these problems, extensive research work took place aiming to design new dosage forms for drug delivery especially nanoparticulate systems. These systems are designed to increase the quantity of the therapeutic molecule delivered to the desired site concurrently with reduced side effects. In order to achieve this objective, nanocarriers must principally display suitable drug vehiculization abilities and a controlled biological destiny of drug molecules. Only the intelligent design of the nanomedicine will accomplish these fundamentals. The present review article is dedicated to the discussion of the important fundamentals to be considered in the fabrication of nanomedicines. These include the therapeutic agent, the nanocarrier and the functionalization moieties. Special consideration is devoted to the explanation and compilation of highly potential fabrication approaches assisting how to control the in vivo destiny of the nanomedicine. Finally, some nanotechnology-based drug delivery systems, for the development of nanomedicine, are also discussed. The nanotechnology-based drug delivery systems showed remarkable outcomes based on passive and active targeting as well as improvement of the drug pharmacodynamic and pharmacokinetic profiles. Multifunctional nanocarrier concept affords a revolutionary drug delivery approach for maximizing the efficacy, safety and monitoring the biological fate of the therapeutic molecule. Nanomedicines may enhance the efficacy of therapeutic molecules and reduce their toxic effects. Meanwhile, further research works are required to rightly optimize (and define) the effectiveness, nanotoxicity, in vivo destiny and feasibility of these nanomedicines which, from a preclinical standpoint, are actually promising.

  15. Fundamentals of Hydrocarbon Upgrading to Liquid Fuels and Commodity Chemicals over Catalytic Metallic Nanoparticles

    NASA Astrophysics Data System (ADS)

    Chen, Tao

    Promising new technologies for biomass conversion into fuels and chemical feedstocks rely on the production of bio-oils, which need to be upgraded in order to remove oxygen-containing hydrocarbons and water. A high oxygen concentration makes bio-oils acidic and corrosive, unstable during storage, and less energetically valuable per unit weight than petroleum-derived hydrocarbons. Although there are efficient processes for the production of bio-oils, there are no efficient technologies for their upgrading. Current technologies utilize traditional petroleum refining catalysts, which are not optimized for biomass processing. New upgrading technologies are, therefore, urgently needed for development of sustainable energy resources. Development of such new technologies, however, is severely hindered by a lack of fundamental understanding of how oxygen and oxygen-containing hydrocarbons derived from biomass interact with promising noble-metal catalysts. In this study, kinetic reaction measurements, catalyst characterization and quantum chemical calculations using density functional theory were combined for determining adsorption modes and reaction mechanisms of hydrocarbons in the presence of oxygen on surfaces of catalytic noble-metal nanoparticles. The results were used for developing improved catalyst formulations and optimization of reaction conditions. The addition of molybdenum to platinum catalysts was shown to improve catalytic activity, stability, and selectivity in hydrodeoxygenation of acetic acid, which served as a model biomass compound. The fundamental results that describe interactions of oxygen and hydrocarbons with noble-metal catalysts were extended to other reactions and fields of study: evaluation of the reaction mechanism for hydrogen peroxide decomposition, development of improved hydrogenation catalysts and determination of adsorption modes of a spectroscopic probe molecule.

  16. Methodology for Selection of Optimum Light Stringers in Functionally Graded Panels Designed for Prescribed Fundamental Frequency or Buckling Load

    NASA Astrophysics Data System (ADS)

    Birman, Victor; Byrd, Larry W.

    2008-02-01

Interest in functionally graded materials (FGM) and structures has been generated by their potential advantages, including enhanced thermal properties, reduced or eliminated delamination concerns, and the potential for an improved stress distribution. Various aspects of the processing, design, micromechanics, and analysis of FGM have been outlined in a number of reviews, of which [1-3] are cited here. In particular, functionally graded panels may be advantageous compared with their conventional counterparts in numerous applications. However, a typical FGM panel is asymmetric about its middle plane, resulting in lower buckling loads and fundamental frequencies as well as higher stresses and deformations than a counterpart with a symmetric distribution of the same constituents. The reduced stiffness of FGM panels can be compensated by reinforcing them with stringers. For example, metallic stringers at the metal-rich surface of a FGM ceramic-metal panel may provide an efficient solution, enabling a designer to increase both buckling loads and natural frequencies. The list of studies on optimization of FGM is extensive, as could be anticipated for such tailored structural elements. For example, recent papers by Batra and his collaborators present optimization of the natural frequencies of a FGM plate through material grading [4] and through graded fiber orientation [5]. The present paper is concerned with the optimum design of a system of stringers for a specified FGM panel. The task is to design the lightest system of stringers enabling the panel to achieve a prescribed buckling load or fundamental frequency.

  17. Angles-only navigation for autonomous orbital rendezvous

    NASA Astrophysics Data System (ADS)

    Woffinden, David C.

The proposed thesis of this dissertation has both a practical element and a theoretical component which aim to answer key questions related to the use of angles-only navigation for autonomous orbital rendezvous. The first and fundamental principle of this work argues that an angles-only navigation filter can determine the relative position and orientation (pose) between two spacecraft well enough to perform the necessary maneuvers and close proximity operations for autonomous orbital rendezvous. Second, the implementation of angles-only navigation for on-orbit applications is looked upon with skeptical eyes because of its perceived limitation in determining the relative range between two vehicles. This assumed, yet little understood, subtlety can be formally characterized with a closed-form analytical observability criterion which specifies the necessary and sufficient conditions for determining the relative position and velocity with only angular measurements. With a mathematical expression of the observability criterion, it becomes possible to (1) identify the orbital rendezvous trajectories and maneuvers that ensure the relative position and velocity are observable for angles-only navigation, (2) quantify the degree or level of observability, and (3) compute optimal maneuvers that maximize observability. In summary, the objective of this dissertation is to provide both a practical and a theoretical foundation for the advancement of autonomous orbital rendezvous through the use of angles-only navigation.

  18. SUDEP: To discuss or not? Recommendations from bereaved relatives.

    PubMed

    Ramachandran Nair, Rajesh; Jack, Susan M; Strohm, Sonya

    2016-03-01

    The overarching purpose of this descriptive and exploratory qualitative study was to understand the experiences of relatives of individuals whose deaths were identified as SUDEP and to explore their preferences regarding SUDEP counseling. The principles of fundamental qualitative description informed all design decisions. Stratified purposeful sampling included 27 bereaved relatives (parent, sibling, spouse or child), aged at least 18 years, of 21 persons who passed away because of SUDEP. In-depth one-to-one interviews were conducted. Directed content analysis was used to code, categorize, and synthesize the interview data. There was consensus among all participants that the risk of SUDEP should be discussed with patients by their healthcare providers. Relatives opted for information on SUDEP at the time of, or shortly following, the diagnosis of epilepsy. Neurologists were identified as the healthcare providers who should discuss SUDEP with patients during a face-to-face encounter, subsequently supplemented with written information. It was identified that, when discussing SUDEP, emphasis should be on the risk factors, possible preventive strategies, and the rarity of incidence. The results of this study indicated that bereaved relatives wanted neurologists to inform patients about the risk of SUDEP, with optimal timing and setting of SUDEP counseling determined on a case-by-case basis. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Sire: An Automated Software Development Environment.

    DTIC Science & Technology

    1983-12-01

..."understanding the fundamental nature of the software process" (Osterweil, 1981: 35). In fact, the optimal environment for most applications is found by extending... resource planning and other management concerns that cause these problems. Therefore, a complete ASDE should attempt to provide the... management with some type of control over the project without impeding the actual development process. Facilities that estimate current resource...

  20. Designing for User Cognition and Affect in Software Instructions

    ERIC Educational Resources Information Center

    van der Meij, Hans

    2008-01-01

In this paper we examine how to design software instructions for user cognition and affect. A basic and a co-user manual are compared. The former provides fundamental support for both; the latter includes a buddy to further optimize support for user affect. The basic manual was faster and judged as easier to process than the co-user manual. In…

  1. Discrete sequence prediction and its applications

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

Learning from experience to predict sequences of discrete symbols is a fundamental problem in machine learning with many applications. We study sequence prediction using a simple and practical algorithm called TDAG. The TDAG algorithm is first tested by comparing its performance with some common data compression algorithms. It is then adapted to the detailed requirements of dynamic program optimization, with excellent results.
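The record does not reproduce TDAG itself, but the context-tree idea it builds on can be sketched with a minimal fixed-depth predictor. This is a hypothetical simplification for illustration, not Laird's actual algorithm:

```python
from collections import defaultdict

class TriePredictor:
    """Fixed-depth context predictor -- a simplified sketch in the
    spirit of TDAG, not the actual TDAG algorithm."""

    def __init__(self, depth=3):
        self.depth = depth
        # context (tuple of symbols) -> symbol -> count
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, history, symbol):
        # record the symbol under every context suffix up to `depth`
        for k in range(min(self.depth, len(history)) + 1):
            ctx = tuple(history[len(history) - k:])
            self.counts[ctx][symbol] += 1

    def predict(self, history):
        # back off from the longest matching context to the empty one
        for k in range(min(self.depth, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            if self.counts[ctx]:
                return max(self.counts[ctx], key=self.counts[ctx].get)
        return None

p = TriePredictor(depth=2)
history = []
for s in "abcabcabc":
    p.update(history, s)
    history.append(s)
print(p.predict(list("ab")))  # prints "c": "c" always followed "ab"
```

The back-off from the longest matching context mirrors how context-tree predictors trade specificity against sample size.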

  2. An Examination of the Thought and Action of Higher Education Institution Presidents toward the Optimal Development of Students' Character

    ERIC Educational Resources Information Center

    Dietrich, Julie L.

    2017-01-01

    Historically, American higher education aimed to serve a public good, to include attending to students' character development and addressing the myriad needs of society (Yanikoski, 2004). This purpose was fundamental to the educational thought of ancient Greek philosophers, upon which much of higher education rests today. Two themes clearly…

  3. Optimizing the Nation's Investment in Academic Research: A New Regulatory Framework for the 21st Century

    ERIC Educational Resources Information Center

    National Academies Press, 2016

    2016-01-01

    Research universities are critical contributors to our national research enterprise. They are the principal source of a world-class labor force and fundamental discoveries that enhance our lives and the lives of others around the world. These institutions help to create an educated citizenry capable of making informed and crucial choices as…

  4. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…

  5. The Pediatric Urinary Tract and Medical Imaging.

    PubMed

    Penny, Steven M

    2016-01-01

    The pediatric urinary tract often is assessed with medical imaging. Consequently, it is essential for medical imaging professionals to have a fundamental understanding of pediatric anatomy, physiology, and common pathology of the urinary tract to provide optimal patient care. This article provides an overview of fetal development, pediatric urinary anatomy and physiology, and common diseases and conditions of the pediatric urinary tract.

  6. Physical fundamentals of criterial estimation of nitriding technology for parts of friction units

    NASA Astrophysics Data System (ADS)

    Kuksenova, L. I.; Gerasimov, S. A.; Lapteva, V. G.; Alekseeva, M. S.

    2013-03-01

    Characteristics of the structure and properties of surface layers of nitrided structural steels and alloys, which affect the level of surface fracture under friction, are studied. A generalized structural parameter for optimizing the nitriding process and a rapid method for estimating the quality of the surface layer of nitrided parts of friction units are developed.

  7. Effects of Phonetic Context on Relative Fundamental Frequency

    ERIC Educational Resources Information Center

    Lien, Yu-An S.; Gattuccio, Caitlin I.; Stepp, Cara E.

    2014-01-01

    Purpose: The effect of phonetic context on relative fundamental frequency (RFF) was examined, in order to develop stimuli sets with minimal within-speaker variability that can be implemented in future clinical protocols. Method: Sixteen speakers with healthy voices produced RFF stimuli. Uniform utterances consisted of 3 repetitions of the same…

  8. Plane Wave SH₀ Piezoceramic Transduction Optimized Using Geometrical Parameters.

    PubMed

    Boivin, Guillaume; Viens, Martin; Belanger, Pierre

    2018-02-10

Structural health monitoring is a prominent alternative to the scheduled maintenance of safety-critical components. The nondispersive nature as well as the through-thickness mode shape of the fundamental shear horizontal guided wave mode (SH₀) make it a particularly attractive candidate for ultrasonic guided wave structural health monitoring. However, plane wave excitation of SH₀ at a high level of purity remains challenging because of the existence of the fundamental Lamb modes (A₀ and S₀) below the cutoff frequency-thickness product of high-order modes. This paper presents a piezoelectric transducer concept optimized for plane SH₀ wave transduction based on the transducer geometry. The transducer parameter exploration was initially performed using a simple analytical model. A 3D multiphysics finite element model was then used to refine the transducer design. Finally, an experimental validation was conducted with a 3D laser Doppler vibrometer system. The analytical model, the finite element model, and the experimental measurement showed excellent agreement. The modal selectivity of SH₀ within a 20° beam opening angle at the design frequency of 425 kHz in a 1.59 mm aluminum plate was 23 dB, and the angle of the 6 dB wavefront was 86°.

  9. A discrete twin-boundary approach for simulating the magneto-mechanical response of Ni-Mn-Ga

    NASA Astrophysics Data System (ADS)

    Faran, Eilon; Shilo, Doron

    2016-09-01

    The design and optimization of ferromagnetic shape memory alloys (FSMA)-based devices require quantitative understanding of the dynamics of twin boundaries within these materials. Here, we present a discrete twin boundary modeling approach for simulating the behavior of an FSMA Ni-Mn-Ga crystal under combined magneto-mechanical loading conditions. The model is based on experimentally measured kinetic relations that describe the motion of individual twin boundaries over a wide range of velocities. The resulting calculations capture the dynamic response of Ni-Mn-Ga and reveal the relations between fundamental material parameters and actuation performance at different frequencies of the magnetic field. In particular, we show that at high field rates, the magnitude of the lattice barrier that resists twin boundary motion is the important property that determines the level of actuation strain, while the contribution of twinning stress property is minor. Consequently, type II twin boundaries, whose lattice barrier is smaller compared to type I, are expected to show better actuation performance at high rates, irrespective of the differences in the twinning stress property between the two boundary types. In addition, the simulation enables optimization of the actuation strain of a Ni-Mn-Ga crystal by adjusting the magnitude of the bias mechanical stress, thus providing direct guidelines for the design of actuating devices. Finally, we show that the use of a linear kinetic law for simulating the twinning-based response is inadequate and results in incorrect predictions.

  10. Investigation of monolithic passively mode-locked quantum dot lasers with extremely low repetition frequency.

    PubMed

    Xu, Tianhong; Cao, Juncheng; Montrosset, Ivo

    2015-01-01

The dynamical regimes and performance optimization of quantum dot monolithic passively mode-locked lasers with extremely low repetition rates are investigated numerically. A modified multisection delay differential equation model is proposed to accomplish simulations of both two-section and three-section passively mode-locked lasers with long cavities. The numerical simulations show that fundamental and harmonic mode-locking regimes can be multistable over a wide current range. These dynamic regimes are studied, and the reasons for their existence are explained. In addition, we demonstrate that fundamental pulses with higher peak power can be achieved when the laser is designed to work in a region with smaller differential gain.

  11. Information-theoretic approach to interactive learning

    NASA Astrophysics Data System (ADS)

    Still, S.

    2009-01-01

    The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.

  12. Optimization of hole generation in Ti/CFRP stacks

    NASA Astrophysics Data System (ADS)

    Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.

    2018-03-01

The article describes methods for improving the surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting methods and drill geometry. The research is based on the fundamentals of machine building, probability theory, mathematical statistics, and the theories of experiment planning and manufacturing process optimization. Statistical processing of the experimental data was carried out by means of Statistica 6 and Microsoft Excel 2010. Surface geometry in the Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series Profilometer, and in the CFRP stacks using a Bruker ContourGT-K1 Optical Microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 measuring machine, and temperatures in the cutting zones were recorded with a FLIR SC7000 Series Infrared Camera. Models of multivariate analysis of variance were developed; they show the effects of drilling modes on the surface quality and accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved, and optimal cutting technologies which improve performance were developed, along with methods for assessing the effects of thermal tool and material expansion on the accuracy of holes in Ti/CFRP/Ti stacks.

  13. Accelerated optimization and automated discovery with covariance matrix adaptation for experimental quantum control

    NASA Astrophysics Data System (ADS)

    Roslund, Jonathan; Shir, Ofer M.; Bäck, Thomas; Rabitz, Herschel

    2009-10-01

Optimization of quantum systems by closed-loop adaptive pulse shaping offers a rich domain for the development and application of specialized evolutionary algorithms. Derandomized evolution strategies (DESs) are presented here as a robust class of optimizers for experimental quantum control. The combination of stochastic and quasi-local search embodied by these algorithms is especially amenable to the inherent topology of quantum control landscapes. Implementation of DES in the laboratory results in efficiency gains of up to ~9 times that of the standard genetic algorithm, and it is thus a promising tool for optimization of unstable or fragile systems. The statistical learning upon which these algorithms are predicated also provides the means for obtaining a control problem's Hessian matrix with no additional experimental overhead. The forced optimal covariance adaptive learning (FOCAL) method is introduced to enable retrieval of the Hessian matrix, which can reveal information about the landscape's local structure and dynamic mechanism. Exploitation of such algorithms in quantum control experiments should enhance their efficiency and provide additional fundamental insights.

  14. Finding influential nodes for integration in brain networks using optimal percolation theory.

    PubMed

    Del Ferraro, Gino; Moreno, Andrea; Min, Byungjoon; Morone, Flaviano; Pérez-Ramírez, Úrsula; Pérez-Cervera, Laura; Parra, Lucas C; Holodny, Andrei; Canals, Santiago; Makse, Hernán A

    2018-06-11

    Global integration of information in the brain results from complex interactions of segregated brain networks. Identifying the most influential neuronal populations that efficiently bind these networks is a fundamental problem of systems neuroscience. Here, we apply optimal percolation theory and pharmacogenetic interventions in vivo to predict and subsequently target nodes that are essential for global integration of a memory network in rodents. The theory predicts that integration in the memory network is mediated by a set of low-degree nodes located in the nucleus accumbens. This result is confirmed with pharmacogenetic inactivation of the nucleus accumbens, which eliminates the formation of the memory network, while inactivations of other brain areas leave the network intact. Thus, optimal percolation theory predicts essential nodes in brain networks. This could be used to identify targets of interventions to modulate brain function.
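The optimal-percolation machinery behind this approach ranks nodes by a collective-influence score (in the sense of Morone and Makse's heuristic): CI_l(i) = (k_i - 1) × Σ (k_j - 1) over nodes exactly l steps from i, which is why low-degree nodes in strategic positions can outrank hubs. A toy sketch on a hypothetical six-node network:

```python
from collections import deque

def collective_influence(adj, node, radius=1):
    """CI_l(i) = (k_i - 1) * sum of (k_j - 1) over nodes exactly
    `radius` steps from i (collective-influence score)."""
    dist = {node: 0}
    frontier = deque([node])
    while frontier:
        u = frontier.popleft()
        if dist[u] == radius:
            continue  # do not expand past the ball boundary
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    boundary = [v for v, d in dist.items() if d == radius]
    return (len(adj[node]) - 1) * sum(len(adj[v]) - 1 for v in boundary)

# Toy network: node 2 is a modest-degree "bridge" between two clusters
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
scores = {n: collective_influence(adj, n, radius=1) for n in adj}
best = max(scores, key=scores.get)
print(best, scores[best])  # prints "2 8": the bridge outranks its cluster-mates
```

In the full method, the highest-CI node is removed, scores are recomputed, and the process repeats until the giant component collapses; the removed set approximates the optimal influencer set.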

  15. Statistical considerations in monitoring birds over large areas

    USGS Publications Warehouse

    Johnson, D.H.

    2000-01-01

    The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.
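The power estimation mentioned above can be illustrated with a normal-approximation sketch for a two-sample comparison of mean counts between two survey periods; the design, effect size, and numbers here are assumed for illustration, not from the source:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sample_power(delta, sd, n_per_group, z_crit=1.959964):
    """Approximate power of a two-sided two-sample z-test to detect a
    true difference `delta` in means (alpha = 0.05 by default)."""
    se = sd * math.sqrt(2.0 / n_per_group)      # SE of the difference in means
    z_shift = delta / se                        # noncentrality of the test
    return (1 - phi(z_crit - z_shift)) + phi(-z_crit - z_shift)

# e.g. detecting a drop of 2 birds/route (sd = 5) with 50 routes per period
print(round(two_sample_power(delta=2.0, sd=5.0, n_per_group=50), 3))
```

With these illustrative numbers the power is only about 0.5, which is exactly the kind of weak design the abstract warns can waste the monitoring effort.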

  16. Pinning down high-performance Cu-chalcogenides as thin-film solar cell absorbers: A successive screening approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yubo; Zhang, Wenqing, E-mail: wqzhang@mail.sic.ac.cn, E-mail: pzhang3@buffalo.edu; State Key Laboratory of High Performance Ceramics and Superfine Microstructures, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050

    2016-05-21

Photovoltaic performances of Cu-chalcogenide solar cells are strongly correlated with fundamental properties of the absorber, such as an optimal bandgap, a desired band alignment with the window material, and high photon absorption. According to these criteria, we carry out a successive screening of 90 Cu-chalcogenides using efficient theoretical approaches. Besides the well-recognized CuInSe₂ and Cu₂ZnSnSe₄ materials, several novel candidates are identified that have optimal bandgaps of around 1.0-1.5 eV, spike-like band alignments with the CdS window layer, sharp photon absorption edges, and high absorption coefficients. These new systems have great potential to be superior absorbers for photovoltaic applications if their carrier transport and defect properties are properly optimized.

  17. Rotor design optimization using a free wake analysis

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Boschitsch, Alexander H.; Wachspress, Daniel A.; Chua, Kiat

    1993-01-01

The aim of this effort was to develop a comprehensive performance optimization capability for tiltrotor and helicopter blades. The analysis incorporates the validated EHPIC (Evaluation of Hover Performance using Influence Coefficients) model of helicopter rotor aerodynamics within a general linear/quadratic programming algorithm that allows optimization using a variety of objective functions involving the performance. The resulting computer code, EHPIC/HERO (HElicopter Rotor Optimization), improves upon several features of the previous EHPIC performance model and allows optimization utilizing a wide spectrum of design variables, including twist, chord, anhedral, and sweep. The new analysis supports optimization of a variety of objective functions, including weighted measures of rotor thrust, power, and propulsive efficiency. The fundamental strength of the approach is that an efficient search for improved versions of the baseline design can be carried out while retaining the demonstrated accuracy inherent in the EHPIC free wake/vortex lattice performance analysis. Sample problems are described that demonstrate the success of this approach for several representative rotor configurations in hover and axial flight. Features that were introduced to convert earlier demonstration versions of this analysis into a generally applicable tool for researchers and designers are also discussed.

  18. A framework for designing and analyzing binary decision-making strategies in cellular systems

    PubMed Central

    Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.

    2015-01-01

    Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552
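Rate-distortion tradeoffs of the kind this framework optimizes can be traced numerically with the standard Blahut-Arimoto iteration. The sketch below is a generic textbook instance for a binary source under Hamming distortion, not the authors' cellular model; `beta` is the Lagrange multiplier trading rate against distortion:

```python
import math

def blahut_arimoto(p_x, dist, beta, iters=200):
    """Blahut-Arimoto iteration for a discrete-source rate-distortion
    point; returns (rate in bits, expected distortion)."""
    n_hat = len(dist[0])
    q = [1.0 / n_hat] * n_hat                    # output marginal q(x_hat)
    for _ in range(iters):
        # test channel q(x_hat | x) proportional to q(x_hat)*exp(-beta*d)
        cond = []
        for i in range(len(p_x)):
            w = [q[j] * math.exp(-beta * dist[i][j]) for j in range(n_hat)]
            z = sum(w)
            cond.append([v / z for v in w])
        # re-estimate the output marginal
        q = [sum(p_x[i] * cond[i][j] for i in range(len(p_x)))
             for j in range(n_hat)]
    rate = sum(p_x[i] * cond[i][j] * math.log2(cond[i][j] / q[j])
               for i in range(len(p_x)) for j in range(n_hat) if cond[i][j] > 0)
    distortion = sum(p_x[i] * cond[i][j] * dist[i][j]
                     for i in range(len(p_x)) for j in range(n_hat))
    return rate, distortion

# Fair binary source, Hamming distortion; R(D) = 1 - H(D) in closed form
R, D = blahut_arimoto([0.5, 0.5], [[0, 1], [1, 0]], beta=2.0)
print(round(R, 3), round(D, 3))
```

Sweeping `beta` traces the whole R(D) curve, the information-theoretic frontier against which a decision-making system's optimality can be judged.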

  19. 18.4%-Efficient Heterojunction Si Solar Cells Using Optimized ITO/Top Electrode.

    PubMed

    Kim, Namwoo; Um, Han-Don; Choi, Inwoo; Kim, Ka-Hyun; Seo, Kwanyong

    2016-05-11

We optimize the thickness of a transparent conducting oxide (TCO) layer and apply a microscale mesh-pattern metal electrode for high-efficiency a-Si/c-Si heterojunction solar cells. A solar cell equipped with the proposed microgrid metal electrode demonstrates a high short-circuit current density (JSC) of 40.1 mA/cm², and achieves a high efficiency of 18.4% with an open-circuit voltage (VOC) of 618 mV and a fill factor (FF) of 74.1% as a result of the shortened carrier path length and the decreased electrode area of the microgrid metal electrode. Furthermore, by optimizing the process sequence for electrode formation, we are able to effectively restore the reduction in VOC that occurs during the microgrid metal electrode formation process. This work is expected to serve as a fundamental study toward effectively reducing current loss in a-Si/c-Si heterojunction solar cells through the optimization of transparent and metal electrodes.

  20. Wet cooling towers: rule-of-thumb design and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leeper, Stephen A.

    1981-07-01

A survey of wet cooling tower literature was performed to develop a simplified method of cooling tower design and simulation for use in power plant cycle optimization. The theory of heat exchange in wet cooling towers is briefly summarized. The Merkel equation (the fundamental equation of heat transfer in wet cooling towers) is presented and discussed. The cooling tower fill constant (Ka) is defined and values derived. A rule-of-thumb method for the optimized design of cooling towers is presented. The rule-of-thumb design method provides information useful in power plant cycle optimization, including tower dimensions, water consumption rate, exit air temperature, power requirements and construction cost. In addition, a method for simulation of cooling tower performance at various operating conditions is presented. This information is also useful in power plant cycle evaluation. Using the information presented, it will be possible to incorporate wet cooling tower design and simulation into a procedure to evaluate and optimize power plant cycles.
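The Merkel integral, KaV/L = ∫ dT / (h_sat(T) - h_air(T)) over the cooling range, is conventionally evaluated with a four-point Chebyshev rule. The sketch below uses assumed illustrative operating numbers, and the saturated-air enthalpy correlation is a rough polynomial fit for illustration only, not property-table data:

```python
def merkel_number(t_hot, t_cold, t_wb, lg_ratio, h_sat):
    """Four-point Chebyshev evaluation of the Merkel integral
    KaV/L for a counterflow wet cooling tower."""
    h_air_in = h_sat(t_wb)                # entering air at wet-bulb enthalpy
    dt = t_hot - t_cold
    total = 0.0
    for frac in (0.1, 0.4, 0.6, 0.9):     # standard Chebyshev points
        t = t_cold + frac * dt
        h_air = h_air_in + lg_ratio * (t - t_cold)   # air-side operating line
        total += 1.0 / (h_sat(t) - h_air)            # enthalpy driving force
    return dt / 4.0 * total

# Crude saturated-air enthalpy fit (kJ/kg dry air, T in deg C) -- assumed
h_sat = lambda t: 4.7926 + 2.568 * t - 0.029834 * t * t + 0.0016657 * t ** 3

# Illustrative duty: cool water 40 -> 30 deg C, 25 deg C wet bulb, L/G = 1.2
KaVL = merkel_number(t_hot=40.0, t_cold=30.0, t_wb=25.0, lg_ratio=1.2,
                     h_sat=h_sat)
print(round(KaVL, 3))
```

Comparing the required KaV/L against a fill's demonstrated characteristic is exactly the design/simulation step the rule-of-thumb method packages for cycle studies.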

  1. Optimization of enzyme-assisted extraction and characterization of polysaccharides from Hericium erinaceus.

    PubMed

    Zhu, Yang; Li, Qian; Mao, Guanghua; Zou, Ye; Feng, Weiwei; Zheng, Daheng; Wang, Wei; Zhou, Lulu; Zhang, Tianxiu; Yang, Jun; Yang, Liuqing; Wu, Xiangyang

    2014-01-30

The enzyme-assisted extraction (EAE) of polysaccharides from the fruiting bodies of Hericium erinaceus was studied. Response surface methodology and the Box-Behnken design, based on single-factor and orthogonal experiments, were applied to optimize the EAE conditions. The optimal extraction conditions were as follows: a pH of 5.71, a temperature of 52.03°C, and a time of 33.79 min. These conditions gave the highest yield of H. erinaceus polysaccharides (HEP), 13.46 ± 0.37%, an increase of 67.72% compared to hot water extraction (HWE). The polysaccharides were characterized by FT-IR, SEM, CD, AFM, and GC. The results showed that HEP was composed of mannose, glucose, xylose, and galactose in a molar ratio of 15.16:5.55:4.21:1. The functional groups of the H. erinaceus polysaccharides extracted by HWE and EAE were fundamentally identical, but apparent conformational changes were observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
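A response-surface optimum of this kind is the stationary point of a fitted second-order polynomial in coded variables. The coefficients below are entirely hypothetical (the study's actual regression is not given in this record); the sketch just shows how the predicted optimum is located:

```python
import itertools

# Hypothetical fitted second-order response surface in coded variables
# on [-1, 1]; x1 ~ pH, x2 ~ temperature, x3 ~ time. Illustrative only.
def yield_pct(x1, x2, x3):
    return (13.2 + 0.4 * x1 + 0.3 * x2 + 0.2 * x3
            - 0.9 * x1 * x1 - 0.7 * x2 * x2 - 0.5 * x3 * x3
            - 0.1 * x1 * x2)

# Grid search over the coded design space for the maximum predicted yield
grid = [i / 20.0 for i in range(-20, 21)]
best = max(itertools.product(grid, grid, grid), key=lambda p: yield_pct(*p))
print([round(v, 2) for v in best], round(yield_pct(*best), 2))
```

In practice the stationary point is read off analytically from the regression coefficients and then decoded back to natural units (pH, °C, min), as in the conditions reported above.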

  2. Chemistry challenges in lead optimization: silicon isosteres in drug discovery.

    PubMed

    Showell, Graham A; Mills, John S

    2003-06-15

    During the lead optimization phase of drug discovery projects, the factors contributing to subsequent failure might include poor portfolio decision-making and a sub-optimal intellectual property (IP) position. The pharmaceutical industry has an ongoing need for new, safe medicines with a genuine biomedical benefit, a clean IP position and commercial viability. Inherent drug-like properties and chemical tractability are also essential for the smooth development of such agents. The introduction of bioisosteres, to improve the properties of a molecule and obtain new classes of compounds without prior art in the patent literature, is a key strategy used by medicinal chemists during the lead optimization process. Sila-substitution (C/Si exchange) of existing drugs is an approach to search for new drug-like candidates that have beneficial biological properties and a clear IP position. Some of the fundamental differences between carbon and silicon can lead to marked alterations in the physicochemical and biological properties of the silicon-containing analogues and the resulting benefits can be exploited in the drug design process.

  3. The relationship between offspring size and fitness: integrating theory and empiricism.

    PubMed

    Rollinson, Njal; Hutchings, Jeffrey A

    2013-02-01

How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
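The Smith-Fretwell optimum invoked in this abstract can be located numerically once a survival curve has been fitted. A minimal sketch, assuming a hypothetical Weibull-type survival function (the parameters `lam` and `k` are illustrative choices, not estimates from the study):

```python
import numpy as np

# Illustrative Weibull-type offspring survival curve: f(s) = 1 - exp(-(s/lam)^k)
# (hypothetical parameters, chosen only to demonstrate the method)
lam, k = 2.0, 3.0

def survival(s):
    """Probability that an offspring of initial size s survives to independence."""
    return 1.0 - np.exp(-((s / lam) ** k))

# Smith-Fretwell logic: with total reproductive investment R fixed, a parent
# makes R/s offspring of size s, so parental fitness is proportional to
# survival per unit of investment, f(s)/s. The optimum maximizes this ratio.
s = np.linspace(0.01, 10.0, 100_000)
parental_fitness = survival(s) / s
s_opt = s[np.argmax(parental_fitness)]
print(f"optimal offspring size ~ {s_opt:.2f}")
```

At the grid maximum the classic marginal-value condition f'(s)·s = f(s) holds; for these illustrative parameters the optimum falls near s ≈ 2.5.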

  4. A unified RANS–LES model: Computational development, accuracy and cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopalan, Harish, E-mail: hgopalan@uwyo.edu; Heinz, Stefan, E-mail: heinz@uwyo.edu; Stöllinger, Michael K., E-mail: MStoell@uwyo.edu

    2013-09-15

Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier–Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS–LES methods are currently in use such that there is the question of which hybrid RANS–LES method represents the optimal approach. The properties of an optimal hybrid RANS–LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS–LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS–LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS–LES model and to show that this computational model, which is referred to as linear unified model (LUM), does also have all the properties of an optimal hybrid RANS–LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS–LES models, it is shown that the LUM provides significantly improved predictions.

  5. A variational approach to behavioral and neuroelectrical laws.

    PubMed

    Noventa, Stefano; Vidotto, Giulio

    2012-09-01

Variational methods play a fundamental and unifying role in several fields of physics, chemistry, engineering, economics, and biology, as they allow one to derive the behavior of a system as a consequence of an optimality principle. A possible application of these methods to a model of perception is given by considering a psychophysical law as the solution of an Euler-Lagrange equation. A general class of Lagrangians is identified by requiring the measurability of prothetic continua on interval scales. The associated Hamiltonian (the energy of the process) is tentatively connected with neurophysiological aspects. As an example of the suggested approach, a particular choice of the Lagrangian, which is a sufficient condition to obtain classical psychophysical laws while accounting for psychophysical adaptation and the stationarity of neuronal activity, is used to explore a possible relation between a behavioral law and a neuroelectrical response based on the Naka-Rushton model.
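The abstract's core idea, that a behavioral law falls out of an optimality principle, can be made concrete with a toy example. The Lagrangian below is our own illustrative choice, not the class identified in the paper:

```latex
% Extremals of the action S[\psi] = \int L(\varphi, \psi, \psi')\, d\varphi
% satisfy the Euler-Lagrange equation:
\frac{d}{d\varphi}\frac{\partial L}{\partial \psi'} - \frac{\partial L}{\partial \psi} = 0
% Toy choice (illustrative only): L = (\varphi\,\psi' - n\,\psi)^2.
% The action is minimized (to zero) exactly when \varphi\,\psi' = n\,\psi,
% whose solutions are Stevens-type power laws:
\psi(\varphi) = k\,\varphi^{\,n}
```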

  6. Meal frequency and timing in health and disease

    PubMed Central

    Mattson, Mark P.; Allison, David B.; Fontana, Luigi; Harvie, Michelle; Longo, Valter D.; Malaisse, Willy J.; Mosley, Michael; Notterpek, Lucia; Ravussin, Eric; Scheer, Frank A. J. L.; Seyfried, Thomas N.; Varady, Krista A.; Panda, Satchidananda

    2014-01-01

Although major research efforts have focused on how specific components of foodstuffs affect health, relatively little is known about a more fundamental aspect of diet: the frequency and circadian timing of meals, and the potential benefits of intermittent periods with no or very low energy intake. The most common eating pattern in modern societies, three meals plus snacks every day, is abnormal from an evolutionary perspective. Emerging findings from studies of animal models and human subjects suggest that intermittent energy restriction periods of as little as 16 h can improve health indicators and counteract disease processes. The mechanisms involve a metabolic shift to fat metabolism and ketone production, and stimulation of adaptive cellular stress responses that prevent and repair molecular damage. As data on the optimal frequency and timing of meals crystallize, it will be critical to develop strategies to incorporate those eating patterns into health care policy and practice, and into the lifestyles of the population. PMID:25404320

  7. Fabrication and Testing of Ceramic Matrix Composite Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Effinger, M. R.; Clinton, R. C., Jr.; Dennis, J.; Elam, S.; Genge, G.; Eckel, A.; Jaskowiak, M. H.; Kiser, J. D.; Lang, J.

    2001-01-01

NASA has established goals for Second and Third Generation Reusable Launch Vehicles. Emphasis has been placed on significantly improving safety and decreasing the cost of transporting payloads to orbit. Ceramic matrix composite (CMC) components are being developed by NASA to enable significant increases in safety and engine performance, while reducing costs. The development of the following CMC components is being pursued by NASA: (1) Simplex CMC Blisk; (2) Cooled CMC Nozzle Ramps; (3) Cooled CMC Thrust Chambers; and (4) CMC Gas Generator. These development efforts are application oriented, but have a strong underpinning of fundamental understanding of processing-microstructure-property relationships relative to structural analyses, nondestructive characterization, and material behavior analysis at the coupon, component, and system operation levels. As each effort matures, emphasis will be placed on optimizing and demonstrating material/component durability, ideally using a combined Building Block Approach and Build and Bust Approach.

  8. Relating the Resource Theories of Entanglement and Quantum Coherence.

    PubMed

    Chitambar, Eric; Hsieh, Min-Hsiu

    2016-07-08

    Quantum coherence and quantum entanglement represent two fundamental features of nonclassical systems that can each be characterized within an operational resource theory. In this Letter, we unify the resource theories of entanglement and coherence by studying their combined behavior in the operational setting of local incoherent operations and classical communication (LIOCC). Specifically, we analyze the coherence and entanglement trade-offs in the tasks of state formation and resource distillation. For pure states we identify the minimum coherence-entanglement resources needed to generate a given state, and we introduce a new LIOCC monotone that completely characterizes a state's optimal rate of bipartite coherence distillation. This result allows us to precisely quantify the difference in operational powers between global incoherent operations, LIOCC, and local incoherent operations without classical communication. Finally, a bipartite mixed state is shown to have distillable entanglement if and only if entanglement can be distilled by LIOCC, and we strengthen the well-known Horodecki criterion for distillability.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, Mikaël M.; Briquez, Priscilla S.; Maruyama, Kenta

Growth factors are very promising molecules for enhancing bone regeneration. However, their translation to clinical use has been seriously limited by issues related to safety and cost-effectiveness. These problems derive from the vastly supra-physiological doses of growth factor used without optimized delivery systems. These issues have therefore motivated the development of new delivery systems allowing better control of the spatio-temporal release and signaling of growth factors. Because the extracellular matrix (ECM) naturally plays a fundamental role in coordinating growth factor activity in vivo, a number of novel delivery systems have been inspired by the growth factor regulatory function of the ECM. After introducing the role of growth factors during the bone regeneration process, this review presents the different issues that growth factor-based therapies have encountered in the clinic and highlights recent delivery approaches based on the natural interaction between growth factors and the ECM.

  10. Adherence to antiretroviral therapy among children living with HIV in South India

    PubMed Central

    Mehta, K; Ekstrand, ML; Heylen, E; Sanjeeva, GN; Shet, A

    2017-01-01

    Adherence to ART, fundamental to treatment success, has been poorly studied in India. Caregivers of children attending HIV clinics in southern India were interviewed using structured questionnaires. Adherence was assessed using a visual analogue scale representing past-month adherence and treatment interruptions >48 hours during the past 3 months. Clinical features, correlates of adherence and HIV-1 viral-load were documented. Based on caregiver reports, 90.9% of the children were optimally adherent. In multivariable analysis, experiencing ART-related adverse effects was significantly associated with suboptimal adherence (p=0.01). The proportion of children who experienced virological failure was 16.5%. Virological failure was not linked to suboptimal adherence. Factors influencing virological failure included running out of medications (p=0.002) and the child refusing to take medications (p=0.01). Inclusion of drugs with better safety profiles and improved access to care could further enhance outcomes. PMID:26443264

  11. Relating the Resource Theories of Entanglement and Quantum Coherence

    NASA Astrophysics Data System (ADS)

    Chitambar, Eric; Hsieh, Min-Hsiu

    2016-07-01

    Quantum coherence and quantum entanglement represent two fundamental features of nonclassical systems that can each be characterized within an operational resource theory. In this Letter, we unify the resource theories of entanglement and coherence by studying their combined behavior in the operational setting of local incoherent operations and classical communication (LIOCC). Specifically, we analyze the coherence and entanglement trade-offs in the tasks of state formation and resource distillation. For pure states we identify the minimum coherence-entanglement resources needed to generate a given state, and we introduce a new LIOCC monotone that completely characterizes a state's optimal rate of bipartite coherence distillation. This result allows us to precisely quantify the difference in operational powers between global incoherent operations, LIOCC, and local incoherent operations without classical communication. Finally, a bipartite mixed state is shown to have distillable entanglement if and only if entanglement can be distilled by LIOCC, and we strengthen the well-known Horodecki criterion for distillability.

  12. The Biology of Aging: Citizen Scientists and Their Pets as a Bridge Between Research on Model Organisms and Human Subjects.

    PubMed

    Kaeberlein, M

    2016-03-01

    A fundamental goal of research into the basic mechanisms of aging is to develop translational strategies that improve human health by delaying the onset and progression of age-related pathology. Several interventions have been discovered that increase life span in invertebrate organisms, some of which have similar effects in mice. These include dietary restriction and inhibition of the mechanistic target of rapamycin by treatment with rapamycin. Key challenges moving forward will be to assess the extent to which these and other interventions improve healthy longevity and increase life span in mice and to develop practical strategies for extending this work to the clinic. Companion animals may provide an optimal intermediate between laboratory models and humans. By improving healthy longevity in companion animals, important insights will be gained regarding human aging while improving the quality of life for people and their pets. © The Author(s) 2015.

  13. Fourier transform infrared spectra and molecular structure of 5-methoxytryptamine, N-acetyl-5-methoxytryptamine and N-phenylsulfonamide-5-methoxytryptamine

    NASA Astrophysics Data System (ADS)

    Bayari, S.; Ide, S.

    2003-04-01

5-Methoxytryptamine (5-MT) is a potent antioxidant and has radioprotective action. N-acetyl-5-methoxytryptamine (melatonin, NA-5-MT) is a free radical scavenger and antioxidant, which protects against oxidative damage due to a variety of toxicants. The infrared spectra of 5-MT, NA-5-MT and the newly synthesized N-phenylsulfonamide-5-methoxytryptamine (PS-5-MT) were investigated in the region between 4000 and 400 cm-1. Vibrational assignments of the molecules have been made for fundamental modes on the basis of the group vibrational concept, infrared intensity, and comparison with the assignments for related molecules. X-ray powder diffraction patterns of the molecules were also recorded. In order to optimize the geometries of the molecules, molecular mechanics calculations (MM3) were performed. Conformational analysis of 5-MT, NA-5-MT and PS-5-MT was also carried out by using the PM3 method.

  14. Zero- to low-field MRI with averaging of concomitant gradient fields.

    PubMed

    Meriles, Carlos A; Sakellariou, Dimitris; Trabesinger, Andreas H; Demas, Vasiliki; Pines, Alexander

    2005-02-08

    Magnetic resonance imaging (MRI) encounters fundamental limits in circumstances in which the static magnetic field is not sufficiently strong to truncate unwanted, so-called concomitant components of the gradient field. This limitation affects the attainable optimal image fidelity and resolution most prominently in low-field imaging. In this article, we introduce the use of pulsed magnetic-field averaging toward relaxing these constraints. It is found that the image of an object can be retrieved by pulsed low fields in the presence of the full spatial variation of the imaging encoding gradient field even in the absence of the typical uniform high-field time-independent contribution. In addition, error-compensation schemes can be introduced through the application of symmetrized pulse sequences. Such schemes substantially mitigate artifacts related to evolution in strong magnetic-field gradients, magnetic fields that vary in direction and orientation, and imperfections of the applied field pulses.

  15. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure thus leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.
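A hedged sketch of the kind of cost functional described above, using an exponential margin loss and a graph-Laplacian smoothness penalty as illustrative stand-ins for the paper's exact terms (all variable names, weights, and the penalty form are our own assumptions):

```python
import numpy as np

# Toy semi-supervised objective: exponential margin loss on labeled points
# plus a graph-Laplacian smoothness penalty over all points. Illustrative
# stand-in only, not the paper's exact functional.
rng = np.random.default_rng(0)

X = rng.normal(size=(30, 2))   # 30 points, 2 features
y = np.sign(X[:10, 0])         # labels known for the first 10 points only

# Similarity graph (RBF weights) and its graph Laplacian
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2)
L = np.diag(W.sum(axis=1)) - W

def cost(F, gamma=0.1):
    margin = np.exp(-y * F[:10]).sum()   # margin cost on labeled data
    smooth = F @ L @ F                   # smoothness penalty on all data
    return margin + gamma * smooth

# One crude gradient step on the classifier scores F should lower the cost
F = np.zeros(30)
grad = np.zeros(30)
grad[:10] = -y * np.exp(-y * F[:10])     # gradient of the margin term
grad += 0.1 * (L + L.T) @ F              # gradient of the smoothness term
F_new = F - 0.05 * grad
assert cost(F_new) < cost(F)
```

The boosting framework in the paper minimizes such a functional stagewise over weak learners rather than by direct gradient descent on the scores; the sketch only shows that the combined objective is a single quantity that descent can reduce.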

  16. The nature of the MDI/wood bond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcinko, J.J.; Phanopoulos, C.; Newman, W.H.

    1995-12-01

Polymeric diphenylmethane diisocyanate (pMDI) binders have been used in the wood composite industry for 20 years. Almost one half of the oriented strand board (OSB) manufacturers in North America are taking advantage of its processing speed and superior board performance. MDI's current use in strandboard, MDF (medium density fiberboard), LVL (laminated veneer lumber), plywood, and particleboard is widespread. A fundamental understanding of the role of MDI as a binder in these complex composites is essential for further processing optimization. Experimental data are presented that investigate the nature of the chemical bonding in wood composites. Solid-state nuclear magnetic resonance (NMR) data are combined with data from thermal analysis and fluorescence microscopy to investigate the chemistry, penetration, and morphology of the isocyanate/wood interphase. Structure-property relationships are developed and related to composite performance. The study contrasts isocyanate and phenol formaldehyde binder systems.

  17. Financial dimensions of veterinary medical education: an economist's perspective.

    PubMed

    Lloyd, James W

    2013-01-01

    Much discussion has transpired in recent years related to the rising cost of veterinary medical education and the increasing debt loads of graduating veterinarians. Underlying these trends are fundamental changes in the funding structure of higher education in general and of academic veterinary medicine specifically. As a result of the ongoing disinvestment by state governments in higher education, both tuition rates and academic programs have experienced a substantial impact across US colleges and schools of veterinary medicine. Programmatically, the effects have spanned the entire range of teaching, research, and service activities. For graduates, both across higher education and in veterinary medicine specifically, the impact has been steadily increasing levels of student debt. Although the situation is clearly worrisome, viable repayment options exist for these escalating debt loads. In combination with recent income and employment trends for veterinarians, these options provide a basis for cautious optimism for the future.

  18. Evolving serodiagnostics by rationally designed peptide arrays: the Burkholderia paradigm in Cystic Fibrosis

    NASA Astrophysics Data System (ADS)

    Peri, Claudio; Gori, Alessandro; Gagni, Paola; Sola, Laura; Girelli, Daniela; Sottotetti, Samantha; Cariani, Lisa; Chiari, Marcella; Cretich, Marina; Colombo, Giorgio

    2016-09-01

Efficient diagnosis of emerging and novel bacterial infections is fundamental to guide decisions on therapeutic treatments. Here, we engineered a novel rational strategy to design peptide microarray platforms, which combines structural and genomic analyses to predict the binding interfaces between diverse protein antigens and antibodies against Burkholderia cepacia complex infections present in the sera of Cystic Fibrosis (CF) patients. The predicted binding interfaces on the antigens are synthesized in the form of isolated peptides and chemically optimized for controlled orientation on the surface. Our platform displays multiple Burkholderia-related epitopes and is shown to diagnose infected individuals even in the presence of superinfections caused by other prevalent CF pathogens, with limited cost and time requirements. Moreover, our data point out that the specific patterns determined by combined probe responses might provide a characterization of Burkholderia infections even at the subtype level (genomovars). The method is general and immediately applicable to other bacteria.

  19. Resilience, Integrity and Ecosystem Dynamics: Bridging Ecosystem Theory and Management

    NASA Astrophysics Data System (ADS)

    Müller, Felix; Burkhard, Benjamin; Kroll, Franziska

In this paper different approaches to elucidate ecosystem dynamics are described, illustrated and interrelated. Ecosystem development is distinguished into two separate sequences: a complexifying phase, which is characterized by orientor optimization, and a destruction-based phase, which follows disturbances. The two developmental pathways are integrated in a modified illustration of the "adaptive cycle". Based on these fundamentals, the recent definitions of resilience, adaptability and vulnerability are discussed and a modified comprehension is proposed. Thereafter, two case studies about wetland dynamics are presented to demonstrate both the consequences of disturbance and the potential of ecosystem recovery. In both examples ecosystem integrity is used as a key indicator variable. Based on the presented results, the relativity and the normative loading of resilience quantification are worked out. The paper ends with the suggestion that the features of adaptability could be used as an integrative guideline for the analysis of ecosystem dynamics and as a well-suited concept for ecosystem management.

  20. Numerical investigation of design and operational parameters on CHI spheromak performance

    NASA Astrophysics Data System (ADS)

    O'Bryan, J. B.; Romero-Talamas, C. A.; Woodruff, S.

    2016-10-01

Nonlinear, extended-MHD computation with the NIMROD code is used to explore magnetic self-organization and performance with respect to externally controllable parameters in spheromaks formed with coaxial helicity injection. The goal of this study is to inform the design and operational parameters of a proposed proof-of-principle spheromak experiment. The calculations explore multiple distinct phases of evolution (including adiabatic magnetic compression), which must be explored and optimized separately. Results indicate that modest changes to the design and operation of past experiments, e.g. SSPX [E.B. Hooper et al. PPCF 2012], could have significantly improved the plasma-current injector coupling efficiency and performance, particularly with respect to peak temperature and lifetime. Though we frequently characterize performance relative to SSPX, we are also exploring fundamentally different designs and modes of operation, e.g. flux compression. This work is supported by DARPA under Grant No. N66001-14-1-4044.

  1. Cold-mode Accretion: Driving the Fundamental Mass-Metallicity Relation at z ~ 2

    NASA Astrophysics Data System (ADS)

    Kacprzak, Glenn G.; van de Voort, Freeke; Glazebrook, Karl; Tran, Kim-Vy H.; Yuan, Tiantian; Nanayakkara, Themiya; Allen, Rebecca J.; Alcorn, Leo; Cowley, Michael; Labbé, Ivo; Spitler, Lee; Straatman, Caroline; Tomczak, Adam

    2016-07-01

We investigate the star formation rate (SFR) dependence of the stellar mass and gas-phase metallicity relation at z = 2 with MOSFIRE/Keck as part of the ZFIRE survey. We have identified 117 galaxies (1.98 ≤ z ≤ 2.56), with 8.9 ≤ log(M/M⊙) ≤ 11.0, for which we can measure gas-phase metallicities. For the first time, we show a discernible difference in the mass-metallicity relation, using individual galaxies, when dividing the sample by low (<10 M⊙ yr^-1) and high (>10 M⊙ yr^-1) SFRs. At fixed mass, galaxies with low star formation rates tend to have higher metallicity than galaxies with high star formation rates. Using a few basic assumptions, we further show that the gas masses and metallicities required to produce the fundamental mass-metallicity relation and its intrinsic scatter are consistent with cold-mode accretion predictions obtained from the OWLS hydrodynamical simulations. Our results from both simulations and observations suggest that cold-mode accretion is responsible for the fundamental mass-metallicity relation at z = 2 and demonstrate the direct relationship between cosmological accretion and the fundamental properties of galaxies.

  2. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure.

    PubMed

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant of uncertainty (i.e., those high in the need for closure) in comparison to those who have a high degree of tolerance for uncertainty (i.e., those low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants performed an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink in which each word was written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  3. Voice Relative Fundamental Frequency via Neck-Skin Acceleration in Individuals with Voice Disorders

    ERIC Educational Resources Information Center

    Lien, Yu-An S.; Calabrese, Carolyn R.; Michener, Carolyn M.; Murray, Elizabeth Heller; Van Stan, Jarrad H.; Mehta, Daryush D.; Hillman, Robert E.; Noordzij, J. Pieter; Stepp, Cara E.

    2015-01-01

    Purpose: This study investigated the use of neck-skin acceleration for relative fundamental frequency (RFF) analysis. Method: Forty individuals with voice disorders associated with vocal hyperfunction and 20 age- and sex-matched control participants were recorded with a subglottal neck-surface accelerometer and a microphone while producing speech…

  4. Representations of Fundamental Chemistry Concepts in Relation to the Particulate Nature of Matter

    ERIC Educational Resources Information Center

    Kirbulut, Zubeyde Demet; Beeth, Michael Edward

    2013-01-01

    This study investigated high school students' understanding of fundamental chemistry concepts - states of matter, melting, evaporation, condensation, boiling, and vapor pressure, in relation to their understanding of the particulate nature of matter. A sample of six students (four females and two males) enrolled in a second year chemistry course…

  5. Representations of Fundamental Chemistry Concepts in Relation to the Particulate Nature of Matter

    ERIC Educational Resources Information Center

    Kirbulut, Zübeyde Demet; Beeth, Michael Edward

    2013-01-01

    This study investigated high school students' understanding of fundamental chemistry concepts--states of matter, melting, evaporation, condensation, boiling, and vapor pressure, in relation to their understanding of the particulate nature of matter. A sample of six students (four females and two males) enrolled in a second year chemistry course at…

  6. Relative Fundamental Frequency Distinguishes between Phonotraumatic and Non-Phonotraumatic Vocal Hyperfunction

    ERIC Educational Resources Information Center

    Murray, Elizabeth S. Heller; Lien, Yu-An S.; Van Stan, Jarrad H.; Mehta, Daryush D.; Hillman, Robert E.; Noordzij, J. Pieter; Stepp, Cara E.

    2017-01-01

    Purpose: The purpose of this article is to examine the ability of an acoustic measure, relative fundamental frequency (RFF), to distinguish between two subtypes of vocal hyperfunction (VH): phonotraumatic (PVH) and non-phonotraumatic (NPVH). Method: RFF values were compared among control individuals with typical voices (N = 49), individuals with…

  7. Focusing light through random photonic layers by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-02-01

The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase-optimization approaches for focusing light through turbid media have been widely studied in recent years, driven by the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission-matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially when the number of elements is large, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, and the optimization can become trapped in local maxima. Whole-element optimization approaches employ all elements simultaneously, which improves the signal-to-noise ratio during optimization; however, because more randomness is introduced into the process, these optimizations take longer to converge than single-element-based approaches. Building on the advantages of both families of approaches, we propose the four-element division algorithm (FEDA). Comparisons with existing approaches show that FEDA reaches the optimum in one third of the measurement time, which makes it promising for practical applications such as deep-tissue imaging.
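For context, the element-based family the abstract contrasts with FEDA can be sketched numerically. This is a minimal simulation of the stepwise sequential algorithm, not the authors' FEDA; the complex transmission vector `t` is a random stand-in for a real scattering medium, where only the focal intensity would be measurable:

```python
import numpy as np

# Simulated scattering medium: each SLM element m contributes t[m]*exp(i*phi[m])
# to the field at the target focus.
rng = np.random.default_rng(1)
M = 32                                       # number of SLM elements
t = rng.normal(size=M) + 1j * rng.normal(size=M)

def focal_intensity(phases):
    """Intensity at the target focus for a given SLM phase pattern."""
    return abs(np.sum(t * np.exp(1j * phases))) ** 2

phases = np.zeros(M)
I0 = focal_intensity(phases)

# Stepwise sequential algorithm: optimize one element at a time,
# keeping all other elements fixed at their current phases.
test_phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)
for m in range(M):
    trials = [focal_intensity(np.where(np.arange(M) == m, p, phases))
              for p in test_phases]
    phases[m] = test_phases[int(np.argmax(trials))]

I1 = focal_intensity(phases)
print(f"enhancement: {I1 / I0:.1f}x")
```

Because each element's contribution to the focus is a small perturbation of the total field, the per-element intensity changes probed here are weak, which is exactly the signal-to-noise limitation the abstract describes.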

  8. Characterization of direct-current atmospheric-pressure discharges useful for ambient desorption/ionization mass spectrometry.

    PubMed

    Shelley, Jacob T; Wiley, Joshua S; Chan, George C Y; Schilling, Gregory D; Ray, Steven J; Hieftje, Gary M

    2009-05-01

Two relatively new ambient ionization sources, direct analysis in real time (DART) and the flowing atmospheric-pressure afterglow (FAPA), use direct-current, atmospheric-pressure discharges to produce reagent ions for the direct ionization of a sample. Although at first glance these two sources appear similar, a fundamental study reveals otherwise. Specifically, DART was found to operate with a corona-to-glow transition (C-G) discharge, whereas the FAPA was found to operate with a glow-to-arc transition (G-A) discharge. The characteristics of both discharges were evaluated on the basis of four factors: reagent-ion production, response to a model analyte (ferrocene), infrared (IR) thermography of the gas used for desorption and ionization, and spatial emission characteristics. The G-A discharge produced a greater abundance and a wider variety of reagent ions than the C-G discharge. In addition, the discharges yielded different adducts and signal strengths for ferrocene. It was also found that the gas exiting the discharge chamber reached a maximum of 235 degrees C and 55 degrees C for the G-A and C-G discharges, respectively. Finally, spatially resolved emission maps of both discharges showed clear differences for N(2)(+) and O(I). These findings demonstrate that the discharges used by FAPA and DART are fundamentally different and should have different optimal applications for ambient desorption/ionization mass spectrometry (ADI-MS).

  9. The development of a core syllabus for the teaching of head and neck anatomy to medical students.

    PubMed

    Tubbs, R Shane; Sorenson, Edward P; Sharma, Amit; Benninger, Brion; Norton, Neil; Loukas, Marios; Moxham, Bernard J

    2014-04-01

    The study of human anatomy has traditionally served as a fundamental component in the basic science education of medical students, yet there exists a remarkable lack of firm guidance on essential features that must be included in a gross anatomy course, which would constitute a "Core Syllabus" of absolutely mandatory structures and related clinical pathologies. While universal agreement on the details of a core syllabus is elusive, there is a general consensus that a core syllabus aims to identify the minimum level of knowledge expected of recently qualified medical graduates in order to carry out clinical procedures safely and effectively, while avoiding overloading students with unnecessary facts that have less immediate application to their future careers as clinicians. This paper aims to identify consensus standards of essential features of Head and Neck anatomy via a Delphi Panel consisting of anatomists and clinicians who evaluated syllabus content structures (greater than 1,000) as "essential", "important", "acceptable", or "not required." The goal is to provide guidance for program/course directors who intend to provide the optimal balance between establishing a comprehensive list of clinically relevant essential structures and an overwhelming litany, which would otherwise overburden trainees in their initial years of medical school with superficial rote learning, which potentially dilutes the key and enduring fundamental lessons that prepare students for training in any medical field. Copyright © 2014 Wiley Periodicals, Inc.

  10. A facility for gas- and condensed-phase measurements behind shock waves

    NASA Astrophysics Data System (ADS)

    Petersen, Eric L.; Rickard, Matthew J. A.; Crofton, Mark W.; Abbey, Erin D.; Traum, Matthew J.; Kalitan, Danielle M.

    2005-09-01

    A shock-tube facility consisting of two, single-pulse shock tubes for the study of fundamental processes related to gas-phase chemical kinetics and the formation and reaction of solid and liquid aerosols at elevated temperatures is described. Recent upgrades and additions include a new high-vacuum system, a new gas-handling system, a new control system and electronics, an optimized velocity-detection scheme, a computer-based data acquisition system, several optical diagnostics, and new techniques and procedures for handling experiments involving gas/powder mixtures. Test times on the order of 3 ms are possible with reflected-shock pressures up to 100 atm and temperatures greater than 4000 K. Applications for the shock-tube facility include the study of ignition delay times of fuel/oxidizer mixtures, the measurement of chemical kinetic reaction rates, the study of fundamental particle formation from the gas phase, and solid-particle vaporization, among others. The diagnostic techniques include standard differential laser absorption, FM laser absorption spectroscopy, laser extinction for particle volume fraction and size, temporally and spectrally resolved emission from gas-phase species, and a scanning mobility particle sizer for particle size distributions. Details on the set-up and operation of the shock tube and diagnostics are given, the results of a detailed uncertainty analysis on the accuracy of the test temperature inferred from the incident-shock velocity are provided, and some recent results are presented.

  11. Effect of Co-Production of Renewable Biomaterials on the Performance of Asphalt Binder in Macro and Micro Perspectives

    PubMed Central

    Qu, Xin; Liu, Quan; Wang, Chao; Oeser, Markus

    2018-01-01

    Conventional asphalt binder derived from the petroleum refining process is widely used in pavement engineering. However, asphalt binder is a non-renewable material. Therefore, the use of co-produced renewable bio-oil as a modifier for petroleum asphalt has recently been receiving more attention in the pavement field, owing to its renewability and its potential to improve conventional petroleum-based asphalt binder. Significant research efforts have been made that mainly focus on the mechanical properties of bio-asphalt binder. However, there is still a lack of studies describing the effects of the co-production on the performance of asphalt binders from a micro-scale perspective, which would give a better understanding of the fundamental modification mechanism. In this study, a reasonable molecular structure for the co-produced renewable bio-oils is created based on previous research findings and the functional groups observed in Fourier-transform infrared spectroscopy tests; this structure is fundamental and critical for establishing the molecular model of bio-asphalt binder with various biomaterial contents. Molecular simulation shows that increasing the biomaterial content decreases the cohesive energy density, which can be related to the observed decrease in dynamic modulus. Additionally, a Flexibility Index parameter is employed to accurately characterize the ability of asphalt binder to resist deformation under oscillatory loading. PMID:29415421

  12. Analysis of Maritime Support Vessels and Acquisition Methods Utilized to Support Maritime Irregular Warfare

    DTIC Science & Technology

    2010-06-01

    Figure 2 identifies five fundamental IW operations as they relate to the maritime environment and domain. Maritime Irregular Warfare activities...they relate to MIW. The ...meter RHIB is designed for the insertion and extraction of SEAL Team personnel. It is a twin-turbocharged, diesel-engine, waterjet-propelled personnel...

  13. Computational models of the Posner simple and choice reaction time tasks

    PubMed Central

    Feher da Silva, Carolina; Baldo, Marcus V. C.

    2015-01-01

    The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, inputs from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997

  14. Evolving therapies for the management of chronic and acute decompensated heart failure.

    PubMed

    Cook, Jennifer C; Tran, Richard H; Patterson, J Herbert; Rodgers, Jo E

    2016-11-01

    The pharmacology, clinical efficacy, and safety profiles of evolving therapies for the management of chronic heart failure (HF) and acute decompensated heart failure (ADHF) are described. HF confers a significant financial burden despite the widespread use of traditional guideline-directed medical therapies such as angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, β-blockers, and aldosterone receptor antagonists, and the rates of HF-related mortality and hospitalization have remained unacceptably high. In response to a demand for novel pharmacologic agents, several therapeutic compounds have recently gained approval or are currently under review by the Food and Drug Administration. Sacubitril-valsartan has demonstrated benefit in reducing cardiovascular mortality and HF-related hospitalizations in clinical trials, while ivabradine and ferric carboxymaltose have proven efficacious in reducing HF-related hospitalizations. Lastly, the role of serelaxin in ADHF is currently under investigation in an ongoing Phase III study. While large, outcome-driven clinical trials are fundamental in informing the clinical application of these therapeutic agents, careful patient selection is imperative to ensuring similar outcomes postmarketing. In addition, optimization of current guideline-directed medical therapy remains essential as new therapies emerge and are incorporated into guideline recommendations. Additional therapeutic agents currently undergoing investigation include bucindolol hydrochloride, cimaglermin alfa, nitroxyl, omecamtiv mecarbil, TRV027, and ularitide. Clinical practitioners should remain abreast of emerging literature so that new therapeutic entities are optimally applied and positive patient outcomes are achieved. Recently introduced agents for the treatment of patients with HF include sacubitril-valsartan, ivabradine, and ferric carboxymaltose. 
Additional agents worthy of attention include serelaxin and other therapies currently under investigation. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  15. A Key Major Guideline for Engineering Bioactive Multicomponent Nanofunctionalization for Biomedicine and Other Applications: Fundamental Models Confirmed by Both Direct and Indirect Evidence

    PubMed Central

    Scherrieble, Andreas; Bahrizadeh, Shiva; Avareh Sadrabadi, Fatemeh; Hedayat, Laleh

    2017-01-01

    This paper deals with engineering the multicomponent nanofunctionalization process in light of fundamental physicochemical features of nanostructures such as surface energy, chemical bonds, and electrostatic interactions. It does so by modeling the surface nanopatterning and evaluating the proposed technique and the models. To this end, the effects of surface modifications of nanoclay on the surface interactions, orientations, and final features of TiO2/Mt nanocolloidal textile functionalization have been investigated. Various properties of samples treated with cross-linkable polysiloxanes (XPs) and of untreated samples have been compared with one another. The complete series of samples has been examined in terms of bioactivity and some physical properties, to provide indirect evidence on the surface nanopatterning. The results disclosed a key role of the selected factors in the final features of the treated surfaces. The effects have been thoroughly explained and modeled according to the fundamental physicochemical features. The developed models and associated hypotheses demonstrated full agreement with all measured properties and were confirmed by FESEM evidence (direct evidence). Accordingly, a guideline has been developed to facilitate engineering and optimizing the pre-, main, and post-multicomponent nanofunctionalization procedures in terms of the fundamental features of nanostructures and substrates for biomedical and other applications. PMID:29333437

  16. All-optical and broadband microwave fundamental/sub-harmonic I/Q down-converters.

    PubMed

    Gao, Yongsheng; Wen, Aijun; Jiang, Wei; Fan, Yangyu; He, You

    2018-03-19

    Microwave I/Q down-converters are frequently used in image-reject superheterodyne receivers, zero intermediate frequency (zero-IF) receivers, and phase/frequency discriminators. However, due to the electronic bottleneck, conventional microwave I/Q mixers face serious bandwidth limitations, I/Q imbalance, and even-order distortion. In this paper, photonic microwave fundamental and sub-harmonic I/Q down-converters are presented using a polarization division multiplexing dual-parallel Mach-Zehnder modulator (PDM-DPMZM). Thanks to all-optical manipulation, the proposed system features an ultra-wide operating band (7-40 GHz in the fundamental I/Q down-converter, and 10-40 GHz in the sub-harmonic I/Q down-converter) and an excellent I/Q balance (maximum 0.7 dB power imbalance and 1° phase imbalance). The conversion gain, noise figure (NF), even-order distortion, and spurious-free dynamic range (SFDR) are also improved by LO power optimization and balanced detection. Using the proposed system, a high image rejection ratio is demonstrated for a superheterodyne receiver, and good error vector magnitudes (EVMs) over a wide RF power range are demonstrated for a zero-IF receiver. The proposed broadband photonic microwave fundamental and sub-harmonic I/Q down-converters may find potential applications in multi-band satellite, ultra-wideband radar, and frequency-agile electronic warfare systems.

  17. Student Collaboration in a Series of Integrated Experiments to Study Enzyme Reactor Modeling with Immobilized Cell-Based Invertase

    ERIC Educational Resources Information Center

    Taipa, M. Ângela; Azevedo, Ana M.; Grilo, António L.; Couto, Pedro T.; Ferreira, Filipe A. G.; Fortuna, Ana R. M.; Pinto, Inês F.; Santos, Rafael M.; Santos, Susana B.

    2015-01-01

    An integrative laboratory study addressing fundamentals of enzyme catalysis and their application to reactor operation and modeling is presented. Invertase, a β-fructofuranosidase that catalyses the hydrolysis of sucrose, is used as the model enzyme at optimal conditions (pH 4.5 and 45 °C). The experimental work involves 3 h of laboratory time…

  18. New secondary batteries utilizing electronically conductive polymer cathodes

    NASA Technical Reports Server (NTRS)

    Martin, Charles R.; White, Ralph E.

    1987-01-01

    The objectives are to optimize the transport rates in electronically conductive polypyrrole films by controlling the morphology of the film and to assess the utility of these films as cathodes in a lithium/polypyrrole secondary battery. During this research period, a better understanding was gained of the fundamental electrochemical switching processes within the polypyrrole film. Three publications were submitted based on the work completed.

  19. Some Approaches Towards Constructing Optimally Efficient Multigrid Solvers for the Inviscid Flow Equations

    NASA Technical Reports Server (NTRS)

    Sidilkover, David

    1997-01-01

    Some important advances took place during the last several years in the development of genuinely multidimensional upwind schemes for the compressible Euler equations. In particular, a robust, high-resolution genuinely multidimensional scheme which can be used for any of the flow regimes computations was constructed. This paper summarizes briefly these developments and outlines the fundamental advantages of this approach.

  20. Introduction to Adjoint Models

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.

  1. Towards a theory of automated elliptic mesh generation

    NASA Technical Reports Server (NTRS)

    Cordova, J. Q.

    1992-01-01

    The theory of elliptic mesh generation is reviewed and the fundamental problem of constructing computational space is discussed. It is argued that the construction of computational space is an NP-Complete problem and therefore requires a nonstandard approach for its solution. This leads to the development of graph-theoretic, combinatorial optimization and integer programming algorithms. Methods for the construction of two dimensional computational space are presented.

  2. A temporal and spatial analysis of anthropogenic noise sources affecting SNMR

    NASA Astrophysics Data System (ADS)

    Dalgaard, E.; Christiansen, P.; Larsen, J. J.; Auken, E.

    2014-11-01

    One of the biggest challenges when using the surface nuclear magnetic resonance (SNMR) method in urban areas is a relatively low signal level compared to a high level of background noise. To understand the temporal and spatial behavior of anthropogenic noise sources like powerlines and electric fences, we have developed a multichannel instrument, noiseCollector (nC), which measures the full noise spectrum up to 10 kHz. Combined with advanced signal processing, we can interpret the noise as seen by an SNMR instrument and also obtain insight into the more fundamental behavior of the noise. By quantifying the different noise sources, the stack size required to reach a specified acceptable noise level for an SNMR sounding can be determined. Two common noise sources, electromagnetic fields stemming from powerlines and fences, are analyzed and show a 1/r² dependency, in agreement with theoretical relations. A typical noise map, obtained with the nC instrument prior to an SNMR field campaign, clearly shows the location of noise sources, and thus we can efficiently determine the optimal location for the SNMR sounding from a noise perspective.
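
    The reported 1/r² distance dependency can be checked with a simple log-log regression. The sketch below uses synthetic amplitudes from an ideal inverse-square source; the numbers are illustrative, not field data:

```python
import numpy as np

def fit_power_law(r, amplitude):
    """Estimate p and A in amplitude = A * r**(-p) via log-log regression."""
    slope, intercept = np.polyfit(np.log(r), np.log(amplitude), 1)
    return -slope, np.exp(intercept)

# Synthetic noise amplitudes at distances r (m) from an ideal 1/r^2 source.
r = np.array([50.0, 100.0, 200.0, 400.0])
amp = 1.0e6 / r**2

p, A = fit_power_law(r, amp)
print(f"estimated exponent: {p:.2f}")  # close to 2 for an inverse-square source
```

    Field measurements would scatter around the fitted line; an estimated exponent near 2 is what confirms agreement with the theoretical relation.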

  3. Clinical and financial impact of pharmacy services in the intensive care unit: pharmacist and prescriber perceptions.

    PubMed

    MacLaren, Robert; Brett McQueen, R; Campbell, Jon

    2013-04-01

    To compare pharmacist and prescriber perceptions of the clinical and financial outcomes of pharmacy services in the intensive care unit (ICU). ICU pharmacists were invited to participate in the survey and were asked to invite two ICU prescriber colleagues to complete questionnaires. ICUs with clinical pharmacy services. The questionnaires were designed to solicit frequency, efficiency, and perceptions about the clinical and financial impact (on a 10-point scale) of pharmacy services including patient care (eight functions), education (three functions), administration (three functions), and scholarship (four functions). Basic services were defined as fundamental, and higher-level services were categorized as desirable or optimal. Respondents were asked to suggest possible sources of funding and reimbursement for ICU pharmacy services. Eighty packets containing one 26-item pharmacy questionnaire and two 16-item prescriber questionnaires were distributed to ICU pharmacists. Forty-one pharmacists (51%) and 46 prescribers (29%) returned questionnaires. Pharmacists had worked in the ICU for 8.3 ± 6.4 years and devoted 50.3 ± 18.7% of their efforts to clinical practice. Prescribers generally rated the impact of pharmacy services more favorably than pharmacists. Fundamental services were provided more frequently and were rated more positively than desirable or optimal services across both groups. The percent efficiencies of providing services without the pharmacist ranged between 40% and 65%. Both groups indicated that salary support for the pharmacist should come from hospital departments of pharmacy or critical care or colleges of pharmacy. Prescribers were more likely to consider other sources of funding for pharmacist salaries. Both groups supported reimbursement of clinical pharmacy services. Critical care pharmacy activities were associated with perceptions of beneficial clinical and financial outcomes. Prescribers valued most services more than pharmacists. 
Fundamental services were viewed more favorably than desirable or optimal services, possibly because they occurred more frequently or were required for safe patient care. Substantial inefficiencies may occur if pharmacy services disappeared. Considerable support existed for funding and reimbursement of critical care pharmacy services. © 2013 Pharmacotherapy Publications, Inc.

  4. Considerations of net present value in policy making regarding diagnostic and therapeutic technologies.

    PubMed

    Califf, Robert M; Rasiel, Emma B; Schulman, Kevin A

    2008-11-01

    The pharmaceutical and medical device industries function in a business environment in which shareholders expect companies to optimize profit within legal and ethical standards. A fundamental tool used to optimize decision making is the net present value calculation, which estimates the current value of the cash flows relating to an investment. We examined 3 prototypical research investment decisions that have been the source of public scrutiny to illustrate how policy decisions can be better understood when their effect on societally desirable investments by industry is viewed from the standpoint of their impact on net present value. In the case of direct, comparative clinical trials, a simple net present value calculation provides insight into why companies eschew such investments. In the case of pediatric clinical trials, the Pediatric Extension Rule changed the net present value calculation from unattractive to potentially very attractive by allowing patent extensions; thus, the dramatic increase in pediatric clinical trials can be explained by the financial return on investment. In the case of products for small markets, the fixed costs of development make this option financially unattractive. Policy decisions can be better understood when their effect on societally desirable investments by the pharmaceutical and medical device industries is viewed from the standpoint of their impact on net present value.
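
    The net present value calculation referred to above discounts projected cash flows back to the present. A minimal sketch, with purely hypothetical figures:

```python
def net_present_value(cash_flows, discount_rate):
    """Sum cash flows discounted to the present.

    cash_flows[0] is the immediate (year-0) flow, typically a negative
    upfront investment; later entries are projected annual returns.
    """
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical: a $100M trial today, then five years of $30M returns,
# discounted at 10% per year. A positive NPV favors the investment.
flows = [-100.0] + [30.0] * 5
npv = net_present_value(flows, 0.10)
print(round(npv, 2))  # about 13.72
```

    Under this framing, a policy that shortens the payoff horizon or extends patent protection raises the NPV and so changes the investment decision, which is the mechanism the abstract describes for pediatric trials.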

  5. The Role of Physical Activity in Preconception, Pregnancy and Postpartum Health.

    PubMed

    Harrison, Cheryce L; Brown, Wendy J; Hayman, Melanie; Moran, Lisa J; Redman, Leanne M

    2016-03-01

    The rise in obesity and associated morbidity is currently one of our greatest public health challenges. Women represent a high risk group for weight gain with associated metabolic, cardiovascular, reproductive and psychological health impacts. Regular physical activity is fundamental for health and well-being with protective benefits across the spectrum of women's health. Preconception, pregnancy and the early postpartum period represent opportune windows to engage women in regular physical activity to optimize health and prevent weight gain with added potential to transfer behavior change more broadly to children and families. This review summarizes the current evidence for the role of physical activity for women in relation to preconception (infertility, assisted reproductive therapy, polycystic ovary syndrome, weight gain prevention and psychological well-being) pregnancy (prevention of excess gestational weight gain, gestational diabetes and preeclampsia as well as labor and neonatal outcomes) and postpartum (lactation and breastfeeding, postpartum weight retention and depression) health. Beneficial outcomes validate the importance of regular physical activity, yet key methodological gaps highlight the need for large, high-quality studies to clarify the optimal type, frequency, duration and intensity of physical activity required for beneficial health outcomes during preconception, pregnancy and postpartum.

  6. Balance Maintenance in High-Speed Motion of Humanoid Robot Arm-Based on the 6D Constraints of Momentum Change Rate

    PubMed Central

    Zhang, Da-song; Chu, Jian

    2014-01-01

    Based on the 6D constraints of momentum change rate (CMCR), this paper puts forward a real-time full-balance maintenance method for a humanoid robot during high-speed movement of its 7-DOF arm. First, the total momentum formula for the robot's two arms is given, and the momentum change rate is defined as the time derivative of the total momentum. The authors also illustrate the idea of full balance maintenance and analyze the physical meaning of the 6D CMCR and its fundamental relation to full balance maintenance. Moreover, a discretized optimization solution of the CMCR is provided, subject to the motion constraints of the auxiliary arm's joints, and the solving algorithm is optimized. The simulation results show the validity and generality of the proposed method for full balance maintenance in the 6 DOFs of the robot body under 6D CMCR. The method ensures 6D dynamic balance performance and provides an ample ZMP stability margin. The resulting motion of the auxiliary arm has a large margin in joint space, and the angular velocities and angular accelerations of these joints lie within the predefined limits. The proposed algorithm also has good real-time performance. PMID:24883404
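
    The quantities described above can be stated schematically. This is a sketch with symbols as commonly defined for a multi-body arm model, not necessarily the paper's exact notation:

```latex
P \;=\; \sum_i m_i \dot{r}_i, \qquad
L \;=\; \sum_i r_i \times m_i \dot{r}_i, \qquad
\dot{h} \;=\; \frac{d}{dt}\begin{pmatrix} P \\ L \end{pmatrix} \in \mathbb{R}^{6},
```

    where $m_i$, $r_i$ are the mass and position of body $i$, and the 6D CMCR imposes bounds on each component of the combined linear and angular momentum rate $\dot{h}$.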

  7. Biologically Relevant Heterogeneity: Metrics and Practical Insights

    PubMed Central

    Gough, A; Stern, AM; Maier, J; Lezon, T; Shun, T-Y; Chennubhotla, C; Schurdak, ME; Haney, SA; Taylor, DL

    2017-01-01

    Heterogeneity is a fundamental property of biological systems at all scales that must be addressed in a wide range of biomedical applications including basic biomedical research, drug discovery, diagnostics and the implementation of precision medicine. There are a number of published approaches to characterizing heterogeneity in cells in vitro and in tissue sections. However, there are no generally accepted approaches for the detection and quantitation of heterogeneity that can be applied in a relatively high throughput workflow. This review and perspective emphasizes the experimental methods that capture multiplexed cell level data, as well as the need for standard metrics of the spatial, temporal and population components of heterogeneity. A recommendation is made for the adoption of a set of three heterogeneity indices that can be implemented in any high throughput workflow to optimize the decision-making process. In addition, a pairwise mutual information method is suggested as an approach to characterizing the spatial features of heterogeneity, especially in tissue-based imaging. Furthermore, metrics for temporal heterogeneity are in the early stages of development. Example studies indicate that the analysis of functional phenotypic heterogeneity can be exploited to guide decisions in the interpretation of biomedical experiments, drug discovery, diagnostics and the design of optimal therapeutic strategies for individual patients. PMID:28231035
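
    A pairwise mutual information measure of the kind suggested can be sketched with a simple joint-histogram estimator; the data below are synthetic, and the binning choice is an assumption for illustration:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Mutual information (bits) between two samples via a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=5000)
b = rng.normal(size=5000)            # independent of a
mi_same = mutual_information(a, a)   # high: identical signals
mi_indep = mutual_information(a, b)  # near zero: no shared information
print(mi_same > mi_indep)
```

    Applied to marker intensities of neighboring cells in a tissue image, a high pairwise value would indicate spatially organized heterogeneity rather than random cell-to-cell variation.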

  8. Controlled cell-seeding methodologies: a first step toward clinically relevant bone tissue engineering strategies.

    PubMed

    Impens, Saartje; Chen, Yantian; Mullens, Steven; Luyten, Frank; Schrooten, Jan

    2010-12-01

    The repair of large and complex bone defects could be helped by a cell-based bone tissue engineering strategy. A reliable and consistent cell-seeding methodology is a mandatory step in bringing bone tissue engineering into the clinic. However, optimization of the cell-seeding step is only relevant when it can be reliably evaluated. The cell seeding efficiency (CSE) plays a fundamental role herein. Results showed that cell lysis and the definition used to determine the CSE played a key role in quantifying the CSE. The definition of CSE should therefore be consistent and unambiguous. The study of the influence of five drop-seeding-related parameters within the studied test conditions showed that (i) the cell density and (ii) the seeding vessel did not significantly affect the CSE, whereas (iii) the volume of seeding medium-to-free scaffold volume ratio (MFR), (iv) the seeding time, and (v) the scaffold morphology did. Prolonging the incubation time increased the CSE up to a plateau value at 4 h. Increasing the MFR or permeability by changing the morphology of the scaffolds significantly reduced the CSE. These results confirm that cell seeding optimization is needed and that an evidence-based selection of the seeding conditions is favored.
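
    The point about the CSE definition can be made concrete: counting attached cells directly versus inferring attachment from the supernatant gives different efficiencies once cell lysis occurs. A minimal sketch with hypothetical counts:

```python
def cse_from_supernatant(cells_seeded, cells_in_supernatant):
    """CSE inferred indirectly: cells not recovered in the supernatant are
    assumed to be attached, so lysed cells inflate the estimate."""
    return 100.0 * (cells_seeded - cells_in_supernatant) / cells_seeded

def cse_from_scaffold(cells_seeded, cells_on_scaffold):
    """CSE counted directly on the scaffold."""
    return 100.0 * cells_on_scaffold / cells_seeded

# Hypothetical: 1e6 cells seeded, 2e5 recovered in the supernatant,
# 7e5 actually attached (1e5 lysed and unaccounted for).
print(cse_from_supernatant(1e6, 2e5))  # 80.0 -- overestimates attachment
print(cse_from_scaffold(1e6, 7e5))     # 70.0
```

    The two definitions agree only when no cells are lost to lysis, which is why a consistent, unambiguous definition is needed before seeding conditions can be compared.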

  9. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of the matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, namely the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
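
    The truncated-decomposition compression step common to both algorithms can be illustrated with a small exponential kernel; the grid sizes and tolerance below are assumptions for illustration, not values from the paper:

```python
import numpy as np

# Exponential kernel K[i, j] = exp(-t_i / T_j) relating relaxation times T
# to measurement times t (illustrative grids).
t = np.linspace(1e-3, 1.0, 200)[:, None]
T = np.logspace(-3, 0, 100)[None, :]
K = np.exp(-t / T)

# Truncate the SVD: keep only singular values above a relative tolerance.
# The kernel is severely ill-conditioned, so very few components survive.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
rank = int(np.sum(s > 1e-8 * s[0]))
K_trunc = (U[:, :rank] * s[:rank]) @ Vt[:rank]

print(rank, float(np.abs(K - K_trunc).max()))  # small rank, tiny error
```

    The rapid singular value decay is what makes compression effective; adapting the truncation level during the optimization, as the paper proposes, trades this compression against accuracy near the optimum.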

  10. Contrasting responses of grassland water and carbon exchanges to climate change between Tibetan Plateau and Inner Mongolia

    NASA Astrophysics Data System (ADS)

    Liu, D.; Li, Y.; Wang, T.; Peylin, P. P.; MacBean, N.; Ciais, P.; Jia, G.; Ma, M.; Ma, Y.; Shen, M.; Zhang, X.; Piao, S.

    2017-12-01

    The grasslands of the Tibetan Plateau (TP) and Inner Mongolia (IM) in China play important roles in climate change mitigation. These two regions have increasingly experienced warming and changing precipitation regimes over the past three decades. However, it remains uncertain to what extent temperature and water availability regulate the water and carbon fluxes across alpine (TP) and temperate (IM) grasslands. Here, we optimize a process-based model of carbon and water fluxes using eddy covariance (EC) data and analyze the simulated results from the optimized model exposed to a range of annual temperature and precipitation anomalies. We found that the changes in NEE of the TP grassland are relatively small because ecosystem respiration (Re) and gross primary productivity (GPP) increase at comparable rates under warming. The NEE of the IM grassland increases with warming due to a faster reduction of GPP than of Re under warming-induced drought. We also found suppression of plant transpiration to be the primary cause of the muted response of evapotranspiration to warming in IM, in contrast to the enhanced transpiration in TP. We therefore highlight that the underlying processes regulating the responses of the water and carbon cycles to warming are fundamentally different between the TP and IM grasslands.

  11. A high-gain and high-efficiency X-band triaxial klystron amplifier with two-stage cascaded bunching cavities

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Ju, Jinchuan; Zhang, Jun; Zhong, Huihuang

    2017-12-01

    To achieve GW-level amplified output radiation at the X-band, a relativistic triaxial klystron amplifier with two-stage cascaded double-gap bunching cavities is investigated. The input cavity is optimized to obtain a high absorption rate of the externally injected microwave. The cascaded bunching cavities are optimized to achieve a high modulation depth of the fundamental harmonic current. A double-gap standing-wave extractor is designed to improve the beam-wave conversion efficiency. Two reflectors with high reflection coefficients for both the asymmetric mode and the TEM mode are employed to suppress asymmetric mode competition and TEM-mode microwave leakage. Particle-in-cell simulation results show that a high-power microwave with a power of 2.53 GW and a frequency of 8.4 GHz is generated with a 690 kV, 9.3 kA electron beam excitation and a 25 kW seed microwave injection. In particular, the achieved power conversion efficiency is about 40%, and the gain is as high as 50 dB. Meanwhile, there is insignificant self-excitation of the parasitic mode in the proposed structure when the reflectors are adopted. The relative phase difference between the injected signals and the output microwaves remains locked after the amplifier becomes saturated.

  12. Fundamental investigation of the tribological and mechanical responses of materials and nanostructures

    NASA Astrophysics Data System (ADS)

    Bucholz, Eric W.

    In the field of tribology, the ability to predict, and ultimately control, frictional performance is of critical importance for the optimization of tribological systems. As such, understanding the specific mechanisms involved in the lubrication processes for different materials is a fundamental step in tribological system design. In this work, a combination of computational and experimental methods that include classical molecular dynamics (MD) simulations, atomic force microscopy (AFM) experiments, and multivariate statistical analyses provides fundamental insight into the tribological and mechanical properties of carbon-based and inorganic nanostructures, lamellar materials, and inorganic ceramic compounds. One class of materials of modern interest for tribological applications is nanoparticles, which can be employed either as solid lubricating films or as lubricant additives. In experimental systems, however, it is often challenging to attain the in situ observation of tribological interfaces necessary to identify the atomic-level mechanisms involved during lubrication and response to mechanical deformation. Here, classical MD simulations establish the mechanisms occurring during the friction and compression of several types of nanoparticles including carbon nano-onions, amorphous carbon nanoparticles, and inorganic fullerene-like MoS2 nanoparticles. Specifically, the effect of a nanoparticle's structural properties on the lubrication mechanisms of rolling, sliding, and lamellar exfoliation is indicated; the findings quantify the relative impact of each mechanism on the tribological and mechanical properties of these nanoparticles. Beyond identifying the lubrication mechanisms of known lubricating materials, the continual advancement of modern technology necessitates the identification of new candidate materials for use in tribological applications. 
To this effect, atomic-scale AFM friction experiments on the aluminosilicate mineral pyrophyllite demonstrate that pyrophyllite provides a low friction coefficient and low shear stresses as well as a high threshold to interfacial wear; this suggests the potential for use of pyrophyllite as a lubricious material under specific conditions. Also, a robust and accurate model for estimating the friction coefficients of inorganic ceramic materials that is based on the fundamental relationships between material properties is presented, which was developed using multivariate data mining algorithms. These findings provide the tribological community with a new means of quickly identifying candidate materials that may provide specific frictional properties for desired applications.

  13. Relations between big five traits and fundamental motives.

    PubMed

    Olson, Kenneth R; Weber, Dale A

    2004-12-01

Relations were examined between configurations of Big Five traits (Extraversion, Agreeableness, Conscientiousness, Neuroticism, Openness to Experience) and 16 fundamental motives (Social Contact, Curiosity, Honor, Power, Order, Idealism, Independence, Status, Vengeance, Romance, Family, Physical Activity, Saving, Acceptance, Eating, Tranquility) in 138 university students (93 women, 45 men; M age = 20.3 yr., SD = 4.5). Big Five traits were measured with the NEO-PI-R, and motives were measured with the Reiss Profile of Fundamental Goals and Motivation Sensitivities. The traits were significantly related to all the motives (adjusted R2 = .06 to .43) except Physical Activity. Four motives were related to only one trait, and nine configurations of two or more traits were correlated with the remaining 11 motives. Total motive scores across all participants, an index of the strength of overall motivation, were positively correlated with Extraversion and Neuroticism and negatively with Agreeableness.
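The adjusted R2 values quoted (.06 to .43) correct the plain R2 for the number of predictors in each configuration. A minimal sketch of the relationship; the 0.45 input below is an illustrative R2, not a value reported by the study:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# with the study's n = 138 and all five Big Five traits as predictors,
# an illustrative R^2 of 0.45 shrinks to roughly .43
shrunk = round(adjusted_r2(0.45, 138, 5), 2)
```

With small p relative to n, as here, the correction is mild; with many predictors and few observations it can be substantial.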

  14. The Relationship between Relative Fundamental Frequency and a Kinematic Estimate of Laryngeal Stiffness in Healthy Adults

    ERIC Educational Resources Information Center

    McKenna, Victoria S.; Heller Murray, Elizabeth S.; Lien, Yu-An S.; Stepp, Cara E.

    2016-01-01

    Purpose: This study examined the relationship between the acoustic measure relative fundamental frequency (RFF) and a kinematic estimate of laryngeal stiffness. Method: Twelve healthy adults (mean age = 22.7 years, SD = 4.4; 10 women, 2 men) produced repetitions of /ifi/ while varying their vocal effort during simultaneous acoustic and video…

  15. Individual Monitoring of Vocal Effort with Relative Fundamental Frequency: Relationships with Aerodynamics and Listener Perception

    ERIC Educational Resources Information Center

    Lien, Yu-An S.; Michener, Carolyn M.; Eadie, Tanya L.; Stepp, Cara E.

    2015-01-01

    Purpose: The acoustic measure relative fundamental frequency (RFF) was investigated as a potential objective measure to track variations in vocal effort within and across individuals. Method: Twelve speakers with healthy voices created purposeful modulations in their vocal effort during speech tasks. RFF and an aerodynamic measure of vocal effort,…

  16. Increasing the technical level of mining haul trucks

    NASA Astrophysics Data System (ADS)

    Voronov, Yuri; Voronov, Artyom; Grishin, Sergey; Bujankin, Alexey

    2017-11-01

Theoretical and methodological fundamentals of optimal mining haul truck design are articulated. Methods based on a systems approach to the integrated assessment of truck technical level, and methods for optimizing truck parameters against performance standards, are provided, together with the results of applying them. The developed method allows not only assessing truck technical levels but also choosing the most promising models and quantitatively evaluating the decisions to be made at the design stage. These areas are closely connected with improving industrial output quality, which, as part of the "total quality control" ideology widespread in the Western world, is one of the major issues for the Russian economy.

  17. Theoretical and experimental analysis of injection seeding a Q-switched alexandrite laser

    NASA Technical Reports Server (NTRS)

    Prasad, C. R.; Lee, H. S.; Glesne, T. R.; Monosmith, B.; Schwemmer, G. K.

    1991-01-01

    Injection seeding is a method for achieving linewidths of less than 500 MHz in the output of broadband, tunable, solid state lasers. Dye lasers, CW and pulsed diode lasers, and other solid state lasers have been used as injection seeders. By optimizing the fundamental laser parameters of pump energy, Q-switched pulse build-up time, injection seed power and mode matching, one can achieve significant improvements in the spectral purity of the Q-switched output. These parameters are incorporated into a simple model for analyzing spectral purity and pulse build-up processes in a Q-switched, injection-seeded laser. Experiments to optimize the relevant parameters of an alexandrite laser show good agreement.

  18. Designing a new three-dimensional periodic cellular auxetic material

    NASA Astrophysics Data System (ADS)

    Zhou, Yiyi; Chen, Lianmen

    2017-07-01

Auxetics are materials exhibiting a negative Poisson’s ratio. Early research identified several categories of auxetic materials in the chemical field. Later research identified rotation as the fundamental mechanism generating this behavior, and a variety of two-dimensional auxetic materials have been generated accordingly. Nevertheless, successful examples of three-dimensional auxetic materials remain rare. This paper introduces a new design of three-dimensional periodic cellular auxetic material based on a geometrical and mechanical methodology. The projections of the optimized periodic modules in the two horizontal directions are geometrically identical to an auxetic hexahedral pattern, so that the optimized periodic material behaves auxetically in both horizontal directions under vertical compression. A parametric model is simulated to validate the design.

  19. Optimized 2D array of thin silicon pillars for efficient antireflective coatings in the visible spectrum

    PubMed Central

    Proust, Julien; Fehrembach, Anne-Laure; Bedu, Frédéric; Ozerov, Igor; Bonod, Nicolas

    2016-01-01

Light reflection occurring at the surface of silicon wafers is drastically diminished by etching square pillars of height 110 nm and width 140 nm, separated by a 100 nm gap, in a square lattice. The design of the nanostructure is optimized to widen the spectral tolerance of the antireflective coating over the visible spectrum for both fundamental polarizations. Angle- and polarization-resolved optical measurements show that light reflection remains under 5% when averaged over the visible spectrum for both polarizations in a wide angular range. Light reflection remains almost insensitive to the light polarization even at oblique incidence. PMID:27109643

  20. Optimization of silicon waveguides for gas detection application at mid-IR wavelengths

    NASA Astrophysics Data System (ADS)

    Butt, M. A.; Kozlova, E. S.

    2018-04-01

Several trace gases, such as N2O, CO, CO2, NO, H2O, NO2, NH3, and CH4, have their absorption peaks in the mid-IR spectrum. These gases absorb strongly in the mid-IR (>2.5 μm) spectral region due to their fundamental rotational and vibrational transitions. In this work, we modelled and optimized three kinds of silicon waveguides (rib, strip, and slot) to obtain the maximum evanescent field ratio. These waveguides are designed at 3.39 μm and 4.67 μm, which correspond to the absorption lines of methane (CH4) and carbon monoxide (CO), respectively.

  1. The role of CSP in the electricity system of South Africa - technical operation, grid constraints, market structure and economics

    NASA Astrophysics Data System (ADS)

    Kost, Christoph; Friebertshäuser, Chris; Hartmann, Niklas; Fluri, Thomas; Nitz, Peter

    2017-06-01

This paper analyses the role of solar technologies (CSP and PV) and their interaction in the South African electricity system using a fundamental electricity system model (ENTIGRIS-SouthAfrica). The model is used to analyse the South African long-term electricity generation portfolio mix, optimized site selection, and required transmission capacities until the year 2050, with particular attention to the location and grid integration of solar (PV and CSP) and wind power plants. This analysis is carried out using a detailed resource assessment of both technologies. A cluster approach is presented to reduce complexity by integrating the data into an optimization model.

  2. Pump-shaped dump optimal control reveals the nuclear reaction pathway of isomerization of a photoexcited cyanine dye.

    PubMed

    Dietzek, Benjamin; Brüggemann, Ben; Pascher, Torbjörn; Yartsev, Arkady

    2007-10-31

    Using optimal control as a spectroscopic tool we decipher the details of the molecular dynamics of the essential multidimensional excited-state photoisomerization - a fundamental chemical reaction of key importance in biology. Two distinct nuclear motions are identified in addition to the overall bond-twisting motion: Initially, the reaction is dominated by motion perpendicular to the torsion coordinate. At later times, a second optically active vibration drives the system along the reaction path to the bottom of the excited-state potential. The time scales of the wavepacket motion on a different part of the excited-state potential are detailed by pump-shaped dump optimal control. This technique offers new means to control a chemical reaction far from the Franck-Condon point of absorption and to map details of excited-state reaction pathways revealing unique insights into the underlying reaction mechanism.

  3. Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.

    PubMed

    Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai

    2005-10-01

    A genetic algorithm has been applied to the line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method solves the topographical surface reconstruction problem as one of combinatorial optimization. To extend this optimization approach for three-dimensional (3-D) surface topography, this paper considers the use of a string coding where a 3-D surface topography is represented by a set of coordinates of vertices. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data to capture the fundamental features of the surface being probed by an electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing algorithms. Experimental results on SE images are presented.
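The hybrid of genetic and simulated-annealing search described above can be sketched compactly: crossover and mutation propose children, and a Metropolis test at a cooling temperature decides acceptance. The objective, encoding, and all parameters below are illustrative stand-ins (the paper's strings encode vertex heights of a Delaunay-triangulated surface):

```python
import math
import random

def hybrid_ga_sa(f, dim, pop=30, gens=150, t0=1.0, cool=0.97, seed=1):
    """Toy hybrid of a genetic algorithm and simulated annealing: uniform
    crossover plus Gaussian mutation proposes a child, and a Metropolis
    test at the current temperature decides whether it replaces its parent."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    temp = t0
    for _ in range(gens):
        for i in range(pop):
            mate = xs[rng.randrange(pop)]
            child = [a if rng.random() < 0.5 else b
                     for a, b in zip(xs[i], mate)]      # uniform crossover
            child[rng.randrange(dim)] += rng.gauss(0, 0.1)  # mutation
            delta = f(child) - f(xs[i])
            # SA acceptance: keep improvements, occasionally keep worse moves
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                xs[i] = child
        temp = max(temp * cool, 1e-9)  # cool once per generation
    return min(xs, key=f)

# toy quadratic stand-in for the surface-reconstruction objective
best = hybrid_ga_sa(lambda x: sum((xi - 0.5) ** 2 for xi in x), dim=3)
```

The annealed acceptance rule is what lets the population escape local minima early on, while the cooling schedule makes the search increasingly greedy.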

  4. Parameter optimization of electrochemical machining process using black hole algorithm

    NASA Astrophysics Data System (ADS)

    Singh, Dinesh; Shukla, Rajkamal

    2017-12-01

Advanced machining processes are significant as higher accuracy is required in machined components in the manufacturing industries. Parameter optimization of machining processes gives the optimum control needed to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered and its performance is evaluated using the black hole algorithm (BHA). BHA builds on the fundamental idea of black hole theory and has few operating parameters to tune. Two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process, using a single objective at a time. The results obtained using BHA are better than those of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC), and biogeography-based optimization (BBO), attempted by previous researchers.
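The black-hole heuristic is compact enough to sketch. Everything below (the quadratic objective, bounds, and parameter values) is an illustrative stand-in, not the paper's MRR/OC response models:

```python
import math
import random

def black_hole_optimize(f, bounds, n_stars=20, iters=200, seed=0):
    """Minimize f over box bounds with the basic black-hole heuristic:
    the best star acts as the black hole, the rest drift toward it, and
    any star crossing the event horizon is replaced by a random one."""
    rng = random.Random(seed)
    stars = [[rng.uniform(lo, hi) for lo, hi in bounds]
             for _ in range(n_stars)]
    for _ in range(iters):
        costs = [f(s) for s in stars]
        b = min(range(n_stars), key=costs.__getitem__)
        hole = stars[b][:]
        radius = costs[b] / (sum(costs) + 1e-12)  # event-horizon radius
        for i in range(n_stars):
            if i == b:
                continue  # the black hole itself stays put
            stars[i] = [x + rng.random() * (h - x)
                        for x, h in zip(stars[i], hole)]
            if math.dist(stars[i], hole) < radius:
                # swallowed: respawn at a random location to keep exploring
                stars[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
    return min(stars, key=f)

# toy objective standing in for an ECM response model, optimum at (3, -1)
best = black_hole_optimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                           [(-5, 5), (-5, 5)])
```

The appeal noted in the abstract is visible here: beyond population size and iteration count there is nothing to tune, since the horizon radius is derived from the fitness values themselves.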

  5. Modular and configurable optimal sequence alignment software: Cola.

    PubMed

    Zamani, Neda; Sundström, Görel; Höppner, Marc P; Grabherr, Manfred G

    2014-01-01

The fundamental challenge in optimally aligning homologous sequences is to define a scoring scheme that best reflects the underlying biological processes. Maximising the overall number of matches in the alignment does not always reflect the patterns by which nucleotides mutate. Efficiently implemented algorithms that can be parameterised to accommodate more complex nonlinear scoring schemes are thus desirable. We present Cola, alignment software that implements different optimal alignment algorithms, also allowing for scoring contiguous matches of nucleotides in a nonlinear manner. The latter places more emphasis on short, highly conserved motifs, and less on the surrounding nucleotides, which can be more diverged. To illustrate the differences, we report results from aligning 14,100 sequences from 3' untranslated regions of human genes to 25 of their mammalian counterparts, where we found that a nonlinear scoring scheme is more consistent than a linear scheme in detecting short, conserved motifs. Cola is freely available under the LGPL from https://github.com/nedaz/cola.
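Scoring contiguous matches nonlinearly can be illustrated with a small dynamic program whose state carries the current match-run length. The concave run score 4·sqrt(k) below gives diminishing returns to extending a run, so short conserved motifs carry relatively more weight than long, weakly matching stretches; it is an illustrative choice, not Cola's actual parameterisation:

```python
from functools import lru_cache

def align_score(a, b, run_score=lambda k: 4 * k ** 0.5,
                gap=-2.0, mismatch=-1.0):
    """Optimal global alignment score in which a contiguous run of k
    matches earns run_score(k) rather than k times a flat match score.
    DP state: positions (i, j) plus the current match-run length."""
    @lru_cache(maxsize=None)
    def dp(i, j, run):
        # bank the finished run's nonlinear score whenever the run breaks
        bank = run_score(run) if run else 0.0
        if i == len(a) and j == len(b):
            return bank
        best = float("-inf")
        if i < len(a) and j < len(b):
            if a[i] == b[j]:
                best = max(best, dp(i + 1, j + 1, run + 1))  # extend run
            else:
                best = max(best, bank + mismatch + dp(i + 1, j + 1, 0))
        if i < len(a):
            best = max(best, bank + gap + dp(i + 1, j, 0))
        if j < len(b):
            best = max(best, bank + gap + dp(i, j + 1, 0))
        return best
    return dp(0, 0, 0)
```

The extra run-length dimension is what a linear Needleman-Wunsch table does not need; it is the price of making the run score a free nonlinear function.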

  6. Chocolate milk: a post-exercise recovery beverage for endurance sports.

    PubMed

    Pritchett, Kelly; Pritchett, Robert

    2012-01-01

An optimal post-exercise nutrition regimen is fundamental for ensuring recovery. Therefore, research has aimed to examine post-exercise nutritional strategies for enhanced training stimuli. Chocolate milk has become an affordable recovery beverage for many athletes, taking the place of more expensive commercially available recovery beverages. Low-fat chocolate milk consists of a 4:1 carbohydrate:protein ratio (similar to many commercial recovery beverages) and provides fluids and sodium to aid in post-workout recovery. Consuming chocolate milk (1.0–1.5 g·kg⁻¹·h⁻¹) immediately after exercise and again at 2 h post-exercise appears to be optimal for exercise recovery and may attenuate indices of muscle damage. Future research should examine the optimal amount, timing, and frequency of ingestion of chocolate milk on post-exercise recovery measures including performance, indices of muscle damage, and muscle glycogen resynthesis. Copyright © 2012 S. Karger AG, Basel.

  7. Defining defect specifications to optimize photomask production and requalification

    NASA Astrophysics Data System (ADS)

    Fiekowsky, Peter

    2006-10-01

Reducing defect repairs and accelerating defect analysis are becoming more important as the total cost of defect repairs on advanced masks increases. Photomask defect specs based on printability, as measured on AIMS microscopes, have been used for years, but the fundamental defect spec is still the defect size as measured on the photomask, requiring the repair of many unprintable defects. ADAS, the Automated Defect Analysis System from AVI, is now available in most advanced mask shops. It makes the use of pure printability specs, or "Optimal Defect Specs", practical. This software uses advanced algorithms to eliminate false defects caused by approximations in the inspection algorithm, then classifies, simulates, and dispositions each defect based on its printability and location. This paper defines "optimal defect specs", explains why they are now practical and economic, gives a method of determining them, and provides accuracy data.

  8. Optimal programming management of ventricular tachycardia storm in ICD patients

    PubMed Central

    Qian, Zhiyong; Guo, Jianghong; Zhang, Zhiyong; Wang, Yao; Hou, Xiaofeng; Zou, Jiangang

    2015-01-01

    Abstract Ventricular tachycardia storm (VTS) is defined as a life-threatening syndrome of three or more separate episodes of ventricular tachycardia (VT) leading to implantable cardioverter defibrillator (ICD) therapy within 24 hours. Patients with VTS have poor outcomes and require immediate medical attention. ICD shocks have been shown to be associated with increased mortality in several studies. Optimal programming in minimization of ICD shocks may decrease mortality. Large controlled trials showed that long detection time and high heart rate detection threshold reduced ICD shock burden without an increase in syncope or death. As a fundamental therapy of ICD, antitachycardia pacing (ATP) can terminate most slow VT with a low risk of acceleration. For fast VT, burst pacing is more effective and less likely to result in acceleration than ramp pacing. One algorithm of optimal programming management during a VTS is presented in the review. PMID:25745473

  9. Optimal Portfolio Selection Under Concave Price Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn

    2013-06-15

In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  10. The Joint Institute for Nuclear Research in Experimental Physics of Elementary Particles

    NASA Astrophysics Data System (ADS)

    Bednyakov, V. A.; Russakovich, N. A.

    2018-05-01

The year 2016 marks the 60th anniversary of the Joint Institute for Nuclear Research (JINR) in Dubna, an international intergovernmental organization for basic research in the fields of elementary particles, atomic nuclei, and condensed matter. The highly productive advances over this long road clearly show that an international basis and diversity of research guarantee the successful development (and maintenance) of fundamental science. This is especially important for experimental research. In this review, the most significant achievements are briefly described, with an attempt to look seven to ten years into the future and show the role of JINR in solving highly important problems of elementary particle physics, a fundamental field of modern natural science. This glimpse of the future is full of justified optimism.

  11. From analytic inversion to contemporary IMRT optimization: Radiation therapy planning revisited from a mathematical perspective

    PubMed Central

    Censor, Yair; Unkelbach, Jan

    2011-01-01

    In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered as an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: Continuous analytic inversion techniques on one hand, and fully-discretized algebraic methods on the other hand. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: The rotation therapy approach versus fixed angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). PMID:21616694
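The fully-discretized algebraic approach mentioned above casts planning as solving a linear system A x = d, with A the dose-deposition matrix, x the nonnegative beamlet intensities, and d the prescribed doses. A minimal Kaczmarz/ART sketch with a hypothetical 2x2 system (not a clinical model):

```python
def kaczmarz_nonneg(A, d, sweeps=200):
    """Solve the discretized dose equations A x = d for beamlet weights x
    by cyclically projecting onto each equation's hyperplane (ART),
    clipping intensities to be nonnegative after every sweep."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, di in zip(A, d):
            norm2 = sum(r * r for r in row)
            if norm2 == 0:
                continue
            # step length that lands x exactly on this equation's hyperplane
            lam = (di - sum(r * xi for r, xi in zip(row, x))) / norm2
            x = [xi + lam * r for xi, r in zip(x, row)]
        x = [max(0.0, xi) for xi in x]  # physical beamlets cannot be negative
    return x

# hypothetical 2-voxel, 2-beamlet dose-deposition matrix and prescription
A = [[1.0, 0.2], [0.3, 1.0]]
d = [1.0, 0.8]
x = kaczmarz_nonneg(A, d)
```

Row-action methods of this family are attractive for IMRT-scale systems because each update touches only one row of the (sparse, enormous) dose matrix.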

  12. Physical and mechanical metallurgy of high purity Nb accelerator cavities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Wright, N. T.; Bieler, T. R.; Pourboghrat, F.

    2010-01-01

In the past decade, high Q values have been achieved in high purity Nb superconducting radio frequency (SRF) cavities. Fundamental understanding of the physical metallurgy of Nb that enables these achievements is beginning to reveal what challenges remain to establish reproducible and cost-effective production of high performance SRF cavities. Recent studies of dislocation substructure development and effects of recrystallization arising from welding and heat treatments and their correlations with cavity performance are considered. With better fundamental understanding of the effects of dislocation substructure evolution and recrystallization on electron and phonon conduction, as well as the interior and surface states, it will be possible to design optimal processing paths for cost-effective performance using approaches such as hydroforming, which minimizes or eliminates welds in a cavity.

  14. High resolution study of the ν2 and ν5 rovibrational fundamental bands of thionyl chloride: Interplay of an evolutionary algorithm and a line-by-line analysis

    NASA Astrophysics Data System (ADS)

    Roucou, Anthony; Dhont, Guillaume; Cuisset, Arnaud; Martin-Drumel, Marie-Aline; Thorwirth, Sven; Fontanari, Daniele; Meerts, W. Leo

    2017-08-01

The ν2 and ν5 fundamental bands of thionyl chloride (SOCl2) were measured in the 420–550 cm-1 region using the FT-far-IR spectrometer exploiting synchrotron radiation on the AILES beamline at SOLEIL. A straightforward line-by-line analysis is complicated by the high congestion of the spectrum, due both to the high density of SOCl2 rovibrational bands and to the presence of the ν2 fundamental band of sulfur dioxide produced by hydrolysis of SOCl2 with residual water. To overcome this difficulty, our assignment procedure for the main isotopologues 32S16O35Cl2 and 32S16O35Cl37Cl alternates between a direct fit of the spectrum, via a global optimization technique, and a traditional line-by-line analysis. The global optimization, based on an evolutionary algorithm, produces rotational constants and band centers that serve as useful starting values for the subsequent spectroscopic analysis. This work helped to identify the pure rotational submillimeter spectrum of 32S16O35Cl2 in the v2=1 and v5=1 vibrational states reported by Martin-Drumel et al. [J. Chem. Phys. 144, 084305 (2016)]. As a by-product, the rotational transitions of the far-IR-inactive v4=1 state were identified in the submillimeter spectrum. A global fit gathering all the microwave, submillimeter, and far-IR data of thionyl chloride has been performed, showing that no major perturbation of rovibrational energy levels occurs for the main isotopologue of the molecule.

  15. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.
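The third step above is, computationally, a group-by-minimum over the ensemble: for each arrival time, keep the speed series with the least total energy. A toy sketch with made-up sample triples (not output of the DO level-set solver):

```python
def min_energy_speeds(samples):
    """samples: (arrival_time, speed_series, energy) triples from an
    ensemble; for each arrival time keep the lowest-energy speed series."""
    best = {}
    for t, speeds, energy in samples:
        if t not in best or energy < best[t][1]:
            best[t] = (speeds, energy)
    return best

# hypothetical ensemble members: two candidate runs arriving at t = 10,
# one at t = 12 (times in hours, energies in arbitrary units)
runs = [(10, [1.0, 1.0], 5.0),
        (10, [0.5, 1.5], 4.2),
        (12, [0.8, 0.8], 3.1)]
chosen = min_energy_speeds(runs)
```

In the paper this selection is done on the joint probability computed by the stochastic PDEs rather than on raw Monte Carlo samples, which is where the 3-4 orders-of-magnitude speedup comes from.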

  16. Fuzzy multi-objective optimization case study based on an anaerobic co-digestion process of food waste leachate and piggery wastewater.

    PubMed

    Choi, Angelo Earvin Sy; Park, Hung Suck

    2018-06-20

This paper presents the development and evaluation of fuzzy multi-objective optimization for decision-making, applied to the process optimization of the anaerobic digestion (AD) process. The operating-cost criterion, a fundamental gap in previous AD analyses, was integrated into the case study. In this study, the mixing ratio of food waste leachate (FWL) and piggery wastewater (PWW) and the calcium carbonate (CaCO3) and sodium chloride (NaCl) concentrations were optimized to enhance methane production while minimizing operating cost. The results indicated a maximum of 63.3% satisfaction for both methane production and operating cost under the following optimal conditions: mixing ratio (FWL:PWW) of 1.4, CaCO3 of 2970.5 mg/L, and NaCl of 2.7 g/L. In multi-objective optimization, the specific methane yield (SMY) was 239.0 mL CH4/g VS added, with 41.2% volatile solids reduction (VSR), at an operating cost of 56.9 US$/ton. In comparison, a previous optimization study using response surface methodology obtained an SMY, VSR, and operating cost of 310 mL/g, 54%, and 83.2 US$/ton, respectively. These results show the potential of fuzzy multi-objective optimization for practical decision-making in AD process optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
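A "maximum satisfaction" figure such as 63.3% arises from max-min fuzzy aggregation: each objective gets a membership (satisfaction) function, and the chosen design maximizes the smallest membership. A minimal sketch; the candidate operating points and the worst/best anchors below are illustrative assumptions, not the study's data or membership functions:

```python
def satisfaction(value, worst, best):
    """Linear fuzzy membership: 0 at the worst acceptable value, 1 at the
    best; works for maximized (worst < best) and minimized targets alike."""
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

def max_min_choice(candidates, objectives):
    """Return the candidate maximizing the minimum satisfaction level."""
    def overall(c):
        return min(satisfaction(obj(c), worst, best)
                   for obj, worst, best in objectives)
    winner = max(candidates, key=overall)
    return winner, overall(winner)

# hypothetical operating points (yield in mL CH4/g VS, cost in US$/ton)
cands = [{"smy": 239.0, "cost": 56.9}, {"smy": 310.0, "cost": 83.2}]
objectives = [(lambda c: c["smy"], 200.0, 320.0),   # maximize methane yield
              (lambda c: c["cost"], 90.0, 50.0)]    # minimize operating cost
winner, level = max_min_choice(cands, objectives)
```

With these invented anchors the lower-yield, lower-cost point wins, mirroring the abstract's finding that the fuzzy optimum trades some methane yield for operating cost.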

  17. Numerical analysis of fundamental mode selection of a He-Ne laser by a circular aperture

    NASA Astrophysics Data System (ADS)

    He, Xin; Zhang, Bin

    2011-11-01

In a He-Ne laser with an integrated cavity made of Zerodur, the inner surface quality of the gain tube is limited by the machining techniques, which tends to influence the beam propagation and the transverse mode distribution. To improve beam quality and select the fundamental mode, an aperture is usually introduced into the cavity. In the laser design process, the Fresnel-Kirchhoff diffraction integral equation is adopted to calculate the optical field distributions on each interface. The transit matrix is obtained based on the self-reproducing principle and the finite element method. Thus, the optical field distribution on any interface and the field loss of each transverse mode can be acquired by solving the eigenvalues and eigenvectors of the transit matrix. Different-sized apertures at different positions yield different matrices and corresponding results; by comparing these results, the optimal size and position of the aperture can be determined. As a result, the feasibility of selecting the fundamental mode in a Zerodur He-Ne laser with a circular aperture has been verified theoretically.

  18. Gender classification in children based on speech characteristics: using fundamental and formant frequencies of Malay vowels.

    PubMed

    Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa

    2013-03-01

    Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the nonnormalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification. The Euclidean distance method obtained 84.17% based on the optimal classification accuracy for all age groups. The accuracy was further increased to 99.81% using multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
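The Euclidean minimum distance classifier used here is a nearest-centroid rule over the frequency features. A minimal sketch with invented (F0, F1) values in Hz, not the study's measurements:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(x, centroids):
    """Euclidean minimum-distance rule: assign x to the closest class mean."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# illustrative (F0, F1) training pairs per class -- hypothetical values
train = {"girls": [[260.0, 900.0], [255.0, 880.0]],
         "boys": [[235.0, 840.0], [230.0, 820.0]]}
cents = {label: centroid(vs) for label, vs in train.items()}
```

The normalization step reported in the abstract matters for exactly this rule: without it, the much larger formant magnitudes (F1-F3) dominate the distance and drown out F0.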

  19. THE FUNDAMENTAL GROUP OF THE COMPLEMENT OF A HYPERSURFACE IN \\mathbf C^n

    NASA Astrophysics Data System (ADS)

    Kulikov, Vik S.

    1992-04-01

    Let D be a complex algebraic hypersurface in \\mathbf C^n not passing through the point o \\in \\mathbf C^n. The generators of the fundamental group \\pi_1(\\mathbf C^n\\setminus D, o) and the relations among them are described in terms of the real cone over D with apex at o. This description is a generalization to the algebraic case of Wirtinger's corepresentation of the fundamental group of a knot in \\mathbf R^3. A new proof of Zariski's conjecture about commutativity of the fundamental group \\pi_1(\\mathbf P^2\\setminus C) for a projective nodal curve C is given in the second part of the paper based on the description of the generators and the relations in the group \\pi_1(\\mathbf C^n\\setminus D, o) obtained in the first part.
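For orientation, the Wirtinger presentation that this construction generalizes assigns one generator per arc of a knot diagram and one conjugation relation per crossing (standard textbook form, not quoted from the paper):

```latex
\pi_1(\mathbf R^3 \setminus K) \cong
\langle\, x_1, \dots, x_n \;\mid\;
x_{j_k} = w_k\, x_{i_k}\, w_k^{-1},\ k = 1, \dots, n \,\rangle,
```

where $x_i$ is a meridian loop around the $i$-th arc and $w_k$ is the generator of the arc passing over the $k$-th crossing; in the paper's algebraic setting, the real cone over $D$ with apex at $o$ supplies the analogous combinatorial data.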

  20. COLD-MODE ACCRETION: DRIVING THE FUNDAMENTAL MASS–METALLICITY RELATION AT z ∼ 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kacprzak, Glenn G.; Glazebrook, Karl; Nanayakkara, Themiya

    2016-07-20

We investigate the star formation rate (SFR) dependence of the stellar mass and gas-phase metallicity relation at z = 2 with MOSFIRE/Keck as part of the ZFIRE survey. We have identified 117 galaxies (1.98 ≤ z ≤ 2.56), with 8.9 ≤ log(M/M⊙) ≤ 11.0, for which we can measure gas-phase metallicities. For the first time, we show a discernible difference in the mass–metallicity relation, using individual galaxies, when dividing the sample into low (<10 M⊙ yr⁻¹) and high (>10 M⊙ yr⁻¹) SFRs. At fixed mass, low star-forming galaxies tend to have higher metallicity than high star-forming galaxies. Using a few basic assumptions, we further show that the gas masses and metallicities required to produce the fundamental mass–metallicity relation and its intrinsic scatter are consistent with cold-mode accretion predictions obtained from the OWLS hydrodynamical simulations. Our results from both simulations and observations suggest that cold-mode accretion is responsible for the fundamental mass–metallicity relation at z = 2, demonstrating the direct relationship between cosmological accretion and the fundamental properties of galaxies.

  1. Evaluation of type II thyroplasty on phonatory physiology in an excised canine larynx model

    PubMed Central

    Devine, Erin E.; Hoffman, Matthew R.; McCulloch, Timothy M.; Jiang, Jack J.

    2016-01-01

    Objective: Type II thyroplasty is an alternative treatment for spasmodic dysphonia, addressing hyperadduction by incising and lateralizing the thyroid cartilage. We quantified the effect of lateralization width on phonatory physiology using excised canine larynges. Methods: Normal closure, hyperadduction, and type II thyroplasty (lateralized up to 5 mm in 1 mm increments with hyperadducted arytenoids) were simulated in excised larynges (N=7). Aerodynamic, acoustic, and videokymographic data were recorded at three subglottal pressures relative to phonation threshold pressure (PTP). One-way repeated measures ANOVA assessed the effect of condition on aerodynamic parameters. Random-intercept linear mixed effects models assessed effects of condition and subglottal pressure on acoustic and videokymographic parameters. Results: PTP differed across conditions (p<0.001). Condition affected percent shimmer (p<0.005) but not percent jitter. Both pressure (p<0.03) and condition (p<0.001) affected fundamental frequency. Pressure affected vibratory amplitude (p<0.05) and intra-fold phase difference (p<0.05). Condition affected phase difference between the vocal folds (p<0.001). Conclusions: Hyperadduction increased PTP and worsened perturbation compared to normal, with near-normal physiology restored at 1 mm lateralization. Further lateralization deteriorated voice quality and increased PTP. Acoustic and videokymographic results indicate that normal physiologic relationships between subglottal pressure and vibration are preserved at the optimal lateralization width but degrade with further lateralization. The 1 mm optimal width observed here reflects the small size of the canine larynx. Future human trials would likely demonstrate a greater optimal width, with patient-specific values potentially determined based on larynx size and symptom severity. PMID:27223665

  2. Environmental Statistics and Optimal Regulation

    PubMed Central

    2014-01-01

    Any organism is embedded in an environment that changes over time. The timescale and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here applied specifically to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and the measurement apparatus, together with the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when should a cell prefer thresholding to a graded response; (ii) when is there a fitness advantage to implementing a Bayesian decision rule; and (iii) when does retaining memory of the past provide a selective advantage. We specifically find that: (i) the relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones. PMID:25254493
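The contrast between thresholding a raw reading and a Bayesian decision rule can be illustrated with a toy sketch (all parameters hypothetical; this is not the paper's model): a cell sees a noisy nutrient reading and decides whether the expected benefit of expressing an enzyme exceeds its cost.

```python
import math

# Toy model (illustrative values, not from the paper): the nutrient is LOW or
# HIGH with prior p_high, and the cell observes a Gaussian-noise-corrupted
# reading. A Bayesian cell expresses the enzyme when the posterior-weighted
# benefit exceeds the fixed production cost; a naive cell thresholds the
# reading directly.

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_high(reading, p_high=0.3, mu_low=1.0, mu_high=3.0, sigma=1.0):
    lh = gauss(reading, mu_high, sigma) * p_high
    ll = gauss(reading, mu_low, sigma) * (1 - p_high)
    return lh / (lh + ll)

def bayes_express(reading, benefit=2.0, cost=1.0):
    # Express iff expected benefit (realized only when nutrient is HIGH)
    # exceeds the fixed production cost.
    return posterior_high(reading) * benefit > cost

def naive_express(reading, threshold=2.0):
    # Threshold the raw noisy reading directly.
    return reading > threshold

print(bayes_express(2.5), naive_express(2.5))
```

The prior and the cost/benefit ratio shift where the Bayesian rule flips, which is exactly the kind of environment-statistics dependence the framework formalizes.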

  3. Optimal planning of co-firing alternative fuels with coal in a power plant by grey nonlinear mixed integer programming model.

    PubMed

    Ko, Andi Setiady; Chang, Ni-Bin

    2008-07-01

    Energy supply and use is of fundamental importance to society. Although the interactions between energy and environment were originally local in character, they have now widened to cover regional and global issues, such as acid rain and the greenhouse effect. For this reason there is a need to cover the direct and indirect economic and environmental impacts of energy acquisition, transport, production, and use. In this paper, particular attention is directed to ways of resolving conflict between economic and environmental goals by encouraging a power plant to co-fire biomass and refuse-derived fuel (RDF) with coal. The aim is to reduce the emission level of sulfur dioxide (SO(2)) in an uncertain environment, using the power plant in Michigan City, Indiana as an example. To assess the uncertainty in a comparative way, both deterministic and grey nonlinear mixed integer programming (MIP) models were developed to minimize the net operating cost with respect to possible fuel combinations, generating the optimal portfolio of alternative fuels while maintaining the same electricity generation. To ease the solution procedure, a stepwise relaxation algorithm was developed for solving the grey nonlinear MIP model. A breakeven alternative fuel value can be identified in the post-optimization stage for decision-making. Research findings show that the inclusion of RDF does not exhibit a comparative advantage in terms of net cost, albeit with relatively lower air pollution impact. Yet it can be sustained by a charge system, subsidy program, or emission credit as the price of coal increases over time.
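A minimal deterministic caricature of such a fuel-blend problem can be sketched as a constrained cost minimization (all coefficients hypothetical; the paper's actual model is a grey nonlinear MIP, and a coarse grid search stands in for its solver here):

```python
# Illustrative fuel-blend optimization (all numbers hypothetical, not from the
# paper): choose tons of coal, biomass, and RDF to meet an energy demand at
# minimum cost while keeping SO2 emissions under a cap.

COST = {"coal": 50.0, "biomass": 40.0, "rdf": 30.0}     # $/ton (hypothetical)
ENERGY = {"coal": 24.0, "biomass": 16.0, "rdf": 12.0}   # MMBtu/ton
SO2 = {"coal": 38.0, "biomass": 2.0, "rdf": 10.0}       # lb SO2/ton
DEMAND = 2400.0                                         # MMBtu required
SO2_CAP = 2500.0                                        # lb SO2 allowed

def best_blend(step=5.0, max_tons=150.0):
    best = None
    n = int(max_tons / step) + 1
    for i in range(n):
        for j in range(n):
            for k in range(n):
                coal, bio, rdf = i * step, j * step, k * step
                energy = ENERGY["coal"] * coal + ENERGY["biomass"] * bio + ENERGY["rdf"] * rdf
                so2 = SO2["coal"] * coal + SO2["biomass"] * bio + SO2["rdf"] * rdf
                if energy < DEMAND or so2 > SO2_CAP:
                    continue                              # infeasible blend
                c = COST["coal"] * coal + COST["biomass"] * bio + COST["rdf"] * rdf
                if best is None or c < best[0]:
                    best = (c, coal, bio, rdf)
    return best

blend_cost, coal_t, bio_t, rdf_t = best_blend()
print(f"cost=${blend_cost:.0f}: coal={coal_t}t biomass={bio_t}t rdf={rdf_t}t")
```

Tightening the SO2 cap shifts the optimum away from coal, which is the economic-versus-environmental trade-off the paper's grey MIP formalizes under uncertainty.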

  4. Individual Functional ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

    PubMed Central

    Li, Kaiming; Guo, Lei; Zhu, Dajiang; Hu, Xintao; Han, Junwei; Liu, Tianming

    2013-01-01

    Studying connectivities among functional brain regions and the functional dynamics on brain networks has drawn increasing interest. A fundamental issue that affects functional connectivity and dynamics studies is how to determine the best possible functional brain regions or ROIs (regions of interest) for a group of individuals, since the connectivity measurements are heavily dependent on ROI locations. Essentially, identification of accurate, reliable and consistent corresponding ROIs is challenging due to the unclear boundaries between brain regions, variability across individuals, and nonlinearity of the ROIs. In response to these challenges, this paper presents a novel methodology to computationally optimize ROI locations derived from task-based fMRI data for individuals so that the optimized ROIs are more consistent, reproducible and predictable across brains. Our computational strategy is to formulate the individual ROI location optimization as a group variance minimization problem, in which group-wise consistencies in functional/structural connectivity patterns and anatomic profiles are defined as optimization constraints. Our experimental results from multimodal fMRI and DTI data show that the optimized ROIs have significantly improved consistency in structural and functional profiles across individuals. These improved functional ROIs with better consistency could contribute to further study of functional interaction and dynamics in the human brain. PMID:22281931
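The core idea of picking, per subject, the candidate ROI location whose profile minimizes group-wise variance can be sketched as a greedy coordinate descent on synthetic data (illustrative only; the actual method uses fMRI/DTI-derived profiles and richer constraints):

```python
import random

random.seed(0)

# Synthetic stand-in for per-subject candidate ROI profiles: each subject has
# several candidate locations, each carrying a small "connectivity profile"
# vector. We greedily choose one candidate per subject to minimize the
# variance of the chosen profiles across the group.

def make_subjects(n_subj=6, dim=4):
    base = [random.gauss(0, 1) for _ in range(dim)]
    return [[[b + random.gauss(0, s) for b in base]
             for s in (2.0, 1.5, 1.0, 0.5, 0.2)]          # 5 candidates each
            for _ in range(n_subj)]

def group_variance(profiles):
    dim = len(profiles[0])
    mean = [sum(p[d] for p in profiles) / len(profiles) for d in range(dim)]
    return sum((p[d] - mean[d]) ** 2 for p in profiles for d in range(dim))

def optimize(subjects, sweeps=5):
    choice = [0] * len(subjects)
    for _ in range(sweeps):
        for i, cands in enumerate(subjects):
            profiles = [subjects[j][choice[j]] for j in range(len(subjects))]
            # Pick the candidate for subject i that minimizes group variance,
            # holding all other subjects' choices fixed.
            choice[i] = min(range(len(cands)),
                            key=lambda c: group_variance(
                                profiles[:i] + [cands[c]] + profiles[i+1:]))
    return choice

subjects = make_subjects()
before = group_variance([s[0] for s in subjects])
choice = optimize(subjects)
after = group_variance([s[c] for s, c in zip(subjects, choice)])
print(before, ">=", after)
```

Each coordinate step can only lower (or keep) the objective, so the group variance is non-increasing, mirroring the paper's variance-minimization formulation in miniature.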

  5. Thermal optimality of net ecosystem exchange of carbon dioxide and underlying mechanisms.

    PubMed

    Niu, Shuli; Luo, Yiqi; Fei, Shenfeng; Yuan, Wenping; Schimel, David; Law, Beverly E; Ammann, Christof; Arain, M Altaf; Arneth, Almut; Aubinet, Marc; Barr, Alan; Beringer, Jason; Bernhofer, Christian; Black, T Andrew; Buchmann, Nina; Cescatti, Alessandro; Chen, Jiquan; Davis, Kenneth J; Dellwik, Ebba; Desai, Ankur R; Etzold, Sophia; Francois, Louis; Gianelle, Damiano; Gielen, Bert; Goldstein, Allen; Groenendijk, Margriet; Gu, Lianhong; Hanan, Niall; Helfter, Carole; Hirano, Takashi; Hollinger, David Y; Jones, Mike B; Kiely, Gerard; Kolb, Thomas E; Kutsch, Werner L; Lafleur, Peter; Lawrence, David M; Li, Linghao; Lindroth, Anders; Litvak, Marcy; Loustau, Denis; Lund, Magnus; Marek, Michal; Martin, Timothy A; Matteucci, Giorgio; Migliavacca, Mirco; Montagnani, Leonardo; Moors, Eddy; Munger, J William; Noormets, Asko; Oechel, Walter; Olejnik, Janusz; Kyaw Tha Paw U; Pilegaard, Kim; Rambal, Serge; Raschi, Antonio; Scott, Russell L; Seufert, Günther; Spano, Donatella; Stoy, Paul; Sutton, Mark A; Varlagin, Andrej; Vesala, Timo; Weng, Ensheng; Wohlfahrt, Georg; Yang, Bai; Zhang, Zhongda; Zhou, Xuhui

    2012-05-01

    • It is well established that individual organisms can acclimate and adapt to temperature to optimize their functioning. However, thermal optimization of ecosystems, as an assemblage of organisms, has not been examined at broad spatial and temporal scales. • Here, we compiled data from 169 globally distributed sites of eddy covariance and quantified the temperature response functions of net ecosystem exchange (NEE), an ecosystem-level property, to determine whether NEE shows thermal optimality and to explore the underlying mechanisms. • We found that the temperature response of NEE followed a peak curve, with the optimum temperature (corresponding to the maximum magnitude of NEE) being positively correlated with annual mean temperature over years and across sites. Shifts of the optimum temperature of NEE were mostly a result of temperature acclimation of gross primary productivity (upward shift of optimum temperature) rather than changes in the temperature sensitivity of ecosystem respiration. • Ecosystem-level thermal optimality is a newly revealed ecosystem property, presumably reflecting associated evolutionary adaptation of organisms within ecosystems, and has the potential to significantly regulate ecosystem-climate change feedbacks. The thermal optimality of NEE has implications for understanding fundamental properties of ecosystems in changing environments and benchmarking global models. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.
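The "peak curve" temperature response can be illustrated by fitting a quadratic to synthetic NEE-versus-temperature data and reading the optimum temperature off the vertex (synthetic data with hypothetical parameters, not the flux-tower measurements):

```python
import random

random.seed(1)

# Synthetic peaked response: NEE magnitude rises to a maximum near T_OPT_TRUE
# and falls off on either side, plus noise. Fit y = a*T^2 + b*T + c by least
# squares and estimate the optimum temperature as the vertex -b/(2a).

T_OPT_TRUE = 18.0
temps = [t * 0.5 for t in range(0, 71)]                  # 0..35 deg C
nee = [-(0.02 * (t - T_OPT_TRUE) ** 2) + 5.0 + random.gauss(0, 0.2)
       for t in temps]

def quad_fit(xs, ys):
    # Least-squares quadratic via the 3x3 normal equations (Cramer's rule).
    n = len(xs)
    s1 = sum(xs); s2 = sum(x * x for x in xs)
    s3 = sum(x ** 3 for x in xs); s4 = sum(x ** 4 for x in xs)
    sy = sum(ys); sxy = sum(x * y for x, y in zip(xs, ys))
    sx2y = sum(x * x * y for x, y in zip(xs, ys))
    A = [[s4, s3, s2], [s3, s2, s1], [s2, s1, n]]
    r = [sx2y, sxy, sy]

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det(A)

    def rep(col):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][col] = r[i]
        return m

    return det(rep(0)) / d, det(rep(1)) / d, det(rep(2)) / d

a, b, c = quad_fit(temps, nee)
t_opt = -b / (2 * a)
print(f"estimated optimum temperature ~ {t_opt:.1f} C")
```

The study's finding is precisely about how such fitted optima shift with annual mean temperature across sites.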

  6. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Han; Zheng, Shi-Lian; Yang, Xiao-Niu; Zhao, Zhi-Jin

    2016-12-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. Project supported by the National Natural Science Foundation of China (Grant No. 61501356), the Fundamental Research Funds of the Ministry of Education, China (Grant No. JB160101), and the Postdoctoral Fund of Shaanxi Province, China.
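A bare-bones biogeography-based optimization loop looks like the following (generic BBO on a toy cost function with illustrative parameters; the paper's habitat suitability index additionally encodes QoS constraints):

```python
import random

random.seed(2)

# Minimal BBO: habitats are candidate solutions; better habitats have higher
# emigration and lower immigration rates. Each generation, poorer habitats
# import solution features from better ones (blended migration, a common BBO
# variant), with occasional mutation. We minimize a toy "power consumption"
# surrogate: the sphere function.

def cost(x):
    return sum(v * v for v in x)

def bbo(dim=5, pop=20, gens=150, p_mut=0.05, lo=-5.0, hi=5.0):
    habs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    initial = min(cost(h) for h in habs)
    for _ in range(gens):
        habs.sort(key=cost)                                # index 0 = best
        mu = [(pop - i) / (pop + 1) for i in range(pop)]   # emigration rates
        lam = [1.0 - m for m in mu]                        # immigration rates
        new = [h[:] for h in habs]
        for i in range(1, pop):                            # keep elite habitat
            for d in range(dim):
                if random.random() < lam[i]:
                    donor = random.choices(range(pop), weights=mu)[0]
                    new[i][d] = 0.5 * (new[i][d] + habs[donor][d])
                if random.random() < p_mut:
                    new[i][d] = random.uniform(lo, hi)
        habs = new
    return min(habs, key=cost), initial

best, initial = bbo()
print(initial, "->", cost(best))
```

Because the elite habitat is carried over unchanged, the best cost never worsens; the paper layers its QoS-aware HSI on top of exactly this migration/mutation skeleton.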

  7. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) and minimum cost flow (MCF) models. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow at minimum cost. We also incorporate the Adaptive Weight Approach (AWA), which uses information from the current population to readjust weights and create search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
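The priority-based encoding can be illustrated on a toy directed network (hypothetical graph; the paper's models add flow and cost bookkeeping on top of this decoding): a chromosome assigns each node a priority, and a path is decoded by always stepping to the eligible successor with the highest priority.

```python
import random

random.seed(3)

# Toy digraph as adjacency lists (hypothetical example network).
GRAPH = {
    "s": ["a", "b"],
    "a": ["c", "t"],
    "b": ["c"],
    "c": ["t"],
    "t": [],
}

def decode(priority, src="s", dst="t"):
    """Decode a priority vector into a path: from the current node, move to
    the unvisited successor with the highest priority; fail on dead ends."""
    path, node, seen = [src], src, {src}
    while node != dst:
        cands = [n for n in GRAPH[node] if n not in seen]
        if not cands:
            return None                      # dead end: infeasible chromosome
        node = max(cands, key=lambda n: priority[n])
        seen.add(node)
        path.append(node)
    return path

chrom = {n: random.random() for n in GRAPH}   # one random chromosome
print(decode(chrom))
```

A GA would evolve a population of such priority vectors, scoring each decoded path on both criteria (flow and cost) and keeping the Pareto-efficient ones.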

  8. Optimal aggregation of binary classifiers for multiclass cancer diagnosis using gene expression profiles.

    PubMed

    Yukinawa, Naoto; Oba, Shigeyuki; Kato, Kikuya; Ishii, Shin

    2009-01-01

    Multiclass classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. There have been many studies of aggregating binary classifiers to construct a multiclass classifier based on one-versus-the-rest (1R), one-versus-one (11), or other coding strategies, as well as some comparison studies between them. However, the studies found that the best coding depends on each situation. Therefore, a new problem, which we call the "optimal coding problem," has arisen: how can we determine which coding is the optimal one in each situation? To approach this optimal coding problem, we propose a novel framework for constructing a multiclass classifier, in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. Although there is no a priori answer to the optimal coding problem, our weight tuning method can be a consistent answer to the problem. We apply this method to various classification problems including a synthesized data set and some cancer diagnosis data sets from gene expression profiling. The results demonstrate that, in most situations, our method can improve classification accuracy over simple voting heuristics and is better than or comparable to state-of-the-art multiclass predictors.
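Weighted aggregation of binary classifiers can be sketched as follows (a toy one-versus-rest decomposition with hand-set weights standing in for the data-driven weight tuning the paper proposes):

```python
# Toy 1R (one-versus-rest) aggregation for 3 classes. Each binary scorer
# returns a confidence that the sample belongs to "its" class rather than the
# rest; the multiclass decision is the argmax of weighted scores. The weights
# here are hand-set placeholders for the paper's optimally tuned values.

def make_scorer(center):
    # A stand-in binary classifier: higher score when x is near `center`.
    return lambda x: 1.0 / (1.0 + abs(x - center))

SCORERS = {0: make_scorer(0.0), 1: make_scorer(5.0), 2: make_scorer(10.0)}
WEIGHTS = {0: 1.0, 1: 1.2, 2: 0.8}       # hypothetical per-classifier weights

def predict(x):
    return max(SCORERS, key=lambda c: WEIGHTS[c] * SCORERS[c](x))

print([predict(x) for x in (0.4, 5.2, 9.7)])
```

The paper's contribution is choosing such weights from the observed data, which also subsumes the choice between 1R, one-versus-one, and other codings.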

  9. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  10. Optimal Control via Self-Generated Stochasticity

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    The problem of finding global maxima of functionals has been examined. The mathematical roots of local maxima are the same as those of the much simpler problem of finding the global maximum of a multi-dimensional function. The second problem is instability: even if an optimal trajectory is found, there is no guarantee that it is stable. As a result, a fundamentally new approach to optimal control is introduced, based upon two new ideas. The first idea is to represent the functional to be maximized as a limit of a probability density governed by an appropriately selected Liouville equation. The corresponding ordinary differential equations (ODEs) then become stochastic, and the sample of the solution that has the largest value will have the highest probability of appearing in an ODE simulation. The main advantages of the stochastic approach are that it is not sensitive to local maxima, the function to be maximized need only be integrable rather than differentiable, and global equality and inequality constraints do not cause any significant obstacles. The second idea is to remove possible instability of the optimal solution by equipping the control system with a self-stabilizing device. The proposed methodology can be applied to optimize the performance of NASA spacecraft, as well as robot performance.
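The first idea, letting samples with the largest values dominate the probability density, can be caricatured with a simple reweight-and-resample sketch (a generic stochastic search on a multimodal, non-differentiable function; not the Liouville-equation formulation itself, and all parameters are illustrative):

```python
import math
import random

random.seed(4)

# A multimodal, non-differentiable objective (kink at x = 2 plus a fast
# oscillation), with its global maximum near x ~ 1.8. The population is
# repeatedly reweighted toward high values and resampled with shrinking
# jitter, so probability mass concentrates near the global maximum; the best
# sample ever seen is retained.

def f(x):
    return -abs(x - 2.0) + 0.8 * math.sin(8.0 * x)

def stochastic_max(n=200, iters=60, sigma0=2.0):
    xs = [random.uniform(-10.0, 10.0) for _ in range(n)]
    best = max(xs, key=f)
    for t in range(iters):
        vals = [f(x) for x in xs]
        m = max(vals)
        weights = [math.exp(5.0 * (v - m)) for v in vals]  # favor high values
        sigma = sigma0 * (0.9 ** t)                        # shrinking jitter
        xs = [random.choices(xs, weights=weights)[0] + random.gauss(0.0, sigma)
              for _ in range(n)]
        cand = max(xs, key=f)
        if f(cand) > f(best):
            best = cand
    return best

best = stochastic_max()
print(best, f(best))
```

Note that `f` is not differentiable at the kink, yet the search needs only function evaluations, which is the advantage the abstract highlights.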

  11. The Configuration Process of a Community of Practice in the Collective Text Editor

    ERIC Educational Resources Information Center

    Zank, Cláudia; Behar, Patricia Alejandra

    2013-01-01

    The various tools available on Web 2.0 enable the interactions in a Community of Practice (CoP) to be optimized and may discourage the participation of members. Thus, the choice of the tools is fundamental for the growth and maintenance of a CoP. With a focus on this and, from the analysis of the characteristics of the group and the activities…

  12. Impact of Medium on the Development and Physiology of Pseudomonas fluorescens Biofilms on Polyurethane Paint

    DTIC Science & Technology

    2012-02-01

    including P. fluorescens are known to make several types of intracellular storage granules, including polyhydroxyalkanoates (PHAs), polyphosphates, and … polyhydroxyalkanoates in Pseudomonas putida KT2442 and the fundamental role of PhaZ depolymerase for the metabolic balance. Environ Microbiol. 12(1 … polyhydroxyalkanoates by bacteria. Biotechnol Lett. 11(7):471-476. Herigstad B, Hamilton M, Heersink J. 2001. How to optimize the drop plate method for

  13. Employment of Adaptive Learning Techniques for the Discrimination of Acoustic Emissions.

    DTIC Science & Technology

    1983-11-01

    Dereverberation Simulations … 4. Array Optimization … 4.1 Phased Array Fundamentals … 4.2 Phased Array Diffraction Suboptimization … 4.3 Diffraction Pattern Simulations of Phased Arrays … by differentiating (2.13.14) with respect to z and equating equal powers of z, giving the recurrence (2.13.15). This is very

  14. Aspects of job scheduling

    NASA Technical Reports Server (NTRS)

    Phillips, K.

    1976-01-01

    A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
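As a flavor of the combinatorial side, a classic load-partitioning heuristic (longest processing time first; a standard textbook heuristic shown purely as illustration, not the memo's specific algorithm) can be sketched:

```python
import heapq

# LPT (longest processing time first): sort jobs by decreasing duration and
# always assign the next job to the currently least-loaded machine. A standard
# makespan heuristic, illustrating the kind of partitioning subproblem that
# arises when spreading facility operations across resources.

def lpt_schedule(durations, n_machines):
    # Heap of (load, machine_id, assigned_jobs); machine_id breaks ties,
    # so the job lists are never compared.
    loads = [(0.0, m, []) for m in range(n_machines)]
    heapq.heapify(loads)
    for d in sorted(durations, reverse=True):
        load, m, jobs = heapq.heappop(loads)
        heapq.heappush(loads, (load + d, m, jobs + [d]))
    return loads

jobs = [7, 5, 4, 4, 3, 3, 2]
for load, m, assigned in sorted(lpt_schedule(jobs, 3), key=lambda t: t[1]):
    print(f"machine {m}: load={load} jobs={assigned}")
```

For exact optima on small instances one would switch to the linear-programming side of such a model; LPT trades optimality for speed.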

  15. The Physics of Ultrabroadband Frequency Comb Generation and Optimized Combs for Measurements in Fundamental Physics

    DTIC Science & Technology

    2016-07-02

    great potential of chalcogenide microwires for applications in the mid-IR, ranging from absorption spectroscopy to entangled photon pair generation … (modulation instability) gain. Stochastic nonlinear Schrödinger equation simulations were shown to be in very good agreement with experiment. This … as the seed coherence decreases. Stochastic nonlinear Schrödinger equation simulations of spectral and noise properties are in excellent agreement with

  16. Robotic reactions: delay-induced patterns in autonomous vehicle systems.

    PubMed

    Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco

    2010-02-01

    Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation.
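The kind of delay differential equation analyzed here can be explored numerically with a minimal optimal velocity model on a ring road (standard OV form with hypothetical parameters; not the paper's exact model or control gains):

```python
import math

# Optimal velocity (OV) car-following on a ring road: each car relaxes toward
# a headway-dependent desired speed, reacting to the headway it saw TAU
# seconds ago. Forward Euler with a history buffer implements the delay.
# All parameter values are illustrative.

N, L = 10, 100.0              # number of cars, ring length (m)
ALPHA, TAU = 1.0, 0.4         # relaxation gain (1/s), reaction delay (s)
DT = 0.01                     # time step (s)
DELAY_STEPS = int(TAU / DT)

def v_opt(h):
    # Desired speed: smooth, saturating function of headway h (m).
    c = math.tanh(1.5)
    return 15.0 * (math.tanh(0.1 * (h - 15.0)) + c) / (1.0 + c)

def headways(x):
    return [(x[(i + 1) % N] - x[i]) % L for i in range(N)]

def simulate(steps=5000):
    x = [i * L / N + (0.5 if i == 0 else 0.0) for i in range(N)]  # small kick
    v = [v_opt(L / N)] * N
    hist = [headways(x) for _ in range(DELAY_STEPS + 1)]  # delay buffer
    for _ in range(steps):
        h_delayed = hist.pop(0)
        v = [v[i] + DT * ALPHA * (v_opt(h_delayed[i]) - v[i]) for i in range(N)]
        x = [(x[i] + DT * v[i]) % L for i in range(N)]
        hist.append(headways(x))
    return v

v_final = simulate()
print(min(v_final), max(v_final))
```

Increasing TAU or ALPHA in such a model is how the delay/gain trade-off described above shows up: beyond a stability boundary the small kick grows into short-wavelength stop-and-go oscillations.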

  18. Analysis and Design of Complex Network Environments

    DTIC Science & Technology

    2012-03-01

    and J. Lowe, “The myths and facts behind cyber security risks for industrial control systems,” in the Proceedings of the VDE Kongress … questions about 1) how to model them, 2) the design of experiments necessary to discover their structure (and thus adapt system inputs to optimize the … theoretical work that clarifies fundamental limitations of complex networks with network engineering and systems biology to implement specific designs and

  19. Optimal Control of a Circular Satellite Formation Subject to Gravitational Perturbations

    DTIC Science & Technology

    2007-03-01

    fundamental reference in the study of the dynamics of close-proximity spacecraft is the paper by Clohessy and Wiltshire (5). In this work, the linear … dynamics for a satellite rendezvous problem are derived, which are now commonly known as either the Clohessy-Wiltshire (CW) equations or Hill’s … themselves to closed-form solutions, as did the Clohessy-Wiltshire development. When the nonlinear approach is undertaken, the numeric integration

  1. Condensation on Slippery Asymmetric Bumps

    NASA Astrophysics Data System (ADS)

    Park, Kyoo-Chul; Kim, Philseok; Aizenberg, Joanna

    2016-11-01

    Controlling dropwise condensation by designing surfaces that enable droplets to grow rapidly and be shed as quickly as possible is fundamental to water harvesting systems, thermal power generation, distillation towers, etc. However, cutting-edge approaches based on micro/nanoscale textures suffer from intrinsic trade-offs that make it difficult to optimize both growth and transport at once. Here we present a conceptually different design approach based on principles derived from Namib desert beetles, cacti, and pitcher plants that synergistically couples both aspects of condensation and outperforms other synthetic surfaces. Inspired by an unconventional interpretation of the role of the beetle's bump geometry in promoting condensation, we show how to maximize vapor diffusion flux at the apex of convex millimetric bumps by optimizing curvature and shape. Integrating this apex geometry with a widening slope analogous to cactus spines couples rapid drop growth with fast directional transport, by creating a free energy profile that drives the drop down the slope. This coupling is further enhanced by a slippery, pitcher plant-inspired coating that facilitates feedback between coalescence-driven growth and capillary-driven motion. We further observe an unprecedented six-fold higher exponent in growth rate and much faster shedding time compared to other surfaces. We envision that our fundamental understanding and rational design strategy can be applied to a wide range of phase change applications.

  2. Diagnostic and interventional musculoskeletal ultrasound: part 1. Fundamentals.

    PubMed

    Smith, Jay; Finnoff, Jonathan T

    2009-01-01

    Musculoskeletal ultrasound involves the use of high-frequency sound waves to image soft tissues and bony structures in the body for the purposes of diagnosing pathology or guiding real-time interventional procedures. Recently, an increasing number of physicians have integrated musculoskeletal ultrasound into their practices to facilitate patient care. Technological advancements, improved portability, and reduced costs continue to drive the proliferation of ultrasound in clinical medicine. This increased interest creates a need for education pertaining to all aspects of musculoskeletal ultrasound. The primary purpose of this article is to review diagnostic ultrasound technology and its potential clinical applications in the evaluation and treatment of patients with neurologic and musculoskeletal disorders. After reviewing this article, physicians should be able to (1) list the advantages and disadvantages of ultrasound compared with other available imaging modalities, (2) describe how ultrasound machines produce images using sound waves, (3) discuss the steps necessary to acquire and optimize an ultrasound image, (4) understand the different ultrasound appearances of tendons, nerves, muscles, ligaments, blood vessels, and bones, and (5) identify multiple applications for diagnostic and interventional musculoskeletal ultrasound in musculoskeletal practice. Part 1 of this 2-part article reviews the fundamentals of clinical ultrasonographic imaging, including relevant physics, equipment, training, image optimization, and scanning principles for diagnostic and interventional purposes.

  3. Monogamy relation in multipartite continuous-variable quantum teleportation

    NASA Astrophysics Data System (ADS)

    Lee, Jaehak; Ji, Se-Wan; Park, Jiyong; Nha, Hyunchul

    2016-12-01

    Quantum teleportation (QT) is a fundamentally remarkable communication protocol that also finds many important applications for quantum informatics. Given a quantum entangled resource, it is crucial to know to what extent one can accomplish the QT. This is usually assessed in terms of output fidelity, which can also be regarded as an operational measure of entanglement. In the case of multipartite communication when each communicator possesses a part of an N -partite entangled state, not all pairs of communicators can achieve a high fidelity due to the monogamy property of quantum entanglement. We here investigate how such a monogamy relation arises in multipartite continuous-variable (CV) teleportation, particularly when using a Gaussian entangled state. We show a strict monogamy relation, i.e., a sender cannot achieve a fidelity higher than optimal cloning limit with more than one receiver. While this seems rather natural owing to the no-cloning theorem, a strict monogamy relation still holds even if the sender is allowed to individually manipulate the reduced state in collaboration with each receiver to improve fidelity. The local operations are further extended to non-Gaussian operations such as photon subtraction and addition, and we demonstrate that the Gaussian cloning bound cannot be beaten by more than one pair of communicators. Furthermore, we investigate a quantitative form of monogamy relation in terms of teleportation capability, for which we show that a faithful monogamy inequality does not exist.
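For context, the "optimal cloning limit" for coherent-state CV teleportation has standard benchmark values (well known in the literature, not derived in this abstract): classical measure-and-prepare strategies are capped at average fidelity 1/2, and optimal Gaussian 1 → 2 cloning achieves 2/3. The monogamy statement can then be written compactly:

```latex
% Standard coherent-state teleportation benchmarks (not from this abstract):
F_{\mathrm{classical}} = \tfrac{1}{2}, \qquad F_{\mathrm{clone}} = \tfrac{2}{3}.
% Monogamy: with an N-partite Gaussian resource shared between sender A and
% receivers B_1, \ldots, B_{N-1}, the cloning limit can be beaten with at
% most one receiver:
F_{A \to B_i} > \tfrac{2}{3} \quad \text{for at most one index } i.
```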

  4. Graph rigidity, cyclic belief propagation, and point pattern matching.

    PubMed

    McAuley, Julian J; Caetano, Tibério S; Barbosa, Marconi S

    2008-11-01

    A recent paper [1] proposed a provably optimal polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph that is also globally rigid but has an advantage over the graph proposed in [1]: Its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus, standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that in [1] when there is noise in the point patterns.
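The convergence claim can be illustrated in miniature with loopy max-product belief propagation on the smallest loopy graph, a 3-node cycle (a toy MRF with hypothetical potentials; the paper runs BP over a much larger globally rigid correspondence graph):

```python
import itertools

# Loopy max-product BP on a 3-node cycle MRF with binary variables. Messages
# are iterated to a fixed point and the decoded assignment is compared with
# brute force over all assignments. Potentials are illustrative.

NODES = [0, 1, 2]
EDGES = [(0, 1), (1, 2), (0, 2)]
PHI = {0: [1.0, 2.0], 1: [1.5, 1.0], 2: [1.0, 1.0]}     # unary potentials
PSI = {e: [[2.0, 1.0], [1.0, 2.0]] for e in EDGES}      # favor agreement

def prod(it):
    p = 1.0
    for v in it:
        p *= v
    return p

def neighbors(i):
    return [j for e in EDGES if i in e for j in e if j != i]

def psi(i, j, xi, xj):
    return PSI[(i, j)][xi][xj] if (i, j) in PSI else PSI[(j, i)][xj][xi]

def loopy_bp(iters=20):
    msg = {(i, j): [0.5, 0.5] for i in NODES for j in neighbors(i)}
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            out = [max(PHI[i][xi] * psi(i, j, xi, xj)
                       * prod(msg[(k, i)][xi] for k in neighbors(i) if k != j)
                       for xi in (0, 1))
                   for xj in (0, 1)]
            z = sum(out)
            new[(i, j)] = [v / z for v in out]            # normalize
        msg = new
    belief = {i: [PHI[i][x] * prod(msg[(k, i)][x] for k in neighbors(i))
                  for x in (0, 1)] for i in NODES}
    return [max((0, 1), key=lambda x: belief[i][x]) for i in NODES]

def brute_force():
    def score(a):
        s = prod(PHI[i][a[i]] for i in NODES)
        return s * prod(psi(i, j, a[i], a[j]) for (i, j) in EDGES)
    return list(max(itertools.product((0, 1), repeat=3), key=score))

print(loopy_bp(), brute_force())
```

On this tiny attractive model the loopy decoding agrees with the exact MAP assignment; the paper's contribution is proving such agreement for its specific non-chordal matching graph.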

  5. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
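
    The hybrid described above can be sketched in a few lines. The following is a minimal illustration on a stand-in sphere objective; the `sphere` function, population size, and all parameter values are assumptions for demonstration, not the paper's settings, and in practice the objective would be the model-fit error against experimental data.

```python
import math
import random

random.seed(7)

def sphere(x):
    # Stand-in objective; in a real application this is the model-fit error.
    return sum(v * v for v in x)

DIM, POP, ITERS = 2, 20, 120
LO, HI = -5.0, 5.0
BETA0, GAMMA, ALPHA = 1.0, 1.0, 0.2   # firefly attractiveness/randomization
F, CR = 0.5, 0.9                      # DE mutation factor and crossover rate

pop = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(POP)]
fit = [sphere(x) for x in pop]

for _ in range(ITERS):
    # Firefly phase: each firefly moves toward every brighter (fitter) one.
    for i in range(POP):
        for j in range(POP):
            if fit[j] < fit[i]:
                d2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = BETA0 * math.exp(-GAMMA * d2)
                pop[i] = [a + beta * (b - a) + ALPHA * (random.random() - 0.5)
                          for a, b in zip(pop[i], pop[j])]
        fit[i] = sphere(pop[i])
    # DE phase (rand/1/bin): evolutionary neighbourhood search, greedy accept.
    for i in range(POP):
        r1, r2, r3 = random.sample([k for k in range(POP) if k != i], 3)
        trial = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                 if random.random() < CR else pop[i][d]
                 for d in range(DIM)]
        if sphere(trial) < fit[i]:
            pop[i], fit[i] = trial, sphere(trial)

best = min(fit)
```

    The greedy acceptance in the DE phase means the best fitness never worsens, which is the sense in which the evolutionary operators "improve solutions by neighbourhood search".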

  6. An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172

  7. Vibrational self-consistent field theory using optimized curvilinear coordinates.

    PubMed

    Bulik, Ireneusz W; Frisch, Michael J; Vaccaro, Patrick H

    2017-07-28

    A vibrational SCF model is presented in which the single-mode functions in the product wavefunction are expressed in terms of internal coordinates, and the coordinates used for each mode are optimized variationally. This model involves no approximations to the kinetic energy operator and does not require a Taylor-series expansion of the potential. The non-linear optimization of coordinates is found to give much better product wavefunctions than the limited variations considered in most previous applications of SCF methods to vibrational problems. The approach is tested using published potential energy surfaces for water, ammonia, and formaldehyde. The variational flexibility allowed in the current ansätze results in excellent zero-point energies expressed through single-product states and accurate fundamental transition frequencies realized by short configuration-interaction expansions. Fully variational optimization of single-product states for excited vibrational levels is also discussed. The highlighted methodology constitutes an excellent starting point for more sophisticated treatments, as the bulk characteristics of many-mode coupling are accounted for efficiently in terms of compact wavefunctions (as evident from the accurate prediction of transition frequencies).

  8. A fundamental study of laser-induced breakdown spectroscopy using fiber optics for remote measurements of trace metals. Interim progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goode, S.R.; Angel, S.M.

    1997-01-01

    The long-term goal of this project is to develop a system to measure the elemental composition of unprepared samples using laser-induced breakdown spectroscopy (LIBS) with a fiber-optic probe. The images shown in this report make it evident that the temporal and spatial behavior of laser-induced plasmas is a complex process. However, through the use of spectral imaging, optimal conditions can be determined for collecting the atomic emission signal in these plasmas. By tailoring signal collection to the regions of the plasma that contain the highest emission signal with the least background interference, both the detection limits and the precision of LIBS measurements could be improved. The optimal regions for gated and, possibly, non-gated LIBS measurements have been shown to correspond to the inner and outer regions, respectively, of an axial plasma. Using these data, fiber-optic LIBS probe designs can be optimized to collect plasma emission at the optimal regions for improved detection limits and precision.

  9. A global optimization perspective on molecular clusters.

    PubMed

    Marques, J M C; Pereira, F B; Llanio-Trujillo, J L; Abreu, P E; Albertí, M; Aguilar, A; Pirani, F; Bartolomei, M

    2017-04-28

    Although there is a long history behind the idea of chemical structure, it is a key concept that continues to challenge chemists. Chemical structure is fundamental to understanding most of the properties of matter, and determining it for complex systems requires the use of state-of-the-art techniques, either experimental or theoretical. From the theoretical viewpoint, one needs to establish the interaction potential among the atoms or molecules of the system, which contains all the information regarding the energy landscape, and employ optimization algorithms to discover the relevant stationary points. In particular, global optimization methods are of major importance in the search for the low-energy structures of molecular aggregates. We review the application of global optimization techniques to several molecular clusters; some new results are also reported. Emphasis is given to evolutionary algorithms and their application to the study of the microsolvation of alkali-metal and Ca2+ ions with various types of solvents. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).
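
    The kind of search described above can be sketched on the smallest interesting case: a simple stochastic-descent evolution strategy on a 4-atom Lennard-Jones cluster, whose global minimum (the regular tetrahedron) has energy -6 in reduced units. The (1+1)-ES below is a deliberately minimal stand-in for the evolutionary algorithms the review covers; all parameter values are illustrative assumptions.

```python
import math
import random

random.seed(3)

def lj_energy(xyz):
    # Total Lennard-Jones energy (epsilon = sigma = 1) of a list of atoms.
    e = 0.0
    n = len(xyz)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((xyz[i][k] - xyz[j][k]) ** 2 for k in range(3))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def mutate(xyz, sigma):
    # Gaussian perturbation of every coordinate.
    return [[c + random.gauss(0.0, sigma) for c in atom] for atom in xyz]

# (1+1) evolution strategy with a slowly decaying mutation step.
best = [[random.uniform(0.0, 1.2) for _ in range(3)] for _ in range(4)]
best_e = lj_energy(best)
sigma = 0.1
for _ in range(4000):
    child = mutate(best, sigma)
    e = lj_energy(child)
    if e < best_e:
        best, best_e = child, e
    sigma = max(0.005, sigma * 0.999)
```

    Real cluster studies replace this bare mutation-selection loop with population-based evolutionary operators and local relaxation, which is precisely what makes them effective on the rugged landscapes of larger aggregates.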

  10. Uncertainty in the analysis of the overall equipment effectiveness on the shop floor

    NASA Astrophysics Data System (ADS)

    Rößler, M. P.; Abele, E.

    2013-06-01

    This article presents an approach that improves transparency regarding the effectiveness of manufacturing equipment by combining fuzzy set theory with overall equipment effectiveness (OEE) analysis. One of the key principles of lean production, and a fundamental task in production optimization projects, is the prior analysis of the current state of a production system using key performance indicators in order to derive possible future states. The current state of the art in OEE analysis is to cumulate different machine states by means of decentralized data collection, without considering uncertainty. With manual or semi-automated plant data collection systems, the quality of the derived data often varies, leading optimization teams to distorted conclusions about the real optimization potential of manufacturing equipment. The method discussed in this paper helps practitioners obtain more reliable results in the analysis phase and thus achieve better outcomes in optimization projects. Results obtained from a case study are discussed.
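
    The combination can be sketched with triangular fuzzy numbers (lo, most-likely, hi). OEE is the product availability × performance × quality; for positive triangular fuzzy numbers the component-wise product is the standard approximation. The input readings below are illustrative values, not from the paper.

```python
def tfn_mul(x, y):
    # Approximate product of positive triangular fuzzy numbers (lo, mode, hi).
    return (x[0] * y[0], x[1] * y[1], x[2] * y[2])

# Uncertain shop-floor readings (illustrative):
availability = (0.80, 0.85, 0.90)
performance = (0.70, 0.78, 0.85)
quality = (0.95, 0.97, 0.99)

# OEE = A * P * Q, carried through as a fuzzy interval instead of one number.
oee = tfn_mul(tfn_mul(availability, performance), quality)
```

    Reporting the whole triple instead of a single crisp OEE value is what keeps the data-collection uncertainty visible to the optimization team.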

  11. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
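
    The setup described above can be sketched with a toy finite-horizon dynamic program: fault growth driven by a uniform dice throw, load following a symmetric random walk, and a choice between derated and full operation. All state spaces, rewards, and transition rules below are illustrative assumptions, not the paper's model.

```python
# States: fault size 0..10 (fault >= 10 is an absorbing failure) crossed with
# a load level 0..4 that follows a symmetric random walk.
# Actions: 0 = derated operation, 1 = full operation (more reward, but fault
# growth is scaled by a uniform "dice" throw 1..6).
FAULT_MAX, LOAD_MAX, T = 10, 4, 12

def growth(action, load, dice):
    return action * (1 if dice >= 4 else 0) + (1 if load >= 3 else 0)

def reward(action, fault):
    return 0.0 if fault >= FAULT_MAX else 0.5 + action

V = {(f, l): 0.0 for f in range(FAULT_MAX + 1) for l in range(LOAD_MAX + 1)}
policy = {}
for t in reversed(range(T)):                     # backward induction
    newV = {}
    for f in range(FAULT_MAX + 1):
        for l in range(LOAD_MAX + 1):
            best, best_a = float("-inf"), 0
            for a in (0, 1):
                if f >= FAULT_MAX:
                    q = 0.0                      # failed: no further reward
                else:
                    q = reward(a, f)
                    for dice in range(1, 7):     # fault-physics process
                        f2 = min(FAULT_MAX, f + growth(a, l, dice))
                        for dl in (-1, 1):       # load random-walk process
                            l2 = min(LOAD_MAX, max(0, l + dl))
                            q += (1.0 / 6.0) * 0.5 * V[(f2, l2)]
                if q > best:
                    best, best_a = q, a
            newV[(f, l)] = best
            policy[(t, f, l)] = best_a
    V = newV
```

    Because both stochastic processes are fully observable here, the resulting policy is the benchmark against which optimization under modeling uncertainty would be compared: a healthy, lightly loaded system is worth strictly more than a nearly failed, heavily loaded one.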

  12. A global optimization perspective on molecular clusters

    PubMed Central

    Pereira, F. B.; Llanio-Trujillo, J. L.; Abreu, P. E.; Albertí, M.; Aguilar, A.; Pirani, F.; Bartolomei, M.

    2017-01-01

    Although there is a long history behind the idea of chemical structure, it is a key concept that continues to challenge chemists. Chemical structure is fundamental to understanding most of the properties of matter, and determining it for complex systems requires the use of state-of-the-art techniques, either experimental or theoretical. From the theoretical viewpoint, one needs to establish the interaction potential among the atoms or molecules of the system, which contains all the information regarding the energy landscape, and employ optimization algorithms to discover the relevant stationary points. In particular, global optimization methods are of major importance in the search for the low-energy structures of molecular aggregates. We review the application of global optimization techniques to several molecular clusters; some new results are also reported. Emphasis is given to evolutionary algorithms and their application to the study of the microsolvation of alkali-metal and Ca2+ ions with various types of solvents. This article is part of the themed issue ‘Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces’. PMID:28320902

  13. Multi-dimensional optimization of a terawatt seeded tapered Free Electron Laser with a Multi-Objective Genetic Algorithm

    DOE PAGES

    Wu, Juhao; Hu, Newman; Setiawan, Hananiel; ...

    2016-11-20

    There is great interest in generating high-power hard-X-ray Free Electron Laser (FEL) pulses at the terawatt (TW) level, which can enable coherent diffraction imaging of complex molecules like proteins and probe fundamental high-field physics. A feasibility study of producing such X-ray pulses was carried out in this paper, employing a configuration beginning with a Self-Amplified Spontaneous Emission FEL, followed by a “self-seeding” crystal monochromator generating a fully coherent seed, and finishing with a long tapered undulator in which the coherent seed recombines with the electron bunch and is amplified to high power. The undulator tapering profile, the phase advance in the undulator break sections, the quadrupole focusing strength, etc., are parameters to be optimized. A Genetic Algorithm (GA) is adopted for this multi-dimensional optimization. Concrete examples are given for LINAC Coherent Light Source (LCLS) and LCLS-II-type systems. Finally, an analytical estimate is also developed to cross-check the simulation and optimization results as a quick and complementary tool.
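
    A core ingredient of any multi-objective GA is extracting the non-dominated (Pareto) front from a candidate population. The sketch below shows that step on hypothetical two-objective values (both to be minimized, e.g. negated pulse energy versus bandwidth for candidate taper profiles); the candidate numbers are made up for illustration.

```python
def dominates(p, q):
    # p dominates q if it is no worse in every objective and strictly
    # better in at least one (minimization convention).
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    # Keep every point not dominated by any other point.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (objective1, objective2) pairs for candidate configurations:
cands = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(cands)
```

    A full multi-objective GA alternates this front extraction with selection, crossover, and mutation; the front itself is what lets the designer trade off the competing FEL objectives rather than collapsing them into a single score.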

  14. Temporal Changes in Education Gradients of ‘Preventable’ Mortality: A Test of Fundamental Cause Theory

    PubMed Central

    Masters, Ryan K.; Link, Bruce G.; Phelan, Jo C.

    2015-01-01

    Fundamental cause theory explains persisting associations between socioeconomic status and mortality in terms of personal resources such as knowledge, money, power, prestige, and social connections, as well as disparate social contexts related to these resources. We review evidence concerning fundamental cause theory and test three central claims using the National Health Interview Survey Linked Mortality Files 1986-2004. We then examine cohort-based variation in the associations between a fundamental social cause of disease, educational attainment, and mortality rates from heart disease, other “preventable” causes of death, and less preventable causes of death. We further explore race/ethnic and gender variation in these associations. Overall, findings are consistent with nearly all features of fundamental cause theory. Results show, first, larger education gradients in mortality risk for causes of death that are under greater human control than for less preventable causes of death, and, second, that these gradients grew more rapidly across successive cohorts than gradients for less preventable causes. Results also show that relative sizes and cohort-based changes in the education gradients vary substantially by race/ethnicity and gender. PMID:25556675

  15. Second-harmonic generation of a dual-frequency laser in a MgO:PPLN crystal.

    PubMed

    Kang, Ying; Yang, Suhui; Brunel, Marc; Cheng, Lijun; Zhao, Changming; Zhang, Haiyang

    2017-04-10

    A dual-frequency CW laser at a wavelength of 1.064 μm is frequency doubled in a MgO:PPLN nonlinear crystal. The fundamental dual-frequency laser has a beat note tunable from 125 MHz to 175 MHz. A laser-diode-pumped fiber amplifier is used to amplify the dual-frequency fundamental output to a maximum power of 50 W before frequency doubling. The maximum output power of the green light is 1.75 W when the input fundamental power is 12 W, corresponding to a frequency-doubling efficiency of 14.6%. After frequency doubling, green light with modulation frequencies in two bands, from 125 MHz to 175 MHz and from 250 MHz to 350 MHz, is achieved simultaneously. The relative intensities of the beat notes in the two bands can be adjusted by changing the relative intensities at the different frequencies of the fundamental light. The spectral widths and frequency stabilities of the beat notes in the fundamental wave and in the green light are also measured. The modulated green light has potential applications in underwater ranging, communication, and imaging.
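
    Why the green light carries beat notes in two bands follows from envelope algebra: for a dual-frequency envelope A(t) = 1 + exp(i·2π·df·t), the fundamental photocurrent ~ |A|² = 2 + 2cos(2π·df·t) beats only at df, while the second-harmonic field ~ A², so the green photocurrent ~ |A|⁴ = 6 + 8cos(2π·df·t) + 2cos(4π·df·t) beats at both df and 2·df. The numerical check below uses df = 150 MHz (the band center from the abstract); equal-amplitude components and all other details are illustrative assumptions.

```python
import math

# Slowly varying envelope of the dual-frequency fundamental:
# A(t) = 1 + exp(i*2*pi*df*t), with beat frequency df = 150 MHz.
df = 150e6
N = 4096
T = 1.0 / df                        # one beat period keeps projections exact
ts = [T * k / N for k in range(N)]

def green(t):
    # SH field ~ A^2, so the green photocurrent ~ |A^2|^2 = |A|^4.
    a = complex(1.0, 0.0) + complex(math.cos(2 * math.pi * df * t),
                                    math.sin(2 * math.pi * df * t))
    return abs(a) ** 4

def tone_amp(sig, f):
    # Amplitude of the cosine component at frequency f (Fourier projection).
    c = sum(s * math.cos(2 * math.pi * f * t) for s, t in zip(sig, ts))
    return 2.0 * c / len(sig)

sig = [green(t) for t in ts]
a_df = tone_amp(sig, df)       # beat at df (125-175 MHz band)
a_2df = tone_amp(sig, 2 * df)  # beat at 2*df (250-350 MHz band)
```

    Tuning the fundamental beat note across 125-175 MHz therefore sweeps the two green bands across 125-175 MHz and 250-350 MHz simultaneously, as reported.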

  16. Asset price and trade volume relation in artificial market impacted by value investors

    NASA Astrophysics Data System (ADS)

    Tangmongkollert, K.; Suwanna, S.

    2016-05-01

    The relationship between return and trade volume has been of great interest in financial markets. The appearance of asymmetry in the price-volume relation in bull and bear markets is still unsettled. We present a model of value-investor traders (VIs) in a double-auction system, in which agents make trading decisions based on a pseudo fundamental price modelled by sawtooth oscillations. We investigate the system with two different time series for the asset's fundamental price: one corresponding to the fundamental price in a growing phase, and the other to that in a declining phase. The simulation results show that the trade volume is proportional to the difference between the market price and the fundamental price, and that there is asymmetry between the buying and selling phases. Furthermore, the price has a more significant impact on the trade volume in the selling phase than in the buying phase.

  17. Thermoelectric Properties of 2D Ni 3(HITP) 2 and 3D Cu 3(BTC) 2 MOFs: First-Principles Studies

    DOE PAGES

    He, Yuping; Talin, A. Alec; Allendorf, Mark D.

    2017-08-08

    Metal-organic frameworks (MOFs) have recently attracted great attention for thermoelectric (TE) applications, owing to their intrinsically low thermal conductivity, but their TE efficiencies are still low due to poor electronic transport properties. Various synthetic strategies have been designed to optimize the electronic properties of MOFs. Using a series of first-principles calculations and band theory, we explore the effect of structural topology and redox matching between the metal and coordinated atoms on the TE transport properties. The presented results provide fundamental guidance for optimizing electronic charge transport in existing MOFs and for designing yet-to-be-discovered conductive MOFs for thermoelectric applications.

  18. Development of conductometric biosensors based on alkaline phosphatases for the water quality control

    NASA Astrophysics Data System (ADS)

    Berezhetskyy, A.

    2008-09-01

    This research focuses on the elaboration of an enzymatic microconductometric device for the detection of heavy-metal ions in aqueous solutions. The manuscript comprises a general introduction; a first chapter containing a bibliographic review; a second chapter describing the fundamentals of conductometric transducers; a third chapter examining the possibility of creating and optimizing a conductometric biosensor based on bovine alkaline phosphatase for heavy-metal ion detection; a fourth chapter devoted to the creation and optimization of a conductometric biosensor based on alkaline-phosphatase-active microalgae and sol-gel technology; a last chapter describing the application of the proposed algal biosensor to measuring the heavy-metal toxicity of waste water; and general conclusions stating the progress achieved in the field of environmental monitoring.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spong, D.A.

    The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high-performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency, can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood.

  20. A controls engineering approach for analyzing airplane input-output characteristics

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
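
    The steady-state response computation mentioned above can be sketched for a toy system: for a stable linear system xdot = A x + B u, y = C x, setting xdot = 0 under a constant input gives y_ss = -C A⁻¹ B u. The two-state diagonal A below is an illustrative stand-in (each state is then one fundamental mode), not the airplane model from the paper.

```python
# Toy stable two-mode system (diagonal A, so states = modes):
A = [[-1.0, 0.0],
     [0.0, -2.0]]
B = [1.0, 1.0]
C = [1.0, 1.0]

# Inverse of the 2x2 matrix A.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

# Steady state for a unit step input: x_ss = -A^{-1} B, y_ss = C x_ss.
x_ss = [-(Ainv[0][0] * B[0] + Ainv[0][1] * B[1]),
        -(Ainv[1][0] * B[0] + Ainv[1][1] * B[1])]
y_ss = C[0] * x_ss[0] + C[1] * x_ss[1]
```

    Because A is diagonal here, x_ss reads off each mode's steady-state contribution directly (1.0 from the slow mode, 0.5 from the fast one); the paper's technique blends multiple control inputs to shape exactly this kind of steady-state output.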

Top