Sample records for efficient formal optimization

  1. Improving the efficiency of single and multiple teleportation protocols based on the direct use of partially entangled states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br

We push the limits of the direct use of partially pure entangled states to perform quantum teleportation by presenting several protocols in many different scenarios that achieve the optimal efficiency possible. We review and put in a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols here developed achieve such a bound. Highlights: •Optimal direct teleportation protocols using directly partially entangled states. •We put in a single formalism all strategies of direct teleportation. •We extend these techniques to multipartite partially entangled states. •We give upper bounds for the optimal efficiency of these protocols.

  2. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In the H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix; as in previous studies, the optimal α parameter in the SCA was ˜0.31. Importantly, a poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with the deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
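
    A deterministic way to see the action orders at work, without full PIMC sampling, is numerical multiplication of short-time density matrices on a grid. The sketch below is our illustration, not the authors' code (ħ = m = ω = 1, 1D harmonic oscillator); it compares the convergence of the partition function under the primitive splitting and the Takahashi-Imada effective potential:

        import numpy as np

        def Z_approx(beta, P, higher_order=False, L=12.0, n=400):
            """Trace of the P-fold product of short-time density matrices."""
            x = np.linspace(-L, L, n)
            dx = x[1] - x[0]
            tau = beta / P
            V = 0.5 * x**2                      # harmonic potential
            if higher_order:                    # Takahashi-Imada: V + tau^2/(24 m) |V'|^2
                V = V + tau**2 / 24.0 * x**2    # |V'|^2 = x^2 here
            # exact free-particle propagator between grid points
            K = np.sqrt(1.0 / (2 * np.pi * tau)) * \
                np.exp(-(x[:, None] - x[None, :])**2 / (2 * tau))
            D = np.exp(-0.5 * tau * V)
            rho = (D[:, None] * K * D[None, :]) * dx   # symmetric split + quadrature weight
            return np.trace(np.linalg.matrix_power(rho, P))

        beta = 10.0                              # strongly quantum regime
        Z_exact = 1.0 / (2.0 * np.sinh(beta / 2))
        for P in (4, 8, 16, 32):
            err_pa = abs(Z_approx(beta, P) - Z_exact)
            err_ti = abs(Z_approx(beta, P, higher_order=True) - Z_exact)
            print(f"P={P:3d}  PA error {err_pa:.2e}  TIA error {err_ti:.2e}")

    As P grows, the Takahashi-Imada error should fall off roughly two powers of the time step faster than the primitive one; tuning the free SCA/CA parameters, as the paper does, pushes the prefactor down further.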

  3. General Formalism of Mass Scaling Approach for Replica-Exchange Molecular Dynamics and its Application

    NASA Astrophysics Data System (ADS)

    Nagai, Tetsuro

    2017-01-01

Replica-exchange molecular dynamics (REMD) has demonstrated its efficiency by combining trajectories over a wide range of temperatures. As an extension of the method, the author formalizes the mass-manipulating replica-exchange molecular dynamics (MMREMD) method, which allows for arbitrary mass scaling with respect to temperature and individual particles. The formalism enables the versatile application of mass-scaling approaches to the REMD method. The key change introduced in the novel formalism is the generalized rules for velocity and momentum scaling after accepted replica-exchange attempts. As an application of this general formalism, a refinement of the viscosity-REMD (V-REMD) method [P. H. Nguyen, J. Chem. Phys. 132, 144109 (2010)] is presented. Numerical results are provided using a pilot system, demonstrating easier and better-optimized application of the new version of V-REMD as well as the importance of adherence to the generalized velocity scaling rules. With the new formalism, sounder and more efficient simulations can be performed.
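
    The core bookkeeping is the post-exchange rescaling. A minimal sketch of one natural generalization (our reading of "generalized velocity scaling rules", not necessarily the paper's exact prescription): requiring each particle's kinetic energy distribution to match its new temperature and new mass gives v' = v·sqrt(T_new·m_old / (T_old·m_new)).

        import numpy as np

        def rescale_velocities(v, m_old, m_new, T_old, T_new):
            """Map velocities so that m_new * v'^2 / T_new is distributed like
            m_old * v^2 / T_old (assumed generalization of REMD rescaling)."""
            return v * np.sqrt((T_new * m_old) / (T_old * m_new))

        rng = np.random.default_rng(0)
        kB, T_old, T_new = 1.0, 300.0, 400.0
        m_old, m_new = 1.0, 2.0
        v = rng.normal(0.0, np.sqrt(kB * T_old / m_old), size=100_000)
        v2 = rescale_velocities(v, m_old, m_new, T_old, T_new)
        # kinetic-temperature check: <m v'^2> / kB should be ~T_new
        print(np.mean(m_new * v2**2) / kB)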

  4. Optimization through satisficing with prospects

    NASA Astrophysics Data System (ADS)

    Oyo, Kuratomo; Takahashi, Tatsuji

    2017-07-01

As the broadening scope of reinforcement learning calls for rational and more efficient heuristics, we test a satisficing strategy named RS, based on the theory of bounded rationality, which considers the limited resources of agents. In K-armed bandit problems, despite its simpler form than previous formalizations of satisficing, RS shows better-than-optimal performance when the optimal aspiration level is given. We also show that RS scales well with the number of actions, K, and adapts even in the face of an infinite number of actions. It may be an efficient means for online learning in complex or real environments.
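
    As a concrete illustration, here is a minimal satisficing bandit in this spirit (our sketch, not the authors' exact RS definition): each arm is scored by its visit count times the gap between its running mean and the aspiration level, so above-aspiration arms are reinforced and below-aspiration arms are progressively abandoned.

        import numpy as np

        def satisficing_bandit(p_true, aleph, steps=5000, seed=0):
            """K-armed Bernoulli bandit driven by a satisficing score
            score_i = n_i * (mean_i - aleph)   (sketch of an RS-style rule)."""
            rng = np.random.default_rng(seed)
            K = len(p_true)
            n = np.zeros(K)
            mean = np.zeros(K)
            total = 0.0
            for _ in range(steps):
                score = n * (mean - aleph)
                i = int(np.argmax(score + 1e-9 * rng.random(K)))  # random tie-break
                r = float(rng.random() < p_true[i])
                n[i] += 1
                mean[i] += (r - mean[i]) / n[i]
                total += r
            return total / steps, n

        # aspiration set between the best and second-best arm (the "optimal" level)
        avg, visits = satisficing_bandit([0.2, 0.5, 0.6], aleph=0.55)
        print(avg, visits)   # pulls should concentrate on the 0.6 arm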

  5. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description, and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. Any sophisticated application needing effective, robust capture and parameterization of object geometric/colour invariant attributes for reliable automated object learning and discrimination can benefit from the GEOGINE progressive automated model generation computational kernel. Main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, 3) Arbitrary Model Precision for robust object description and identification.

  6. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippi, Claudia, E-mail: c.filippi@utwente.nl; Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr; Moroni, Saverio, E-mail: moroni@democritos.it

    2016-05-21

We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.

  7. Unified theory for inhomogeneous thermoelectric generators and coolers including multistage devices.

    PubMed

    Gerstenmaier, York Christian; Wachutka, Gerhard

    2012-11-01

A novel generalized Lagrange multiplier method for functional optimization with inclusion of subsidiary conditions is presented and applied to the optimization of material distributions in thermoelectric (TE) converters. Multistage devices are treated within the same formalism by inclusion of a position-dependent electric current in the legs, leading to a modified thermoelectric equation. Previous analytical solutions for maximized efficiencies of generators and coolers obtained by Sherman [J. Appl. Phys. 31, 1 (1960)], Snyder [Phys. Rev. B 86, 045202 (2012)], and Seifert et al. [Phys. Status Solidi A 207, 760 (2010)] by a method of local optimization of reduced efficiencies are recovered by independent proof. The outstanding maximization problems for generated electric power and cooling power can be solved swiftly by numerical solution of a differential equation system obtained within the new formalism. As far as suitable materials are available, inhomogeneous TE converters can achieve increased performance through purely temperature-dependent material properties in the thermoelectric legs, through purely spatial variation of material properties, or through a combination of both. It turns out that the optimization domain is larger for the second kind of device, which can thus outperform the first kind.

  8. Intrinsic retrieval efficiency for quantum memories: A three-dimensional theory of light interaction with an atomic ensemble

    NASA Astrophysics Data System (ADS)

    Gujarati, Tanvi P.; Wu, Yukai; Duan, Luming

    2018-03-01

The Duan-Lukin-Cirac-Zoller quantum repeater protocol, proposed to realize long-distance quantum communication, requires the use of quantum memories. Atomic ensembles interacting with optical beams via off-resonant Raman scattering serve as convenient on-demand quantum memories. Here, a complete free-space, three-dimensional theory of the associated read and write process for this quantum memory is worked out with the aim of understanding the intrinsic retrieval efficiency. We develop a formalism to calculate the transverse mode structure of the signal and idler photons and use it to study the intrinsic retrieval efficiency under various configurations. The effects of atomic density fluctuations and atomic motion are incorporated by numerically simulating the system for a range of realistic experimental parameters. We obtain results that describe the variation in the intrinsic retrieval efficiency as a function of memory storage time for a skewed beam configuration at finite temperature, which provides valuable information for optimizing the retrieval efficiency in experiments.

  9. Constrained optimization of sequentially generated entangled multiqubit states

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique

    2009-08-01

    We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.
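
    The structure being exploited is easy to state concretely: a D-level ancilla that interacts sequentially with the qubits writes a matrix-product state whose site tensors A[s] (D×D matrices, one per outcome s) must satisfy the isometry condition Σ_s A[s]†A[s] = I to be realizable by unitary ancilla-qubit interactions. A small self-contained sketch (our illustration with a bond-dimension-2 GHZ example, not the paper's optimization code):

        import numpy as np

        def mps_state(tensors, left, right):
            """Contract site tensors A[s] into the full 2^n state vector:
            psi(s1..sn) = left^T A[s1] ... A[sn] right."""
            n = len(tensors)
            psi = np.zeros(2**n, dtype=complex)
            for idx in range(2**n):
                bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
                M = left.reshape(1, -1)
                for k, s in enumerate(bits):
                    M = M @ tensors[k][s]
                psi[idx] = (M @ right.reshape(-1, 1))[0, 0]
            return psi / np.linalg.norm(psi)

        # bond-dimension-2 GHZ tensors; every site reuses the same pair A[0], A[1]
        A = np.array([[[1, 0], [0, 0]],                     # A[0]
                      [[0, 0], [0, 1]]], dtype=complex)     # A[1]
        # isometry check: sum_s A[s]^dagger A[s] == I  (sequentially generable)
        assert np.allclose(A[0].conj().T @ A[0] + A[1].conj().T @ A[1], np.eye(2))

        psi = mps_state([A] * 4, left=np.array([1.0, 1.0]), right=np.array([1.0, 1.0]))
        print(np.round(psi, 3))   # amplitude 1/sqrt(2) on |0000> and |1111>

    The constrained optimization in the paper then amounts to restricting the allowed A[s] (ancilla levels, interaction structure) and locally optimizing them to best approximate a target state.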

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Kuang; Libisch, Florian; Carter, Emily A., E-mail: eac@princeton.edu

We report a new implementation of the density functional embedding theory (DFET) in the VASP code, using the projector-augmented-wave (PAW) formalism. Newly developed algorithms allow us to efficiently perform optimized effective potential optimizations within PAW. The new algorithm generates robust and physically correct embedding potentials, as we verified using several test systems including a covalently bound molecule, a metal surface, and bulk semiconductors. We show that with the resulting embedding potential, embedded cluster models can reproduce the electronic structure of point defects in bulk semiconductors, thereby demonstrating the validity of DFET in semiconductors for the first time. Compared to our previous version, the new implementation of DFET within VASP affords use of all features of VASP (e.g., a systematic PAW library, a wide selection of functionals, a more flexible choice of U correction formalisms, and faster computational speed) with DFET. Furthermore, our results are fairly robust with respect to both plane-wave and Gaussian type orbital basis sets in the embedded cluster calculations. This suggests that the density functional embedding method is potentially an accurate and efficient way to study properties of isolated defects in semiconductors.

  11. The Aeronautical Data Link: Taxonomy, Architectural Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Goode, Plesent W.

    2002-01-01

The future Communication, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) System will rely on global satellite navigation and on ground-based and satellite-based communications via multi-protocol networks (e.g., a combined Aeronautical Telecommunications Network (ATN)/Internet Protocol (IP) network) to bring about needed improvements in the efficiency and safety of operations to meet increasing levels of air traffic. This paper discusses the development of an approach that completely describes optimal data link architecture configuration and behavior to meet the multiple conflicting objectives of concurrent and different operations functions. The practical application of the approach enables the design and assessment of configurations relative to airspace operations phases. The approach includes a formal taxonomic classification, an architectural analysis methodology, and optimization techniques. The formal taxonomic classification provides a multidimensional correlation of data link performance with data link service, information protocol, spectrum, and technology mode, and with flight operations phase and environment. The architectural analysis methodology assesses the impact of a specific architecture configuration and behavior on local ATM system performance. Deterministic and stochastic optimization techniques maximize architectural design effectiveness while addressing operational, technology, and policy constraints.

  12. On making things the best - Aeronautical uses of optimization /Wright Bros. lecture/

    NASA Technical Reports Server (NTRS)

    Ashley, H.

    1981-01-01

The paper's purpose is to summarize and evaluate the results of an investigation into the degree to which formal optimization methods have contributed practically to the design and operation of atmospheric flight vehicles. The nature of this technology is reviewed and illustrated with simple structural examples. A series of published successful applications is described, from the fields of aerodynamics, structures, guidance and control, optimal trajectories, and vehicle configuration optimization. The corresponding improvements over conventional analysis are assessed. Speculations are offered as to why these tools have made so little headway toward acceptance by designers. The growing need for their use in the future is explained; they hold out an unparalleled opportunity for improved efficiencies.

  13. The time-efficiency principle: time as the key diagnostic strategy in primary care.

    PubMed

    Irving, Greg; Holden, John

    2013-08-01

    The test and retest opportunity afforded by reviewing a patient over time substantially increases the total gain in certainty when making a diagnosis in low-prevalence settings (the time-efficiency principle). This approach safely and efficiently reduces the number of patients who need to be formally tested in order to make a correct diagnosis for a person. Time, in terms of observed disease trajectory, provides a vital mechanism for achieving this task. It remains the best strategy for delivering near-optimal diagnoses in low-prevalence settings and should be used to its full advantage.
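
    The arithmetic behind the principle is plain Bayes updating: reviewing the patient later acts like a second, roughly independent test, and certainty compounds. A small worked example with illustrative numbers (the prevalence and accuracies are ours, not the paper's):

        def posterior(prior, sens, spec, positive):
            """One Bayes update for a test (or observation-over-time) result."""
            if positive:
                num = sens * prior
                den = sens * prior + (1 - spec) * (1 - prior)
            else:
                num = (1 - sens) * prior
                den = (1 - sens) * prior + spec * (1 - prior)
            return num / den

        p = 0.02                                       # low-prevalence setting
        p1 = posterior(p, 0.9, 0.9, positive=True)     # first assessment
        p2 = posterior(p1, 0.9, 0.9, positive=True)    # review over time = retest
        print(round(p1, 3), round(p2, 3))              # 0.155 -> 0.623

    A single positive finding in a 2% prevalence setting leaves the diagnosis unlikely; the follow-up observation lifts it past even odds, which is the gain in certainty the authors attribute to time.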

  14. Development of a composite tailoring procedure for airplane wing

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Zhang, Sen

    1995-01-01

The development of a composite wing box section using a higher-order theory is proposed for accurate and efficient estimation of both static and dynamic responses. The theory includes the effect of through-the-thickness transverse shear deformations, which are important in laminated composites and are ignored in the classical approach. The box beam analysis is integrated with an aeroelastic analysis to investigate the effect of composite tailoring using a formal design optimization technique. A hybrid optimization procedure is proposed for addressing both continuous and discrete design variables.

  15. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.

We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the usage of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.

  16. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    DOE PAGES

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.; ...

    2017-07-21

We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the usage of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.

  17. A Novel Latin Hypercube Algorithm via Translational Propagation

    PubMed Central

    Pan, Guang; Ye, Pengcheng

    2014-01-01

Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration (TPSLE) algorithm is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed by translationally propagating a simple initial block of a few points generated by the SLE algorithm. In effect, the TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time while having acceptable space-filling and projective properties. PMID:25276844
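
    For context, the space-filling quality such designs compete on is typically scored with a maximin-distance criterion. A minimal baseline sketch (ours: random permutation-based Latin hypercubes scored by maximin distance; this is the yardstick, not the TPSLE algorithm itself):

        import numpy as np

        def random_lhd(n, d, rng):
            """n-point Latin hypercube in [0,1]^d: one point per stratum per axis."""
            return np.column_stack([(rng.permutation(n) + 0.5) / n for _ in range(d)])

        def maximin(X):
            """Smallest pairwise distance; larger means better space-filling."""
            diff = X[:, None, :] - X[None, :, :]
            dist = np.sqrt((diff**2).sum(-1))
            return dist[np.triu_indices(len(X), 1)].min()

        rng = np.random.default_rng(1)
        scores = [maximin(random_lhd(20, 2, rng)) for _ in range(200)]
        print(f"random LHD maximin: worst {min(scores):.3f}, best {max(scores):.3f}")
        # constructions like TPSLE aim to reach near the best such score at a
        # fraction of the cost of formally optimizing over permutations
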

  18. Organizational Decision Making

    DTIC Science & Technology

    1975-08-01

the lack of formal techniques typically used by large organizations, digress on the advantages of formal over informal ... optimization; for example, one might do a number of optimization calculations, each time using a different measure of effectiveness as the optimized ... final decision. The next level of computer application involves the use of computerized optimization techniques. Optimization ...

  19. Implementation of High-Order Multireference Coupled-Cluster Methods on Intel Many Integrated Core Architecture.

    PubMed

    Aprà, E; Kowalski, K

    2016-03-08

    In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.

  20. Extended Lagrangian formulation of charge-constrained tight-binding molecular dynamics.

    PubMed

    Cawkwell, M J; Coe, J D; Yadav, S K; Liu, X-Y; Niklasson, A M N

    2015-06-09

The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [Niklasson, Phys. Rev. Lett., 2008, 100, 123004] has been applied to a tight-binding model under the constraint of local charge neutrality to yield microcanonical trajectories with both precise, long-term energy conservation and a reduced number of self-consistent field optimizations at each time step. The extended Lagrangian molecular dynamics formalism restores time-reversal symmetry in the propagation of the electronic degrees of freedom, and it enables the efficient and accurate self-consistent optimization of the chemical potential and of the atomwise potential energy shifts in the on-site elements of the tight-binding Hamiltonian that are required when enforcing local charge neutrality. These capabilities are illustrated with microcanonical molecular dynamics simulations of a small metallic cluster using an sd-valent tight-binding model for titanium. The effect of weak dissipation in the propagation of the auxiliary degrees of freedom for the chemical potential and on-site Hamiltonian matrix elements, which is used to counteract the accumulation of numerical noise during trajectories, was also investigated.

  1. Method for computationally efficient design of dielectric laser accelerator structures

    DOE PAGES

    Hughes, Tyler; Veronis, Georgios; Wootton, Kent P.; ...

    2017-06-22

Here, dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and ‘adjoint’. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
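
    The two-simulation bookkeeping is generic and easy to demonstrate on a linear model. A minimal sketch (ours, a toy discrete analogue rather than the paper's electromagnetic solver): for A(p)x = b and objective J = cᵀx, one forward solve plus one adjoint solve Aᵀλ = c yields dJ/dp_i = −λᵀ(∂A/∂p_i)x for every parameter at once.

        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 8, 5                       # state size, number of parameters
        A0 = np.eye(n) * 4.0
        B = rng.normal(size=(m, n, n))    # dA/dp_i = B[i]
        b = rng.normal(size=n)
        c = rng.normal(size=n)
        p = rng.normal(size=m) * 0.1

        def solve_state(p):
            A = A0 + np.tensordot(p, B, axes=1)   # A(p) = A0 + sum_i p_i B[i]
            return A, np.linalg.solve(A, b)

        A, x = solve_state(p)                     # "forward" simulation
        lam = np.linalg.solve(A.T, c)             # "adjoint" simulation
        grad = np.array([-lam @ B[i] @ x for i in range(m)])

        # spot-check one component against central finite differences
        eps, i = 1e-6, 3
        dp = np.zeros(m); dp[i] = eps
        Jp = c @ solve_state(p + dp)[1]
        Jm = c @ solve_state(p - dp)[1]
        print(grad[i], (Jp - Jm) / (2 * eps))     # the two numbers should agree

    The cost of the full gradient is two solves regardless of m, which is exactly why the method scales to an entire spatial permittivity distribution.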

  2. Asymmetric information and economics

    NASA Astrophysics Data System (ADS)

    Frieden, B. Roy; Hawkins, Raymond J.

    2010-01-01

    We present an expression of the economic concept of asymmetric information with which it is possible to derive the dynamical laws of an economy. To illustrate the utility of this approach we show how the assumption of optimal information flow leads to a general class of investment strategies including the well-known Q theory of Tobin. Novel consequences of this formalism include a natural definition of market efficiency and an uncertainty principle relating capital stock and investment flow.

  3. Superlattice design for optimal thermoelectric generator performance

    NASA Astrophysics Data System (ADS)

    Priyadarshi, Pankaj; Sharma, Abhishek; Mukherjee, Swarnadip; Muralidharan, Bhaskaran

    2018-05-01

We consider the design of an optimal superlattice thermoelectric generator via the energy bandpass filter approach. Various configurations of superlattice structures are explored to obtain a bandpass transmission spectrum that approaches the ideal ‘boxcar’ form, which is now well known to manifest the largest efficiency at a given output power in the ballistic limit. Using the coherent non-equilibrium Green’s function formalism coupled self-consistently with the Poisson equation, we identify such an ideal structure and also demonstrate that it is almost immune to the deleterious effects of self-consistent charging and device variability. Analyzing various superlattice designs, we conclude that a superlattice with a Gaussian distribution of the barrier thickness offers the best thermoelectric efficiency at maximum power. It is observed that the best operating regime of this device design provides a maximum power in the range of 0.32–0.46 MW/m² at efficiencies between 54% and 43% of the Carnot efficiency. We also analyze our device designs with the conventional figure-of-merit approach to support the results so obtained. We note a high zT_el = 6 value in the case of a Gaussian distribution of the barrier thickness. With existing advanced thin-film growth technology, the suggested superlattice structures can be achieved, and such optimized thermoelectric performance can be realized.
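
    The ‘boxcar’ target can be visualized with a much simpler toy than the self-consistent NEGF machinery: a 1D piecewise-constant superlattice and the standard transfer-matrix transmission. A sketch (ours; ħ = m = 1, illustrative barrier values), where tapering the barrier thicknesses is the knob that flattens the pass band:

        import numpy as np

        def transmission(E, widths, potentials):
            """Transfer-matrix transmission through piecewise-constant regions.
            widths/potentials describe interior regions; outer leads have V = 0."""
            V = [0.0] + list(potentials) + [0.0]
            k = [np.sqrt(2 * (E - v) + 0j) for v in V]
            M = np.eye(2, dtype=complex)
            for j, w in enumerate(widths):
                # interface from region j to j+1, then free flight across width w
                r = k[j] / k[j + 1]
                I = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]])
                P = np.diag([np.exp(1j * k[j + 1] * w), np.exp(-1j * k[j + 1] * w)])
                M = P @ I @ M
            r = k[-2] / k[-1]                       # final interface into the lead
            M = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]]) @ M
            return abs(1.0 / M[1, 1])**2            # equal leads: T = |1/M22|^2

        # 5-barrier superlattice; swap in Gaussian-tapered barrier widths to see
        # the miniband transmission flatten toward a boxcar
        pot = [0.3, 0.0] * 4 + [0.3]
        wid = [0.8, 2.0] * 4 + [0.8]
        for E in np.linspace(0.05, 0.5, 10):
            print(f"E={E:.2f}  T={transmission(E, wid, pot):.3f}")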

  4. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.

  5. Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Falcone, Ylies; Havelund, Klaus; Reger, Giles; Rydeheard, David

    2012-01-01

Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques. These characteristics are usually shown to be conflicting: in state-of-the-art solutions, efficiency is obtained at the cost of expressiveness and vice versa. To seek a solution to this conflict we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.
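
    The parametric flavor of the problem is easy to make concrete. A tiny slicing-style monitor (our sketch, far simpler than quantified event automata) that checks "every opened file is eventually closed" by keeping one small automaton state per parameter binding:

        def monitor(trace):
            """Return the parameter bindings (file names) violating
            'open(f) is eventually followed by close(f)'."""
            state = {}                       # binding -> 'open' | 'closed'
            for event, f in trace:
                if event == "open":
                    state[f] = "open"
                elif event == "close" and f in state:
                    state[f] = "closed"
            return {f for f, s in state.items() if s == "open"}

        trace = [("open", "a.txt"), ("open", "b.txt"),
                 ("close", "a.txt"), ("open", "c.txt"), ("close", "c.txt")]
        print(monitor(trace))   # {'b.txt'} -- the binding that never saw close

    Efficiency in this setting is dominated by how quickly each event is routed to the relevant bindings; expressiveness is what QEA adds on top (quantification, guards, and richer automaton structure).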

  6. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
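
    A standard textbook-style example of the recasting step (ours, not taken from the paper): a Michaelis-Menten rate is not a power law, but introducing an auxiliary variable makes it one.

        % recasting a saturable rate into GMA (power-law) form
        \frac{dS}{dt} = -\frac{V_{\max} S}{K_M + S}
        \quad\longrightarrow\quad
        W := K_M + S, \qquad
        \frac{dS}{dt} = -V_{\max}\, S\, W^{-1}, \qquad
        \frac{dW}{dt} = \frac{dS}{dt}, \qquad
        W(0) = K_M + S(0).

    Both right-hand sides are now products of power laws, i.e., GMA form, so the global-optimization machinery built for GMA models applies, and the optimum maps back to the original kinetic model through W.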

  7. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James

    2014-01-01

NASA's space data-communications infrastructure, the Space Network and the Ground Network, provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure, the relay satellites and the ground stations, can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.

  8. Neural networks: What non-linearity to choose

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.; Quintana, Chris

    1991-01-01

Neural networks are now one of the most successful learning formalisms. Neurons transform inputs (x_1, ..., x_n) into an output f(w_1 x_1 + ... + w_n x_n), where f is a non-linear function and the w_i are adjustable weights. What f should one choose? Usually the logistic function is chosen, but sometimes the use of different functions improves the practical efficiency of the network. The problem of choosing f is formulated as a mathematical optimization problem and solved under different optimality criteria. As a result, a list of functions f that are optimal under these criteria is determined. This list includes both functions that were empirically found to be the best for some problems and some new functions that may be worth trying.
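
    A minimal experiment in this spirit (our sketch): a one-hidden-layer network on XOR where the nonlinearity f is a plug-in choice, so candidates from such a list can be compared directly.

        import numpy as np

        acts = {  # activation f and its derivative expressed via the output a = f(z)
            "logistic": (lambda z: 1 / (1 + np.exp(-z)), lambda z, a: a * (1 - a)),
            "tanh":     (np.tanh,                        lambda z, a: 1 - a**2),
        }

        def train_xor(name, hidden=4, lr=0.5, epochs=5000, seed=0):
            f, df = acts[name]
            rng = np.random.default_rng(seed)
            X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
            y = np.array([[0], [1], [1], [0]], float)
            W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
            W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
            for _ in range(epochs):
                z1 = X @ W1 + b1; a1 = f(z1)                     # hidden layer uses f
                z2 = a1 @ W2 + b2; out = 1 / (1 + np.exp(-z2))   # logistic output
                d2 = (out - y) * out * (1 - out)                 # MSE backprop
                d1 = (d2 @ W2.T) * df(z1, a1)
                W2 -= lr * a1.T @ d2; b2 -= lr * d2.sum(0)
                W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)
            return float(np.mean((out - y) ** 2))

        for name in acts:
            print(name, f"final XOR mse = {train_xor(name):.4f}")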

  9. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks †

    PubMed Central

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR to the corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
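
    The pre-failure planning step has a classical graph-theoretic core: in an undirected inter-actor topology, the critical actors are exactly the cut vertices (articulation points). A short sketch of that identification (ours; networkx stands in for the localized procedure in the paper):

        import networkx as nx

        # toy inter-actor topology: nodes 2 and 3 bridge two clusters
        G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)])

        critical = set(nx.articulation_points(G))   # removal disconnects G
        print("critical actors:", critical)         # {2, 3}

        # designate a backup for each critical actor, preferring non-critical
        # neighbors, mirroring PCR's pre-failure planning
        for c in critical:
            nbrs = sorted(G.neighbors(c), key=lambda v: v in critical)
            print(f"backup for {c}: {nbrs[0]}")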

  10. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs, as sketched below. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
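
    The encoding and constraint-handling caveats in point (2) are where toy implementations usually go wrong, so here is a minimal real-coded genetic algorithm with a simple penalty scheme (our generic sketch, not the dissertation's optimizer):

        import numpy as np

        def ga_minimize(f, penalty, bounds, pop=40, gens=120, seed=0):
            """Real-coded GA: tournament selection, blend crossover, Gaussian
            mutation; constraints handled by adding a penalty to the fitness."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, float).T
            X = rng.uniform(lo, hi, (pop, len(lo)))
            for _ in range(gens):
                fit = np.array([f(x) + penalty(x) for x in X])
                i, j = rng.integers(pop, size=(2, pop))       # tournament selection
                parents = X[np.where(fit[i] < fit[j], i, j)]
                alpha = rng.uniform(0.3, 0.7, (pop, 1))       # blend crossover
                child = alpha * parents + (1 - alpha) * parents[::-1]
                child += rng.normal(0, 0.05 * (hi - lo), child.shape)  # mutation
                X = np.clip(child, lo, hi)
            return min(X, key=lambda x: f(x) + penalty(x))

        # toy "design" task: minimize a drag-like objective subject to x0 + x1 >= 1
        f = lambda x: (x[0] - 0.2) ** 2 + (x[1] - 0.3) ** 2
        penalty = lambda x: 100.0 * max(0.0, 1.0 - (x[0] + x[1])) ** 2
        print(ga_minimize(f, penalty, bounds=[(0, 1), (0, 1)]))  # near [0.45, 0.55]

    Too stiff a penalty or too coarse an encoding stalls the search at the constraint boundary, which is the premature-convergence failure mode the abstract warns about.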

  11. Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1991-01-01

    The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.

  12. Genetic programming assisted stochastic optimization strategies for optimization of glucose to gluconic acid fermentation.

    PubMed

    Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D

    2002-01-01

    This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
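
    Of the two stochastic optimizers, SPSA is the one worth a compact illustration: it estimates a full gradient from only two (possibly noisy) objective evaluations per iteration, regardless of dimension. A minimal sketch (ours; the toy objective stands in for the GP process model):

        import numpy as np

        def spsa_maximize(f, x0, iters=500, a=0.5, c=0.1, seed=0):
            """Simultaneous perturbation stochastic approximation (ascent)."""
            rng = np.random.default_rng(seed)
            x = np.array(x0, float)
            for k in range(1, iters + 1):
                ak = a / k ** 0.602                  # standard gain schedules
                ck = c / k ** 0.101
                delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher
                ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck) / delta
                x = x + ak * ghat
            return x

        # stand-in "process model": yield peaks at operating point (60, 6.5)
        rng_noise = np.random.default_rng(1)
        def surrogate_yield(x):
            t, ph = x
            return -((t - 60) / 10) ** 2 - (ph - 6.5) ** 2 + 0.01 * rng_noise.normal()

        print(spsa_maximize(surrogate_yield, x0=[55.0, 6.0]))  # moves toward [60, 6.5]

    In the paper's hybrids, f would be the GP-derived model of gluconic acid yield, and the returned point is the candidate operating condition to verify experimentally.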

  13. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  14. Near-optimal experimental design for model selection in systems biology.

    PubMed

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M

    2013-10-15

    Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
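
    The polynomial-evaluation guarantee they invoke is the classic greedy bound for monotone submodular objectives: repeatedly picking the measurement with the largest marginal gain until the budget is spent achieves at least a (1 − 1/e) fraction of the optimal value. A generic sketch (ours; a weighted-coverage objective stands in for the real information measure):

        def greedy_select(gain_sets, weights, budget):
            """Greedy maximization of a weighted-coverage (submodular) objective:
            value(S) = total weight of items covered by the chosen readouts."""
            chosen, covered = [], set()
            for _ in range(budget):
                remaining = [c for c in gain_sets if c not in chosen]
                best = max(remaining,
                           key=lambda c: sum(weights[i] for i in gain_sets[c] - covered))
                chosen.append(best)
                covered |= gain_sets[best]
            return chosen, covered

        # candidate readout/time-point combinations and the model pairs each separates
        gain_sets = {"A@t1": {1, 2}, "A@t2": {2, 3, 4}, "B@t1": {4, 5}, "B@t3": {1, 5, 6}}
        weights = {i: 1.0 for i in range(1, 7)}
        print(greedy_select(gain_sets, weights, budget=2))   # ['A@t2', 'B@t3']

    Each greedy step costs one pass over the remaining candidates, so the whole selection uses a polynomial number of objective evaluations, matching the complexity claim in the abstract.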

  15. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
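
    The core trick is worth stating concretely: if the data covariance is a fast-to-invert matrix N plus a low-rank point-source term U C Uᵀ, the Sherman-Morrison-Woodbury identity turns one large inverse into a small one. A minimal numeric check (ours, dense and tiny for clarity; the real gain comes from N being structured and U thin):

        import numpy as np

        def woodbury_inv(Ninv, U, C):
            """(N + U C U^T)^{-1} = Ninv - Ninv U (C^{-1} + U^T Ninv U)^{-1} U^T Ninv"""
            small = np.linalg.inv(np.linalg.inv(C) + U.T @ Ninv @ U)   # k x k only
            return Ninv - Ninv @ U @ small @ U.T @ Ninv

        rng = np.random.default_rng(3)
        n, k = 200, 4                       # data dimension vs number of point sources
        N = np.diag(rng.uniform(1, 2, n))   # noise covariance: trivial to invert
        U = rng.normal(size=(n, k))         # point-source templates
        C = np.eye(k) * 10.0                # large prior variance on source amplitudes

        direct = np.linalg.inv(N + U @ C @ U.T)
        fast = woodbury_inv(np.diag(1 / np.diag(N)), U, C)
        print(np.max(np.abs(direct - fast)))   # ~1e-12

    Sending the source variance in C to infinity is what turns this update into a filter that simultaneously deconvolves the map and projects out the unwanted bright components.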

  16. TU-G-BRB-03: Iterative Optimization of Normalized Transmission Maps for IMRT Using Arbitrary Beam Profiles.

    PubMed

    Choi, K; Suh, T; Xing, L

    2012-06-01

Newly available flattening filter free (FFF) beams increase the dose rate by 3∼6 times at the central axis. In reality, even a flattening-filtered beam is not perfectly flat. In addition, the beam profiles across different fields may not have the same amplitude. The existing inverse planning formalism based on the total-variation of the intensity (or fluence) map cannot account for these properties of beam profiles. The purpose of this work is to develop a novel dose optimization scheme that incorporates the inherent beam profiles to maximally utilize the efficacy of arbitrary beam profiles while preserving the convexity of the optimization problem. To increase the accuracy of the problem formalism, we decompose the fluence map as an elementwise multiplication of the inherent beam profile and a normalized transmission map (NTM). Instead of attempting to optimize the fluence maps directly, we optimize the NTMs and beam profiles separately. A least-squares problem constrained by the total-variation of the NTMs is developed to derive the optimal fluence maps, balancing dose conformality and FFF beam delivery efficiency. With the resultant NTMs, we find beam profiles to renormalize the NTMs. The proposed method iteratively optimizes and renormalizes the NTMs in a closed-loop manner. The advantage of the proposed method is demonstrated using a head-neck case with flat beam profiles and a prostate case with non-flat beam profiles. The obtained NTMs achieve a more conformal dose distribution while preserving piecewise constancy compared to the existing solution. The proposed formalism has two major advantages over conventional inverse planning schemes: (1) it provides a unified framework for inverse planning with beams of arbitrary fluence profiles, including treatment with beams of mixed fluence profiles; and (2) the use of total-variation constraints on the NTMs allows us to optimally balance dose conformality and deliverability for a given beam configuration. This project was supported in part by grants from the National Science Foundation (0854492), the National Cancer Institute (1R01 CA104205), and the Leading Foreign Research Institute Recruitment Program of the Korean Ministry of Education, Science and Technology (K20901000001-09E0100-00110). To the authors' best knowledge, there is no conflict of interest. © 2012 American Association of Physicists in Medicine.

  17. Designing and optimizing a healthcare kiosk for the community.

    PubMed

    Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji

    2015-03-01

Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but solutions need to meet acceptable standards, e.g., provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk to collect vital-signs measurements. The design problem was decomposed, formalized, focused, and used to generate multiple solutions. Systematic implementation and evaluation allowed for the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods and the quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provides a general methodology for similar design problems. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. Fast state transfer in a Λ-system: a shortcut-to-adiabaticity approach to robust and resource optimized control

    NASA Astrophysics Data System (ADS)

    Mortensen, Henrik Lund; Sørensen, Jens Jakob W. H.; Mølmer, Klaus; Sherson, Jacob Friis

    2018-02-01

We propose an efficient strategy to find optimal control functions for state-to-state quantum control problems. Our procedure first chooses an input state trajectory that can realize the desired transformation by adiabatic variation of the system Hamiltonian. The shortcut-to-adiabaticity formalism then provides a control Hamiltonian that realizes the reference trajectory exactly but on a finite time scale. As the final state is achieved with certainty, we define a cost functional that incorporates the resource requirements and a perturbative expression for robustness. We optimize this functional by systematically varying the reference trajectory. We demonstrate the method by application to population transfer in a laser-driven three-level Λ-system, where we find solutions that are fast and robust against perturbations while maintaining a low peak laser power.

  19. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James L.

    2010-01-01

NASA's space data-communications infrastructure, the Space Network and the Ground Network, provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.

  20. Formal development of a clock synchronization circuit

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.

    1995-01-01

This talk presents the latest stage in the formal development of a fault-tolerant clock synchronization circuit. The development spans from a high-level specification of the required properties to a circuit realizing the core function of the system. An abstract description of an algorithm has been verified to satisfy the high-level properties using the mechanical verification system EHDM. This abstract description is recast as a behavioral specification input to the Digital Design Derivation (DDD) system developed at Indiana University. DDD provides a formal design algebra for developing correct digital hardware. Using DDD as the principal design environment, a core circuit implementing the clock synchronization algorithm was developed. The design process consisted of standard DDD transformations augmented with an ad hoc refinement justified using the Prototype Verification System (PVS) from SRI International. Subsequent to the above development, Wilfredo Torres-Pomales discovered an area-efficient realization of the same function. Establishing correctness of this optimization requires reasoning in arithmetic, so a general verification is outside the domain of both DDD transformations and model-checking techniques. DDD represents digital hardware by systems of mutually recursive stream equations. A collection of PVS theories was developed to aid in reasoning about DDD-style streams. These theories include a combinator for defining streams that satisfy stream equations, and a means for proving stream equivalence by exhibiting a stream bisimulation. DDD was used to isolate the sub-system involved in Torres-Pomales' optimization. The equivalence between the original design and the optimized version was verified in PVS by exhibiting a suitable bisimulation. The verification depended upon type constraints on the input streams and made extensive use of the PVS type system. The dependent types in PVS provided a useful mechanism for defining an appropriate bisimulation.

  1. A Model of Adding Relations in Multi-levels to a Formal Organization Structure with Two Subordinates

    NASA Astrophysics Data System (ADS)

    Sawada, Kiyoshi; Amano, Kazuyuki

    2009-10-01

    This paper proposes a model of adding relations in multi-levels to a formal organization structure with two subordinates such that the communication of information between every member in the organization becomes the most efficient. When edges between every pair of nodes with the same depth in L (L = 1, 2, …, H) levels are added to a complete binary tree of height H, an optimal set of depths {N1, N2, …, NL} (H⩾N1>N2> …>NL⩾1) is obtained by maximizing the total shortening path length which is the sum of shortening lengths of shortest paths between every pair of all nodes in the complete binary tree. It is shown that {N1, N2, …, NL}* = {H, H-1, …, H-L+1}.
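
    Because the objective is purely combinatorial, the claim can be checked directly by brute force for small trees. The sketch below (an illustrative implementation, not the authors' code) computes the total shortening path length for a chosen set of depths, so the claimed optimum {H, H-1, …, H-L+1} can be compared against other choices.

    ```python
    from itertools import combinations
    from collections import deque

    def complete_binary_tree(H):
        """Adjacency sets of a complete binary tree of height H (root = 1)."""
        n = 2 ** (H + 1) - 1
        adj = {v: set() for v in range(1, n + 1)}
        for v in range(1, n + 1):
            for c in (2 * v, 2 * v + 1):
                if c <= n:
                    adj[v].add(c)
                    adj[c].add(v)
        return adj

    def total_path_length(adj):
        """Sum of shortest-path lengths over all unordered node pairs (BFS)."""
        total = 0
        for s in adj:
            dist, q = {s: 0}, deque([s])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        q.append(w)
            total += sum(dist.values())
        return total // 2          # each pair was counted twice

    def shortening(H, depths):
        """Total shortening path length after adding same-depth edges."""
        base = total_path_length(complete_binary_tree(H))
        adj = complete_binary_tree(H)
        for d in depths:           # connect every pair of nodes at depth d
            level = range(2 ** d, 2 ** (d + 1))
            for u, v in combinations(level, 2):
                adj[u].add(v)
                adj[v].add(u)
        return base - total_path_length(adj)

    H = 4
    print(shortening(H, {H, H - 1}))   # the paper's optimum for L = 2
    print(shortening(H, {1, 2}))       # a worse depth choice, for comparison
    ```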

  2. Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems

    PubMed Central

    Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.

    2016-01-01

    Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383

  3. Projector Augmented Wave formulation of orbital-dependent exchange-correlation functionals

    NASA Astrophysics Data System (ADS)

    Xu, Xiao; Holzwarth, N. A. W.

    2012-02-01

    The use of orbital-dependent exchange-correlation functionals within electronic structure calculations has recently received renewed attention for improving the accuracy of the calculations, especially correcting self-interaction errors. Since the Projector Augmented Wave (PAW) method [P. Blöchl, Phys. Rev. B 50, 17953 (1994)] is an efficient pseudopotential-like scheme which ensures accurate evaluation of all multipole moments of direct and exchange Coulomb integrals, it is a natural choice for implementing orbital-dependent formalisms. Using Fock exchange as an example of an orbital-dependent functional, we developed the formulation and numerical implementation of the approximate optimized effective potential formalism of Krieger, Li, and Iafrate (KLI) [J. B. Krieger, Y. Li, and G. J. Iafrate, Phys. Rev. A 45, 101 (1992)] within the PAW method, comparing results with the analogous Hartree-Fock treatment [Xiao Xu and N. A. W. Holzwarth, Phys. Rev. B 81, 245105 (2010); 84, 155113 (2011)]. Test results are presented for ground state properties of two well-known materials -- diamond and LiF. This formalism can be extended to treat orbital-dependent functionals more generally.

  4. The Capabilities of Chaos and Complexity

    PubMed Central

    Abel, David L.

    2009-01-01

    To what degree could chaos and complexity have organized a Peptide or RNA World of crude yet necessarily integrated protometabolism? How far could such protolife evolve in the absence of a heritable linear digital symbol system that could mutate, instruct, regulate, optimize and maintain metabolic homeostasis? To address these questions, chaos, complexity, self-ordered states, and organization must all be carefully defined and distinguished. In addition, their cause-and-effect relationships and mechanisms of action must be delineated. Are there any formal (non-physical, abstract, conceptual, algorithmic) components to chaos, complexity, self-ordering and organization, or are they entirely physicodynamic (physical, mass/energy interaction alone)? Chaos and complexity can produce some fascinating self-ordered phenomena. But can spontaneous chaos and complexity steer events and processes toward pragmatic benefit, select function over non-function, optimize algorithms, integrate circuits, produce computational halting, organize processes into formal systems, and control and regulate existing systems toward greater efficiency? The question is pursued of whether there might be some yet-to-be-discovered new law of biology that will elucidate the derivation of prescriptive information and control. “System” will be rigorously defined. Can a low-informational rapid succession of Prigogine’s dissipative structures self-order into bona fide organization? PMID:19333445

  5. Use of an auxiliary basis set to describe the polarization in the fragment molecular orbital method

    NASA Astrophysics Data System (ADS)

    Fedorov, Dmitri G.; Kitaura, Kazuo

    2014-03-01

    We developed a dual basis approach within the fragment molecular orbital formalism enabling efficient and accurate use of large basis sets. The method was tested on water clusters and polypeptides and applied to perform geometry optimization of chignolin (PDB: 1UAO) in solution at the level of DFT/6-31++G∗∗, obtaining a structure in agreement with experiment (RMSD of 0.4526 Å). The polarization in polypeptides is discussed with a comparison of the α-helix and β-strand.

  6. Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph; Krishna, Lala; Gute, Douglas

    1997-01-01

    Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves the partitioning of thermal models into several substructured levels wherein an optimal balance among the various associated bandwidths is achieved. The details are described in this report. The report is organized into two parts. Part 1 describes the parallel modelling methodology and associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.

  7. Generalized SMO algorithm for SVM-based multitask learning.

    PubMed

    Cai, Feng; Cherkassky, Vladimir

    2012-06-01

    Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n³) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.
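
    A sense of what "generalizing SMO" involves can be had from the core of the standard algorithm. The sketch below shows Platt's analytic two-multiplier update for an ordinary SVM (a minimal, simplified illustration, not the SVM+MTL variant developed in the brief): the pair (alpha_i, alpha_j) is optimized jointly in closed form while the equality constraint sum_k alpha_k y_k = 0 is preserved.

    ```python
    import numpy as np

    def smo_pair_update(i, j, alpha, X, y, b, C=1.0):
        """One analytic SMO step: jointly optimize alpha[i], alpha[j]
        while keeping the constraint sum_k alpha_k * y_k = 0 satisfied."""
        K = X @ X.T                              # linear kernel matrix
        f = (alpha * y) @ K + b                  # current decision values
        E_i, E_j = f[i] - y[i], f[j] - y[j]      # prediction errors
        # Box [L, H] keeping both multipliers in [0, C] on the constraint line.
        if y[i] != y[j]:
            L = max(0.0, alpha[j] - alpha[i])
            H = min(C, C + alpha[j] - alpha[i])
        else:
            L = max(0.0, alpha[i] + alpha[j] - C)
            H = min(C, alpha[i] + alpha[j])
        eta = K[i, i] + K[j, j] - 2 * K[i, j]    # curvature along the line
        if eta <= 0 or L == H:
            return alpha                         # skip degenerate pairs
        a_j = np.clip(alpha[j] + y[j] * (E_i - E_j) / eta, L, H)
        a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)
        alpha = alpha.copy()
        alpha[i], alpha[j] = a_i, a_j
        return alpha
    ```

    A full solver repeatedly sweeps over multiplier pairs violating the Karush-Kuhn-Tucker conditions until convergence; the MTL generalization must additionally handle the per-group correcting functions of SVM+.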

  8. A linear decomposition method for large optimization problems. Blueprint for development

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1982-01-01

    A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.

  9. Blended near-optimal tools for flexible water resources decision making

    NASA Astrophysics Data System (ADS)

    Rosenberg, David

    2015-04-01

    State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the static modelled issues and managers often seek near-optimal alternatives that address un-modelled or changing objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally-different alternatives that addressed select un-modelled issues. This paper presents new stratified, Monte Carlo Markov Chain sampling and parallel coordinate plotting tools that generate and communicate the structure and full extent of the near-optimal region to an optimization problem. Plot controls allow users to interactively explore region features of most interest. Controls also streamline the process to elicit un-modelled issues and update the model formulation in response to elicited issues. Use for a single-objective water quality management problem at Echo Reservoir, Utah identifies numerous and flexible practices to reduce the phosphorus load to the reservoir and maintain close-to-optimal performance. Compared to MGA, the new blended tools generate more numerous alternatives faster, more fully show the near-optimal region, help elicit a larger set of un-modelled issues, and offer managers greater flexibility to cope in a changing world.
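
    One simple way to make the near-optimal region concrete (a stand-in illustration, not the paper's stratified MCMC sampler): add a constraint that cost stay within the tolerable deviation of the optimum, then re-optimize random objectives to push the solution into different corners of the region. The LP below is a made-up miniature of a load-reduction model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical toy LP (a stand-in for a water-quality model):
    # minimize cost c @ x subject to total load reduction >= 10, 0 <= x <= 8.
    c = np.array([2.0, 3.0, 1.5])
    A_ub = np.array([[-1.0, -1.0, -1.0]])
    b_ub = np.array([-10.0])

    opt = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 8)] * 3)
    tolerance = 1.10                  # accept within 10% of the optimal cost

    # Near-optimal region: original constraints plus c @ x <= tolerance * f*.
    A_near = np.vstack([A_ub, c])
    b_near = np.append(b_ub, tolerance * opt.fun)

    rng = np.random.default_rng(0)
    alternatives = []
    for _ in range(25):               # push against random directions
        d = rng.normal(size=3)
        res = linprog(d, A_ub=A_near, b_ub=b_near, bounds=[(0, 8)] * 3)
        alternatives.append(res.x)

    print(np.round(alternatives[:5], 2))
    ```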

  10. An intermediate level of abstraction for computational systems chemistry.

    PubMed

    Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F

    2017-12-28

    Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry and the conditions for that era are scarce. The exploration of large chemical reaction networks is a central aspect in this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).
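
    The hyperflow idea can be conveyed with a toy integer program. The network, species names, and PuLP formulation below are hypothetical stand-ins for the paper's machinery: integer variables count how often each reaction (hyperedge) fires, mass balance is enforced for internal species, and a net-production constraint queries for an autocatalysis-style pattern.

    ```python
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

    # Hypothetical toy network: species A, B, C; reactions as hyperedges.
    # v[r] = integer number of times reaction r fires (the hyperflow).
    reactions = {   # r: (consumed counts, produced counts)
        "r1": ({"A": 1}, {"B": 2}),
        "r2": ({"B": 1}, {"C": 1}),
        "r3": ({"B": 1, "C": 1}, {"A": 1, "C": 2}),
    }

    prob = LpProblem("integer_hyperflow", LpMinimize)
    v = {r: LpVariable(r, lowBound=0, upBound=10, cat="Integer")
         for r in reactions}

    def net(species):
        """Net production of a species over the whole flow."""
        return lpSum(v[r] * (prod.get(species, 0) - cons.get(species, 0))
                     for r, (cons, prod) in reactions.items())

    prob += lpSum(v.values())   # objective: prefer the simplest explanation
    prob += net("A") == 0       # internal species are mass-balanced ...
    prob += net("B") == 0
    prob += net("C") >= 1       # ... except C: demand net overproduction
    prob.solve()
    print({r: int(value(x)) for r, x in v.items()})  # {'r1': 1, 'r2': 1, 'r3': 1}
    ```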

  11. From Regulation to Virtue: A Critique of Ethical Formalism in Research Organizations

    ERIC Educational Resources Information Center

    Atkinson, Timothy N.; Butler, Jesse W.

    2012-01-01

    The following article argues that the research compliance system has some flaws that should be addressed, particularly with regard to excessive emphasis of and reliance upon formal regulations in research administration. Ethical formalism, understood here as the use of formal rules for the determination of behavior, is not an optimal perspective…

  12. Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messud, J.; Dinh, P. M.; Suraud, Eric

    2009-10-15

    We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent 'generalized SIC-OEP'. A straightforward approximation, using the spatial localization of one set of orbitals, leads to the 'generalized SIC-Slater' formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.

  13. Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism

    NASA Astrophysics Data System (ADS)

    Messud, J.; Dinh, P. M.; Reinhard, P.-G.; Suraud, Eric

    2009-10-01

    We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent “generalized SIC-OEP.” A straightforward approximation, using the spatial localization of one set of orbitals, leads to the “generalized SIC-Slater” formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.

  14. Investigation on the use of optimization techniques for helicopter airframe vibrations design studies

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.

    1992-01-01

    Results of the investigation of formal nonlinear programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.

  15. Blended near-optimal alternative generation, visualization, and interaction for water resources decision making

    NASA Astrophysics Data System (ADS)

    Rosenberg, David E.

    2015-04-01

    State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the modeled issues and managers often seek near-optimal alternatives that address unmodeled objectives, preferences, limits, uncertainties, and other issues. Early on, Modeling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally different alternatives that addressed some unmodeled issues. This paper presents new stratified, Monte-Carlo Markov Chain sampling and parallel coordinate plotting tools that generate and communicate the structure and extent of the near-optimal region to an optimization problem. Interactive plot controls allow users to explore region features of most interest. Controls also streamline the process to elicit unmodeled issues and update the model formulation in response to elicited issues. Use for an example, single-objective, linear water quality management problem at Echo Reservoir, Utah, identifies numerous and flexible practices to reduce the phosphorus load to the reservoir and maintain close-to-optimal performance. Flexibility is upheld by further interactive alternative generation, transforming the formulation into a multiobjective problem, and relaxing the tolerance parameter to expand the near-optimal region. Compared to MGA, the new blended tools generate more numerous alternatives faster, more fully show the near-optimal region, and help elicit a larger set of unmodeled issues.

  16. The Aeronautical Data Link: Decision Framework for Architecture Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Goode, Plesent W.

    2003-01-01

    A decision analytic approach that develops optimal data link architecture configuration and behavior to meet multiple conflicting objectives of concurrent and different airspace operations functions has previously been developed. The approach, premised on a formal taxonomic classification that correlates data link performance with operations requirements, information requirements, and implementing technologies, provides a coherent methodology for data link architectural analysis from top-down and bottom-up perspectives. This paper follows the previous research by providing more specific approaches for mapping and transitioning between the lower levels of the decision framework. The goal of the architectural analysis methodology is to assess the impact of specific architecture configurations and behaviors on the efficiency, capacity, and safety of operations. This necessarily involves understanding the various capabilities, system level performance issues and performance and interface concepts related to the conceptual purpose of the architecture and to the underlying data link technologies. Efficient and goal-directed data link architectural network configuration is conditioned on quantifying the risks and uncertainties associated with complex structural interface decisions. Deterministic and stochastic optimal design approaches will be discussed that maximize the effectiveness of architectural designs.

  17. An efficient method of reducing glass dispersion tolerance sensitivity

    NASA Astrophysics Data System (ADS)

    Sparrold, Scott W.; Shepard, R. Hamilton

    2014-12-01

    Constraining the Seidel aberrations of optical surfaces is a common technique for relaxing tolerance sensitivities in the optimization process. We offer an observation that a lens's Abbe number tolerance is directly related to the magnitude by which its longitudinal and transverse color are permitted to vary in production. Based on this observation, we propose a computationally efficient and easy-to-use merit function constraint for relaxing dispersion tolerance sensitivity. Using the relationship between an element's chromatic aberration and dispersion sensitivity, we derive a fundamental limit for lens scale and power that is capable of achieving high production yield for a given performance specification, which provides insight on the point at which lens splitting or melt fitting becomes necessary. The theory is validated by comparing its predictions to a formal tolerance analysis of a Cooke Triplet, and then applied to the design of a 1.5x visible linescan lens to illustrate optimization for reduced dispersion sensitivity. A selection of lenses in high volume production is then used to corroborate the proposed method of dispersion tolerance allocation.
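
    The underlying first-order relationship can be written down from thin-lens chromatic theory (a schematic motivation consistent with the abstract, not the authors' derivation): the axial color contribution of element k scales with its marginal ray height y, power phi, and Abbe number V, so the sensitivity to an Abbe-number error Delta V grows with element power and falls off as V squared.

    ```latex
    % First-order (thin-lens) motivation: axial color contribution of
    % element k and its sensitivity to an Abbe-number error \Delta V_k.
    \delta z_{\mathrm{ax}} \propto \sum_k \frac{y_k^2\,\phi_k}{V_k},
    \qquad
    \frac{\partial}{\partial V_k}\!\left(\frac{y_k^2\,\phi_k}{V_k}\right)
      = -\,\frac{y_k^2\,\phi_k}{V_k^{2}}
    \;\;\Longrightarrow\;\;
    \Delta(\text{color}) \approx -\,\frac{y_k^2\,\phi_k}{V_k^{2}}\,\Delta V_k .
    ```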

  18. Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny

    NASA Astrophysics Data System (ADS)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.

  19. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  20. Machine Learning-based Intelligent Formal Reasoning and Proving System

    NASA Astrophysics Data System (ADS)

    Chen, Shengqing; Huang, Xiaojian; Fang, Jiaze; Liang, Jia

    2018-03-01

    Reasoning systems can be used in many fields, and improving reasoning efficiency is at the core of their design. Through the formal description of formal proofs and the regular matching algorithm, and after introducing a machine learning algorithm, the resulting system for intelligent formal reasoning and verification achieves high efficiency. The experimental results show that the system can verify the correctness of propositional logic reasoning and reuse propositional reasoning results, so as to obtain the implicit knowledge in the knowledge base and provide a basic reasoning model for the construction of intelligent systems.

  1. Formal optimization of hovering performance using free wake lifting surface theory

    NASA Technical Reports Server (NTRS)

    Chung, S. Y.

    1986-01-01

    Free wake techniques for performance prediction and optimization of hovering rotors are discussed. The influence functions due to vortex rings, vortex cylinders, and source or vortex sheets are presented. The vortex core sizes of rotor wake vortices are calculated and their importance is discussed. Lifting-body theory for finite-thickness bodies is developed for pressure calculation, and hence performance prediction, of hovering rotors. A numerical optimization technique based on free-wake lifting-line theory is presented and discussed. It is demonstrated that formal optimization can be used with implicit and nonlinear objective or cost functions, such as the performance of hovering rotors as used in this report.

  2. The power of associative learning and the ontogeny of optimal behaviour.

    PubMed

    Enquist, Magnus; Lind, Johan; Ghirlanda, Stefano

    2016-11-01

    Behaving efficiently (optimally or near-optimally) is central to animals' adaptation to their environment. Much evolutionary biology assumes, implicitly or explicitly, that optimal behavioural strategies are genetically inherited, yet the behaviour of many animals depends crucially on learning. The question of how learning contributes to optimal behaviour is largely open. Here we propose an associative learning model that can learn optimal behaviour in a wide variety of ecologically relevant circumstances. The model learns through chaining, a term introduced by Skinner to indicate learning of behaviour sequences by linking together shorter sequences or single behaviours. Our model formalizes the concept of conditioned reinforcement (the learning process that underlies chaining) and is closely related to optimization algorithms from machine learning. Our analysis dispels the common belief that associative learning is too limited to produce 'intelligent' behaviour such as tool use, social learning, self-control or expectations of the future. Furthermore, the model readily accounts for both instinctual and learned aspects of behaviour, clarifying how genetic evolution and individual learning complement each other, and bridging a long-standing divide between ethology and psychology. We conclude that associative learning, supported by genetic predispositions and including the oft-neglected phenomenon of conditioned reinforcement, may suffice to explain the ontogeny of optimal behaviour in most, if not all, non-human animals. Our results establish associative learning as a more powerful optimizing mechanism than acknowledged by current opinion.

  3. The power of associative learning and the ontogeny of optimal behaviour

    PubMed Central

    Enquist, Magnus; Lind, Johan

    2016-01-01

    Behaving efficiently (optimally or near-optimally) is central to animals' adaptation to their environment. Much evolutionary biology assumes, implicitly or explicitly, that optimal behavioural strategies are genetically inherited, yet the behaviour of many animals depends crucially on learning. The question of how learning contributes to optimal behaviour is largely open. Here we propose an associative learning model that can learn optimal behaviour in a wide variety of ecologically relevant circumstances. The model learns through chaining, a term introduced by Skinner to indicate learning of behaviour sequences by linking together shorter sequences or single behaviours. Our model formalizes the concept of conditioned reinforcement (the learning process that underlies chaining) and is closely related to optimization algorithms from machine learning. Our analysis dispels the common belief that associative learning is too limited to produce ‘intelligent’ behaviour such as tool use, social learning, self-control or expectations of the future. Furthermore, the model readily accounts for both instinctual and learned aspects of behaviour, clarifying how genetic evolution and individual learning complement each other, and bridging a long-standing divide between ethology and psychology. We conclude that associative learning, supported by genetic predispositions and including the oft-neglected phenomenon of conditioned reinforcement, may suffice to explain the ontogeny of optimal behaviour in most, if not all, non-human animals. Our results establish associative learning as a more powerful optimizing mechanism than acknowledged by current opinion. PMID:28018662
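
    The flavor of chaining with conditioned reinforcement can be conveyed by a small simulation (an illustrative toy in the spirit of the model, not the authors' implementation; the state space and parameters are made up). The learned value of the next stimulus acts as a conditioned reinforcer for the preceding response, so a three-step behaviour sequence is acquired backwards from the single primary reward at the end.

    ```python
    import random

    # Toy chain: states 0 -> 1 -> 2 -> reward. "advance" moves forward;
    # "other" resets. v[s] is the learned stimulus value, which serves as
    # the conditioned reinforcer bridging the delay to primary reward.
    N_STATES, ACTIONS = 3, ("advance", "other")
    v = [0.0] * (N_STATES + 1)                  # v[3] is terminal (0)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma = 0.1, 0.9                     # learning rate, discount

    def step(s, a):
        if a == "advance":
            r = 1.0 if s == N_STATES - 1 else 0.0   # reward only at the end
            return s + 1, r
        return 0, 0.0                               # reset, no reward

    for _ in range(5000):
        s = 0
        while s < N_STATES:
            a = random.choice(ACTIONS) if random.random() < 0.1 else \
                max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            target = r + gamma * v[s2]   # v[s2]: conditioned reinforcement
            q[(s, a)] += alpha * (target - q[(s, a)])
            v[s] += alpha * (target - v[s])
            if a == "other":
                break
            s = s2

    print({k: round(val, 2) for k, val in q.items()})
    ```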

  4. Management of unmanned moving sensors through human decision layers: a bi-level optimization process with calls to costly sub-processes

    NASA Astrophysics Data System (ADS)

    Dambreville, Frédéric

    2013-10-01

    While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, far fewer works deal with the deployment of several sensors within a human organization. In this case, the management of the sensors is done through at least one human decision layer, and sensor management as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered realistic: first-level sensor handlers plan their sensors by means of elaborate algorithmic tools based on accurate modelling of the environment; the higher level plans the handled sensors according to a global observation mission and on the basis of an approximate model of the environment and of the first-level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (the law of model error). In such a case, each actual evaluation of the function increases the knowledge about the function, and hence the efficiency of the maximization. The issue is to optimize the sequence of values to be evaluated with regard to the evaluation costs. There is here a fundamental link with the domain of experimental design. Jones, Schonlau and Welch proposed a general method, Efficient Global Optimization (EGO), for solving this problem in the case of an additive functional Gaussian law. In our work, a generalization of EGO is proposed, based on a rare-event simulation approach. It is applied to the aforementioned bi-level sensor planning problem.
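
    The heart of EGO is the expected-improvement criterion, which prices a candidate evaluation by how much it is expected to improve on the best value found so far. Below is a minimal sketch of the classical formula (for minimization); the surrogate supplying mu and sigma, e.g. a Gaussian process fitted to the costly evaluations made so far, is assumed, and the candidate values are made up.

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """EI for minimization: E[max(f_best - F, 0)], F ~ N(mu, sigma^2).

        mu, sigma are the surrogate model's posterior mean and standard
        deviation at the candidate points.
        """
        sigma = np.maximum(sigma, 1e-12)       # guard against zero variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Choose the next costly evaluation among candidate sensor plans: EI
    # balances exploitation (low mu) against exploration (high sigma).
    mu = np.array([0.9, 0.4, 0.6])             # made-up posterior means
    sigma = np.array([0.05, 0.10, 0.50])       # made-up posterior stds
    print(np.argmax(expected_improvement(mu, sigma, f_best=0.5)))
    ```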

  5. Optimizing Outcome in the University-Industry Technology Transfer Projects

    NASA Astrophysics Data System (ADS)

    Alavi, Hamed; Hąbek, Patrycja

    2016-06-01

    Transferring the inventions of academic scientists to private enterprises for the purpose of commercialization has long been known as University-Industry (firm) Technology Transfer. While the importance of this phenomenon is rising in both the public and private sectors, only a fraction of patented academic inventions succeed in passing through the process of commercialization. Although the formal Technology Transfer process and the licensing of patented innovations to third parties are the main legal tools for safeguarding the rights of academic inventors in the commercialization of their inventions, they are not sufficient for transmitting the tacit knowledge necessary for exploiting the transferred technology. The existence of reciprocal and complementary relations between formal and informal technology transfer processes has resulted in the formation of different models for university-industry organizational collaboration, or even integration, in which licensee firms keep contact with academic inventors after gaining the legal right to commercialize their patented inventions. The current paper argues that although patents are necessary to legally pass the right of commercialization of an invention, they are not sufficient for complete knowledge transmission in the process of technology transfer. The inefficiency of formal mechanisms in closing the Technology Transfer loop creates an opportunity for innovative interpersonal and organizational connections between the patentee and the licensee company. With emphasis on the need for further elaboration of informal mechanisms as a critical and underappreciated aspect of the technology transfer process, the article addresses the following questions: How can the knowledge transmission process be optimized in the framework of University-Industry Technology Transfer projects? What is the theoretical basis for the university-industry technology transfer process? And which collaborative organizational models can enhance overall performance by improving the transmission of knowledge in the University-Firm Technology Transfer process?

  6. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

    This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861

  7. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
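
    Schematically, the variational problem has the following form (a sketch consistent with the abstract's description; the notation is ours, with mu the sparsity/accuracy trade-off parameter and Psi = (psi_1, ..., psi_N) the modes, kept orthonormal):

    ```latex
    % Schematic compressed-modes problem: the L1 term trades a controlled
    % amount of energy accuracy for compact support; mu sets the trade-off.
    \min_{\Psi}\; \sum_{j=1}^{N}\Big(\langle \psi_j | \hat H | \psi_j \rangle
                  + \tfrac{1}{\mu}\,\lVert \psi_j \rVert_1\Big)
    \quad \text{subject to} \quad
    \langle \psi_i | \psi_j \rangle = \delta_{ij}.
    ```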

  8. A revised MRCI-algorithm. I. Efficient combination of spin adaptation with individual configuration selection coupled to an effective valence-shell Hamiltonian

    NASA Astrophysics Data System (ADS)

    Strodel, Paul; Tavan, Paul

    2002-09-01

    We present a revised multi-reference configuration interaction (MRCI) algorithm for balanced and efficient calculation of electronic excitations in molecules. The revision takes up an earlier method, which had been designed for flexible, state-specific, and individual selection (IS) of MRCI expansions, included perturbational corrections (PERT), and used the spin-coupled hole-particle formalism of Tavan and Schulten (1980) for matrix-element evaluation. It removes the deficiencies of this method by introducing tree structures, which code the CI bases and allow us to efficiently exploit the sparseness of the Hamiltonian matrices. The algorithmic complexity is shown to be optimal for IS/MRCI applications. The revised IS/MRCI/PERT module is combined with the effective valence shell Hamiltonian OM2 suggested by Weber and Thiel (2000). This coupling serves the purpose of making excited state surfaces of organic dye molecules accessible to relatively cheap and sufficiently precise descriptions.

  9. Green acetylation of solketal and glycerol formal by heterogeneous acid catalysts to form a biodiesel fuel additive.

    PubMed

    Dodson, Jennifer R; Leite, Thays d C M; Pontes, Nathália S; Peres Pinto, Bianca; Mota, Claudio J A

    2014-09-01

    A glut of glycerol has formed from the increased production of biodiesel, with the potential to integrate the supply chain by using glycerol additives to improve biodiesel properties. Acetylated acetals show interesting cold flow and viscosity effects. Herein, a solventless heterogeneously catalyzed process for the acetylation of both solketal and glycerol formal to new products is demonstrated. The process is optimized by studying the effect of the acetylating reagent (acetic acid and acetic anhydride), reagent molar ratios, and a variety of commercial solid acid catalysts (Amberlyst-15, zeolite Beta, K-10 Montmorillonite, and niobium phosphate) on the conversion and selectivities. High conversions (72-95%) and selectivities (86-99%) to the desired products result from using acetic anhydride as the acetylation reagent and a 1:1 molar ratio with all catalysts. Overall, there is a complex interplay between the solid catalyst, reagent ratio, and acetylating agent on the conversion, selectivities, and byproducts formed. The variations are discussed and explained in terms of reactivity, thermodynamics, and reaction mechanisms. An alternative and efficient approach to the formation of 100% triacetin involves the ring-opening, acid-catalyzed acetylation from solketal or glycerol formal with excesses of acetic anhydride. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Applying Evidence-Based Medicine in Telehealth: An Interactive Pattern Recognition Approximation

    PubMed Central

    Fernández-Llatas, Carlos; Meneu, Teresa; Traver, Vicente; Benedi, José-Miguel

    2013-01-01

    Born in the early nineteen nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge for closely supporting physicians in their diagnoses and treatments. Within this paradigm, health experts usually create and publish clinical guidelines, which provide holistic guidance for the care of a certain disease. The creation of these clinical guidelines requires laborious iterative processes in which each iteration represents scientific progress in the knowledge of the disease. To deliver this guidance through telehealth, formal clinical guidelines allow the construction of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines makes it possible to build automatic methods, using pattern recognition techniques, to estimate the proper models, as well as mathematical models for optimizing the iterative cycle for the continuous improvement of the guidelines. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to support professionals in evidence-based medicine is formalized. PMID:24185841

  11. Spin density and orbital optimization in open shell systems: A rational and computationally efficient proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giner, Emmanuel, E-mail: gnrmnl@unife.it; Angeli, Celestino, E-mail: anc@unife.it

    2016-03-14

    The present work describes a new method to compute accurate spin densities for open shell systems. The proposed approach follows two steps: first, it provides molecular orbitals which correctly take into account the spin delocalization; second, a proper CI treatment allows one to account for the spin polarization effect while keeping a restricted formalism and avoiding spin contamination. The main idea of the optimization procedure is based on the orbital relaxation of the various charge transfer determinants responsible for the spin delocalization. The algorithm is tested and compared to other existing methods on a series of organic and inorganic open shell systems. The results reported here show that the new approach (almost black-box) provides accurate spin densities at a reasonable computational cost, making it suitable for a systematic study of open shell systems.

  12. Uncovering the community structure in signed social networks based on greedy optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yan, Jiaqi; Yang, Yu; Chen, Junhua

    2017-05-01

    The formalism of signed relationships has recently been adopted in many complex systems. The relations among these entities are complicated and multifarious; they cannot be represented by positive links alone, and signed networks have become increasingly common in the study of social networks, where community structure is significant. In this paper, to identify communities in signed networks, we introduce a new greedy algorithm that takes both the signs and the density of links into account. The core of the algorithm is an initialization procedure for signed modularity and the corresponding update rules. In particular, we employ the “Asymmetric and Constrained Belief Evolution” procedure to evaluate the optimal number of communities. According to the experimental results, the algorithm is shown to perform well. More specifically, the proposed algorithm is very efficient for medium-sized networks, both dense and sparse.

  13. Cellular Signaling Networks Function as Generalized Wiener-Kolmogorov Filters to Suppress Noise

    NASA Astrophysics Data System (ADS)

    Hinczewski, Michael; Thirumalai, D.

    2014-10-01

    Cellular signaling involves the transmission of environmental information through cascades of stochastic biochemical reactions, inevitably introducing noise that compromises signal fidelity. Each stage of the cascade often takes the form of a kinase-phosphatase push-pull network, a basic unit of signaling pathways whose malfunction is linked with a host of cancers. We show that this ubiquitous enzymatic network motif effectively behaves as a Wiener-Kolmogorov optimal noise filter. Using concepts from umbral calculus, we generalize the linear Wiener-Kolmogorov theory, originally introduced in the context of communication and control engineering, to take nonlinear signal transduction and discrete molecule populations into account. This allows us to derive rigorous constraints for efficient noise reduction in this biochemical system. Our mathematical formalism yields bounds on filter performance in cases important to cellular function—such as ultrasensitive response to stimuli. We highlight features of the system relevant for optimizing filter efficiency, encoded in a single, measurable, dimensionless parameter. Our theory, which describes noise control in a large class of signal transduction networks, is also useful both for the design of synthetic biochemical signaling pathways and the manipulation of pathways through experimental probes such as oscillatory input.
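
    For reference, the classical (noncausal) Wiener-Kolmogorov result that the paper generalizes: for a stationary signal s observed as x = s + n with independent noise n, the linear filter minimizing mean-squared error is, in the frequency domain,

    ```latex
    % Classical Wiener-Kolmogorov filter, written with power spectra:
    % the MSE-optimal linear estimate of s from the noisy observation x.
    H_{\mathrm{opt}}(\omega) =
        \frac{S_{ss}(\omega)}{S_{ss}(\omega) + S_{nn}(\omega)},
    \qquad
    \hat{s}(\omega) = H_{\mathrm{opt}}(\omega)\, x(\omega).
    ```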

  14. A formal theory of the selfish gene.

    PubMed

    Gardner, A; Welch, J J

    2011-08-01

    Adaptation is conventionally regarded as occurring at the level of the individual organism. In contrast, the theory of the selfish gene proposes that it is more correct to view adaptation as occurring at the level of the gene. This view has received much popular attention, yet has enjoyed only limited uptake in the primary research literature. Indeed, the idea of ascribing goals and strategies to genes has been highly controversial. Here, we develop a formal theory of the selfish gene, using optimization theory to capture the analogy of 'gene as fitness-maximizing agent' in mathematical terms. We provide formal justification for this view of adaptation by deriving mathematical correspondences that translate the optimization formalism into dynamical population genetics. We show that in the context of social interactions between genes, it is the gene's inclusive fitness that provides the appropriate maximand. Hence, genic selection can drive the evolution of altruistic genes. Finally, we use the formalism to assess the various criticisms that have been levelled at the theory of the selfish gene, dispelling some and strengthening others. © 2011 The Authors. Journal of Evolutionary Biology © 2011 European Society For Evolutionary Biology.

  15. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.

    PubMed

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-12-24

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits.

  16. Testing Linear Temporal Logic Formulae on Finite Execution Traces

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)

    2001-01-01

    We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
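
    The finite-trace semantics itself is simple to state; the paper's contribution is an optimized, rewriting-based algorithm on top of it, implemented in Maude. The sketch below is the naive recursive baseline such an algorithm improves on, assuming a "strong next" convention at the end of the trace (one common finite-trace choice), with formulas as nested tuples.

    ```python
    # Naive finite-trace LTL evaluator. Formulas are nested tuples:
    # ("ap", p), ("not", f), ("and", f, g), ("next", f), ("until", f, g).
    # Strong-next convention: X f is false at the last step, and f U g
    # requires g to hold somewhere within the trace.

    def holds(trace, i, f):
        op = f[0]
        if op == "ap":
            return f[1] in trace[i]
        if op == "not":
            return not holds(trace, i, f[1])
        if op == "and":
            return holds(trace, i, f[1]) and holds(trace, i, f[2])
        if op == "next":
            return i + 1 < len(trace) and holds(trace, i + 1, f[1])
        if op == "until":
            return any(holds(trace, k, f[2]) and
                       all(holds(trace, j, f[1]) for j in range(i, k))
                       for k in range(i, len(trace)))
        raise ValueError(f"unknown operator: {op}")

    trace = [{"req"}, {"req"}, {"ack"}]        # each step: set of true atoms
    req_until_ack = ("until", ("ap", "req"), ("ap", "ack"))
    print(holds(trace, 0, req_until_ack))      # True
    ```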

  17. Early Obstacle Detection and Avoidance for All to All Traffic Pattern in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Huc, Florian; Jarry, Aubin; Leone, Pierre; Moraru, Luminita; Nikoletseas, Sotiris; Rolim, Jose

    This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performance in all-to-many and all-to-all traffic patterns. In a third part we prove that this algorithm produces paths that are optimal up to a constant factor of 2π + 1. In a fourth part we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally we give experimental results to illustrate the efficiency of our algorithms in different scenarios.

  18. From Slow Repetition to Awkward Omission: Economic, Efficient, and Precise Language Use in Bilingual Formal Meetings

    ERIC Educational Resources Information Center

    Koskela, Merja; Pilke, Nina

    2016-01-01

    This article explores how linguistic resources from two local languages, Finnish and Swedish, are used in expert presentations in bilingual formal meetings and how they function with respect to the three ideal criteria of professional communication: economy, efficiency, and precision. Based on the results, the article suggests a typology of…

  19. Design of an optimized biomixture for the degradation of carbofuran based on pesticide removal and toxicity reduction of the matrix.

    PubMed

    Chin-Pampillo, Juan Salvador; Ruiz-Hidalgo, Karla; Masís-Mora, Mario; Carazo-Rojas, Elizabeth; Rodríguez-Rodríguez, Carlos E

    2015-12-01

    Pesticide biopurification systems contain a biologically active matrix (biomixture) responsible for the accelerated elimination of pesticides in wastewaters derived from pest control in crop fields. Biomixtures have been typically prepared using the volumetric composition 50:25:25 (lignocellulosic substrate/humic component/soil); nonetheless, formal composition optimization has not been performed so far. Carbofuran is an insecticide/nematicide of high toxicity widely employed in developing countries. Therefore, the composition of a highly efficient biomixture (composed of coconut fiber, compost, and soil, FCS) for the removal of carbofuran was optimized by means of a central composite design and response surface methodology. The volumetric content of soil and the ratio coconut fiber/compost were used as the design variables. The performance of the biomixture was assayed by considering the elimination of carbofuran, the mineralization of (14)C-carbofuran, and the residual toxicity of the matrix, as response variables. Based on the models, the optimal volumetric composition of the FCS biomixture consists of 45:13:42 (coconut fiber/compost/soil), which resulted in minimal residual toxicity and ∼99% carbofuran elimination after 3 days. This optimized biomixture considerably differs from the standard 50:25:25 composition, which remarks the importance of assessing the performance of newly developed biomixtures during the design of biopurification systems.
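
    The central-composite/response-surface step can be sketched generically (illustrative only: the design points and removal values below are made up, and the variable coding does not match the paper's): fit a full quadratic model to the design data, then locate the predicted optimum.

    ```python
    import numpy as np

    # Made-up central-composite-style data: x1 = soil fraction,
    # x2 = fiber/compost ratio, y = % carbofuran removal.
    x1 = np.array([0.2, 0.2, 0.6, 0.6, 0.4, 0.4, 0.4, 0.12, 0.68])
    x2 = np.array([0.5, 3.0, 0.5, 3.0, 1.75, 0.0, 3.5, 1.75, 1.75])
    y  = np.array([70, 80, 85, 90, 99, 75, 88, 78, 92])

    # Design matrix: y = b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Locate the optimum of the fitted surface by a fine grid search.
    g1, g2 = np.meshgrid(np.linspace(0.1, 0.7, 200), np.linspace(0, 3.5, 200))
    surface = (beta[0] + beta[1]*g1 + beta[2]*g2 + beta[3]*g1*g2
               + beta[4]*g1**2 + beta[5]*g2**2)
    i = np.unravel_index(np.argmax(surface), surface.shape)
    print(f"predicted optimum: soil={g1[i]:.2f}, fiber/compost={g2[i]:.2f}")
    ```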

  20. Optimization of Regional Geodynamic Models for Mantle Dynamics

    NASA Astrophysics Data System (ADS)

    Knepley, M.; Isaac, T.; Jadamec, M. A.

    2016-12-01

    The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.
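
    The formal adjoint analysis follows the standard reduced-gradient pattern for PDE-constrained optimization (written schematically here; F, J, u, and m are generic placeholders for the forward model, misfit, state, and model parameters):

    ```latex
    % Minimize a misfit J subject to the forward model F(u, m) = 0.
    % The adjoint variable lambda yields the gradient at the cost of one
    % extra linear solve per objective.
    \min_{m}\; J(u, m)\ \text{ s.t. }\ F(u, m) = 0,
    \qquad
    \Big(\frac{\partial F}{\partial u}\Big)^{\!T}\lambda
       = -\,\frac{\partial J}{\partial u},
    \qquad
    \frac{\mathrm{d}J}{\mathrm{d}m}
       = \frac{\partial J}{\partial m}
       + \Big(\frac{\partial F}{\partial m}\Big)^{\!T}\lambda .
    ```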

  1. The formal Darwinism project: a mid-term report.

    PubMed

    Grafen, A

    2007-07-01

    For 8 years I have been pursuing in print an ambitious and at times highly technical programme of work, the 'Formal Darwinism Project', whose essence is to underpin and formalize the fitness optimization ideas used by behavioural ecologists, using a new kind of argument linking the mathematics of motion and the mathematics of optimization. The value of the project is to give stronger support to current practices, while at the same time sharpening theoretical ideas and suggesting principled resolutions of some untidy areas, for example, how to define fitness. The aim is also to unify existing free-standing theoretical structures, such as inclusive fitness theory, Evolutionarily Stable Strategy (ESS) theory and bet-hedging theory. The 40-year-old misunderstanding over the meaning of fitness optimization between mathematicians and biologists is explained. Most of the elements required for a general theory have now been implemented, but not together in the same framework, and 'general time' remains to be developed and integrated with the other elements to produce a final unified theory of neo-Darwinian natural selection.

  2. Habitability as a Tier One Criterion in Exploration Mission and Vehicle Design. Part 1; Habitability

    NASA Technical Reports Server (NTRS)

    Adams, Constance M.; McCurdy, Matthew Riegel

    1999-01-01

    Habitability and human factors are necessary criteria to include in the iterative process of Tier I mission design. Bringing these criteria in at the first, conceptual stage of design for exploration and other human-rated missions can greatly reduce mission development costs, raise the level of efficiency and viability, and improve the chances of success. In offering a rationale for this argument, the authors give an example of how the habitability expert can contribute to early mission and vehicle architecture by defining the formal implications of a habitable vehicle, assessing the viability of units already proposed for exploration missions on the basis of these criteria, and finally, by offering an optimal set of solutions for an example mission. In this, the first of three papers, we summarize the basic factors associated with habitability, delineate their formal implications for crew accommodations in a long-duration environment, and show examples of how these principles have been applied in two projects at NASA's Johnson Space Center: the BIO-Plex test facility, and TransHab.

  3. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure that couples formal multiobjective-based techniques with complex analysis procedures (such as computational fluid dynamics (CFD) codes) is developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure will provide the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
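
    For reference, the Kreisselmeier-Steinhauser envelope that the procedure builds on folds several objectives or constraints g_i into one smooth function (written here in the standard numerically stable shifted form; rho is the draw-down parameter controlling how closely KS tracks the max):

    ```latex
    % K-S envelope of m objective/constraint functions g_i(x); as rho grows,
    % KS(x) approaches max_i g_i(x) from above while staying differentiable.
    \mathrm{KS}(x) = g_{\max}(x)
     + \frac{1}{\rho}\,
       \ln\!\sum_{i=1}^{m} e^{\rho\,\left(g_i(x) - g_{\max}(x)\right)},
    \qquad g_{\max}(x) = \max_i g_i(x).
    ```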

  4. Bio-Inspired Genetic Algorithms with Formalized Crossover Operators for Robotic Applications.

    PubMed

    Zhang, Jie; Kang, Man; Li, Xiaojuan; Liu, Geng-Yang

    2017-01-01

    Genetic algorithms are widely adopted to solve optimization problems in robotic applications. In such safety-critical systems, it is vitally important to formally prove correctness when genetic algorithms are applied. This paper focuses on formal modeling of crossover operations, which are among the most important operations in genetic algorithms. Specifically, we formalize crossover operations for the first time in higher-order logic using HOL4, which is easy to deploy thanks to its user-friendly programming environment. With correctness-guaranteed formalized crossover operations, we can safely apply them in robotic applications. We implement our technique to solve a path planning problem using a genetic algorithm with our formalized crossover operations, and the results show the effectiveness of our technique.
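
    The flavor of property being verified can be illustrated informally. The sketch below, assuming single-point crossover on equal-length chromosomes, checks the kind of invariant that the paper proves formally in HOL4; the Python code is only an illustration, not the formalization itself.

        # A minimal sketch of the kind of crossover property one would
        # formalize, assuming single-point crossover on equal-length
        # chromosomes; the paper's proofs live in HOL4, not Python.
        import random

        def single_point_crossover(p1, p2, rng=random):
            assert len(p1) == len(p2) and len(p1) >= 2
            cut = rng.randrange(1, len(p1))        # crossover point
            return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

        # Invariant checked here informally (proved formally in the paper's
        # setting): offspring lengths match the parents, and each locus
        # holds a gene inherited from one of the parents at that locus.
        p1, p2 = [0, 1, 2, 3], [4, 5, 6, 7]
        c1, c2 = single_point_crossover(p1, p2)
        assert len(c1) == len(p1) and len(c2) == len(p2)
        assert all(c1[i] in (p1[i], p2[i]) for i in range(len(c1)))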

  5. Inflationary dynamics and preheating of the nonminimally coupled inflaton field in the metric and Palatini formalisms

    NASA Astrophysics Data System (ADS)

    Fu, Chengjie; Wu, Puxun; Yu, Hongwei

    2017-11-01

    The inflationary dynamics and preheating in a model with a nonminimally coupled inflaton field in the metric and Palatini formalisms are studied in this paper. We find that in both formalisms, irrespective of the initial conditions, our Universe will evolve into a slow-roll inflationary era and then the scalar field rolls into an oscillating phase. The value of the scalar field at the end of inflation in the Palatini formalism is always larger than that in the metric one, and the difference becomes more and more pronounced as the absolute value of the coupling parameter |ξ| increases. During preheating, we find that inflaton quanta are produced explosively due to parametric resonance, and the growth of inflaton quanta is terminated by backreaction. With the increase of |ξ|, the resonance bands gradually approach zero momentum (k = 0), and the structure of the resonance changes and becomes broader and broader in the metric formalism, while it remains narrow in the Palatini formalism. The energy transfer from the inflaton field to the fluctuations becomes more and more efficient with the increase of |ξ|, and in the metric formalism the efficiency of energy transfer grows much faster than in the Palatini formalism. Therefore, inflation and preheating show different characteristics in the two formalisms.

  6. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
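
    The dynamic programming scheme can be stated compactly: with cost[t][m] the best total segment difference for the first t points split into m segments, cost[t][m] = min over s of cost[s][m-1] + diff(s, t-1). The sketch below assumes the union measure function and a symmetric-difference segment difference (both illustrative choices; the paper's specialized difference computations are faster) and recovers an optimal k-segmentation.

        # A minimal sketch of optimal item-set time series segmentation by
        # dynamic programming, assuming the union measure function and a
        # symmetric-difference segment difference; the paper's faster
        # segment-difference computations are omitted.
        def segment_diff(points, i, j):
            seg_set = set().union(*points[i:j + 1])  # union measure
            return sum(len(seg_set ^ s) for s in points[i:j + 1])

        def optimal_segmentation(points, k):
            n = len(points)
            INF = float("inf")
            cost = [[INF] * (k + 1) for _ in range(n + 1)]
            back = [[0] * (k + 1) for _ in range(n + 1)]
            cost[0][0] = 0
            for t in range(1, n + 1):              # prefix length
                for m in range(1, min(k, t) + 1):  # segments used so far
                    for s in range(m - 1, t):      # last segment: points[s:t]
                        c = cost[s][m - 1] + segment_diff(points, s, t - 1)
                        if c < cost[t][m]:
                            cost[t][m], back[t][m] = c, s
            cuts, t, m = [], n, k
            while m > 0:                           # recover boundaries
                cuts.append(back[t][m]); t, m = back[t][m], m - 1
            return cost[n][k], sorted(cuts)

        points = [{"a"}, {"a", "b"}, {"b"}, {"c"}, {"c", "d"}]
        print(optimal_segmentation(points, 2))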

  7. A frontier analysis approach for benchmarking hospital performance in the treatment of acute myocardial infarction.

    PubMed

    Stanford, Robert E

    2004-05-01

    This paper uses a non-parametric frontier model and adaptations of the concepts of cross-efficiency and peer-appraisal to develop a formal methodology for benchmarking provider performance in the treatment of Acute Myocardial Infarction (AMI). Parameters used in the benchmarking process are the rates of proper recognition of indications of six standard treatment processes for AMI; the decision making units (DMUs) to be compared are the Medicare eligible hospitals of a particular state; the analysis produces an ordinal ranking of individual hospital performance scores. The cross-efficiency/peer-appraisal calculation process is constructed to accommodate DMUs that experience no patients in some of the treatment categories. While continuing to rate highly the performances of DMUs which are efficient in the Pareto-optimal sense, our model produces individual DMU performance scores that correlate significantly with good overall performance, as determined by a comparison of the sums of the individual DMU recognition rates for the six standard treatment processes. The methodology is applied to data collected from 107 state Medicare hospitals.
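
    The efficiency scores underlying such a frontier analysis come from linear programs. The sketch below solves the standard input-oriented CCR multiplier model on toy data with scipy; the paper's cross-efficiency and peer-appraisal adjustments for DMUs with empty treatment categories are its contribution and are not reproduced here.

        # A minimal sketch of the CCR multiplier model behind DEA-style
        # benchmarking, assuming hypothetical input/output data.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[2.0], [3.0], [4.0]])        # inputs per DMU (toy)
        Y = np.array([[4.0], [5.0], [5.0]])        # outputs per DMU (toy)

        def ccr_efficiency(o):
            n_in, n_out = X.shape[1], Y.shape[1]
            # Variables: output weights u, then input weights v.
            c = np.concatenate([-Y[o], np.zeros(n_in)])  # maximize u . Y[o]
            A_ub = np.hstack([Y, -X])              # u . Y[j] - v . X[j] <= 0
            b_ub = np.zeros(len(X))
            A_eq = np.concatenate([np.zeros(n_out), X[o]]).reshape(1, -1)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * (n_out + n_in))
            return -res.fun                        # normalization: v.X[o] = 1

        print([round(ccr_efficiency(o), 3) for o in range(len(X))])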

  8. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as discrete optimization problems (the generalized travelling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
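
    In this GTSP view, each contour is a megalopolis of candidate pierce points and the tool must visit exactly one point per contour. The brute-force sketch below, with hypothetical coordinates and without the precedence constraints that Chentsov's dynamic programming scheme handles, makes the problem structure explicit for tiny instances.

        # A minimal brute-force sketch of the GTSP view of tool-path
        # planning: one pierce point is chosen per contour (megalopolis)
        # and the visiting order minimizes idle travel. Precedence
        # constraints are omitted; coordinates below are hypothetical.
        from itertools import permutations, product
        from math import dist

        contours = [[(0, 0), (0, 2)], [(5, 1), (5, 3)], [(2, 6), (4, 6)]]
        home = (0, -1)                             # tool park position

        def tour_length(order, picks):
            pts = [contours[i][picks[i]] for i in order]
            path = [home] + pts + [home]
            return sum(dist(a, b) for a, b in zip(path, path[1:]))

        best = min(
            (tour_length(order, picks), order, picks)
            for order in permutations(range(len(contours)))
            for picks in product(*(range(len(c)) for c in contours))
        )
        print(best)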

  9. Microscopy as a statistical, Rényi-Ulam, half-lie game: a new heuristic search strategy to accelerate imaging.

    PubMed

    Drumm, Daniel W; Greentree, Andrew D

    2017-11-07

    Finding a fluorescent target in a biological environment is a common and pressing microscopy problem. This task is formally analogous to the canonical search problem. In ideal (noise-free, truthful) search problems, the well-known binary search is optimal. The case of half-lies, where one of two responses to a search query may be deceptive, introduces a richer, Rényi-Ulam problem and is particularly relevant to practical microscopy. We analyse microscopy in the contexts of Rényi-Ulam games and half-lies, developing a new family of heuristics. We show the cost of insisting on verification by positive result in search algorithms; for the zero-half-lie case bisectioning with verification incurs a 50% penalty in the average number of queries required. The optimal partitioning of search spaces directly following verification in the presence of random half-lies is determined. Trisectioning with verification is shown to be the most efficient heuristic of the family in a majority of cases.

  10. A polygonal double-layer coil design for high-efficiency wireless power transfer

    NASA Astrophysics Data System (ADS)

    Mao, Shitong; Wang, Hao; Mao, Zhi-Hong; Sun, Mingui

    2018-05-01

    In this work, we present a novel coil structure for the design of Wireless Power Transfer (WPT) systems via magnetic resonant coupling. The new coil consists of two layers of flat polygonal windings in square, pentagonal and hexagonal shapes. The double-layer coil can be conveniently fabricated using printed circuit board (PCB) technology. In our design, we include an angle between the two layers which can be adjusted to change the area of inter-layer overlap. This unique structure is thoroughly investigated with respect to the quality factor Q and the power transfer efficiency (PTE) using the finite element method (FEM). An equivalent circuit is derived and used to explain the properties of the angularly shifted double-layer coil theoretically. Comparative experiments are conducted from which the performance of the new coil is evaluated quantitatively. Our results have shown that an increased shift angle improves the Q-factor, and the optimal PTE is achieved when the angle reaches the maximum. When compared to the pentagonal and hexagonal coils, the square coil achieves the highest PTE due to its lowest parasitic capacitive effects. In summary, our new coil design improves the performance of WPT systems and allows a formal design procedure for optimization in a given application.
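
    A useful reference point for such designs is the textbook limit on the efficiency of a two-coil resonant link, which depends only on the coupling coefficient k and the coil quality factors under optimal loading. The sketch below evaluates this standard bound for hypothetical values; it is not the paper's FEM or equivalent-circuit model.

        # A minimal sketch of the standard two-coil resonant link bound:
        # maximum PTE for coupling k and quality factors Q1, Q2 under
        # optimal loading. Textbook WPT theory; values are hypothetical.
        from math import sqrt

        def max_pte(k, q1, q2):
            fom = k * k * q1 * q2                  # figure of merit k^2 Q1 Q2
            return fom / (1.0 + sqrt(1.0 + fom)) ** 2

        for k in (0.05, 0.1, 0.2):
            print(k, round(max_pte(k, q1=150.0, q2=150.0), 3))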

  11. Benchmarking organ procurement organizations: a national study.

    PubMed Central

    Ozcan, Y A; Begun, J W; McKinney, M M

    1999-01-01

    OBJECTIVE: An exploratory examination of the technical efficiency of organ procurement organizations (OPOs) relative to optimal patterns of production in the population of OPOs in the United States. DATA SOURCES: A composite data set with the OPO as the unit of analysis, constructed from a 1995 national survey of OPOs (n = 64), plus secondary data from the Association of Organ Procurement Organizations and the United Network for Organ Sharing. STUDY DESIGN: The study uses data envelopment analysis (DEA) to evaluate the technical efficiency of all OPOs. PRINCIPAL FINDINGS: Overall, six of the 22 larger OPOs (27 percent) are classified as inefficient, while 23 of the 42 smaller OPOs (55 percent) are classified as inefficient. Efficient OPOs recover significantly more kidneys and extrarenal organs; have higher operating expenses; and have more referrals, donors, extrarenal transplants, and kidney transplants. The quantities of hospital development personnel and other personnel, and formalization of hospital development activities in both small and large OPOs, do not significantly differ. CONCLUSIONS: Indications that larger OPOs are able to operate more efficiently relative to their peers suggest that smaller OPOs are more likely to benefit from technical assistance. More detailed information on the activities of OPO staff would help pinpoint activities that can increase OPO efficiency and referrals, and potentially improve outcomes for large numbers of patients awaiting transplants. PMID:10536974

  12. Waste management of printed wiring boards: a life cycle assessment of the metals recycling chain from liberation through refining.

    PubMed

    Xue, Mianqiang; Kendall, Alissa; Xu, Zhenming; Schoenung, Julie M

    2015-01-20

    Due to economic and societal reasons, informal activities including open burning, backyard recycling, and landfill are still the prevailing methods used for electronic waste treatment in developing countries. Great efforts have been made, especially in China, to promote formal approaches for electronic waste management by enacting laws, developing green recycling technologies, initiating pilot programs, etc. The formal recycling process can, however, engender environmental impact and resource consumption, although information on the environmental loads and resource consumption is currently limited. To quantitatively assess the environmental impact of the processes in a formal printed wiring board (PWB) recycling chain, life cycle assessment (LCA) was applied to a formal recycling chain that includes the steps from waste liberation through materials refining. The metal leaching in the refining stage was identified as a critical process, posing most of the environmental impact in the recycling chain. Global warming potential was the most significant environmental impact category after normalization and weighting, followed by fossil abiotic depletion potential, and marine aquatic eco-toxicity potential. Scenario modeling results showed that variations in the power source and chemical reagents consumption had the greatest influence on the environmental performance. The environmental impact from transportation used for PWB collection was also evaluated. The results were further compared to conventional primary metals production processes, highlighting the environmental benefit of metal recycling from waste PWBs. Optimizing the collection mode, increasing the precious metals recovery efficiency in the beneficiation stage and decreasing the chemical reagents consumption in the refining stage by effective materials liberation and separation are proposed as potential improvement strategies to make the recycling chain more environmentally friendly. The LCA results provide environmental information for the improvement of future integrated technologies and electronic waste management.

  13. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  14. Stencils and problem partitionings: Their influence on the performance of multiple processor systems

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Adams, L. M.; Patrick, M. L.

    1986-01-01

    Given a discretization stencil, partitioning the problem domain is an important first step for the efficient solution of partial differential equations on multiple processor systems. Partitions are derived that minimize interprocessor communication when the number of processors is known a priori and each domain partition is assigned to a different processor. This partitioning technique uses the stencil structure to select appropriate partition shapes. For square problem domains, it is shown that non-standard partitions (e.g., hexagons) are frequently preferable to the standard square partitions for a variety of commonly used stencils. This investigation is concluded with a formalization of the relationship between partition shape, stencil structure, and architecture, allowing selection of optimal partitions for a variety of parallel systems.
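
    A quick way to see why partition shape matters: for nearest-neighbour stencils, interprocessor communication grows with the partition boundary, and at equal area a regular hexagon has roughly 7% less perimeter than a square. The sketch below makes this continuum comparison; the paper's analysis is discrete and stencil-specific.

        # A minimal sketch comparing partition boundary (a proxy for
        # communication volume) of a square versus a regular hexagon of
        # equal area; a continuum simplification of the paper's analysis.
        from math import sqrt

        def square_perimeter(area):
            return 4.0 * sqrt(area)

        def hexagon_perimeter(area):
            side = sqrt(2.0 * area / (3.0 * sqrt(3.0)))  # area = (3*sqrt(3)/2) s^2
            return 6.0 * side

        A = 10_000.0                               # grid points per processor
        print(square_perimeter(A), hexagon_perimeter(A),
              hexagon_perimeter(A) / square_perimeter(A))  # about 0.93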

  15. An Efficient Algorithm for a Visibility-Based Surveillance-Evasion Game

    DTIC Science & Technology

    2012-01-01

    function $v_s : \Omega^2_{\mathrm{free}} \to \mathbb{R}$ for the static game as

        $v_s(x^0_E, x^0_P) := \sup_{\sigma_P \in \mathcal{A}} \inf_{\sigma_E \in \mathcal{A}} J(x^0_E, x^0_P, \sigma_E, \sigma_P)$.   (8)

    In general, it can be shown that the optimal controls (or ε-suboptimal controls, see Remark 2.2) can be formally written as

        $\sigma^*_P \in \arg\sup_{\sigma_P \in \mathcal{A}} \inf_{\sigma_E \in \mathcal{A}} J(x^0_E, x^0_P, \sigma_E, \sigma_P)$,   (9)

        $\sigma^*_E \in \arg\inf_{\sigma_E \in \mathcal{A}} J(x^0_E, x^0_P, \sigma_E, \sigma^*_P)$,   (10)

    provided the game ends in finite time. For brevity, we shall refer to the static visibility-based

  16. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final state of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
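
    The two-bar truss example translates directly into the objective/variables/constraints form described above. The sketch below poses it for a generic NLP solver with assumed values for load, geometry and material (all hypothetical); SOL's goal is to generate exactly this kind of optimizer interface from a declarative problem statement rather than by hand.

        # A minimal sketch of the two-bar truss optimization, assuming
        # thin-walled tubes, fixed wall thickness, and illustrative load
        # and material values; all numbers are hypothetical.
        from math import pi, sqrt
        from scipy.optimize import minimize

        P, B, t = 150e3, 1.5, 0.003            # load [N], span [m], wall [m]
        rho, E, sig_y = 7850.0, 210e9, 250e6   # steel density, modulus, yield

        def members(x):
            d, h = x                           # design vars: diameter, height
            L = sqrt((B / 2.0) ** 2 + h ** 2)  # member length
            A = pi * d * t                     # thin-tube cross-section area
            sigma = P * L / (2.0 * h * A)      # member stress from statics
            sigma_cr = pi ** 2 * E * (d ** 2 + t ** 2) / (8.0 * L ** 2)
            return L, A, sigma, sigma_cr       # sigma_cr: Euler tube buckling

        weight = lambda x: 2.0 * rho * members(x)[1] * members(x)[0]
        cons = [{"type": "ineq", "fun": lambda x: sig_y - members(x)[2]},
                {"type": "ineq", "fun": lambda x: members(x)[3] - members(x)[2]}]

        res = minimize(weight, x0=[0.08, 1.0], constraints=cons,
                       bounds=[(0.01, 0.5), (0.1, 3.0)], method="SLSQP")
        print(res.x, res.fun)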

  17. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits

    PubMed Central

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-01-01

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.10056.001 PMID:26705334

  18. Monitoring Programs Using Rewriting

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Lan, Sonie (Technical Monitor)

    2001-01-01

    We present a rewriting algorithm for efficiently testing future time Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications, corresponding to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property and then suggest an optimized algorithm based on transforming LTL formulae. We use Maude rewriting logic, which turns out to provide a good notation and is supported by an efficient rewriting engine for performing these experiments. The work constitutes part of the Java PathExplorer (JPAX) project, the purpose of which is to develop a flexible tool for monitoring Java program executions.
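
    The core rewriting idea can be sketched briefly: each observed state rewrites ('progresses') the formula into the obligation that must hold on the rest of the trace, and the residual formula is evaluated at trace end. The toy encoding below only illustrates this idea in Python; the paper's implementation is a Maude rewrite theory.

        # A minimal sketch of LTL testing by formula progression over a
        # finite trace; an illustration, not the paper's Maude system.
        def prog(f, state):
            # Rewrite f against one state; returns True/False or a formula.
            if f in (True, False): return f
            op = f[0]
            if op == "ap":  return f[1] in state
            if op == "not": return _not(prog(f[1], state))
            if op == "and": return _and(prog(f[1], state), prog(f[2], state))
            if op == "or":  return _or(prog(f[1], state), prog(f[2], state))
            if op == "next": return f[1]
            if op == "until":   # f1 U f2 = f2 or (f1 and X(f1 U f2))
                return _or(prog(f[2], state), _and(prog(f[1], state), f))
            if op == "always":  # G f = f and X G f
                return _and(prog(f[1], state), f)
            raise ValueError(op)

        def _not(a): return (not a) if a in (True, False) else ("not", a)
        def _and(a, b):
            if a is False or b is False: return False
            if a is True: return b
            if b is True: return a
            return ("and", a, b)
        def _or(a, b):
            if a is True or b is True: return True
            if a is False: return b
            if b is False: return a
            return ("or", a, b)

        def close(f):
            # End of trace: unmet X/U obligations fail, G obligations hold.
            if f in (True, False): return f
            op = f[0]
            if op in ("next", "until"): return False
            if op == "always": return True
            if op == "not": return not close(f[1])
            if op in ("and", "or"):
                a, b = close(f[1]), close(f[2])
                return (a and b) if op == "and" else (a or b)
            return f  # bare atoms never survive progression

        def check(f, trace):
            for state in trace:
                f = prog(f, state)
                if f in (True, False): return f
            return close(f)

        # G(request -> F grant), with F g encoded as (true U g).
        spec = ("always", ("or", ("not", ("ap", "request")),
                           ("until", True, ("ap", "grant"))))
        print(check(spec, [{"request"}, {}, {"grant"}]))  # True
        print(check(spec, [{"request"}, {}, {}]))         # False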

  19. Design of Astrometric Mission (JASMINE) by Applying Model Driven System Engineering

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Miyashita, H.; Nakamura, H.; Suenaga, K.; Kamiyoshi, S.; Tsuiki, A.

    2010-12-01

    We are planning a space astrometric satellite mission named JASMINE. The target accuracy of parallaxes in JASMINE observations is 10 micro-arcseconds, which corresponds to a 1 nm scale on the focal plane. It is very hard to measure deformations of the focal plane at the 1 nm scale. Consequently, we need to add the deformation to the observation equations when estimating stellar astrometric parameters, which requires considering many factors such as instrument models and observation data analysis. In this situation, because the observation equations become more complex, we may relax the stability requirements on the hardware; nevertheless, we then require more samplings because each estimation is less rigid. This mission therefore imposes a number of trade-offs among the engineering choices, from which the optimal design must be selected out of a number of candidates. In order to efficiently support such decisions, we apply Model Driven Systems Engineering (MDSE), which improves the efficiency of the engineering by revealing and formalizing requirements, specifications, and designs to find a good balance among the various trade-offs.

  20. A modified operational sequence methodology for zoo exhibit design and renovation: conceptualizing animals, staff, and visitors as interdependent coworkers.

    PubMed

    Kelling, Nicholas J; Gaalema, Diann E; Kelling, Angela S

    2014-01-01

    Human factors analyses have been used to improve efficiency and safety in various work environments. Although generally limited to humans, the universality of these analyses allows for their formal application to a much broader domain. This paper outlines a model for the use of human factors to enhance zoo exhibits and optimize spaces for all user groups: zoo animals, zoo visitors, and zoo staff members. Zoo exhibits are multi-faceted, and each user group has a distinct set of requirements that can clash with or complement each other. Careful analysis and a reframing of the three groups as interdependent coworkers can enhance safety, efficiency, and experience for all user groups. This paper details the creation and gives specific examples of the use of the modified human factors tools of function allocation, operational sequence diagrams, and needs assessment. These tools allow for adaptability and ease of understanding in the design or renovation of exhibits. © 2014 Wiley Periodicals, Inc.

  1. Reconfigurable Very Long Instruction Word (VLIW) Processor

    NASA Technical Reports Server (NTRS)

    Velev, Miroslav N.

    2015-01-01

    Future NASA missions will depend on radiation-hardened, power-efficient processing systems-on-a-chip (SOCs) that consist of a range of processor cores custom tailored for space applications. Aries Design Automation, LLC, has developed a processing SOC that is optimized for software-defined radio (SDR) uses. The innovation implements the Institute of Electrical and Electronics Engineers (IEEE) RazorII voltage management technique, a microarchitectural mechanism that allows processor cores to self-monitor, self-analyze, and self-heal after timing errors, regardless of their cause (e.g., radiation; chip aging; variations in voltage, frequency, temperature, or the manufacturing process). This highly automated SOC can also execute the legacy PowerPC 750 binary code instruction set architecture (ISA), which is used in the flight-control computers of many previous NASA space missions. In developing this innovation, Aries Design Automation has made significant contributions to the fields of formal verification of complex pipelined microprocessors and Boolean satisfiability (SAT) and has developed highly efficient electronic design automation tools that hold promise for future developments.

  2. Capturing the superorganism: a formal theory of group adaptation.

    PubMed

    Gardner, A; Grafen, A

    2009-04-01

    Adaptation is conventionally regarded as occurring at the level of the individual organism. However, in recent years there has been a revival of interest in the possibility for group adaptations and superorganisms. Here, we provide the first formal theory of group adaptation. In particular: (1) we clarify the distinction between group selection and group adaptation, framing the former in terms of gene frequency change and the latter in terms of optimization; (2) we capture the superorganism in the form of a 'group as maximizing agent' analogy that links an optimization program to a model of a group-structured population; (3) we demonstrate that between-group selection can lead to group adaptation, but only in rather special circumstances; (4) we provide formal support for the view that between-group selection is the best definition for 'group selection'; and (5) we reveal that mechanisms of conflict resolution such as policing cannot be regarded as group adaptations.

  3. Crowd Sourced Formal Verification-Augmentation (CSFV-A)

    DTIC Science & Technology

    2016-06-01

    The Crowd Sourced Formal Verification (CSFV) program built games that recast FV problems into puzzles, making these problems more accessible and increasing the manpower available to construct FV proofs. This effort supported the CSFV program by hosting the games on a public website and analyzed the gameplay for efficiency in producing FV proofs.

  4. Intramolecular carbolithiation of N-allyl-ynamides: an efficient entry to 1,4-dihydropyridines and pyridines - application to a formal synthesis of sarizotan.

    PubMed

    Gati, Wafa; Rammah, Mohamed M; Rammah, Mohamed B; Evano, Gwilherm

    2012-01-01

    We have developed a general synthesis of polysubstituted 1,4-dihydropyridines and pyridines based on a highly regioselective lithiation/6-endo-dig intramolecular carbolithiation from readily available N-allyl-ynamides. This reaction, which has been successfully applied to the formal synthesis of the anti-dyskinesia agent sarizotan, further extends the use of ynamides in organic synthesis and further demonstrates the synthetic efficiency of carbometallation reactions.

  5. What's in a Grammar? Modeling Dominance and Optimization in Contact

    ERIC Educational Resources Information Center

    Sharma, Devyani

    2013-01-01

    Muysken's article is a timely call for us to seek deeper regularities in the bewildering diversity of language contact outcomes. His model provocatively suggests that most such outcomes can be subsumed under four speaker optimization strategies. I consider two aspects of the proposal here: the formalization in Optimality Theory (OT) and the…

  6. Formalizing Evaluation Procedures for Marketing Faculty Research Performance.

    ERIC Educational Resources Information Center

    McDermott, Dennis R.; And Others

    1994-01-01

    Results of a national survey of marketing department heads (n=142) indicate that few marketing departments have formalized the development and communication of research performance standards to faculty. Guidelines and methods to accomplish those procedures most efficiently were proposed. (Author/JOW)

  7. Interpretable Decision Sets: A Joint Framework for Description and Prediction

    PubMed Central

    Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure

    2016-01-01

    One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
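
    The objective can be pictured as the number of points correctly covered by some rule minus a penalty on total rule length. The sketch below runs a plain greedy selection over hypothetical candidate rules to illustrate that trade-off; the paper's optimizer for the non-monotone submodular objective carries approximation guarantees that plain greedy does not.

        # A minimal greedy sketch of selecting if-then rules by trading off
        # accuracy against complexity; rules and data are hypothetical, and
        # this is not the paper's near-optimal submodular optimizer.
        def rule_covers(rule, x):
            return all(x.get(k) == v for k, v in rule["if"].items())

        def score(rules, data, lam=0.5):
            correct = sum(
                1 for x, y in data
                if any(rule_covers(r, x) and r["then"] == y for r in rules)
            )
            return correct - lam * sum(len(r["if"]) for r in rules)

        def greedy_decision_set(candidates, data, k=2):
            chosen = []
            while len(chosen) < k:
                best = max(candidates, key=lambda r: score(chosen + [r], data))
                if score(chosen + [best], data) <= score(chosen, data):
                    break
                chosen.append(best)
                candidates = [r for r in candidates if r is not best]
            return chosen

        data = [({"age": "old", "smoker": "yes"}, 1), ({"age": "young"}, 0),
                ({"smoker": "yes"}, 1), ({"age": "old", "smoker": "no"}, 0)]
        candidates = [{"if": {"smoker": "yes"}, "then": 1},
                      {"if": {"age": "young"}, "then": 0},
                      {"if": {"age": "old"}, "then": 1}]
        print(greedy_decision_set(candidates, data))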

  8. Intramolecular carbolithiation of N-allyl-ynamides: an efficient entry to 1,4-dihydropyridines and pyridines – application to a formal synthesis of sarizotan

    PubMed Central

    Gati, Wafa; Rammah, Mohamed M; Rammah, Mohamed B

    2012-01-01

    We have developed a general synthesis of polysubstituted 1,4-dihydropyridines and pyridines based on a highly regioselective lithiation/6-endo-dig intramolecular carbolithiation from readily available N-allyl-ynamides. This reaction, which has been successfully applied to the formal synthesis of the anti-dyskinesia agent sarizotan, further extends the use of ynamides in organic synthesis and further demonstrates the synthetic efficiency of carbometallation reactions. PMID:23365632

  9. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  10. The path to next generation biofuels: successes and challenges in the era of synthetic biology

    PubMed Central

    2010-01-01

    Volatility of oil prices along with major concerns about climate change, oil supply security and depleting reserves have sparked renewed interest in the production of fuels from renewable resources. Recent advances in synthetic biology provide new tools for metabolic engineers to direct their strategies and construct optimal biocatalysts for the sustainable production of biofuels. Metabolic engineering and synthetic biology efforts entailing the engineering of native and de novo pathways for conversion of biomass constituents to short-chain alcohols and advanced biofuels are herewith reviewed. In the foreseeable future, formal integration of functional genomics and systems biology with synthetic biology and metabolic engineering will undoubtedly support the discovery, characterization, and engineering of new metabolic routes and more efficient microbial systems for the production of biofuels. PMID:20089184

  11. Splitting efficiency and interference effects in a Cooper pair splitter based on a triple quantum dot with ferromagnetic contacts

    NASA Astrophysics Data System (ADS)

    Bocian, Kacper; Rudziński, Wojciech; Weymann, Ireneusz

    2018-05-01

    We theoretically study the spin-resolved subgap transport properties of a Cooper pair splitter based on a triple quantum dot attached to superconducting and ferromagnetic leads. Using the Keldysh Green's function formalism, we analyze the dependence of the Andreev conductance, Cooper pair splitting efficiency, and tunnel magnetoresistance (TMR) on the gate and bias voltages applied to the system. We show that the system's transport properties are strongly affected by the spin dependence of tunneling processes and by quantum interference between different local and nonlocal Andreev reflections. We also study the effects of finite hopping between the side quantum dots on the Andreev current. This allows for identifying the optimal conditions for enhancing the Cooper pair splitting efficiency of the device. We find that the splitting efficiency exhibits a nonmonotonic dependence on the degree of spin polarization of the leads and on the magnitude and type of hopping between the dots. An almost perfect splitting efficiency is predicted in the nonlinear response regime when the energies of the side quantum dots are tuned to the energies of the corresponding Andreev bound states. In addition, we analyze features of the TMR for a wide range of gate and bias voltages, as well as for different model parameters, finding the corresponding sign changes of the TMR in certain transport regimes. The mechanisms leading to these effects are thoroughly discussed.

  12. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an $l_\infty$ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
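
    For the soft-constraint side, the quantity of interest is the probability that a requirement g(p) <= 0 is violated over the uncertainty set. The sketch below estimates it by plain Monte Carlo over a componentwise bounded box with a Hoeffding-style confidence bound; the constraint is hypothetical, and the paper's hybrid of closed-form bounds plus conditional sampling is considerably sharper.

        # A minimal sketch of estimating a soft-constraint violation
        # probability over a componentwise bounded uncertainty box; the
        # constraint g below is hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)

        def g(p):                                  # requirement: g(p) <= 0
            return p[..., 0] ** 2 + 0.5 * p[..., 1] - 1.0

        lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # parameter box
        N = 100_000
        p = rng.uniform(lo, hi, size=(N, 2))
        p_hat = (g(p) > 0.0).mean()
        eps = np.sqrt(np.log(1.0 / 0.01) / (2.0 * N))  # 99% Hoeffding width
        print(f"P(violation) ~ {p_hat:.4f} (<= {p_hat + eps:.4f} w.p. 0.99)")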

  13. Ab Initio Optimized Effective Potentials for Real Molecules in Optical Cavities: Photon Contributions to the Molecular Ground State

    PubMed Central

    2018-01-01

    We introduce a simple scheme to efficiently compute photon exchange-correlation contributions due to the coupling to transversal photons as formulated in the newly developed quantum-electrodynamical density-functional theory (QEDFT) [1-5]. Our construction employs the optimized-effective potential (OEP) approach by means of the Sternheimer equation to avoid the explicit calculation of unoccupied states. We demonstrate the efficiency of the scheme by applying it to an exactly solvable GaAs quantum ring model system, a single azulene molecule, and chains of sodium dimers, all located in optical cavities and described in full real space. While the first example is a two-dimensional system and allows us to benchmark the employed approximations, the latter two examples demonstrate that the correlated electron-photon interaction appreciably distorts the ground-state electronic structure of a real molecule. By using this scheme, we not only construct typical electronic observables, such as the electronic ground-state density, but also illustrate how photon observables, such as the photon number, and mixed electron-photon observables, for example, electron-photon correlation functions, become accessible in a density-functional theory (DFT) framework. This work constitutes the first three-dimensional ab initio calculation within the new QEDFT formalism and thus opens up a new computational route for the ab initio study of correlated electron-photon systems in quantum cavities. PMID:29594185

  14. Body Bias usage in UTBB FDSOI designs: A parametric exploration approach

    NASA Astrophysics Data System (ADS)

    Puschini, Diego; Rodas, Jorge; Beigne, Edith; Altieri, Mauricio; Lesecq, Suzanne

    2016-03-01

    Some years ago, UTBB FDSOI appeared on the horizon of low-power circuit designers. With the 14 nm and 10 nm nodes in the road-map, the industrialized 28 nm platform promises highly efficient designs with an Ultra-Wide Voltage Range (UWVR) thanks to extended Body Bias properties. From the power management perspective, this new opportunity is considered a new degree of freedom in addition to classic Dynamic Voltage Scaling (DVS), increasing the complexity of the power optimization problem at design time. However, so far no formal or empirical tool allows early evaluation of the real need for a Dynamic Body Bias (DBB) mechanism in future designs. This paper presents a parametric exploration approach that analyzes the benefits of using Body Bias in 28 nm UTBB FDSOI circuits. The exploration is based on electrical simulations of a ring-oscillator structure. These experiments show that a Body Bias strategy is not always required, but they underline the large power reduction that can be achieved when it is. Results are summarized in order to help designers analyze how to choose the best dynamic power management strategy for a given set of operating conditions in terms of temperature, circuit activity and process choice. This exploration contributes to the identification of conditions that make DBB more efficient than DVS, and vice versa, and of cases where both methods are mandatory to optimize power consumption.

  15. Tunneling and speedup in quantum optimization for permutation-symmetric problems

    DOE PAGES

    Muthukrishnan, Siddharth; Albash, Tameem; Lidar, Daniel A.

    2016-07-21

    Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it yields an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Lastly, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.

  16. Formalising multidisciplinary peer review: developing a haematological malignancy-specific electronic proforma and standard operating procedure to facilitate procedural efficiency and evidence-based clinical practice.

    PubMed

    Trotman, Judith; Trinh, Jimmy; Kwan, Yiu Lam; Estell, Jane A; Fletcher, Julie; Archer, Kate; Lee, Kenneth; Foo, Kerwin; Curnow, Jennifer; Bianchi, Alessandra; Wignall, Lynda; Verner, Emma; Gasiorowski, Robin; Siedlecka, Elizabeth; Cunningham, Ilona

    2017-05-01

    Multidisciplinary team (MDT) meetings aimed at facilitating peer review have become standard practice in oncology. However, there is scant literature on the optimal structure and conduct of such meetings. Our aims were to develop a process for formal peer review of patients with haematological malignancies and to audit any resulting changes made to the management recommendations of the treating physician. A standard operating procedure (SOP) for MDT meetings was developed, essentially to integrate clinical peer review with weekly pathology and radiology meetings. The centrepiece is the electronic submission of a patient-specific proforma (Microsoft InfoPath) prior to the meeting. It serves as the template for presentation, discussion and recording of recommendations and conclusions. The final verified document is stored in the electronic patient record, and a copy is sent to the general practitioner. The proposed management plans were compared to the consensus recommendations of the meeting for the first 4 years since inception. Both the SOP and the proforma underwent continual improvement. These provided the framework for the conduct of a robust weekly MDT meeting for peer review of the management of patients with haematological malignancies. On 20% of occasions, patient management plans were altered to optimise patient care as a direct consequence of peer review at the MDT. Our streamlined process, in its ultimate format, has provided a mature and efficient forum for formal peer review in a genuine multidisciplinary environment. Both initial data and informal feedback support its ongoing activity as an integral component of delivering quality patient care. © 2016 Royal Australasian College of Physicians.

  17. Evaluation of a Dispatcher's Route Optimization Decision Aid to Avoid Aviation Weather Hazards

    NASA Technical Reports Server (NTRS)

    Dorneich, Michael C.; Olofinboba, Olu; Pratt, Steve; Osborne, Dannielle; Feyereisen, Thea; Latorella, Kara

    2003-01-01

    This document describes the results and analysis of the formal evaluation plan for the Honeywell software tool developed under the NASA AWIN (Aviation Weather Information) 'Weather Avoidance using Route Optimization as a Decision Aid' project. The software tool aims to provide airline dispatchers with a decision aid for selecting optimal routes that avoid weather and other hazards. This evaluation compares and contrasts route selection performance with the AWIN tool to that of subjects using a more traditional dispatcher environment. The evaluation assesses gains in safety, in the fuel efficiency of planned routes, and in the time efficiency of the pre-flight dispatch process through the use of the AWIN decision aid. In addition, we are interested in how this AWIN tool affects constructs that can be related to performance. The constructs of Situation Awareness (SA), workload, trust in an information system, and operator acceptance are assessed using established scales, where these exist, as well as through the evaluation of questionnaire responses and subject comments. The intention of the experiment is to set up a simulated operations area for the dispatchers to work in. They will be given scenarios in which they are presented with stored company routes for a particular city-pair and aircraft type. A diverse set of external weather information sources is represented by a stand-alone display (MOCK), containing the actual historical weather data typically used by dispatchers. There is also the possibility of presenting selected weather data on the route visualization tool. The company routes have not been modified to avoid the weather, except in the case of one additional route generated by the Honeywell prototype flight planning system. The dispatcher will be required to choose the most appropriate and efficient flight plan route in the displayed weather conditions. The route may be modified manually or may be chosen from those automatically displayed.

  18. Binaural noise reduction via cue-preserving MMSE filter and adaptive-blocking-based noise PSD estimation

    NASA Astrophysics Data System (ADS)

    Azarpour, Masoumeh; Enzner, Gerald

    2017-12-01

    Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
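
    The cue-preservation requirement is what forces a common gain: if the same real-valued gain multiplies the left and right short-time spectra, the interaural transfer function of the target is untouched. The sketch below applies this idea with a plain Wiener-style gain and a noise PSD taken from a known noise-only segment; it stands in for, but is much simpler than, the paper's MMSE filter with blocking-based noise PSD estimation.

        # A minimal sketch of a common-gain Wiener filter for binaural
        # input: one gain per time-frequency bin, applied identically to
        # both ears so interaural cues are preserved. The noise-only
        # segment used for the noise PSD is an assumption of this sketch.
        import numpy as np
        from scipy.signal import stft, istft

        def binaural_wiener(left, right, noise_l, noise_r, fs, g_min=0.1):
            f, t, L = stft(left, fs, nperseg=512)
            _, _, R = stft(right, fs, nperseg=512)
            _, _, NL = stft(noise_l, fs, nperseg=512)
            _, _, NR = stft(noise_r, fs, nperseg=512)
            noise_psd = 0.5 * (np.mean(np.abs(NL) ** 2, axis=1)
                               + np.mean(np.abs(NR) ** 2, axis=1))[:, None]
            noisy_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
            snr = np.maximum(noisy_psd / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
            gain = np.maximum(snr / (1.0 + snr), g_min)  # shared by both ears
            _, out_l = istft(gain * L, fs, nperseg=512)
            _, out_r = istft(gain * R, fs, nperseg=512)
            return out_l, out_r

        # Synthetic demo: a tone in independent noise on each ear.
        fs = 16000
        rng = np.random.default_rng(1)
        noise = rng.normal(size=(2, fs))
        tone = np.sin(2 * np.pi * 440.0 * np.arange(fs) / fs)
        out_l, out_r = binaural_wiener(tone + noise[0], tone + noise[1],
                                       noise[0], noise[1], fs)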

  19. Modeling Business Processes of the Social Insurance Fund in Information System Runa WFE

    NASA Astrophysics Data System (ADS)

    Kataev, M. Yu; Bulysheva, L. A.; Xu, Li D.; Loseva, N. V.

    2016-08-01

    Introduction - Business processes are gradually becoming a tool that allows organizations to deploy employees at a new level and to make document management systems more efficient. Most existing work, and the largest number of publications, address these directions. However, business processes are still poorly implemented in public institutions, where it is very difficult to formalize the main existing processes. We attempt to build a system of business processes for a state agency, the Russian Social Insurance Fund (SIF), where virtually all processes, given different inputs, have the same output: a public service. The parameters of the state services (as a rule, time limits) are set by state laws and regulations. The article provides a brief overview of the SIF, the formulation of requirements for business processes, the justification of the choice of software for modeling business processes, the creation of models in the Runa WFE system, and the optimization of a model of one of the main business processes of the SIF. The result of the work in Runa WFE is an optimized model of the business process of the SIF.

  1. n-D shape/texture optimal synthetic description and modeling by GEOGINE

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco F.

    2004-12-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for multidimensional shape/texture optimal synthetic description and learning, is presented. Robust characterization of elementary geometric shapes subjected to geometric transformations, on a rigorous mathematical level, is a key problem in many computer applications in different areas of interest. The past four decades have seen solutions based almost entirely on the use of n-dimensional Moment and Fourier descriptor invariants. The present paper introduces a new approach for automatic model generation based on n-dimensional Tensor Invariants as a formal dictionary. An ontological model is the kernel used for specifying ontologies, so that how close an ontology can come to the real world depends on the possibilities offered by the ontological model. With this approach, even chromatic information content can be easily and reliably decoupled from target geometric information and computed into robust colour shape parameter attributes. The main GEOGINE operational advantages over previous approaches are: 1) Automated Model Generation, 2) an Invariant Minimal Complete Set for computational efficiency, and 3) Arbitrary Model Precision for robust object description.

  2. Group adaptation, formal darwinism and contextual analysis.

    PubMed

    Okasha, S; Paternotte, C

    2012-06-01

    We consider the question: under what circumstances can the concept of adaptation be applied to groups, rather than individuals? Gardner and Grafen (2009, J. Evol. Biol.22: 659-671) develop a novel approach to this question, building on Grafen's 'formal Darwinism' project, which defines adaptation in terms of links between evolutionary dynamics and optimization. They conclude that only clonal groups, and to a lesser extent groups in which reproductive competition is repressed, can be considered as adaptive units. We re-examine the conditions under which the selection-optimization links hold at the group level. We focus on an important distinction between two ways of understanding the links, which have different implications regarding group adaptationism. We show how the formal Darwinism approach can be reconciled with G.C. Williams' famous analysis of group adaptation, and we consider the relationships between group adaptation, the Price equation approach to multi-level selection, and the alternative approach based on contextual analysis. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.

  3. Theoretical Foundations of Wireless Networks

    DTIC Science & Technology

    2015-07-22

    The goal of this project is to develop a formal theory of wireless networks providing a scientific basis to understand randomness and optimality. Randomness, in the form of fading, is a defining characteristic of wireless networks. Optimality is a suitable design criterion.

  4. Space Radiation Transport Code Development: 3DHZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The space radiation transport code, HZETRN, has been used extensively for research, vehicle design optimization, risk analysis, and related applications. One of the simplifying features of the HZETRN transport formalism is the straight-ahead approximation, wherein all particles are assumed to travel along a common axis. This reduces the governing equation to one spatial dimension allowing enormous simplification and highly efficient computational procedures to be implemented. Despite the physical simplifications, the HZETRN code is widely used for space applications and has been found to agree well with fully 3D Monte Carlo simulations in many circumstances. Recent work has focused on the development of 3D transport corrections for neutrons and light ions (Z < 2) for which the straight-ahead approximation is known to be less accurate. Within the development of 3D corrections, well-defined convergence criteria have been considered, allowing approximation errors at each stage in model development to be quantified. The present level of development assumes the neutron cross sections have an isotropic component treated within N explicit angular directions and a forward component represented by the straight-ahead approximation. The N = 1 solution refers to the straight-ahead treatment, while N = 2 represents the bi-directional model in current use for engineering design. The figure below shows neutrons, protons, and alphas for various values of N at locations in an aluminum sphere exposed to a solar particle event (SPE) spectrum. The neutron fluence converges quickly in simple geometry with N > 14 directions. The improved code, 3DHZETRN, transports neutrons, light ions, and heavy ions under space-like boundary conditions through general geometry while maintaining a high degree of computational efficiency. A brief overview of the 3D transport formalism for neutrons and light ions is given, and extensive benchmarking results with the Monte Carlo codes Geant4, FLUKA, and PHITS are provided for a variety of boundary conditions and geometries. Improvements provided by the 3D corrections are made clear in the comparisons. Developments needed to connect 3DHZETRN to vehicle design and optimization studies will be discussed. Future theoretical development will relax the forward plus isotropic interaction assumption to more general angular dependence.

  5. The 2 Es

    ERIC Educational Resources Information Center

    Kroog, Heidi; Hess, Kristin King; Ruiz-Primo, Maria Araceli

    2016-01-01

    What are the characteristics of formal formative assessments that are both effective in improving student learning and an efficient use of a teacher's time and effort? That is the question the authors explore in this article, drawing on a five-year research study. First, formal formative assessment is defined as being planned in advance,…

  6. Development of a web based monitoring system for safety and activity analysis in operating theatres.

    PubMed

    Frosini, Francesco; Miniati, Roberto; Avezzano, Paolo; Cecconi, Giulio; Dori, Fabrizio; Gentili, Guido Biffi; Belardinelli, Andrea

    2016-01-01

    Hospital management monitors operating rooms with the objective of optimizing their use and maximizing internal safety. The costs of safe operation, alongside reimbursements from surgical activity, are important factors in the analysis of a medical facility. Since safety cannot be reduced, supporting systems are needed to enhance and optimize the use of the rooms. The operating-room analysis model developed in this study is based on specific performance indicators and allows effective monitoring of both the parameters that influence safety (environmental and microbiological parameters) and those that influence efficiency of use (occupancy rate, delays, required formalities, etc.). This provides a systematic dashboard covering all of the operating theatres, supporting the organization of intervention schedules and targeted improvements. A monitoring dashboard was implemented, accessible from any platform and any device and capable of aggregating hospital information. The organizational changes undertaken with the help of the dashboard yielded an average saving of 29.52 minutes per intervention and a 5% increase in operating-room utilization. The increased occupancy rate and the optimization of the operating rooms produced savings of around $299.88 per intervention carried out in 2013, corresponding to annual savings of $343,362.60. Integration dashboards such as the prototype proposed in this study represent a governance model for economically sustainable healthcare systems, capable of guiding hospital management in choosing and implementing the most efficient organizational changes.

  7. Optimally cloned binary coherent states

    NASA Astrophysics Data System (ADS)

    Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.

    2017-10-01

    Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous-variable states to map binary phase-shift-keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal cloner.
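
    As an illustration of the two-dimensionality the abstract exploits, the following sketch (a minimal numerical check, not the authors' cloning construction; the truncation level and amplitude are arbitrary) builds |±α⟩ in a truncated Fock basis, reproduces the overlap ⟨α|−α⟩ = e^(−2|α|²), and orthonormalizes the alphabet into an even/odd qubit basis:

        import numpy as np
        from math import factorial

        def coherent(alpha, nmax=40):
            # Coherent-state amplitudes c_n = e^{-|a|^2/2} a^n / sqrt(n!) in a truncated Fock basis
            n = np.arange(nmax)
            return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([float(factorial(k)) for k in n])

        alpha = 1.2
        plus, minus = coherent(alpha), coherent(-alpha)

        # The binary alphabet is non-orthogonal: <alpha|-alpha> = exp(-2|alpha|^2)
        print(plus @ minus, np.exp(-2 * alpha**2))

        # Orthonormal qubit (Bloch-sphere) basis spanning the alphabet: even/odd superpositions
        even = (plus + minus) / np.linalg.norm(plus + minus)
        odd  = (plus - minus) / np.linalg.norm(plus - minus)
        print(even @ odd)   # ~0: the alphabet really lives in a 2D subspace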

  8. A comparison between state-specific and linear-response formalisms for the calculation of vertical electronic transition energy in solution with the CCSD-PCM method.

    PubMed

    Caricato, Marco

    2013-07-28

    The calculation of vertical electronic transition energies of molecular systems in solution with accurate quantum mechanical methods requires the use of approximate and yet reliable models to describe the effect of the solvent on the electronic structure of the solute. The polarizable continuum model (PCM) of solvation represents a computationally efficient way to describe this effect, especially when combined with coupled cluster (CC) methods. Two formalisms are available to compute transition energies within the PCM framework: State-Specific (SS) and Linear-Response (LR). The former provides a more complete account of the solute-solvent polarization in the excited states, while the latter is computationally very efficient (i.e., comparable to gas phase) and transition properties are well defined. In this work, I review the theory for the two formalisms within CC theory with a focus on their computational requirements, and present the first implementation of the LR-PCM formalism with the coupled cluster singles and doubles method (CCSD). Transition energies computed with LR- and SS-CCSD-PCM are presented, as well as a comparison between solvation models in the LR approach. The numerical results show that the two formalisms provide different absolute values of transition energy, but similar relative solvatochromic shifts (from nonpolar to polar solvents). The LR formalism may then be used to explore the solvent effect on multiple states and evaluate transition probabilities, while the SS formalism may be used to refine the description of specific states and for the exploration of excited state potential energy surfaces of solvated systems.

  9. High-Performance Solid-State Thermionic Energy Conversion Based on 2D van der Waals Heterostructures: A First-Principles Study.

    PubMed

    Wang, Xiaoming; Zebarjadi, Mona; Esfarjani, Keivan

    2018-06-18

    Two-dimensional (2D) van der Waals heterostructures (vdWHs) have shown multiple functionalities with great potential in electronics and photovoltaics. Here, we show their potential for solid-state thermionic energy conversion and demonstrate a designing strategy towards high-performance devices. We propose two promising thermionic devices, namely, the p-type Pt-G-WSe 2 -G-Pt and n-type Sc-WSe 2 -MoSe 2 -WSe 2 -Sc. We characterize the thermionic energy conversion performance of the latter using first-principles GW calculations combined with real space Green's function (GF) formalism. The optimal barrier height and high thermal resistance lead to an excellent performance. The proposed device is found to have a room temperature equivalent figure of merit of 1.2 which increases to 3 above 600 K. A high performance with cooling efficiency over 30% of the Carnot efficiency above 450 K is achieved. Our designing and characterization method can be used to pursue other potential thermionic devices based on vdWHs.

  10. Flat-Top Sector Beams Using Only Array Element Phase Weighting: A Metaheuristic Optimization Approach

    DTIC Science & Technology

    2012-10-10

    Formal report by Irwin D. Olin (Sotera Defense Solutions, Inc. / Naval…); manuscript approved June 30, 2012.

  11. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to the lack of data relative to the size of the signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
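
    The reformulation can be sketched with a toy example. Below, a hypothetical two-step cascade (stimulus → A → B) is modeled with normalized Hill transfer functions, a common choice in constrained fuzzy logic, and its parameters are fitted to made-up readouts as a smooth, bounded nonlinear program; this is a minimal illustration of the idea, not the authors' actual pipeline or dataset:

        import numpy as np
        from scipy.optimize import minimize

        def hill(x, k, n):
            # Normalized Hill transfer function on [0, 1]: hill(0) = 0, hill(1) = 1
            return (x**n / (k**n + x**n)) * (k**n + 1.0)

        stim   = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
        data_B = np.array([0.0, 0.30, 0.70, 0.88, 0.95])   # hypothetical measurements

        def objective(p):
            kA, nA, kB, nB = p
            pred_B = hill(hill(stim, kA, nA), kB, nB)      # propagate stimulus -> A -> B
            return np.sum((pred_B - data_B)**2)            # least-squares fit error

        res = minimize(objective, x0=[0.5, 2.0, 0.5, 2.0],
                       bounds=[(0.05, 1.0), (1.0, 5.0)] * 2)
        print(res.x, res.fun)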

  12. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to the lack of data relative to the size of the signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  13. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    DOE PAGES

    Song, Chenchen; Martinez, Todd J.

    2017-08-29

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. Furthermore, the resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
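
    The structure of the THC factorization itself is easy to show in a few lines. The sketch below builds a toy (ia|jb)-like tensor from random THC factors; in the real method the factors are fitted to the actual integrals and the dimensions are set by the basis and grid, so everything here is illustrative:

        import numpy as np

        n, P = 8, 20                     # toy orbital and grid (collocation) dimensions
        rng = np.random.default_rng(1)
        Xo, Xv = rng.random((n, P)), rng.random((n, P))   # occupied / virtual collocation factors
        Z = rng.random((P, P))                            # central THC core matrix

        # THC form: (ia|jb) ~ sum_{PQ} Xo[i,P] Xv[a,P] Z[P,Q] Xo[j,Q] Xv[b,Q].
        # Downstream contractions can stay in this factored form instead of ever
        # building the 4-index tensor, which is what lowers the formal scaling
        # of the MP2 / SOS-MP2 energy and gradient.
        eri = np.einsum('iP,aP,PQ,jQ,bQ->iajb', Xo, Xv, Z, Xo, Xv)
        print(eri.shape)                 # (8, 8, 8, 8)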

  14. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2017-10-01

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.

  15. Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Chenchen; Martinez, Todd J.

    Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. Furthermore, the resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.

  16. Quasivelocities and Optimal Control for underactuated Mechanical Systems

    NASA Astrophysics Data System (ADS)

    Colombo, L.; de Diego, D. Martín

    2010-07-01

    This paper is concerned with the application of the theory of quasivelocities to optimal control of underactuated mechanical systems. Using this theory, we convert the original problem into a constrained variational problem for a second-order Lagrangian system. The equations of motion are geometrically derived using an adaptation of the classical Skinner and Rusk formalism.

  17. Quasivelocities and Optimal Control for underactuated Mechanical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colombo, L.; Martin de Diego, D.

    2010-07-28

    This paper is concerned with the application of the theory of quasivelocities to optimal control of underactuated mechanical systems. Using this theory, we convert the original problem into a constrained variational problem for a second-order Lagrangian system. The equations of motion are geometrically derived using an adaptation of the classical Skinner and Rusk formalism.

  18. Parent Involvement in Education in Terms of Their Socio-Economic Status

    ERIC Educational Resources Information Center

    Kuru Cetin, Saadet; Taskin, Pelin

    2016-01-01

    Problem Statement: Increasing the quality of education and educating well-qualified students is one of the most important objectives of formal education. Informal resources are as important as formal resources in improving this efficiency and productivity. In this respect, it can be said that family is the most important informal structure…

  19. Assessing, Recognising and Certifying Informal and Non-Formal Learning (ARCNIL): Evolution and Challenges

    ERIC Educational Resources Information Center

    Svetlik, Ivan

    2009-01-01

    Certifying non-formal and informal knowledge may be a consequence of separating education and training from other social and economic activities. Specialisation and formalisation of education and training both aim to increase learning efficiency. In the emerging knowledge society, this has attracted particular attention among researchers and…

  20. A Formal Algorithm for Routing Traces on a Printed Circuit Board

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    1996-01-01

    This paper addresses the classical problem of printed circuit board routing: that is, automatic routing by computer without resorting to brute force, whose execution time grows exponentially with the complexity of the board. Most existing solutions are either inexpensive but neither efficient nor fast, or efficient and fast but very costly. Many solutions are proprietary, so little is written or known about the actual algorithms on which they are based. This paper presents a formal algorithm for routing traces on a printed circuit board. The solution presented is very fast and efficient and, for the first time, addresses the question by way of symbolic statements.
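
    The paper's own algorithm is not reproduced in the record above; as a point of reference, the classical baseline for this problem is a Lee-style maze router, which finds a shortest rectilinear path by breadth-first search over the board grid. The sketch below is that textbook baseline (grid, endpoints, and obstacles are made up), not Hedgley's formal algorithm:

        from collections import deque

        def lee_route(grid, start, goal):
            # Lee-style maze routing: BFS over a grid where 1 = blocked cell.
            # Returns a shortest path of (row, col) cells, or None if unroutable.
            rows, cols = len(grid), len(grid[0])
            prev = {start: None}
            q = deque([start])
            while q:
                cell = q.popleft()
                if cell == goal:                      # walk predecessors back to start
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = prev[cell]
                    return path[::-1]
                r, c = cell
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    nr, nc = nxt
                    if 0 <= nr < rows and 0 <= nc < cols \
                       and grid[nr][nc] == 0 and nxt not in prev:
                        prev[nxt] = cell
                        q.append(nxt)
            return None

        grid = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0]]
        print(lee_route(grid, (0, 0), (2, 0)))   # routes around the blocked row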

  1. Evaluating efficiency-equality tradeoffs for mobile source control strategies in an urban area

    PubMed Central

    Levy, Jonathan I.; Greco, Susan L.; Melly, Steven J.; Mukhi, Neha

    2013-01-01

    In environmental risk management, there are often interests in maximizing public health benefits (efficiency) and addressing inequality in the distribution of health outcomes. However, both dimensions are not generally considered within a single analytical framework. In this study, we estimate both total population health benefits and changes in quantitative indicators of health inequality for a number of alternative spatial distributions of diesel particulate filter retrofits across half of an urban bus fleet in Boston, Massachusetts. We focus on the impact of emissions controls on primary fine particulate matter (PM2.5) emissions, modeling the effect on PM2.5 concentrations and premature mortality. Given spatial heterogeneity in baseline mortality rates, we apply the Atkinson index and other inequality indicators to quantify changes in the distribution of mortality risk. Across the different spatial distributions of control strategies, the public health benefits varied by more than a factor of two, related to factors such as mileage driven per day, population density near roadways, and baseline mortality rates in exposed populations. Changes in health inequality indicators varied across control strategies, with the subset of optimal strategies considering both efficiency and equality generally robust across different parametric assumptions and inequality indicators. Our analysis demonstrates the viability of formal analytical approaches to jointly address both efficiency and equality in risk assessment, providing a tool for decision-makers who wish to consider both issues. PMID:18793281
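
    The Atkinson index used above has a closed form that is simple to compute. A minimal implementation follows (the risk values are hypothetical; ε is the inequality-aversion parameter, with the geometric-mean limit at ε = 1):

        import numpy as np

        def atkinson(x, eps):
            # Atkinson index: 1 - (equally-distributed-equivalent mean) / (arithmetic mean)
            x = np.asarray(x, dtype=float)
            if eps == 1.0:
                ede = np.exp(np.mean(np.log(x)))                  # eps -> 1 limit: geometric mean
            else:
                ede = np.mean(x**(1.0 - eps))**(1.0 / (1.0 - eps))
            return 1.0 - ede / x.mean()

        risk = [1.0, 1.2, 0.8, 3.0]          # hypothetical per-area mortality risks
        print(atkinson(risk, 0.5), atkinson(risk, 1.0))   # index grows with aversion eps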

  2. Data-driven non-Markovian closure models

    NASA Astrophysics Data System (ADS)

    Kondrashov, Dmitri; Chekroun, Mickaël D.; Ghil, Michael

    2015-03-01

    This paper has two interrelated foci: (i) obtaining stable and efficient data-driven closure models by using a multivariate time series of partial observations from a large-dimensional system; and (ii) comparing these closure models with the optimal closures predicted by the Mori-Zwanzig (MZ) formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a generalization and a time-continuous limit of existing multilevel, regression-based approaches to closure in a data-driven setting; these approaches include empirical model reduction (EMR), as well as more recent multi-layer modeling. It is shown that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the MZ formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are derived on the structure of the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a broad class of MSM applications, a class that includes non-polynomial predictors and nonlinearities that do not necessarily preserve quadratic energy invariants. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. It is shown that the resulting closure model with energy-conserving nonlinearities efficiently captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The challenges here include the rarity of strange attractors in the model's parameter space and the existence of multiple attractor basins with fractal boundaries. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up.
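
    The multilayer idea can be caricatured in code. The sketch below integrates a two-level linear MSM in which the observed variable x is driven by one hidden residual layer r; all coefficients are hypothetical, and the real EMR-MSM models are fitted to data rather than prescribed:

        import numpy as np

        rng = np.random.default_rng(0)
        dt, nsteps = 0.01, 20000

        a, b = -1.0, 1.0              # main-level drift and coupling to the hidden layer
        g, c, s = -2.0, -0.5, 0.8     # hidden-layer relaxation, feedback, noise amplitude

        x, r = 0.0, 0.0
        traj = np.empty(nsteps)
        for k in range(nsteps):
            # Main level: dx = (a x + b r) dt  -- r plays the role of the GLE memory term
            x += dt * (a * x + b * r)
            # Hidden level: dr = (g r + c x) dt + s dW  -- white noise enters only here
            r += dt * (g * r + c * x) + s * np.sqrt(dt) * rng.standard_normal()
            traj[k] = x

        print("mean/std of the closed model:", traj.mean(), traj.std())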

  3. Privatization of solid waste collection services: Lessons from Gaborone.

    PubMed

    Bolaane, Benjamin; Isaac, Emmanuel

    2015-06-01

    Formal privatization of solid waste collection activities has often been flagged as a suitable intervention for some of the challenges of solid waste management experienced by developing countries. Proponents of outsourcing collection to the private sector argue that, in contrast to the public sector, it is more effective and efficient in delivering services. This essay is a comparative case study of efficiency and effectiveness attributes between the public and the formal private sector in relation to the collection of commercial waste in Gaborone. The paper is based on analysis of secondary data and key informant interviews. It was found that, while the private sector performed comparatively well in most of the chosen indicators of efficiency and effectiveness, the public sector also had areas of competitive advantage. For instance, the private sector used its collection crews more efficiently, while the public sector was found to have a more reliable workforce. The study recommends that formal private sector participation in waste collection, although it has positive effects on the quality of service rendered, should in most developing countries be reinforced by building sufficient capacity within the public sector regarding information about contracted-out services and the evaluation of performance criteria within the contracting process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Towards a formal semantics for Ada 9X

    NASA Technical Reports Server (NTRS)

    Guaspari, David; Mchugh, John; Wolfgang, Polak; Saaltink, Mark

    1995-01-01

    The Ada 9X language precision team was formed during the revisions of Ada 83, with the goal of analyzing the proposed design, identifying problems, and suggesting improvements, through the use of mathematical models. This report defines a framework for formally describing Ada 9X, based on Kahn's 'natural semantics', and applies the framework to portions of the language. The proposals for exceptions and optimization freedoms are also analyzed, using a different technique.

  5. Abstract shapes of RNA.

    PubMed

    Giegerich, Robert; Voss, Björn; Rehmsmeier, Marc

    2004-01-01

    The function of a non-protein-coding RNA is often determined by its structure. Since experimental determination of RNA structure is time-consuming and expensive, its computational prediction is of great interest, and efficient solutions based on thermodynamic parameters are known. Frequently, however, the predicted minimum free energy structures are not the native ones, leading to the necessity of generating suboptimal solutions. While this can be accomplished by a number of programs, the user is often confronted with large outputs of similar structures, although he or she is interested in structures with more fundamental differences, or, in other words, with different abstract shapes. Here, we formalize the concept of abstract shapes and introduce their efficient computation. Each shape of an RNA molecule comprises a class of similar structures and has a representative structure of minimal free energy within the class. Shape analysis is implemented in the program RNAshapes. We applied RNAshapes to the prediction of optimal and suboptimal abstract shapes of several RNAs. For a given energy range, the number of shapes is considerably smaller than the number of structures, and in all cases, the native structures were among the top shape representatives. This demonstrates that the researcher can quickly focus on the structures of interest, without processing up to thousands of near-optimal solutions. We complement this study with a large-scale analysis of the growth behaviour of structure and shape spaces. RNAshapes is available for download and as an online version on the Bielefeld Bioinformatics Server.
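
    The abstraction itself can be sketched compactly. The function below maps a dot-bracket secondary structure to a coarse shape string by ignoring unpaired bases and collapsing stacked (singly nested) helices, in the spirit of the most abstract shape level; it is a simplified illustration, not the RNAshapes implementation:

        def shape(db):
            # Parse dot-bracket into a forest: '(' opens a child node, ')' closes it,
            # '.' (unpaired) is ignored at this abstraction level.
            root = []
            stack = [root]
            for ch in db:
                if ch == '(':
                    node = []
                    stack[-1].append(node)
                    stack.append(node)
                elif ch == ')':
                    stack.pop()

            def sh(children, is_root=False):
                if not children:                        # innermost helix -> '[]'
                    return '' if is_root else '[]'
                if len(children) == 1 and not is_root:  # stacked helix / bulge: collapse
                    return sh(children[0])
                inner = ''.join(sh(c) for c in children)
                return inner if is_root else '[' + inner + ']'

            return sh(root, is_root=True)

        print(shape('((..((...))..((...))..))'))   # -> [[][]]
        print(shape('(((...)))..((...))'))         # -> [][]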

  6. The Role of the Board of Education in the Process of Resource Allocation for Public Schools.

    ERIC Educational Resources Information Center

    Chichura, Elaine Marie

    Public schools as formal organizations have broad-based goals, limited resources, and a formal hierarchy with which to manage the goal achievement process. The board of education combines this organization's economic and political dimensions to provide a thorough, efficient education for all children in the state. This paper investigates the…

  7. Formal Solutions for Polarized Radiative Transfer. III. Stiffness and Instability

    NASA Astrophysics Data System (ADS)

    Janett, Gioele; Paganini, Alberto

    2018-04-01

    Efficient numerical approximation of the polarized radiative transfer equation is challenging because this system of ordinary differential equations exhibits stiff behavior, which potentially results in numerical instability. This negatively impacts the accuracy of formal solvers, and small step-sizes are often necessary to retrieve physical solutions. This work presents stability analyses of formal solvers for the radiative transfer equation of polarized light, identifies instability issues, and suggests practical remedies. In particular, the assumptions and the limitations of the stability analysis of Runge–Kutta methods play a crucial role. On this basis, a suitable and pragmatic formal solver is outlined and tested. An insightful comparison to the scalar radiative transfer equation is also presented.
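
    The instability mechanism can be demonstrated with the standard stiff test equation y' = -λy, a stand-in here for a stiff radiative transfer coefficient matrix (the numbers are arbitrary). An explicit method diverges once the step-size leaves its stability region, while an implicit method damps the solution as the true dynamics do:

        lam, h, nsteps = 50.0, 0.1, 40       # h*lam = 5: outside explicit Euler's
        y_exp = y_imp = 1.0                  # stability region |1 - h*lam| <= 1

        for _ in range(nsteps):
            y_exp *= (1.0 - h * lam)         # explicit Euler: amplified by 4 each step
            y_imp /= (1.0 + h * lam)         # implicit (backward) Euler: always damped

        print(f"explicit: {y_exp:.3e}")      # blows up (~ 4^40)
        print(f"implicit: {y_imp:.3e}")      # decays, like the exact solution e^(-lam*t)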

  8. Efficient Research Design: Using Value-of-Information Analysis to Estimate the Optimal Mix of Top-down and Bottom-up Costing Approaches in an Economic Evaluation alongside a Clinical Trial.

    PubMed

    Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee

    2016-04-01

    In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.
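
    For a single normal-normal process, the preposterior quantities the abstract refers to have closed forms. The sketch below computes EVSI and the expected net gain of sampling for an adopt/reject decision on incremental net benefit; it is a univariate illustration with made-up numbers, not the paper's bivariate top-down/bottom-up model:

        import numpy as np
        from scipy.stats import norm

        mu0, sd0 = 500.0, 1000.0        # prior on incremental net benefit (INB)
        sd, n    = 4000.0, 200          # per-patient sampling sd and proposed sample size

        post_var = 1.0 / (1.0 / sd0**2 + n / sd**2)   # posterior variance after the trial
        v = sd0**2 - post_var           # preposterior variance of the posterior mean
        z = mu0 / np.sqrt(v)

        # Per-patient EVSI = E[max(0, posterior mean)] - max(0, prior mean), two options
        evsi = mu0 * norm.cdf(z) + np.sqrt(v) * norm.pdf(z) - max(0.0, mu0)

        pop = 10_000                    # hypothetical number of future patients affected
        sampling_cost = 50.0 * n        # hypothetical data-collection cost
        print("per-patient EVSI:", evsi, " ENGS:", evsi * pop - sampling_cost)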

  9. A linguistic geometry for space applications

    NASA Technical Reports Server (NTRS)

    Stilman, Boris

    1994-01-01

    We develop a formal theory, the so-called Linguistic Geometry, in order to discover the inner properties of human expert heuristics that have been successful in a certain class of complex control systems, and to apply them to different systems. This research relies on the formalization of the search heuristics of highly skilled human experts, which allow for the decomposition of a complex system into a hierarchy of subsystems and thus solve intractable problems by reducing the search. The hierarchy of subsystems is represented as a hierarchy of formal attribute languages. This paper includes a formal survey of Linguistic Geometry and a new example of the solution of an optimization problem for space robotic vehicles. The example includes the actual generation of the hierarchy of languages and some details of trajectory generation, and demonstrates a drastic reduction of search in comparison with conventional search algorithms.

  10. Optimization of inclusive fitness.

    PubMed

    Grafen, Alan

    2006-02-07

    The first fully explicit argument is given that broadly supports a widespread belief among whole-organism biologists that natural selection tends to lead to organisms acting as if maximizing their inclusive fitness. The use of optimization programs permits a clear statement of what this belief should be understood to mean, in contradistinction to the common mathematical presumption that it should be formalized as some kind of Lyapunov or even potential function. The argument reveals new details and uncovers latent assumptions. A very general genetic architecture is allowed, and there is arbitrary uncertainty. However, frequency dependence of fitnesses is not permitted. The logic of inclusive fitness immediately draws together various kinds of intra-genomic conflict, and the concept of 'p-family' is introduced. Inclusive fitness is thus incorporated into the formal Darwinism project, which aims to link the mathematics of motion (difference and differential equations) used to describe gene frequency trajectories with the mathematics of optimization used to describe purpose and design. Important questions remain to be answered in the fundamental theory of inclusive fitness.

  11. Removing interference-based effects from the infrared transflectance spectra of thin films on metallic substrates: a fast and wave optics conform solution.

    PubMed

    Mayerhöfer, Thomas G; Pahlow, Susanne; Hübner, Uwe; Popp, Jürgen

    2018-06-25

    A hybrid formalism combining elements from Kramers-Kronig based analyses and dispersion analysis was developed, which allows removing interference-based effects in the infrared spectra of layers on highly reflecting substrates. To enable highly convenient application, the correction procedure is fully automated and usually requires less than a minute with non-optimized software on a typical office PC. The formalism was tested with both synthetic and experimental spectra of poly(methyl methacrylate) on gold. The results confirmed the usefulness of the formalism: apparent peak ratios as well as the interference fringes in the original spectra were successfully corrected. Accordingly, the introduced formalism makes it possible to use inexpensive and robust highly reflecting substrates for routine infrared spectroscopic investigations of layers or films whose thickness is limited by the requirement that the reflectance absorbance be smaller than about 1. For thicker films the formalism is still useful, but requires estimates for the optical constants.

  12. δM formalism and anisotropic chaotic inflation power spectrum

    NASA Astrophysics Data System (ADS)

    Talebian-Ashkezari, A.; Ahmadi, N.

    2018-05-01

    A new analytical approach to linear perturbations in anisotropic inflation has been introduced in [A. Talebian-Ashkezari, N. Ahmadi and A.A. Abolhasani, JCAP 03 (2018) 001] under the name of δM formalism. In this paper we apply the mentioned approach to a model of anisotropic inflation driven by a scalar field, coupled to the kinetic term of a vector field with a U(1) symmetry. The δM formalism provides an efficient way of computing tensor-tensor, tensor-scalar as well as scalar-scalar 2-point correlations that are needed for the analysis of the observational features of an anisotropic model on the CMB. A comparison between δM results and the tedious calculations using the in-in formalism shows the aptitude of the δM formalism in calculating accurate two-point correlation functions between physical modes of the system.

  13. Mathematical Formalism for Designing Wide-Field X-Ray Telescopes: Mirror Nodal Positions and Detector Tilts

    NASA Technical Reports Server (NTRS)

    Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.

    2011-01-01

    We provide a mathematical formalism for optimizing the mirror nodal positions along the optical axis and the tilt of a commonly employed detector configuration at the focus of a x-ray telescope consisting of nested mirror shells with known mirror surface prescriptions. We adopt the spatial resolution averaged over the field-of-view as the figure of merit M. A more complete description appears in our paper in these proceedings.

  14. Sensitivity enhancement by multiple-contact cross-polarization under magic-angle spinning.

    PubMed

    Raya, J; Hirschinger, J

    2017-08-01

    Multiple-contact cross-polarization (MC-CP) is applied to powder samples of ferrocene and l-alanine under magic-angle spinning (MAS) conditions. The method is described analytically through the density matrix formalism. The combination of a two-step memory function approach and the Anderson-Weiss approximation is found to be particularly useful to derive approximate analytical solutions for single-contact Hartmann-Hahn CP (HHCP) and MC-CP dynamics under MAS. We show that the MC-CP sequence requiring no pulse-shape optimization yields higher polarizations at short contact times than optimized adiabatic passage through the HH condition CP (APHH-CP) when the MAS frequency is comparable to the heteronuclear dipolar coupling, i.e., when APHH-CP through a single sideband matching condition is impossible or difficult to perform. It is also shown that the MC-CP sideband HH conditions are generally much broader than for single-contact HHCP and that efficient polarization transfer at the centerband HH condition can be reintroduced by rotor-asynchronous multiple equilibrations-re-equilibrations with the proton spin bath. Boundary conditions for the successful use of the MC-CP experiment when relying on spin-lattice relaxation for repolarization are also examined. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Sensitivity enhancement by multiple-contact cross-polarization under magic-angle spinning

    NASA Astrophysics Data System (ADS)

    Raya, J.; Hirschinger, J.

    2017-08-01

    Multiple-contact cross-polarization (MC-CP) is applied to powder samples of ferrocene and L-alanine under magic-angle spinning (MAS) conditions. The method is described analytically through the density matrix formalism. The combination of a two-step memory function approach and the Anderson-Weiss approximation is found to be particularly useful to derive approximate analytical solutions for single-contact Hartmann-Hahn CP (HHCP) and MC-CP dynamics under MAS. We show that the MC-CP sequence requiring no pulse-shape optimization yields higher polarizations at short contact times than optimized adiabatic passage through the HH condition CP (APHH-CP) when the MAS frequency is comparable to the heteronuclear dipolar coupling, i.e., when APHH-CP through a single sideband matching condition is impossible or difficult to perform. It is also shown that the MC-CP sideband HH conditions are generally much broader than for single-contact HHCP and that efficient polarization transfer at the centerband HH condition can be reintroduced by rotor-asynchronous multiple equilibrations-re-equilibrations with the proton spin bath. Boundary conditions for the successful use of the MC-CP experiment when relying on spin-lattice relaxation for repolarization are also examined.

  16. Phenomenological theory of collective decision-making

    NASA Astrophysics Data System (ADS)

    Zafeiris, Anna; Koman, Zsombor; Mones, Enys; Vicsek, Tamás

    2017-08-01

    An essential task of groups is to provide efficient solutions for the complex problems they face. Indeed, considerable efforts have been devoted to the question of collective decision-making related to problems involving a single dominant feature. Here we introduce a quantitative formalism for finding the optimal distribution of the group members' competences in the more typical case when the underlying problem is complex, i.e., multidimensional. Thus, we consider teams that are aiming at obtaining the best possible answer to a problem having a number of independent sub-problems. Our approach is based on a generic scheme for the process of evaluating the proposed solutions (i.e., negotiation). We demonstrate that the best performing groups have at least one specialist for each sub-problem - but a far less intuitive result is that finding the optimal solution by the interacting group members requires that the specialists also have some insight into the sub-problems beyond their unique field(s). We present empirical results obtained by using a large-scale database of citations being in good agreement with the above theory. The framework we have developed can easily be adapted to a variety of realistic situations since taking into account the weights of the sub-problems, the opinions or the relations of the group is straightforward. Consequently, our method can be used in several contexts, especially when the optimal composition of a group of decision-makers is designed.

  17. Designing a Broadband Pump for High-Quality Micro-Lasers via Modified Net Radiation Method.

    PubMed

    Nechayev, Sergey; Reusswig, Philip D; Baldo, Marc A; Rotschild, Carmel

    2016-12-07

    High-quality micro-lasers are key ingredients in non-linear optics, communication, sensing and low-threshold solar-pumped lasers. However, such micro-lasers exhibit negligible absorption of free-space broadband pump light. Recently, this limitation was lifted by cascade energy transfer, in which the absorption and quality factor are modulated with wavelength, enabling non-resonant pumping of high-quality micro-lasers and solar-pumped laser to operate at record low solar concentration. Here, we present a generic theoretical framework for modeling the absorption, emission and energy transfer of incoherent radiation between cascade sensitizer and laser gain media. Our model is based on linear equations of the modified net radiation method and is therefore robust, fast converging and has low complexity. We apply this formalism to compute the optimal parameters of low-threshold solar-pumped lasers. It is revealed that the interplay between the absorption and self-absorption of such lasers defines the optimal pump absorption below the maximal value, which is in contrast to conventional lasers for which full pump absorption is desired. Numerical results are compared to experimental data on a sensitized Nd3+:YAG cavity, and quantitative agreement with theoretical models is found. Our work modularizes the gain and sensitizing components and paves the way for the optimal design of broadband-pumped high-quality micro-lasers and efficient solar-pumped lasers.

  18. Designing a Broadband Pump for High-Quality Micro-Lasers via Modified Net Radiation Method

    PubMed Central

    Nechayev, Sergey; Reusswig, Philip D.; Baldo, Marc A.; Rotschild, Carmel

    2016-01-01

    High-quality micro-lasers are key ingredients in non-linear optics, communication, sensing and low-threshold solar-pumped lasers. However, such micro-lasers exhibit negligible absorption of free-space broadband pump light. Recently, this limitation was lifted by cascade energy transfer, in which the absorption and quality factor are modulated with wavelength, enabling non-resonant pumping of high-quality micro-lasers and solar-pumped laser to operate at record low solar concentration. Here, we present a generic theoretical framework for modeling the absorption, emission and energy transfer of incoherent radiation between cascade sensitizer and laser gain media. Our model is based on linear equations of the modified net radiation method and is therefore robust, fast converging and has low complexity. We apply this formalism to compute the optimal parameters of low-threshold solar-pumped lasers. It is revealed that the interplay between the absorption and self-absorption of such lasers defines the optimal pump absorption below the maximal value, which is in contrast to conventional lasers for which full pump absorption is desired. Numerical results are compared to experimental data on a sensitized Nd3+:YAG cavity, and quantitative agreement with theoretical models is found. Our work modularizes the gain and sensitizing components and paves the way for the optimal design of broadband-pumped high-quality micro-lasers and efficient solar-pumped lasers. PMID:27924844

  19. Integrating Science and Engineering to Implement Evidence-Based Practices in Health Care Settings.

    PubMed

    Wu, Shinyi; Duan, Naihua; Wisdom, Jennifer P; Kravitz, Richard L; Owen, Richard R; Sullivan, J Greer; Wu, Albert W; Di Capua, Paul; Hoagwood, Kimberly Eaton

    2015-09-01

    Integrating two distinct and complementary paradigms, science and engineering, may produce more effective outcomes for the implementation of evidence-based practices in health care settings. Science formalizes and tests innovations, whereas engineering customizes and optimizes how the innovation is applied, tailoring it to accommodate local conditions. Together they may accelerate the creation of an evidence-based healthcare system that works effectively in specific health care settings. We give examples of applying engineering methods for higher-quality, more efficient, and safer implementation of clinical practices, medical devices, and health services systems. A specific example applied systems-engineering design to orchestrate people, processes, data, decision-making, and communication through a technology application, implementing evidence-based depression care among low-income patients with diabetes. We recommend that leading journals recognize the fundamental role of engineering in implementation research, to improve understanding of the design elements that create a better fit between program elements and local context.

  20. Solid, liquid, and interfacial properties of TiAl alloys: parameterization of a new modified embedded atom method model

    NASA Astrophysics Data System (ADS)

    Sun, Shoutian; Ramu Ramachandran, Bala; Wick, Collin D.

    2018-02-01

    New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl’s surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.
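
    The hybrid optimization strategy described above, a genetic algorithm to span parameter space combined with simple minimization of a fitness function, can be sketched generically. The fitness below is a toy stand-in for the weighted error against DFT and experimental targets; population size, operators, and all constants are arbitrary choices:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        target = np.array([1.0, -2.0, 0.5])                 # pretend "true" parameters
        fitness = lambda p: np.sum((p - target)**2 * [1.0, 5.0, 10.0])

        pop = rng.uniform(-3, 3, size=(40, 3))              # initial random population
        for gen in range(50):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[:10]]          # keep the 10 fittest
            i, j = rng.integers(0, 10, 30), rng.integers(0, 10, 30)
            w = rng.random((30, 1))
            children = w * parents[i] + (1 - w) * parents[j]   # blend crossover
            children += rng.normal(0.0, 0.1, children.shape)   # Gaussian mutation
            pop = np.vstack([parents, children])            # next generation (elitist)

        best = pop[np.argmin([fitness(p) for p in pop])]
        print(minimize(fitness, best).x)                    # local polish of the GA optimum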

  1. Solid, liquid, and interfacial properties of TiAl alloys: parameterization of a new modified embedded atom method model.

    PubMed

    Sun, Shoutian; Ramachandran, Bala Ramu; Wick, Collin D

    2018-02-21

    New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl's surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.

  2. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an ever more important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  3. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an ever more important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  4. (Preventing) two birds with one stone: improving vitamin D levels in the elderly.

    PubMed

    Lawless, Susie; White, Phil; Murdoch, Prue; Leitch, Sharon

    2011-06-01

    A majority of adults have sub-optimal vitamin D levels in the winter in southern New Zealand. This is associated with an increased risk of falls and fragility fractures in the elderly, with long-term adverse outcomes likely. Vitamin D supplementation decreases the risks of both falls and fractures. An intervention was undertaken by a small urban general practice to increase the number of elderly patients receiving vitamin D supplementation by linking vitamin D prescription to the annual flu vaccination campaign. Uptake of the supplementation was high and costs to the practice low. Thirty-eight patients were identified for whom long-term supplementation with vitamin D was indicated. The study could have been strengthened by incorporating a more formal method of evaluating uptake. Encouraging patients to take supplements as a population-based strategy is a realistic intervention, and linking it to the flu vaccination campaign is both seasonally appropriate and efficient.

  5. Rule acquisition in formal decision contexts based on formal, object-oriented and property-oriented concept lattices.

    PubMed

    Ren, Yue; Li, Jinhai; Aswani Kumar, Cherukuri; Liu, Wenqi

    2014-01-01

    Rule acquisition is one of the main purposes in the analysis of formal decision contexts. Up to now, there have been several types of rules in formal decision contexts such as decision rules, decision implications, and granular rules, which can be viewed as ∧-rules since all of them have the following form: "if conditions 1,2,…, and m hold, then decisions hold." In order to enrich the existing rule acquisition theory in formal decision contexts, this study puts forward two new types of rules which are called ∨-rules and ∨-∧ mixed rules based on formal, object-oriented, and property-oriented concept lattices. Moreover, a comparison of ∨-rules, ∨-∧ mixed rules, and ∧-rules is made from the perspectives of inclusion and inference relationships. Finally, some real examples and numerical experiments are conducted to compare the proposed rule acquisition algorithms with the existing one in terms of the running efficiency.
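
    All of the rule types above are built from the derivation operators of a formal context. A minimal sketch of those operators on a made-up context (four objects, three attributes) is shown below; computing concepts, and then reading off ∧- or ∨-rules, proceeds from exactly these primitives:

        import numpy as np

        objects = ['o1', 'o2', 'o3', 'o4']
        attrs   = ['a', 'b', 'c']
        I = np.array([[1, 1, 0],        # incidence relation: I[g, m] = 1
                      [1, 1, 1],        # iff object g has attribute m
                      [0, 1, 1],
                      [1, 0, 1]], dtype=bool)

        def extent(attr_set):
            # Objects possessing every attribute in attr_set
            cols = [attrs.index(a) for a in attr_set]
            mask = I[:, cols].all(axis=1) if cols else np.ones(len(objects), bool)
            return [o for o, keep in zip(objects, mask) if keep]

        def intent(obj_set):
            # Attributes shared by every object in obj_set
            rows = [objects.index(o) for o in obj_set]
            mask = I[rows, :].all(axis=0) if rows else np.ones(len(attrs), bool)
            return [a for a, keep in zip(attrs, mask) if keep]

        # The closure intent(extent(B)) of B = {a} tells us which /\-rules with
        # premise "a" hold in this context:
        print(extent(['a']), intent(extent(['a'])))   # ['o1', 'o2', 'o4'] ['a']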

  6. Rule Acquisition in Formal Decision Contexts Based on Formal, Object-Oriented and Property-Oriented Concept Lattices

    PubMed Central

    Ren, Yue; Aswani Kumar, Cherukuri; Liu, Wenqi

    2014-01-01

    Rule acquisition is one of the main purposes in the analysis of formal decision contexts. Up to now, there have been several types of rules in formal decision contexts such as decision rules, decision implications, and granular rules, which can be viewed as ∧-rules since all of them have the following form: “if conditions 1,2,…, and m hold, then decisions hold.” In order to enrich the existing rule acquisition theory in formal decision contexts, this study puts forward two new types of rules which are called ∨-rules and ∨-∧ mixed rules based on formal, object-oriented, and property-oriented concept lattices. Moreover, a comparison of ∨-rules, ∨-∧ mixed rules, and ∧-rules is made from the perspectives of inclusion and inference relationships. Finally, some real examples and numerical experiments are conducted to compare the proposed rule acquisition algorithms with the existing one in terms of the running efficiency. PMID:25165744

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ni, Xiaotong; Van den Nest, Maarten; Buerschaper, Oliver

    We propose a non-commutative extension of the Pauli stabilizer formalism. The aim is to describe a class of many-body quantum states which is richer than the standard Pauli stabilizer states. In our framework, stabilizer operators are tensor products of single-qubit operators drawn from the group 〈αI, X, S〉, where α = e^{iπ/4} and S = diag(1, i). We provide techniques to efficiently compute various properties related to bipartite entanglement, expectation values of local observables, preparation by means of quantum circuits, parent Hamiltonians, etc. We also highlight significant differences compared to the Pauli stabilizer formalism. In particular, we give examples of states in our formalism which cannot arise in the Pauli stabilizer formalism, such as topological models that support non-Abelian anyons.
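
    The non-commutativity that separates this framework from the Pauli case is easy to verify numerically. A few-line check of the single-qubit generators named above follows (nothing here reproduces the paper's many-body techniques):

        import numpy as np

        alpha = np.exp(1j * np.pi / 4)                 # the phase generator, order 8
        X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X
        S = np.diag([1.0, 1.0j])                       # phase gate S = diag(1, i)

        print(np.allclose(X @ S, S @ X))                 # False: the group is non-commutative
        print(np.allclose(S @ S, np.diag([1.0, -1.0])))  # True: S^2 = Z, so Pauli Z is recovered
        print(np.allclose(alpha**8, 1.0))                # True: alpha^8 = 1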

  8. General formalism of local thermodynamics with an example: Quantum Otto engine with a spin-1/2 coupled to an arbitrary spin.

    PubMed

    Altintas, Ferdi; Müstecaplıoğlu, Özgür E

    2015-08-01

    We investigate a quantum heat engine with a working substance of two particles, one with a spin-1/2 and the other with an arbitrary spin (spin s), coupled by Heisenberg exchange interaction, and subject to an external magnetic field. The engine operates in a quantum Otto cycle. Work harvested in the cycle and its efficiency are calculated using quantum thermodynamical definitions. It is found that the engine has higher efficiencies at higher spins and can harvest work at higher exchange interaction strengths. The role of exchange coupling and spin s on the work output and the thermal efficiency is studied in detail. In addition, the engine operation is analyzed from the perspective of local work and efficiency. We develop a general formalism to explore local thermodynamics applicable to any coupled bipartite system. Our general framework allows for examination of local thermodynamics even when global parameters of the system are varied in thermodynamic cycles. The generalized definitions of local and cooperative work are introduced by using mean field Hamiltonians. The general conditions for which the global work is not equal to the sum of the local works are given in terms of the covariance of the subsystems. Our coupled spin quantum Otto engine is used as an example of the general formalism.
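
    As a point of reference for the cycle itself, the standard single spin-1/2 quantum Otto engine has closed-form work and efficiency; the sketch below evaluates them. This is the uncoupled limiting case with made-up gaps and temperatures, not the coupled spin-1/2/spin-s engine analyzed in the paper, whose efficiency departs from the bare Otto value:

        import numpy as np

        def p_excited(gap, T):
            # Thermal excited-state population of a two-level system (k_B = 1)
            return 1.0 / (1.0 + np.exp(gap / T))

        Bh, Bc = 4.0, 1.5        # level splittings during the hot and cold isochores
        Th, Tc = 2.0, 0.5        # hot and cold bath temperatures

        ph, pc = p_excited(Bh, Th), p_excited(Bc, Tc)
        W  = (Bh - Bc) * (ph - pc)    # net work output per Otto cycle
        Qh = Bh * (ph - pc)           # heat absorbed from the hot bath
        print(f"W = {W:.4f}, Qh = {Qh:.4f}, efficiency = {W / Qh:.4f}")
        print(f"(equals the Otto value 1 - Bc/Bh = {1 - Bc / Bh:.4f})")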

  9. General formalism of local thermodynamics with an example: Quantum Otto engine with a spin-1/2 coupled to an arbitrary spin

    NASA Astrophysics Data System (ADS)

    Altintas, Ferdi; Müstecaplıoğlu, Özgür E.

    2015-08-01

    We investigate a quantum heat engine with a working substance of two particles, one with a spin-1/2 and the other with an arbitrary spin (spin s), coupled by Heisenberg exchange interaction, and subject to an external magnetic field. The engine operates in a quantum Otto cycle. Work harvested in the cycle and its efficiency are calculated using quantum thermodynamical definitions. It is found that the engine has higher efficiencies at higher spins and can harvest work at higher exchange interaction strengths. The role of exchange coupling and spin s on the work output and the thermal efficiency is studied in detail. In addition, the engine operation is analyzed from the perspective of local work and efficiency. We develop a general formalism to explore local thermodynamics applicable to any coupled bipartite system. Our general framework allows for examination of local thermodynamics even when global parameters of the system are varied in thermodynamic cycles. The generalized definitions of local and cooperative work are introduced by using mean field Hamiltonians. The general conditions for which the global work is not equal to the sum of the local works are given in terms of the covariance of the subsystems. Our coupled spin quantum Otto engine is used as an example of the general formalism.

  10. Energy Efficiency Collaboratives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Michael; Bryson, Joe

    2015-09-01

    Collaboratives for energy efficiency have a long and successful history and are currently used, in some form, in more than half of the states. Historically, many state utility commissions have used some form of collaborative group process to resolve complex issues that emerge during a rate proceeding. Rather than debate the issues through the formality of a commission proceeding, disagreeing parties are sent to discuss issues in a less formal setting and bring back resolutions to the commission. Energy efficiency collaboratives take this concept and apply it specifically to energy efficiency programs, often in anticipation of future issues as opposed to reacting to a present disagreement. Energy efficiency collaboratives can operate long term and can address the full suite of issues associated with designing, implementing, and improving energy efficiency programs. Collaboratives can be useful to gather stakeholder input on changing program budgets and program changes in response to performance or market shifts, as well as to provide continuity while regulators come and go, identify additional energy efficiency opportunities and innovations, assess the role of energy efficiency in new regulatory contexts, and draw on lessons learned and best practices from a diverse group. Details about specific collaboratives in the United States are in the appendix to this guide. Collectively, they demonstrate the value of collaborative stakeholder processes in producing successful energy efficiency programs.

  11. Adaptation of the projector-augmented-wave formalism to the treatment of orbital-dependent exchange-correlation functionals

    NASA Astrophysics Data System (ADS)

    Xu, Xiao; Holzwarth, N. A. W.

    2011-10-01

    This paper presents the formulation and numerical implementation of a self-consistent treatment of orbital-dependent exchange-correlation functionals within the projector-augmented-wave method of Blöchl [Phys. Rev. B 50, 17953 (1994)] for electronic structure calculations. The methodology is illustrated with binding energy curves for C in the diamond structure and LiF in the rock salt structure, by comparing results from the Hartree-Fock (HF) formalism and the optimized effective potential formalism in the so-called KLI approximation [Krieger, Li, and Iafrate, Phys. Rev. A 45, 101 (1992)] with those of the local density approximation. While the work here uses pure Fock exchange only, the formalism can be extended to treat orbital-dependent functionals more generally.

  12. Beyond formalism

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1991-01-01

    The ongoing debate over the role of formalism and formal specifications in software features many speakers with diverse positions. Yet, in the end, they share the conviction that the requirements of a software system can be unambiguously specified, that acceptable software is a product demonstrably meeting the specifications, and that the design process can be carried out with little interaction between designers and users once the specification has been agreed to. This conviction is part of a larger paradigm prevalent in American management thinking, which holds that organizations are systems that can be precisely specified and optimized. This paradigm, which traces historically to the works of Frederick Taylor in the early 1900s, is no longer sufficient for organizations and software systems today. In the domain of software, a new paradigm, called user-centered design, overcomes the limitations of pure formalism. Pioneered in Scandinavia, user-centered design is spreading through Europe and is beginning to make its way into the U.S.

  13. Engaging Scientists in NASA Education and Public Outreach: K - 12 Formal Education

    NASA Astrophysics Data System (ADS)

    Bartolone, Lindsay; Smith, D. A.; Eisenhamer, B.; Lawton, B. L.; Universe Professional Development Collaborative, Multiwavelength; NASA Data Collaborative, Use of; SEPOF K-12 Formal Education Working Group; E/PO Community, SMD

    2014-01-01

    The NASA Science Education and Public Outreach Forums support the NASA Science Mission Directorate (SMD) and its education and public outreach (E/PO) community through a coordinated effort to enhance the coherence and efficiency of SMD-funded E/PO programs. The Forums foster collaboration between scientists with content expertise and educators with pedagogy expertise. We present opportunities for the astronomy community to participate in collaborations supporting the NASA SMD efforts in the K - 12 Formal Education community. Members of the K - 12 Formal Education community include classroom educators, homeschool educators, students, and curriculum developers. The Forums’ efforts for the K - 12 Formal Education community include a literature review, appraisal of educators’ needs, coordination of audience-based NASA resources and opportunities, professional development, and support with the Next Generation Science Standards. Learn how to join in our collaborative efforts to support the K - 12 Formal Education community based upon mutual needs and interests.

  14. Optimization of helicopter airframe structures for vibration reduction considerations, formulations and applications

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1988-01-01

    Several key issues involved in the application of formal optimization techniques to helicopter airframe structures for vibration reduction are addressed. Considerations important in the optimization of real airframe structures are discussed, including those necessary to establish a relevant set of design variables, constraints, and objectives appropriate to the conceptual, preliminary, and detailed design phases as well as to ground and flight testing. A methodology is suggested for optimization of airframes in the various phases of design. Optimization formulations that are unique to helicopter airframes are described, and expressions for vibration-related functions are derived. Using a recently developed computer code, the optimization of a Bell AH-1G helicopter airframe is demonstrated.

  15. Spin-dependent optimized effective potential formalism for open and closed systems

    NASA Astrophysics Data System (ADS)

    Rigamonti, S.; Horowitz, C. M.; Proetto, C. R.

    2015-12-01

    Orbital-based exchange (x) and correlation (c) energy functionals, leading to the optimized effective potential (OEP) formalism of density-functional theory (DFT), are gaining increasing importance in ground-state DFT, as applied to the calculation of the electronic structure of closed systems with a fixed number of particles, such as atoms and molecules. These types of functionals also prove extremely valuable for dealing with solid-state systems with reduced dimensionality, such as electrons trapped at the interface between two different semiconductors, or narrow metallic slabs. In both cases, the electrons form a quasi-two-dimensional electron gas, or Q2DEG. We provide here a general DFT-OEP formal scheme valid for Q2DEGs that are either isolated (closed) or in contact with a particle bath (open), and show that the two representations are equivalent, the choice of one or the other being essentially a question of convenience. Based on this equivalence, a calculation scheme is proposed which avoids the noninvertibility problem of the density response function for closed systems. We also consider the case of spontaneously spin-polarized Q2DEGs, and find that far from the region where the Q2DEG is localized, the exact exchange-only potential approaches two different, spin-dependent asymptotic limits. Beyond these formal results, we also provide numerical results for a spin-polarized jellium slab, using the new OEP formalism for closed systems. The accuracy of the Krieger-Li-Iafrate approximation has also been tested for the same system, and found to be as good as it is for atoms and molecules.

  16. Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy.

    PubMed

    Katić, Darko; Schuck, Jürgen; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-06-01

    Computer assistance is increasingly common in surgery. However, the amount of information is bound to overload the processing abilities of surgeons. We propose methods to recognize the current phase of a surgery for context-aware information filtering. The purpose is to select the most suitable subset of information for surgical situations which require special assistance. We combine formal knowledge, represented by an ontology, and experience-based knowledge, represented by training samples, to recognize phases. For this purpose, we have developed two different methods. Firstly, we use formal knowledge about possible phase transitions to create a composition of random forests. Secondly, we propose a method based on cultural optimization to infer formal rules from experience to recognize phases. The proposed methods are compared with a purely formal knowledge-based approach using rules and a purely experience-based one using regular random forests. The comparative evaluation on laparoscopic pancreas resections and adrenalectomies employs a consistent set of quality criteria on clean and noisy input. The rule-based approaches proved best with noise-free data. The random-forest-based ones were more robust in the presence of noise. Formal and experience-based knowledge can be successfully combined for robust phase recognition.

  17. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.

  18. Transcription factor target site search and gene regulation in a background of unspecific binding sites.

    PubMed

    Hettich, J; Gebhardt, J C M

    2018-06-02

    Response time and transcription level are vital parameters of gene regulation. They depend on how fast transcription factors (TFs) find, and how efficiently they occupy, their specific target sites. It is well known that target site search is accelerated by TF binding to and sliding along unspecific DNA and that unspecific associations alter the occupation frequency of a gene. However, whether target site search time and occupation frequency can be optimized simultaneously is mostly unclear. We developed a transparent and intuitively accessible state-based formalism to calculate search times to target sites on, and occupation frequencies of, promoters of arbitrary state structure. Our formalism is based on dissociation rate constants experimentally accessible in live cell experiments. To demonstrate our approach, we consider promoters activated by a single TF, by two coactivators, or in the presence of a competitive inhibitor. We find that target site search time and promoter occupancy differentially vary with the unspecific dissociation rate constant. Both parameters can be harmonized by adjusting the specific dissociation rate constant of the TF. However, while measured DNA residence times of various eukaryotic TFs correspond to a fast search time, the occupation frequencies of target sites are generally low. Cells might tolerate low target site occupancies as they enable timely gene regulation in response to a changing environment. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
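
    The state-based idea can be reproduced in a few lines: write the promoter model as a continuous-time Markov chain generator, read the occupation frequency from its stationary distribution, and obtain the mean search time from a first-passage solve. The three-state model and all rate constants below are hypothetical stand-ins, not values from the paper.

        import numpy as np

        # Minimal three-state target-search model: 0 = free (3D diffusion),
        # 1 = bound unspecifically (sliding), 2 = bound at the target.
        # All rates are hypothetical illustrative values (1/s).
        k_on, k_off_ns = 10.0, 5.0     # 3D association / unspecific dissociation
        k_find, k_off_sp = 0.5, 0.1    # slide-to-target rate / specific dissociation

        # Generator matrix Q (rows sum to zero), Q[i, j] = rate i -> j.
        Q = np.array([[-k_on,      k_on,                0.0      ],
                      [ k_off_ns, -(k_off_ns + k_find), k_find   ],
                      [ 0.0,       k_off_sp,           -k_off_sp ]])

        # Occupation frequency of the target: stationary distribution pi Q = 0.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(f"target occupancy: {pi[2]:.3f}")

        # Mean search time from the free state: make state 2 absorbing and
        # solve Q_tt @ t = -1 over the transient states {0, 1}.
        t = np.linalg.solve(Q[:2, :2], -np.ones(2))
        print(f"mean search time from free state: {t[0]:.3f} s")

    Note that raising k_off_sp lowers the occupancy without changing the search time, which is the kind of differential behaviour of the two parameters the abstract describes.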

  19. Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-04-10

    Numerical simulation of subaperture tool influence functions (TIF) is widely known as a critical procedure in computer-controlled optical surfacing. However, it may lack practicability in engineering because the emulation TIF (e-TIF) has some discrepancy with the practical TIF (p-TIF), and the removal rate cannot be predicted by simulations. Prior to the polishing of a formal workpiece, opticians have to conduct TIF spot experiments on another sample to confirm the p-TIF with a quantitative removal rate, which is difficult and time-consuming for sequential polishing runs with different tools. This work is dedicated to applying these e-TIFs in practical engineering by making improvements in two respects: (1) it modifies the pressure distribution model of a flat-pitch polisher by finite element analysis and least-squares fitting to bring the removal shape of e-TIFs closer to p-TIFs (less than 5% relative deviation, validated by experiments); (2) it predicts the removal rate of e-TIFs by reverse-calculating the material removal volume of a pre-polishing run on the formal workpiece (relative deviations of peak and volume removal rate validated to be less than 5%). This can eliminate TIF spot experiments for the particular flat-pitch tool employed and promote the direct use of e-TIFs in the optimization of a dwell time map, which can largely save cost and increase fabrication efficiency.

  20. Development of an evidence-based review with recommendations using an online iterative process.

    PubMed

    Rudmik, Luke; Smith, Timothy L

    2011-01-01

    The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  1. Consideration of computer limitations in implementing on-line controls. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Roberts, G. K.

    1976-01-01

    A formal statement of the optimal control problem is formulated which includes the interval of discretization as an optimization parameter; this is extended to include selection of a control algorithm as part of the optimization procedure. The dependence of the performance of a scalar linear system on the discretization interval is examined. Discrete-time versions of the output feedback regulator and an optimal compensator are developed, and these results are used to present an example of a system for which fast partial-state-feedback control better minimizes a quadratic cost than either full-state-feedback control or a compensator.
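
    As a toy version of treating the discretization interval as a design parameter, the sketch below discretizes a scalar linear system under zero-order hold and evaluates the infinite-horizon quadratic cost for several sampling intervals. The system and weights are hypothetical, and the Riccati equation is solved by plain fixed-point iteration rather than by the thesis' formalism.

        import numpy as np

        # Scalar system x' = a x + b u with quadratic weights q, r
        # (hypothetical values), regulated from initial state x0.
        a, b, q, r, x0 = 1.0, 1.0, 1.0, 0.1, 1.0

        def lqr_cost(dt):
            A = np.exp(a * dt)              # zero-order-hold discretization
            B = (A - 1.0) * b / a
            Q, R = q * dt, r * dt           # sampled approximation of the cost
            P = Q
            for _ in range(10_000):         # iterate the scalar Riccati map
                K = (B * P * A) / (R + B * P * B)
                P_next = Q + A * P * A - A * P * B * K
                if abs(P_next - P) < 1e-12:
                    break
                P = P_next
            return P * x0**2                # infinite-horizon cost from x0

        for dt in (0.01, 0.1, 0.5, 1.0):
            print(f"dt = {dt:<4}: J = {lqr_cost(dt):.4f}")

    In this toy the cost grows with the interval, which is the degradation the optimization over the discretization interval has to weigh against computational load.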

  2. Discovery of Boolean metabolic networks: integer linear programming based approach.

    PubMed

    Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing

    2018-04-11

    Traditional drug discovery methods have focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy result when unintended targets are affected in metabolic networks. Thus, identification of biological targets which can be manipulated to produce the desired effect with minimum side-effects has become an important and challenging topic. Efficient computational methods are required to identify drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on the Integer Linear Programming formalism is then developed to identify the optimal enzyme combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effect reduction plays a crucial role in the study of drug development. A graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip.
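
    The optimization at the core of such a method is a 0-1 covering problem: pick enzymes whose removal blocks the target compounds while damaging as few other compounds as possible. The sketch below solves a hypothetical three-enzyme instance by exhaustive enumeration rather than with an ILP solver, so it only illustrates the objective and feasibility structure, not the paper's formulation.

        from itertools import combinations

        # Toy instance: knocking out a set of enzymes must block all target
        # compounds while damaging as few non-target compounds as possible.
        # Enzyme and compound names are hypothetical.
        blocks = {                 # enzyme -> compounds whose production it blocks
            "E1": {"target", "c1"},
            "E2": {"target", "c2", "c3"},
            "E3": {"c1", "c2"},
        }
        targets = {"target"}
        side = {"c1", "c2", "c3"}  # non-target compounds (side effects)

        best = None
        enzymes = sorted(blocks)
        for r in range(1, len(enzymes) + 1):
            for combo in combinations(enzymes, r):
                hit = set().union(*(blocks[e] for e in combo))
                if targets <= hit:               # feasibility: target blocked
                    damage = len(hit & side)     # objective: side effects
                    if best is None or damage < best[0]:
                        best = (damage, combo)

        print(f"optimal knockout {best[1]} with {best[0]} side-effect compound(s)")

    An ILP solver replaces the enumeration once the network is of realistic size, since the subset count grows exponentially.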

  3. Efficient calculation of beyond RPA correlation energies in the dielectric matrix formalism

    NASA Astrophysics Data System (ADS)

    Beuerle, Matthias; Graf, Daniel; Schurkus, Henry F.; Ochsenfeld, Christian

    2018-05-01

    We present efficient methods to calculate beyond random phase approximation (RPA) correlation energies for molecular systems with up to 500 atoms. To reduce the computational cost, we employ the resolution-of-the-identity and a double-Laplace transform of the non-interacting polarization propagator in conjunction with an atomic orbital formalism. Further improvements are achieved using integral screening and the introduction of Cholesky decomposed densities. Our methods are applicable to the dielectric matrix formalism of RPA including second-order screened exchange (RPA-SOSEX), the RPA electron-hole time-dependent Hartree-Fock (RPA-eh-TDHF) approximation, and RPA renormalized perturbation theory using an approximate exchange kernel (RPA-AXK). We give an application of our methodology by presenting RPA-SOSEX benchmark results for the L7 test set of large, dispersion dominated molecules, yielding a mean absolute error below 1 kcal/mol. The present work enables calculating beyond RPA correlation energies for significantly larger molecules than possible to date, thereby extending the applicability of these methods to a wider range of chemical systems.

  4. Guideline validation in multiple trauma care through business process modeling.

    PubMed

    Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen

    2003-07-01

    Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery, a specific guideline is available in paper form as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow, and process optimization. A formal and logically consistent version, represented with a standardized meta-model, is necessary for automatic processing. In our project we transferred the paper-based guideline into an electronic format and analyzed its structure with respect to formal errors. Several errors were detected in seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step, the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process-modeling tools that check the content against a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure the sustainability of guideline development, a representation independent of specific applications or providers is necessary. Clinical guidelines could then also be used for eLearning, process optimization, and workflow management.

  5. Evaluation of light extraction efficiency for the light-emitting diodes based on the transfer matrix formalism and ray-tracing method

    NASA Astrophysics Data System (ADS)

    Pingbo, An; Li, Wang; Hongxi, Lu; Zhiguo, Yu; Lei, Liu; Xin, Xi; Lixia, Zhao; Junxi, Wang; Jinmin, Li

    2016-06-01

    The internal quantum efficiency (IQE) of light-emitting diodes can be calculated as the ratio of the external quantum efficiency (EQE) to the light extraction efficiency (LEE). The EQE can be measured experimentally, but the LEE is difficult to calculate due to complicated LED structures. In this work, a model was established to calculate the LEE by combining the transfer matrix formalism with an in-plane ray-tracing method. With the calculated LEE, the IQE was determined and showed good agreement with that obtained by the ABC model and the temperature-dependent photoluminescence method. The proposed method makes the determination of the IQE more practical and convenient. Project supported by the National Natural Science Foundation of China (Nos. 11574306, 61334009), the China International Science and Technology Cooperation Program (No. 2014DFG62280), and the National High Technology Program of China (No. 2015AA03A101).

  6. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives is generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
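
    A minimal sketch of the hit-and-run step on a box-constrained, non-convex toy problem follows. The objective, near-optimal tolerance, and starting point are all hypothetical, and a simple rejection step on the chord stands in for the slice sampling the authors use for the non-linear inequality constraints.

        import numpy as np

        rng = np.random.default_rng(0)

        def f(x):
            """Non-convex toy objective (stand-in for a water-systems model)."""
            return np.sum(x**2) + 2.0 * np.sin(3.0 * x[0])

        lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])  # box constraints
        f_star, tol = -1.75, 0.5      # assumed optimal value and tolerance
        near_opt = lambda x: f(x) <= f_star + tol

        def hit_and_run(x, n_samples, n_tries=100):
            """Generate near-optimal alternatives: random direction + random run."""
            samples = []
            while len(samples) < n_samples:
                d = rng.normal(size=x.size)
                d /= np.linalg.norm(d)            # random direction
                # Chord of the box along d through x: lo <= x + t d <= hi.
                with np.errstate(divide="ignore"):
                    t1, t2 = (lo - x) / d, (hi - x) / d
                t_min = np.max(np.minimum(t1, t2))
                t_max = np.min(np.maximum(t1, t2))
                # Rejection on the chord stands in for slice sampling:
                for _ in range(n_tries):
                    y = x + rng.uniform(t_min, t_max) * d
                    if near_opt(y):               # non-linear constraint check
                        x = y
                        samples.append(y)
                        break
            return np.array(samples)

        x0 = np.array([-0.5, 0.0])    # assumed feasible near-optimal start
        assert near_opt(x0)
        alts = hit_and_run(x0, n_samples=200)
        print("spread of alternatives:", alts.min(axis=0), alts.max(axis=0))

    Because each iteration restricts the search to one line through the current point, every accepted point is a new feasible alternative, which is the efficiency argument made in the abstract.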

  7. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, sonic boom, and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
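
    For the multiobjective step, the Kreisselmeier-Steinhauser function aggregates many objective or constraint values into one smooth envelope function. A minimal sketch follows, with an arbitrary illustrative value of the draw-down factor rho.

        import numpy as np

        def ks_aggregate(g, rho=50.0):
            """Kreisselmeier-Steinhauser envelope of values g_i: a smooth,
            conservative approximation of max(g) that lets one function
            stand in for many objectives or constraints."""
            g = np.asarray(g, dtype=float)
            m = g.max()                  # shift for numerical stability
            return m + np.log(np.exp(rho * (g - m)).sum()) / rho

        print(ks_aggregate([0.2, 0.5, 0.49]))  # slightly above max(g) = 0.5

    Larger rho tracks the maximum more tightly at the price of worse conditioning of the gradients, which is the usual tuning trade-off for this aggregation.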

  8. Failure to cope: the hidden curriculum of emergency department wait times and the implications for clinical training.

    PubMed

    Webster, Fiona; Rice, Kathleen; Dainty, Katie N; Zwarenstein, Merrick; Durant, Steve; Kuper, Ayelet

    2015-01-01

    The study explored optimal intraprofessional collaboration between physicians in the emergency department (ED) and those from general internal medicine (GIM). Prior to the study, a policy was initiated that mandated reductions in ED wait times. The researchers examined the impact of these changes on clinical practice and trainee education. In 2010-2011, an ethnographic study was undertaken to observe consults between GIM and ED at an urban teaching hospital in Ontario, Canada. Additional ad hoc interviews were conducted with residents, nurses, and faculty from both departments as well as formal one-on-one interviews with 12 physicians. Data were coded and analyzed using concepts of institutional ethnography. Participants perceived that efficiency was more important than education and was in fact the new definition of "good" patient care. The informal label "failure to cope" to describe high-needs patients suggested that in many instances, patients were experienced as a barrier to optimal efficiency. This resulted in tension during consults as well as reduced opportunities for education. The authors suggest that the emphasis on wait times resulted in more importance being placed on "getting the patient out" of the ED than on providing safe, compassionate, person-centered medical care. Resource constraints were hidden within a discourse that shifted the problem of overcrowding in the ED to patients with complex chronic conditions. The term "failure to cope" became activated when overworked physicians tried to avoid assuming care for high-needs patients, masking institutionally produced stress and possibly altering the way patients are perceived.

  9. Application of optimal control theory to the design of broadband excitation pulses for high-resolution NMR.

    PubMed

    Skinner, Thomas E; Reiss, Timo O; Luy, Burkhard; Khaneja, Navin; Glaser, Steffen J

    2003-07-01

    Optimal control theory is considered as a methodology for pulse sequence design in NMR. It provides the flexibility for systematically imposing desirable constraints on spin system evolution and therefore has a wealth of applications. We have chosen an elementary example to illustrate the capabilities of the optimal control formalism: broadband, constant-phase excitation that tolerates miscalibration of RF power and variations in RF homogeneity relevant for standard high-resolution probes. The chosen design criteria were transformation of I(z)-->I(x) over resonance offsets of +/-20 kHz and RF variability of +/-5%, with a pulse length of 2 ms. In simulations, the resulting pulse transforms I(z)-->0.995I(x) over the target ranges in resonance offset and RF variability. Acceptably uniform excitation is obtained over a much larger range of RF variability (approximately 45%) than the strict design limits. The pulse performs well in simulations that include homonuclear and heteronuclear J-couplings. Experimental spectra obtained from 100% 13C-labeled lysine show only minimal coupling effects, in excellent agreement with the simulations. By increasing pulse power and reducing pulse length, we demonstrate experimental excitation of 1H over +/-32 kHz, with phase variations in the spectra <8 degrees and peak amplitudes >93% of maximum. Further improvements in broadband excitation by optimized pulses (BEBOP) may be possible by applying more sophisticated implementations of the optimal control formalism.
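
    The robustness claims can be checked with a Bloch-rotation scan over resonance offset and RF miscalibration. The sketch below propagates magnetization through a piecewise-constant pulse with relaxation neglected; for brevity it scans a plain rectangular 90-degree pulse with assumed amplitude and duration, not the optimal-control pulse of the paper, which would simply supply longer amp/phase/dt lists.

        import numpy as np

        def rot(axis, angle):
            """Rodrigues rotation matrix about a unit 3-vector."""
            x, y, z = axis
            K = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])
            return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

        def excite(amp_hz, phase, dt, offsets_hz, rf_scales):
            """Propagate M = z through a piecewise-constant pulse and return
            the final Mx over the offset x RF-scale robustness grid."""
            mx = np.zeros((len(rf_scales), len(offsets_hz)))
            for i, s in enumerate(rf_scales):
                for j, off in enumerate(offsets_hz):
                    M = np.array([0.0, 0.0, 1.0])
                    for a, ph, t in zip(amp_hz, phase, dt):
                        wx = 2 * np.pi * a * s * np.cos(ph)
                        wy = 2 * np.pi * a * s * np.sin(ph)
                        wz = 2 * np.pi * off
                        w = np.sqrt(wx**2 + wy**2 + wz**2)
                        if w > 0:      # rotate about the effective field
                            M = rot(np.array([wx, wy, wz]) / w, w * t) @ M
                    mx[i, j] = M[0]
            return mx

        # Rectangular 90(y) pulse as the pulse under test (assumed 25 kHz RF).
        amp, ph, dt = [25e3], [np.pi / 2], [10e-6]
        offsets = np.linspace(-20e3, 20e3, 41)    # +/-20 kHz offsets
        scales = np.array([0.95, 1.0, 1.05])      # +/-5% RF miscalibration
        mx = excite(amp, ph, dt, offsets, scales)
        print(f"worst-case Mx over the grid: {mx.min():.3f}")

    For the rectangular pulse the worst-case Mx degrades quickly off resonance, which is exactly the gap an optimized broadband pulse is designed to close.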

  10. The influence of social-cognitive factors on personal hygiene practices to protect against influenzas: using modelling to compare avian A/H5N1 and 2009 pandemic A/H1N1 influenzas in Hong Kong.

    PubMed

    Liao, Qiuyan; Cowling, Benjamin J; Lam, Wendy Wing Tak; Fielding, Richard

    2011-06-01

    Understanding population responses to influenza helps optimize public health interventions. Relevant theoretical frameworks remain nascent. We aimed to model associations between trust in information, perceived hygiene effectiveness, knowledge about the causes of influenza, perceived susceptibility and worry, and personal hygiene practices (PHPs) associated with influenza. Cross-sectional household telephone surveys on avian influenza A/H5N1 (2006) and pandemic influenza A/H1N1 (2009) gathered comparable data on trust in formal and informal sources of influenza information, influenza-related knowledge, perceived hygiene effectiveness, worry, perceived susceptibility, and PHPs. Exploratory factor analysis confirmed domain content, while confirmatory factor analysis was used to evaluate the extracted factors. The hypothesized model, compiled from different theoretical frameworks, was optimized with structural equation modelling using the A/H5N1 data. The optimized model was then tested against the A/H1N1 dataset. The model was robust across datasets, though corresponding path weights differed. Trust in formal information was positively associated with perceived hygiene effectiveness, which was positively associated with PHPs in both datasets. Trust in formal information was positively associated with influenza worry in the A/H5N1 data, and with knowledge of influenza cause in the A/H1N1 data, both variables being positively associated with PHPs. Trust in informal information was positively associated with influenza worry in both datasets. Independent of information trust, perceived influenza susceptibility was associated with influenza worry. Worry was associated with PHPs in the A/H5N1 data only. Knowledge of influenza cause and perceived PHP effectiveness were associated with PHPs. Improving trust in formal information should increase PHPs. Worry was significantly associated with PHPs only for A/H5N1.

  11. Formal Synthesis of (±)-Aplykurodinone-1 through a Hetero-Pauson-Khand Cycloaddition Approach.

    PubMed

    Tao, Cheng; Zhang, Jing; Chen, Xiaoming; Wang, Huifei; Li, Yun; Cheng, Bin; Zhai, Hongbin

    2017-03-03

    The tricyclic intermediate 2 has been synthesized in eight steps from known compound 6 in 20% overall yield. As such, this constitutes a highly efficient formal synthesis of (±)-aplykurodinone-1. This synthesis features a unique, one-pot, intramolecular hetero-Pauson-Khand reaction (h-PKR)/desilylation sequence to expeditiously construct the tricyclic framework, providing valuable insights for expanding the scope and boundaries of h-PKR.

  12. Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing

    NASA Technical Reports Server (NTRS)

    Jones, Robert L.; Goode, Plesent W. (Technical Monitor)

    2000-01-01

    The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, it can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states for some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and is capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has the added benefit of improved modeling fidelity, stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.
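
    The flavor of mixing deterministic and random delays can be illustrated with a small marked-net interpreter. The net below (a single-server queue) and its delays are hypothetical, and re-sampling the deterministic delay at every event is a simplification of proper race semantics, so this is a sketch of the modeling style rather than of the paper's solution algorithms.

        import random

        # Minimal stochastic Petri net: places hold tokens, a transition is
        # enabled when every input place is marked, and firing moves tokens.
        # Delays mix exponential (continuous) and fixed (discrete) timing.
        net = {
            # transition: (input places, output places, delay sampler)
            "arrive": ((), ("queue",), lambda: random.expovariate(1.0)),
            "serve":  (("queue", "server"), ("server",), lambda: 0.8),  # fixed
        }
        marking = {"queue": 0, "server": 1}

        def enabled(t):
            return all(marking[p] > 0 for p in net[t][0])

        clock, served = 0.0, 0
        while clock < 100.0:
            # Race the enabled transitions; the earliest sampled delay fires.
            # (Re-sampling the fixed delay each event is a simplification.)
            cands = [(net[t][2](), t) for t in net if enabled(t)]
            if not cands:
                break
            delay, t = min(cands)
            clock += delay
            for p in net[t][0]:
                marking[p] -= 1
            for p in net[t][1]:
                marking[p] += 1
            served += (t == "serve")

        print(f"served {served} tokens by t = {clock:.1f}; queue = {marking['queue']}")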

  13. An enhanced biometric authentication scheme for telecare medicine information systems with nonce using chaotic hash function.

    PubMed

    Das, Ashok Kumar; Goswami, Adrijit

    2014-06-01

    Recently, Awasthi and Srivastava proposed a novel biometric remote user authentication scheme for the telecare medicine information system (TMIS) with nonce. Their scheme is very efficient as it is based on an efficient chaotic one-way hash function and bitwise XOR operations. In this paper, we first analyze Awasthi-Srivastava's scheme and then show that their scheme has several drawbacks: (1) incorrect password change phase, (2) fails to preserve the user anonymity property, (3) fails to establish a secret session key between a legal user and the server, (4) fails to protect against strong replay attacks, and (5) lacks rigorous formal security analysis. We then propose a novel and secure biometric-based remote user authentication scheme in order to withstand the security flaws found in Awasthi-Srivastava's scheme and enhance the features required of an ideal user authentication scheme. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely-accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that our scheme is secure against passive and active attacks, including replay and man-in-the-middle attacks. Our scheme is also efficient as compared to Awasthi-Srivastava's scheme.
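
    The nonce-plus-one-way-function core of such schemes can be sketched generically with Python's standard library. This is a bare challenge-response illustration under assumed parameters, not the Awasthi-Srivastava scheme, the proposed biometric scheme, or the chaotic hash it uses.

        import hmac, hashlib, os

        def derive_key(password: bytes, salt: bytes) -> bytes:
            """Derive a shared key from a password (registration phase)."""
            return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

        salt = os.urandom(16)
        key = derive_key(b"correct horse battery staple", salt)

        # --- one login run ---
        nonce = os.urandom(16)                                    # server challenge
        response = hmac.new(key, nonce, hashlib.sha256).digest()  # user's proof

        expected = hmac.new(key, nonce, hashlib.sha256).digest()  # server check
        print("authenticated:", hmac.compare_digest(response, expected))

    The fresh nonce is what defeats straightforward replay: an eavesdropped response is useless against the next challenge, while the key itself never crosses the wire.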

  14. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  15. Thermodynamic metrics and optimal paths.

    PubMed

    Sivak, David A; Crooks, Gavin E

    2012-05-11

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
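
    In one control dimension the metric reduces to a scalar friction coefficient, and the minimum-dissipation protocol moves at constant thermodynamic speed, i.e. d(lambda)/dt proportional to zeta(lambda)^(-1/2). The sketch below builds that schedule for a hypothetical friction profile by inverting the arc-length integral; it illustrates the geometry, not any specific model from the paper.

        import numpy as np

        # Hypothetical friction profile along the control parameter.
        zeta = lambda lam: 1.0 + 5.0 * lam**2

        lam_grid = np.linspace(0.0, 1.0, 1001)
        # Thermodynamic arc length s(lam) = integral of sqrt(zeta);
        # equal arc-length steps in time give the optimal schedule.
        ds = np.sqrt(zeta(lam_grid))
        s = np.concatenate([[0.0],
                            np.cumsum((ds[1:] + ds[:-1]) / 2 * np.diff(lam_grid))])
        tau = 1.0                              # total protocol duration
        t = np.linspace(0.0, tau, 11)
        lam_of_t = np.interp(t / tau * s[-1], s, lam_grid)   # invert s(lam)
        print(np.round(lam_of_t, 3))           # protocol slows where friction is high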

  16. Increasing the Efficiency of the One Room School.

    ERIC Educational Resources Information Center

    Berg, Paul

    The one room school is a challenging educational setting for both teacher and student. Isolation of the school, limited availability of educational resources, and the demanding role of the school as the only formal educational institution within the community are conditions which make classroom efficiency an important consideration for the…

  17. Longer guts and higher food quality increase energy intake in migratory swans.

    PubMed

    van Gils, Jan A; Beekman, Jan H; Coehoorn, Pieter; Corporaal, Els; Dekkers, Ten; Klaassen, Marcel; van Kraaij, Rik; de Leeuw, Rinze; de Vries, Peter P

    2008-11-01

    1. Within the broad field of optimal foraging, it is increasingly acknowledged that animals often face digestive constraints rather than constraints on rates of food collection. This therefore calls for a formalization of how animals could optimize food absorption rates. 2. Here we generate predictions from a simple graphical optimal digestion model for foragers that aim to maximize their (true) metabolizable food intake over total time (i.e. including nonforaging bouts) under a digestive constraint. 3. The model predicts that such foragers should maintain a constant food retention time, even if gut length or food quality changes. For phenotypically flexible foragers, which are able to change the size of their digestive machinery, this means that an increase in gut length should go hand in hand with an increase in gross intake rate. It also means that better quality food should be digested more efficiently. 4. These latter two predictions are tested in a large avian long-distance migrant, the Bewick's swan (Cygnus columbianus bewickii), feeding on grasslands in its Dutch wintering quarters. 5. Throughout winter, free-ranging Bewick's swans, growing a longer gut and experiencing improved food quality, increased their gross intake rate (i.e. bite rate) and showed a higher digestive efficiency. These responses were in accordance with the model and suggest maintenance of a constant food retention time. 6. These changes doubled the birds' absorption rate. Had only food quality changed (and not gut length), then absorption rate would have increased by only 67%; absorption rate would have increased by only 17% had only gut length changed (and not food quality). 7. The prediction that gross intake rate should go up with gut length parallels the mechanism included in some proximate models of foraging that feeding motivation scales inversely with gut fullness. We plead for a tighter integration between ultimate and proximate foraging models.

  18. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
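
    The division of labor in GA-MLR, a GA on the nonlinear parameters with the linear amplitudes eliminated by a linear least-squares solve inside the fitness function, can be sketched for a biexponential decay. The GA below is deliberately minimal, and the synthetic data and hyperparameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic biexponential decay with hypothetical true parameters.
        t = np.linspace(0.0, 10.0, 200)
        y = 2.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-2.0 * t)
        y += rng.normal(scale=0.01, size=t.size)

        def fitness(k):
            """Chi^2 after the inner MLR step: the amplitudes are solved
            linearly, so the GA only searches the nonlinear decay rates."""
            X = np.exp(-np.outer(t, k))          # design matrix for given rates
            a, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.sum((X @ a - y) ** 2)

        # Minimal GA: elitism, blend crossover, Gaussian mutation.
        pop = rng.uniform(0.01, 5.0, size=(60, 2))
        for gen in range(80):
            f = np.array([fitness(k) for k in pop])
            elite = pop[np.argsort(f)[:10]]
            children = []
            while len(children) < len(pop) - len(elite):
                p1, p2 = elite[rng.integers(10, size=2)]
                w = rng.uniform()
                child = w * p1 + (1 - w) * p2             # crossover
                child += rng.normal(scale=0.05, size=2)   # mutation
                children.append(np.clip(child, 1e-3, 10.0))
            pop = np.vstack([elite, children])

        best = min(pop, key=fitness)
        print("recovered rates:", np.sort(best))  # approximately [0.3, 2.0]

    Because the amplitudes never enter the GA's search space, the population explores a two-dimensional landscape instead of a four-dimensional one, which is the acceleration the abstract describes.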

  19. Optimization under uncertainty of parallel nonlinear energy sinks

    NASA Astrophysics Data System (ADS)

    Boroson, Ethan; Missoum, Samy; Mattei, Pierre-Olivier; Vergez, Christophe

    2017-04-01

    Nonlinear Energy Sinks (NESs) are a promising technique for passively reducing the amplitude of vibrations. Through nonlinear stiffness properties, a NES is able to passively and irreversibly absorb energy. Unlike the traditional Tuned Mass Damper (TMD), NESs do not require a specific tuning and absorb energy over a wider range of frequencies. Nevertheless, they are still only efficient over a limited range of excitations. In order to mitigate this limitation and maximize the efficiency range, this work investigates the optimization of multiple NESs configured in parallel. It is well known that the efficiency of a NES is extremely sensitive to small perturbations in loading conditions or design parameters. In fact, the efficiency of a NES has been shown to be nearly discontinuous in the neighborhood of its activation threshold. For this reason, uncertainties must be taken into account in the design optimization of NESs. In addition, the discontinuities require a specific treatment during the optimization process. In this work, the objective of the optimization is to maximize the expected value of the efficiency of NESs in parallel. The optimization algorithm is able to tackle design variables with uncertainty (e.g., nonlinear stiffness coefficients) as well as aleatory variables such as the initial velocity of the main system. The optimal design of several parallel NES configurations for maximum mean efficiency is investigated. Specifically, NES nonlinear stiffness properties, considered random design variables, are optimized for cases with 1, 2, 3, 4, 5, and 10 NESs in parallel. The distributions of efficiency for the optimal parallel configurations are compared to distributions of efficiencies of non-optimized NESs. It is observed that the optimization enables a sharp increase in the mean value of efficiency while reducing the corresponding variance, thus leading to more robust NES designs.

  20. Efficiency and formalism of quantum games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.F.; Johnson, Neil F.

    We show that quantum games are more efficient than classical games and provide a saturated upper bound for this efficiency. We also demonstrate that the set of finite classical games is a strict subset of the set of finite quantum games. Our analysis is based on a rigorous formulation of quantum games, from which quantum versions of the minimax theorem and the Nash equilibrium theorem can be deduced.

  1. Market-Based Decision Guidance Framework for Power and Alternative Energy Collaboration

    NASA Astrophysics Data System (ADS)

    Altaleb, Hesham

    With the deregulation of power energy markets, innovations have transformed a once-static network into a more flexible grid. Microgrids have also been deployed to serve various purposes (e.g., reliability, sustainability, etc.). With the rapid deployment of smart grid technologies, it has become possible to measure and record both the quantity and the time of consumption of electrical power. In addition, capabilities for controlling distributed supply and demand have resulted in complex systems where inefficiencies are possible and where improvements can be made. Electric power, like other volatile resources, cannot be stored efficiently; managing such a resource therefore requires considerable attention. Such complex systems present a need for decisions that can streamline consumption, delay infrastructure investments, and reduce costs. When renewable power resources and the need for limiting harmful emissions are added to the equation, the search space for decisions becomes increasingly complex. As a result, the need for a comprehensive decision guidance system for electrical power resource consumption and production becomes evident. In this dissertation, I formulate and implement a comprehensive framework that addresses different aspects of electrical power generation and consumption using optimization models and utilizing collaboration concepts. Our solution presents a two-pronged approach: managing interaction in real-time for the short-term immediate consumption of already allocated resources, and managing the operational planning for long-run consumption. More specifically, in real-time, we present and implement a model of how to organize a secondary market for peak-demand allocation and describe the properties of the market that guarantee efficient execution, along with a method for the fair distribution of collaboration gains. We also propose and implement a primary market for the peak-demand-bounds determination problem, with the assumption that participants of this market have the ability to collaborate in real-time. Moreover, this dissertation proposes an extensible framework to facilitate C&I entities forming a consortium to collaborate on their electric power supply and demand. The collaborative framework includes the structure of the market setting, bids, and a market resolution that produces a schedule of how power components are controlled as well as the resulting payment. The market resolution must satisfy a number of desirable properties (i.e., feasibility, Nash equilibrium, Pareto optimality, and equal collaboration profitability) which are formally defined in the dissertation. Furthermore, to support the extensible framework's component library, power components such as a utility contract, back-up power generator, renewable resource, and power-consuming service are formally modeled. Finally, the validity of this framework is evaluated by a case study using simulated load scenarios to examine the ability of the framework to operate efficiently at the specified time intervals with minimal overhead cost.

  2. Lending to Parents and Insuring Children: Is There a Role for Microcredit in Complementing Health Insurance in Rural China?

    PubMed

    You, Jing

    2016-05-01

    This paper assesses the causal impact of borrowing formal microcredit on child health for Chinese rural households, exploiting a panel dataset (2000 and 2004) from a poor northwest province. Endogenous borrowing is controlled for in a dynamic regression-discontinuity design, creating a quasi-experimental environment for causal inferences. There is a causal relationship running from formal microcredit to improved child health in the short term, while past borrowing behaviour has no protracted impact on subsequent child health outcomes. Moreover, formal microcredit appears to be a complement to health insurance in improving child health through two mechanisms: it enhances affordability of out-of-pocket health care expenditure, and it helps buffer consumption against adverse health shocks and the financial risk incurred by current health insurance arrangements. Government efforts in expanding health insurance for rural households would be more likely to achieve their goal of improving child health outcomes if combined with sufficient access to formal microcredit. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Privatization of solid waste collection services: Lessons from Gaborone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolaane, Benjamin, E-mail: bolaaneb@mopipi.ub.bw; Isaac, Emmanuel, E-mail: eisaac300@gmail.com

    Highlights: • We compared efficiency and effectiveness of waste collection by the public and private sector. • Public sector performs better than private sector in some areas and vice versa. • Outsourcing waste collection in developing countries is hindered by limited capacity on contractual issues. • Outsourcing collection in developing countries is hampered by inadequate waste information. • There is need to build capacity in the public sector of developing countries to support outsourcing. - Abstract: Formal privatization of solid waste collection activities has often been flagged as a suitable intervention for some of the challenges of solid waste management experienced by developing countries. Proponents of outsourcing collection to the private sector argue that, in contrast to the public sector, it is more effective and efficient in delivering services. This essay is a comparative case study of efficiency and effectiveness attributes between the public and the formal private sector, in relation to the collection of commercial waste in Gaborone. The paper is based on analysis of secondary data and key informant interviews. It was found that, while the private sector performed comparatively well in most of the chosen indicators of efficiency and effectiveness, the public sector also had areas where it had a competitive advantage. For instance, the private sector used the collection crew more efficiently, while the public sector was found to have a more reliable workforce. The study recommends that, while formal private sector participation in waste collection has some positive effects in terms of quality of service rendered, in most developing countries it has to be enhanced by building sufficient capacity within the public sector on information about services contracted out and evaluation of performance criteria within the contracting process.

  4. Formal modeling of a system of chemical reactions under uncertainty.

    PubMed

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for the construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of the extracellular-signal-regulated kinase (ERK) pathway.
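
    The two uncertainty treatments named above can be contrasted on a single mass-action propensity: a midpoint approximation collapses each imprecise quantity to its center, while an interval approximation propagates the full bounds. The reaction and all values below are hypothetical illustrations, not the paper's ERK model.

        # Uncertain mass-action rate: with an imprecise rate constant and
        # interval-valued concentrations, the propensity v = k * [A] * [B]
        # is itself an interval.

        def imul(x, y):
            """Product of two intervals (lo, hi); general (sign-safe) form."""
            products = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
            return (min(products), max(products))

        k = (0.9, 1.1)        # imprecise rate constant
        A = (0.4, 0.6)        # uncertain concentration of A
        B = (1.8, 2.2)        # uncertain concentration of B

        v = imul(imul(k, A), B)
        mid = (0.9 + 1.1) / 2 * (0.4 + 0.6) / 2 * (1.8 + 2.2) / 2
        print(f"interval propensity: [{v[0]:.3f}, {v[1]:.3f}]; midpoint: {mid:.3f}")

    A CTL query checked against the interval abstraction is conservative (it accounts for every rate consistent with the bounds), whereas the midpoint version is cheaper but can miss behaviors at the edges of the uncertainty.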

  5. Reflectance analysis of porosity gradient in nanostructured silicon layers

    NASA Astrophysics Data System (ADS)

    Jurečka, Stanislav; Imamura, Kentaro; Matsumoto, Taketoshi; Kobayashi, Hikaru

    2017-12-01

    In this work we study the optical properties of nanostructured layers formed on a silicon surface. Nanostructured layers on Si are formed in order to achieve strong suppression of light reflectance. Low spectral reflectance is important for improving the conversion efficiency of solar cells and for other optoelectronic applications. Our method of forming nanostructured layers with ultralow reflectance over a broad interval of wavelengths is based on metal-assisted etching of Si. A Si surface immersed in an HF and H2O2 solution is etched in contact with a Pt mesh roller, and the structure of the mesh is transferred onto the etched surface. During this etching procedure the layer density evolves gradually, and the spectral reflectance decreases exponentially with depth in the porous layer. We analyzed the layer porosity by incorporating the porosity gradient into the construction of a theoretical model of the layer's spectral reflectance. In our approach, the analyzed layer is split into 20 sublayers. The complex dielectric function in each sublayer is computed using Bruggeman effective medium theory, and the theoretical spectral reflectance of the modelled multilayer system is computed using the Abeles matrix formalism. The porosity gradient is extracted from the theoretical reflectance model optimized against the experimental values. The resulting porosity profile provides important information for the optimization of the technological treatment operations.
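
    A compact version of this forward model, Bruggeman mixing per sublayer followed by the Abeles characteristic-matrix product at normal incidence, is sketched below. The porosity profile, sublayer thickness, and fixed complex Si index are hypothetical placeholders; the actual analysis fits the profile to measured spectra.

        import numpy as np

        def bruggeman(eps1, eps2, f1):
            """Effective permittivity of a two-phase Bruggeman medium,
            with f1 the volume fraction of phase 1."""
            b = (3 * f1 - 1) * eps1 + (2 - 3 * f1) * eps2
            return (b + np.sqrt(b**2 + 8 * eps1 * eps2 + 0j)) / 4

        def reflectance(porosity, d_nm, lam_nm, n_si, n_sub):
            """Normal-incidence reflectance of a graded porous stack via the
            Abeles characteristic-matrix product (surface sublayer first)."""
            M = np.eye(2, dtype=complex)
            for p in porosity:
                n = np.sqrt(bruggeman(1.0 + 0j, n_si**2, p))  # air fraction p
                delta = 2 * np.pi * n * d_nm / lam_nm
                layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
                M = M @ layer
            B, C = M @ np.array([1.0, n_sub], dtype=complex)
            r = (B - C) / (B + C)           # ambient index n0 = 1 (air)
            return abs(r) ** 2

        # 20 sublayers whose air fraction decays with depth, mimicking the
        # etched density gradient (profile and thickness are hypothetical).
        porosity = 0.95 * np.exp(-np.arange(20) / 6.0)
        n_si = 3.9 + 0.02j                  # assumed Si index near 600 nm
        for lam in (400.0, 600.0, 800.0):
            R = reflectance(porosity, d_nm=25.0, lam_nm=lam, n_si=n_si, n_sub=n_si)
            print(f"lambda = {lam:.0f} nm: R = {R:.4f}")

    In a fit, the porosity profile would be parameterized and adjusted until this forward model matches the measured spectrum, which is how the gradient is extracted.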

  6. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.

  7. Finite-size effect on optimal efficiency of heat engines.

    PubMed

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths into the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of cycle processes without restricting their cycle time, by comparing the classical and quantum optimal efficiencies.

  8. Optimizing observational networks combining gliders, moored buoys and FerryBox in the Bay of Biscay and English Channel

    NASA Astrophysics Data System (ADS)

    Charria, Guillaume; Lamouroux, Julien; De Mey, Pierre

    2016-10-01

    Designing optimal observation networks in coastal oceans remains one of the major challenges towards the implementation of future efficient Integrated Ocean Observing Systems to monitor the coastal environment. In the Bay of Biscay and the English Channel, the diversity of the processes involved (e.g. tidally-driven circulation, plume dynamics) requires adapting observing systems to the specific targeted environments. Also important is the requirement for those systems to sustain coastal applications. Two observational network design experiments have been implemented for the spring season in two regions: the Loire River plume (northern part of the Bay of Biscay) and the Western English Channel. The method used to perform these experiments is based on the ArM (Array Modes) formalism, using an ensemble-based approach without data assimilation. The first experiment, in the Loire River plume, explores different possible glider endurance lines combined with a fixed mooring to monitor temperature and salinity. The main results show an expected improvement when combining glider and mooring observations. The experiment also highlights that the choice of transect (along-shore versus North-South, cross-shore) does not significantly impact the efficiency of the network, although the classification from the method gives slightly better performance for along-shore and North-South sections. In the Western English Channel, a tidally-driven circulation system, the added value of a glider sampling the water column beneath FerryBox surface temperature and salinity measurements has been assessed. FerryBox systems are characterised by high-frequency sampling along a route crossing the region 2 to 3 times a day. This efficient sampling, as well as the specific vertical hydrological structure (which is homogeneous in many sub-regions of the domain), explains why the added value of an associated glider transect is not significant. These experiments, combining existing and future observing systems with numerical ensemble simulations, highlight the key issue of monitoring the whole water column in and close to river plumes (using gliders, for example) and the efficiency of high-frequency surface sampling from FerryBoxes in macrotidal regions.
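
    The ensemble-based array-mode idea can be caricatured in a few lines: ensemble anomalies are projected onto the observation network, scaled by the observation-error level, and eigenvalues of the resulting observation-space covariance are compared against the noise floor. The sketch below uses illustrative sizes, random anomalies, and a pointwise sampling operator, all of which are assumptions rather than the study's configuration.

      import numpy as np

      # Schematic of an ensemble-based array-modes (ArM) assessment:
      # project ensemble anomalies onto the observation network, scale
      # by observation error, and count eigenvalues above the noise
      # level (1). Sizes and error levels are illustrative.

      rng = np.random.default_rng(0)
      n_state, n_ens, n_obs = 500, 40, 12

      X = rng.standard_normal((n_state, n_ens))   # ensemble anomalies
      H = np.zeros((n_obs, n_state))              # obs operator: point sampling
      H[np.arange(n_obs), rng.choice(n_state, n_obs, replace=False)] = 1.0
      sigma_obs = 0.8                             # obs error std (assumed)

      Y = H @ X / np.sqrt(n_ens - 1)              # obs-space anomalies
      S = (Y @ Y.T) / sigma_obs**2                # scaled ensemble covariance
      eigvals, modes = np.linalg.eigh(S)          # array modes = eigenvectors
      detected = int((eigvals > 1.0).sum())
      print(f"modes detected above noise: {detected} of {n_obs}")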

  9. A call for formal telemedicine training during stroke fellowship

    PubMed Central

    Jia, Judy; Gildersleeve, Kasey; Ankrom, Christy; Cai, Chunyan; Rahbar, Mohammad; Savitz, Sean I.; Wu, Tzu-Ching

    2016-01-01

    During the 20 years since US Food and Drug Administration approval of IV tissue plasminogen activator for acute ischemic stroke, vascular neurology consultation via telemedicine has contributed to an increased frequency of IV tissue plasminogen activator administration and broadened geographic access to the drug. Nevertheless, a growing demand for acute stroke coverage persists, with the greatest disparity found in rural communities underserved by neurologists. To provide efficient and consistent acute care, formal training in telemedicine during neurovascular fellowship is warranted. Herein, we describe our experiences incorporating telestroke into the vascular neurology fellowship curriculum and propose recommendations on integrating formal telemedicine training into the Accreditation Council for Graduate Medical Education vascular neurology fellowship. PMID:27016522

  10. A call for formal telemedicine training during stroke fellowship.

    PubMed

    Jagolino, Amanda L; Jia, Judy; Gildersleeve, Kasey; Ankrom, Christy; Cai, Chunyan; Rahbar, Mohammad; Savitz, Sean I; Wu, Tzu-Ching

    2016-05-10

    During the 20 years since US Food and Drug Administration approval of IV tissue plasminogen activator for acute ischemic stroke, vascular neurology consultation via telemedicine has contributed to an increased frequency of IV tissue plasminogen activator administration and broadened geographic access to the drug. Nevertheless, a growing demand for acute stroke coverage persists, with the greatest disparity found in rural communities underserved by neurologists. To provide efficient and consistent acute care, formal training in telemedicine during neurovascular fellowship is warranted. Herein, we describe our experiences incorporating telestroke into the vascular neurology fellowship curriculum and propose recommendations on integrating formal telemedicine training into the Accreditation Council for Graduate Medical Education vascular neurology fellowship. © 2016 American Academy of Neurology.

  11. Medical Student Appraisal: Electronic Resources for Inpatient Pre-Rounding

    PubMed Central

    Sampognaro, P.J.; Mitchell, S.L.; Weeks, S.R.; Khalifian, S.; Markman, T.M.; Uebel, L.W.; Dattilo, J.R.

    2013-01-01

    Summary Background Pre-rounding is essential to preparing for morning rounds. Despite its importance, pre-rounding is rarely formally taught within the medical school curriculum and more often informally learned by modeling residents. The evolution of mobile applications provides opportunities to optimize this process. Objectives To evaluate three options available to medical students while pre-rounding and promote adoption of mobile resources in clinical care. Methods Six medical students formed the evaluation cohort. Students were surveyed to assess pre-rounding practices. Participants utilized paper-based pre-rounding templates for two weeks followed by two weeks of the electronic note-taking service Evernote™. A review of mobile applications on the iTunes™ and Google Play™ stores was performed, with each application informally reviewed by a single student. The application Scutsheet™ was selected for formal review by all students. Data was collected from narrative responses supplied by students throughout the evaluation periods and aggregated to assess strengths and limitations of each application. Results Pre-study responses demonstrated two consistent processes: verbal sign-out of overnight events and template use to organize patient information. The paper-based template was praised for its organization and familiarity amongst residents, but perceived as limited by the requirement of re-copying data into the hospital’s electronic medical record (EMR). Evernote™ excelled due to compatibility across multiple operating systems, including accessibility from clinical workstations and ability to copy notes into the hospital’s EMR. Scutsheet™ allowed for retention of data across multiple hospital days, but was limited by inability to export data or modify the electronic template. Aggregated user feedback identified the abilities to customize templates and copy information into the EMR as two prevailing characteristics that enhanced the efficiency of pre-rounding. Discussion Mobile devices offer the potential to enhance pre-rounding efficiency for medical students and residents. A customizable Evernote™-based system is described in sufficient detail for reproduction by interested students. PMID:24155792

  12. Medical student appraisal: electronic resources for inpatient pre-rounding.

    PubMed

    Sampognaro, P J; Mitchell, S L; Weeks, S R; Khalifian, S; Markman, T M; Uebel, L W; Dattilo, J R

    2013-01-01

    Pre-rounding is essential to preparing for morning rounds. Despite its importance, pre-rounding is rarely formally taught within the medical school curriculum and more often informally learned by modeling residents. The evolution of mobile applications provides opportunities to optimize this process. To evaluate three options available to medical students while pre-rounding and promote adoption of mobile resources in clinical care. Six medical students formed the evaluation cohort. Students were surveyed to assess pre-rounding practices. Participants utilized paper-based pre-rounding templates for two weeks followed by two weeks of the electronic note-taking service Evernote. A review of mobile applications on the iTunes and Google Play stores was performed, with each application informally reviewed by a single student. The application Scutsheet was selected for formal review by all students. Data was collected from narrative responses supplied by students throughout the evaluation periods and aggregated to assess strengths and limitations of each application. Pre-study responses demonstrated two consistent processes: verbal sign-out of overnight events and template use to organize patient information. The paper-based template was praised for its organization and familiarity amongst residents, but perceived as limited by the requirement of re-copying data into the hospital's electronic medical record (EMR). Evernote excelled due to compatibility across multiple operating systems, including accessibility from clinical workstations and ability to copy notes into the hospital's EMR. Scutsheet allowed for retention of data across multiple hospital days, but was limited by inability to export data or modify the electronic template. Aggregated user feedback identified the abilities to customize templates and copy information into the EMR as two prevailing characteristics that enhanced the efficiency of pre-rounding. Mobile devices offer the potential to enhance pre-rounding efficiency for medical students and residents. A customizable Evernote-based system is described in sufficient detail for reproduction by interested students.

  13. A phylogenetic Kalman filter for ancestral trait reconstruction using molecular data.

    PubMed

    Lartillot, Nicolas

    2014-02-15

    Correlation between life history or ecological traits and genomic features such as nucleotide or amino acid composition can be used for reconstructing the evolutionary history of the traits of interest along phylogenies. Thus far, however, such ancestral reconstructions have been done using simple linear regression approaches that do not account for phylogenetic inertia. These reconstructions could instead be seen as a genuine comparative regression problem, as formalized by classical generalized least-square comparative methods, in which the trait of interest and the molecular predictor are represented as correlated Brownian characters coevolving along the phylogeny. Here, a Bayesian sampler is introduced, representing an alternative and more efficient algorithmic solution to this comparative regression problem, compared with currently existing generalized least-square approaches. Technically, ancestral trait reconstruction based on a molecular predictor is shown to be formally equivalent to a phylogenetic Kalman filter problem, for which backward and forward recursions are developed and implemented in the context of a Markov chain Monte Carlo sampler. The comparative regression method results in more accurate reconstructions and a more faithful representation of uncertainty, compared with simple linear regression. Application to the reconstruction of the evolution of optimal growth temperature in Archaea, using GC composition in ribosomal RNA stems and amino acid composition of a sample of protein-coding genes, confirms previous findings, in particular, pointing to a hyperthermophilic ancestor for the kingdom. The program is freely available at www.phylobayes.org.
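
    The backward recursion at the heart of such a filter is just Gaussian message passing on the tree. The sketch below shows the classical pruning recursion for a single Brownian trait (each node passes up a mean and variance, branches add variance sigma^2 * t); the three-leaf tree, trait values, and encoding are illustrative, and the paper's filter generalizes this to a trait coupled with a molecular predictor.

      # Minimal sketch of Gaussian "pruning" for a Brownian trait on a
      # tree: the backward recursion underlying a phylogenetic Kalman
      # filter. Tree and trait values are illustrative.

      def combine(children):
          # children: list of (mean, variance-at-parent) Gaussian messages.
          w = [1.0 / v for _, v in children]
          var = 1.0 / sum(w)
          mean = var * sum(wi * m for wi, (m, _) in zip(w, children))
          return mean, var

      def prune(node, sigma2=1.0):
          # node: (trait,) for a leaf, or (children, branch_lengths).
          if len(node) == 1:                        # leaf: exact observation
              return node[0], 0.0
          children, lengths = node
          msgs = []
          for child, t in zip(children, lengths):
              m, v = prune(child, sigma2)
              msgs.append((m, v + sigma2 * t))      # diffuse along branch
          return combine(msgs)

      # ((A:1.0, B:1.0):0.5, C:1.5) with traits A=0.2, B=0.6, C=1.4
      tree = ([([(0.2,), (0.6,)], [1.0, 1.0]), (1.4,)], [0.5, 1.5])
      root_mean, root_var = prune(tree)
      print(f"root estimate: {root_mean:.3f} +/- {root_var**0.5:.3f}")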

  14. Improving Safety through Human Factors Engineering.

    PubMed

    Siewert, Bettina; Hochman, Mary G

    2015-10-01

    Human factors engineering (HFE) focuses on the design and analysis of interactive systems that involve people, technical equipment, and work environment. HFE is informed by knowledge of human characteristics. It complements existing patient safety efforts by specifically taking into consideration that, as humans, frontline staff will inevitably make mistakes. Therefore, the systems with which they interact should be designed for the anticipation and mitigation of human errors. The goal of HFE is to optimize the interaction of humans with their work environment and technical equipment to maximize safety and efficiency. Special safeguards include usability testing, standardization of processes, and use of checklists and forcing functions. However, the effectiveness of the safety program and resiliency of the organization depend on timely reporting of all safety events independent of patient harm, including perceived potential risks, bad outcomes that occur even when proper protocols have been followed, and episodes of "improvisation" when formal guidelines are found not to exist. Therefore, an institution must adopt a robust culture of safety, where the focus is shifted from blaming individuals for errors to preventing future errors, and where barriers to speaking up, including barriers introduced by steep authority gradients, are minimized. This requires creation of formal guidelines to address safety concerns, establishment of unified teams with open communication and shared responsibility for patient safety, and education of managers and senior physicians to perceive the reporting of safety concerns as a benefit rather than a threat. © RSNA, 2015.

  15. ADGS-2100 Adaptive Display and Guidance System Window Manager Analysis

    NASA Technical Reports Server (NTRS)

    Whalen, Mike W.; Innis, John D.; Miller, Steven P.; Wagner, Lucas G.

    2006-01-01

    Recent advances in modeling languages have made it feasible to formally specify and analyze the behavior of large system components. Synchronous data flow languages, such as Lustre, SCR, and RSML-e are particularly well suited to this task, and commercial versions of these tools such as SCADE and Simulink are growing in popularity among designers of safety critical systems, largely due to their ability to automatically generate code from the models. At the same time, advances in formal analysis tools have made it practical to formally verify important properties of these models to ensure that design defects are identified and corrected early in the lifecycle. This report describes how these tools have been applied to the ADGS-2100 Adaptive Display and Guidance Window Manager being developed by Rockwell Collins Inc. This work demonstrates how formal methods can be easily and cost-efficiently used to remove defects early in the design cycle.

  16. Helping System Engineers Bridge the Peaks

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen

    2014-01-01

    In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.

  17. Efficiency Enhancement for an Inductive Wireless Power Transfer System by Optimizing the Impedance Matching Networks.

    PubMed

    Miao, Zhidong; Liu, Dake; Gong, Chen

    2017-10-01

    Inductive wireless power transfer (IWPT) is a promising power technology for implantable biomedical devices, where the power consumption is low and efficiency is the most important consideration. In this paper, we propose an optimization method for the impedance matching networks (IMN) to maximize the IWPT efficiency. The IMN at the load side is designed to achieve the optimal load, and the IMN at the source side is designed to deliver the required amount of power (no more, no less) from the power source to the load. The theoretical analyses and design procedure are given. An IWPT system for an implantable glaucoma therapeutic prototype is designed as an example. Compared with the efficiency of the resonant IWPT system, the efficiency of our optimized system increases by a factor of 1.73. Moreover, the efficiency of our optimized IWPT system is 1.97 times higher than that of the IWPT system optimized by the traditional maximum power transfer method. These results indicate that the proposed optimization method can achieve high efficiency and a long working time when the system is powered by a battery.
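
    The distinction between the efficiency-optimal load and the classical maximum-power (conjugate) match is easy to see in the textbook two-coil series-resonant model. The sketch below, with illustrative component values that are assumptions rather than the paper's circuit, compares the closed-form optimal load R_L,opt = R2*sqrt(1 + k^2*Q1*Q2) against a brute-force grid search over the link efficiency.

      import numpy as np

      # Sketch: link efficiency of a two-coil inductive link vs. load,
      # using the textbook series-resonant model (not the paper's circuit):
      # eta(RL) = [Rref/(R1+Rref)] * [RL/(R2+RL)], Rref = (w*M)^2/(R2+RL).

      f = 13.56e6                       # ISM frequency, illustrative
      w = 2 * np.pi * f
      L1, L2, R1, R2 = 2e-6, 1e-6, 1.0, 0.5
      k = 0.1
      M = k * np.sqrt(L1 * L2)
      Q1, Q2 = w * L1 / R1, w * L2 / R2

      def eta(RL):
          Rref = (w * M) ** 2 / (R2 + RL)        # reflected resistance
          return Rref / (R1 + Rref) * RL / (R2 + RL)

      RL_opt = R2 * np.sqrt(1 + k**2 * Q1 * Q2)  # efficiency-optimal load
      RL_grid = np.linspace(0.01, 50, 20000)
      RL_best = RL_grid[np.argmax(eta(RL_grid))]

      print(f"analytic RL_opt = {RL_opt:.2f} ohm, grid max at {RL_best:.2f} ohm")
      print(f"eta(RL_opt) = {eta(RL_opt):.3f}")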

  18. Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.

    PubMed

    Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho

    2017-09-15

    In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance between nodes receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCN. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to the user association probability and the receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is concave in the user association probability and in the receive threshold, respectively. We also propose a sub-optimal algorithm exploiting an alternating optimization approach. Through numerical simulations, we demonstrate that our sub-optimal algorithm obtains a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity.

  19. Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks

    PubMed Central

    Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho

    2017-01-01

    In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance between nodes receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCN. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to the user association probability and the receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is concave in the user association probability and in the receive threshold, respectively. We also propose a sub-optimal algorithm exploiting an alternating optimization approach. Through numerical simulations, we demonstrate that our sub-optimal algorithm obtains a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity. PMID:28914818

  20. Probabilistic Multi-Scale, Multi-Level, Multi-Disciplinary Analysis and Optimization of Engine Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2000-01-01

    Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely, and engines for new aircraft, are progressively required to operate under more demanding technological and environmental requirements. Designs that effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to an open-ended framework for the various existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission and coupled structural/thermal, various composite property simulators and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of this paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent, reduces noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.

  1. Cooperative combinatorial optimization: evolutionary computation case study.

    PubMed

    Burgin, Mark; Eberbach, Eugene

    2008-01-01

    This paper presents a formalization of the notions of cooperation and competition among multiple systems that work toward a common optimization goal of the population, using evolutionary computation techniques. It is proved that evolutionary algorithms are more expressive than conventional recursive algorithms, such as Turing machines. Three classes of evolutionary computations are introduced and studied: bounded finite, unbounded finite, and infinite computations. Universal evolutionary algorithms are constructed. Such properties of evolutionary algorithms as completeness, optimality, and search decidability are examined. A natural extension of the evolutionary Turing machine (ETM) model is proposed to properly reflect phenomena of cooperation and competition in the whole population.

  2. Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism

    NASA Astrophysics Data System (ADS)

    Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.

    2018-05-01

    The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting through both constant and energy-dependent cross-sections, although in the latter case only for sufficiently dilute systems. We find that the optimal fit technique consists in simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
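
    The procedure is easy to prototype. In the sketch below, a synthetic Ornstein-Uhlenbeck series (whose autocorrelation is exactly exponential) stands in for transport-code stress output; the intercept C(0) is fixed as the study recommends, the fit window is a crude stand-in for the paper's relative-error criterion, and the volume/temperature prefactor is illustrative. For a pure exponential the Green-Kubo integral reduces to eta = (V/kB*T) * C(0) * tau.

      import numpy as np
      from scipy.optimize import curve_fit

      # Sketch of the Green-Kubo extraction of shear viscosity:
      # eta = V/(kB*T) * integral of the stress autocorrelation C(t).
      # A synthetic OU series stands in for transport-code output.

      rng = np.random.default_rng(1)
      dt, n, tau_true = 0.01, 100_000, 0.5
      x = np.empty(n)
      x[0] = 0.0
      for i in range(1, n):                 # OU process: exp(-t/tau) ACF
          x[i] = x[i-1] * (1 - dt / tau_true) \
                 + rng.normal(0.0, np.sqrt(2 * dt / tau_true))

      nlag = 400
      c = np.array([np.dot(x[:n-k], x[k:]) / (n - k) for k in range(nlag)])
      t = np.arange(nlag) * dt

      c0 = c[0]                             # fix the intercept C(0)
      fit_mask = c / c0 > 0.05              # crude well-resolved-lag window
      (tau_fit,), _ = curve_fit(lambda tt, tau: c0 * np.exp(-tt / tau),
                                t[fit_mask], c[fit_mask], p0=[0.3])

      V_over_kBT = 1.0                      # illustrative prefactor
      eta = V_over_kBT * c0 * tau_fit
      print(f"tau_fit = {tau_fit:.3f} (true {tau_true}), eta = {eta:.3f}")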

  3. Colloquium: Modeling the dynamics of multicellular systems: Application to tissue engineering

    NASA Astrophysics Data System (ADS)

    Kosztin, Ioan; Vunjak-Novakovic, Gordana; Forgacs, Gabor

    2012-10-01

    Tissue engineering is a rapidly evolving discipline that aims at building functional tissues to improve or replace damaged ones. To be successful in such an endeavor, ideally, the engineering of tissues should be based on the principles of developmental biology. Recent progress in developmental biology suggests that the formation of tissues from the composing cells is often guided by physical laws. Here a comprehensive computational-theoretical formalism is presented that is based on experimental input and incorporates biomechanical principles of developmental biology. The formalism is described and it is shown that it correctly reproduces and predicts the quantitative characteristics of the fundamental early developmental process of tissue fusion. Based on this finding, the formalism is then used toward the optimization of the fabrication of tubular multicellular constructs, such as a vascular graft, by bioprinting, a novel tissue engineering technology.

  4. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches stem from applying the principle of minimizing the mean squared distance, building on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited the properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
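
    A minimal sketch of the piecewise-quadratic idea, under stated assumptions: each quadratic piece a_k*x^2 + b_k interpolates the target potential f at two knots, and the potential is trimmed (flat) beyond the last knot. Here f(x) = |x| (the L1 potential) and the knot placement is illustrative; this shows the approximation style rather than the paper's full construction.

      import numpy as np

      # Piecewise-quadratic approximation of an arbitrary error potential
      # f, here f(x) = |x|. Pieces interpolate f at knots; the tail is
      # trimmed (flat) beyond the last knot. Knots are illustrative.

      def pq_approx(f, knots):
          r = np.asarray(knots)              # 0 = r0 < r1 < ... < rp
          a = (f(r[1:]) - f(r[:-1])) / (r[1:]**2 - r[:-1]**2)
          b = f(r[1:]) - a * r[1:]**2
          def u(x):
              ax = np.abs(x)
              k = np.clip(np.searchsorted(r, ax, side="right") - 1,
                          0, len(a) - 1)
              out = a[k] * ax**2 + b[k]
              return np.where(ax >= r[-1], f(r[-1]), out)  # trimmed tail
          return u

      u = pq_approx(np.abs, knots=[0.0, 0.5, 1.0, 2.0, 4.0])
      for x in (0.25, 0.9, 3.0, 10.0):
          print(f"x={x:5.2f}  |x|={abs(x):5.2f}  u(x)={float(u(x)):5.2f}")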

  5. Recovering time-varying networks of dependencies in social and biological studies.

    PubMed

    Ahmed, Amr; Xing, Eric P

    2009-07-21

    A plausible representation of the relational information among entities in dynamic systems such as a living cell or a social community is a stochastic network that is topologically rewiring and semantically evolving over time. Although there is a rich literature in modeling static or temporally invariant networks, little has been done toward recovering the network structure when the networks are not observable in a dynamic context. In this article, we present a machine learning method called TESLA, which builds on a temporally smoothed ℓ1-regularized logistic regression formalism that can be cast as a standard convex-optimization problem and solved efficiently by using generic solvers scalable to large networks. We report promising results on recovering simulated time-varying networks and on reverse engineering the latent sequence of temporally rewiring political and academic social networks from longitudinal data, and the evolving gene networks over >4,000 genes during the life cycle of Drosophila melanogaster from a microarray time course at a resolution limited only by sample frequency.
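
    The shape of the objective is simple to write down: per-epoch logistic loss, plus an L1 sparsity penalty, plus an L1 fusion penalty linking consecutive epochs. The toy below optimizes it by plain subgradient descent on synthetic data; TESLA itself hands the convex program to generic solvers, so the data, sizes, and optimizer here are all illustrative assumptions.

      import numpy as np

      # Toy TESLA-style objective: per-epoch logistic loss + L1 sparsity
      # + L1 temporal-smoothness (fusion) penalty across epochs,
      # minimized by subgradient descent on synthetic data.

      rng = np.random.default_rng(2)
      T, n, p = 5, 200, 10                   # epochs, samples, features
      X = rng.standard_normal((T, n, p))
      w_true = np.zeros((T, p))
      w_true[:, 0] = np.linspace(1.0, 2.0, T)   # slowly drifting effect
      p_true = 1.0 / (1.0 + np.exp(-(X @ w_true[:, :, None])[..., 0]))
      y = (rng.random((T, n)) < p_true).astype(float)

      lam1, lam2, lr = 0.01, 0.1, 0.05
      W = np.zeros((T, p))
      for it in range(500):
          G = np.zeros_like(W)
          for t in range(T):
              z = 1.0 / (1.0 + np.exp(-X[t] @ W[t]))
              G[t] = X[t].T @ (z - y[t]) / n   # logistic gradient
          G += lam1 * np.sign(W)               # sparsity subgradient
          D = np.sign(W[1:] - W[:-1])          # fusion subgradient
          G[1:] += lam2 * D
          G[:-1] -= lam2 * D
          W -= lr * G
      print("recovered first-feature path:", np.round(W[:, 0], 2))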

  6. A 3D Model for Eddy Current Inspection in Aeronautics: Application to Riveted Structures

    NASA Astrophysics Data System (ADS)

    Paillard, S.; Pichenot, G.; Lambert, M.; Voillaume, H.; Dominguez, N.

    2007-03-01

    The eddy current technique is currently an operational tool for fastener inspection, which is an important issue in the maintenance of aircraft structures. Industry calls for faster, more sensitive and more reliable NDT techniques for the detection and characterization of potential flaws near rivets. In order to reduce development time and to optimize the design and performance assessment of an inspection procedure, CEA and EADS have started a collaborative effort to extend the modeling features of the CIVA non-destructive simulation platform to handle the configuration of a layered planar structure with a rivet and an embedded flaw nearby. An approach based on the Volume Integral Method, using the Green dyadic formalism, which greatly increases computational efficiency, has therefore been developed. The first step, modeling the rivet without a flaw as a hole in a multi-stratified structure, has been completed and validated in several configurations against experimental data.

  7. Transformation of general binary MRF minimization to the first-order case.

    PubMed

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction that limits the representational power of the models so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, as well as comparing the new method experimentally with other such techniques.
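
    The easy half of such reductions is the classical negative-monomial trick that higher-order MRF methods build on: for a coefficient a < 0, the cubic term a*x1*x2*x3 equals the minimum over an auxiliary binary variable w of the pairwise expression a*w*(x1+x2+x3-2). The check below verifies this identity by brute force; it is only the negative-coefficient case, whereas the paper's transformation also handles positive coefficients.

      from itertools import product

      # Brute-force check of the classical negative-monomial reduction:
      # for a < 0, a*x1*x2*x3 == min over w in {0,1} of a*w*(x1+x2+x3-2).

      a = -3.0
      for x1, x2, x3 in product((0, 1), repeat=3):
          cubic = a * x1 * x2 * x3
          reduced = min(a * w * (x1 + x2 + x3 - 2) for w in (0, 1))
          assert cubic == reduced, (x1, x2, x3)
      print("reduction verified on all 8 assignments")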

  8. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
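
    For a single parameter, the maximum-likelihood search over the Box-Cox family is a one-liner in scipy; the sketch below Gaussianizes a skewed stand-in sample and tests normality before and after. The chi-square sample is an illustrative assumption, and the paper's multivariate generalizations go well beyond this one-parameter transform.

      import numpy as np
      from scipy import stats

      # Sketch of the Gaussianization step: ML Box-Cox parameter for a
      # skewed 1-D sample, with a normality check before and after.

      rng = np.random.default_rng(3)
      sample = rng.chisquare(df=3, size=5000)   # skewed stand-in posterior

      transformed, lam = stats.boxcox(sample)   # ML estimate of lambda
      p_before = stats.normaltest(sample).pvalue
      p_after = stats.normaltest(transformed).pvalue
      print(f"lambda = {lam:.3f}")
      print(f"normality p-value: before {p_before:.2e}, after {p_after:.2e}")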

  9. Integrating Science and Engineering to Implement Evidence-Based Practices in Health Care Settings

    PubMed Central

    Wu, Shinyi; Duan, Naihua; Wisdom, Jennifer P.; Kravitz, Richard L.; Owen, Richard R.; Sullivan, Greer; Wu, Albert W.; Di Capua, Paul; Hoagwood, Kimberly Eaton

    2015-01-01

    Integrating two distinct and complementary paradigms, science and engineering, may produce more effective outcomes for the implementation of evidence-based practices in health care settings. Science formalizes and tests innovations, whereas engineering customizes and optimizes how the innovation is applied, tailoring it to accommodate local conditions. Together they may accelerate the creation of an evidence-based healthcare system that works effectively in specific health care settings. We give examples of applying engineering methods for better quality, more efficient, and safer implementation of clinical practices, medical devices, and health services systems. A specific example was applying systems engineering design that orchestrated people, process, data, decision-making, and communication through a technology application to implement evidence-based depression care among low-income patients with diabetes. We recommend that leading journals recognize the fundamental role of engineering in implementation research, to improve understanding of design elements that create a better fit between program elements and local context. PMID:25217100

  10. Data-based Non-Markovian Model Inference

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close collaboration with M.D. Chekroun, D. Kondrashov, S. Kravtsov and A.W. Robertson.

  11. Optimization of permanent breast seed implant dosimetry incorporating tissue heterogeneity

    NASA Astrophysics Data System (ADS)

    Mashouf, Shahram

    Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculating dose around brachytherapy sources is the AAPM TG43 formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 (TG186) emphasized the importance of accounting for heterogeneities. In this work we introduce an analytical dose calculation algorithm for heterogeneous media using CT images. The advantages over other methods are computational efficiency and ease of integration into clinical use. An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in a water medium. The ICF is a function of tissue properties and independent of the source structure. The ICF is extracted from CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose as calculated by the TG43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. The dose distributions obtained by applying the ICF to the TG43 protocol agreed very well with those of Monte Carlo simulations and experiments in all phantoms. In all cases, the mean relative error was reduced by at least a factor of two when the ICF was applied to the TG43 protocol. In conclusion, we have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media using CT images. The methodology offers several advantages, including the use of the standard TG43 formalism, fast calculation times and extraction of the ICF parameters directly from Hounsfield units. The methodology was implemented in our clinical treatment planning system, where a cohort of 140 patients was processed to study the clinical benefits of a heterogeneity-corrected dose.

  12. Analyses of Public Utility Building - Students Designs, Aimed at their Energy Efficiency Improvement

    NASA Astrophysics Data System (ADS)

    Wołoszyn, Marek Adam

    2017-10-01

    Public utility buildings are formally, structurally and functionally complex entities. Frequently, the process of their design involves the retroactive reconsideration of energy engineering issues once a building concept has already been completed. At that stage, minor formal corrections are made along with the design of the external layer of the building in order to satisfy applicable standards. Architecture students do the same when designing assigned public utility buildings. In order to demonstrate the energy-related defects of building designs developed by students, analyses were proposed. The completed designs of public utility buildings were examined with regard to the energy efficiency of their solutions using the programs Ecotect and Vasari, with ArchiCad program extensions sufficing for simpler analyses.

  13. Economics in the School Curriculum.

    ERIC Educational Resources Information Center

    Brenneke, Judith Staley; Soper, John C.

    1987-01-01

    Various approaches to developing and implementing economics curricula are explored, including positive and normative economics, teacher-developed informal curriculum, district-developed formal curriculum, "outside" curriculum, the infusion approach, or as a separate course. It is suggested that a "blend" of the alternatives may optimize the…

  14. Practical State Machine Replication with Confidentiality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Zhang, Haibin

    2016-01-01

    We study how to enable arbitrary randomized algorithms in Byzantine fault-tolerant (BFT) settings. We formalize a randomized BFT protocol and provide a simple and efficient construction that can be built on any existing BFT protocols while adding practically no overhead. We go one step further to revisit a confidential BFT protocol (Yin et al., SOSP '03). We show that their scheme is potentially susceptible to safety and confidentiality attacks. We then present a new protocol that is secure in the stronger model we formalize, by extending the idea of a randomized BFT protocol. Our protocol uses only efficient symmetric cryptography, while Yin et al.'s uses costly threshold signatures. We implemented and evaluated our protocols on microbenchmarks and real-world use cases. We show that our randomized BFT protocol is as efficient as conventional BFT protocols, and our confidential BFT protocol is two to three orders of magnitude faster than Yin et al.'s, which is less secure than ours.

  15. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  16. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.

    1984-01-01

    A method for systematic analysis and optimization of large engineering systems, by decomposition of a large task into a set of smaller subtasks that are solved concurrently, is described. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.

  17. Belief Propagation Algorithm for Portfolio Optimization Problems

    PubMed Central

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was estimated using replica analysis in pioneering work by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, an approximate derivation method for finding the optimal portfolio with respect to a given return set had not yet been developed. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462

  18. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was estimated using replica analysis in pioneering work by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, an approximate derivation method for finding the optimal portfolio with respect to a given return set had not yet been developed. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.

  19. Tractable Pareto Optimization of Temporal Preferences

    NASA Technical Reports Server (NTRS)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.

  20. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28-point improvement in isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed. First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin; an investigation of the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents improvements of +0.41, +0.56 and +0.9 points in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choke margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single and multi-point optimizations.

  1. Sub-Optimal Breastfeeding and Its Associated Factors in Rural Communities of Hula District, Southern Ethiopia: A Cross-Sectional Study.

    PubMed

    Hoche, Shibru; Meshesha, Berhan; Wakgari, Negash

    2018-01-01

    Sub-optimal breastfeeding contributes to a significant number of infant deaths. Although breastfeeding is universal in Ethiopia, the practice is not optimal. Hence, this study assessed the prevalence of sub-optimal breastfeeding practice and its associated factors in rural communities of Hula District, Southern Ethiopia. A community-based cross-sectional study was conducted among 634 women with infants aged 6 to 12 months. A multistage sampling technique was employed to select study subjects. An interviewer-administered structured questionnaire was used for data collection. Data were entered and analyzed using SPSS version 20.0. Bivariate and multivariate logistic regression were used to identify predictors of delayed initiation of breastfeeding and non-exclusive breastfeeding. The prevalence of sub-optimal breastfeeding of infants was found to be 56.9%. Nearly half (49.4%) of the mothers delayed initiation of breastfeeding, and 13.4% of the infants were breastfed non-exclusively. Having formal education [AOR: 1.74; 95% CI (1.17, 2.59)], family size < 5 [AOR=1.59; 95% CI (1.03, 2.45)], having one under-five child [AOR=1.88; 95% CI (1.29, 2.75)], a lower number of antenatal care visits [AOR= 2.40; 95% CI (1.68, 3.43)] and lack of counseling on breastfeeding [AOR= 1.69; 95% CI (1.19, 2.41)] were associated with delayed initiation of breastfeeding. Similarly, not attending formal education, low birth order and lack of knowledge about exclusive breastfeeding were negatively associated with exclusive breastfeeding practice. In this study, sub-optimal breastfeeding was found to be high. Delayed initiation and non-exclusive breastfeeding practices were major contributors to sub-optimal breastfeeding.

  2. Considerations on the Optimal and Efficient Processing of Information-Bearing Signals

    ERIC Educational Resources Information Center

    Harms, Herbert Andrew

    2013-01-01

    Noise is a fundamental hurdle that impedes the processing of information-bearing signals, specifically the extraction of salient information. Processing that is both optimal and efficient is desired; optimality ensures the extracted information has the highest fidelity allowed by the noise, while efficiency ensures limited resource usage. Optimal…

  3. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
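
    The consistency check reduces to linear programming: constraints over linear-polynomial time expressions become linear inequalities A x <= b over the symbolic coefficients, and a branch of the trajectory tree is pruned when the LP is infeasible. The sketch below shows this reduction with illustrative constraint values (not the paper's models); event times are taken nonnegative, which matches linprog's default bounds.

      from scipy.optimize import linprog

      # Sketch of the LP-based consistency check for symbolic event
      # times: prune a branch when {x : A x <= b, x >= 0} is empty.

      def consistent(A, b):
          # Zero objective: we only ask whether the region is nonempty.
          res = linprog(c=[0] * len(A[0]), A_ub=A, b_ub=b, method="highs")
          return res.status == 0          # 0: optimal => feasible

      # t1 <= 5, t2 - t1 <= 3, t2 >= 10, t2 <= 8  (last two contradict)
      A = [[1, 0], [-1, 1], [0, -1], [0, 1]]
      b = [5, 3, -10, 8]
      print("branch consistent:", consistent(A, b))                  # False
      print("without t2 >= 10:", consistent(A[:2] + A[3:], b[:2] + b[3:]))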

  4. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  5. Uniform, optimal signal processing of mapped deep-sequencing data.

    PubMed

    Kumar, Vibhor; Muratani, Masafumi; Rayan, Nirmala Arul; Kraus, Petra; Lufkin, Thomas; Ng, Huck Hui; Prabhakar, Shyam

    2013-07-01

    Despite their apparent diversity, many problems in the analysis of high-throughput sequencing data are merely special cases of two general problems, signal detection and signal estimation. Here we adapt formally optimal solutions from signal processing theory to analyze signals of DNA sequence reads mapped to a genome. We describe DFilter, a detection algorithm that identifies regulatory features in ChIP-seq, DNase-seq and FAIRE-seq data more accurately than assay-specific algorithms. We also describe EFilter, an estimation algorithm that accurately predicts mRNA levels from as few as 1-2 histone profiles (R ∼0.9). Notably, the presence of regulatory motifs in promoters correlates more with histone modifications than with mRNA levels, suggesting that histone profiles are more predictive of cis-regulatory mechanisms. We show by applying DFilter and EFilter to embryonic forebrain ChIP-seq data that regulatory protein identification and functional annotation are feasible despite tissue heterogeneity. The mathematical formalism underlying our tools facilitates integrative analysis of data from virtually any sequencing-based functional profile.
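
    The signal-detection idea can be illustrated with a linear detection filter applied to a read-count profile. A minimal stand-in (our own; the kernel, noise model, and threshold rule are illustrative, not the published DFilter algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    coverage = rng.poisson(2.0, size=2000).astype(float)   # background reads
    coverage[950:1050] += rng.poisson(8.0, size=100)       # an enriched region

    # Detection filter: a zero-mean "peak vs. flanks" kernel (matched-filter-like).
    half = 50
    kernel = np.concatenate([-0.5 * np.ones(half),
                             np.ones(half),
                             -0.5 * np.ones(half)])
    kernel /= np.linalg.norm(kernel)

    score = np.convolve(coverage, kernel, mode="same")
    noise = score[:800]                                    # signal-free flank
    threshold = noise.mean() + 5.0 * noise.std()           # crude null model
    peaks = np.flatnonzero(score > threshold)
    print("detected window:", peaks.min(), "-", peaks.max())
    ```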

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivak, David; Crooks, Gavin

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
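
    The geometric statement above can be written compactly. A sketch in our own notation (standard linear-response expressions consistent with the thermodynamic-length literature, not quoted from the paper):

    ```latex
    % Excess power during a protocol \lambda(t) is a quadratic form in the
    % control velocities, with the friction tensor \zeta acting as a metric:
    \begin{align}
      P_{\mathrm{ex}}(t) &\approx \dot{\lambda}^{T}\,\zeta(\lambda)\,\dot{\lambda},
      &
      \zeta_{ij}(\lambda) &= \beta \int_{0}^{\infty}
        \langle \delta X_{i}(t')\,\delta X_{j}(0)\rangle_{\lambda}\,\mathrm{d}t',
    \end{align}
    % where the X_i are the forces conjugate to the control parameters
    % \lambda_i and \beta = 1/k_B T. Minimum-dissipation protocols then
    % traverse geodesics of this Riemannian metric.
    ```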

  7. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.
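
    The "system design derivative" idea can be made concrete with a toy coupled system. A minimal sketch (entirely our own; the numbers are hypothetical): two disciplines depend on each other's output, and the total derivative with respect to a design variable follows from solving a small linear system, in the spirit of global sensitivity equations.

    ```python
    import numpy as np

    # Hypothetical coupled linear models: y1 = 0.5*x + 0.3*y2, y2 = -0.2*x + 0.4*y1.
    # Writing y = B y + c x, the total derivative solves (I - B) dy/dx = c.
    A = np.array([[1.0, -0.3],
                  [-0.4, 1.0]])       # I - B, the coupled Jacobian
    c = np.array([0.5, -0.2])         # partials of each output w.r.t. design x

    dy_dx = np.linalg.solve(A, c)     # system design derivatives (dy1/dx, dy2/dx)
    print(dy_dx)
    ```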

  8. Formal and Informal Learning and First-Year Psychology Students’ Development of Scientific Thinking: A Two-Wave Panel Study

    PubMed Central

    Soyyılmaz, Demet; Griffin, Laura M.; Martín, Miguel H.; Kucharský, Šimon; Peycheva, Ekaterina D.; Vaupotič, Nina; Edelsbrunner, Peter A.

    2017-01-01

    Scientific thinking is a predicate for scientific inquiry, and thus important to develop early in psychology students as potential future researchers. The present research is aimed at fathoming the contributions of formal and informal learning experiences to psychology students’ development of scientific thinking during their 1st-year of study. We hypothesize that informal experiences are relevant beyond formal experiences. First-year psychology student cohorts from various European countries will be assessed at the beginning and again at the end of the second semester. Assessments of scientific thinking will include scientific reasoning skills, the understanding of basic statistics concepts, and epistemic cognition. Formal learning experiences will include engagement in academic activities which are guided by university authorities. Informal learning experiences will include non-compulsory, self-guided learning experiences. Formal and informal experiences will be assessed with a newly developed survey. As dispositional predictors, students’ need for cognition and self-efficacy in psychological science will be assessed. In a structural equation model, students’ learning experiences and personal dispositions will be examined as predictors of their development of scientific thinking. Commonalities and differences in predictive weights across universities will be tested. The project is aimed at contributing information for designing university environments to optimize the development of students’ scientific thinking. PMID:28239363

  9. Formal and Informal Learning and First-Year Psychology Students' Development of Scientific Thinking: A Two-Wave Panel Study.

    PubMed

    Soyyılmaz, Demet; Griffin, Laura M; Martín, Miguel H; Kucharský, Šimon; Peycheva, Ekaterina D; Vaupotič, Nina; Edelsbrunner, Peter A

    2017-01-01

    Scientific thinking is a predicate for scientific inquiry, and thus important to develop early in psychology students as potential future researchers. The present research is aimed at fathoming the contributions of formal and informal learning experiences to psychology students' development of scientific thinking during their 1st-year of study. We hypothesize that informal experiences are relevant beyond formal experiences. First-year psychology student cohorts from various European countries will be assessed at the beginning and again at the end of the second semester. Assessments of scientific thinking will include scientific reasoning skills, the understanding of basic statistics concepts, and epistemic cognition. Formal learning experiences will include engagement in academic activities which are guided by university authorities. Informal learning experiences will include non-compulsory, self-guided learning experiences. Formal and informal experiences will be assessed with a newly developed survey. As dispositional predictors, students' need for cognition and self-efficacy in psychological science will be assessed. In a structural equation model, students' learning experiences and personal dispositions will be examined as predictors of their development of scientific thinking. Commonalities and differences in predictive weights across universities will be tested. The project is aimed at contributing information for designing university environments to optimize the development of students' scientific thinking.

  10. Formal Darwinism, the individual-as-maximizing-agent analogy and bet-hedging

    PubMed Central

    Grafen, A.

    1999-01-01

    The central argument of The origin of species was that mechanical processes (inheritance of features and the differential reproduction they cause) can give rise to the appearance of design. The 'mechanical processes' are now mathematically represented by the dynamic systems of population genetics, and the appearance of design by optimization and game theory in which the individual plays the part of the maximizing agent. Establishing a precise individual-as-maximizing-agent (IMA) analogy for a population-genetics system justifies optimization approaches, and so provides a modern formal representation of the core of Darwinism. It is a hitherto unnoticed implication of recent population-genetics models that, contrary to a decades-long consensus, an IMA analogy can be found in models with stochastic environments (subject to a convexity assumption), in which individuals maximize expected reproductive value. The key is that the total reproductive value of a species must be considered as constant, so therefore reproductive value should always be calculated in relative terms. This result removes a major obstacle from the theoretical challenge to find a unifying framework which establishes the IMA analogy for all of Darwinian biology, including as special cases inclusive fitness, evolutionarily stable strategies, evolutionary life-history theory, age-structured models and sex ratio theory. This would provide a formal, mathematical justification of fruitful and widespread but 'intentional' terms in evolutionary biology, such as 'selfish', 'altruism' and 'conflict'.

  11. Formal expressions and corresponding expansions for the exact Kohn-Sham exchange potential

    NASA Astrophysics Data System (ADS)

    Bulat, Felipe A.; Levy, Mel

    2009-11-01

    Formal expressions and their corresponding expansions in terms of Kohn-Sham (KS) orbitals are deduced for the exchange potential vx(r) . After an alternative derivation of the basic optimized effective potential integrodifferential equations is given through a Hartree-Fock adiabatic connection perturbation theory, we present an exact infinite expansion for vx(r) that is particularly simple in structure. It contains the very same occupied-virtual quantities that appear in the well-known optimized effective potential integral equation, but in this new expression vx(r) is isolated on one side of the equation. An orbital-energy modified Slater potential is its leading term which gives encouraging numerical results. Along different lines, while the earlier Krieger-Li-Iafrate approximation truncates completely the necessary first-order perturbation orbitals, we observe that the improved localized Hartree-Fock (LHF) potential, or common energy denominator potential (CEDA), or effective local potential (ELP), incorporates the part of each first-order orbital that consists of the occupied KS orbitals. With this in mind, the exact correction to the LHF, CEDA, or ELP potential (they are all equivalent) is deduced and displayed in terms of the virtual portions of the first-order orbitals. We close by observing that the newly derived exact formal expressions and corresponding expansions apply as well for obtaining the correlation potential from an orbital-dependent correlation energy functional.

  12. Improving the Effectiveness and Efficiency of Teaching Large Classes: Development and Evaluation of a Novel e-Resource in Cancer Biology

    ERIC Educational Resources Information Center

    Hejmadi, Momna V.

    2007-01-01

    This paper describes the development and evaluation of a blended learning resource in the biosciences, created by combining online learning with formal face-to-face lectures and supported by formative assessments. In order to improve the effectiveness and efficiency of teaching large classes with mixed student cohorts, teaching was delivered through…

  13. Thermodynamics of the mesoscopic thermoelectric heat engine beyond the linear-response regime.

    PubMed

    Yamamoto, Kaoru; Hatano, Naomichi

    2015-10-01

    The mesoscopic thermoelectric heat engine is much anticipated as a device that can exploit, with high efficiency, waste heat inaccessible to conventional heat engines. However, derivations of the heat current in this engine have been either not general or described too briefly, even inappropriately in some cases. In this paper, we give a clear-cut derivation of the heat current of the engine, with suitable assumptions, beyond the linear-response regime. This resolves the confusion in the definition of the heat current in the linear-response regime. After verifying that we can construct the same formalism as that of the cyclic engine, we find the following two interesting results within the Landauer-Büttiker formalism: the efficiency of the mesoscopic thermoelectric engine reaches the Carnot efficiency if and only if the transmission probability is finite at a specific energy and zero otherwise; and the unitarity of the transmission probability guarantees the second law of thermodynamics, invalidating the argument of Benenti et al. in the linear-response regime that one could obtain finite power at the Carnot efficiency under broken time-reversal symmetry [Phys. Rev. Lett. 106, 230602 (2011)]. These results demonstrate how quantum mechanics constrains thermodynamics.
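
    The statement about delta-like transmission can be read off the standard two-terminal expressions. A sketch in our own notation (textbook Landauer-Büttiker formulas, not quoted from the paper):

    ```latex
    % With transmission T(E) and lead Fermi functions f_{L,R}(E),
    % the charge and (left-lead) heat currents are
    \begin{align}
      I &= \frac{2e}{h}\int \mathrm{d}E\; \mathcal{T}(E)\,[f_L(E)-f_R(E)],\\
      J_{Q,L} &= \frac{2}{h}\int \mathrm{d}E\;(E-\mu_L)\,\mathcal{T}(E)\,[f_L(E)-f_R(E)].
    \end{align}
    % If \mathcal{T}(E) is nonzero only at a single energy E_0, the ratio of
    % output work to input heat reduces to the Carnot value
    % \eta_C = 1 - T_c/T_h, consistent with the condition stated above.
    ```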

  14. Beam-steering efficiency optimization method based on a rapid-search algorithm for liquid crystal optical phased array.

    PubMed

    Xiao, Feng; Kong, Lingjiang; Chen, Jian

    2017-06-01

    A rapid-search algorithm to improve the beam-steering efficiency for a liquid crystal optical phased array was proposed and experimentally demonstrated in this paper. This proposed algorithm, in which the value of steering efficiency is taken as the objective function and the controlling voltage codes are considered as the optimization variables, consisted of a detection stage and a construction stage. It optimized the steering efficiency in the detection stage and adjusted its search direction adaptively in the construction stage to avoid getting caught in a wrong search space. Simulations had been conducted to compare the proposed algorithm with the widely used pattern-search algorithm using criteria of convergence rate and optimized efficiency. Beam-steering optimization experiments had been performed to verify the validity of the proposed method.
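
    A generic detection/construction search loop conveys the flavor of the two-stage structure described above (this is a Hooke-Jeeves-style pattern search of our own; the published rapid-search algorithm differs in detail, and the "measured efficiency" below is a hypothetical stand-in for a power-meter reading as a function of voltage codes):

    ```python
    import numpy as np

    def measured_efficiency(v):
        return np.exp(-np.sum((v - 0.6) ** 2))   # hypothetical smooth response

    def pattern_search(v, step=0.25, shrink=0.5, tol=1e-3):
        while step > tol:
            improved = False
            trial = v.copy()
            for i in range(len(v)):              # detection stage: probe each axis
                for delta in (+step, -step):
                    cand = trial.copy()
                    cand[i] += delta
                    if measured_efficiency(cand) > measured_efficiency(trial):
                        trial = cand
                        improved = True
                        break
            if improved:                          # construction stage: pattern move
                pattern = trial + (trial - v)
                v = pattern if measured_efficiency(pattern) > measured_efficiency(trial) else trial
            else:
                step *= shrink                    # adapt the search scale
        return v, measured_efficiency(v)

    v_opt, eff = pattern_search(np.zeros(4))
    print(eff)
    ```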

  15. Modeling of clover detector in addback mode

    NASA Astrophysics Data System (ADS)

    Kshetri, R.

    2012-07-01

    Based on the absorption and scattering of gamma-rays, a formalism has been presented for modeling the clover germanium detector in addback mode and predicting its response to high-energy γ-rays. In the present formalism, the operation of a bare clover detector can be described in terms of only three quantities. With one additional parameter, the formalism can be extended to the suppressed clover. Using experimental data on relative single-crystal efficiency and the addback factor as input, the peak-to-total ratio has been calculated for three energies (Eγ = 3.401, 5.324 and 10.430 MeV) where direct measurement of the peak-to-total ratio is impossible due to the absence of a radioactive source emitting a single monoenergetic gamma-ray of that energy. The experimental validation and consistency of the formalism have been shown using data for the TIGRESS clover detector. In a recent work (R. Kshetri, JINST 2012 7 P04008), we showed that for a given γ-ray energy, the formalism can be used to predict the peak-to-total ratio as a function of the number of detector modules. In the present paper, we show that for a given composite detector (the clover detector is considered here), the formalism can be used to predict the peak-to-total ratio as a function of γ-ray energy.

  16. The near optimality of the stabilizing control in a weakly nonlinear system with state-dependent coefficients

    NASA Astrophysics Data System (ADS)

    Dmitriev, Mikhail G.; Makarov, Dmitry A.

    2016-08-01

    We analyze the near-optimality of a computationally efficient nonlinear stabilizing control constructed for weakly nonlinear systems with coefficients depending on the state and on a formal small parameter. The problem was first investigated in [M. G. Dmitriev and D. A. Makarov, "The suboptimality of stabilizing regulator in a quasi-linear system with state-depended coefficients," in 2016 International Siberian Conference on Control and Communications (SIBCON) Proceedings, National Research University, Moscow, 2016]. In this paper, different representations of the optimal control and gain matrix are used, and theoretical results analogous to those of the cited work are obtained. As in the cited work, we also construct the form of the quality criterion for which this closed-loop control is optimal.
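
    The state-dependent coefficient approach can be sketched as follows (a minimal illustration of our own, with hypothetical system matrices and a small parameter eps; the paper's construction and analysis are more general): at each state, freeze A(x), solve the Riccati equation, and apply the resulting linear feedback.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    def A(x):
        eps = 0.1                      # formal small parameter (hypothetical)
        return np.array([[0.0, 1.0],
                         [-1.0 - eps * x[0] ** 2, -0.5]])

    def sdre_control(x):
        P = solve_continuous_are(A(x), B, Q, R)
        return -np.linalg.solve(R, B.T @ P) @ x    # u = -R^{-1} B^T P(x) x

    # Closed-loop simulation with explicit Euler (hypothetical step size):
    x, dt = np.array([1.0, 0.0]), 0.01
    for _ in range(1000):
        u = sdre_control(x)
        x = x + dt * (A(x) @ x + (B @ u).ravel())
    print(np.linalg.norm(x))   # decays toward 0 under the stabilizing law
    ```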

  17. Optimal state transfer of a single dissipative two-level system

    NASA Astrophysics Data System (ADS)

    Jirari, Hamza; Wu, Ning

    2016-04-01

    Optimal state transfer of a single two-level system (TLS) coupled to an Ohmic boson bath via off-diagonal TLS-bath coupling is studied by using optimal control theory. In the weak system-bath coupling regime where the time-dependent Bloch-Redfield formalism is applicable, we obtain the Bloch equation to probe the evolution of the dissipative TLS in the presence of a time-dependent external control field. By using the automatic differentiation technique to compute the gradient for the cost functional, we calculate the optimal transfer integral profile that can achieve an ideal transfer within a dimer system in the Fenna-Matthews-Olson (FMO) model. The robustness of the control profile against temperature variation is also analyzed.
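
    The gradient-based pulse optimization can be sketched on a closed two-level system (our own toy; the paper treats a dissipative TLS via Bloch-Redfield and computes gradients by automatic differentiation, for which a central finite difference is used here as a stand-in; Hamiltonian, time grid, and learning rate are hypothetical):

    ```python
    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    dt, n_steps = 0.075, 40
    psi0 = np.array([1, 0], dtype=complex)      # initial state |0>
    target = np.array([0, 1], dtype=complex)    # target state |1>

    def infidelity(u):
        psi = psi0
        for uk in u:                            # piecewise-constant control u(t)
            psi = expm(-1j * dt * (sz + uk * sx)) @ psi
        return 1.0 - abs(np.vdot(target, psi)) ** 2

    def grad(u, h=1e-6):                        # finite-difference gradient
        g = np.zeros_like(u)
        for k in range(len(u)):
            up, dn = u.copy(), u.copy()
            up[k] += h
            dn[k] -= h
            g[k] = (infidelity(up) - infidelity(dn)) / (2 * h)
        return g

    u = 0.5 * np.ones(n_steps)
    for _ in range(60):                         # plain gradient descent on the pulse
        u -= 1.0 * grad(u)
    print(infidelity(u))
    ```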

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiurasek, Jaromir; Cerf, Nicolas J.

    We investigate the asymmetric Gaussian cloning of coherent states which produces M copies from N input replicas in such a way that the fidelity of each copy may be different. We show that the optimal asymmetric Gaussian cloning can be performed with a single phase-insensitive amplifier and an array of beam splitters. We obtain a simple analytical expression characterizing the set of optimal asymmetric Gaussian cloning machines and prove the optimality of these cloners using the formalism of Gaussian completely positive maps and semidefinite programming techniques. We also present an alternative implementation of the asymmetric cloning machine where the phase-insensitive amplifier is replaced with a beam splitter, heterodyne detector, and feedforward.

  19. Design of materials with prescribed nonlinear properties

    NASA Astrophysics Data System (ADS)

    Wang, F.; Sigmund, O.; Jensen, J. S.

    2014-09-01

    We systematically design materials using topology optimization to achieve prescribed nonlinear properties under finite deformation. Instead of a formal homogenization procedure, a numerical experiment is proposed to evaluate the material performance in longitudinal and transverse tensile tests under finite deformation, i.e. stress-strain relations and Poisson's ratio. By minimizing errors between actual and prescribed properties, materials are tailored to achieve the target. Both two-dimensional (2D) truss-based and continuum materials are designed with various prescribed nonlinear properties. The numerical examples illustrate optimized materials with rubber-like behavior and also optimized materials with extreme strain-independent Poisson's ratio for axial strain intervals of ε_i ∈ [0.00, 0.30].

  20. Efficiency bounds of molecular motors under a trade-off figure of merit

    NASA Astrophysics Data System (ADS)

    Zhang, Yanchao; Huang, Chuankun; Lin, Guoxing; Chen, Jincan

    2017-05-01

    On the basis of irreversible thermodynamics and an elementary model of molecular motors converting chemical energy from ATP hydrolysis into mechanical work exerted against an external force, the efficiencies of the molecular motors are calculated at two different optimization configurations for a trade-off figure of merit representing the best compromise between the useful energy and the lost energy. The upper and lower bounds for the efficiency at the two optimization configurations are determined. It is found that the optimal efficiencies at the two optimization configurations are always larger than 1/2.

  1. Quality assurance in radiology: peer review and peer feedback.

    PubMed

    Strickland, N H

    2015-11-01

    Peer review in radiology means an assessment of the accuracy of a report issued by another radiologist. Inevitably, this involves a judgement opinion from the reviewing radiologist. Peer feedback is the means by which any form of peer review is communicated back to the original author of the report. This article defines terms, discusses the current status, identifies problems, and provides some recommendations as to the way forward, concentrating upon the software requirements for efficient peer review and peer feedback of reported imaging studies. Radiologists undertake routine peer review in their everyday clinical practice, particularly when reporting and preparing for multidisciplinary team meetings. More formal peer review of reported imaging studies has been advocated as a quality assurance measure to promote good clinical practice. It is also a way of assessing the competency of reporting radiologists referred for investigation to bodies such as the General Medical Council (GMC). The literature shows, firstly, that there is a very wide reported range of discrepancy rates in many studies, which have used a variety of non-comparable methodologies; and secondly, that applying scoring systems in formal peer review is often meaningless, unhelpful, and can even be detrimental. There is currently a lack of electronic peer feedback system software on the market to inform radiologists of any review of their work that has occurred or to provide them with clinical outcome information on cases they have previously reported. Learning opportunities are therefore missed. Radiologists should actively engage with the medical informatics industry to design optimal peer review and feedback software with features to meet their needs. Such a system should be easy to use, be fully integrated with the radiological information and picture archiving systems used clinically, and contain a free-text comment box, without a numerical scoring system. It should form a temporary record that cannot be permanently archived. It must provide automated feedback to the original author. Peer feedback, as part of everyday reporting, should enhance daily learning for radiologists. Software requirements for everyday peer feedback differ from those needed for a formal peer review process, which might only be necessary in the setting of a formal GMC enquiry into a particular radiologist's reporting competence, for example. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  2. Null tests of the standard model using the linear model formalism

    NASA Astrophysics Data System (ADS)

    Marra, Valerio; Sapone, Domenico

    2018-04-01

    We test both the Friedmann-Lemaître-Robertson-Walker geometry and Λ CDM cosmology in a model-independent way by reconstructing the Hubble function H (z ), the comoving distance D (z ), and the growth of structure f σ8(z ) using the most recent data available. We use the linear model formalism in order to optimally reconstruct the above cosmological functions, together with their derivatives and integrals. We then evaluate four of the null tests available in the literature that probe both background and perturbation assumptions. For all the four tests, we find agreement, within the errors, with the standard cosmological model.
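
    One standard background null test of flat ΛCDM is the Om diagnostic: Om(z) = (H²/H₀² − 1)/((1+z)³ − 1) is constant (equal to Ω_m) if and only if the expansion is flat ΛCDM. A minimal example with mock data (our own; the paper evaluates four such tests using reconstructed functions with propagated errors, which this sketch does not reproduce):

    ```python
    import numpy as np

    def om_diagnostic(z, H, H0):
        E2 = (H / H0) ** 2
        return (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

    # Mock H(z) generated from a fiducial flat LCDM with Omega_m = 0.3:
    z = np.linspace(0.1, 2.0, 20)
    H0, Om_fid = 70.0, 0.3
    H = H0 * np.sqrt(Om_fid * (1 + z) ** 3 + 1 - Om_fid)
    print(om_diagnostic(z, H, H0))   # constant 0.3, as the null test requires
    ```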

  3. Optimising import risk mitigation: anticipating the unintended consequences and competing risks of informal trade.

    PubMed

    Hueston, W; Travis, D; van Klink, E

    2011-04-01

    The effectiveness of risk mitigation may be compromised by informal trade, including illegal activities, parallel markets and extra-legal activities. While no regulatory system is 100% effective in eliminating the risk of disease transmission through animal and animal product trade, extreme risk aversion in formal import health regulations may increase informal trade, with the unintended consequence of creating additional risks outside regulatory purview. Optimal risk mitigation on a national scale requires scientifically sound yet flexible mitigation strategies that can address the competing risks of formal and informal trade. More robust risk analysis and creative engagement of nontraditional partners provide avenues for addressing informal trade.

  4. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
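
    For reference, the formal calculation under scrutiny looks like the following (a two-sample t-test power analysis; the effect size, alpha, and power below are conventional placeholder choices, and the article's point is precisely that such inputs are often not reliably known before an animal experiment):

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.8,   # Cohen's d (assumed!)
                                       alpha=0.05,
                                       power=0.8,
                                       alternative="two-sided")
    print(round(n_per_group))   # animals required per group under the assumptions
    ```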

  5. Zone-boundary optimization for direct laser writing of continuous-relief diffractive optical elements.

    PubMed

    Korolkov, Victor P; Nasyrov, Ruslan K; Shimansky, Ruslan V

    2006-01-01

    Enhancing the diffraction efficiency of continuous-relief diffractive optical elements fabricated by direct laser writing is discussed. A new method of zone-boundary optimization is proposed to correct exposure data only in narrow areas along the boundaries of diffractive zones. The optimization decreases the loss of diffraction efficiency related to the convolution of the desired phase profile with the writing-beam intensity distribution. A simplified stepped transition function that describes optimized exposure data near zone boundaries can be made universal for a wide range of zone periods. The approach permits an increase in diffraction efficiency similar to that of individual-pixel optimization but with less computational effort. Computer simulations demonstrated that zone-boundary optimization for a 6 μm period grating increases the efficiency by 7% and 14.5% for 0.6 μm and 1.65 μm writing-spot diameters, respectively. A diffraction efficiency of 65%-90% for 4-10 μm zone periods was obtained experimentally with this method.
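
    The effect being corrected is easy to reproduce numerically: the written relief is the ideal sawtooth profile convolved with the writing-beam intensity, which rounds the zone boundaries. A small illustration (our own; profile, sampling, and beam width are illustrative):

    ```python
    import numpy as np

    dx = 0.02                                  # sampling, microns
    x = np.arange(0, 24, dx)
    period = 6.0                               # zone period, microns
    desired = (x % period) / period            # ideal sawtooth relief (0..1)

    fwhm = 0.6                                 # writing-spot diameter, microns
    sigma = fwhm / 2.3548
    t = np.arange(-3 * sigma, 3 * sigma + dx, dx)
    beam = np.exp(-0.5 * (t / sigma) ** 2)
    beam /= beam.sum()

    written = np.convolve(desired, beam, mode="same")
    err = np.abs(written - desired)
    print("max boundary rounding:", err.max())  # loss concentrated at zone edges
    ```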

  6. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  7. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.

  8. Optimal ciliary beating patterns

    NASA Astrophysics Data System (ADS)

    Vilfan, Andrej; Osterman, Natan

    2011-11-01

    We introduce a measure for energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q2 / P , where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ) 2 / (ρP) , with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.

  9. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    PubMed

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
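
    The surrogate-modeling step can be sketched as follows (our own toy: a one-parameter geometry, a hypothetical stand-in for the FDTD absorptivity data, and a small scikit-learn network; the paper's model and averaging over the irradiance spectrum are more elaborate):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    def fdtd_standin(thickness, wavelength):       # hypothetical absorptivity
        return np.exp(-((wavelength - 0.5 - 0.3 * thickness) / 0.15) ** 2)

    # Training set over cell geometry (one thickness parameter) and wavelength:
    X = rng.uniform([0.0, 0.3], [1.0, 1.1], size=(2000, 2))
    y = fdtd_standin(X[:, 0], X[:, 1])

    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0).fit(X, y)

    # Spectrum-averaged absorption for a candidate geometry, via the cheap surrogate:
    wl = np.linspace(0.3, 1.1, 200)
    geom = 0.7
    avg = surrogate.predict(np.column_stack([np.full_like(wl, geom), wl])).mean()
    print(avg)
    ```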

  10. Biological optimization systems for enhancing photosynthetic efficiency and methods of use

    DOEpatents

    Hunt, Ryan W.; Chinnasamy, Senthil; Das, Keshav C.; de Mattos, Erico Rolim

    2012-11-06

    Biological optimization systems for enhancing photosynthetic efficiency and methods of use. Specifically, methods for enhancing photosynthetic efficiency including applying pulsed light to a photosynthetic organism, using a chlorophyll fluorescence feedback control system to determine one or more photosynthetic efficiency parameters, and adjusting one or more of the photosynthetic efficiency parameters to drive the photosynthesis by the delivery of an amount of light to optimize light absorption of the photosynthetic organism while providing enough dark time between light pulses to prevent oversaturation of the chlorophyll reaction centers are disclosed.

  11. Conditions for optimal efficiency of PCBM-based terahertz modulators

    NASA Astrophysics Data System (ADS)

    Yoo, Hyung Keun; Lee, Hanju; Lee, Kiejin; Kang, Chul; Kee, Chul-Sik; Hwang, In-Wook; Lee, Joong Wook

    2017-10-01

    We demonstrate the conditions for optimal modulation efficiency of active terahertz modulators based on phenyl-C61-butyric acid methyl ester (PCBM)-silicon hybrid structures. Highly efficient active control of the terahertz wave modulation was realized by controlling organic film thickness, annealing temperature, and laser excitation wavelength. Under the optimal conditions, the modulation efficiency reached nearly 100%. Charge distributions measured with a near-field scanning microwave microscanning technique corroborated the fact that the increase of photo-excited carriers due to the PCBM-silicon hybrid structure enables the enhancement of active modulation efficiency.

  12. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
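
    The response-surface step can be sketched with a two-factor toy standing in for the four-parameter design (our own; the "CFD" function and design points are hypothetical): fit a quadratic model to the designed runs, then optimize the surrogate.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement
    from scipy.optimize import minimize

    def cfd_standin(x):                       # hypothetical mixing efficiency
        return 0.9 - 0.2 * (x[0] - 0.3) ** 2 - 0.1 * (x[1] + 0.2) ** 2

    # Face-centered central composite design in 2 factors (coded units):
    pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                    [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
    y = np.array([cfd_standin(p) for p in pts])

    def quad_features(x):                     # 1, x1, x2, x1^2, x1*x2, x2^2
        feats = [1.0] + list(x)
        feats += [x[i] * x[j] for i, j in combinations_with_replacement(range(2), 2)]
        return np.array(feats)

    F = np.array([quad_features(p) for p in pts])
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)

    res = minimize(lambda x: -quad_features(x) @ coef, x0=np.zeros(2),
                   bounds=[(-1, 1)] * 2)
    print(res.x, -res.fun)                    # surrogate optimum and its value
    ```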

  13. A New On-Line Diagnosis Protocol for the SPIDER Family of Byzantine Fault Tolerant Architectures

    NASA Technical Reports Server (NTRS)

    Geser, Alfons; Miner, Paul S.

    2004-01-01

    This paper presents the formal verification of a new protocol for online distributed diagnosis for the SPIDER family of architectures. An instance of the Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) architecture consists of a collection of processing elements communicating over a Reliable Optical Bus (ROBUS). The ROBUS is a specialized fault-tolerant device that guarantees Interactive Consistency, Distributed Diagnosis (Group Membership), and Synchronization in the presence of a bounded number of physical faults. Formal verification of the original SPIDER diagnosis protocol provided a detailed understanding that led to the discovery of a significantly more efficient protocol. The original protocol was adapted from the formally verified protocol used in the MAFT architecture. It required O(N) message exchanges per defendant to correctly diagnose failures in a system with N nodes. The new protocol achieves the same diagnostic fidelity, but only requires O(1) exchanges per defendant. This paper presents this new diagnosis protocol and a formal proof of its correctness using PVS.

  14. State-vector formalism and the Legendre polynomial solution for modelling guided waves in anisotropic plates

    NASA Astrophysics Data System (ADS)

    Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin

    2018-01-01

    We present a numerical method to solve the phase dispersion curves in general anisotropic plates. This approach involves an exact solution to the problem in the form of Legendre polynomials of multiple integrals, which we substitute into the state-vector formalism. To improve the efficiency of the proposed method, we take particular care to demonstrate the analytical methodology. Furthermore, we analyze the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates. The basic feature of the proposed method is the expansion of field quantities by Legendre polynomials. The Legendre polynomial method avoids solving the transcendental dispersion equation, which can only be solved numerically. The state-vector formalism combined with the Legendre polynomial expansion distinguishes adjacent dispersion modes clearly, even when the modes are very close. We then illustrate the theoretical solutions of the dispersion curves obtained by this method for isotropic and anisotropic plates. Finally, we compare the proposed method with the global matrix method (GMM), finding excellent agreement.

  15. Overall Traveling-Wave-Tube Efficiency Improved By Optimized Multistage Depressed Collector Design

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2002-01-01

    The microwave traveling wave tube (TWT) is used widely for space communications and high-power airborne transmitting sources. One of the most important features in designing a TWT is overall efficiency. Yet, overall TWT efficiency is strongly dependent on the efficiency of the electron beam collector, particularly for high values of collector efficiency. For these reasons, the NASA Glenn Research Center developed an optimization algorithm based on simulated annealing to quickly design highly efficient multistage depressed collectors (MDCs). Simulated annealing is a strategy for solving highly nonlinear combinatorial optimization problems. Its major advantage over other methods is its ability to avoid becoming trapped in local minima. Simulated annealing is based on an analogy to statistical thermodynamics, specifically the physical process of annealing: heating a material to a temperature that permits many atomic rearrangements and then cooling it carefully and slowly, until it freezes into a strong, minimum-energy crystalline structure. This minimum-energy crystal corresponds to the optimal solution of a mathematical optimization problem. The TWT used as a baseline for optimization was the 32-GHz, 10-W, helical TWT developed for the Cassini mission to Saturn. The method of collector analysis and design used was a 2-1/2-dimensional computational procedure that employs two types of codes, a large-signal analysis code and an electron trajectory code. The large-signal analysis code produces the spatial, energetic, and temporal distributions of the spent beam entering the MDC. An electron trajectory code uses the resultant data to perform the actual collector analysis. The MDC was optimized for maximum MDC efficiency and minimum final kinetic energy of all collected electrons (to reduce heat transfer). The optimized geometric and electrical configuration achieved a collector efficiency of 93.8 percent. The results show the improvement in collector efficiency from 89.7 to 93.8 percent, resulting in an increase of three overall efficiency points. In addition, the time to design a highly efficient MDC was reduced from a month to a few days. All work was done in-house at Glenn for the High Rate Data Delivery Program. Future plans include optimizing the MDC and TWT interaction circuit in tandem to further improve overall TWT efficiency.
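
    The core of the method is the Metropolis acceptance rule with a slowly decreasing temperature. A generic simulated-annealing loop (our own sketch; the real objective evaluates collector efficiency from trajectory codes, whereas the quadratic objective and schedule below are placeholders):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def objective(v):                 # stand-in for -collector_efficiency(v)
        return np.sum((v - 1.0) ** 2)

    v = rng.uniform(-2, 2, size=5)    # e.g., electrode voltages / geometry knobs
    f = objective(v)
    T = 1.0
    while T > 1e-4:
        cand = v + rng.normal(scale=0.3, size=v.size)
        fc = objective(cand)
        # Metropolis rule: always accept improvements; accept uphill moves with
        # probability exp(-(fc - f)/T), which lets the search escape local minima.
        if fc < f or rng.random() < np.exp(-(fc - f) / T):
            v, f = cand, fc
        T *= 0.999                    # slow cooling schedule
    print(f)
    ```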

  16. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problem, which only have linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient, it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
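
    A toy end-to-end illustration of the mapping's shape (entirely our own and far simpler than the thesis's construction): pick binary controls to match a target response of a pre-solved linear model, fold a constraint in as a quadratic penalty, and brute-force the resulting QUBO as a stand-in for an adiabatic quantum optimizer.

    ```python
    import numpy as np
    from itertools import product

    # Pre-solved linear-PDE response per unit control (hypothetical numbers):
    G = np.array([[0.8, 0.2, 0.1],
                  [0.1, 0.7, 0.3]])          # response of 2 probes to 3 controls
    target = np.array([0.9, 0.8])
    penalty, k = 2.0, 2                      # soft constraint: "use exactly 2"

    n = G.shape[1]
    # Objective ||G x - target||^2 + penalty*(sum x - k)^2 expanded to x^T Q x,
    # folding linear terms onto the diagonal via x_i^2 = x_i for binaries:
    Q = G.T @ G + penalty * np.ones((n, n))
    Q += np.diag(-2 * G.T @ target - 2 * k * penalty)

    best = min(product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print(best)
    ```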

  17. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  18. Towards Efficient and Accurate Description of Many-Electron Problems: Developments of Static and Time-Dependent Electronic Structure Methods

    NASA Astrophysics Data System (ADS)

    Ding, Feizhi

    Understanding electronic behavior in molecular and nano-scale systems is fundamental to the development and design of novel technologies and materials, with applications ranging from fundamental research to energy conversion. This dissertation aims to contribute to this goal by developing novel methods and applications of first-principles electronic structure theory. Specifically, we present new methods and applications of excited-state multi-electron dynamics based on the real-time (RT) time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) formalisms, and new developments in multi-configuration self-consistent field (MCSCF) theory for modeling ground-state electronic structure. The RT-TDHF/TDDFT developments and applications fall into three broad, coherently integrated research areas: (1) modeling the interaction between molecules and external electromagnetic perturbations, where we first prove both analytically and numerically the gauge invariance of the TDHF/TDDFT formalisms, then present a novel, efficient method for calculating molecular nonlinear optical properties, and finally study quantum coherent plasmons in metal nanowires using RT-TDDFT; (2) modeling excited-state charge transfer in molecules, where we investigate the mechanisms of bridge-mediated electron transfer and introduce a newly developed non-equilibrium quantum/continuum embedding method for studying charge transfer dynamics in solution; (3) developing first-principles spin-dependent many-electron dynamics, where we present an ab initio non-relativistic spin dynamics method based on the two-component generalized Hartree-Fock approach, generalize it to the two-component TDDFT framework, and combine it with the Ehrenfest molecular dynamics approach to model the interaction between electron spins and nuclear motion. These developments and applications open up new computational and theoretical tools for the development and understanding of chemical reactions, nonlinear optics, electromagnetism, and spintronics. Lastly, we present a new algorithm for large-scale MCSCF calculations that can utilize massively parallel machines while still maintaining optimal performance on each single processor. This greatly improves the efficiency of MCSCF calculations for studying chemical dissociation and high-accuracy quantum-mechanical simulations.

  19. Implicational Markedness and Frequency in Constraint-Based Computational Models of Phonological Learning

    ERIC Educational Resources Information Center

    Jarosz, Gaja

    2010-01-01

    This study examines the interacting roles of implicational markedness and frequency from the joint perspectives of formal linguistic theory, phonological acquisition and computational modeling. The hypothesis that child grammars are rankings of universal constraints, as in Optimality Theory (Prince & Smolensky, 1993/2004), that learning involves a…

  20. Constrained variational calculus for higher order classical field theories

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; de León, Manuel; Martín de Diego, David

    2010-11-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  1. Rational Approximations to Rational Models: Alternative Algorithms for Category Learning

    ERIC Educational Resources Information Center

    Sanborn, Adam N.; Griffiths, Thomas L.; Navarro, Daniel J.

    2010-01-01

    Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models…

  2. Describing the What and Why of Students' Difficulties in Boolean Logic

    ERIC Educational Resources Information Center

    Herman, Geoffrey L.; Loui, Michael C.; Kaczmarczyk, Lisa; Zilles, Craig

    2012-01-01

    The ability to reason with formal logic is a foundational skill for computer scientists and computer engineers that scaffolds the abilities to design, debug, and optimize. By interviewing students about their understanding of propositional logic and their ability to translate from English specifications to Boolean expressions, we characterized…

  3. Expeditious construction of (+)-mintlactone via intramolecular hetero-Pauson-Khand reaction.

    PubMed

    Gao, Peng; Xu, Peng-Fei; Zhai, Hongbin

    2009-03-20

    (+)-Mintlactone, a bicyclic monoterpene natural product, has been efficiently assembled from (-)-citronellol in three steps. The synthesis features nitrous acid-induced formal isopropylidene "demethanation" and the molybdenum-mediated intramolecular hetero-Pauson-Khand reaction.

  4. Phosphate Tether-Mediated Approach to the Formal Total Synthesis of (-)-Salicylihalamides A and B

    PubMed Central

    Chegondi, Rambabu; Tan, Mary M. L.; Hanson, Paul R.

    2011-01-01

    A concise formal synthesis of the cytotoxic macrolides (-)-salicylihalamides A and B is reported. Key features of the synthetic strategy include a chemoselective hydroboration, a highly regio- and diastereoselective methyl cuprate addition, a Pd-catalyzed formate reduction, and an E-selective ring-closing metathesis to construct the 12-membered macrocycle subunit. Overall, two routes have been developed from a readily prepared bicyclic phosphate (4 steps): a 13-step route and a more efficient 9-step sequence relying on regioselective esterification of a key diol. PMID:21504150

  5. Can Regulatory Bodies Expect Efficient Help from Formal Methods?

    NASA Technical Reports Server (NTRS)

    Lopez Ruiz, Eduardo R.; Lemoine, Michel

    2010-01-01

    In the context of EDEMOI (a French national project that proposed the use of semi-formal and formal methods to infer the consistency and robustness of aeronautical regulations through the analysis of faithfully representative models), a methodology has been suggested and applied to different safety- and security-related aeronautical regulations. This paper summarizes the preliminary results of this experience by stating the methodology's expected benefits from a scientific point of view and its actual benefits from a regulatory body's point of view.

  6. Developing an eLearning tool formalizing in YAWL the guidelines used in a transfusion medicine service.

    PubMed

    Russo, Paola; Piazza, Miriam; Leonardi, Giorgio; Roncoroni, Layla; Russo, Carlo; Spadaro, Salvatore; Quaglini, Silvana

    2012-01-01

    Blood transfusion is a complex activity subject to a high risk of potentially fatal errors. The development and application of computer-based systems could help reduce the error rate, playing a fundamental role in improving the quality of care. This poster presents an eLearning tool, currently under development, that formalizes the guidelines of the transfusion process. The system, implemented in YAWL (Yet Another Workflow Language), will be used to train personnel in order to improve the efficiency of care and to reduce errors.

  7. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and the reliability-based optimization model is then formulated. In addition, a modified sequential optimization method, employing the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle comprising SO, reliability assessment, and constraint updates is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
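
    The nonintrusive PCE step can be sketched in one dimension (our own toy; the paper's expansion is multivariate and the "entry response" below is hypothetical): sample the expensive model, fit probabilists' Hermite polynomials in the standardized uncertainty, and read statistics off the coefficients.

    ```python
    import numpy as np
    from math import factorial
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(3)
    xi = rng.standard_normal(400)             # standardized uncertain parameter
    y = np.cos(0.8 * xi) + 0.1 * xi           # stand-in for a trajectory metric

    deg = 6
    coef = He.hermefit(xi, y, deg)            # nonintrusive (regression) PCE fit

    # Probabilists' Hermite polynomials satisfy E[He_n He_m] = n! * delta_nm
    # under a standard normal, so mean and variance follow from coefficients:
    mean = coef[0]
    var = sum(c ** 2 * factorial(k) for k, c in enumerate(coef) if k > 0)
    print(mean, var)
    ```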

  8. Generalized Bondi-Sachs equations for characteristic formalism of numerical relativity

    NASA Astrophysics Data System (ADS)

    Cao, Zhoujian; He, Xiaokai

    2013-11-01

    The Cauchy formalism of numerical relativity has been successfully applied to simulate various dynamical spacetimes without any symmetry assumption. But discovering how to set a mathematically consistent and physically realistic boundary condition is still an open problem for the Cauchy formalism. In addition, numerical truncation error and finite-region ambiguity affect the accuracy of gravitational waveform calculation. The characteristic extraction method helps considerably with the finite-region ambiguity, but it does not resolve the other issues. Beyond these problems, computational efficiency is another concern for the Cauchy formalism. Although the characteristic formalism of numerical relativity suffers from the difficulty of caustics in the inner near zone, it has advantages with respect to all of the issues listed above. Cauchy-characteristic matching (CCM) is a possible way to exploit these advantages of the characteristic formalism and to treat the inner caustics at the same time. CCM has difficulty treating the gauge difference between the Cauchy part and the characteristic part. We propose generalized Bondi-Sachs equations for the characteristic formalism for the purpose of Cauchy-characteristic matching. Our proposal admits a single numerical evolution scheme for both the Cauchy part and the characteristic part, and our generalized Bondi-Sachs equations have one adjustable gauge freedom which can be related to the gauge used in the Cauchy part. These equations can therefore make the Cauchy part and the characteristic part share a consistent gauge condition. Our proposal thus gives a possible new starting point for Cauchy-characteristic matching.

  9. Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry

    DTIC Science & Technology

    2015-12-22

    Report AFRL-AFOSR-VA-TR-2016-0018: Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry. Principal Investigator: Warren B. Powell, Trustees of Princeton University, Department of Operations Research. Period of performance: 2012-07-01 to 2015-09-30.

  10. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer

    PubMed Central

    Yu, Hongyan; Zhang, Yongqiang; Yang, Yuanyuan; Ji, Luyue

    2017-01-01

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver: power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize the non-convex optimization problem, taking into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed optimal iterative algorithm can achieve optimal solutions within a small number of iterations, and show various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively. PMID:28820496

  11. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer.

    PubMed

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-08-18

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver: power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize the non-convex optimization problem, taking into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed optimal iterative algorithm can achieve optimal solutions within a small number of iterations, and show various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively.
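
    The fractional-programming step generalizes as follows: Dinkelbach's method turns maximizing a ratio EE(p) = R(p)/(P_c + p) into a sequence of concave subtractive problems. A minimal single-link sketch (our own; the rate model, circuit power, and power cap are placeholders, and the paper's multi-constraint problem is richer):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    Pc, g, N0, p_max = 0.1, 5.0, 1.0, 2.0        # circuit power, gain, noise, cap

    def rate(p):
        return np.log2(1.0 + g * p / N0)          # spectral efficiency

    q = 0.0                                       # current EE estimate
    for _ in range(30):                           # Dinkelbach iterations
        # Solve max_p  R(p) - q*(Pc + p)  over the feasible power interval:
        res = minimize_scalar(lambda p: -(rate(p) - q * (Pc + p)),
                              bounds=(0.0, p_max), method="bounded")
        p_star = res.x
        q_new = rate(p_star) / (Pc + p_star)      # update the EE ratio
        if abs(q_new - q) < 1e-9:
            break
        q = q_new
    print("optimal power:", p_star, "energy efficiency:", q)
    ```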

  12. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    NASA Astrophysics Data System (ADS)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Lévy flight swarm intelligence, referred to as artificial bee colony Lévy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design, thereby improving the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
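
    A minimal sketch of how Lévy-flight steps are commonly generated (Mantegna's algorithm); the step scale 0.01 and the two-dimensional solution vector are illustrative assumptions, not parameters from the paper:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=np.random.default_rng(0)):
    # Mantegna's algorithm: the ratio of two Gaussians yields heavy-tailed steps
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# a scout bee perturbing a stalled food source with a Levy step
x = np.array([0.3, 0.7])                 # current solution (illustrative)
x_new = x + 0.01 * levy_step(size=2)     # 0.01 is an assumed step scale
```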

  13. Multiscale time-dependent density functional theory: Demonstration for plasmons.

    PubMed

    Jiang, Jiajian; Abi Mansour, Andrew; Ortoleva, Peter J

    2017-08-07

    Plasmon properties are of significant interest in pure and applied nanoscience. While time-dependent density functional theory (TDDFT) can be used to study plasmons, it becomes impractical for elucidating the effect of size, geometric arrangement, and dimensionality in complex nanosystems. In this study, a new multiscale formalism that addresses this challenge is proposed. This formalism is based on Trotter factorization and the explicit introduction of a coarse-grained (CG) structure function constructed as the Weierstrass transform of the electron wavefunction. This CG structure function is shown to vary on a time scale much longer than that of the wavefunction itself. A multiscale propagator that coevolves both the CG structure function and the electron wavefunction is shown to bring substantial efficiency over classical propagators used in TDDFT. This efficiency follows from the enhanced numerical stability of the multiscale method and the consequently larger time steps that can be used in a discrete time evolution. The multiscale algorithm is demonstrated for plasmons in a group of interacting sodium nanoparticles (15-240 atoms), and it achieves improved efficiency over TDDFT without significant loss of accuracy or space-time resolution.
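
    Schematically, the coarse-grained structure function is a Gaussian smoothing (Weierstrass transform) of the wavefunction; the smoothing width \sigma below is assumed notation, not the paper's exact parametrization:

```latex
% Schematic Weierstrass (Gaussian-blur) transform defining a coarse-grained
% structure function; \sigma is an assumed smoothing scale.
\Phi(\mathbf{r},t) \;=\; \frac{1}{(2\pi\sigma^{2})^{3/2}}
\int d^{3}r' \; e^{-\lvert \mathbf{r}-\mathbf{r}'\rvert^{2}/(2\sigma^{2})}\,
\psi(\mathbf{r}',t)
```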

  14. Efficiency improvement in proton dose calculations with an equivalent restricted stopping power formalism

    NASA Astrophysics Data System (ADS)

    Maneval, Daniel; Bouchard, Hugo; Ozell, Benoît; Després, Philippe

    2018-01-01

    The equivalent restricted stopping power formalism is introduced for proton mean energy loss calculations under the continuous slowing down approximation. The objective is the acceleration of Monte Carlo dose calculations by allowing larger steps while preserving accuracy. The fractional energy loss per step length ɛ was obtained with a secant method and a Gauss-Kronrod quadrature estimation of the integral equation relating the mean energy loss to the step length. The midpoint rule of the Newton-Cotes formulae was then used to solve this equation, allowing the creation of a lookup table linking ɛ to the equivalent restricted stopping power L eq, used here as a key physical quantity. The mean energy loss for any step length was simply defined as the product of the step length with L eq. Proton inelastic collisions with electrons were added to GPUMCD, a GPU-based Monte Carlo dose calculation code. The proton continuous slowing-down was modelled with the L eq formalism. GPUMCD was compared to Geant4 in a validation study where ionization processes alone were activated and a voxelized geometry was used. The energy straggling was first switched off to validate the L eq formalism alone. Dose differences between Geant4 and GPUMCD were smaller than 0.31% for the L eq formalism. The mean error and the standard deviation were below 0.035% and 0.038% respectively. 99.4 to 100% of GPUMCD dose points were consistent with a 0.3% dose tolerance. GPUMCD 80% falloff positions (R80) matched Geant4's R80 within 1 μm. With the energy straggling, dose differences were below 2.7% in the Bragg peak falloff and smaller than 0.83% elsewhere. The R80 positions matched within 100 μm. The overall computation times to transport one million protons with GPUMCD were 31-173 ms. Under similar conditions, Geant4 computation times were 1.4-20 h. The L eq formalism led to an intrinsic efficiency gain factor ranging between 30 and 630, increasing with the prescribed accuracy of simulations. The L eq formalism allows larger steps, leading to an O(1) algorithmic time complexity. It significantly accelerates Monte Carlo proton transport while preserving accuracy. It therefore constitutes a promising variance reduction technique for computing proton dose distributions in a clinical context.
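
    The core numerical idea, tabulating once the link between step length and mean energy loss under the continuous slowing down approximation, can be sketched as follows. The power-law stopping power is a toy stand-in for the real physics, and a bracketing root-finder replaces the paper's secant method with Gauss-Kronrod quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def S(E):
    # toy power-law stopping power [MeV/cm]; NOT the paper's physics
    return 50.0 * E ** -0.8

def path_length(E, dE):
    # CSDA path length travelled while the mean energy drops from E to E - dE
    val, _ = quad(lambda Ep: 1.0 / S(Ep), E - dE, E)
    return val

def L_eq(E, s):
    # find the mean energy loss dE satisfying path_length(E, dE) = s
    # (bracketing root-finder standing in for the paper's secant method)
    dE = brentq(lambda x: path_length(E, x) - s, 1e-9, 0.5 * E)
    return dE / s      # equivalent restricted stopping power [MeV/cm]

# tabulate once; the mean loss over any step s is then simply s * L_eq
energies = np.linspace(10.0, 200.0, 20)              # MeV
table = {E: L_eq(E, s=0.1) for E in energies}        # 1 mm steps
```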

  15. Clinical knowledge-based inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Xing, Lei

    2004-11-01

    Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure-specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also on the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.

  16. Maximizing the efficiency of multienzyme process by stoichiometry optimization.

    PubMed

    Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri

    2014-09-05

    Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56 %. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
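
    A minimal sketch of the stoichiometry-optimization step: choosing relative enzyme loadings so that the flux-limiting step of a toy three-enzyme Michaelis-Menten pathway is as fast as possible. All kinetic constants are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import minimize

kcat = np.array([12.0, 3.5, 8.0])   # turnover numbers [1/s] (placeholders)
Km   = np.array([0.2, 0.5, 0.1])    # Michaelis constants [mM] (placeholders)
S    = np.array([1.0, 1.0, 1.0])    # assumed substrate levels [mM]

def pathway_flux(frac, total=1.0):
    e = total * frac                 # enzyme concentration assigned to each step
    v = kcat * e * S / (Km + S)      # Michaelis-Menten rate of each step
    return v.min()                   # steady flux limited by the slowest step

# maximize the limiting rate subject to a fixed total biocatalyst budget
res = minimize(lambda f: -pathway_flux(f), x0=np.ones(3) / 3,
               bounds=[(1e-6, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda f: f.sum() - 1.0})
print("optimal relative loadings:", res.x.round(3))
```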

  17. Multi-rendezvous low-thrust trajectory optimization using costate transforming and homotopic approach

    NASA Astrophysics Data System (ADS)

    Chen, Shiyu; Li, Haiyang; Baoyin, Hexi

    2018-06-01

    This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are one main challenge in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results may then be utilized as an initial guess in the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are surpassed, and a homotopic approach is employed to improve the convergence efficiency of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.

  18. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of the parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate, and one flood event to validate, the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions, L1-L4 (Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM)), are considered informal, whereas the remaining ones (L5-L7) fall in the formal category. L5 builds on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, the serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost identical effect on the sensitivity of the parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable estimation of parameters due to violation of the residual-error assumptions. Thus, likelihood function L7 provides the posterior distribution of the model parameters credibly and can therefore be employed for further applications.
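
    For reference, the Nash-Sutcliffe efficiency used as informal likelihood L1 has the standard form below, with Q_t^{obs} and Q_t^{sim} the observed and simulated discharges:

```latex
% Nash-Sutcliffe efficiency, one of the informal likelihood measures.
NS \;=\; 1 \;-\;
\frac{\sum_{t=1}^{T}\bigl(Q_t^{obs}-Q_t^{sim}\bigr)^{2}}
     {\sum_{t=1}^{T}\bigl(Q_t^{obs}-\bar{Q}^{obs}\bigr)^{2}}
```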

  19. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhao, Changhong; Zamzam, Ahmed S.

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
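
    The distributed solver alternates local minimizations with a consensus step. A minimal consensus-ADMM sketch, with toy quadratic objectives standing in for the two operators' OWPF costs and a scalar coupling variable (e.g., pump power):

```python
def argmin_quad(a, b, rho, z, u):
    # closed-form minimizer of a*(x - b)^2 + (rho/2)*(x - z + u)^2
    return (2.0 * a * b + rho * (z - u)) / (2.0 * a + rho)

rho, z, u1, u2 = 1.0, 0.0, 0.0, 0.0
for _ in range(100):
    x1 = argmin_quad(1.0, 2.0, rho, z, u1)   # "power operator" local update
    x2 = argmin_quad(3.0, 1.0, rho, z, u2)   # "water operator" local update
    z = 0.5 * (x1 + u1 + x2 + u2)            # consensus (averaging) step
    u1 += x1 - z                             # dual updates
    u2 += x2 - z
print(f"agreed coupling value: {z:.4f}")     # converges to 1.25 here
```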

  20. Optimization of radiotherapy. Some notes on the principles and practice of optimization in cancer treatment and implications for clinical research.

    PubMed

    Andrews, J R

    1981-01-01

    Two methods dominate cancer treatment--one, the traditional best practice, individualized treatment method and two, the a priori determined decision method of the interinstitutional, cooperative, clinical trial. In the first, choices are infinite and can be made at the time of treatment; in the second, choices are finite and are made in advance of treatment on a random basis. Neither method systematically selects, identifies, or formalizes the optimum level of effect in the treatment chosen. Of the two, it can be argued that the first, other things being equal, is more likely to select the optimum treatment. The determination of level of effect for the optimization of cancer treatment requires the generation of dose-response relationships for both benefit and risk and the introduction of benefit and risk considerations and judgements. The clinical trial, as presently constituted, does not yield this kind of information, it being, generally, of the binary yes or no, better or worse type. The best practice, individualized treatment method can yield, when adequately documented, both a range of dose-response relationships and a variety of benefit and risk considerations. The presentation will be limited to a consideration of a single modality of cancer treatment, radiation therapy, but an analogy with other modalities of cancer treatment will be inferred. Criteria for optimization will be developed and graphic means for its identification and formalization will be demonstrated with examples taken from the radiotherapy literature. The general problem of optimization theory and practice will be discussed; the necessity for its exploration in relation to the increasing complexity of cancer treatment will be developed; and recommendations for clinical research will be made including a proposal for the support of clinics as an alternative to the support of programs.

  1. A Knowledge-based System for Intelligent Support in Pharmacogenomics Evidence Assessment: Ontology-driven Evidence Representation and Retrieval.

    PubMed

    Lee, Chia-Ju; Devine, Beth; Tarczy-Hornoch, Peter

    2017-01-01

    Pharmacogenomics holds promise as a critical component of precision medicine. Yet, the use of pharmacogenomics in routine clinical care is minimal, partly due to the lack of efficient and effective use of existing evidence. This paper describes the design, development, implementation and evaluation of a knowledge-based system that fulfills three critical features: a) providing clinically relevant evidence, b) applying an evidence-based approach, and c) using semantically computable formalism, to facilitate efficient evidence assessment to support timely decisions on adoption of pharmacogenomics in clinical care. To illustrate functionality, the system was piloted in the context of clopidogrel and warfarin pharmacogenomics. In contrast to existing pharmacogenomics knowledge bases, the developed system is the first to exploit the expressivity and reasoning power of logic-based representation formalism to enable unambiguous expression and automatic retrieval of pharmacogenomics evidence to support systematic review with meta-analysis.

  2. A demand-side view of risk adjustment.

    PubMed

    Feldman, R; Dowd, B E; Maciejewski, M

    2001-01-01

    This paper analyzes the efficient allocation of consumers to health plans. Specifically, we address the question of why employers that offer multiple health plans often make larger contributions to the premiums of the high-cost plans. Our perspective is that the subsidy for high-cost plans represents a form of demand-side risk adjustment that improves efficiency. Without such subsidies (and in the absence of formal risk adjustment), too few employees would choose the high-cost plans preferred by high-risk workers. We test the theory by estimating a model of the employer premium subsidy, using data from a survey of large public employers in 1994. Our empirical analysis shows that employers are more likely to subsidize high-cost plans when the benefits of risk adjustment are greater. The findings suggest that the premium subsidy can accomplish some of the benefits of formal risk adjustment.

  3. Benzene construction via organocatalytic formal [3+3] cycloaddition reaction.

    PubMed

    Zhu, Tingshun; Zheng, Pengcheng; Mou, Chengli; Yang, Song; Song, Bao-An; Chi, Yonggui Robin

    2014-09-25

    The benzene unit, in its substituted forms, is a most common scaffold in natural products, bioactive molecules and polymer materials. Nearly 80% of the 200 best-selling small-molecule drugs contain at least one benzene moiety. Not surprisingly, the synthesis of substituted benzenes receives constant attention. At present, the dominant methods use a pre-existing benzene framework to install substituents by using conventional functional group manipulations or transition metal-catalyzed carbon-hydrogen bond activations. These otherwise impressive approaches require multiple synthetic steps and are ineffective from both economic and environmental perspectives. Here we report an efficient method for the synthesis of substituted benzene molecules. Instead of relying on pre-existing aromatic rings, we construct the benzene core through a carbene-catalyzed formal [3+3] reaction. Given the simplicity and high efficiency, we expect this strategy to be of wide use especially for large scale preparation of biomedicals and functional materials.

  4. Five-Junction Solar Cell Optimization Using Silvaco Atlas

    DTIC Science & Technology

    2017-09-01

    Drawing on experimental sources [1], [4], [6], a numerical method is selected for solving the non-linear equations that make up the simulation. Optimization of solar cell efficiency is carried out via a nearly orthogonal balanced design of experiments methodology; Silvaco ATLAS is utilized to model the five-junction cell.

  5. Performance Limits of Non-Line-of-Sight Optical Communications

    DTIC Science & Technology

    2015-05-01

    This project addresses the main challenges towards optimizing the UV communication system, including light-emitting diodes (LEDs), solar blind filters, and high efficiency solar blind photo detectors.

  6. The multi-criteria optimization for the formation of the multiple-valued logic model of a robotic agent

    NASA Astrophysics Data System (ADS)

    Bykovsky, A. Yu; Sherbakov, A. A.

    2016-08-01

    The C-valued Allen-Givone algebra is an attractive tool for modeling a robotic agent, but it requires the consensus method of minimization for the simplification of logic expressions. This procedure substitutes the maximal truth value for some undefined states of the function, thus extending the initially given truth table. This in turn creates the problem of different formal representations of the same initially given function. Multi-criteria optimization is proposed for the deliberate choice of undefined states and for model formation.

  7. A unified stochastic formulation of dissipative quantum dynamics. I. Generalized hierarchical equations

    NASA Astrophysics Data System (ADS)

    Hsieh, Chang-Yu; Cao, Jianshu

    2018-01-01

    We extend a standard stochastic theory to study open quantum systems coupled to a generic quantum environment. We exemplify the general framework by studying a two-level quantum system coupled bilinearly to the three fundamental classes of non-interacting particles: bosons, fermions, and spins. In this unified stochastic approach, the generalized stochastic Liouville equation (SLE) formally captures the exact quantum dissipations when noise variables with appropriate statistics for different bath models are applied. Anharmonic effects of a non-Gaussian bath are precisely encoded in the bath multi-time correlation functions that noise variables have to satisfy. Starting from the SLE, we devise a family of generalized hierarchical equations by averaging out the noise variables and expand bath multi-time correlation functions in a complete basis of orthonormal functions. The general hierarchical equations constitute systems of linear equations that provide numerically exact simulations of quantum dynamics. For bosonic bath models, our general hierarchical equation of motion reduces exactly to an extended version of hierarchical equation of motion which allows efficient simulation for arbitrary spectral densities and temperature regimes. Similar efficiency and flexibility can be achieved for the fermionic bath models within our formalism. The spin bath models can be simulated with two complementary approaches in the present formalism. (I) They can be viewed as an example of non-Gaussian bath models and be directly handled with the general hierarchical equation approach given their multi-time correlation functions. (II) Alternatively, each bath spin can be first mapped onto a pair of fermions and be treated as fermionic environments within the present formalism.

  8. Transforming a Brutalist Monument into an Energy Efficient Building Without Destroying the Formal Appealing: The Example of the Mediterranean Bank in Potenza (Italy)

    NASA Astrophysics Data System (ADS)

    Lembo, Filiberto

    In the 1980s, "brutalist" and "monumentalist" architecture reached Italy, and Potenza in particular, where it left some interesting examples, such as the building designed as the home of the Mediterranean Bank. A monumental building, entirely in exposed reinforced concrete and curtain walls, with refined proportional relationships, but so devoid of insulation (annual heat demand of 69 kWh/m3 year) that its operation became economically unbearable, and it was abandoned a few years ago. The aim of this work was to define a design methodology that preserves all the qualities of the architecture, at an economically bearable cost, while making the building energy efficient. This was done by applying very thick and efficient insulation, protected by a continuous ventilated rainscreen, finished with several layers of a thin plaster with a béton-like effect, which produces a morphology that, while different, recalls the original. The curtain walls were doubled with a double-skin façade, whose performance was optimized with purpose-built software. The huge skylight roof of the interior atrium was doubled with a new one, differently oriented and thermally more effective. The roof was covered with photovoltaic panels. The result is an annual heat and refrigeration demand of 17 kWh/m3 year, at a cost of 20,000 €/m2, quickly amortized thanks to savings in operating costs for air conditioning (from €104,800.00 to €14,700.00 per year).

  9. Pump RIN-induced impairments in unrepeatered transmission systems using distributed Raman amplifier.

    PubMed

    Cheng, Jingchi; Tang, Ming; Lau, Alan Pak Tao; Lu, Chao; Wang, Liang; Dong, Zhenhua; Bilal, Syed Muhammad; Fu, Songnian; Shum, Perry Ping; Liu, Deming

    2015-05-04

    Unrepeatered transmission systems based on high-spectral-efficiency modulation formats and distributed Raman amplification (DRA) have attracted much attention recently. To enhance the reach and optimize system performance, careful design of the DRA is required, based on the analysis of various types of impairments and their balance. In this paper, we study various pump-RIN-induced distortions of high spectral efficiency modulation formats. The vector theory of both 1st and higher-order stimulated Raman scattering (SRS) effects using the Jones-matrix formalism is presented. The pump RIN will induce three types of distortion on high spectral efficiency signals: intensity noise stemming from SRS, phase noise stemming from cross phase modulation (XPM), and polarization crosstalk stemming from cross polarization modulation (XPolM). An analytical model for the statistical property of relative phase noise (RPN) in higher order DRA, without dealing with complex vector theory, is derived. The impact of pump RIN induced impairments is analyzed in polarization-multiplexed (PM)-QPSK and PM-16QAM-based unrepeatered system simulations using 1st-, 2nd- and 3rd-order forward-pumped Raman amplifiers. It is shown that at realistic RIN levels, negligible impairments will be induced to PM-QPSK signals in 1st and 2nd order DRA, while non-negligible impairments will occur in the 3rd-order case. PM-16QAM signals suffer more penalties than PM-QPSK at the same on-off gain, where both 2nd- and 3rd-order DRA cause non-negligible performance degradations. We also investigate the performance of digital signal processing (DSP) algorithms to mitigate such impairments.

  10. Energy-saving management modelling and optimization for lead-acid battery formation process

    NASA Astrophysics Data System (ADS)

    Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.

    2017-11-01

    In this paper, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model with the objective of minimizing the formation electricity cost in a single period is established. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation using the PSO algorithm to solve this mathematical model is shown, and the proposed optimization strategy proves effective and readily applicable for energy-saving and efficiency optimization in battery production industries.
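
    A minimal particle swarm optimization loop of the kind used here, scheduling charge energy against a time-of-use tariff; the tariff, demand, and penalty weight are toy assumptions, and the paper's IGBT-efficiency term is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
price = np.array([0.3, 0.3, 0.8, 0.8, 0.5, 0.5])    # toy time-of-use tariff

def cost(x):
    # electricity cost of a charging schedule x, with a quadratic penalty
    # enforcing the total energy demand (3 units over 6 periods)
    return price @ x + 10.0 * (x.sum() - 3.0) ** 2

n, dim = 30, 6
x = rng.uniform(0.0, 1.0, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.apply_along_axis(cost, 1, x)
g = pbest[pcost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 1.0)             # per-period power limits
    c = np.apply_along_axis(cost, 1, x)
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    g = pbest[pcost.argmin()].copy()

print("schedule:", g.round(2), "cost:", float(cost(g)))
```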

  11. Quantifying the efficiency and equity implications of power plant air pollution control strategies in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, J.I.; Wilson, A.M.; Zwack, L.M.

    2007-05-15

    We modeled the public health benefits and the change in the spatial inequality of health risk for a number of hypothetical control scenarios for power plants in the United States to determine optimal control strategies. We simulated various ways by which emission reductions of sulfur dioxide (SO2), nitrogen oxides, and fine particulate matter (PM2.5) could be distributed to reach national emissions caps. We applied a source-receptor matrix to determine the PM2.5 concentration changes associated with each control scenario and estimated the mortality reductions. We estimated changes in the spatial inequality of health risk using the Atkinson index and other indicators, following previously derived axioms for measuring health risk inequality. In our baseline model, benefits ranged from 17,000-21,000 fewer premature deaths per year across control scenarios. Scenarios with greater health benefits also tended to have greater reductions in the spatial inequality of health risk, as many sources with high health benefits per unit emissions of SO2 were in areas with high background PM2.5 concentrations. Sensitivity analyses indicated that conclusions were generally robust to the choice of indicator and other model specifications. Our analysis demonstrates an approach for formally quantifying both the magnitude and spatial distribution of health benefits of pollution control strategies, allowing for joint consideration of efficiency and equity.
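
    The Atkinson index over per-person risks x_i has the standard form below (inequality-aversion parameter \epsilon \ne 1, mean risk \bar{x}); the paper's axiomatically derived variant may differ in detail:

```latex
% Atkinson inequality index; larger A_\epsilon means greater spatial
% inequality of health risk across the N receptors.
A_{\epsilon} \;=\; 1 \;-\; \frac{1}{\bar{x}}
\left(\frac{1}{N}\sum_{i=1}^{N} x_i^{\,1-\epsilon}\right)^{\!1/(1-\epsilon)}
```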

  12. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE PAGES

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    2018-01-28

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
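
    Schematically, pricing both energy and ramping amounts to an objective of the following form; the symbols p_E, p_R and the quadratic ramping term are assumed notation for illustration, not the paper's exact functional:

```latex
% Schematic dispatch objective with simultaneous energy and ramping prices;
% P(t) is the dispatched power trajectory over the horizon [0, T].
\min_{P(\cdot)} \int_{0}^{T}
\Bigl[\, p_E(t)\,P(t) \;+\; p_R(t)\,\dot{P}(t)^{2} \Bigr]\, dt
```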

  13. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.

  14. Optimization of the multi-turn injection efficiency for a medical synchrotron

    NASA Astrophysics Data System (ADS)

    Kim, J.; Yoon, M.; Yim, H.

    2016-09-01

    We present a method for optimizing the multi-turn injection efficiency for a medical synchrotron. We show that for a given injection energy, the injection efficiency can be greatly enhanced by choosing transverse tunes appropriately and by optimizing the injection bump and the number of turns required for beam injection. We verify our study by applying the method to the Korea Heavy Ion Medical Accelerator (KHIMA) synchrotron, which is currently being built at the campus of Dongnam Institute of Radiological and Medical Sciences (DIRAMS) in Busan, Korea. First, frequency map analysis was performed with the help of the ELEGANT and the ACCSIM codes. The tunes that yielded good injection efficiency were then selected. With these tunes, the injection bump and the number of turns required for injection were then optimized by tracking a number of particles for up to one thousand turns after injection, beyond which no further beam loss occurred. Results for the optimization of the injection efficiency for protons are presented.

  15. Methodology for the optimal design of an integrated first and second generation ethanol production plant combined with power cogeneration.

    PubMed

    Bechara, Rami; Gomez, Adrien; Saint-Antonin, Valérie; Schweitzer, Jean-Marc; Maréchal, François

    2016-08-01

    The application of methodologies for the optimal design of integrated processes has seen increased interest in the literature. This article builds on previous works and applies a systematic methodology to an integrated first and second generation ethanol production plant with power cogeneration. The methodology breaks into process simulation, heat integration, thermo-economic evaluation, multi-variable evolutionary optimization of exergy efficiency versus capital costs, and process selection via profitability maximization. Optimization generated Pareto solutions with exergy efficiency ranging between 39.2% and 44.4% and capital costs from 210M$ to 390M$. The Net Present Value was positive for only two scenarios and for low efficiency, low hydrolysis points. The minimum cellulosic ethanol selling price was sought to obtain a maximum NPV of zero for high efficiency, high hydrolysis alternatives. The obtained optimal configuration presented maximum exergy efficiency, hydrolyzed bagasse fraction, capital costs and ethanol production rate, and minimum cooling water consumption and power production rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
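
    Selecting the non-dominated (efficiency, cost) designs is the step that produces such a Pareto front. A minimal sketch with made-up design points spanning the reported ranges:

```python
import numpy as np

def pareto_front(eff, cost):
    # keep designs not dominated by any design with efficiency at least as
    # high AND cost at least as low (strictly better in one of the two)
    keep = []
    for i in range(len(eff)):
        dominated = any(j != i and eff[j] >= eff[i] and cost[j] <= cost[i]
                        and (eff[j] > eff[i] or cost[j] < cost[i])
                        for j in range(len(eff)))
        if not dominated:
            keep.append(i)
    return keep

eff  = np.array([0.392, 0.410, 0.420, 0.444, 0.400])   # made-up design points
cost = np.array([210.0, 260.0, 250.0, 390.0, 300.0])   # M$
print(pareto_front(eff, cost))    # -> [0, 2, 3]
```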

  16. Access to Education over the Working Life in Sweden: Priorities, Institutions and Efficiency. OECD Education Working Papers, No. 62

    ERIC Educational Resources Information Center

    Stenberg, Anders

    2012-01-01

    To make it easier for individuals to adjust their skills to changes in market demands, Sweden has a relatively generous policy to stimulate formal adult education at the compulsory, upper secondary and tertiary levels. This paper provides an overview of what research has reported to assess if and/or how it may be an efficient use of taxpayers' money.…

  17. University-Affiliated Schools as Sites for Research Learning in Pre-Service Teacher Education

    ERIC Educational Resources Information Center

    Henning, Elizabeth; Petker, Gadija; Petersen, Nadine

    2015-01-01

    This article proposes that the "teaching/practice schools" formally affiliated to initial teacher education programmes at universities, can be utilised more optimally as research sites by student teachers. The argument is put forward with reference to the role that such schools have played historically in teacher education in the United…

  18. Mean-field theory of spin-glasses with finite coordination number

    NASA Technical Reports Server (NTRS)

    Kanter, I.; Sompolinsky, H.

    1987-01-01

    The mean-field theory of dilute spin-glasses is studied in the limit where the average coordination number is finite. The zero-temperature phase diagram is calculated and the relationship between the spin-glass phase and the percolation transition is discussed. The present formalism is applicable also to graph optimization problems.

  19. Audiovisual Resources in Formal and Informal Learning: Spanish and Mexican Students' Attitudes

    ERIC Educational Resources Information Center

    Fombona, Javier; Pascual, Maria Angeles

    2013-01-01

    This research analyses the evolution in the effectiveness of media messages and aims to optimize the use of ICTs in educational settings. The cultural impact of television and multimedia resources is increasing as they move to the Internet with ever greater quality. The integration of visual narrative techniques with multimedia playback…

  20. Tutorials in the Polytechnic University of the Philippines (PUP) Open University System

    ERIC Educational Resources Information Center

    Castolo, Carmencita L.

    2016-01-01

    The tutorial is one of the student support services often provided by open and distance teaching institutions. These are regularly scheduled meetings between a tutor and his/her students, which may include individual consultation sessions, either face-to-face or by telephone; a more formal "lecture format;" optimal participation in…

  1. Thermodynamics of the mesoscopic thermoelectric heat engine beyond the linear-response regime

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kaoru; Hatano, Naomichi

    2015-10-01

    The mesoscopic thermoelectric heat engine is much anticipated as a device that allows us to utilize, with high efficiency, wasted heat inaccessible to conventional heat engines. However, the derivation of the heat current in this engine has been either not general or described too briefly, even inappropriately in some cases. In this paper, we give a clear-cut derivation of the heat current of the engine with suitable assumptions beyond the linear-response regime. It resolves the confusion in the definition of the heat current in the linear-response regime. After verifying that we can construct the same formalism as that of the cyclic engine, we find the following two interesting results within the Landauer-Büttiker formalism: the efficiency of the mesoscopic thermoelectric engine reaches the Carnot efficiency if and only if the transmission probability is finite at a specific energy and zero otherwise; and the unitarity of the transmission probability guarantees the second law of thermodynamics, invalidating Benenti et al.'s argument in the linear-response regime that one could obtain finite power at the Carnot efficiency under broken time-reversal symmetry [Phys. Rev. Lett. 106, 230602 (2011), 10.1103/PhysRevLett.106.230602]. These results demonstrate how quantum mechanics constrains thermodynamics.
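
    In the Landauer-Büttiker formalism the charge and heat currents through a transmission function \tau(E) take the standard spin-degenerate forms below (f_{L,R} are the reservoir Fermi functions); the Carnot condition corresponds to \tau(E) nonzero only at the single energy E* where (E* - \mu_L)/T_L = (E* - \mu_R)/T_R:

```latex
% Landauer-Buttiker charge current I and heat current J_L from the left lead.
I \;=\; \frac{2e}{h}\int dE\,\tau(E)\,\bigl[f_L(E)-f_R(E)\bigr], \qquad
J_L \;=\; \frac{2}{h}\int dE\,(E-\mu_L)\,\tau(E)\,\bigl[f_L(E)-f_R(E)\bigr]
```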

  2. Towards an accurate representation of electrostatics in classical force fields: Efficient implementation of multipolar interactions in biomolecular simulations

    NASA Astrophysics Data System (ADS)

    Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.

    2004-01-01

    The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP), obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. It is hoped that this new formalism will facilitate the systematic implementation of higher order multipoles in classical biomolecular force fields.
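
    For orientation, the charge-charge Ewald split that the multipolar direct and reciprocal sums generalize, written in the convention of the smooth-PME literature (\beta is the splitting parameter, S(\mathbf{m}) the structure factor); the paper extends each term to higher multipoles via McMurchie-Davidson recursions:

```latex
E \;=\; \underbrace{\tfrac{1}{2}\sum_{i\neq j} q_i q_j\,
\frac{\operatorname{erfc}(\beta r_{ij})}{r_{ij}}}_{\text{direct sum}}
\;+\; \underbrace{\frac{1}{2\pi V}\sum_{\mathbf{m}\neq 0}
\frac{e^{-\pi^{2} m^{2}/\beta^{2}}}{m^{2}}\,
\lvert S(\mathbf{m})\rvert^{2}}_{\text{reciprocal sum}}
\;-\; \frac{\beta}{\sqrt{\pi}}\sum_i q_i^{2}
```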

  3. Pulmonary and Critical Care Medicine Program Directors' Attitudes toward Training in Medical Education. A Nationwide Survey Study.

    PubMed

    Richards, Jeremy B; McCallister, Jennifer W; Lenz, Peter H

    2016-04-01

    Many pulmonary and critical care medicine (PCCM) fellows are interested in improving their teaching skills as well as learning about careers as clinician educators. Educational opportunities in PCCM fellowship programs designed to address these interests have not been well characterized in U.S. training programs. We aimed to characterize educational content and structure for training fellows to teach in PCCM fellowship programs. We evaluated three major domains: (1) existing educational opportunities, (2) PCCM program directors' attitudes toward the importance of teaching fellows how to teach, and (3) potential components of an optimal teaching skills curriculum for PCCM fellows. We surveyed program and associate program directors who were members of the Association of Pulmonary and Critical Care Medicine Program Directors in 2014. Survey domains included existing teaching skills content and structure, presence of a formal medical education curriculum or clinician educator track, perceived barriers to teaching fellows teaching skills, and open-ended qualitative inquiries about the ideal curricula. Data were analyzed both quantitatively and qualitatively. Of 158 invited Association of Pulmonary and Critical Care Medicine Program Directors members, 85 program directors and associate directors responded (53.8% response rate). Annual curricular time dedicated to teaching skills varied widely (median, 3 h; mean, 5.4 h; interquartile range, 2.0-6.3 h), with 17 respondents (20%) allotting no time to teaching fellows to teach and 14 respondents (17%) dedicating more than 10 hours. Survey participants stated that the optimal duration for training fellows in teaching skills was significantly less than what they reported was actually occurring (median optimal duration, 1.5 h/yr; mean, 2.1 h/yr; interquartile range, 1.5-3.5 h/yr; P < 0.001). Only 28 (33.7%) had a formal curriculum for teaching medical education skills. Qualitative analyses identified several barriers to implementing formal teaching skills curricula, including "time," "financial resources," "competing priorities," and "lack of expert faculty." While prior work has demonstrated that fellows are interested in obtaining medical education skills, PCCM program directors and associate directors noted significant challenges to implementing formal educational opportunities to teach fellows these skills. Effective strategies are needed to design, implement, sustain, and assess teaching skills curricula for PCCM fellowships.

  4. Non-Formal education in astronomy: The experience of the University of Carabobo

    NASA Astrophysics Data System (ADS)

    Falcón, Nelson

    2011-06-01

    Since 1995, the University of Carabobo, in Venezuela, has been developing a program of astronomical popularization and astronomy learning using non-formal education methods. A synopsis of the activities is presented. We also discuss some conceptual aspects of knowledge outreach as a complement to research and university teaching. We illustrate the characteristics of the communication with examples of lectures and printed material. The efficiency of the heuristic arguments could be evaluated through an ethnographic study. In that vein, we show some images of the activities of astronomical popularization, which drew large audiences of considerable chronological (and cultural) heterogeneity. We conclude that non-formal education, structured with characteristics different from those of the usual educational instruction, constitutes a successful strategy for the diffusion and communication of astronomy.

  5. On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound

    NASA Astrophysics Data System (ADS)

    Li, Ruihu; Li, Xueliang; Guo, Luobin

    2015-12-01

    The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show the necessary condition for existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c
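
    For reference, the classical q-ary Griesmer bound for an [n, k, d] linear code, of which the EA-Griesmer bound derived in the paper is the entanglement-assisted quantum analog:

```latex
n \;\ge\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{q^{i}} \right\rceil
```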

  6. Physically motivated correlation formalism in hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Roy, Ankita; Rafert, J. Bruce

    2004-05-01

    Most remote sensing data-sets contain a limited number of independent spatial and spectral measurements, beyond which no effective increase in information is achieved. This paper presents a Physically Motivated Correlation Formalism (PMCF), which places both spatial and spectral data on an equivalent mathematical footing in the context of a specific kernel, such that optimal combinations of independent data can be selected from the entire hypercube via the method of "Correlation Moments". We present an experimental and computational analysis of hyperspectral data sets using the Michigan Tech VFTHSI [Visible Fourier Transform Hyperspectral Imager] based on a Sagnac interferometer, adjusted to obtain high SNR levels. The captured signal interferograms of different targets - aerial snaps of Houghton and lab-based data (white light, He-Ne laser, discharge tube sources), with provision for customized scans of targets at the same exposures - are processed using inverse imaging transformations and filtering techniques to obtain the spectral profiles and generate hypercubes to compute Spectral/Spatial/Cross Moments. PMCF answers the question of how optimally the entire hypercube should be sampled and determines how many spatial-spectral pixels are required for a particular target-recognition task.

  7. Rural planning organizations--their role in transportation planning and project development in Texas : technical report.

    DOT National Transportation Integrated Search

    2010-10-01

    While a formal planning and programming process is established for urbanized areas through Metropolitan Planning Organizations, no similar requirement has been established for rural areas. Currently, under the Safe, Accountable, Flexible, Efficie...

  8. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
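
    The level-set update at each iteration advects the facies boundary (the zero set of \phi) with a normal velocity V built from the shape derivative of the data misfit; in the usual notation (assumed here, with \tau an artificial evolution time):

```latex
\frac{\partial \phi}{\partial \tau} \;+\; V\,\lvert \nabla \phi \rvert \;=\; 0,
\qquad \partial\Omega \;=\; \{\, x : \phi(x,\tau) = 0 \,\}
```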

  9. An alternative animal protein source: cultured beef.

    PubMed

    Post, Mark J

    2014-11-01

    Alternative sources of animal proteins are needed that can be produced efficiently, thereby providing food security with diminished ecological burden. It is feasible to culture beef from bovine skeletal muscle stem cells, but the technology is still under development. The aim is to create a beef mimic with equivalent taste, texture, and appearance and with the same nutritional value as livestock-produced beef. More specifically, there is a need for optimization of protein content and fat content. In addition, scalability of production requires modification of current small-scale bioreactors to the largest possible scale. The necessary steps and current progress suggest that this aim is achievable, but formal evidence is still required. Similarly, we can be optimistic about consumer acceptance based on initial data, but detailed studies are needed to gain more insight into potential psychological obstacles that could lead to rejection. These challenges are formidable but likely surmountable. The severity of upcoming food-security threats warrants serious research and development efforts to address the challenges that come with bringing cultured beef to the market. © 2014 New York Academy of Sciences.

  10. Using attractiveness model for actors ranking in social media networks.

    PubMed

    Qasem, Ziyaad; Jansen, Marc; Hecking, Tobias; Hoppe, H Ulrich

    2017-01-01

    The detection of influential actors in social media such as Twitter or Facebook can play a major role in gathering opinions on particular topics, improving marketing efficiency, predicting trends, etc. This work extends our formally defined T measure into a new measure that recognizes an actor's influence by the strength with which it attracts new important actors into a networked community. We therefore propose a model of the actor's influence based on the attractiveness of the actor in relation to the number of other attractors with whom he/she has established connections over time. Using an empirically collected social network for the underlying graph, we have applied the above-mentioned measure of influence to determine optimal seeds in a simulation of influence maximization. We study our extended measure in the context of information diffusion because the measure is based on a model of actors who attract others to become active members in a community. This corresponds to the idea of the independent cascade (IC) simulation model, which is used to identify the most important spreaders in a set of actors.
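
    A minimal Monte Carlo sketch of the independent cascade (IC) model used to score seed sets in such influence-maximization simulations; the toy graph and the uniform activation probability are illustrative assumptions, not the paper's empirical network:

```python
import random

def independent_cascade(graph, seeds, p=0.1, trials=1000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    # each newly active node gets one chance to activate v
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials            # expected spread of the seed set

toy_graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4]}
print(independent_cascade(toy_graph, seeds=[0]))
```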

  11. Nonlocal response with local optics

    NASA Astrophysics Data System (ADS)

    Kong, Jiantao; Shvonski, Alexander J.; Kempa, Krzysztof

    2018-04-01

    For plasmonic systems too small for classical, local simulations to be valid, but too large for ab initio calculations to be computationally feasible, we developed a practical approach—a nonlocal-to-local mapping that enables the use of a modified local system to obtain the response due to nonlocal effects to lowest order, at the cost of higher structural complexity. In this approach, the nonlocal surface region of a metallic structure is mapped onto a local dielectric film, mathematically preserving the nonlocality of the entire system. The most significant feature of this approach is its full compatibility with conventional, highly efficient finite difference time domain (FDTD) simulation codes. Our optimized choice of mapping is based on the Feibelman's d -function formalism, and it produces an effective dielectric function of the local film that obeys all required sum rules, as well as the Kramers-Kronig causality relations. We demonstrate the power of our approach combined with an FDTD scheme, in a series of comparisons with experiments and ab initio density functional theory calculations from the literature, for structures with dimensions from the subnanoscopic to microscopic range.
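
    The mapping is anchored in Feibelman's d-parameter, the centroid of the induced surface charge density \delta\rho(z,\omega), which the effective local film is constructed to reproduce:

```latex
d_{\perp}(\omega) \;=\;
\frac{\int dz \; z\,\delta\rho(z,\omega)}{\int dz \; \delta\rho(z,\omega)}
```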

  12. Realistic continuous-variable quantum teleportation with non-Gaussian resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, F.; De Siena, S.; CNR-INFM Coherentia, Napoli, Italy, and CNISM and INFN Sezione di Napoli, Gruppo Collegato di Salerno, Baronissi, SA

    2010-01-15

    We present a comprehensive investigation of nonideal continuous-variable quantum teleportation implemented with entangled non-Gaussian resources. We discuss in a unified framework the main decoherence mechanisms, including imperfect Bell measurements and propagation of optical fields in lossy fibers, applying the formalism of the characteristic function. By exploiting appropriate displacement strategies, we compute analytically the success probability of teleportation for input coherent states and two classes of non-Gaussian entangled resources: two-mode squeezed Bell-like states (that include as particular cases photon-added and photon-subtracted de-Gaussified states), and two-mode squeezed catlike states. We discuss the optimization procedure on the free parameters of the non-Gaussian resources at fixed values of the squeezing and of the experimental quantities determining the inefficiencies of the nonideal protocol. It is found that non-Gaussian resources enhance significantly the efficiency of teleportation and are more robust against decoherence than the corresponding Gaussian ones. Partial information on the alphabet of input states allows further significant improvement in the performance of the nonideal teleportation protocol.

  13. Efficient Ada multitasking on a RISC register window architecture

    NASA Technical Reports Server (NTRS)

    Kearns, J. P.; Quammen, D.

    1987-01-01

    This work addresses the problem of reducing context switch overhead on a processor which supports a large register file - a register file much like that which is part of the Berkeley RISC processors and several other emerging architectures (which are not necessarily reduced instruction set machines in the purest sense). Such a reduction in overhead is particularly desirable in a real-time embedded application, in which task-to-task context switch overhead may result in failure to meet crucial deadlines. A storage management technique by which a context switch may be implemented as cheaply as a procedure call is presented. The essence of this technique is the avoidance of the save/restore of registers on the context switch. This is achieved through analysis of the static source text of an Ada tasking program. Information gained during that analysis directs the optimized storage management strategy for that program at run time. A formal verification of the technique in terms of an operational control model and an evaluation of the technique's performance via simulations driven by synthetic Ada program traces are presented.

  14. Stanford Hardware Development Program

    NASA Technical Reports Server (NTRS)

    Peterson, A.; Linscott, I.; Burr, J.

    1986-01-01

    Architectures for high performance digital signal processing, particularly for high resolution, wide band spectrum analysis, were developed. These developments are intended to provide instrumentation for NASA's Search for Extraterrestrial Intelligence (SETI) program. The real-time signal processing work is both formal and experimental. The efficient organization and optimal scheduling of signal processing algorithms were investigated. The work is complemented by efforts in processor architecture design and implementation. A high resolution, multichannel spectrometer that incorporates special purpose microcoded signal processors is being tested. A general purpose signal processor for the data from the multichannel spectrometer was designed to function as the processing element in a highly concurrent machine. The processor performance required for the spectrometer is in the range of 1000 to 10,000 million instructions per second (MIPS). Multiple node processor configurations, where each node performs at 100 MIPS, are sought. The nodes are microprogrammable and are interconnected through a network with high bandwidth for neighboring nodes and medium bandwidth for nodes at larger distances. The implementation of both the current multichannel spectrometer and the signal processor as Very Large Scale Integration CMOS chip sets has commenced.

  15. Analysis of satellite multibeam antennas’ performances

    NASA Astrophysics Data System (ADS)

    Sterbini, Guido

    2006-07-01

    In this work, we discuss the application of the frequency-reuse concept in satellite communications, stressing the importance of a design-oriented mathematical model as a first step in dimensioning antenna systems. We consider multibeam reflector antennas. The first part of the work consists of reorganizing, unifying, and completing the models already developed in the scientific literature; in doing so, we adopt the multidimensional Taylor expansion formalism. For computing the spillover efficiency of the antenna, we consider different feed illuminations and propose a completely original mathematical model, obtained by interpolation of simulator results. The second part of the work is dedicated to characterizing the secondary far-field pattern. By combining this model with information on the cellular coverage geometry, it is possible to evaluate the isolation and the minimum directivity over the cell. In the third part, in order to test the model and its analysis and synthesis capabilities, we implement a software tool that helps the designer rapidly tune the fundamental quantities for performance optimization: the proposed model shows excellent agreement with the results of the simulations.

  16. A Criteria Standard for Conflict Resolution: A Vision for Guaranteeing the Safety of Self-Separation in NextGen

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Butler, Ricky; Narkawicz, Anthony; Maddalon, Jeffrey; Hagen, George

    2010-01-01

    Distributed approaches for conflict resolution rely on analyzing the behavior of each aircraft to ensure that system-wide safety properties are maintained. This paper presents the criteria method, which increases the quality and efficiency of a safety assurance analysis for distributed air traffic concepts. The criteria standard is shown to provide two key safety properties: safe separation when only one aircraft maneuvers and safe separation when both aircraft maneuver at the same time. This approach is complemented with strong guarantees of correct operation through formal verification. To show that an algorithm is correct, i.e., that it always meets its specified safety property, one must only show that the algorithm satisfies the criteria. Once this is done, then the algorithm inherits the safety properties of the criteria. An important consequence of this approach is that there is no requirement that both aircraft execute the same conflict resolution algorithm. Therefore, the criteria approach allows different avionics manufacturers or even different airlines to use different algorithms, each optimized according to their own proprietary concerns.

  17. The fusion of biology, computer science, and engineering: towards efficient and successful synthetic biology.

    PubMed

    Linshiz, Gregory; Goldberg, Alex; Konry, Tania; Hillson, Nathan J

    2012-01-01

    Synthetic biology is a nascent field that emerged in earnest only around the turn of the millennium. It aims to engineer new biological systems and impart new biological functionality, often through genetic modifications. The design and construction of new biological systems is a complex, multistep process, requiring multidisciplinary collaborative efforts from "fusion" scientists who have formal training in computer science or engineering, as well as hands-on biological expertise. The public has high expectations for synthetic biology and eagerly anticipates the development of solutions to the major challenges facing humanity. This article discusses laboratory practices and the conduct of research in synthetic biology. It argues that the fusion science approach, which integrates biology with computer science and engineering best practices, including standardization, process optimization, computer-aided design and laboratory automation, miniaturization, and systematic management, will increase the predictability and reproducibility of experiments and lead to breakthroughs in the construction of new biological systems. The article also discusses several successful fusion projects, including the development of software tools for DNA construction design automation, recursive DNA construction, and the development of integrated microfluidics systems.

  18. Optimization on the impeller of a low-specific-speed centrifugal pump for hydraulic performance improvement

    NASA Astrophysics Data System (ADS)

    Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng

    2016-09-01

    In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0 Q d and 1.4 Q d is proposed. Three parameters, namely, the blade outlet width b 2, blade outlet angle β 2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0 Q d and 1.4 Q d, respectively. A comparison of the inner flow between the original and optimized pumps illustrates the improvement in performance. The optimization process can provide a useful reference for improving the performance of other pumps, and even for reducing pressure fluctuations.
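
    The workflow described above (space-filling sampling, a fitted surrogate, then particle swarm search on the surrogate) can be sketched compactly. The following Python fragment is a minimal illustration, not the authors' code: the variable bounds and the synthetic stand-in for the CFX efficiency evaluation are invented for the example.

      import numpy as np

      rng = np.random.default_rng(0)

      # Design space for three illustrative variables (b2, beta2, phi);
      # the bounds are hypothetical, not taken from the paper.
      lo = np.array([8.0, 20.0, 90.0])
      hi = np.array([14.0, 35.0, 130.0])

      def sample_lhs(n, d):
          """Basic Latin hypercube sample in [0, 1]^d."""
          cut = (np.arange(n)[:, None] + rng.random((n, d))) / n
          for j in range(d):
              rng.shuffle(cut[:, j])
          return cut

      def cfd_efficiency(x):
          """Stand-in for the CFD evaluation: a smooth synthetic response."""
          z = (x - np.array([11.0, 27.0, 110.0])) / (hi - lo)
          return 0.85 - np.sum(z ** 2, axis=-1)

      X = lo + sample_lhs(30, 3) * (hi - lo)
      y = cfd_efficiency(X)

      # Quadratic response-surface surrogate fitted by least squares.
      def features(X):
          return np.column_stack([np.ones(len(X)), X, X ** 2,
                                  X[:, [0]] * X[:, [1]],
                                  X[:, [0]] * X[:, [2]],
                                  X[:, [1]] * X[:, [2]]])

      coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

      def surrogate(P):
          return features(P) @ coef

      # Minimal particle swarm maximizing the surrogate.
      P = lo + rng.random((40, 3)) * (hi - lo)
      V = np.zeros_like(P)
      pbest, pval = P.copy(), surrogate(P)
      for _ in range(200):
          g = pbest[np.argmax(pval)]
          V = (0.7 * V + 1.5 * rng.random(P.shape) * (pbest - P)
                       + 1.5 * rng.random(P.shape) * (g - P))
          P = np.clip(P + V, lo, hi)
          f = surrogate(P)
          better = f > pval
          pbest[better], pval[better] = P[better], f[better]

      print("surrogate-optimal impeller parameters:", pbest[np.argmax(pval)])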

  19. Displacement Based Multilevel Structural Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Striz, A. G.

    1996-01-01

    In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.

  20. Optimization of Dish Solar Collectors with and without Secondary Concentrators

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1982-01-01

    Methods for optimizing parabolic dish solar collectors and the consequent effects of various optical, thermal, mechanical, and cost variables are examined. The most important performance optimization is adjusting the receiver aperture to maximize collector efficiency. Other parameters that can be adjusted to optimize efficiency include focal length, and, if a heat engine is used, the receiver temperature. The efficiency maxima associated with focal length and receiver temperature are relatively broad; it may, accordingly, be desirable to design somewhat away from the maxima. Performance optimization is sensitive to the slope and specularity errors of the concentrator. Other optical and thermal variables affecting optimization are the reflectance and blocking factor of the concentrator, the absorptance and losses of the receiver, and, if a heat engine is used, the shape of the engine efficiency versus temperature curve. Performance may sometimes be improved by use of an additional optical element (a secondary concentrator) or a receiver window if the errors of the primary concentrator are large or the receiver temperature is high.

  1. Rhodium-Catalyzed Asymmetric N-H Functionalization of Quinazolinones with Allenes and Allylic Carbonates: The First Enantioselective Formal Total Synthesis of (-)-Chaetominine.

    PubMed

    Zhou, Yirong; Breit, Bernhard

    2017-12-22

    An unprecedented asymmetric N-H functionalization of quinazolinones with allenes and allylic carbonates was successfully achieved by rhodium catalysis with the assistance of chiral bidentate diphosphine ligands. The high efficiency and practicality of this method was demonstrated by a low catalyst loading of 1 mol % as well as excellent chemo-, regio-, and enantioselectivities with broad functional group compatibility. Furthermore, this newly developed strategy was applied as a key step in the first enantioselective formal total synthesis of (-)-chaetominine. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Diastereoselective Pyrrolidine Synthesis via Copper Promoted Intramolecular Aminooxygenation of Alkenes; Formal Synthesis of (+)-Monomorine

    PubMed Central

    Paderes, Monissa C; Chemler, Sherry R

    2009-01-01

    The diastereoselectivity of the copper-promoted intramolecular aminooxygenation of various alkene substrates was investigated. α-Substituted 4-pentenyl sulfonamides favor the formation of 2,5-cis-pyrrolidines (dr >20:1) giving excellent yields which range from 76–97% while γ-substituted substrates favor the 2,3-trans pyrrolidine adducts with moderate selectivity (ca. 3:1). A substrate whose N-substituent was directly tethered to the α-carbon exclusively yielded the 2,5-trans pyrrolidine. The synthetic utility of the method was demonstrated by a short and efficient formal synthesis of (+)-monomorine. PMID:19331361

  3. GPU-completeness: theory and implications

    NASA Astrophysics Data System (ADS)

    Lin, I.-Jong

    2011-01-01

    This paper formalizes a major insight into a class of algorithms that relates parallelism and performance. The purpose of this paper is to define a class of algorithms that trades off parallelism for quality of result (e.g., visual quality, compression rate), and we propose a method for algorithmic classification, based on NP-Completeness techniques, applied toward parallel acceleration. We define this class of algorithms as "GPU-Complete" and postulate the properties necessary for admission into this class. We also formally relate this algorithmic space to the space of imaging algorithms. The concept is based upon our experience in the print production area, where GPUs (Graphics Processing Units) have shown a substantial cost/performance advantage within the context of HP-delivered enterprise services and commercial printing infrastructure. While CPUs and GPUs are converging in their underlying hardware and functional blocks, their system behaviors are clearly distinct in many ways: memory system design, programming paradigms, and massively parallel SIMD architecture. There are applications that are clearly suited to each architecture: for the CPU, language compilation, word processing, operating systems, and other applications that are highly sequential in nature; for the GPU, video rendering, particle simulation, pixel color conversion, and other problems clearly amenable to massive parallelization. While GPUs are establishing themselves as a second, distinct computing architecture from CPUs, their end-to-end system cost/performance advantage in certain parts of computation informs the structure of algorithms and their efficient parallel implementations. While GPUs are merely one type of architecture for parallelization, we show that their introduction into the design space of printing systems demonstrates the trade-offs against competing multi-core, FPGA, and ASIC architectures. While each architecture has its own optimal application, we believe that the selection of architecture can be defined in terms of properties of GPU-Completeness. For a well-defined subset of algorithms, GPU-Completeness is intended to connect parallelism, algorithms, and efficient architectures into a unified framework, showing that multiple layers of parallel implementation are guided by the same underlying trade-off.

  4. The Effective-One-Body Approach to the General Relativistic Two Body Problem

    NASA Astrophysics Data System (ADS)

    Damour, Thibault; Nagar, Alessandro

    The two-body problem in General Relativity has been the subject of many analytical investigations. After reviewing some of the methods used to tackle this problem (and, more generally, the N-body problem), we focus on a new, recently introduced approach to the motion and radiation of (comparable mass) binary systems: the Effective One Body (EOB) formalism. We review the basic elements of this formalism, and discuss some of its recent developments. Several recent comparisons between EOB predictions and Numerical Relativity (NR) simulations have shown the aptitude of the EOB formalism to provide accurate descriptions of the dynamics and radiation of various binary systems (comprising black holes or neutron stars) in regimes that are inaccessible to other analytical approaches (such as the last orbits and the merger of comparable mass black holes). In synergy with NR simulations, post-Newtonian (PN) theory and Gravitational Self-Force (GSF) computations, the EOB formalism is likely to provide an efficient way of computing the very many accurate template waveforms that are needed for Gravitational Wave (GW) data analysis purposes.
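
    The formalism's central map, standard in the EOB literature though not spelled out in this summary, resums the dynamics by relating the real two-body Hamiltonian to the Hamiltonian H_eff of an effective test particle of reduced mass μ = m1 m2 / M moving in a deformed Schwarzschild-like metric (units with c = 1):

      H_{\mathrm{EOB}} = M \sqrt{1 + 2\nu \left( \frac{H_{\mathrm{eff}}}{\mu} - 1 \right)}, \qquad \nu = \frac{\mu}{M}, \qquad M = m_1 + m_2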

  5. Application of Electron-Beam Controlled Diffuse Discharges to Fast Switching

    DTIC Science & Technology

    1983-06-01

    A formalism in which the pressure, switch area, and length are estimated self-consistently for a given system efficiency is reviewed. The formalism is used to design a single-pulse, 200 kV, 30 kA (6 Ω), 100 ns FWHM inductive storage generator.

  6. A Method to Determine Supply Voltage of Permanent Magnet Motor at Optimal Design Stage

    NASA Astrophysics Data System (ADS)

    Matsutomo, Shinya; Noguchi, So; Yamashita, Hideo; Tanimoto, Shigeya

    Permanent magnet motors (PM motors) are widely used in electrical machinery such as air conditioners and refrigerators. In recent years, from the point of view of energy saving, it has become necessary to improve the efficiency of PM motors through optimization. However, efficiency optimization of a PM motor involves many design variables and many constraints. In this paper, the efficiency optimization of a PM motor with many design variables is performed using voltage-driven finite element analysis with a rotating simulation of the motor, combined with a genetic algorithm.

  7. The effect of dropout on the efficiency of D-optimal designs of linear mixed models.

    PubMed

    Ortega-Azurduy, S A; Tan, F E S; Berger, M P F

    2008-06-30

    Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
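
    A minimal numeric illustration of the central quantity here, the D-criterion of a longitudinal design under monotone dropout, is sketched below in Python. The random-effects covariance, residual variance, and exponential retention curve are assumed values for the example, not taken from the paper.

      import numpy as np

      times = np.array([0.0, 1.0, 2.0, 3.0])             # design time points
      X = np.column_stack([np.ones_like(times), times])  # fixed effects: intercept, slope
      Z = X.copy()                                       # random intercept and slope
      G = np.diag([1.0, 0.1])                            # random-effects covariance (assumed)
      sigma2 = 1.0                                       # residual variance (assumed)

      def info_first_k(k):
          """Fisher information from a subject observed at the first k time points."""
          Xk, Zk = X[:k], Z[:k]
          V = Zk @ G @ Zk.T + sigma2 * np.eye(k)
          return Xk.T @ np.linalg.solve(V, Xk)

      # Monotone dropout: retention probability decreases with time (illustrative).
      retain = np.exp(-0.3 * times)
      # Probability that a subject's last observation is time point k:
      p_last = np.append(retain[:-1] - retain[1:], retain[-1])

      M = sum(p * info_first_k(k + 1) for k, p in enumerate(p_last))
      M_complete = info_first_k(len(times))

      p = X.shape[1]
      d_eff = (np.linalg.det(M) / np.linalg.det(M_complete)) ** (1.0 / p)
      print(f"relative D-efficiency under dropout: {d_eff:.3f}")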

  8. An Adaptive Fuzzy-Logic Traffic Control System in Conditions of Saturated Transport Stream

    PubMed Central

    Marakhimov, A. R.; Igamberdiev, H. Z.; Umarov, Sh. X.

    2016-01-01

    This paper considers the problem of building adaptive fuzzy-logic traffic control systems (AFLTCS) to deal with information fuzziness and uncertainty in case of heavy traffic streams. Methods of formal description of traffic control on the crossroads based on fuzzy sets and fuzzy logic are proposed. This paper also provides efficient algorithms for implementing AFLTCS and develops the appropriate simulation models to test the efficiency of suggested approach. PMID:27517081

  9. Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model

    NASA Astrophysics Data System (ADS)

    Wiezel, Oren; Or, Yizhar

    2016-11-01

    Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.
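
    For readers unfamiliar with the machinery invoked here, the Pontryagin maximum principle setup reads, stated generically (the swimmer-specific dynamics are in the paper and are not reproduced here):

      \dot{x} = f(x,u), \qquad J[u] = \int_0^T L(x,u)\,\mathrm{d}t,
      H(x,\lambda,u) = \lambda^{\mathsf{T}} f(x,u) - L(x,u), \qquad \dot{\lambda} = -\,\partial H / \partial x, \qquad u^*(t) = \arg\max_u H(x,\lambda,u),

    and a singular arc arises wherever ∂H/∂u vanishes identically over a time interval, so that the maximization alone does not determine u and higher-order conditions must be used.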

  10. Carbon and nutrient use efficiencies optimally balance stoichiometric imbalances

    NASA Astrophysics Data System (ADS)

    Manzoni, Stefano; Čapek, Petr; Lindahl, Björn; Mooshammer, Maria; Richter, Andreas; Šantrůčková, Hana

    2016-04-01

    Decomposer organisms face large stoichiometric imbalances because their food is generally poor in nutrients compared to the decomposer cellular composition. The presence of excess carbon (C) requires adaptations to utilize nutrients effectively while disposing of or investing excess C. As food composition changes, these adaptations lead to variable C- and nutrient-use efficiencies (defined as the ratios of C and nutrients used for growth over the amounts consumed). For organisms to be ecologically competitive, these changes in efficiencies with resource stoichiometry have to balance advantages and disadvantages in an optimal way. We hypothesize that efficiencies are varied so that community growth rate is optimized along stoichiometric gradients of their resources. Building from previous theories, we predict that maximum growth is achieved when C and nutrients are co-limiting, so that the maximum C-use efficiency is reached, and nutrient release is minimized. This optimality principle is expected to be applicable across terrestrial-aquatic borders, to various elements, and at different trophic levels. While the growth rate maximization hypothesis has been evaluated for consumers and predators, in this contribution we test it for terrestrial and aquatic decomposers degrading resources across wide stoichiometry gradients. The optimality hypothesis predicts constant efficiencies at low substrate C:N and C:P, whereas above a stoichiometric threshold, C-use efficiency declines and nitrogen- and phosphorus-use efficiencies increase up to one. Thus, high resource C:N and C:P lead to low C-use efficiency, but effective retention of nitrogen and phosphorus. Predictions are broadly consistent with efficiency trends in decomposer communities across terrestrial and aquatic ecosystems.

  11. FUZZY-LOGIC-BASED CONTROLLERS FOR EFFICIENCY OPTIMIZATION OF INVERTER-FED INDUCTION MOTOR DRIVES

    EPA Science Inventory

    This paper describes a fuzzy-logic-based energy optimizing controller to improve the efficiency of induction motor/drives operating at various load (torque) and speed conditions. Improvement of induction motor efficiency is important not only from the considerations of energy savings...

  12. Publications | Grid Modernization | NREL

    Science.gov Websites

    Publications include Photovoltaics: Trajectories and Challenges and Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow.

  13. Applications of a formal approach to decipher discrete genetic networks.

    PubMed

    Corblin, Fabien; Fanchon, Eric; Trilling, Laurent

    2010-07-20

    A growing demand exists in systems biology for tools to assist in the building and analysis of biological networks. We argue that a formal approach is relevant and applicable for addressing questions raised by biologists about such networks. Because the behaviour of these systems is complex, it is essential to exploit efficiently every bit of experimental information. In our approach, both the evolution rules and the partial knowledge about the structure and behaviour of the network are formalized using a common constraint-based language. In this article our formal and declarative approach is applied to three biological applications. The software environment that we developed allows each application to be specifically addressed through a new class of biologically relevant queries. We show that we can describe easily, and in a formal manner, the partial knowledge about a genetic network. Moreover, we show that this environment, based on a constraint algorithmic approach, offers a wide variety of functionalities going beyond simple simulations, such as proof of consistency, model revision, prediction of properties, and search for minimal models relative to specified criteria. The formal approach proposed here deeply changes the way genetic and biochemical networks are explored, first by avoiding the usual trial-and-error procedure, and second by placing the emphasis on sets of solutions rather than a single solution arbitrarily chosen among many others. Lastly, the constraint approach promotes an integration of models and experimental data in a single framework.

  14. Optimizing Conditions of Teachers' Professional Practice to Support Students with Special Educational Needs. Teacher Voice

    ERIC Educational Resources Information Center

    Froese-Germain, Bernie; McGahey, Bob

    2012-01-01

    Across the country, teachers are working to provide individualized instruction to the students in their classes. Teachers use their professional judgement to modify teaching to suit the learning needs of students. Occasionally, this modification is required as a result of students being formally identified as having a learning exceptionality. As…

  15. A logical approach to optimize the nanostructured lipid carrier system of irinotecan: efficient hybrid design methodology

    NASA Astrophysics Data System (ADS)

    Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama

    2013-01-01

    Development of an effective formulation involves careful optimization of a number of excipient and process variables. Sometimes the number of variables is so large that even the most efficient optimization designs require a very large number of trials, which puts stress on both cost and time. A creative combination of several design methods leads to a smaller number of trials. This study was aimed at the development of nanostructured lipid carriers (NLCs) using a combination of different optimization methods. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics such as size and entrapment efficiency. Four of the 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Of the remaining seven variables, four (concentration of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles, while the other three (phase ratio, drug:lipid ratio, and sonication time) had a greater influence on the entrapment efficiency. The first four variables were optimized for their effect on size using the Taguchi L9 orthogonal array. The optimized values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug:lipid ratio were varied using the Box-Behnken design response surface method to optimize the entrapment efficiency. Finally, by performing only 38 trials, we optimized 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.
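
    The first screening stage is easy to make concrete. The Python sketch below constructs the standard 12-run Plackett-Burman design for 11 two-level factors from its cyclic generator and estimates main effects by contrasts; the response values are simulated (four active factors are planted), since the actual formulation data are not reproduced in this record.

      import numpy as np

      # Standard 12-run Plackett-Burman generator for up to 11 two-level factors.
      gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
      rows = [np.roll(gen, k) for k in range(11)]
      design = np.vstack(rows + [-np.ones(11, dtype=int)])   # 12 runs x 11 factors

      # Hypothetical responses (e.g., measured particle size for each run).
      rng = np.random.default_rng(1)
      true_effects = np.zeros(11)
      true_effects[[0, 2, 5, 9]] = [8.0, -5.0, 3.0, 6.0]     # only 4 active factors
      y = 150.0 + design @ (true_effects / 2) + rng.normal(0, 1.0, 12)

      # Main-effect estimates: mean response at +1 minus mean response at -1.
      effects = 2.0 * design.T @ y / 12.0
      for j, e in enumerate(effects, start=1):
          print(f"factor {j:2d}: estimated main effect {e:+.2f}")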

  16. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems, together with the presence of constraints on the optimization problems, imposes a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by solving two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem, the objective was to efficiently compute the time-varying optimal concentration of chemoattractant on one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
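
    A control vector parameterization (CVP) reduces the infinite-dimensional control to a finite vector by holding the input constant on each of a few time segments. The toy Python sketch below applies the idea to a scalar linear system rather than the paper's PDE models; the dynamics, target, and penalty weight are all illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      T, n_seg = 10.0, 5                      # horizon and number of control segments
      edges = np.linspace(0.0, T, n_seg + 1)

      def simulate(u_params):
          """Integrate the toy dynamic x' = -x + u(t) under piecewise-constant u."""
          def rhs(t, x):
              seg = min(np.searchsorted(edges, t, side="right") - 1, n_seg - 1)
              return -x + u_params[seg]
          return solve_ivp(rhs, (0.0, T), [0.0], max_step=0.05)

      target = 1.5                            # desired terminal state (illustrative)

      def objective(u_params):
          xT = simulate(u_params).y[0, -1]
          # Terminal tracking error plus a small control-energy penalty.
          return (xT - target) ** 2 + 1e-3 * np.sum(u_params ** 2)

      res = minimize(objective, x0=np.zeros(n_seg), method="Nelder-Mead")
      print("optimal segment controls:", np.round(res.x, 3))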

  17. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

    We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
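
    The abstract's formulation, assignment of sources to redshift bins as a linear system with integer constraints solved by Gurobi, can be miniaturized as follows. This Python sketch uses the gurobipy API the authors name (Gurobi is a commercial solver); the cost coefficients are placeholders standing in for the correlation-function mismatch terms built dynamically in the authors' pipeline.

      import gurobipy as gp
      from gurobipy import GRB

      n_gal, n_bins = 6, 3                    # toy problem size
      # Hypothetical cost of assigning galaxy i to redshift bin j, e.g. derived
      # from how well the assignment matches target correlation statistics.
      cost = [[1.0, 2.0, 3.0], [2.0, 1.0, 2.0], [3.0, 2.0, 1.0],
              [1.5, 1.0, 2.5], [2.0, 2.0, 1.0], [1.0, 3.0, 2.0]]

      m = gp.Model("redshift-assignment")
      x = m.addVars(n_gal, n_bins, vtype=GRB.BINARY, name="x")

      # Each galaxy is placed in exactly one redshift bin.
      m.addConstrs((x.sum(i, "*") == 1 for i in range(n_gal)))

      m.setObjective(gp.quicksum(cost[i][j] * x[i, j]
                                 for i in range(n_gal) for j in range(n_bins)),
                     GRB.MINIMIZE)
      m.optimize()

      for i in range(n_gal):
          for j in range(n_bins):
              if x[i, j].X > 0.5:
                  print(f"galaxy {i} -> bin {j}")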

  18. Optimal tuning of a confined Brownian information engine.

    PubMed

    Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong

    2016-03-01

    A Brownian information engine is a device extracting mechanical work from a single heat bath by exploiting the information on the state of a Brownian particle immersed in the bath. As for any engine, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has rarely been studied because of the difficulty in finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximal when τ approaches infinity, while the power is maximal when τ approaches zero.

  19. Dynamic regime of coherent population trapping and optimization of frequency modulation parameters in atomic clocks.

    PubMed

    Yudin, V I; Taichenachev, A V; Basalaev, M Yu; Kovalenko, D V

    2017-02-06

    We theoretically investigate the dynamic regime of coherent population trapping (CPT) in the presence of frequency modulation (FM). We formulate criteria for quasi-stationary (adiabatic) and dynamic (non-adiabatic) responses of an atomic system driven by this FM. Using the density matrix formalism for a Λ system, the error signal is exactly calculated and optimized. It is shown that the optimal FM parameters correspond to the dynamic regime of atom-field interaction, which differs significantly from the conventional description of CPT resonances within the quasi-stationary approach (under small modulation frequency). The theoretical results obtained are in good qualitative agreement with various experiments. We have also found a CPT analogue of the Pound-Drever-Hall regime of frequency stabilization.

  20. The importance of geospatial data to calculate the optimal distribution of renewable energies

    NASA Astrophysics Data System (ADS)

    Díaz, Paula; Masó, Joan

    2013-04-01

    Especially during the last three years, renewable energies have been revolutionizing international trade while geographically diversifying markets. Renewables are experiencing rapid growth in power generation. According to REN21 (2012), during the last six years the total installed renewables capacity grew at record rates. In 2011, the EU raised its share of global new renewables capacity to 44%. The BRICS nations (Brazil, Russia, India and China) accounted for about 26% of the global total. Moreover, almost twenty countries in the Middle East, North Africa, and sub-Saharan Africa currently have active markets in renewables. Energy return ratios are commonly used to calculate the efficiency of traditional energy sources. The Energy Return On Investment (EROI) compares the energy returned by a certain source with the energy used to obtain it (explore, find, develop, produce, extract, transform, harvest, grow, process, etc.). These energy return ratios have demonstrated a general decrease in the efficiency of fossil fuels and gas. When one considers the limits on the quantity of energy produced by some sources, the energy invested to obtain it, and the difficulty of finding optimal locations for establishing renewables farms (e.g., due to an ever-increasing scarcity of appropriate land), the EROI becomes relevant for renewables. A spatialized EROI, which uses variables with spatial distribution, enables the identification of optimal locations in terms of both energy production and associated costs. It is important to note that the spatialized EROI can be mathematically formalized and calculated the same way for different locations in a reproducible way. This means that, having established a concrete EROI methodology, it is possible to generate a continuous map that highlights the most productive zones for renewable energies in terms of maximum energy return at minimum cost. Relevant variables for calculating the real energy invested are the grid connections between production and consumption, transmission losses, and the efficiency of the grid. If appropriate, the spatialized EROI analysis could include any indirect costs that the source of energy might produce, such as visual impacts, food market impacts, and land price. Such a spatialized study requires GIS tools to compute operations using both spatial relations, like distances and frictions, and topological relations, like connectivity, which are not easy to consider in the way that EROI is currently calculated. In a broader perspective, by applying the EROI to various energy sources, a comparative analysis of the efficiency of obtaining different sources can be done in a quantitative way. The increase in energy investment is also accompanied by an increase in manufacturing and policies. Further efforts will be necessary in the coming years to provide energy access through smart grids and to determine the efficient areas in terms of cost of production and energy returned on investment. The authors present the EROI as a reliable way to address the input-output energy relationship and increase the efficiency of energy investment when the appropriate geospatial variables are considered. The spatialized EROI can be a useful tool for decision makers when designing energy policies and programming energy funds, because it is an objective demonstration of which energy sources are more convenient in terms of costs and efficiency.
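
    A spatialized EROI reduces, per map cell, to the ratio of energy returned to energy invested, with the invested term augmented by location-dependent losses. The Python sketch below computes such a map on a toy grid; the yields, fixed investment, and 5%-per-distance-unit loss model are invented for the illustration.

      import numpy as np

      # Toy 5x5 territory: annual energy yield per cell (assumed units, e.g. GWh).
      rng = np.random.default_rng(2)
      e_out = rng.uniform(50.0, 120.0, size=(5, 5))

      # Fixed energy invested per installation plus a loss term that grows with
      # Manhattan distance to a grid connection point at the grid centre
      # (all values illustrative).
      e_invested = 20.0
      dist_to_grid = np.abs(np.arange(5)[:, None] - 2) + np.abs(np.arange(5) - 2)
      loss_per_cell = 0.05 * dist_to_grid * e_out   # 5% of output per distance unit

      eroi = e_out / (e_invested + loss_per_cell)
      best = np.unravel_index(np.argmax(eroi), eroi.shape)
      print("EROI map:\n", np.round(eroi, 2))
      print("best cell:", best, "EROI =", round(float(eroi[best]), 2))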

  1. Taxonomic minimalism.

    PubMed

    Beattie, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey, but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  2. The development and initial validation of a sensitive bedside cognitive screening test.

    PubMed

    Faust, D; Fogel, B S

    1989-01-01

    Brief bedside cognitive examinations such as the Mini-Mental State Examination are designed to detect delirium and dementia but not more subtle or delineated cognitive deficits. Formal neuropsychological evaluation provides greater sensitivity and detects a wider range of cognitive deficits but is too lengthy for efficient use at the bedside or in epidemiological studies. The authors developed the High Sensitivity Cognitive Screen (HSCS), a 20-minute interview-based test, to identify patients who show disorder on formal neuropsychological evaluation. An initial study demonstrated satisfactory test-retest and interrater reliability. The HSCS was then administered to 60 psychiatric and neurological patients with suspected cognitive deficits but without gross impairment, who also completed formal neuropsychological testing. Results of both tests were independently classified as either normal, borderline, or abnormal. The HSCS correctly classified 93% of patients across the normal-abnormal dichotomy and showed promise for characterizing the extent and severity of cognitive dysfunction.

  3. A discriminatory function for prediction of protein-DNA interactions based on alpha shape modeling.

    PubMed

    Zhou, Weiqiang; Yan, Hong

    2010-10-15

    Protein-DNA interaction has significant importance in many biological processes. However, the underlying principle of the molecular recognition process is still largely unknown. As more high-resolution 3D structures of protein-DNA complexes become available, the surface characteristics of the complex have become an important research topic. In our work, we apply an alpha shape model to represent the surface structure of the protein-DNA complex and develop an interface-atom curvature-dependent conditional probability discriminatory function for the prediction of protein-DNA interaction. The interface-atom curvature-dependent formalism captures atomic interaction details better than the atomic distance-based method. The proposed method provides good performance in discriminating the native structures from the docking decoy sets, and outperforms the distance-dependent formalism in terms of the z-score. Computer experiment results show that the curvature-dependent formalism with the optimal parameters can achieve a native z-score of -8.17 in discriminating the native structure from the highest surface-complementarity scored decoy set and a native z-score of -7.38 in discriminating the native structure from the lowest RMSD decoy set. The interface-atom curvature-dependent formalism can also be used to predict the apo version of DNA-binding proteins. These results suggest that the interface-atom curvature-dependent formalism has good prediction capability for protein-DNA interactions. The code and data sets are available for download at http://www.hy8.com/bioinformatics.htm kenandzhou@hotmail.com.

  4. Efficiency of quantum vs. classical annealing in nonconvex learning problems

    PubMed Central

    Zecchina, Riccardo

    2018-01-01

    Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764

  5. Visualizing Matrix Multiplication

    ERIC Educational Resources Information Center

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…

  6. Inquiry into the Hidden Curriculum.

    ERIC Educational Resources Information Center

    King, Scott E.

    1986-01-01

    Discusses distinctions between the formal, overt curriculum and the hidden or implicit curriculum that inculcates values and expectations not openly acknowledged. Before 1900, schools stressed homogeneity, efficiency, and obedience to ensure students' smooth transition from childhood to life in an industrialized society. These values became hidden…

  7. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
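
    For reference, the kinetic energy density on which meta-GGA functionals depend is conventionally defined from the occupied orbitals as (the standard definition, consistent with the usage described above):

      \tau(\mathbf{r}) = \frac{1}{2} \sum_{i}^{\mathrm{occ}} \left| \nabla \psi_i(\mathbf{r}) \right|^2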

  8. Bose-Einstein condensates form in heuristics learned by ciliates deciding to signal 'social' commitments.

    PubMed

    Clark, Kevin B

    2010-03-01

    Fringe quantum biology theories often adopt the concept of Bose-Einstein condensation when explaining how consciousness, emotion, perception, learning, and reasoning emerge from operations of intact animal nervous systems and other computational media. However, controversial empirical evidence and mathematical formalism concerning decoherence rates of bioprocesses keep these frameworks from satisfactorily accounting for the physical nature of cognitive-like events. This study, inspired by the discovery that preferential attachment rules computed by complex technological networks obey Bose-Einstein statistics, is the first rigorous attempt to examine whether analogues of Bose-Einstein condensation precipitate learned decision making in live biological systems as bioenergetics optimization predicts. By exploiting the ciliate Spirostomum ambiguum's capacity to learn and store behavioral strategies advertising mating availability into heuristics of topologically invariant computational networks, three distinct phases of strategy use were found to map onto statistical distributions described by Bose-Einstein, Fermi-Dirac, and classical Maxwell-Boltzmann behavior. Ciliates that sensitized or habituated signaling patterns to emit brief periods of either deceptive 'harder-to-get' or altruistic 'easier-to-get' serial escape reactions began testing condensed on initially perceived fittest 'courting' solutions. When these ciliates switched from their first strategy choices, Bose-Einstein condensation of strategy use abruptly dissipated into a Maxwell-Boltzmann computational phase no longer dominated by a single fittest strategy. Recursive trial-and-error strategy searches annealed strategy use back into a condensed phase consistent with performance optimization. 'Social' decisions performed by ciliates showing no nonassociative learning were largely governed by Fermi-Dirac statistics, resulting in degenerate distributions of strategy choices. These findings corroborate previous work demonstrating ciliates with improving expertise search grouped 'courting' assurances at quantum efficiencies and verify efficient processing by primitive 'social' intelligences involves network forms of Bose-Einstein condensation coupled to preceding thermodynamic-sensitive computational phases. 2009 Elsevier Ireland Ltd. All rights reserved.

  9. Structured decision making for managing pneumonia epizootics in bighorn sheep

    USGS Publications Warehouse

    Sells, Sarah N.; Mitchell, Michael S.; Edwards, Victoria L.; Gude, Justin A.; Anderson, Neil J.

    2016-01-01

    Good decision-making is essential to conserving wildlife populations. Although there may be multiple ways to address a problem, perfect solutions rarely exist. Managers are therefore tasked with identifying decisions that will best achieve desired outcomes. Structured decision making (SDM) is a method of decision analysis used to identify the most effective, efficient, and realistic decisions while accounting for values and priorities of the decision maker. The stepwise process includes identifying the management problem, defining objectives for solving the problem, developing alternative approaches to achieve the objectives, and formally evaluating which alternative is most likely to accomplish the objectives. The SDM process can be more effective than informal decision-making because it provides a transparent way to quantitatively evaluate decisions for addressing multiple management objectives while incorporating science, uncertainty, and risk tolerance. To illustrate the application of this process to a management need, we present an SDM-based decision tool developed to identify optimal decisions for proactively managing risk of pneumonia epizootics in bighorn sheep (Ovis canadensis) in Montana. Pneumonia epizootics are a major challenge for managers due to long-term impacts to herds, epistemic uncertainty in timing and location of future epizootics, and consequent difficulty knowing how or when to manage risk. The decision tool facilitates analysis of alternative decisions for how to manage herds based on predictions from a risk model, herd-specific objectives, and predicted costs and benefits of each alternative. Decision analyses for 2 example herds revealed that meeting management objectives necessitates specific approaches unique to each herd. The analyses showed how and under what circumstances the alternatives are optimal compared to other approaches and current management. Managers can be confident that these decisions are effective, efficient, and realistic because they explicitly account for important considerations managers implicitly weigh when making decisions, including competing management objectives, uncertainty in potential outcomes, and risk tolerance.

  11. Modeling Cyber Conflicts Using an Extended Petri Net Formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakrzewska, Anita N; Ferragut, Erik M

    2011-01-01

    When threatened by automated attacks, critical systems that require human-controlled responses have difficulty making optimal responses and adapting protections in real time and may therefore be overwhelmed. Consequently, experts have called for the development of automatic real-time reaction capabilities. However, a technical gap exists in the modeling and analysis of cyber conflicts for automatically understanding the repercussions of responses. There is a need for modeling cyber assets that accounts for concurrent behavior, incomplete information, and payoff functions. We address this need by extending the Petri net formalism to allow real-time cyber conflicts to be modeled in a way that is expressive and concise. This formalism includes transitions controlled by players as well as firing rates attached to transitions. This allows us to model both player actions and factors that are beyond the control of players in real time. We show that our formalism is able to represent situational awareness, concurrent actions, incomplete information, and objective functions. These factors make it well suited to modeling cyber conflicts in a way that allows for useful analysis. MITRE has compiled the Common Attack Pattern Enumeration and Classification (CAPEC), an extensive list of cyber attacks at various levels of abstraction. CAPEC includes factors such as attack prerequisites, possible countermeasures, and attack goals. These elements are vital to understanding cyber attacks and to generating the corresponding real-time responses. We demonstrate that the formalism can be used to extract precise models of cyber attacks from CAPEC. Several case studies show that our Petri net formalism is more expressive than other models, such as attack graphs, for modeling cyber conflicts and that it is amenable to exploring cyber strategies.
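
    The base formalism being extended is the ordinary place/transition Petri net, which is compact to state in code. The Python sketch below implements only token markings, enabledness, and firing on an invented probe/exploit/patch fragment; the paper's extensions (player-controlled transitions and firing rates) are omitted.

      from dataclasses import dataclass, field

      @dataclass
      class PetriNet:
          """Minimal place/transition net; names and topology are illustrative."""
          marking: dict = field(default_factory=dict)      # place -> token count
          transitions: dict = field(default_factory=dict)  # name -> (inputs, outputs)

          def enabled(self, t):
              inputs, _ = self.transitions[t]
              return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

          def fire(self, t):
              if not self.enabled(t):
                  raise ValueError(f"transition {t!r} is not enabled")
              inputs, outputs = self.transitions[t]
              for p, n in inputs.items():
                  self.marking[p] -= n
              for p, n in outputs.items():
                  self.marking[p] = self.marking.get(p, 0) + n

      # Toy attack/response fragment: an attacker probe enables an exploit, which
      # a defender response can pre-empt by consuming the same vulnerability token.
      net = PetriNet(
          marking={"vulnerable": 1, "probe_done": 0, "compromised": 0, "patched": 0},
          transitions={
              "probe":   ({"vulnerable": 1}, {"vulnerable": 1, "probe_done": 1}),
              "exploit": ({"vulnerable": 1, "probe_done": 1}, {"compromised": 1}),
              "patch":   ({"vulnerable": 1}, {"patched": 1}),
          },
      )
      net.fire("probe")
      net.fire("patch")             # defender wins the race: exploit now disabled
      print(net.marking, "exploit enabled?", net.enabled("exploit"))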

  12. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
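
    The iterative learning control (ILC) step reused here admits a very small illustration: repeat a batch, measure the tracking error, and feed a scaled, time-shifted copy of it forward into the next batch's input. The plant model and learning gain in this Python sketch are invented, not the paper's CCPP model.

      import numpy as np

      # Toy single-input plant y[t] = 0.8*y[t-1] + 0.5*u[t-1]; ILC learns the
      # input trajectory over repeated batches (gain and model are illustrative).
      T, gamma = 50, 0.8
      ref = np.ones(T)                  # desired film-thickness-like profile
      u = np.zeros(T)

      def run_batch(u):
          y = np.zeros(T)
          for t in range(1, T):
              y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]
          return y

      for k in range(30):
          y = run_batch(u)
          e = ref - y
          # P-type ILC update: shift the error one step to respect the plant delay.
          u[:-1] += gamma * e[1:]

      rms = float(np.sqrt(np.mean((ref - run_batch(u)) ** 2)))
      print("final RMS tracking error:", rms)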

  14. Analysis and optimization of hybrid electric vehicle thermal management systems

    NASA Astrophysics Data System (ADS)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

    In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, 28 ¢/h and 77.3 mPts/h, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to the baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
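
    LINMAP-style selection from a Pareto frontier can be illustrated as follows. This is a hedged sketch, not the authors' implementation: each objective is normalized to [0, 1] and the design nearest the ideal point is chosen. The sample designs are hypothetical, loosely echoing the magnitudes quoted above.

        import numpy as np

        # Columns: negated exergy efficiency (so all objectives are minimized),
        # total cost rate (¢/h), environmental impact rate (mPts/h). Values assumed.
        pareto = np.array([
            [-0.33, 29.4, 88.1],
            [-0.31, 26.6, 80.0],
            [-0.29, 28.0, 77.3],
        ])
        lo, hi = pareto.min(axis=0), pareto.max(axis=0)
        norm = (pareto - lo) / (hi - lo)    # scale each objective to [0, 1]
        ideal = norm.min(axis=0)            # ideal point: best of each objective
        best = np.argmin(np.linalg.norm(norm - ideal, axis=1))
        print("selected design:", pareto[best])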

  15. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
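
    The optimal selection-probability idea can be sketched numerically. The snippet below assumes a Neyman-type allocation in which the phase-two sampling probability is proportional to the conditional standard deviation of Y given W divided by the square root of the per-subject cost, capped at 1 and scaled to an expected budget; the heteroscedasticity model, cost and budget are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=1000)                # auxiliary variable for each subject
        sd_y_given_w = 1.0 + 0.5 * np.abs(W)     # assumed conditional SD model
        cost = np.full(1000, 1.0)                # assumed per-subject measurement cost

        raw = sd_y_given_w / np.sqrt(cost)       # Neyman-type allocation weight
        budget = 300                             # expected phase-two sample size
        pi = np.minimum(1.0, raw * budget / raw.sum())
        print(pi.sum())                          # approximately equals the budget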

  16. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  17. Topology-optimized broadband surface relief transmission grating

    NASA Astrophysics Data System (ADS)

    Andkjær, Jacob; Ryder, Christian P.; Nielsen, Peter C.; Rasmussen, Thomas; Buchwald, Kristian; Sigmund, Ole

    2014-03-01

    We propose a design methodology for systematic design of surface relief transmission gratings with optimized diffraction efficiency. The methodology is based on a gradient-based topology optimization formulation along with 2D frequency domain finite element simulations for TE and TM polarized plane waves. The goal of the optimization is to find a grating design that maximizes diffraction efficiency for the -1st transmission order when illuminated by unpolarized plane waves. Results indicate that a surface relief transmission grating can be designed with a diffraction efficiency of more than 40% in a broadband range going from the ultraviolet region, through the visible region and into the near-infrared region.

  18. Modeling and optimal designs for dislocation and radiation tolerant single and multijunction solar cells

    NASA Astrophysics Data System (ADS)

    Mehrotra, A.; Alemu, A.; Freundlich, A.

    2011-02-01

    Crystalline defects (e.g. dislocations or grain boundaries) as well as electron- and proton-induced defects cause a reduction of minority carrier diffusion length, which in turn results in degradation of solar cell efficiency. Hetero-epitaxial or metamorphic III-V devices with low dislocation density have high beginning-of-life (BOL) efficiencies, but electron and proton radiation degrades their end-of-life (EOL) efficiencies. By optimizing the device design (emitter and base thickness, doping) we can obtain highly dislocated metamorphic devices that are radiation resistant. Here we have modeled III-V single and multijunction solar cells using drift and diffusion equations, considering experimental III-V material parameters, dislocation density, 1 MeV equivalent electron radiation doses, thicknesses and doping concentrations. Reducing device thickness increases the EOL efficiency of high-dislocation-density solar cells. By optimizing the device design we can obtain nearly the same EOL efficiencies from high-dislocation solar cells as from defect-free III-V multijunction solar cells. For example, a defect-free GaAs solar cell gives, after optimization, an EOL efficiency of 11.2% (under a typical 1 MeV electron fluence of 5×10^15 cm^-2), while a GaAs solar cell with a high dislocation density (10^8 cm^-2) gives, after optimization, an EOL efficiency of 10.6%. The approach provides an additional degree of freedom in the design of high efficiency space cells and could in turn be used to relax the need for thick defect-filtering buffers in metamorphic devices.
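
    A common way to model the radiation effect described here is the standard damage relation 1/L_phi^2 = 1/L_0^2 + K_L * phi for the minority-carrier diffusion length L under 1 MeV electron fluence phi. The sketch below only evaluates that relation; the initial diffusion length and damage coefficient are assumed values, not the paper's.

        import math

        # Standard radiation-damage relation for diffusion length (values assumed):
        #   1/L_phi^2 = 1/L_0^2 + K_L * phi
        L0 = 3e-4        # initial diffusion length, cm (assumed)
        K_L = 1e-10      # damage coefficient (dimensionless in these units; assumed)
        phi = 5e15       # 1 MeV electron fluence, cm^-2 (from the abstract)

        L_phi = 1.0 / math.sqrt(1.0 / L0**2 + K_L * phi)
        print(f"L drops from {L0:.2e} cm to {L_phi:.2e} cm")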

  19. Increasing patient safety and efficiency in transfusion therapy using formal process definitions.

    PubMed

    Henneman, Elizabeth A; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Andrzejewski, Chester; Merrigan, Karen; Cobleigh, Rachel; Frederick, Kimberly; Katz-Bassett, Ethan; Henneman, Philip L

    2007-01-01

    The administration of blood products is a common, resource-intensive, and potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted toward quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (ie, at the point of care) has essentially remained unchanged over the past several decades. Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, although useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions. One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential error and improve the transfusion process has not previously been reported. The purpose of this article is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.

  20. Traveling-Wave Tube Efficiency Enhancement

    NASA Technical Reports Server (NTRS)

    Dayton, James A., Jr.

    2011-01-01

    Traveling-wave tubes (TWT's) are used to amplify microwave communication signals on virtually all NASA and commercial spacecraft. Because TWT's are a primary power user, increasing their power efficiency is important for reducing spacecraft weight and cost. NASA Glenn Research Center has played a major role in increasing TWT efficiency over the last thirty years. In particular, two types of efficiency optimization algorithms have been developed for coupled-cavity TWT's. The first is the phase-adjusted taper which was used to increase the RF power from 420 to 1000 watts and the RF efficiency from 9.6% to 22.6% for a Ka-band (29.5 GHz) TWT. This was a record efficiency at this frequency level. The second is an optimization algorithm based on simulated annealing. This improved algorithm is more general and can be used to optimize efficiency over a frequency bandwidth and to provide a robust design for very high frequency TWT's in which dimensional tolerance variations are significant.

  1. Load response of shape-changing microswimmers scales with their swimming efficiency

    NASA Astrophysics Data System (ADS)

    Friedrich, Benjamin M.

    2018-04-01

    External forces acting on a microswimmer can feed back on its self-propulsion mechanism. We discuss this load response for a generic microswimmer that swims by cyclic shape changes. We show that the change in cycle frequency is proportional to the Lighthill efficiency of self-propulsion. As a specific example, we consider Najafi's three-sphere swimmer. The force-velocity relation of a microswimmer implies a correction for a formal superposition principle for active and passive motion.

  2. Comment on ‘The most energy efficient way to charge the capacitor in a RC circuit’

    NASA Astrophysics Data System (ADS)

    Oven, R.

    2018-07-01

    In a recent paper (Wang 2017 Phys. Educ. 52 065019), a comparison was made between the efficiency of charging a capacitor (C) in series with a resistor (R) using either a voltage source or a constant current source. The paper concluded that using a current source was more efficient. We show that this is not correct when the energy loss within the current source is considered. It is also shown that the energy loss is not dependent on the charging rate. A formal proof using calculus and simpler graphical arguments are presented.
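
    The fixed 50% efficiency of constant-voltage RC charging, which underlies the comparison above, can be checked numerically: the energy dissipated in R over the transient equals the energy stored in C, independent of R and C. The component values below are arbitrary.

        import numpy as np

        R, C, V = 1e3, 1e-6, 5.0
        t = np.linspace(0, 20 * R * C, 200001)      # well past the transient
        i = (V / R) * np.exp(-t / (R * C))          # charging current i(t)
        E_loss = np.trapz(i**2 * R, t)              # energy dissipated in R
        E_stored = 0.5 * C * V**2                   # energy stored in C
        print(E_loss / E_stored)                    # ~1.0, so efficiency ~ 0.5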

  3. Formal Verification of Quasi-Synchronous Systems

    DTIC Science & Technology

    2015-07-01

    pg. 215-226, Springer-Verlag: London, UK, 2001. [4] Nicolas Halbwachs and Louis Mandel, Simulation and Verification of Asynchronous Systems by... Huang, S. A. Smolka, W. Tan, and S. Tripakis, Deep Random Search for Efficient Model Checking of Timed Automata, in Proceedings of the 13th Monterey

  4. Vocational Education and Training in Denmark. Short Description

    ERIC Educational Resources Information Center

    Cedefop - European Centre for the Development of Vocational Training, 2012

    2012-01-01

    Vocational education and training in Denmark has embarked on a process of modernisation aiming primarily at increasing flexibility, individualisation, quality and efficiency. Assessment and recognition of informal and non-formal learning, competence-based curricula, innovative approaches to teaching, and increased possibilities for partial…

  5. Modeling Violent Non-State Actors: A Summary of Concepts and Methods

    DTIC Science & Technology

    2004-11-01

    charts, leadership, rules, formal communications and process efficiency to name a few. While a useful aspect of organizational diagnosis, this... and Arie Shirom, Organizational Diagnosis and Assessment: Bridging Theory and Practice (Thousand Oaks, CA: Sage Publications, 1999), 44. 9 Katz and

  6. Use of plan quality degradation to evaluate tradeoffs in delivery efficiency and clinical plan metrics arising from IMRT optimizer and sequencer compromises

    PubMed Central

    Wilkie, Joel R.; Matuszak, Martha M.; Feng, Mary; Moran, Jean M.; Fraass, Benedick A.

    2013-01-01

    Purpose: Plan degradation resulting from compromises made to enhance delivery efficiency is an important consideration for intensity modulated radiation therapy (IMRT) treatment plans. IMRT optimization and/or multileaf collimator (MLC) sequencing schemes can be modified to generate more efficient treatment delivery, but the effect those modifications have on plan quality is often difficult to quantify. In this work, the authors present a method for quantitative assessment of overall plan quality degradation due to tradeoffs between delivery efficiency and treatment plan quality, illustrated using comparisons between plans developed allowing different numbers of intensity levels in IMRT optimization and/or MLC sequencing for static segmental MLC IMRT plans. Methods: A plan quality degradation method to evaluate delivery efficiency and plan quality tradeoffs was developed and used to assess planning for 14 prostate and 12 head and neck patients treated with static IMRT. Plan quality was evaluated using a physician's predetermined “quality degradation” factors for relevant clinical plan metrics associated with the plan optimization strategy. Delivery efficiency and plan quality were assessed for a range of optimization and sequencing limitations. The “optimal” (baseline) plan for each case was derived using a clinical cost function with an unlimited number of intensity levels. These plans were sequenced with a clinical MLC leaf sequencer which uses >100 segments, assuring delivered intensities to be within 1% of the optimized intensity pattern. Each patient's optimal plan was also sequenced limiting the number of intensity levels (20, 10, and 5), and then separately optimized with these same numbers of intensity levels. Delivery time was measured for all plans, and direct evaluation of the tradeoffs between delivery time and plan degradation was performed. Results: When considering tradeoffs, the optimal number of intensity levels depends on the treatment site and on the stage in the process at which the levels are limited. The cost of improved delivery efficiency, in terms of plan quality degradation, increased as the number of intensity levels in the sequencer or optimizer decreased. The degradation was more substantial for the head and neck cases relative to the prostate cases, particularly when fewer than 20 intensity levels were used. Plan quality degradation was less severe when the number of intensity levels was limited in the optimizer rather than the sequencer. Conclusions: Analysis of plan quality degradation allows for a quantitative assessment of the compromises in clinical plan quality as delivery efficiency is improved, in order to determine the optimal delivery settings. The technique is based on physician-determined quality degradation factors and can be extended to other clinical situations where investigation of various tradeoffs is warranted. PMID:23822412

  7. Information Seen as Part of the Development of Living Intelligence: the Five-Leveled Cybersemiotic Framework for FIS

    NASA Astrophysics Data System (ADS)

    Brier, Soren

    2003-06-01

    It is argued that a true transdisciplinary information science going from physical information to phenomenological understanding needs a metaphysical framework. Three different kinds of causality are implied: efficient, formal and final. And at least five different levels of existence are needed: 1. The quantum vacuum fields with entangled causation. 2. The physical level with its energy- and force-based efficient causation. 3. The informational-chemical level with its formal causation based on pattern fitting. 4. The biological-semiotic level with its non-conscious final causation and 5. The social-linguistic level of self-consciousness with its conscious goal-oriented final causation. To integrate these consistently in an evolutionary theory as emergent levels, neither mechanical determinism nor complexity theory is sufficient, because they cannot be a foundation for a theory of lived meaning. C. S. Peirce's triadic semiotic philosophy combined with a cybernetic and systemic view, like N. Luhmann's, could create the framework I call Cybersemiotics.

  8. The effect of state-level funding on energy efficiency outcomes

    NASA Astrophysics Data System (ADS)

    Downs, Anna

    Increasingly, states are formalizing energy efficiency policies. In 2010, states required utilities to budget $5.5 billion through ratepayer-funded energy efficiency programs, investing in both electricity and natural gas programs. However, the size and spread of energy efficiency programs were strikingly different from state to state. This paper examines how far each dollar of state-level energy efficiency funding goes in producing efficiency gains. Many states have also pursued innovative policy actions to conserve electricity. Measures of policy effort are also included in this study, along with average electricity prices. The only variable that is consistently correlated with energy usage intensity across all models is electricity price. As politicians at local, state, and Federal levels continue to push for improved energy efficiency, the models in this paper provide a convincing impetus for focusing on strategies that raise energy prices.

  9. Improvement of energy efficiency via spectrum optimization of excitation sequence for multichannel simultaneously triggered airborne sonar system

    NASA Astrophysics Data System (ADS)

    Meng, Qing-Hao; Yao, Zhen-Jing; Peng, Han-Yang

    2009-12-01

    Both the energy efficiency and correlation characteristics are important in airborne sonar systems to realize multichannel ultrasonic transducers working together. High energy efficiency can increase echo energy and measurement range, and sharp autocorrelation and flat cross correlation can help eliminate cross-talk among multichannel transducers. This paper addresses energy efficiency optimization under the premise that cross-talk between different sonar transducers can be avoided. The nondominated sorting genetic algorithm-II is applied to optimize both the spectrum and correlation characteristics of the excitation sequence. The central idea of the spectrum optimization is to distribute most of the energy of the excitation sequence within the frequency band of the sonar transducer; thus, less energy is filtered out by the transducers. Real experiments show that a sonar system consisting of eight-channel Polaroid 600 series electrostatic transducers excited with 2 ms optimized pulse-position-modulation sequences can work together without cross-talk and can measure distances up to 650 cm with a maximum relative error of 1%.
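
    The spectrum-shaping objective described above, concentrating the excitation energy inside the transducer pass band, can be illustrated with a simple in-band energy fraction. The sketch assumes a binary PPM-like sequence, a 1 MHz sampling rate and a roughly 50 kHz pass band; none of these values come from the paper.

        import numpy as np

        fs = 1e6                               # sampling rate, Hz (assumed)
        rng = np.random.default_rng(1)
        seq = rng.choice([0.0, 1.0], size=2000)  # stand-in PPM-like excitation

        spec = np.abs(np.fft.rfft(seq))**2       # power spectrum of the sequence
        freqs = np.fft.rfftfreq(len(seq), d=1/fs)
        band = (freqs > 40e3) & (freqs < 60e3)   # assumed transducer pass band
        print(spec[band].sum() / spec.sum())     # in-band fraction to be maximized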

  10. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks.

    PubMed

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-10-09

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms.

  11. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks

    PubMed Central

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-01-01

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms. PMID:28991200

  12. Spinor helicity methods in high-energy factorization: Efficient momentum-space calculations in the Color Glass Condensate formalism

    NASA Astrophysics Data System (ADS)

    Ayala, Alejandro; Hentschinski, Martin; Jalilian-Marian, Jamal; Tejeda-Yeomans, Maria Elena

    2017-07-01

    We use the spinor helicity formalism to calculate the cross section for production of three partons of a given polarization in Deep Inelastic Scattering (DIS) off proton and nucleus targets at small Bjorken x. The target proton or nucleus is treated as a classical color field (shock wave) from which the produced partons scatter multiple times. We reported our result for the final expression for the production cross section and studied the azimuthal angular correlations of the produced partons in [1]. Here we provide the full details of the calculation of the production cross section using the spinor helicity methods.

  13. Ontology development for provenance tracing in National Climate Assessment of the US Global Change Research Program

    NASA Astrophysics Data System (ADS)

    Fu, Linyun; Ma, Xiaogang; Zheng, Jin; Goldstein, Justin; Duggan, Brian; West, Patrick; Aulenbach, Steve; Tilmes, Curt; Fox, Peter

    2014-05-01

    This poster will show how we used a case-driven iterative methodology to develop an ontology to represent the content structure and the associated provenance information in a National Climate Assessment (NCA) report of the US Global Change Research Program (USGCRP). We applied the W3C PROV-O ontology to implement a formal representation of provenance. We argue that the use case-driven, iterative development process and the application of a formal provenance ontology help efficiently incorporate domain knowledge from earth and environmental scientists in a well-structured model interoperable in the context of the Web of Data.
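
    A PROV-O derivation chain of the kind used for NCA report content can be sketched with rdflib. The resource names below (figure, dataset, author team) are hypothetical placeholders, not identifiers from the actual USGCRP ontology.

        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF

        # Minimal sketch of PROV-O provenance triples (resource names hypothetical).
        PROV = Namespace("http://www.w3.org/ns/prov#")
        EX = Namespace("http://example.org/nca#")

        g = Graph()
        g.add((EX.figure_2_18, RDF.type, PROV.Entity))
        g.add((EX.figure_2_18, PROV.wasDerivedFrom, EX.ghcn_daily_dataset))
        g.add((EX.figure_2_18, PROV.wasAttributedTo, EX.usgcrp_author_team))
        print(g.serialize(format="turtle"))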

  14. The approach to engineering tasks composition on knowledge portals

    NASA Astrophysics Data System (ADS)

    Novogrudska, Rina; Globa, Larysa; Schill, Alexsander; Romaniuk, Ryszard; Wójcik, Waldemar; Karnakova, Gaini; Kalizhanova, Aliya

    2017-08-01

    The paper presents an approach to engineering task composition on engineering knowledge portals. The specific features of engineering tasks are highlighted, and their analysis forms the basis for partial engineering task integration. A formal algebraic system for engineering task composition is proposed, allowing one to set context-independent formal structures for describing the elements of engineering tasks. A method of engineering task composition is developed that allows partial calculation tasks to be integrated into general calculation tasks on engineering portals, performed on user demand. The real-world scenario «Calculation of the strength for the power components of magnetic systems» is presented, confirming the applicability and efficiency of the proposed approach.

  15. Unified Approach To Control Of Motions Of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Improved computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Present scheme similar to one described in "Coordinated Control of Mobile Robotic Manipulators" (NPO-19109). Both schemes based on configuration-control formalism. Present one incorporates explicit distinction between holonomic and nonholonomic constraints. Several other prior articles in NASA Tech Briefs discussed aspects of configuration-control formalism. These include "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes with Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  16. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
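
    For reference, one generation of the classic DE/rand/1/bin scheme looks roughly as follows; the paper's efficiency-improving variants modify this loop. The objective is a cheap stand-in, since the actual application evaluates a Navier-Stokes solver.

        import numpy as np

        rng = np.random.default_rng(2)
        def sphere(x): return float(np.sum(x**2))   # stand-in objective function

        NP, D, F, CR = 20, 5, 0.8, 0.9              # population, dims, DE parameters
        pop = rng.uniform(-5, 5, (NP, D))
        fit = np.array([sphere(x) for x in pop])

        for i in range(NP):
            # mutation: v = a + F * (b - c) with three distinct random members
            a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)
            # binomial crossover with at least one component from the mutant
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection
            if sphere(trial) < fit[i]:
                pop[i], fit[i] = trial, sphere(trial)
        print(fit.min())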

  17. Seeding the initial population with feasible solutions in metaheuristic optimization of steel trusses

    NASA Astrophysics Data System (ADS)

    Kazemzadeh Azad, Saeid

    2018-01-01

    In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.

  18. Optimization of single photon detection model based on GM-APD

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Yang, Yi; Hao, Peiyu

    2017-11-01

    High-precision laser ranging over one hundred kilometers requires a detector with very strong detection ability for very weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving photon detection efficiency, and design optimization requires a good model. In this paper, we study the existing Poisson distribution model and take into account the important detector parameters of dark count rate, dead time, quantum efficiency and so on. We improve and optimize the detection model and select appropriate parameters to achieve the optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results, verifying the rationality of the model. The model has reference value in engineering applications.
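
    A minimal version of such a Poisson detection model, extended with dark counts, might look as follows; the parameter values (quantum efficiency, dark count rate, gate length, mean photon number) are assumptions for illustration, not values from the paper.

        import math

        # Assumed Poisson click model: P_det = 1 - exp(-(eta * n_signal + n_dark)),
        # where n_dark = dark count rate * gate time.
        eta = 0.3          # photon detection (quantum) efficiency (assumed)
        n_signal = 0.5     # mean signal photons per gate (assumed)
        dcr = 1e5          # dark count rate, counts/s (assumed)
        gate = 10e-9       # detection gate length, s (assumed)

        n_dark = dcr * gate
        p_click = 1 - math.exp(-(eta * n_signal + n_dark))
        p_false = 1 - math.exp(-n_dark)    # click probability with no signal
        print(p_click, p_false)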

  19. Optimism Bias in Fans and Sports Reporters

    PubMed Central

    Love, Bradley C.

    2015-01-01

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team’s prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve). PMID:26352146

  20. Multi-objective LQR with optimum weight selection to design FOPID controllers for delayed fractional order processes.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2015-09-01

    An optimal trade-off design for a fractional order (FO)-PID controller is proposed with a Linear Quadratic Regulator (LQR) based technique using two conflicting time domain objectives. A class of delayed FO systems with a single non-integer order element, exhibiting both sluggish and oscillatory open loop responses, have been controlled here. The FO time delay processes are handled within a multi-objective optimization (MOO) formalism of LQR based FOPID design. A comparison is made between two contemporary approaches of stabilizing time-delay systems within LQR. The MOO control design methodology yields the Pareto optimal trade-off solutions between the tracking performance and the total variation (TV) of the control signal. Tuning rules are formed for the optimal LQR-FOPID controller parameters, using the median of the non-dominated Pareto solutions to handle delayed FO processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
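
    The LQR core of such a design can be illustrated on an integer-order toy system; the fractional-order and delay handling of the paper are beyond this sketch. The weights Q and R, which the multi-objective layer would tune, are chosen arbitrarily here.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Standard continuous-time LQR gain for a second-order toy plant.
        A = np.array([[0.0, 1.0], [-2.0, -3.0]])
        B = np.array([[0.0], [1.0]])
        Q = np.diag([10.0, 1.0])     # state weights (tuned by the MOO layer)
        R = np.array([[1.0]])        # control weight

        P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
        K = np.linalg.solve(R, B.T @ P)        # optimal feedback: u = -K x
        print(K)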

  1. Multi-objective aerodynamic shape optimization of small livestock trailers

    NASA Astrophysics Data System (ADS)

    Gilkeson, C. A.; Toropov, V. V.; Thompson, H. M.; Wilson, M. C. T.; Foxley, N. A.; Gaskell, P. H.

    2013-11-01

    This article presents a formal optimization study of the design of small livestock trailers, within which the majority of animals are transported to market in the UK. The benefits of employing a headboard fairing to reduce aerodynamic drag without compromising the ventilation of the animals' microclimate are investigated using a multi-stage process involving computational fluid dynamics (CFD), optimal Latin hypercube (OLH) design of experiments (DoE) and moving least squares (MLS) metamodels. Fairings are parameterized in terms of three design variables and CFD solutions are obtained at 50 permutations of design variables. Both global and local search methods are employed to locate the global minimum from metamodels of the objective functions and a Pareto front is generated. The importance of carefully selecting an objective function is demonstrated and optimal fairing designs, offering drag reductions in excess of 5% without compromising animal ventilation, are presented.
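
    A moving least squares metamodel of the kind used here can be sketched in one dimension: a low-order polynomial is refit at every query point with distance-based weights. The test function, kernel width and sample size below are illustrative assumptions, not the CFD data of the study.

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.uniform(0, 1, 50)                                # sampled design points
        y = np.sin(2 * np.pi * X) + 0.05 * rng.normal(size=50)   # stand-in response

        def mls_predict(xq, h=0.1):
            # Gaussian closeness weights centered at the query point
            w = np.exp(-((X - xq) / h) ** 2)
            A = np.vstack([np.ones_like(X), X]).T    # linear basis [1, x]
            W = np.diag(w)
            beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
            return beta[0] + beta[1] * xq

        print(mls_predict(0.25), np.sin(2 * np.pi * 0.25))   # prediction vs truth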

  2. Optimally combining dynamical decoupling and quantum error correction.

    PubMed

    Paz-Silva, Gerardo A; Lidar, D A

    2013-01-01

    Quantum control and fault-tolerant quantum computing (FTQC) are two of the cornerstones on which the hope of realizing a large-scale quantum computer is pinned, yet only preliminary steps have been taken towards formalizing the interplay between them. Here we explore this interplay using the powerful strategy of dynamical decoupling (DD), and show how it can be seamlessly and optimally integrated with FTQC. To this end we show how to find the optimal decoupling generator set (DGS) for various subspaces relevant to FTQC, and how to simultaneously decouple them. We focus on stabilizer codes, which represent the largest contribution to the size of the DGS, showing that the intuitive choice comprising the stabilizers and logical operators of the code is in fact optimal, i.e., minimizes a natural cost function associated with the length of DD sequences. Our work brings hybrid DD-FTQC schemes, and their potentially considerable advantages, closer to realization.

  3. Optimally combining dynamical decoupling and quantum error correction

    PubMed Central

    Paz-Silva, Gerardo A.; Lidar, D. A.

    2013-01-01

    Quantum control and fault-tolerant quantum computing (FTQC) are two of the cornerstones on which the hope of realizing a large-scale quantum computer is pinned, yet only preliminary steps have been taken towards formalizing the interplay between them. Here we explore this interplay using the powerful strategy of dynamical decoupling (DD), and show how it can be seamlessly and optimally integrated with FTQC. To this end we show how to find the optimal decoupling generator set (DGS) for various subspaces relevant to FTQC, and how to simultaneously decouple them. We focus on stabilizer codes, which represent the largest contribution to the size of the DGS, showing that the intuitive choice comprising the stabilizers and logical operators of the code is in fact optimal, i.e., minimizes a natural cost function associated with the length of DD sequences. Our work brings hybrid DD-FTQC schemes, and their potentially considerable advantages, closer to realization. PMID:23559088

  4. Optimism Bias in Fans and Sports Reporters.

    PubMed

    Love, Bradley C; Kopeć, Łukasz; Guest, Olivia

    2015-01-01

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team's prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve).

  5. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
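
    The expected-improvement criterion used to pick additional samples is standard and easy to state in code. The sketch below assumes the hybrid surrogate supplies a predictive mean mu and standard deviation sigma at each candidate point; it is not the paper's implementation.

        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_min):
            # EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu) / sigma
            sigma = np.maximum(sigma, 1e-12)      # guard against zero variance
            z = (f_min - mu) / sigma
            return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        # Two candidate points: one promising but certain, one worse but uncertain.
        print(expected_improvement(np.array([0.9, 1.2]), np.array([0.3, 0.05]), 1.0))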

  6. Optimization of blade motion of vertical axis turbine

    NASA Astrophysics Data System (ADS)

    Ma, Yong; Zhang, Liang; Zhang, Zhi-yang; Han, Duan-feng

    2016-04-01

    In this paper, a method is proposed to improve the energy efficiency of the vertical axis turbine. First, a single-disk multiple-stream-tube model is used to calculate individual fitness. A genetic algorithm is adopted to optimize the blade pitch motion of the vertical axis turbine, with the maximum energy efficiency selected as the optimization objective. Then, a particular data processing method is proposed, fitting the result data to a cosine-like curve. After that, a general formula describing the blade motion is developed. Finally, CFD simulation is used to validate the blade pitch motion formula. The results show that the turbine's energy efficiency becomes higher after the optimization of blade pitch motion; compared with the fixed-pitch turbine, the efficiency of the variable-pitch turbine is significantly improved by the active blade pitch control; the energy efficiency declines gradually with the growth of the speed ratio; moreover, compactness has a larger effect on the blade motion while the number of blades has little effect on it.

  7. A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.

    2002-01-01

    The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).
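
    A generic simulated-annealing loop of the kind such an optimizer builds on is sketched below; the one-dimensional stand-in objective and the cooling schedule are illustrative assumptions, not the collector model.

        import math
        import random

        def objective(x):                  # stand-in for 1 - collector efficiency
            return (x - 2.0) ** 2 + 0.1 * math.sin(25 * x)

        x = 0.0
        fx = objective(x)
        T = 1.0                            # initial temperature (assumed)
        for step in range(5000):
            cand = x + random.gauss(0, 0.1)        # random neighbor move
            fc = objective(cand)
            # accept downhill always, uphill with Boltzmann probability
            if fc < fx or random.random() < math.exp(-(fc - fx) / T):
                x, fx = cand, fc
            T *= 0.999                     # geometric cooling schedule
        print(x, fx)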

  8. SURVIVABILITY THROUGH OPTIMIZING RESILIENT MECHANISMS (STORM)

    DTIC Science & Technology

    2017-04-01

    Approved for Public Release; Distribution Unlimited (PA# 88ABW-2017-0894, cleared 07 Mar 2017). Game theory is the branch of applied mathematics that formalizes strategic interaction among intelligent rational agents, and it allows reasoning quantitatively about cyber-attacks. This effort developed a survivability mechanism based on game theory and has applied game theory to numerous cyber security problems: cloud security, cyber threat information sharing, and others.

  9. Improving engineering system design by formal decomposition, sensitivity analysis, and optimization

    NASA Technical Reports Server (NTRS)

    Sobieski, J.; Barthelemy, J. F. M.

    1985-01-01

    A method for use in the design of a complex engineering system by decomposing the problem into a set of smaller subproblems is presented. Coupling of the subproblems is preserved by means of the sensitivity derivatives of the subproblem solution to the inputs received from the system. The method allows for the division of work among many people and computers.

  10. Active inference and epistemic value.

    PubMed

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.

  11. Demographics of reintroduced populations: estimation, modeling, and decision analysis

    USGS Publications Warehouse

    Converse, Sarah J.; Moore, Clinton T.; Armstrong, Doug P.

    2013-01-01

    Reintroduction can be necessary for recovering populations of threatened species. However, the success of reintroduction efforts has been poorer than many biologists and managers would hope. To increase the benefits gained from reintroduction, management decision making should be couched within formal decision-analytic frameworks. Decision analysis is a structured process for informing decision making that recognizes that all decisions have a set of components—objectives, alternative management actions, predictive models, and optimization methods—that can be decomposed, analyzed, and recomposed to facilitate optimal, transparent decisions. Because the outcome of interest in reintroduction efforts is typically population viability or related metrics, models used in decision analysis efforts for reintroductions will need to include population models. In this special section of the Journal of Wildlife Management, we highlight examples of the construction and use of models for informing management decisions in reintroduced populations. In this introductory contribution, we review concepts in decision analysis, population modeling for analysis of decisions in reintroduction settings, and future directions. Increased use of formal decision analysis, including adaptive management, has great potential to inform reintroduction efforts. Adopting these practices will require close collaboration among managers, decision analysts, population modelers, and field biologists.

  12. 76 FR 56207 - Fiscal Year (FY) 2011 Funding Opportunity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... Solutions, Inc. the current grantee for the National Suicide Prevention Lifeline. This is not a formal... most cost-effective and efficient to supplement the existing grantee for the National Suicide... the National Suicide Prevention Lifeline. As such, Link2Health Solutions has been maintaining the...

  13. Digital Badges--Rewards for Learning?

    ERIC Educational Resources Information Center

    Shields, Rebecca; Chugh, Ritesh

    2017-01-01

    Digital badges are quickly becoming an appropriate, easy and efficient way for educators, community groups and other professional organisations, to exhibit and reward participants for skills obtained in professional development or formal and informal learning. This paper offers an account of digital badges, how they work and the underlying…

  14. Synthesis of Efficient Structures for Concurrent Computation.

    DTIC Science & Technology

    1983-10-01

    A formal presentation of these techniques, called virtualisation and aggregation, can be found in [King-83]. Census functions: trees perform broadcast... [The remainder of this record is table-of-contents residue, naming sections on Census Functions, User-Assisted Aggregation, and Parallel..., together with Figure 6 (Simple Parallel Structure for Broadcasting) and Figure 7 (Internal Structure of a Prefix Computation Network).]

  15. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the cpu time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.

  16. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.

  17. On the use of controls for subsonic transport performance improvement: Overview and future directions

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn; Espana, Martin

    1994-01-01

    Increasing competition among airline manufacturers and operators has highlighted the issue of aircraft efficiency. Fewer aircraft orders have led to an all-out efficiency improvement effort among the manufacturers to maintain if not increase their share of the shrinking number of aircraft sales. Aircraft efficiency is important in airline profitability and is key if fuel prices increase from their current low. In a continuing effort to improve aircraft efficiency and develop an optimal performance technology base, NASA Dryden Flight Research Center developed and flight tested an adaptive performance seeking control system to optimize the quasi-steady-state performance of the F-15 aircraft. The demonstrated technology is equally applicable to transport aircraft although with less improvement. NASA Dryden, in transitioning this technology to transport aircraft, is specifically exploring the feasibility of applying adaptive optimal control techniques to performance optimization of redundant control effectors. A simulation evaluation of a preliminary control law optimizes wing-aileron camber for minimum net aircraft drag. Two submodes are evaluated: one to minimize fuel and the other to maximize velocity. This paper covers the status of performance optimization of the current fleet of subsonic transports. Available integrated controls technologies are reviewed to define approaches using active controls. A candidate control law for adaptive performance optimization is presented along with examples of algorithm operation.

  18. Designing lymphocyte functional structure for optimal signal detection: voilà, T cells.

    PubMed

    Noest, A J

    2000-11-21

    One basic task of immune systems is to detect signals from unknown "intruders" amidst a noisy background of harmless signals. To clarify the functional importance of many observed lymphocyte properties, I ask: What properties would a cell have if one designed it according to the theory of optimal detection, with minimal regard for biological constraints? Sparse and reasonable assumptions about the statistics of available signals prove sufficient for deriving many features of the optimal functional structure, in an incremental and modular design. The use of one common formalism guarantees that all parts of the design collaborate to solve the detection task. Detection performance is computed at several stages of the design. Comparison between design variants reveals e.g. the importance of controlling the signal integration time. This predicts that an appropriate control mechanism should exist. Comparing the design to reality, I find a striking similarity with many features of T cells. For example, the formalism dictates clonal specificity, serial receptor triggering, (grades of) anergy, negative and positive selection, co-stimulation, high-zone tolerance, and clonal production of cytokines. Serious mismatches should be found if T cells were hindered by mechanistic constraints or vestiges of their (co-)evolutionary history, but I have not found clear examples. By contrast, fundamental mismatches abound when comparing the design to immune systems of e.g. invertebrates. The wide-ranging differences seem to hinge on the (in)ability to generate a large diversity of receptors. Copyright 2000 Academic Press.

  19. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm improves control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm is therefore proposed, based on error back-propagation gradient descent. The particles are ranked by fitness and the optimization problem is considered as a whole; error back-propagation gradient descent trains the BP neural network, and each particle updates its velocity and position according to its individual optimum and the global optimum. Making the particles learn more from the social optimum and less from their individual optima helps them avoid local optima, while the gradient information accelerates the local search of PSO and improves its search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in its initial stage and then remains close to it; within the same running time it achieves faster convergence and better search performance, improving convergence speed and, in particular, later-stage search efficiency.
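
    For orientation, the canonical PSO velocity and position update that such variants modify is sketched below; weighting the social (global-best) term more heavily than the cognitive (personal-best) term, as the abstract suggests, is expressed here simply through c1 and c2. All parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        def f(x): return np.sum(x**2, axis=1)        # stand-in objective

        N, D = 30, 5
        x = rng.uniform(-5, 5, (N, D))
        v = np.zeros((N, D))
        pbest, pval = x.copy(), f(x)                 # personal bests
        g = pbest[pval.argmin()].copy()              # global best

        w, c1, c2 = 0.7, 1.0, 2.0                    # heavier social term (assumed)
        for it in range(200):
            r1, r2 = rng.random((N, D)), rng.random((N, D))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = f(x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        print(pval.min())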

  20. Optimization and Validation of the TZM-bl Assay for Standardized Assessments of Neutralizing Antibodies Against HIV-1

    PubMed Central

    Sarzotti-Kelsoe, Marcella; Bailer, Robert T; Turk, Ellen; Lin, Chen-li; Bilska, Miroslawa; Greene, Kelli M.; Gao, Hongmei; Todd, Christopher A.; Ozaki, Daniel A.; Seaman, Michael S.; Mascola, John R.; Montefiori, David C.

    2014-01-01

    The TZM-bl assay measures antibody-mediated neutralization of HIV-1 as a function of reductions in HIV-1 Tat-regulated firefly luciferase (Luc) reporter gene expression after a single round of infection with Env-pseudotyped viruses. This assay has become the main endpoint neutralization assay used for the assessment of preclinical and clinical trial samples by a growing number of laboratories worldwide. Here we present the results of the formal optimization and validation of the TZM-bl assay, performed in compliance with Good Clinical Laboratory Practice (GCLP) guidelines. The assay was evaluated for specificity, accuracy, precision, limits of detection and quantitation, linearity, range and robustness. The validated manual TZM-bl assay was also adapted, optimized and qualified to an automated 384-well format. PMID:24291345

  1. A hierarchical transition state search algorithm

    NASA Astrophysics Data System (ADS)

    del Campo, Jorge M.; Köster, Andreas M.

    2008-07-01

    A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double-ended saddle interpolation method with local uphill trust region optimization. A new formalism for the incorporation of the distance constraint in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust region method and the saddle interpolation are highlighted. The saddle interpolation and local uphill trust region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.

  2. The functional connectome of cognitive reserve

    PubMed Central

    Marques, Paulo; Moreira, Pedro; Magalhães, Ricardo; Costa, Patrício; Santos, Nadine; Zihl, Josef; Soares, José

    2016-01-01

    Cognitive Reserve (CR) designates the brain's capacity to actively cope with insults through a more efficient use of its resources/networks. It was proposed in order to explain the discrepancies between the observed cognitive ability and the expected capacity for an individual. Typical proxies of CR include education and Intelligence Quotient, but none totally accounts for the variability of CR, and no study has shown whether the brain's greater efficiency associated with CR can be measured. We used a validated model to estimate CR from the residual variance in memory and general executive functioning, accounting for both brain anatomical (i.e., gray matter and white matter signal abnormalities volume) and demographic variables (i.e., years of formal education and sex). Functional connectivity (FC) networks and topological properties were explored for associations with CR. Demographic characteristics, mainly accounted for by years of formal education, were associated with higher FC, clustering, local efficiency and strength in parietal and occipital regions and greater network transitivity. Higher CR was associated with greater FC, local efficiency and clustering of occipital regions, strength and centrality of the inferior temporal gyrus, and higher global efficiency. Altogether, these findings suggest that education may facilitate the brain's ability to form segregated functional groups, reinforcing the view that a higher education level triggers more specialized use of neural processing. Additionally, this study demonstrated for the first time that CR is associated with more efficient processing of information in the human brain and reinforces the existence of a fine balance between segregation and integration. Hum Brain Mapp 37:3310–3322, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27144904

  3. Formalization of the engineering science discipline - knowledge engineering

    NASA Astrophysics Data System (ADS)

    Peng, Xiao

    Knowledge is the most precious ingredient facilitating aerospace engineering research and product development activities. Currently, the most common knowledge retention methods are paper-based documents, such as reports, books and journals. However, those media have innate weaknesses. For example, four generations of flying wing aircraft (Horten, Northrop XB-35/YB-49, Boeing BWB and many others) were mostly developed in isolation. The subsequent engineers were not aware of the previous developments, because these projects were documented in such a way that the next generation of engineers could not benefit from the lessons learned. In this manner, inefficient knowledge retention methods have become a primary obstacle for knowledge transfer from the experienced to the next generation of engineers. In addition, the quality of knowledge itself is a vital criterion; thus, an accurate measure of the quality of 'knowledge' is required. Although qualitative knowledge evaluation criteria have been researched in other disciplines, such as the AAA criterion by Ernest Sosa stemming from the field of philosophy, a quantitative knowledge evaluation criterion capable of numerically determining the quality of knowledge for aerospace engineering research and product development activities still needs to be developed. To provide engineers with a high-quality knowledge management tool, the engineering science discipline Knowledge Engineering has been formalized to systematically address knowledge retention issues. This research undertaking formalizes Knowledge Engineering as follows: 1. Categorize knowledge according to its formats and representations for the first time, which serves as the foundation for the subsequent knowledge management function development. 2. Develop an efficiency evaluation criterion for knowledge management by analyzing the characteristics of both knowledge and the parties involved in the knowledge management processes. 3. Propose and develop an innovative Knowledge-Based System (KBS), AVD KBS, forming a systematic approach facilitating knowledge management. 4. Demonstrate the efficiency advantages of AVD KBS over traditional knowledge management methods via selected design case studies. This research formalizes, for the first time, Knowledge Engineering as a distinct discipline by delivering a robust and high-quality knowledge management and process tool, AVD KBS. Formalizing knowledge is shown to significantly improve the effectiveness of aerospace knowledge retention and utilization.

  4. KBSA Project Management Assistant. Volume 1.

    DTIC Science & Technology

    1987-07-01

    perform automated synthesis of programs for specified goals, and so forth. The rationale for and benefits deriving from the ... are ... efficiently, as well as its interface to human users. It is therefore of prime importance to employ a language that allows the formalization of ... knowledge that is convenient at the conceptual level of humans and efficiently manipulable by the PMA. In order to achieve these somewhat

  5. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.

    1984-01-01

    This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.

  6. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) compared to traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full-factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
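
    To illustrate the kind of experiment reduction reported above (25 runs instead of 3125), here is a toy Taguchi workflow on a standard L9 orthogonal array with three factors at three levels; the response function and factor meanings are invented for the sketch and are not the paper's TEG model.

```python
import numpy as np

# Standard L9 orthogonal array (first three columns): 9 runs cover 3 factors at
# 3 levels (indices 0-2) instead of the 3**3 = 27 runs of a full factorial.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

def power(run):            # synthetic response standing in for TEG output power
    a, b, c = run
    return 10 + 3 * a + 2 * b - (c - 1) ** 2 + 0.1 * a * b

y = np.array([power(r) for r in L9])
sn = -10 * np.log10(1.0 / y ** 2)   # larger-the-better S/N ratio (one replicate per run)

# Average the S/N ratio over the runs at each level of each factor and pick the best.
for f in range(3):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"factor {f}: best level = {int(np.argmax(means))}, level means = {np.round(means, 2)}")
```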

  7. Optimization of insulation of a linear Fresnel collector

    NASA Astrophysics Data System (ADS)

    Ardekani, Mohammad Moghimi; Craig, Ken J.; Meyer, Josua P.

    2017-06-01

    This study presents a simulation-based optimization of the insulation around the cavity receiver of a linear Fresnel collector (LFC). The optimization minimizes heat losses from the cavity receiver (maximizing plant thermal efficiency) while minimizing the insulation cross-sectional area (minimizing material cost and cavity dead load), which leads to a cheaper and thermally more efficient LFC cavity receiver.

  8. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
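
    For reference, a bare-bones DE/rand/1/bin loop is sketched below; the Rosenbrock objective is a cheap stand-in for the expensive Navier-Stokes evaluation, and in the setting described above each trial evaluation would be farmed out to a distributed computer rather than run serially.

```python
import numpy as np

def rosenbrock(x):                    # cheap stand-in for an expensive CFD objective
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

rng = np.random.default_rng(1)
NP, dim, F, CR, gens = 20, 4, 0.8, 0.9, 300
pop = rng.uniform(-2, 2, (NP, dim))
fit = np.array([rosenbrock(p) for p in pop])

for _ in range(gens):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                 # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True          # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f = rosenbrock(trial)
        if f <= fit[i]:                          # greedy one-to-one selection
            pop[i], fit[i] = trial, f

print("best objective:", fit.min())
```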

  9. An efficiency study of the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw

    1995-01-01

    The efficiency of the Simultaneous Analysis and Design (SAND) approach in the minimum weight optimization of structural systems subject to strength and displacement constraints as well as size side constraints is investigated. SAND allows an optimization to take place in one single operation, as opposed to the more traditional and sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization, retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present a possible cure for any inherent deficiencies.

  10. A linguistic geometry for 3D strategic planning

    NASA Technical Reports Server (NTRS)

    Stilman, Boris

    1995-01-01

    This paper is a new step in the development and application of the Linguistic Geometry. This formal theory is intended to discover the inner properties of human expert heuristics, which have been successful in a certain class of complex control systems, and apply them to different systems. In this paper we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing Linguistic Geometry tools the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are shown in this paper on the new pilot example of the solution of the extremely complex 3D optimization problem of strategic planning for the space combat of autonomous vehicles. This example demonstrates deep and highly selective search in comparison with conventional search algorithms.

  11. Lessons from Crew Resource Management for Cardiac Surgeons.

    PubMed

    Marvil, Patrick; Tribble, Curt

    2017-04-30

    Crew resource management (CRM) describes a system developed in the late 1970s in response to a series of deadly commercial aviation crashes. This system has been universally adopted in commercial and military aviation and is now an integral part of aviation culture. CRM is an error mitigation strategy developed to reduce human error in situations in which teams operate in complex, high-stakes environments. Over time, the principles of this system have been applied and utilized in other environments, particularly in medical areas dealing with high-stakes outcomes requiring optimal teamwork and communication. While formal studies of CRM training in medical environments have reported mixed results, it seems clear that some of these principles should have value in the practice of cardiovascular surgery.

  12. Optimized Temporal Monitors for SystemC

    NASA Technical Reports Server (NTRS)

    Tabakov, Deian; Rozier, Kristin Y.; Vardi, Moshe Y.

    2012-01-01

    SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model then can be verified against the properties using runtime or formal verification techniques. In this paper we focus on automated generation of runtime monitors from temporal properties. Our focus is on minimizing runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
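
    The monitors described above are generated as C++ code woven into SystemC models; as a language-neutral sketch of what such a runtime monitor computes, the following hypothetical class checks a bounded-response property over a sampled trace. The property, bound, and trace are invented for illustration.

```python
class BoundedResponseMonitor:
    """Runtime monitor for the property G(req -> F[0,k] ack):
    every 'req' must be followed by an 'ack' within k observed cycles."""

    def __init__(self, k):
        self.k = k
        self.pending = []          # cycles remaining for each open request

    def step(self, req, ack):
        """Feed one sampled cycle; returns False once the property is violated."""
        if ack:
            self.pending.clear()   # one ack discharges all open requests
        self.pending = [t - 1 for t in self.pending]
        if req:
            self.pending.append(self.k)
        return all(t >= 0 for t in self.pending)

m = BoundedResponseMonitor(k=3)
trace = [(1, 0), (0, 0), (0, 1), (1, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
print([m.step(r, a) for r, a in trace])   # second request is never acked -> final False
```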

  13. Improving Efficiency of Passive RFID Tag Anti-Collision Protocol Using Dynamic Frame Adjustment and Optimal Splitting.

    PubMed

    Memon, Muhammad Qasim; He, Jingsha; Yasir, Mirza Ammar; Memon, Aasma

    2018-04-12

    Radio frequency identification is a wireless communication technology that enables data gathering and the identification of any tagged object. The collisions produced during wireless communication lead to a variety of problems, including an unwanted number of iterations, reader-induced idle slots, and computational complexity in estimating and recognizing the number of tags. In this work, dynamic frame adjustment and optimal splitting are employed together in the proposed algorithm. In the dynamic frame adjustment method, the frame length is based on the quantity of tags, to yield optimal efficiency. The optimal splitting method shortens idle slots by using an optimal value for the splitting level M_opt, where M > 2, to vary slot sizes and minimize the identification time spent on idle slots. The proposed algorithm offers the advantage of avoiding the cumbersome estimation of the quantity of tags, and the number of tags has no effect on its performance efficiency. Our experimental results show that, using the proposed algorithm, the efficiency curve remains constant as the number of tags varies from 50 to 450, resulting in an overall theoretical efficiency gain of 0.032 over a system efficiency of 0.441 and thus outperforming both dynamic binary tree slotted ALOHA (DBTSA) and binary splitting protocols.
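
    The claim that efficiency stays flat when the frame length tracks the tag count can be checked with a small simulation. This sketch models only the basic framed slotted-ALOHA collision process, not the optimal-splitting refinement above; all parameters are illustrative.

```python
import random

def dfsa_round_efficiency(n_tags, frame_len, rounds=2000):
    """Fraction of slots carrying exactly one tag reply (successful slots)."""
    successes = 0
    for _ in range(rounds):
        slots = [0] * frame_len
        for _ in range(n_tags):                  # each tag picks a slot uniformly
            slots[random.randrange(frame_len)] += 1
        successes += sum(1 for s in slots if s == 1)
    return successes / (rounds * frame_len)

random.seed(0)
for n in (50, 150, 450):
    # dynamic frame adjustment: frame length tracks the (estimated) tag count,
    # so the efficiency stays near 1/e regardless of how many tags are present
    print(n, round(dfsa_round_efficiency(n, frame_len=n), 3))
```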

  14. Efficiency optimization in a correlation ratchet with asymmetric unbiased fluctuations

    NASA Astrophysics Data System (ADS)

    Ai, Bao-Quan; Wang, Xian-Ju; Liu, Guo-Tao; Wen, De-Hua; Xie, Hui-Zhang; Chen, Wei; Liu, Liang-Gang

    2003-12-01

    The efficiency of a Brownian particle moving in a periodic potential in the presence of asymmetric unbiased fluctuations is investigated. We found that even in the quasistatic limit there is a regime where the efficiency can be a peaked function of temperature, which proves that thermal fluctuations facilitate the efficiency of energy transformation, contradicting the earlier findings [H. Kamegawa et al., Phys. Rev. Lett. 80, 5251 (1998)]. It is also found that the mutual interplay between temporal asymmetry and spatial asymmetry may induce optimized efficiency at finite temperatures. The ratchet is not most efficient when it gives maximum current.

  15. Enhanced Ungual Permeation of Terbinafine HCl Delivered Through Liposome-Loaded Nail Lacquer Formulation Optimized by QbD Approach.

    PubMed

    Shah, Viral H; Jobanputra, Amee

    2018-01-01

    The present investigation focused on developing, optimizing, and evaluating a novel liposome-loaded nail lacquer formulation for increasing the transungual permeation flux of terbinafine HCl for efficient treatment of onychomycosis. A three-factor, three-level Box-Behnken design was employed for optimizing the process and formulation parameters of the liposomal formulation. Liposomes were formulated by the thin film hydration technique followed by sonication. Drug-to-lipid ratio, sonication amplitude, and sonication time were screened as independent variables, while particle size, PDI, entrapment efficiency, and zeta potential were selected as quality attributes for the liposomal formulation. Multiple regression analysis was employed to construct a second-order quadratic polynomial equation and contour plots. A design space (overlay plot) was generated to optimize the liposomal system, with software-suggested levels of independent variables that could be transformed to the desired responses. The optimized liposome formulation was characterized and dispersed in nail lacquer, which was further evaluated for different parameters. The results showed that the optimized terbinafine HCl-loaded liposome formulation exhibited a particle size of 182 nm, PDI of 0.175, zeta potential of -26.8 mV, and entrapment efficiency of 80%. The transungual permeability flux of terbinafine HCl through the liposome-dispersed nail lacquer formulation was observed to be significantly higher in comparison to nail lacquer with a permeation enhancer. The developed formulation was also observed to be as efficient as pure drug dispersion in its antifungal activity. Thus, it was concluded that the developed formulation can serve as an efficient tool for enhancing the permeability of terbinafine HCl across the human nail plate, thereby improving its therapeutic efficiency.

  16. Reexamining Our Bias against Heuristics

    ERIC Educational Resources Information Center

    McLaughlin, Kevin; Eva, Kevin W.; Norman, Geoff R.

    2014-01-01

    Using heuristics offers several cognitive advantages, such as increased speed and reduced effort when making decisions, in addition to allowing us to make decisions in situations where missing data do not allow for formal reasoning. But the traditional view of heuristics is that they trade accuracy for efficiency. Here the authors discuss sources…

  17. 78 FR 43220 - Fiscal Year (FY) 2013 Funding Opportunity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-19

    ... Solutions, Inc. the current grantee for the National Suicide Prevention Lifeline. This is not a formal... efficient to supplement the existing grantee for the National Suicide Prevention Lifeline and to build on... agreement to manage the National Suicide Prevention Lifeline. The purpose of this program is to manage...

  18. The Myth behind the Subject Leader as a School Key Player

    ERIC Educational Resources Information Center

    Friedman, Hasia

    2011-01-01

    A school subject leader (SL) is formally considered to make a difference in the educational system as a leader of a professional learning community, being responsible for the efficient and effective performance of the subject department. Since the department entails frequent and significant interactions among teachers, and organizational…

  19. Imagery Teaches Elementary Economics Schema Efficiently.

    ERIC Educational Resources Information Center

    McKenzie, Gary R.

    In a complex domain such as economics, elementary school students' knowledge of formal systems beyond their immediate experience is often too incomplete, superficial, and disorganized to function as schema or model. However, visual imagery is a good technique for teaching young children a network of 10 to 20 propositions and the relationships…

  20. Long-Term Care: Common Issues and Unknowns

    ERIC Educational Resources Information Center

    Swartz, Katherine; Miake, Naoko; Farag, Nadine

    2012-01-01

    All industrialized countries are grappling with a common problem--how to provide assistance of various kinds to their rapidly aging populations. The problem for countries searching for models of efficient and high-quality long-term care (LTC) policies is that fewer than a dozen countries have government-organized, formal LTC policies. Relatively…

  1. A Comparative Survey of Education Systems: Structure, Organization and Development.

    ERIC Educational Resources Information Center

    King, Edmund

    1990-01-01

    Education must disengage from current accountancy concerns and serve learners now and in their real contexts. The massive efficiency of our industrialized apparatus for processing people in formal education prevents us from recognizing that a new approach is needed to satisfy tomorrow's uncertain and unlimited requirements. An international…

  2. How to Kill Creativity--Ten Easy Steps

    ERIC Educational Resources Information Center

    Moore, E. M.

    2007-01-01

    A cursory review of academic headlines would suggest that educational institutions can be perceived as formalized, regimented and systematic--intellectual factories that reward those staff and students who conform best to rigid systems which ensure the efficient processing of quantity. Is this the reality? Do prevailing economic and bureaucratic…

  3. Instructional Design: Skills to Benefit the Library Profession

    ERIC Educational Resources Information Center

    Turner, Jennifer

    2016-01-01

    Librarians in many types of libraries frequently find themselves positioned as instructors in formal and informal educational settings. Librarians can help ensure that learner needs are better defined and addressed by gaining basic competency in instructional design (ID), an intentional process used to create effective, efficient educational and…

  4. Optimization techniques applied to spectrum management for communications satellites

    NASA Astrophysics Data System (ADS)

    Ottey, H. R.; Sullivan, T. M.; Zusman, F. S.

    This paper describes user requirements, algorithms and software design features for the application of optimization techniques to the management of the geostationary orbit/spectrum resource. Relevant problems include parameter sensitivity analyses, frequency and orbit position assignment coordination, and orbit position allotment planning. It is shown how integer and nonlinear programming as well as heuristic search techniques can be used to solve these problems. Formalized mathematical objective functions that define the problems are presented. Constraint functions that impart the necessary solution bounds are described. A versatile program structure is outlined, which would allow problems to be solved in stages while varying the problem space, solution resolution, objective function and constraints.
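
    As a toy instance of the allotment-planning problem class described above, the sketch below brute-forces a tiny orbit-position allotment subject to a separation constraint; the slot grid, separation requirement, and requested longitudes are invented for illustration, and a realistic planner would use the integer programming or heuristic search techniques the paper discusses.

```python
from itertools import product

# Toy orbit-position allotment: place 4 satellites on a grid of candidate
# longitudes so that a minimum angular separation holds, while staying as
# close as possible to each operator's requested longitude.
slots = range(11)              # candidate longitudes (deg E), 1-degree grid
requested = [1, 3, 5, 9]       # each satellite's preferred longitude
MIN_SEP = 3                    # required separation between assigned positions (deg)

def feasible(assign):
    pts = sorted(assign)
    return all(b - a >= MIN_SEP for a, b in zip(pts, pts[1:]))

# Objective: sum of squared deviations from the requested longitudes.
best = min(
    (a for a in product(slots, repeat=4) if len(set(a)) == 4 and feasible(a)),
    key=lambda a: sum((x - r) ** 2 for x, r in zip(a, requested)),
)
print("allotment:", best)
```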

  5. A computable expression of closure to efficient causation.

    PubMed

    Mossio, Matteo; Longo, Giuseppe; Stewart, John

    2009-04-07

    In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.

  6. Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    2001-01-01

    A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWT's are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWT's employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with respect to frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broad-band design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.
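
    A generic simulated-annealing kernel of the kind underlying such design algorithms is sketched below; the toy objective stands in for the coupled-cavity TWT model, and the cooling schedule and move set are arbitrary illustrative choices, not NASA's.

```python
import math
import random

def anneal(cost, x0, neighbor, t0=1.0, alpha=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Metropolis rule: always accept improvements, sometimes accept uphill
        # moves so the search can escape local optima and approach the global one.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                       # geometric cooling schedule
    return best, fbest

# Toy multimodal stand-in for "minimum efficiency over the band" (maximized by
# minimizing its negative); a real run would call the coupled-cavity TWT model.
cost = lambda x: -(math.sin(3 * x) + 0.6 * math.sin(7 * x)) + 0.1 * x * x
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
print(anneal(cost, x0=0.0, neighbor=step))
```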

  7. Increase of Gas-Turbine Plant Efficiency by Optimizing Operation of Compressors

    NASA Astrophysics Data System (ADS)

    Matveev, V.; Goriachkin, E.; Volkov, A.

    2018-01-01

    The article presents an optimization method for improving the working process of axial compressors in gas turbine engines. The developed method automatically searches for the best compressor blade geometry by coupling the optimization software IOSO with the CFD software NUMECA Fine/Turbo. At each optimization step, the compressor parameters were calculated at both the working point and the stall point of the performance map. The study was carried out for a seven-stage high-pressure compressor and three-stage low-pressure compressors. As a result of the optimization, efficiency improvements were achieved for all investigated compressors.

  8. Optimal design of implants for magnetically mediated hyperthermia: A wireless power transfer approach

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-09-01

    In magnetically mediated hyperthermia (MMH), an externally applied alternating magnetic field interacts with a mediator (such as a magnetic nanoparticle or an implant) inside the body to heat up the tissue in its proximity. Producing heat via induced currents in this manner is strikingly similar to wireless power transfer (WPT) for implants, where power is transferred from a transmitter outside of the body to an implanted receiver, in most cases via magnetic fields as well. Leveraging this analogy, a systematic method to design MMH implants for optimal heating efficiency is introduced, akin to the design of WPT systems for optimal power transfer efficiency. This paper provides analytical formulas for the achievable heating efficiency bounds as well as the optimal operating frequency and the implant material. Multiphysics simulations validate the approach and further demonstrate that optimization with respect to maximum heating efficiency is accompanied by minimizing heat delivery to healthy tissue. This is a property that is highly desirable when considering MMH as a key component or complementary method of cancer treatment and other applications.

  9. A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function

    PubMed Central

    Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and for mutual authentication the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using a smart card. Our scheme is efficient, because it uses only an efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently and correctly supports the password change phase, which is always performed locally without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication and computational overheads, security, and the features provided. PMID:24892078

  10. A robust and effective smart-card-based remote user authentication mechanism using hash function.

    PubMed

    Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and for mutual authentication the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using a smart card. Our scheme is efficient, because it uses only an efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently and correctly supports the password change phase, which is always performed locally without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication and computational overheads, security, and the features provided.
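
    To make the "hash and XOR only" efficiency claim concrete, here is a deliberately simplified, hypothetical challenge-response sketch in the same spirit; it is not the paper's actual protocol and omits the smart card, biometric, and password-change phases entirely.

```python
import hashlib
import os

H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()   # one-way hash
xor = lambda a, b: bytes(p ^ q for p, q in zip(a, b))          # bitwise XOR

# --- registration (server side, over a secure channel) ---
server_secret = os.urandom(32)
ID, pw, r = b"alice", b"correct horse", os.urandom(16)
A = xor(H(ID, server_secret), H(pw, r))     # masked value stored with salt r

# --- login (user side): unmask the shared secret and prove knowledge of it ---
nonce = os.urandom(16)                      # fresh challenge from the server
k = xor(A, H(pw, r))                        # equals H(ID, server_secret) iff pw is right
proof = H(k, nonce)

# --- verification (server side) ---
print(proof == H(H(ID, server_secret), nonce))   # True for the correct password
```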

  11. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  12. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  13. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems, multi-mode search spaces with a large number of genes and convoluted Pareto fronts, require a large number of function evaluations for convergence, but the GA always converges.
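
    The binning selection algorithm above is not specified in detail in this record, so the sketch below shows the generic ingredients under my own assumptions: non-dominated filtering of a population under two objectives, followed by a simple objective-space binning that keeps the front evenly spread rather than crowded into one region.

```python
import numpy as np

rng = np.random.default_rng(2)
genes = rng.random((200, 3))                 # a random GA population

# Two competing objectives (both minimized); any smooth pair works for the demo.
f1 = np.sum(genes ** 2, axis=1)
f2 = np.sum((genes - 1.0) ** 2, axis=1)
F = np.column_stack([f1, f2])

def pareto_mask(F):
    """True for points not dominated by any other point (minimization)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

front = F[pareto_mask(F)]

# Binning-style selection: keep one representative per f1 bin so the retained
# front stays evenly spread along the trade-off curve.
edges = np.linspace(front[:, 0].min(), front[:, 0].max(), 10)
bins = np.digitize(front[:, 0], edges)
keep = [front[bins == b][0] for b in np.unique(bins)]
print(f"front size {len(front)}, after binning {len(keep)}")
```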

  14. Structural and process factors affecting the implementation of antimicrobial resistance prevention and control strategies in U.S. hospitals.

    PubMed

    Chou, Ann F; Yano, Elizabeth M; McCoy, Kimberly D; Willis, Deanna R; Doebbeling, Bradley N

    2008-01-01

    To address increases in the incidence of infection with antimicrobial-resistant pathogens, the National Foundation for Infectious Diseases and Centers for Disease Control and Prevention proposed two sets of strategies to (a) optimize antibiotic use and (b) prevent the spread of antimicrobial resistance and control transmission. However, little is known about the implementation of these strategies. Our objective is to explore organizational structural and process factors that facilitate the implementation of National Foundation for Infectious Diseases/Centers for Disease Control and Prevention strategies in U.S. hospitals. We surveyed 448 infection control professionals from a national sample of hospitals. Clinically anchored in the Donabedian model that defines quality in terms of structural and process factors, with the structural domain further informed by a contingency approach, we modeled the degree to which National Foundation for Infectious Diseases and Centers for Disease Control and Prevention strategies were implemented as a function of formalization and standardization of protocols, centralization of decision-making hierarchy, information technology capabilities, culture, communication mechanisms, and interdepartmental coordination, controlling for hospital characteristics. Formalization, standardization, centralization, institutional culture, provider-management communication, and information technology use were associated with optimal antibiotic use and enhanced implementation of strategies that prevent and control antimicrobial resistance spread (all p < .001). However, interdepartmental coordination for patient care was inversely related with antibiotic use in contrast to antimicrobial resistance spread prevention and control (p < .0001). Formalization and standardization may eliminate staff role conflict, whereas centralized authority may minimize ambiguity. Culture and communication likely promote internal trust, whereas information technology use helps integrate and support these organizational processes. These findings suggest concrete strategies for evaluating current capabilities to implement effective practices and foster and sustain a culture of patient safety.

  15. Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism

    NASA Astrophysics Data System (ADS)

    Parish, Eric; Duraisamy, Karthk

    2017-11-01

    The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project ''LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.

  16. Quality of haemophilia care in The Netherlands: new standards for optimal care.

    PubMed

    Leebeek, Frank W G; Fischer, Kathelijn

    2014-04-01

    In the Netherlands, the first formal haemophilia comprehensive care centre was established in 1964, and Dutch haemophilia doctors have been organised since 1972. Although several steps were taken to centralise haemophilia care and maintain quality of care, treatment was still delivered in many hospitals, and formal criteria for haemophilia treatment centres as well as a national haemophilia registry were lacking. In collaboration with patients and other stakeholders, Dutch haemophilia doctors have undertaken a formal process to draft new quality standards for the haemophilia treatment centres. First a project group including doctors, nurses, patients and the institute for harmonisation of quality standards undertook a literature study on quality standards and performed explorative visits to several haemophilia treatment centres in the Netherlands. Afterwards concept standards were defined and validated in two treatment centres. Next, the concept standards were evaluated by haemophilia doctors, patients, health insurance representatives and regulators. Finally, the final version of the standards of care was approved by Central body of Experts on quality standards in clinical care and the Dutch Ministry of Health. A team of expert auditors have been trained and, together with an independent auditor, will perform audits in haemophilia centres applying for formal certification. Concomitantly, a national registry for haemophilia and allied disorders is being set up. It is expected that these processes will lead to further concentration and improved quality of haemophilia care in the Netherlands.

  17. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
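
    The coupling pattern described above, a derivative-free optimizer wrapped around an expensive simulator with caching to avoid repeated evaluations, can be sketched as follows; the quadratic "simulator," its parameters, and its optimum are invented stand-ins for SCAP1D, not its actual model.

```python
from functools import lru_cache
from scipy.optimize import minimize

N_CALLS = 0

@lru_cache(maxsize=None)
def cell_efficiency(doping_exp, junction_um, thickness_um):
    """Stand-in for an expensive device simulation such as SCAP1D.
    Returns negative efficiency so that minimization maximizes efficiency."""
    global N_CALLS
    N_CALLS += 1
    eta = (20.0
           - 0.5 * (doping_exp - 17.0) ** 2      # hypothetical optimum near 1e17 cm^-3
           - 2.0 * (junction_um - 0.3) ** 2
           - 0.01 * (thickness_um - 200.0) ** 2)
    return -eta

def objective(x):
    # Round design variables so the cache absorbs repeated/near-identical calls,
    # reducing the number of times the "simulator" actually runs.
    return cell_efficiency(round(x[0], 3), round(x[1], 3), round(x[2], 2))

res = minimize(objective, x0=[16.0, 0.5, 150.0], method="Nelder-Mead")
print("efficiency %.2f%% after %d simulator calls" % (-res.fun, N_CALLS))
```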

  18. From School to Cafe and Back Again: Responding to the Learning Demands of the Twenty-First Century

    ERIC Educational Resources Information Center

    McWilliam, Erica

    2011-01-01

    This paper traces the historical origins of formal and informal lifelong learning to argue that optimal twenty-first-century education can and should draw on the traditions of both the school and the coffee house or cafe. For some time now, educational policy documents and glossy school brochures have come wrapped in the mantle of lifelong…

  19. A design procedure and handling quality criteria for lateral directional flight control systems

    NASA Technical Reports Server (NTRS)

    Stein, G.; Henke, A. H.

    1972-01-01

    A practical design procedure for aircraft augmentation systems is described based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.
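
    A minimal quadratic-optimal-control (LQR) computation of the kind such a procedure builds on is shown below; the system matrices and weights are hypothetical placeholders, not the F4C's actual lateral-directional model or the paper's handling-quality cost functionals.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state linear model with one control input.
A = np.array([[-0.5,  1.0],
              [-2.0, -0.8]])
B = np.array([[0.0],
              [3.0]])

# Quadratic weights: penalize state excursions versus control effort, the knob a
# handling-quality-oriented design would tune.
Q = np.diag([4.0, 1.0])
R = np.array([[0.5]])

P = solve_continuous_are(A, B, Q, R)    # solves A'P + PA - P B R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)         # optimal state-feedback gain, u = -K x
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```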

  20. Recent Developments: PKI Square Dish for the Soleras Project

    NASA Technical Reports Server (NTRS)

    Rogers, W. E.

    1984-01-01

    The Square Dish solar collectors are subjected to rigorous design attention regarding corrosion at the site, and certification of the collector structure. The microprocessor controls and tracking mechanisms are improved in the areas of fail-safe operation, durability, and low parasitic power requirements. Prototype testing demonstrates a performance efficiency of approximately 72% at a 730 F outlet temperature. Studies are conducted that include developing formal engineering design studies, developing formal engineering design drawings and fabrication details, establishing subcontracts for fabrication of major components, and developing a rigorous quality control system. The improved design is more cost effective to produce, and the extensive manuals developed for assembly and operation/maintenance result in faster field assembly and ease of operation.

  1. Recent developments: PKI square dish for the Soleras Project

    NASA Astrophysics Data System (ADS)

    Rogers, W. E.

    1984-03-01

    The Square Dish solar collectors are subjected to rigorous design attention regarding corrosion at the site, and certification of the collector structure. The microprocessor controls and tracking mechanisms are improved in the areas of fail-safe operation, durability, and low parasitic power requirements. Prototype testing demonstrates a performance efficiency of approximately 72% at a 730 F outlet temperature. Studies are conducted that include developing formal engineering design studies, developing formal engineering design drawings and fabrication details, establishing subcontracts for fabrication of major components, and developing a rigorous quality control system. The improved design is more cost effective to produce, and the extensive manuals developed for assembly and operation/maintenance result in faster field assembly and ease of operation.

  2. Gulf Coast Clean Energy Application Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillingham, Gavin

    The Gulf Coast Clean Energy Application Center was initiated to significantly improve market and regulatory conditions for the implementation of combined heat and power technologies. The GC CEAC was responsible for the development of CHP in Texas, Louisiana and Oklahoma. Through this program we employed a variety of outreach and education techniques, developed and deployed assessment tools and conducted market assessments. These efforts resulted in the growth of the combined heat and power market in the Gulf Coast region with a realization of more efficient energy generation, reduced emissions and a more resilient infrastructure. Specific to research, we did not formally investigate any techniques with any formal research design or methodology.

  3. An ontology of scientific experiments

    PubMed Central

    Soldatova, Larisa N; King, Ross D

    2006-01-01

    The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science. PMID:17015305

  4. Fully relativistic pseudopotential formalism under an atomic orbital basis: spin-orbit splittings and magnetic anisotropies.

    PubMed

    Cuadrado, R; Cerdá, J I

    2012-02-29

    We present an efficient implementation of the spin-orbit coupling within the density functional theory based SIESTA code (2002 J. Phys.: Condens. Matter 14 2745) using the fully relativistic and totally separable pseudopotential formalism of Hemstreet et al (1993 Phys. Rev. B 47 4238). First, we obtain the spin-orbit splittings for several systems ranging from isolated atoms to bulk metals and semiconductors as well as the Au(111) surface state. Next, and after extensive tests on the accuracy of the formalism, we also demonstrate its capability to yield reliable values for the magnetic anisotropy energy in magnetic systems. In particular, we focus on the L1(0) binary alloys and on two large molecules: Mn(6)O(2)(H -sao)(6)(O(2)CH)(2)(CH(3)OH)(4) and Co(4)(hmp)(4)(CH(3)OH)(4)Cl(4). In all cases our calculated anisotropies are in good agreement with those obtained with full-potential methods, despite the latter being, in general, computationally more demanding.

  5. Modeling and optimization of a concentrated solar supercritical CO2 power plant

    NASA Astrophysics Data System (ADS)

    Osorio, Julian D.

    Renewable energy sources are fundamental alternatives to supply the rising energy demand in the world and to reduce or replace fossil fuel technologies. In order to make renewable-based technologies suitable for commercial and industrial applications, two main challenges need to be solved: the design and manufacture of highly efficient devices and reliable systems to operate under intermittent energy supply conditions. In particular, power generation technologies based on solar energy are one of the most promising alternatives to supply the world energy demand and reduce the dependence on fossil fuel technologies. In this dissertation, the dynamic behavior of a Concentrated Solar Power (CSP) supercritical CO2 cycle is studied under different seasonal conditions. The system analyzed is composed of a central receiver, hot and cold thermal energy storage units, a heat exchanger, a recuperator, and multi-stage compression-expansion subsystems with intercoolers and reheaters between compressors and turbines respectively. The effects of operating and design parameters on the system performance are analyzed. Some of these parameters are the mass flow rate, intermediate pressures, number of compression-expansion stages, heat exchangers' effectiveness, multi-tank thermal energy storage, overall heat transfer coefficient between the solar receiver and the environment, and the effective area of the recuperator. Energy and exergy models for each component of the system are developed to optimize operating parameters for maximum efficiency. From the exergy analysis, the components contributing most to exergy destruction were identified. These components, which represent an important potential for improvement, are the recuperator, the hot thermal energy storage tank and the solar receiver. Two complementary alternatives to improve the efficiency of concentrated solar thermal systems are proposed in this dissertation: the optimization of the system's operating parameters and the optimization of the less efficient components. The parametric optimization is developed for a 1 MW reference CSP system with CO2 as the working fluid. The component optimization, focused on the less efficient components, comprises design modifications to the traditional component configurations of the recuperator, the hot thermal energy storage tank and the solar receiver. The proposed optimization alternatives include enhancing the heat exchanger's effectiveness by optimizing fin shapes, multi-tank thermal energy storage configurations for the hot thermal energy storage tank, and the incorporation of a transparent insulation material into the solar receiver. Some of the optimizations are conducted in a generalized way, using dimensionless models, to be applicable not only to the CSP but also to other thermal systems. This project is therefore an effort to improve the efficiency of power generation systems based on solar energy in order to make them competitive with conventional fossil fuel power generation devices. The results show that the parametric optimization leads the system to an efficiency of about 21% and a maximum power output close to 1.5 MW. The process efficiencies obtained in this work, of more than 21%, are relatively good for a solar-thermal conversion system and are also comparable with the conversion efficiencies of high-performance PV panels. The thermal energy storage allows the system to operate for several hours after sunset.
After optimization, this operating time increases from approximately 220 to 480 minutes. The hot and cold thermal energy storage also lessens temperature fluctuations by providing smooth temperature changes at the turbine and compressor inlets. Additional improvements in the overall system efficiency are possible by optimizing the less efficient components. In particular, the fin effectiveness can be improved by more than 5% after its shape is optimized, the efficiency of the thermal energy storage can be increased by about 5.7% when the mass is divided into four tanks, and solar receiver efficiencies of up to 70% can be maintained at high operating temperatures (~1200°C) when a transparent insulation material is incorporated into the receiver. The results obtained in this dissertation indicate that concentrated solar systems using supercritical CO2 could be a viable alternative for satisfying energy needs in desert areas with scarce water and fossil fuel resources.
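
    The recuperator's leverage on cycle efficiency, one of the improvement targets above, is visible even in an idealized constant-cp Brayton model. A supercritical-CO2 analysis requires real-fluid properties, so the sketch below, with invented parameters, is only a rough screening model.

```python
# Idealized recuperated Brayton cycle efficiency versus recuperator effectiveness,
# treating the working fluid as an ideal gas with constant cp. Temperatures in K.
def cycle_efficiency(pr, eps, T_in=900.0, T_amb=320.0, gamma=1.28,
                     eta_c=0.85, eta_t=0.90):
    phi = pr ** ((gamma - 1.0) / gamma)               # isentropic temperature ratio
    T2 = T_amb * (1.0 + (phi - 1.0) / eta_c)          # compressor outlet
    T5 = T_in * (1.0 - eta_t * (1.0 - 1.0 / phi))     # turbine outlet
    T3 = T2 + eps * (T5 - T2)                         # recuperator preheat
    w_net = (T_in - T5) - (T2 - T_amb)                # net work per unit cp*mdot
    q_in = T_in - T3                                  # external heat input
    return w_net / q_in

for eps in (0.0, 0.7, 0.9):
    print(f"recuperator effectiveness {eps}: efficiency {cycle_efficiency(3.0, eps):.2%}")
```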

  6. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
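
    The costate trick at the heart of this approach can be demonstrated on a small linear state equation: one extra adjoint solve yields the full design gradient, which a finite difference confirms. The matrix and cost below are invented for the demonstration; the paper's flow equations are of course nonlinear.

```python
import numpy as np

# State equation A(d) u = b with a design-dependent system matrix; the cost is
# J(d) = 0.5 * ||u(d) - u_target||^2. One costate solve replaces the extra state
# solves a finite-difference gradient would need for each design variable.
n = 50
b = np.ones(n)
u_target = np.linspace(0.0, 1.0, n)

def A_of(d):
    return np.diag(2.0 + d * np.arange(1, n + 1) / n) - 0.5 * np.eye(n, k=1)

def dA_dd():
    return np.diag(np.arange(1, n + 1) / n)

def cost_and_grad(d):
    A = A_of(d)
    u = np.linalg.solve(A, b)               # one state solve
    res = u - u_target
    lam = np.linalg.solve(A.T, -res)        # one costate (adjoint) solve
    J = 0.5 * res @ res
    dJ = lam @ (dA_dd() @ u)                # dJ/dd = lambda^T (dA/dd) u
    return J, dJ

J, g = cost_and_grad(1.0)
h = 1e-6                                    # finite-difference sanity check
print(g, (cost_and_grad(1.0 + h)[0] - J) / h)   # the two values should agree
```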

  7. Topology-optimized metasurfaces: impact of initial geometric layout.

    PubMed

    Yang, Jianji; Fan, Jonathan A

    2017-08-15

    Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.

  8. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  9. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments. Particularly relevant are the issues of whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and of how robust the parameter estimates for the model are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  10. Full space device optimization for solar cells.

    PubMed

    Baloch, Ahmer A B; Aly, Shahzada P; Hossain, Mohammad I; El-Mellouhi, Fedwa; Tabet, Nouar; Alharbi, Fahhad H

    2017-09-20

    Advances in computational materials have paved the way to design efficient solar cells by identifying the optimal properties of the device layers. Conventionally, device optimization has been governed by single or double descriptors for an individual layer, mostly the absorbing layer. However, the performance of the device depends collectively on all the properties of the material and the geometry of each layer in the cell. To address this issue of multi-property optimization and to avoid the paradigm of reoccurring materials in the solar cell field, a full-space, material-independent optimization approach is developed and presented in this paper. The method is employed to obtain an optimized material data set for maximum efficiency and for targeted functionality for each layer. To ensure the robustness of the method, two cases are studied, namely perovskite solar cell device optimization and a cadmium-free CIGS solar cell. The implementation determines the desirable optoelectronic properties of transport mediums and contacts that can maximize the efficiency for both cases. The resulting data sets of material properties can be matched with those in materials databases or realized by further microscopic material design. Moreover, the presented multi-property optimization framework can be extended to design any solid-state device.

  11. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used as the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface over a broader range than the local quadratic approximation used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures, both in the gas phase and in solution, show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
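
    The abstract does not spell out which Hessian update is used; as a generic illustration of iteratively updated Hessians, the sketch below applies the standard BFGS update to an approximate Hessian after each optimization step.

        import numpy as np

        def bfgs_update(B, s, y):
            """BFGS update of an approximate Hessian B from a geometry step
            s = x_new - x_old and gradient change y = g_new - g_old."""
            Bs = B @ s
            ys = y @ s
            if ys <= 1e-12 * np.linalg.norm(y) * np.linalg.norm(s):
                return B          # skip update if curvature condition fails
            return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / ys

        # Demo: the update enforces the secant condition B_new s = y exactly,
        # so curvature information from the latest step is absorbed into B.
        H = np.diag([1.0, 4.0, 9.0])          # "true" Hessian of a test surface
        s = np.array([1.0, -2.0, 0.5])        # a geometry step
        y = H @ s                             # corresponding gradient change
        B = bfgs_update(np.eye(3), s, y)
        print(np.allclose(B @ s, y))          # True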

  12. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation at maximum efficiency under load and uncertainty variations.
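
    A minimal sketch of the Monte Carlo step mentioned above: sample the uncertain parameter (here the load current) under assumed disturbance statistics and extract the interval handed to the robust optimizer. The nominal value and spread are placeholders, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(1)
        nominal_current = 300.0        # A (assumed nominal load current)
        # Assume a 10% relative disturbance; sample many operating scenarios
        samples = nominal_current * (1 + 0.1 * rng.standard_normal(10_000))
        lower, upper = np.percentile(samples, [0.5, 99.5])
        print(f"uncertain load current bounded in [{lower:.1f}, {upper:.1f}] A")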

  13. Integrated topology and shape optimization in structural design

    NASA Technical Reports Server (NTRS)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  14. Donor-acceptor-donor thienyl/bithienyl-benzothiadiazole/quinoxaline model oligomers: experimental and theoretical studies.

    PubMed

    Pina, João; de Melo, J Seixas; Breusov, D; Scherf, Ullrich

    2013-09-28

    A comprehensive spectral and photophysical investigation of four donor-acceptor-donor (DAD) oligomers consisting of electron-deficient 2,1,3-benzothiadiazole or quinoxaline moieties linked to electron-rich thienyl or bithienyl units has been undertaken. Additionally, a bis(dithienyl) substituted naphthalene was also investigated. The D-A-D nature of these oligomers resulted in the presence of an intramolecular charge transfer (ICT) state, which was further substantiated by solvatochromism studies (analysis with the Lippert-Mataga formalism). Significant differences were thereby obtained for the fluorescence quantum yields of the oligomers in the non-polar solvent methylcyclohexane vs. the polar ethanol. The study was further complemented with the determination of the optimized ground-state molecular geometries for the oligomers together with the prediction of the lowest vertical one-electron excitation energy and the relevant molecular orbital contours using DFT calculations. The electronic transitions show a clear HOMO to LUMO charge-transfer character. In contrast to the thiophene oligomers (the oligothiophenes with n = 1-7), where the intersystem crossing (ISC) yield decreases with n, the studied DAD oligomers were found to show an increase in the ISC efficiency with the number of (donor) thienyl units.

  15. Using Formal Grammars to Predict I/O Behaviors in HPC: The Omnisc'IO Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Ibrahim, Shadi; Antoniu, Gabriel

    2016-08-01

    The increasing gap between the computation performance of post-petascale machines and the performance of their I/O subsystem has motivated many I/O optimizations including prefetching, caching, and scheduling. In order to further improve these techniques, modeling and predicting spatial and temporal I/O patterns of HPC applications as they run has become crucial. In this paper we present Omnisc'IO, an approach that builds a grammar-based model of the I/O behavior of HPC applications and uses it to predict when future I/O operations will occur, and where and how much data will be accessed. To infer grammars, Omnisc'IO is based on StarSequitur, a novel algorithm extending Nevill-Manning's Sequitur algorithm. Omnisc'IO is transparently integrated into the POSIX and MPI I/O stacks and does not require any modification in applications or higher-level I/O libraries. It works without any prior knowledge of the application and converges to accurate predictions of any N future I/O operations within a couple of iterations. Its implementation is efficient in both computation time and memory footprint.
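
    The flavor of grammar-based I/O modeling can be shown with a toy Re-Pair-style compressor that repeatedly replaces the most frequent digram in a symbol stream with a fresh rule; this is only an offline caricature, since Sequitur-family algorithms (including StarSequitur) build such rules online while enforcing digram uniqueness and rule utility.

        from collections import Counter

        def repair_compress(seq):
            """Toy offline digram-grammar inference (Re-Pair style)."""
            rules, next_id = {}, 0
            while True:
                counts = Counter(zip(seq, seq[1:]))
                if not counts:
                    break
                digram, n = counts.most_common(1)[0]
                if n < 2:
                    break
                nt = f"R{next_id}"; next_id += 1
                rules[nt] = digram
                out, i = [], 0                 # non-overlapping replacement
                while i < len(seq):
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == digram:
                        out.append(nt); i += 2
                    else:
                        out.append(seq[i]); i += 1
                seq = out
            return seq, rules

        # An I/O trace as symbols: repeated open-write-write-close pattern
        print(repair_compress(list("owwc" * 5)))

    Once such rules exist, upcoming operations are predicted by matching the current suffix of the trace against partially expanded rules, which is the role the grammar plays in Omnisc'IO.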

  16. Primer on the Implementation of a Pharmacy Intranet Site to Improve Department Communication

    PubMed Central

    Hale, Holly J.

    2013-01-01

    Purpose: The purpose of the article is to describe the experience of selecting, developing, and implementing a pharmacy department intranet site with commentary regarding application to other institutions. Clinical practitioners and supporting staff need an effective, efficient, organized, and user-friendly communication tool to utilize and relay information required to optimize patient care. Summary: To create a functional and user-friendly department intranet site, department leadership and staff should be involved in the process from selection of product through implementation. A product that supports both document storage management and communication delivery and has the capability to be customized to provide varied levels of site access is desirable. The designation of an intranet site owner/developer within the department will facilitate purposeful site design and site maintenance execution. A well-designed and up-to-date site along with formal end-user training are essential for staff adoption and continued utilization. Conclusion: Development of a department intranet site requires a considerable time investment by several members of the department. The implementation of an intranet site can be an important step toward achieving improved communications. Staff utilization of this resource is key to its success. PMID:24421523

  17. Primer on the implementation of a pharmacy intranet site to improve department communication.

    PubMed

    Hale, Holly J

    2013-07-01

    The purpose of the article is to describe the experience of selecting, developing, and implementing a pharmacy department intranet site with commentary regarding application to other institutions. Clinical practitioners and supporting staff need an effective, efficient, organized, and user-friendly communication tool to utilize and relay information required to optimize patient care. To create a functional and user-friendly department intranet site, department leadership and staff should be involved in the process from selection of product through implementation. A product that supports both document storage management and communication delivery and has the capability to be customized to provide varied levels of site access is desirable. The designation of an intranet site owner/developer within the department will facilitate purposeful site design and site maintenance execution. A well-designed and up-to-date site along with formal end-user training are essential for staff adoption and continued utilization. Development of a department intranet site requires a considerable time investment by several members of the department. The implementation of an intranet site can be an important step toward achieving improved communications. Staff utilization of this resource is key to its success.

  18. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator

    NASA Astrophysics Data System (ADS)

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-03-01

    We will describe a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
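
    A crude caricature of a single compressed mode can be computed by proximal-gradient iteration on <psi|H|psi> + (1/mu)||psi||_1 with renormalization, for a discrete 1D Laplacian H; the authors' actual algorithm (a splitting scheme with orthogonality constraints, handling orthonormal multi-mode sets) is more sophisticated than this sketch, and all parameter values here are assumptions.

        import numpy as np

        n, mu, tau = 200, 20.0, 0.2          # grid size, sparsity weight, step
        H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian

        rng = np.random.default_rng(2)
        psi = rng.standard_normal(n)
        psi /= np.linalg.norm(psi)
        for _ in range(500):
            g = psi - tau * (H @ psi)                   # gradient step
            psi = np.sign(g) * np.maximum(np.abs(g) - tau / mu, 0.0)  # shrink
            nrm = np.linalg.norm(psi)
            if nrm == 0.0:
                break
            psi /= nrm                                  # renormalize
        print("nonzero entries:", int(np.sum(np.abs(psi) > 1e-8)), "of", n)

    The L1 shrinkage step is what drives entries exactly to zero, producing the compact support referred to above; the trade-off between localization and energy is set by mu.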

  19. Formal Professional Relationships Between General Practitioners and Specialists in Shared Care: Possible Associations with Patient Health and Pharmacy Costs.

    PubMed

    Lublóy, Ágnes; Keresztúri, Judit Lilla; Benedek, Gábor

    2016-04-01

    Shared care in chronic disease management aims at improving service delivery and patient outcomes, and reducing healthcare costs. The introduction of shared-care models is coupled with mixed evidence in relation to both patient health status and cost of care. Professional interactions among health providers are critical to a successful and efficient shared-care model. This article investigates whether the strength of formal professional relationships between general practitioners (GPs) and specialists (SPs) in shared care affects either the health status of patients or their pharmacy costs. In strong GP-SP relationships, the patient health status is expected to be high, due to efficient care coordination, and the pharmacy costs low, due to effective use of resources. This article measures the strength of formal professional relationships between GPs and SPs through the number of shared patients and proxies the patient health status by the number of comorbidities diagnosed and treated. To test the hypotheses and compare the characteristics of the strongest GP-SP connections with those of the weakest, this article concentrates on diabetes, a chronic condition where patient care coordination is likely important. Diabetes generates the largest shared patient cohort in Hungary, with the highest frequency of specialist medication prescriptions. This article finds that stronger ties result in lower pharmacy costs, but not in higher patient health status. Overall drug expenditure may be reduced by lowering patient care fragmentation through channelling a GP's patients to a small number of SPs.

  20. Optimization of nanostructured lipid carriers for topical delivery of nimesulide using Box-Behnken design approach.

    PubMed

    Moghddam, Seyedeh Marziyeh Mahdavi; Ahad, Abdul; Aqil, Mohd; Imam, Syed Sarim; Sultana, Yasmin

    2017-05-01

    The aim of the present study was to develop and optimize topically applied nimesulide-loaded nanostructured lipid carriers. Box-Behnken experimental design was applied for optimization of the nanostructured lipid carriers. The independent variables were the ratio of stearic acid:oleic acid (X1), poloxamer 188 concentration (X2) and lecithin concentration (X3), while particle size (Y1) and entrapment efficiency (Y2) were the chosen responses. Further, skin penetration study, in vitro release, confocal laser scanning microscopy and stability study were also performed. The optimized nanostructured lipid carriers of nimesulide provide reasonable particle size, flux, and entrapment efficiency. The optimized formulation (F9) with mean particle size of 214.4 ± 11 nm showed 89.4 ± 3.40% entrapment efficiency and achieved a mean flux of 2.66 ± 0.09 μg/cm²/h. In vitro release study showed prolonged drug release from the optimized formulation following Higuchi release kinetics with an R² value of 0.984. Confocal laser scanning microscopy revealed an enhanced penetration of Rhodamine B-loaded nanostructured lipid carriers to the deeper layers of the skin. The stability study confirmed that the optimized formulation was considerably stable at refrigerator temperature as compared to room temperature. Our results concluded that nanostructured lipid carriers are an efficient carrier for topical delivery of nimesulide.
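
    For reference, the coded three-factor Box-Behnken design used in studies like this one is small enough to generate directly: each pair of factors takes the four ±1 combinations while the remaining factor sits at its mid level, plus replicated center points. Mapping coded levels to the actual ranges of X1-X3 is study-specific and not reproduced here.

        from itertools import combinations

        def box_behnken(k, center_runs=3):
            """Coded Box-Behnken design for k >= 3 factors."""
            runs = []
            for i, j in combinations(range(k), 2):
                for a in (-1, 1):
                    for b in (-1, 1):
                        row = [0] * k
                        row[i], row[j] = a, b
                        runs.append(row)
            runs += [[0] * k for _ in range(center_runs)]   # center points
            return runs

        for row in box_behnken(3):     # 12 edge runs + 3 center runs
            print(row)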

  1. Increased glycosylation efficiency of recombinant proteins in Escherichia coli by auto-induction.

    PubMed

    Ding, Ning; Yang, Chunguang; Sun, Shenxia; Han, Lichi; Ruan, Yao; Guo, Longhua; Hu, Xuejun; Zhang, Jianing

    2017-03-25

    Escherichia coli cells have been considered as promising hosts for producing N-glycosylated proteins since the successful production of N-glycosylated protein in E. coli with the pgl (N-linked protein glycosylation) locus from Campylobacter jejuni. However, one hurdle in producing N-glycosylated proteins in large scale using E. coli is inefficient glycan glycosylation. In this study, we developed a strategy for the production of N-glycosylated proteins with high efficiency via an optimized auto-induction method. The 10th human fibronectin type III domain (FN3) was engineered with native glycosylation sequon DFNRSK and optimized DQNAT sequon in C-terminus with flexible linker as acceptor protein models. The resulting glycosylation efficiencies were confirmed by Western blots with anti-FLAG M1 antibody. Increased efficiency of glycosylation was obtained by changing the conventional IPTG induction to auto-induction method, which increased the glycosylation efficiencies from 60% and 75% up to 90% and 100% respectively. Moreover, in the condition of inserting the glycosylation sequon in the loop of FN3 (the acceptor sequon with local structural conformation), the glycosylation efficiency was increased from 35% to 80% by our optimized auto-induction procedures. To justify the potential for general application of the optimized auto-induction method, the reconstituted lsg locus from Haemophilus influenzae and PglB from C. jejuni were utilized, and this led to 100% glycosylation efficiency. Our studies provided quantitative evidence that the optimized auto-induction method will facilitate the large-scale production of pure exogenous N-glycosylation proteins in E. coli cells.

  2. The unlikely high efficiency of a molecular motor based on active motion

    NASA Astrophysics Data System (ADS)

    Ebeling, W.

    2015-07-01

    The efficiency of a simple model of a motor converting chemical into mechanical energy is studied analytically. The model motor shows interesting properties corresponding qualitatively to motors investigated in experiments. The efficiency increases with the load and may, for low loss, reach values near 100 percent in a narrow regime of optimal load. It is shown that the optimal load and the maximal efficiency depend by universal power laws on the dimensionless loss parameter. Stochastic effects decrease the stability of motor regimes with high efficiency and make them unlikely. Numerical studies show efficiencies below the theoretical optimum and demonstrate that special ratchet profiles may stabilize efficient regimes.

  3. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    PubMed

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating from interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
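
    The asymptotic OCBA allocation rule referred to above can be sketched directly: non-best designs receive budget in proportion to (sigma_i/delta_i)^2, where delta_i is the gap to the current best, and the best design receives sigma_b*sqrt(sum N_i^2/sigma_i^2). The example numbers below are invented, and a real implementation applies this sequentially, re-estimating means and standard deviations each round.

        import numpy as np

        def ocba_allocation(means, stds, total_budget):
            """One-shot OCBA split of total_budget (maximization)."""
            means = np.asarray(means, float)
            stds = np.asarray(stds, float)
            b = int(np.argmax(means))
            others = np.arange(means.size) != b
            ratio = np.zeros_like(means)
            delta = means[b] - means[others]            # gaps to the best
            ratio[others] = (stds[others] / delta) ** 2
            ratio[b] = stds[b] * np.sqrt(np.sum(ratio[others] ** 2
                                                / stds[others] ** 2))
            return np.rint(total_budget * ratio / ratio.sum()).astype(int), b

        alloc, best = ocba_allocation([1.0, 1.2, 2.0, 1.9],
                                      [0.3, 0.3, 0.4, 0.5], 400)
        print(best, alloc)   # budget concentrates on the best and its rivals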

  4. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization

    PubMed Central

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Chen, Chun-Hung

    2017-01-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating from interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort. PMID:29170617

  5. Compromise solution in the problem of change state control for the material body exposed to the external medium

    NASA Astrophysics Data System (ADS)

    Malafeyev, O. A.; Redinskikh, N. D.

    2018-05-01

    The problem of finding an optimal temperature control of a material body's state under external-medium parameters that are unknown in advance is formalized and studied in this paper. Problems of this type arise frequently in practice: an optimal thermal regime must be applied when thawing or freezing soil, drying building materials, heating concrete to obtain the required strength, and so on. Such problems can be analyzed with the apparatus and methods of game theory. To describe the influence of the external medium on the characteristics of different materials, we use a multi-step two-person zero-sum game. The compromise solution is taken as the optimality principle. A numerical example is given.
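
    The single-step ingredient of such a game, the value and optimal mixed strategy of a finite zero-sum game, reduces to a linear program, sketched below with an invented 3x3 payoff matrix; the paper's multi-step, compromise-solution machinery is not reproduced here.

        import numpy as np
        from scipy.optimize import linprog

        def solve_zero_sum(A):
            """Row player's optimal mixed strategy and game value:
            maximize v s.t. A^T x >= v, sum(x) = 1, x >= 0."""
            m, n = A.shape
            c = np.zeros(m + 1); c[-1] = -1.0           # minimize -v
            A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0
            A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                          A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * m + [(None, None)])
            return res.x[:m], res.x[-1]

        # Hypothetical payoffs: heating regimes (rows) vs. medium states
        A = np.array([[2.0, -1.0, 0.0],
                      [1.0,  1.0, -2.0],
                      [0.0,  0.5,  1.0]])
        x, v = solve_zero_sum(A)
        print(np.round(x, 3), round(v, 3))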

  6. Optimization and qualification of an Fc Array assay for assessments of antibodies against HIV-1/SIV.

    PubMed

    Brown, Eric P; Weiner, Joshua A; Lin, Shu; Natarajan, Harini; Normandin, Erica; Barouch, Dan H; Alter, Galit; Sarzotti-Kelsoe, Marcella; Ackerman, Margaret E

    2018-04-01

    The Fc Array is a multiplexed assay that assesses the Fc domain characteristics of antigen-specific antibodies with the potential to evaluate up to 500 antigen specificities simultaneously. Antigen-specific antibodies are captured on antigen-conjugated beads and their functional capacity is probed via an array of Fc-binding proteins including antibody subclassing reagents, Fcγ receptors, complement proteins, and lectins. Here we present the results of the optimization and formal qualification of the Fc Array, performed in compliance with Good Clinical Laboratory Practice (GCLP) guidelines. Assay conditions were optimized for performance and reproducibility, and the final version of the assay was then evaluated for specificity, accuracy, precision, limits of detection and quantitation, linearity, range and robustness.

  7. Thermodynamic Optimization of the Ag-Bi-Cu-Ni Quaternary System: Part I, Binary Subsystems

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cui, Senlin; Rao, Weifeng

    2018-07-01

    A comprehensive literature review and thermodynamic optimization of the phase diagrams and thermodynamic properties of the Ag-Bi, Ag-Cu, Ag-Ni, Bi-Cu, and Bi-Ni binary systems are presented. CALculation of PHAse Diagrams (CALPHAD)-type thermodynamic optimization was carried out to reproduce all available and reliable experimental phase equilibrium and thermodynamic data. The modified quasichemical model was used to model the liquid solution. The compound energy formalism was utilized to describe the Gibbs energies of all terminal solid solutions and intermetallic compounds. A self-consistent thermodynamic database for the Ag-Bi, Ag-Cu, Ag-Ni, Bi-Cu, and Bi-Ni binary subsystems of the Ag-Bi-Cu-Ni quaternary system was developed. This database can be used as a guide for research and development of lead-free solders.

  8. Thermodynamic Optimization of the Ag-Bi-Cu-Ni Quaternary System: Part I, Binary Subsystems

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cui, Senlin; Rao, Weifeng

    2018-05-01

    A comprehensive literature review and thermodynamic optimization of the phase diagrams and thermodynamic properties of the Ag-Bi, Ag-Cu, Ag-Ni, Bi-Cu, and Bi-Ni binary systems are presented. CALculation of PHAse Diagrams (CALPHAD)-type thermodynamic optimization was carried out to reproduce all available and reliable experimental phase equilibrium and thermodynamic data. The modified quasichemical model was used to model the liquid solution. The compound energy formalism was utilized to describe the Gibbs energies of all terminal solid solutions and intermetallic compounds. A self-consistent thermodynamic database for the Ag-Bi, Ag-Cu, Ag-Ni, Bi-Cu, and Bi-Ni binary subsystems of the Ag-Bi-Cu-Ni quaternary system was developed. This database can be used as a guide for research and development of lead-free solders.

  9. Modeling of defect-tolerant thin multi-junction solar cells for space application

    NASA Astrophysics Data System (ADS)

    Mehrotra, A.; Alemu, A.; Freundlich, A.

    2012-02-01

    Using a drift-diffusion model and experimental III-V material parameters, AM0 efficiencies of lattice-matched multijunction solar cells have been calculated and the effects of dislocations and radiation damage analyzed. Ultrathin multi-junction devices perform better in the presence of dislocations and/or harsh radiation environments than conventional thick multijunction devices. Our results show that device design optimization of Ga0.51In0.49P/GaAs multijunction devices leads to an improvement in end-of-life (EOL) efficiency from 4.8%, for the conventional thick device design, to 12.7%, for EOL-optimized thin devices. In addition, an optimized defect-free lattice-matched Ga0.51In0.49P/GaAs solar cell under 10¹⁶ cm⁻² 1 MeV equivalent electron fluence is shown to give an EOL efficiency of 12.7%, while a Ga0.51In0.49P/GaAs solar cell with 10⁸ cm⁻² dislocation density under 10¹⁶ cm⁻² electron fluence gives an EOL efficiency of 12.3%. The results suggest that by optimizing the device design, nearly the same EOL efficiencies can be obtained for highly dislocated metamorphic solar cells and defect-filtered metamorphic multijunction solar cells. The findings relax the need for the thick or graded buffers used for defect filtering in metamorphic devices. It is found that device design optimization allows highly dislocated devices to be nearly as efficient as defect-free devices for space applications.

  10. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during program development, mainly because of improper use and incomplete understanding of the cache-based memory. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization; the processor can achieve its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.
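
    A classic cache optimization of the kind alluded to above is loop tiling, shown below in Python/NumPy purely as a structural sketch; on a cached DSP such as the C6455 the same blocking would be written in C so that the working set of each block fits in the internal cache levels.

        import numpy as np

        def blocked_matmul(A, B, tile=32):
            """Cache-blocked matrix multiply: each small block is reused
            while resident instead of streaming whole rows/columns."""
            n, k = A.shape
            _, m = B.shape
            C = np.zeros((n, m))
            for i0 in range(0, n, tile):
                for j0 in range(0, m, tile):
                    for k0 in range(0, k, tile):
                        C[i0:i0+tile, j0:j0+tile] += (
                            A[i0:i0+tile, k0:k0+tile]
                            @ B[k0:k0+tile, j0:j0+tile])
            return C

        A = np.random.rand(128, 128)
        B = np.random.rand(128, 128)
        assert np.allclose(blocked_matmul(A, B), A @ B)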

  11. Simulation of the transient processes of load rejection under different accident conditions in a hydroelectric generating set

    NASA Astrophysics Data System (ADS)

    Guo, W. C.; Yang, J. D.; Chen, J. P.; Peng, Z. Y.; Zhang, Y.; Chen, C. C.

    2016-11-01

    The load rejection test is one of the essential tests carried out before a hydroelectric generating set is formally put into operation. The test aims at verifying the rationality of the design of the hydropower station's water diversion and power generation system, the reliability of the generating set's equipment, and the dynamic characteristics of the hydro-turbine governing system. Proceeding from the different accident conditions of a hydroelectric generating set, this paper presents the transient processes of load rejection corresponding to the different accident conditions and elaborates the characteristics of the different types of load rejection. A numerical simulation method for the different types of load rejection is then established, and an engineering project is calculated to verify the validity of the method. Finally, based on the numerical simulation results, the relationships among the different types of load rejection and their roles in the design of the hydropower station and the operation of the load rejection test are pointed out. The results indicate that load rejection caused by an accident within the hydroelectric generating set is realized by the emergency distributing valve and is the basis for optimizing the closing law of the guide vanes and for the calculation of regulation and guarantee. Load rejection caused by an accident outside the hydroelectric generating set is realized by the governor; it is the most efficient measure to inspect the dynamic characteristics of the hydro-turbine governing system, and the closure rate of the guide vanes set in the governor depends on the optimization result for the former type of load rejection.

  12. Orbital dependent functionals: An atom projector augmented wave method implementation

    NASA Astrophysics Data System (ADS)

    Xu, Xiao

    This thesis explores the formulation and numerical implementation of orbital-dependent exchange-correlation functionals within electronic structure calculations. These orbital-dependent exchange-correlation functionals have recently received renewed attention as a means to improve the physical representation of electron interactions within electronic structure calculations; in particular, electron self-interaction terms can be avoided. In this thesis, an orbital-dependent functional is considered in the context of Hartree-Fock (HF) theory as well as the Optimized Effective Potential (OEP) method and the approximate OEP method developed by Krieger, Li, and Iafrate, known as the KLI approximation. The Fock exchange term is used as a simple, well-defined example of an orbital-dependent functional. The Projector Augmented Wave (PAW) method developed by P. E. Blochl has proven to be accurate and efficient for electronic structure calculations with local and semi-local functionals because of its accurate evaluation of interaction integrals by controlling multipole moments. We have extended the PAW method to treat orbital-dependent functionals in Hartree-Fock theory and the Optimized Effective Potential method, particularly in the KLI approximation. In the course of this study we develop a frozen-core orbital approximation that accurately treats the core electron contributions for the above three methods. The main part of the thesis focuses on the treatment of spherical atoms. We have investigated the behavior of PAW-Hartree-Fock and PAW-KLI basis, projector, and pseudopotential functions for several elements throughout the periodic table. We have also extended the formalism to the treatment of solids in a plane wave basis and implemented the PWPAW-KLI code, which will appear in future publications.

  13. Optimization of freeform lightpipes for light-emitting-diode projectors.

    PubMed

    Fournier, Florian; Rolland, Jannick

    2008-03-01

    Standard nonimaging components used to collect and integrate light in light-emitting-diode-based projector light engines such as tapered rods and compound parabolic concentrators are compared to optimized freeform shapes in terms of transmission efficiency and spatial uniformity. We show that the simultaneous optimization of the output surface and the profile shape yields transmission efficiency within the étendue limit up to 90% and spatial uniformity higher than 95%, even for compact sizes. The optimization process involves a manual study of the trends for different shapes and the use of an optimization algorithm to further improve the performance of the freeform lightpipe.

  14. Optimization of freeform lightpipes for light-emitting-diode projectors

    NASA Astrophysics Data System (ADS)

    Fournier, Florian; Rolland, Jannick

    2008-03-01

    Standard nonimaging components used to collect and integrate light in light-emitting-diode-based projector light engines such as tapered rods and compound parabolic concentrators are compared to optimized freeform shapes in terms of transmission efficiency and spatial uniformity. We show that the simultaneous optimization of the output surface and the profile shape yields transmission efficiency within the étendue limit up to 90% and spatial uniformity higher than 95%, even for compact sizes. The optimization process involves a manual study of the trends for different shapes and the use of an optimization algorithm to further improve the performance of the freeform lightpipe.

  15. Optimization of output power and transmission efficiency of magnetically coupled resonance wireless power transfer system

    NASA Astrophysics Data System (ADS)

    Yan, Rongge; Guo, Xiaoting; Cao, Shaoqing; Zhang, Changgeng

    2018-05-01

    Magnetically coupled resonance (MCR) wireless power transfer (WPT) is a promising technology for electric energy transmission. However, if the system parameters are designed poorly, output power and transmission efficiency will be low; optimized parameter design for MCR WPT therefore has significant research value. In an MCR WPT system with a designated coil structure, the main parameters affecting output power and transmission efficiency are the distance between the coils, the resonance frequency and the load resistance. Based on the established mathematical model and a differential evolution algorithm, the variation of output power and transmission efficiency with these parameters can be simulated. The simulation results show that the output power and transmission efficiency of both the two-coil and the four-coil MCR WPT system with designated coil structure are improved, confirming the validity of the optimization method.
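
    As a sketch of this kind of optimization, the snippet below applies differential evolution to a textbook series-series two-coil link at resonance, where the efficiency is eta = (wM)²R_L / [(R2+R_L)(R1(R2+R_L)+(wM)²)]. The coil values and bounds are invented, the coupling coefficient is held fixed, and the compensation capacitors are assumed retuned to keep the link resonant at each trial frequency; the paper's two- and four-coil models are richer than this.

        import numpy as np
        from scipy.optimize import differential_evolution

        R1, R2 = 0.5, 0.5          # coil ESRs, ohm (assumed)
        L1 = L2 = 50e-6            # coil inductances, H (assumed)
        k = 0.1                    # coupling coefficient (fixed, assumed)

        def efficiency(x):
            f, RL = x
            w = 2 * np.pi * f
            wm2 = (w * k * np.sqrt(L1 * L2)) ** 2      # (w*M)^2
            return wm2 * RL / ((R2 + RL) * (R1 * (R2 + RL) + wm2))

        res = differential_evolution(lambda x: -efficiency(x),
                                     bounds=[(50e3, 500e3), (0.1, 50.0)],
                                     seed=0, tol=1e-10)
        f_opt, RL_opt = res.x
        print(f"eta = {-res.fun:.3f} at f = {f_opt/1e3:.0f} kHz, "
              f"RL = {RL_opt:.2f} ohm")

    In this idealized model the efficiency grows monotonically with frequency, so the optimizer pins f at its upper bound and the real trade-off is the load resistance; frequency-dependent losses in a full model change that picture.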

  16. Efficiency Management in Spaceflight Systems

    NASA Technical Reports Server (NTRS)

    Murphy, Karen

    2016-01-01

    Efficiency in spaceflight is often approached as “faster, better, cheaper – pick two”. The high levels of performance and reliability required for each mission suggest that planners can only control for two of the three. True efficiency comes by optimizing a system across all three parameters. The functional processes of spaceflight become technical requirements on three operational groups during mission planning: payload, vehicle, and launch operations. Given the interrelationships among the functions performed by the operational groups, optimizing function resources from one operational group to the others affects the efficiency of those groups and therefore the mission overall. This paper helps outline this framework and creates a context in which to understand the effects of resource trades on the overall system, improving the efficiency of the operational groups and the mission as a whole. This allows insight into and optimization of the controlling factors earlier in the mission planning stage.

  17. Optimal Control of Induction Machines to Minimize Transient Energy Losses

    NASA Astrophysics Data System (ADS)

    Plathottam, Siby Jose

    Induction machines are electromechanical energy conversion devices comprised of a stator and a rotor. Torque is generated by the interaction between the rotating magnetic field from the stator and the current induced in the rotor conductors. Their speed and torque output can be precisely controlled by manipulating the magnitude, frequency, and phase of the three input sinusoidal voltage waveforms. Their ruggedness, low cost, and high efficiency have made them a ubiquitous component of nearly every industrial application, so even a small improvement in their energy efficiency tends to yield large electrical energy savings over the lifetime of the machine. Increasing energy efficiency (reducing energy losses) in induction machines is thus a constrained optimization problem that has attracted attention from researchers. The energy conversion efficiency of induction machines depends on the speed-torque operating point, on the input voltage waveform, and on whether the machine is in the transient or steady state. Maximizing energy efficiency during steady state is a static optimization problem that has been extensively studied, with commercial solutions available. Improving energy efficiency during transients, on the other hand, is a dynamic optimization problem that is sparsely studied, and this dissertation focuses exclusively on it. The transient energy loss minimization problem is treated as an optimal control problem consisting of a dynamic model of the machine and a cost functional. The rotor-field-oriented, current-fed model of the induction machine is selected as the dynamic model; the rotor speed and rotor d-axis flux are the state variables, and the stator d- and q-axis currents are the control inputs. A cost functional is proposed that assigns a cost both to the energy losses in the induction machine and to deviations from desired speed-torque-magnetic flux setpoints. Using Pontryagin's minimum principle, a set of necessary conditions that must be satisfied by the optimal control trajectories is derived, in the form of a two-point boundary value problem that can be solved numerically. A conjugate gradient method modified with the Hestenes-Stiefel formula was used to obtain the numerical solution of both the control and state trajectories. Using the distinctive shape of the numerical trajectories as inspiration, analytical expressions were derived for the state and control trajectories, and it was shown that the trajectory can be fully described by solving a one-dimensional optimization problem. The sensitivity of both the optimal trajectory and the optimal energy efficiency to different induction machine parameters was analyzed. A non-iterative solution that can use feedback for generating optimal control trajectories in real time was also explored: an artificial neural network was trained on the numerical solutions and made to emulate the optimal control trajectories with a high degree of accuracy. A neural network with supervisory logic was then implemented in a real-time simulation to control a finite element method model of the induction machine. The results were compared with three other control regimes, and the optimal control system was found to have the highest energy efficiency for the same drive cycle.
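
    The Hestenes-Stiefel-modified conjugate gradient update named above is generic enough to sketch in isolation: beta = g_new^T(g_new - g) / d^T(g_new - g), here combined with a simple backtracking line search and a steepest-descent restart safeguard. This is a minimal stand-in on a standard test function, not the dissertation's boundary-value-problem solver.

        import numpy as np

        def cg_hestenes_stiefel(f, grad, x, iters=2000):
            g = grad(x)
            d = -g
            for _ in range(iters):
                if g @ d >= 0:                # safeguard: restart with -g
                    d = -g
                t, fx, gTd = 1.0, f(x), g @ d
                while f(x + t * d) > fx + 1e-4 * t * gTd and t > 1e-12:
                    t *= 0.5                  # backtracking (Armijo) search
                x_new = x + t * d
                g_new = grad(x_new)
                y = g_new - g
                beta = max(0.0, (g_new @ y) / (d @ y + 1e-30))  # HS formula
                d = -g_new + beta * d
                x, g = x_new, g_new
                if np.linalg.norm(g) < 1e-8:
                    break
            return x

        # Demo on the Rosenbrock function; minimum at (1, 1)
        f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        grad = lambda x: np.array(
            [-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
             200 * (x[1] - x[0]**2)])
        print(cg_hestenes_stiefel(f, grad, np.array([-1.2, 1.0])))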

  18. Political Response to New Skills: The Conforming and the Deviant

    ERIC Educational Resources Information Center

    Jaros, Dean

    1970-01-01

    Anomie theory holds that the provision of new resources and skills to an inarticulate and non-participatory population can produce two distinct responses: increased conformity or increased and more efficient use of deviant methods. Data on vocational education students suggest that there is such a differential reaction to formal training in the…

  19. Connecting Schools in Ways that Strengthen Learning Supports. A Center Policy Brief

    ERIC Educational Resources Information Center

    Center for Mental Health in Schools at UCLA, 2011

    2011-01-01

    Given dwindling budgets, collaborations that can enhance effective and efficient use of resources increase in importance. This is particularly important with respect to efforts at schools to provide student and learning supports. Schools that formally connect to work together can be more effective, realize economies of scale, and enhance the way…

  20. Assessment Approaches in Massive Open Online Courses: Possibilities, Challenges and Future Directions

    ERIC Educational Resources Information Center

    Xiong, Yao; Suen, Hoi K.

    2018-01-01

    The development of massive open online courses (MOOCs) has launched an era of large-scale interactive participation in education. While massive open enrolment and the advances of learning technology are creating exciting potentials for lifelong learning in formal and informal ways, the implementation of efficient and effective assessment is still…
