Sample records for energy minimization framework

  1. Transformation of general binary MRF minimization to the first-order case.

    PubMed

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms for optimizing energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
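    The reduction idea can be sanity-checked by exhaustive enumeration. Below is a minimal Python sketch of one standard reduction (a single cubic term with a negative coefficient converted to a pairwise energy with one auxiliary variable); the coefficient is arbitrary, and the paper's construction covers the general higher-order case, which this sketch does not.

```python
# Brute-force check of a standard higher-order-to-pairwise reduction for one
# cubic pseudo-Boolean term with a NEGATIVE coefficient:
#   theta * x1*x2*x3 == min over w in {0,1} of theta * w * (x1 + x2 + x3 - 2)
# Ishikawa's paper generalizes such reductions to arbitrary higher-order terms;
# this sketch only illustrates the idea on the easy (negative-coefficient) case.
from itertools import product

theta = -3.0  # hypothetical negative clique potential

def cubic(x1, x2, x3):
    return theta * x1 * x2 * x3

def reduced(x1, x2, x3):
    # pairwise energy in the original variables plus one auxiliary variable w
    return min(theta * w * (x1 + x2 + x3 - 2) for w in (0, 1))

for x in product((0, 1), repeat=3):
    assert abs(cubic(*x) - reduced(*x)) < 1e-12, x
print("reduction reproduces the cubic term on all 8 binary assignments")
```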

  2. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.
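    A small numerical illustration of the constrained waveform optimization is sketched below, assuming a simple leaky-integrator membrane, an ohmic energy proxy, and hypothetical parameter values; the paper itself derives the optimal waveforms analytically via optimal control.

```python
# Numerical illustration of energy-minimal stimulation under a membrane-threshold
# constraint, in the spirit of the optimal-control formulation described above.
# The membrane model (leaky integrator), parameters, and discretization are all
# hypothetical stand-ins.
import numpy as np
from scipy.optimize import minimize

tau, R, v_th = 5e-3, 1.0, 1.0          # membrane time constant [s], gain, threshold
T, n = 2e-3, 40                        # pulse duration [s], number of time steps
dt = T / n

def v_end(i):
    """Membrane potential at the end of the pulse for current samples i[k]."""
    v = 0.0
    for ik in i:
        v += dt * (-v + R * ik) / tau  # forward-Euler leaky integrator
    return v

energy = lambda i: np.sum(i**2) * dt    # ohmic energy proxy
charge = lambda i: np.sum(np.abs(i)) * dt

i0 = np.full(n, 1.0)
res = minimize(energy, i0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda i: v_end(i) - v_th}])
i_opt = res.x
print(f"energy {energy(i_opt):.4e}, charge {charge(i_opt):.4e}, v(T) {v_end(i_opt):.3f}")
# The optimum is an exponentially rising current, consistent with the known
# energy-minimal waveform for a first-order membrane model.
```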

  3. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  4. Life cycle optimization model for integrated cogeneration and energy systems applications in buildings

    NASA Astrophysics Data System (ADS)

    Osman, Ayat E.

    Energy use in commercial buildings constitutes a major proportion of the energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of what is the most beneficial and cost-effective energy source(s) that can be used to meet the energy demands of a building, optimization techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Due to the significant environmental impacts that can result from meeting the energy demands in buildings, building design should incorporate environmental criteria in the decision-making process. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems in order to meet the electrical, heating, and cooling demands while considering the potential life cycle environmental impacts of meeting those demands as well as the economic implications. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions that are used in the formulation of the problems include: (1) Minimizing life cycle primary energy consumption, (2) Minimizing global warming potential, (3) Minimizing tropospheric ozone precursor potential, (4) Minimizing acidification potential, (5) Minimizing NOx, SO2, and CO2, and (6) Minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long-term planning and operational analysis in buildings by analyzing the hourly energy use of a building during a day and (b) design and quick analysis of building operation based on periodic analysis of the energy use of a building in a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emission or primary energy usage, or inversely the minimum environmental indicator and primary energy usage value that can be achieved and the cost required to achieve that value.
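    The dispatch decision underlying such a model can be illustrated with a toy linear program. The sketch below is a continuous relaxation with invented efficiencies and emission factors (the study's model is a mixed-integer program driven by hourly data and full LCA factors): meet hourly electricity and heat demands from grid power, a cogeneration unit, and a boiler while minimizing CO2.

```python
# Toy continuous relaxation of a building dispatch problem: meet hourly
# electricity and heat demands from grid power, a cogeneration (CHP) unit, and a
# boiler while minimizing a single life-cycle impact (here CO2). All numbers are
# made up for illustration.
import numpy as np
from scipy.optimize import linprog

elec = np.array([40.0, 55.0, 70.0, 50.0])   # kWh_e demand for 4 sample hours
heat = np.array([30.0, 35.0, 60.0, 45.0])   # kWh_th demand
eta_e, eta_h, eta_b = 0.35, 0.45, 0.90      # CHP electric/thermal, boiler efficiencies
co2 = {"grid": 0.50, "fuel": 0.20}          # kg CO2 per kWh (illustrative factors)

H = len(elec)
# variables per hour: [grid_kWh, chp_fuel_kWh, boiler_fuel_kWh]
c = np.tile([co2["grid"], co2["fuel"], co2["fuel"]], H)      # objective: total CO2
A_eq, b_eq = [], []
for t in range(H):
    row_e = np.zeros(3 * H); row_h = np.zeros(3 * H)
    row_e[3*t:3*t+3] = [1.0, eta_e, 0.0]     # grid + CHP electricity = elec demand
    row_h[3*t:3*t+3] = [0.0, eta_h, eta_b]   # CHP heat + boiler heat = heat demand
    A_eq += [row_e, row_h]; b_eq += [elec[t], heat[t]]

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("total CO2 [kg]:", round(res.fun, 1))
print("hour-1 dispatch (grid, CHP fuel, boiler fuel):", np.round(res.x[:3], 1))
```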

  5. A System of Systems (SoS) Approach to Sustainable Energy Planning

    NASA Astrophysics Data System (ADS)

    Madani, Kaveh; Hadian, Saeed

    2015-04-01

    The general policy of mandating fossil fuel replacement with "green" energies may not be as effective and environmentally friendly as perceived, due to the secondary impacts of renewable energies on different natural resources. An integrated systems analysis framework is essential to developing sustainable energy supply systems with minimal unintended impacts on valuable natural resources such as water, climate, and ecosystems. This presentation discusses how a system of systems (SoS) framework can be developed to quantitatively evaluate the desirability of different energy supply alternatives with respect to different sustainability criteria under uncertainty. Relative Aggregate Footprint (RAF) scores of a range of renewable and nonrenewable energy alternatives are determined using their performance values under four sustainability criteria, namely carbon footprint, water footprint, land footprint, and cost of energy production. Our results suggest that despite their lower emissions, some renewable energy sources are less promising than non-renewable energy sources from an SoS perspective that considers the trade-offs between the carbon footprint of energies and their effects on water, ecosystem, and economic resources. A new framework based on Modern Portfolio Theory (MPT) is also proposed for analyzing the overall sustainability of different energy mixes at different risk-of-return levels with respect to the trade-offs involved. It is discussed how the proposed finance-based sustainability evaluation method can help policy makers maximize the energy portfolio's expected sustainability for a given amount of portfolio risk, or equivalently minimize risk for a given level of expected sustainability, by revising the energy mix.
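    The portfolio step can be illustrated in a few lines of Python: minimize a "sustainability risk" (portfolio variance) subject to a required expected sustainability score and non-negative shares summing to one. The technology scores and covariance below are invented placeholders, not the RAF values of the study.

```python
# Sketch of a Modern-Portfolio-Theory energy-mix selection: choose shares that
# minimize portfolio variance for a required expected sustainability score.
# Scores and uncertainties are invented placeholders.
import numpy as np
from scipy.optimize import minimize

techs = ["wind", "solar_pv", "hydro", "natural_gas", "coal"]
score = np.array([0.80, 0.70, 0.75, 0.55, 0.30])    # expected sustainability (0..1)
sigma = np.diag([0.15, 0.20, 0.10, 0.08, 0.05])**2  # uncertainty as diagonal covariance
target = 0.65                                        # required expected portfolio score

risk = lambda w: w @ sigma @ w
cons = [{"type": "eq",   "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: w @ score - target}]
w0 = np.full(len(techs), 1.0 / len(techs))
res = minimize(risk, w0, method="SLSQP", bounds=[(0, 1)] * len(techs), constraints=cons)

for name, w in zip(techs, res.x):
    print(f"{name:12s} {w:5.2f}")
print("expected score:", round(res.x @ score, 3), " risk:", round(risk(res.x), 5))
```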

  6. Modeling of electrical and mesoscopic circuits at quantum nanoscale from heat momentum operator

    NASA Astrophysics Data System (ADS)

    El-Nabulsi, Rami Ahmad

    2018-04-01

    We develop a new method to study electrical circuits at quantum nanoscale by introducing a heat momentum operator which reproduces quantum effects similar to those obtained in Suykens's nonlocal-in-time kinetic energy approach for the case of reversible motion. The series expansion of the heat momentum operator is similar to the momentum operator obtained in the framework of minimal length phenomenologies characterized by the deformation of Heisenberg algebra. The quantization of both LC and mesoscopic circuits revealed a number of motivating features like the emergence of a generalized uncertainty relation and a minimal charge similar to those obtained in the framework of minimal length theories. Additional features were obtained and discussed accordingly.

  7. Universal Darwinism As a Process of Bayesian Inference.

    PubMed

    Campbell, John O

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an "experiment" in the external world environment, and the results of that "experiment" or the "surprise" entailed by predicted and actual outcomes of the "experiment." Minimization of free energy implies that the implicit measure of "surprise" experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
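    The claimed equivalence between Bayesian updating and variational free energy minimization can be checked numerically on a two-state toy model: the categorical distribution q(s) that minimizes F(q) = E_q[log q(s) - log p(o, s)] coincides with the exact posterior, and the minimum equals -log p(o). The model probabilities below are arbitrary.

```python
# Numerical check: minimizing variational free energy recovers the Bayesian posterior.
import numpy as np
from scipy.optimize import minimize

prior = np.array([0.7, 0.3])                    # p(s)
lik = np.array([[0.9, 0.1],                     # p(o | s): rows s, columns o
                [0.2, 0.8]])
o = 1                                           # observed outcome
joint = prior * lik[:, o]                       # p(o, s)
posterior = joint / joint.sum()                 # exact Bayesian update

def free_energy(theta):
    q = np.exp(theta); q /= q.sum()             # softmax parameterization of q(s)
    return np.sum(q * (np.log(q) - np.log(joint)))

res = minimize(free_energy, np.zeros(2))
q_opt = np.exp(res.x); q_opt /= q_opt.sum()
print("posterior        :", np.round(posterior, 4))
print("free-energy q(s) :", np.round(q_opt, 4))
print("min F = -log p(o):", round(res.fun, 4), round(-np.log(joint.sum()), 4))
```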

  8. Universal Darwinism As a Process of Bayesian Inference

    PubMed Central

    Campbell, John O.

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an “experiment” in the external world environment, and the results of that “experiment” or the “surprise” entailed by predicted and actual outcomes of the “experiment.” Minimization of free energy implies that the implicit measure of “surprise” experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature. PMID:27375438

  9. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
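    The basic energy trade-off behind any offloading decision can be sketched in a few lines: offload a component when device-side transmission-plus-idle energy falls below local execution energy. The power and bandwidth figures are hypothetical, and the sketch does not model the paper's actual contribution (avoiding runtime partitioning overhead via cloud-side services).

```python
# Back-of-the-envelope offloading decision: offload when transmission + idle
# energy on the device is less than the energy of executing locally.
# All power, frequency, and bandwidth figures are hypothetical.

def local_energy_j(cycles, f_local_hz=1.0e9, p_cpu_w=0.9):
    return p_cpu_w * cycles / f_local_hz           # time * power on the device CPU

def offload_energy_j(data_bits, cycles, bw_bps=5.0e6, p_tx_w=1.3,
                     f_cloud_hz=8.0e9, p_idle_w=0.3):
    t_tx = data_bits / bw_bps                      # upload the component's state
    t_cloud = cycles / f_cloud_hz                  # device idles while cloud computes
    return p_tx_w * t_tx + p_idle_w * t_cloud

def should_offload(data_bits, cycles):
    return offload_energy_j(data_bits, cycles) < local_energy_j(cycles)

for data_mb, gcycles in [(0.5, 2.0), (8.0, 2.0), (0.5, 0.05)]:
    d = data_mb * 8e6
    c = gcycles * 1e9
    print(f"data={data_mb:4.1f} MB, cycles={gcycles:4.2f} G ->",
          "offload" if should_offload(d, c) else "run locally")
```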

  10. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  11. Intelligent and robust optimization frameworks for smart grids

    NASA Astrophysics Data System (ADS)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many of the contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Under the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met by giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operations. The proposed intelligent system framework optimizes the smart grid power generation for maximum economical and ecological benefits under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrate various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economical and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.
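    A greedy illustration of the renewable-priority dispatch described above is sketched below with invented demand and wind-availability profiles; the dissertation's controllers are model-based and closed-loop, which this open-loop toy does not capture.

```python
# Renewable-priority dispatch sketch: wind output is used up to its availability,
# and conventional units cover only the residual, minimizing non-renewable
# generation hour by hour. All series and capacities are invented.
demand_mw = [620, 580, 640, 700, 760, 690]
wind_avail_mw = [150, 300, 420, 260, 90, 180]
conv_capacity_mw = 650

schedule = []
for d, w in zip(demand_mw, wind_avail_mw):
    renewable = min(w, d)                    # priority dispatch of wind
    conventional = min(d - renewable, conv_capacity_mw)
    unmet = d - renewable - conventional     # nonzero only if capacity is short
    schedule.append((renewable, conventional, unmet))

for hour, (r, c, u) in enumerate(schedule):
    print(f"hour {hour}: wind {r:3.0f} MW, conventional {c:3.0f} MW, unmet {u:3.0f} MW")
print("non-renewable share:",
      round(sum(c for _, c, _ in schedule) / sum(demand_mw), 3))
```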

  12. Development of Chemical Process Design and Control for Sustainability

    EPA Science Inventory

    This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy....

  13. Energy minimization of mobile video devices with a hardware H.264/AVC encoder based on energy-rate-distortion optimization

    NASA Astrophysics Data System (ADS)

    Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min

    2014-09-01

    In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolong its lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for the hardware codec based on a gate-level simulation framework to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed with the GOP (group of pictures) size and QP (quantization parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy consumption savings while satisfying the rate and distortion constraints.
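    The run-time control idea, choosing the GOP size and QP that minimize energy subject to rate and distortion constraints, can be sketched as a grid search. The closed-form E-R-D models and constraint values below are stand-ins, not the paper's gate-level-simulation-based models.

```python
# Illustrative E-R-D control loop: pick the GOP size and QP that minimize encoder
# energy subject to bitrate and distortion constraints. The model functions and
# constraint values are hypothetical stand-ins.
import math

def energy_mj(gop, qp):      # longer GOPs / coarser QP -> less work per frame
    return 120.0 / math.sqrt(gop) + 900.0 / qp

def rate_kbps(gop, qp):
    return 6000.0 / qp + 300.0 / gop

def distortion_mse(gop, qp):
    return 0.9 * qp + 0.05 * gop

R_MAX, D_MAX = 300.0, 38.0   # application constraints (hypothetical)

best = None
for gop in (1, 2, 4, 8, 16, 32):
    for qp in range(20, 45):
        if rate_kbps(gop, qp) <= R_MAX and distortion_mse(gop, qp) <= D_MAX:
            e = energy_mj(gop, qp)
            if best is None or e < best[0]:
                best = (e, gop, qp)

e, gop, qp = best
print(f"GOP={gop}, QP={qp}: energy={e:.1f} mJ, "
      f"rate={rate_kbps(gop, qp):.0f} kbps, MSE={distortion_mse(gop, qp):.1f}")
```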

  14. Symmetron and de Sitter attractor in a teleparallel model of cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadjadi, H. Mohseni, E-mail: mohsenisad@ut.ac.ir

    In the teleparallel framework of cosmology, a quintessence with non-minimal couplings to the scalar torsion and a boundary term is considered. A conformal coupling to the matter density is also taken into account. It is shown that the model can describe the onset of cosmic acceleration after a matter-dominated era, in which dark energy is negligible, via Z₂ symmetry breaking. While the conformal coupling holds the Universe in a state with zero dark energy density in the early epoch, the non-minimal couplings lead the Universe to a stable state with de Sitter expansion at late time.

  15. POWERING AIRPOWER: IS THE AIR FORCE'S ENERGY SECURE?

    DTIC Science & Technology

    2016-02-01

    needs. More on-site renewable energy generation increases AF readiness in crisis times by minimizing the AF’s dependency on fossil fuels. Financing...reducing the need for traditional fossil fuels, and the high investment cost of onsite renewable energy sources is still a serious roadblock in this...help installations better plan holistically. This research will take the form of problem/solution framework. With any complex problem, rarely does a

  16. Metal phosphonate coordination networks and frameworks as precursors of electrocatalysts for the hydrogen and oxygen evolution reactions

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; El-Refaei, Sayed M.; Russo, Patrícia A.; Pinna, Nicola

    2018-05-01

    The hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) play key roles in the conversion of energy derived from renewable energy sources into chemical energy. Efficient, robust, and inexpensive electrocatalysts are necessary to drive these reactions at high rates and low overpotentials and to minimize energy losses. Recently, electrocatalysts derived from hybrid metal phosphonate compounds have shown high activity for the HER or OER. We review here the utilization of metal phosphonate coordination networks and metal-organic frameworks as precursors/templates for transition-metal phosphides, phosphates, or oxyhydroxides generated in situ in alkaline solutions, and their electrocatalytic performance in HER or OER.

  17. Conformational locking by design: relating strain energy with luminescence and stability in rigid metal-organic frameworks.

    PubMed

    Shustova, Natalia B; Cozzolino, Anthony F; Dincă, Mircea

    2012-12-05

    Minimization of the torsional barrier for phenyl ring flipping in a metal-organic framework (MOF) based on the new ethynyl-extended octacarboxylate ligand H(8)TDPEPE leads to a fluorescent material with a near-dark state. Immobilization of the ligand in the rigid structure also unexpectedly causes significant strain. We used DFT calculations to estimate the ligand strain energies in our and all other topologically related materials and correlated these with empirical structural descriptors to derive general rules for trapping molecules in high-energy conformations within MOFs. These studies portend possible applications of MOFs for studying fundamental concepts related to conformational locking and its effects on molecular reactivity and chromophore photophysics.

  18. Energy, Society, and Education, with Emphasis on Educational Technology Policy for K-12

    NASA Astrophysics Data System (ADS)

    Chedid, Loutfallah Georges

    2005-03-01

    This paper begins by examining the profound impact of energy usage on our lives, and on every major sector of the economy. Then, the anticipated US energy needs by the year 2025 are presented based on the Department of Energy's projections. The paper considers the much-touted National Energy Policy Report, and identifies a major flaw where the policy report neglects education as a contributor to solving future energy problems. The inextricable interaction between energy solutions and education is described, with emphasis on education policy as a potential vehicle for developing economically and commercially sustainable energy systems that have a minimal impact on the environment. With that said, an earnest argument is made as to the need to educate science, technology, engineering, and mathematics (STEM) proficient individuals for the energy technology development workforce, starting with the K-12 level. A framework for the aforementioned STEM education policies is presented that includes a sustained national awareness campaign and addresses both teacher salary and teacher quality issues. Moreover, the framework suggests a John Dewey-style "learning-by-doing" shift in pedagogy. Finally, the framework presents specific changes to the current national standards that would be valuable to the 21st century student.

  19. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
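    The core calculation can be illustrated on a minimal single-phase system: minimize the dimensionless Gibbs energy of an ideal mixture subject to elemental mass balance. The species set and standard chemical potentials below are placeholders, not values from Reaktoro or GEMS3K.

```python
# Bare-bones Gibbs energy minimization for a single ideal-gas phase:
#   minimize G(n)/RT = sum_i n_i * (mu0_i/RT + ln(n_i / n_tot))
# subject to elemental mass balance. Species and potentials are placeholders.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
mu0_RT = np.array([0.0, 0.0, -30.0])      # placeholder standard potentials / RT
# element-composition matrix (rows: H, O)
A = np.array([[2, 0, 2],
              [0, 2, 1]], dtype=float)
b = A @ np.array([1.0, 0.6, 0.0])         # element totals from an initial mixture

def gibbs(n):
    n = np.clip(n, 1e-12, None)           # keep logarithms defined
    return float(np.sum(n * (mu0_RT + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.array([0.5, 0.3, 0.2]), method="SLSQP",
               bounds=[(1e-12, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
for s, n in zip(species, res.x):
    print(f"{s:4s} {n:.4f} mol")
```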

  20. Systematic exploration of efficient strategies to manage solid waste in U.S. municipalities: perspectives from the solid waste optimization life-cycle framework (SWOLF).

    PubMed

    Levis, James W; Barlaz, Morton A; Decarolis, Joseph F; Ranjithan, S Ranji

    2014-04-01

    Solid waste management (SWM) systems must proactively adapt to changing policy requirements, waste composition, and an evolving energy system to sustainably manage future solid waste. This study represents the first application of an optimizable dynamic life-cycle assessment framework capable of considering these future changes. The framework was used to draw insights by analyzing the SWM system of a hypothetical suburban U.S. city of 100 000 people over 30 years while considering changes to population, waste generation, and energy mix and costs. The SWM system included 3 waste generation sectors, 30 types of waste materials, and 9 processes for waste separation, treatment, and disposal. A business-as-usual scenario (BAU) was compared to three optimization scenarios that (1) minimized cost (Min Cost), (2) maximized diversion (Max Diversion), and (3) minimized greenhouse gas (GHG) emissions (Min GHG) from the system. The Min Cost scenario saved $7.2 million (12%) and reduced GHG emissions (3%) relative to the BAU scenario. Compared to the Max Diversion scenario, the Min GHG scenario cost approximately 27% less and more than doubled the net reduction in GHG emissions. The results illustrate how the timed-deployment of technologies in response to changes in waste composition and the energy system results in more efficient SWM system performance compared to what is possible from static analyses.
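    A static, single-material analogue of the scenario comparison can be written as a small linear program that allocates waste among landfill, recycling, and waste-to-energy, once for minimum cost and once for minimum GHG. All unit costs, emission factors, and capacities below are invented; SWOLF itself is dynamic, multi-material, and life-cycle based.

```python
# Tiny scenario comparison: allocate one year of waste among three pathways,
# minimizing cost in one run and GHG in another. All coefficients are invented.
import numpy as np
from scipy.optimize import linprog

waste_tonnes = 50_000.0
pathways = ["landfill", "recycling", "wte"]
cost = np.array([45.0, 70.0, 90.0])     # $ / tonne
ghg = np.array([0.45, -0.30, 0.10])     # t CO2e / tonne (recycling credits offsets)
bounds = [(0, None), (0, 20_000), (0, 25_000)]   # pathway capacities

A_eq = np.ones((1, 3)); b_eq = [waste_tonnes]    # all waste must be managed

for label, objective in (("min cost", cost), ("min GHG ", ghg)):
    res = linprog(objective, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    alloc = dict(zip(pathways, np.round(res.x)))
    print(f"{label}: {alloc}  cost=${res.x @ cost:,.0f}  GHG={res.x @ ghg:,.0f} tCO2e")
```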

  1. Ultrahigh Ionic Conduction in Water-Stable Close-Packed Metal-Carbonate Frameworks.

    PubMed

    Manna, Biplab; Desai, Aamod V; Illathvalappil, Rajith; Gupta, Kriti; Sen, Arunabha; Kurungot, Sreekumar; Ghosh, Sujit K

    2017-08-21

    Utilization of the robust metal-carbonate backbone in a series of water-stable, anionic frameworks has been harnessed for the function of highly efficient solid-state ion-conduction. The compact organization of hydrophilic guest ions facilitates water-assisted ion-conduction in all the compounds. The dense packing of the compounds imparts high ion-conducting ability and minimizes the possibility of fuel crossover, making this approach promising for design and development of compounds as potential components of energy devices. This work presents the first report of evaluating ion-conduction in a purely metal-carbonate framework, which exhibits high ion-conductivity on the order of 10⁻² S cm⁻¹ along with very low activation energy, which is comparable to highly conducting well-known crystalline coordination polymers or commercialized organic polymers like Nafion.

  2. Dark energy cosmology with tachyon field in teleparallel gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motavalli, H., E-mail: Motavalli@Tabrizu.ac.ir; Akbarieh, A. Rezaei; Nasiry, M.

    2016-07-15

    We construct a tachyon teleparallel dark energy model for a homogeneous and isotropic flat universe in which a tachyon as a non-canonical scalar field is non-minimally coupled to gravity in the framework of teleparallel gravity. The explicit form of potential and coupling functions are obtained under the assumption that the Lagrangian admits the Noether symmetry approach. The dynamical behavior of the basic cosmological observables is compared to recent observational data, which implies that the tachyon field may serve as a candidate for dark energy.

  3. Fermion hierarchy from sfermion anarchy

    DOE PAGES

    Altmannshofer, Wolfgang; Frugiuele, Claudia; Harnik, Roni

    2014-12-31

    We present a framework to generate the hierarchical flavor structure of Standard Model quarks and leptons from loops of superpartners. The simplest model consists of the minimal supersymmetric standard model with tree level Yukawa couplings for the third generation only and anarchic squark and slepton mass matrices. Agreement with constraints from low energy flavor observables, in particular Kaon mixing, is obtained for supersymmetric particles with masses at the PeV scale or above. In our framework both the second and the first generation fermion masses are generated at 1-loop. Despite this, a novel mechanism generates a hierarchy among the first and second generations without imposing a symmetry or small parameters. A second-to-first generation mass ratio of order 100 is typical. The minimal supersymmetric standard model thus includes all the necessary ingredients to realize a fermion spectrum that is qualitatively similar to observation, with hierarchical masses and mixing. The minimal framework produces only a few quantitative discrepancies with observation, most notably the muon mass is too low. Furthermore, we discuss simple modifications which resolve this and also investigate the compatibility of our model with gauge and Yukawa coupling unification.

  4. An efficient energy response model for liquid scintillator detectors

    NASA Astrophysics Data System (ADS)

    Lebanowski, Logan; Wan, Linyan; Ji, Xiangpan; Wang, Zhe; Chen, Shaomin

    2018-05-01

    Liquid scintillator detectors are playing an increasingly important role in low-energy neutrino experiments. In this article, we describe a generic energy response model of liquid scintillator detectors that provides energy estimations of sub-percent accuracy. This model fits a minimal set of physically-motivated parameters that capture the essential characteristics of scintillator response and that can naturally account for changes in scintillator over time, helping to avoid associated biases or systematic uncertainties. The model employs a one-step calculation and look-up tables, yielding an immediate estimation of energy and an efficient framework for quantifying systematic uncertainties and correlations.

  5. Cross-platform validation and analysis environment for particle physics

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  6. Optimal Management of DoD Lands for Military Training, Ecosystem Services, and Renewable Energy Generation: Framework and Data Requirements

    DTIC Science & Technology

    2013-01-01

    by at least 25% by 2025. To achieve this ambitious goal, DoD is considering a diverse energy portfolio that includes wind, solar, geothermal...generated power (bioenergy). Wind, solar, and bioenergy sources each have significant land-management implications, so this third land-use requirement...production, the adverse impacts of conflicting requirements can be minimized. The regional differences in wind, solar, and bioenergy potential

  7. Free-energy functional of the Debye-Hückel model of simple fluids

    NASA Astrophysics Data System (ADS)

    Piron, R.; Blenski, T.

    2016-12-01

    The Debye-Hückel approximation to the free energy of a simple fluid is written as a functional of the pair correlation function. This functional can be seen as the Debye-Hückel equivalent to the functional derived in the hypernetted chain framework by Morita and Hiroike, as well as by Lado. It allows one to obtain the Debye-Hückel integral equation through a minimization with respect to the pair correlation function, leads to the correct form of the internal energy, and fulfills the virial theorem.

  8. On a Minimum Problem in Smectic Elastomers

    NASA Astrophysics Data System (ADS)

    Buonsanti, Michele; Giovine, Pasquale

    2008-07-01

    Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials in which the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We develop the energy functional as the sum of two terms, the first nematic and the second accounting for the tilting phenomenon; then, working within the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.

  9. A minimally invasive blood-extraction system: elastic self-recovery actuator integrated with an ultrahigh-aspect-ratio microneedle.

    PubMed

    Li, Cheng Guo; Lee, Kwang; Lee, Chang Yeol; Dangol, Manita; Jung, Hyungil

    2012-08-28

    A minimally invasive blood-extraction system is fabricated by the integration of an elastic self-recovery actuator and an ultrahigh-aspect-ratio microneedle. The simple elastic self-recovery actuator converts finger force to elastic energy to provide power for blood extraction and transport without requiring an external source of power. This device has potential utility in the biomedical field within the framework of complete micro-electromechanical systems.

  10. An Optimization Framework for Dynamic Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis

    A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.

  11. Inference with minimal Gibbs free energy in information field theory.

    PubMed

    Ensslin, Torsten A; Weig, Cornelius

    2010-11-01

    Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.
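    The "minimal Gibbs free energy" construction can be demonstrated on a one-dimensional toy problem: fit a Gaussian q = N(m, sigma^2) to a Poisson/log-normal posterior by minimizing G = <-log p(s, d)>_q - H[q]. The data value and prior below are arbitrary choices for illustration.

```python
# 1-D toy version of the minimal-Gibbs-free-energy idea: approximate a
# non-Gaussian posterior (log-normal signal s with Poisson data d) by the
# Gaussian N(m, sigma^2) that minimizes G(m, sigma) = <-log p(s,d)>_q - H[q].
import numpy as np
from scipy.optimize import minimize

d = 4                                           # observed Poisson counts
neg_log_joint = lambda s: np.exp(s) - d * s + 0.5 * s**2   # -log p(s,d) up to const.

# Gauss-Hermite (probabilists') nodes for expectations under N(m, sigma^2)
x, w = np.polynomial.hermite_e.hermegauss(40)
w = w / w.sum()

def gibbs_free_energy(params):
    m, log_sigma = params
    sigma = np.exp(log_sigma)
    expect = np.sum(w * neg_log_joint(m + sigma * x))     # <-log p>_q
    entropy = 0.5 * np.log(2 * np.pi * np.e * sigma**2)   # H[q] for a Gaussian
    return expect - entropy

res = minimize(gibbs_free_energy, x0=[0.0, 0.0])
m, sigma = res.x[0], np.exp(res.x[1])
print(f"Gaussian surrogate: mean={m:.3f}, std={sigma:.3f}, G={res.fun:.3f}")
```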

  12. Accelerating atomic structure search with cluster regularization

    NASA Astrophysics Data System (ADS)

    Sørensen, K. H.; Jørgensen, M. S.; Bruix, A.; Hammer, B.

    2018-06-01

    We present a method for accelerating the global structure optimization of atomic compounds. The method is demonstrated to speed up the finding of the anatase TiO2(001)-(1 × 4) surface reconstruction within a density functional tight-binding theory framework using an evolutionary algorithm. As a key element of the method, we use unsupervised machine learning techniques to categorize atoms present in a diverse set of partially disordered surface structures into clusters of atoms having similar local atomic environments. Analysis of more than 1000 different structures shows that the total energy of the structures correlates with the summed distances of the atomic environments to their respective cluster centers in feature space, where the sum runs over all atoms in each structure. Our method is formulated as a gradient based minimization of this summed cluster distance for a given structure and alternates with a standard gradient based energy minimization. While the latter minimization ensures local relaxation within a given energy basin, the former enables escapes from meta-stable basins and hence increases the overall performance of the global optimization.
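    A toy version of the cluster regularizer is sketched below: describe each atom by its sorted nearest-neighbour distances, cluster those descriptors, and take one gradient step on the summed distance to the cluster centers. The geometry, descriptor, and step size are arbitrary, and the alternating DFTB energy relaxation used in the paper is omitted.

```python
# Toy cluster-regularization step: cluster simple local-environment descriptors,
# then reduce the summed distance of all atoms to their cluster centers by a
# finite-difference gradient step on the coordinates. Everything here is a
# deliberately simplified illustration.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 6.0, size=(20, 2))       # 20 atoms in a 2-D toy cell

def features(pos, m=4):
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:m + 1]                        # m nearest-neighbour distances

def cluster_distance(pos, centers, labels):
    f = features(pos)
    return np.linalg.norm(f - centers[labels], axis=1).sum()

centers, labels = kmeans2(features(pos), 3, minit="++")

def penalty(flat):                               # wrapper for finite differences
    return cluster_distance(flat.reshape(pos.shape), centers, labels)

x = pos.ravel()
grad = np.array([(penalty(x + 1e-5 * e) - penalty(x - 1e-5 * e)) / 2e-5
                 for e in np.eye(x.size)])
x_new = x - 0.02 * grad                          # one descent step on the penalty
print("penalty before:", round(penalty(x), 3), " after:", round(penalty(x_new), 3))
```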

  13. Framework for Identifying Key Environmental Concerns in Marine Renewable Energy Projects- Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, Sharon; Previsic, Mirko; Nelson, Peter

    2010-06-17

    Marine wave and tidal energy technology could interact with marine resources in ways that are not well understood. As wave and tidal energy conversion projects are planned, tested, and deployed, a wide range of stakeholders will be engaged; these include developers, state and federal regulatory agencies, environmental groups, tribal governments, recreational and commercial fishermen, and local communities. Identifying stakeholders’ environmental concerns in the early stages of the industry’s development will help developers address and minimize potential environmental effects. Identifying important concerns will also assist with streamlining siting and associated permitting processes, which are considered key hurdles by the industry in the U.S. today. In September 2008, RE Vision consulting, LLC was selected by the Department of Energy (DoE) to conduct a scenario-based evaluation of emerging hydrokinetic technologies. The purpose of this evaluation is to identify and characterize environmental impacts that are likely to occur, demonstrate a process for analyzing these impacts, identify the “key” environmental concerns for each scenario, identify areas of uncertainty, and describe studies that could address that uncertainty. This process is intended to provide an objective and transparent tool to assist in decision-making for siting and selection of technology for wave and tidal energy development. RE Vision worked with H. T. Harvey & Associates to develop a framework for identifying key environmental concerns with marine renewable technology. This report describes the results of this study. This framework was applied to varying wave and tidal power conversion technologies, scales, and locations. The following wave and tidal energy scenarios were considered: 4 wave energy generation technologies; 3 tidal energy generation technologies; 3 sites (Humboldt coast, California (wave); Makapu’u Point, Oahu, Hawaii (wave); and the Tacoma Narrows, Washington (tidal)); and 3 project sizes (pilot, small commercial, and large commercial). The possible combinations total 24 wave technology scenarios and 9 tidal technology scenarios. We evaluated 3 of the 33 scenarios in detail: (1) a small commercial OPT Power Buoy project off the Humboldt County, California coast; (2) a small commercial Pelamis Wave Power P-2 project off Makapu’u Point, Oahu, Hawaii; and (3) a pilot MCT SeaGen tidal project sited in the Tacoma Narrows, Washington. This framework document used information available from permitting documents that were written to support actual wave or tidal energy projects, but the results obtained here should not be confused with those of the permitting documents. The main difference between this framework document and the permitting documents of currently proposed pilot projects is that this framework identifies key environmental concerns and describes the next steps in addressing those concerns; permitting documents must identify effects, find or declare thresholds of significance, evaluate the effects against the thresholds, and find mitigation measures that will minimize or avoid the effects so they can be considered less-than-significant. Two methodologies, (1) an environmental effects analysis and (2) Raptools, were developed and tested to identify potential environmental effects associated with wave or tidal energy conversion projects. For the environmental effects analysis, we developed a framework based on standard risk assessment techniques. The framework was applied to the three scenarios listed above.
    The environmental effects analysis addressed questions such as: What is the temporal and spatial exposure of a species at a site? What are the specific potential project effects on that species? What measures could minimize, mitigate, or eliminate negative effects? Are there potential effects of the project, or species’ response to the effect, that are highly uncertain and warrant additional study? The second methodology, Raptools, is a collaborative approach useful for evaluating multiple characteristics of numerous siting or technology alternatives, and it allows us to graphically compare alternatives. We used Raptools to answer these questions: How do the scenarios compare, in terms of exposure, risks, and effects to the ecological and human environments? Are there sites that seem to present the fewest effects regardless of technology and scale? Which attributes account for many or much of the effects associated with wave or tidal energy development?

  14. Cumulative biological impacts framework for solar energy projects in the California Desert

    USGS Publications Warehouse

    Davis, Frank W.; Kreitler, Jason R.; Soong, Oliver; Stoms, David M.; Dashiell, Stephanie; Hannah, Lee; Wilkinson, Whitney; Dingman, John

    2013-01-01

    This project developed analytical approaches, tools and geospatial data to support conservation planning for renewable energy development in the California deserts. Research focused on geographical analysis to avoid, minimize and mitigate the cumulative biological effects of utility-scale solar energy development. A hierarchical logic model was created to map the compatibility of new solar energy projects with current biological conservation values. The research indicated that the extent of compatible areas is much greater than the estimated land area required to achieve 2040 greenhouse gas reduction goals. Species distribution models were produced for 65 animal and plant species that were of potential conservation significance to the Desert Renewable Energy Conservation Plan process. These models mapped historical and projected future habitat suitability using 270 meter resolution climate grids. The results were integrated into analytical frameworks to locate potential sites for offsetting project impacts and evaluating the cumulative effects of multiple solar energy projects. Examples applying these frameworks in the Western Mojave Desert ecoregion show the potential of these publicly-available tools to assist regional planning efforts. Results also highlight the necessity to explicitly consider projected land use change and climate change when prioritizing areas for conservation and mitigation offsets. Project data, software and model results are all available online.

  15. Molecular system identification for enzyme directed evolution and design

    NASA Astrophysics Data System (ADS)

    Guan, Xiangying; Chakrabarti, Raj

    2017-09-01

    The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.

  16. Cross-platform validation and analysis environment for particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  17. Efficient data communication protocols for wireless networks

    NASA Astrophysics Data System (ADS)

    Zeydan, Engin

    In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate both the reduction in the energy consumption and throughput maximization problems separately using multi-hop data aggregation for correlated data in wireless sensor networks. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges in a finite number of steps iteratively. For throughput maximization, we consider both the interference distribution across the network and correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput and the algorithm is guaranteed to converge with a finite number of steps using best response dynamics. For multiple antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference and noise ratio at each receiver. In cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to pure strategy Nash equilibrium with high probability throughout the iterations in the interference impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.

  18. Environmental siting suitability analysis for commercial scale ocean renewable energy: A southeast Florida case study

    NASA Astrophysics Data System (ADS)

    Mulcan, Amanda

    This thesis aims to facilitate the siting and implementation of Florida Atlantic University Southeast National Marine Renewable Energy Center (FAU SNMREC) ocean current energy (OCE) projects offshore southeastern Florida through the analysis of benthic anchoring conditions. Specifically, a suitability analysis considering all presently available biologic and geologic datasets within the legal framework of OCE policy and regulation was done. OCE related literature sources were consulted to assign suitability levels to each dataset, ArcGIS interpolations generated seafloor substrate maps, and existing submarine cable pathways were considered for OCE power cables. The finalized suitability map highlights the eastern study area as most suitable for OCE siting due to its abundance of sand/sediment substrate, existing underwater cable route access, and minimal biologic presence. Higher resolution datasets are necessary to locate specific OCE development locales, better understand their benthic conditions, and minimize potentially negative OCE environmental impacts.

  19. Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation.

    PubMed

    Gao, Shan; Ye, Qixiang; Xing, Junliang; Kuijper, Arjan; Han, Zhenjun; Jiao, Jianbin; Ji, Xiangyang

    2017-12-01

    Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to make group division more accurate in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of targets' spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy-variance in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for the strong minimum in energy variation, we design the discrete group-tracklet jump moves embedded in the gradient descent method, which ensures that the moves reduce the energy variation of group and trajectory alternately in the varying topology dimension. Experimental results on both RGB and RGB-D data sets show the superiority of our proposed model for multiple person tracking in crowd scenes.

  20. Quark-lepton flavor democracy and the nonexistence of the fourth generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cvetic, G.; Kim, C.S.

    1995-01-01

    In the standard model with two Higgs doublets (type II), which has a consistent trend toward a flavor gauge theory and its related flavor democracy in the quark and leptonic sectors (unlike the minimal standard model) when the energy of the probes increases, we impose the mixed quark-lepton flavor democracy at a high "transition" energy and assume the usual seesaw mechanism, and consequently find that the existence of the fourth generation of fermions in this framework is practically ruled out.

  1. Mechanics of tunable helices and geometric frustration in biomimetic seashells

    NASA Astrophysics Data System (ADS)

    Guo, Qiaohang; Chen, Zi; Li, Wei; Dai, Pinqiang; Ren, Kun; Lin, Junjie; Taber, Larry A.; Chen, Wenzhe

    2014-03-01

    Helical structures are ubiquitous in nature and engineering, ranging from DNA molecules to plant tendrils, from sea snail shells to nanoribbons. While the helical shapes in natural and engineered systems often exhibit nearly uniform radius and pitch, helical shell structures with changing radius and pitch, such as seashells and some plant tendrils, add to the variety of this family of aesthetic beauty. Here we develop a comprehensive theoretical framework for tunable helical morphologies, and report the first biomimetic seashell-like structure resulting from mechanics of geometric frustration. In previous studies, the total potential energy is everywhere minimized when the system achieves equilibrium. In this work, however, the local energy minimization cannot be realized because of the geometric incompatibility, and hence the whole system deforms into a shape with a global energy minimum whereby the energy in each segment may not necessarily be locally optimized. This novel approach can be applied to develop materials and devices of tunable geometries with a range of applications in nano/biotechnology.

  2. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) have traditionally aimed at energy cost reduction with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy was successfully able to find the lowest energy use operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system for the reduction of energy use to a minimum incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on the water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.

  3. Non-minimally coupled tachyon field in teleparallel gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fazlpour, Behnaz; Banijamali, Ali, E-mail: b.fazlpour@umz.ac.ir, E-mail: a.banijamali@nit.ac.ir

    2015-04-01

    We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.

  4. Free-energy minimization and the dark-room problem.

    PubMed

    Friston, Karl; Thornton, Christopher; Clark, Andy

    2012-01-01

    Recent years have seen the emergence of an important new fundamental theory of brain function. This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). The most comprehensive such treatment is the "free-energy minimization" formulation due to Karl Friston (see e.g., Friston and Stephan, 2007; Friston, 2010a,b - see also Fiorillo, 2010; Thornton, 2010). A recurrent puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the "Dark-Room Problem." Here, we describe the problem and further unpack the issues to which it speaks. Using the same format as the prolog of Eddington's Space, Time, and Gravitation (Eddington, 1920) we present our discussion as a conversation between: an information theorist (Thornton), a physicist (Friston), and a philosopher (Clark).

  5. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
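
    The variational idea can be made concrete on a toy problem. The sketch below (an illustration, not the authors' code) fits a harmonic coarse-grained potential U_cg(x; k) = k x^2 / 2 to a quartic reference model by steepest descent on the relative entropy, using the standard gradient dS_rel/dk = beta * ( <dU_cg/dk>_ref - <dU_cg/dk>_cg ) evaluated on a grid; the reference potential, temperature, and step size are arbitrary choices.

```python
# Toy relative-entropy coarse-graining: optimize one parameter of a harmonic
# CG model against a quartic reference distribution (assumed, illustrative).
import numpy as np

beta = 1.0
x = np.linspace(-5.0, 5.0, 2001)
U_ref = 0.5 * x**2 + 0.3 * x**4              # reference ("fine") potential
p_ref = np.exp(-beta * U_ref)
p_ref /= p_ref.sum()                          # discrete Boltzmann weights

def p_cg(k):
    w = np.exp(-beta * 0.5 * k * x**2)        # harmonic coarse-grained model
    return w / w.sum()

k, lr = 0.5, 0.5
dU_dk = 0.5 * x**2                            # derivative of U_cg w.r.t. k
for _ in range(200):
    grad = beta * ((dU_dk * p_ref).sum() - (dU_dk * p_cg(k)).sum())
    k -= lr * grad                            # steepest descent on S_rel
print(f"optimized spring constant k ~ {k:.3f}")
```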

  6. Cross-Layer Modeling Framework for Energy-Efficient Resilience

    DTIC Science & Technology

    2014-04-01

    functional block diagram of the software architecture of PEARL, which stands for: Power Efficient and Resilient Embedded Processing with Real-Time ... DVFS). The goal of the run-time manager is to minimize power consumption, while maintaining system resilience targets (on average) and meeting ... real-time performance targets. The integrated performance, power and resilience models are nothing but the analytical modeling toolkit described in

  7. Modeling elasto-viscoplasticity in a consistent phase field framework

    DOE PAGES

    Cheng, Tian -Le; Wen, You -Hai; Hawk, Jeffrey A.

    2017-05-19

    Existing continuum level phase field plasticity theories seek to solve plastic strain by minimizing the shear strain energy. However, rigorously speaking, for thermodynamic consistency it is required to minimize the total strain energy unless there is proof that hydrostatic strain energy is independent of plastic strain which is unfortunately absent. In this work, we extend the phase-field microelasticity theory of Khachaturyan et al. by minimizing the total elastic energy with constraint of incompressibility of plastic strain. We show that the flow rules derived from the Ginzburg-Landau type kinetic equation can be in line with Odqvist's law for viscoplasticity and Prandtl-Reuss theory. Free surfaces (external surfaces or internal cracks/voids) are treated in the model. Deformation caused by a misfitting spherical precipitate in an elasto-plastic matrix is studied by large-scale three-dimensional simulations in four different regimes in terms of the matrix: (a) elasto-perfectly-plastic, (b) elastoplastic with linear hardening, (c) elastoplastic with power-law hardening, and (d) elasto-perfectly-plastic with a free surface. The results are compared with analytical/numerical solutions of Lee et al. for (a-c) and analytical solution derived in this work for (d). Additionally, the J integral of a fixed crack is calculated in the phase-field model and discussed in the context of fracture mechanics.
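
    In schematic form (with notation assumed here rather than taken from the paper), the constrained minimization described above can be written as a total elastic energy functional minimized over the plastic strain field subject to plastic incompressibility:

```latex
\min_{\varepsilon^{p}} \; E_{\mathrm{el}}[\varepsilon^{p}]
  = \int_{V} \tfrac{1}{2}\,
    \bigl(\varepsilon - \varepsilon^{p} - \varepsilon^{0}\bigr) : \mathbf{C} :
    \bigl(\varepsilon - \varepsilon^{p} - \varepsilon^{0}\bigr)\, \mathrm{d}V
  \qquad \text{subject to} \qquad \operatorname{tr}\,\varepsilon^{p} = 0,
```

    where ε is the total strain, ε⁰ the stress-free eigenstrain, and C the elastic stiffness tensor; restricting the minimization to the deviatoric (shear) part of this functional corresponds to the earlier theories the authors argue against.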

  8. Modeling elasto-viscoplasticity in a consistent phase field framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Tian -Le; Wen, You -Hai; Hawk, Jeffrey A.

    Existing continuum level phase field plasticity theories seek to solve plastic strain by minimizing the shear strain energy. However, rigorously speaking, for thermodynamic consistency it is required to minimize the total strain energy unless there is proof that hydrostatic strain energy is independent of plastic strain which is unfortunately absent. In this work, we extend the phase-field microelasticity theory of Khachaturyan et al. by minimizing the total elastic energy with constraint of incompressibility of plastic strain. We show that the flow rules derived from the Ginzburg-Landau type kinetic equation can be in line with Odqvist's law for viscoplasticity and Prandtl-Reuss theory. Free surfaces (external surfaces or internal cracks/voids) are treated in the model. Deformation caused by a misfitting spherical precipitate in an elasto-plastic matrix is studied by large-scale three-dimensional simulations in four different regimes in terms of the matrix: (a) elasto-perfectly-plastic, (b) elastoplastic with linear hardening, (c) elastoplastic with power-law hardening, and (d) elasto-perfectly-plastic with a free surface. The results are compared with analytical/numerical solutions of Lee et al. for (a-c) and analytical solution derived in this work for (d). Additionally, the J integral of a fixed crack is calculated in the phase-field model and discussed in the context of fracture mechanics.

  9. Modification of Schrödinger-Newton equation due to braneworld models with minimal length

    NASA Astrophysics Data System (ADS)

    Bhat, Anha; Dey, Sanjib; Faizal, Mir; Hou, Chenguang; Zhao, Qin

    2017-07-01

    We study the correction of the energy spectrum of a gravitational quantum well due to the combined effect of the braneworld model with infinite extra dimensions and generalized uncertainty principle. The correction terms arise from a natural deformation of a semiclassical theory of quantum gravity governed by the Schrödinger-Newton equation based on a minimal length framework. The twofold correction in the energy yields new values of the spectrum, which are closer to the values obtained in the GRANIT experiment. This raises the possibility that the combined theory of the semiclassical quantum gravity and the generalized uncertainty principle may provide an intermediate theory between the semiclassical and the full theory of quantum gravity. We also prepare a schematic experimental set-up which may guide the understanding of these phenomena in the laboratory.

  10. Electroweak phase transition and entropy release in the early universe

    NASA Astrophysics Data System (ADS)

    Chaudhuri, A.; Dolgov, A.

    2018-01-01

    It is shown that the vacuum-like energy of the Higgs potential at non-zero temperatures leads, in the course of the cosmological expansion, to a small but non-negligible rise of the entropy density in the comoving volume. This increase is calculated in the framework of the minimal standard model. The result can have a noticeable effect on the outcome of baryo-through-leptogenesis.

  11. Cosmological Constant as a Manifestation of the Hierarchy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin; Gu, Je-An

    2007-12-21

    There has been the suggestion that the cosmological constant as implied by the dark energy is related to the well-known hierarchy between the Planck scale, M_Pl, and the Standard Model scale, M_SM. Here we further propose that the same framework that addresses this hierarchy problem must also address the smallness problem of the cosmological constant. Specifically, we investigate the minimal supersymmetric (SUSY) extension of the Randall-Sundrum model where SUSY-breaking is induced on the TeV brane and transmitted into the bulk. We show that the Casimir energy density of the system indeed conforms with the observed dark energy scale.

  12. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
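
    A stripped-down sketch of the sampling idea, using a scalar order parameter on a small grid with a Landau-de Gennes-like free energy rather than the full alignment tensor: finite differences supply the gradient term, and Metropolis moves at a small effective temperature let the sampler escape metastable states. All coefficients are invented and the full-energy recomputation per move is kept only for clarity.

```python
# Stochastic free-energy minimization on a toy order-parameter field (assumed
# coefficients, not the paper's system).
import numpy as np

rng = np.random.default_rng(0)
n, a, b, c, L, T = 32, -0.3, 0.5, 1.0, 0.5, 0.05
S = rng.uniform(0.0, 0.6, size=(n, n))

def free_energy(S):
    bulk = a * S**2 - b * S**3 + c * S**4
    gx = np.diff(S, axis=0, append=S[:1, :])     # periodic finite differences
    gy = np.diff(S, axis=1, append=S[:, :1])
    return float(np.sum(bulk) + L * np.sum(gx**2 + gy**2))

F = free_energy(S)
for sweep in range(30):
    for _ in range(n * n):
        i, j = rng.integers(n), rng.integers(n)
        trial = S.copy()
        trial[i, j] += rng.normal(0, 0.1)        # local random move
        F_trial = free_energy(trial)             # naive full recomputation
        if F_trial < F or rng.random() < np.exp(-(F_trial - F) / T):
            S, F = trial, F_trial                # Metropolis accept/reject
print("final free energy:", round(F, 3))
```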

  13. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  14. The difference between energy consumption and energy cost: Modelling energy tariff structures for water resource recovery facilities.

    PubMed

    Aymerich, I; Rieger, L; Sobhani, R; Rosso, D; Corominas, Ll

    2015-09-15

    The objective of this paper is to demonstrate the importance of incorporating more realistic energy cost models (based on current energy tariff structures) into existing water resource recovery facilities (WRRFs) process models when evaluating technologies and cost-saving control strategies. In this paper, we first introduce a systematic framework to model energy usage at WRRFs and a generalized structure to describe energy tariffs including the most common billing terms. Secondly, this paper introduces a detailed energy cost model based on a Spanish energy tariff structure coupled with a WRRF process model to evaluate several control strategies and provide insights into the selection of the contracted power structure. The results for a 1-year evaluation on a 115,000 population-equivalent WRRF showed monthly cost differences ranging from 7 to 30% when comparing the detailed energy cost model to an average energy price. The evaluation of different aeration control strategies also showed that using average energy prices and neglecting energy tariff structures may lead to biased conclusions when selecting operating strategies or comparing technologies or equipment. The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system to minimize energy use led to cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on the water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization. Copyright © 2015 Elsevier Ltd. All rights reserved.
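
    A small hedged illustration of why an average price per kWh can mislead: the example below compares two pumping schedules under a flat average price and under an invented time-of-use tariff with an on-peak rate and a contracted-power charge prorated to one day. Both schedules use the same energy, yet the tariff bills differ.

```python
# Toy tariff comparison (all rates and loads are invented numbers).
HOURS = range(24)
on_peak = set(range(12, 20))                     # assumed on-peak window
rate = lambda h: 0.25 if h in on_peak else 0.10  # $/kWh, assumed tariff
demand_charge = 2.0                              # $/kW peak, prorated to one day
flat_price = 0.15                                # naive average $/kWh

def bill(load_kw):
    energy_cost = sum(load_kw[h] * rate(h) for h in HOURS)
    return energy_cost + demand_charge * max(load_kw)

day_pumping   = [80 if 8 <= h < 20 else 20 for h in HOURS]    # pumps on-peak
night_pumping = [20 if 8 <= h < 20 else 80 for h in HOURS]    # shifted off-peak

for name, load in [("day", day_pumping), ("night", night_pumping)]:
    kwh = sum(load)
    print(f"{name}: {kwh} kWh, flat-price cost ${kwh * flat_price:.0f}, "
          f"tariff cost ${bill(load):.0f}")
```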

  15. A framework for quantifying the impact of occupant behavior on energy savings of energy conservation measures

    DOE PAGES

    Sun, Kaiyu; Hong, Tianzhen

    2017-04-27

    To improve energy efficiency—during new buildings design or during a building retrofit—evaluating the energy savings potential of energy conservation measures (ECMs) is a critical task. In building retrofits, occupant behavior significantly impacts building energy use and is a leading factor in uncertainty when determining the effectiveness of retrofit ECMs. Current simulation-based assessment methods simplify the representation of occupant behavior by using a standard or representative set of static and homogeneous assumptions ignoring the dynamics, stochastics, and diversity of occupant's energy-related behavior in buildings. The simplification contributes to significant gaps between the simulated and measured actual energy performance of buildings. This paper presents a framework for quantifying the impact of occupant behaviors on ECM energy savings using building performance simulation. During the first step of the study, three occupant behavior styles (austerity, normal, and wasteful) were defined to represent different levels of energy consciousness of occupants regarding their interactions with building energy systems (HVAC, windows, lights and plug-in equipment). Next, a simulation workflow was introduced to determine a range of the ECM energy savings. Then, guidance was provided to interpret the range of ECM savings to support ECM decision making. Finally, a pilot study was performed in a real building to demonstrate the application of the framework. Simulation results show that the impact of occupant behaviors on ECM savings vary with the type of ECM. Occupant behavior minimally affects energy savings for ECMs that are technology-driven (the relative savings differ by less than 2%) and have little interaction with the occupants; for ECMs with strong occupant interaction, such as the use of zonal control variable refrigerant flow system and natural ventilation, energy savings are significantly affected by occupant behavior (the relative savings differ by up to 20%). Finally, the study framework provides a novel, holistic approach to assessing the uncertainty of ECM energy savings related to occupant behavior, enabling stakeholders to understand and assess the risk of adopting energy efficiency technologies for new and existing buildings.
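
    The quantification step of the framework can be illustrated with a toy calculation (numbers below are assumed, not from the study): run the same ECM under the three occupant-behavior styles and report a savings range instead of a single value from one "standard" occupant.

```python
# Toy ECM savings-range calculation across occupant behavior styles
# (all kWh figures are invented for illustration).
baseline_kwh = {"austerity": 90_000, "normal": 120_000, "wasteful": 160_000}
with_ecm_kwh = {"austerity": 88_000, "normal": 112_000, "wasteful": 134_000}

savings = {s: 1 - with_ecm_kwh[s] / baseline_kwh[s] for s in baseline_kwh}
lo, hi = min(savings.values()), max(savings.values())
print({s: f"{v:.1%}" for s, v in savings.items()})
print(f"ECM savings range across behavior styles: {lo:.1%} to {hi:.1%}")
```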

  16. A framework for quantifying the impact of occupant behavior on energy savings of energy conservation measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Kaiyu; Hong, Tianzhen

    To improve energy efficiency—during new buildings design or during a building retrofit—evaluating the energy savings potential of energy conservation measures (ECMs) is a critical task. In building retrofits, occupant behavior significantly impacts building energy use and is a leading factor in uncertainty when determining the effectiveness of retrofit ECMs. Current simulation-based assessment methods simplify the representation of occupant behavior by using a standard or representative set of static and homogeneous assumptions ignoring the dynamics, stochastics, and diversity of occupant's energy-related behavior in buildings. The simplification contributes to significant gaps between the simulated and measured actual energy performance of buildings. This paper presents a framework for quantifying the impact of occupant behaviors on ECM energy savings using building performance simulation. During the first step of the study, three occupant behavior styles (austerity, normal, and wasteful) were defined to represent different levels of energy consciousness of occupants regarding their interactions with building energy systems (HVAC, windows, lights and plug-in equipment). Next, a simulation workflow was introduced to determine a range of the ECM energy savings. Then, guidance was provided to interpret the range of ECM savings to support ECM decision making. Finally, a pilot study was performed in a real building to demonstrate the application of the framework. Simulation results show that the impact of occupant behaviors on ECM savings vary with the type of ECM. Occupant behavior minimally affects energy savings for ECMs that are technology-driven (the relative savings differ by less than 2%) and have little interaction with the occupants; for ECMs with strong occupant interaction, such as the use of zonal control variable refrigerant flow system and natural ventilation, energy savings are significantly affected by occupant behavior (the relative savings differ by up to 20%). Finally, the study framework provides a novel, holistic approach to assessing the uncertainty of ECM energy savings related to occupant behavior, enabling stakeholders to understand and assess the risk of adopting energy efficiency technologies for new and existing buildings.

  17. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.

  18. A framework for quantifying the impact of occupant behavior on energy savings of energy conservation measures

    DOE PAGES

    Sun, K; Hong, T

    2017-07-01

    To improve energy efficiency—during new buildings design or during a building retrofit—evaluating the energy savings potential of energy conservation measures (ECMs) is a critical task. In building retrofits, occupant behavior significantly impacts building energy use and is a leading factor in uncertainty when determining the effectiveness of retrofit ECMs. Current simulation-based assessment methods simplify the representation of occupant behavior by using a standard or representative set of static and homogeneous assumptions ignoring the dynamics, stochastics, and diversity of occupant's energy-related behavior in buildings. The simplification contributes to significant gaps between the simulated and measured actual energy performance of buildings. This study presents a framework for quantifying the impact of occupant behaviors on ECM energy savings using building performance simulation. During the first step of the study, three occupant behavior styles (austerity, normal, and wasteful) were defined to represent different levels of energy consciousness of occupants regarding their interactions with building energy systems (HVAC, windows, lights and plug-in equipment). Next, a simulation workflow was introduced to determine a range of the ECM energy savings. Then, guidance was provided to interpret the range of ECM savings to support ECM decision making. Finally, a pilot study was performed in a real building to demonstrate the application of the framework. Simulation results show that the impact of occupant behaviors on ECM savings vary with the type of ECM. Occupant behavior minimally affects energy savings for ECMs that are technology-driven (the relative savings differ by less than 2%) and have little interaction with the occupants; for ECMs with strong occupant interaction, such as the use of zonal control variable refrigerant flow system and natural ventilation, energy savings are significantly affected by occupant behavior (the relative savings differ by up to 20%). The study framework provides a novel, holistic approach to assessing the uncertainty of ECM energy savings related to occupant behavior, enabling stakeholders to understand and assess the risk of adopting energy efficiency technologies for new and existing buildings.

  19. Real-space finite-difference approach for multi-body systems: path-integral renormalization group method and direct energy minimization method.

    PubMed

    Sasaki, Akira; Kojo, Masashi; Hirose, Kikuji; Goto, Hidekazu

    2011-11-02

    The path-integral renormalization group and direct energy minimization method of practical first-principles electronic structure calculations for multi-body systems within the framework of the real-space finite-difference scheme are introduced. These two methods can handle higher dimensional systems with consideration of the correlation effect. Furthermore, they can be easily extended to the multicomponent quantum systems which contain more than two kinds of quantum particles. The key to the present methods is employing linear combinations of nonorthogonal Slater determinants (SDs) as multi-body wavefunctions. As one of the noticeable results, the same accuracy as the variational Monte Carlo method is achieved with a few SDs. This enables us to study the entire ground state consisting of electrons and nuclei without the need to use the Born-Oppenheimer approximation. Recent activities on methodological developments aiming towards practical calculations such as the implementation of auxiliary field for Coulombic interaction, the treatment of the kinetic operator in imaginary-time evolutions, the time-saving double-grid technique for bare-Coulomb atomic potentials and the optimization scheme for minimizing the total-energy functional are also introduced. As test examples, the total energy of the hydrogen molecule, the atomic configuration of the methylene and the electronic structures of two-dimensional quantum dots are calculated, and the accuracy, availability and possibility of the present methods are demonstrated.

  20. Rapid sampling of local minima in protein energy surface and effective reduction through a multi-objective filter

    PubMed Central

    2013-01-01

    Background: Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab-initio structure prediction, in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations which are similar to the native structure. The high-dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed that Basin Hopping is a promising probabilistic search framework to obtain a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations with the goal of hopping between nearby energy minima. This approach has been shown to be effective in obtaining conformations near the native structure for small systems. Recent work by us has extended this framework to larger systems through employment of the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods: This paper investigates the algorithmic components in Basin Hopping to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab-initio protocols, we take an additional step and analyze the quality of the ensemble retained by ensemble reduction techniques. We propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima. Results and conclusions: We show that controlling the magnitude of the perturbation allows directly controlling the distance between consecutively-sampled local minima and, in turn, steering the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the size of the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter to obtain a simpler representation of the probed energy surface. PMID:24564970
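
    A hedged sketch of the two ideas in this record on a one-dimensional toy "energy surface": Basin Hopping with a tunable perturbation magnitude followed by greedy descent, and a Pareto filter that reduces the sampled minima using two illustrative objectives (energy, and distance from the lowest-energy minimum found). The objectives and parameters are stand-ins, not the paper's exact protocol.

```python
# Basin Hopping on a rugged 1D toy energy, then a multi-objective (Pareto) filter.
import math
import random

random.seed(0)
energy = lambda x: 0.02 * x**2 + math.sin(3 * x)        # rugged toy surface

def greedy_minimize(x, step=0.01):
    while True:                                          # simple greedy descent
        better = min((x - step, x, x + step), key=energy)
        if better == x:
            return x
        x = better

minima, x = [], 0.0
for _ in range(60):                                      # Basin Hopping loop
    x = greedy_minimize(x + random.gauss(0, 1.5))        # perturb, then descend
    minima.append(round(x, 3))

minima = sorted(set(minima))
best = min(minima, key=energy)
objs = {m: (energy(m), abs(m - best)) for m in minima}   # (energy, distance)

def dominated(m):
    em, dm = objs[m]
    return any(e <= em and d <= dm and (e, d) != (em, dm) for e, d in objs.values())

pareto = [m for m in minima if not dominated(m)]
print(f"{len(minima)} minima sampled -> {len(pareto)} kept on the Pareto front")
```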

  1. Sandia’s Current Energy Conversion module for the Flexible-Mesh Delft3D flow solver v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chartand, Chris; Jagers, Bert

    The DOE has funded Sandia National Labs (SNL) to develop an open-source modeling tool to guide the design and layout of marine hydrokinetic (MHK) arrays to maximize power production while minimizing environmental effects. This modeling framework simulates flows through and around MHK arrays while quantifying environmental responses. As an augmented version of Delft3D, the environmental hydrodynamics code developed by the Dutch company Deltares, SNL-Delft3D-CEC-FM includes a new module that simulates energy conversion (momentum withdrawal) by MHK current energy conversion devices with commensurate changes in the turbulent kinetic energy and its dissipation rate. SNL-Delft3D-CEC-FM modified the Delft3D flexible mesh flow solver, DFlowFM.

  2. The role of elastic stored energy in controlling the long term rheological behaviour of the lithosphere

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, Klaus; Weinberg, Roberto F.; Rosenbaum, Gideon

    2012-04-01

    The traditional definition of lithospheric strength is derived from the differential stresses required to form brittle and ductile structures at a constant strain rate. This definition is based on dissipative brittle and ductile deformation and does not take into account the ability of the lithosphere to store elastic strain. Here we show the important role of elasticity in controlling the long-term behaviour of the lithosphere. This is particularly evident when describing deformation in a thermodynamic framework, which differentiates between stored (Helmholtz free energy) and dissipative (entropy) energy potentials. In our model calculations we stretch a continental lithosphere with a wide range of crustal thickness (30-60 km) and heat flow (50-80 mW/m2) at a constant velocity. We show that the Helmholtz free energy, which in our simple calculation describes the energy stored elastically, converges for all models within a 25% range, while the dissipated energy varies over an order of magnitude. This variation stems from complex patterns in the local strain distributions of the different models, which together operate to minimize the Helmholtz free energy. This energy minimization is a fundamental material behaviour of the lithosphere, which in our simple case is defined by its elastic properties. We conclude from this result that elasticity (more generally Helmholtz free energy) is an important regulator of the long-term geological strength of the lithosphere.

  3. Variational and perturbative formulations of quantum mechanical/molecular mechanical free energy with mean-field embedding and its analytical gradients.

    PubMed

    Yamamoto, Takeshi

    2008-12-28

    Conventional quantum chemical solvation theories are based on the mean-field embedding approximation. That is, the electronic wavefunction is calculated in the presence of the mean field of the environment. In this paper a direct quantum mechanical/molecular mechanical (QM/MM) analog of such a mean-field theory is formulated based on variational and perturbative frameworks. In the variational framework, an appropriate QM/MM free energy functional is defined and is minimized in terms of the trial wavefunction that best approximates the true QM wavefunction in a statistically averaged sense. Analytical free energy gradient is obtained, which takes the form of the gradient of effective QM energy calculated in the averaged MM potential. In the perturbative framework, the above variational procedure is shown to be equivalent to the first-order expansion of the QM energy (in the exact free energy expression) about the self-consistent reference field. This helps understand the relation between the variational procedure and the exact QM/MM free energy as well as existing QM/MM theories. Based on this, several ways are discussed for evaluating non-mean-field effects (i.e., statistical fluctuations of the QM wavefunction) that are neglected in the mean-field calculation. As an illustration, the method is applied to an SN2 Menshutkin reaction in water, NH₃ + CH₃Cl → NH₃CH₃⁺ + Cl⁻, for which free energy profiles are obtained at the Hartree-Fock, MP2, B3LYP, and BHHLYP levels by integrating the free energy gradient. Non-mean-field effects are evaluated to be <0.5 kcal/mol using a Gaussian fluctuation model for the environment, which suggests that those effects are rather small for the present reaction in water.

  4. A minimization principle for the description of modes associated with finite-time instabilities

    PubMed Central

    Babaee, H.

    2016-01-01

    We introduce a minimization formulation for the determination of a finite-dimensional, time-dependent, orthonormal basis that captures directions of the phase space associated with transient instabilities. While these instabilities have finite lifetime, they can play a crucial role either by altering the system dynamics through the activation of other instabilities or by creating sudden nonlinear energy transfers that lead to extreme responses. However, their essentially transient character makes their description a particularly challenging task. We develop a minimization framework that focuses on the optimal approximation of the system dynamics in the neighbourhood of the system state. This minimization formulation results in differential equations that evolve a time-dependent basis so that it optimally approximates the most unstable directions. We demonstrate the capability of the method for two families of problems: (i) linear systems, including the advection–diffusion operator in a strongly non-normal regime as well as the Orr–Sommerfeld/Squire operator, and (ii) nonlinear problems, including a low-dimensional system with transient instabilities and the vertical jet in cross-flow. We demonstrate that the time-dependent subspace captures the strongly transient non-normal energy growth (in the short-time regime), while for longer times the modes capture the expected asymptotic behaviour. PMID:27118900
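
    For a linear autonomous system du/dt = A u, the kind of basis-evolution equation described in this record can be sketched as dU/dt = A U - U (U^T A U), which keeps U orthonormal (up to an internal rotation) and steers it toward the most unstable directions. The snippet below is only an illustration under that assumption, integrated with forward Euler and a QR re-orthonormalization to control numerical drift; the operator and parameters are invented.

```python
# Evolving a one-dimensional time-dependent basis for a strongly non-normal
# linear operator (assumed example, not the paper's test cases).
import numpy as np

A = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])                 # non-normal operator (assumed)
U = np.linalg.qr(np.random.default_rng(1).normal(size=(2, 1)))[0]

dt = 1e-3
for _ in range(5000):
    U = U + dt * (A @ U - U @ (U.T @ A @ U))  # basis-evolution step
    U, _ = np.linalg.qr(U)                    # correct drift from Euler steps

growth = (U.T @ ((A + A.T) / 2) @ U).item()   # instantaneous growth rate in span(U)
print("basis direction:", U.ravel().round(3), " growth rate:", round(growth, 3))
```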

  5. On the Optimization of a Probabilistic Data Aggregation Framework for Energy Efficiency in Wireless Sensor Networks.

    PubMed

    Kafetzoglou, Stella; Aristomenopoulos, Giorgos; Papavassiliou, Symeon

    2015-08-11

    Among the key aspects of the Internet of Things (IoT) is the integration of heterogeneous sensors in a distributed system that performs actions on the physical world based on environmental information gathered by sensors and application-related constraints and requirements. Numerous applications of Wireless Sensor Networks (WSNs) have appeared in various fields, from environmental monitoring, to tactical fields, and healthcare at home, promising to change our quality of life and facilitating the vision of sensor network enabled smart cities. Given the enormous requirements that emerge in such a setting, both in terms of data and energy, data aggregation appears as a key element in reducing the amount of traffic in wireless sensor networks and achieving energy conservation. Probabilistic frameworks have been introduced as operationally efficient and performance-effective solutions for data aggregation in distributed sensor networks. In this work, we introduce an overall optimization approach that improves and complements such frameworks towards identifying the optimal probability for a node to aggregate packets as well as the optimal aggregation period that a node should wait for performing aggregation, so as to minimize the overall energy consumption, while satisfying certain imposed delay constraints. Primal-dual decomposition is employed to solve the corresponding optimization problem while simulation results demonstrate the operational efficiency of the proposed approach under different traffic and topology scenarios.
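
    A toy primal-dual sketch of the optimization just described, with assumed cost and delay models rather than the paper's: choose the aggregation probability p and period T to minimize an energy proxy 1/(1 + p*T) subject to an average-delay constraint p*T/2 <= D_max, using gradient descent on the Lagrangian in (p, T) and gradient ascent on the multiplier.

```python
# Primal-dual optimization of aggregation probability p and period T
# (energy and delay models are invented stand-ins).
D_max, lr, lam = 0.8, 0.05, 0.0
p, T = 0.5, 1.0

clip = lambda v, lo, hi: max(lo, min(hi, v))
for _ in range(3000):
    # partial derivatives of L = 1/(1 + p*T) + lam*(p*T/2 - D_max)
    common = -1.0 / (1.0 + p * T) ** 2
    dLdp = common * T + lam * T / 2
    dLdT = common * p + lam * p / 2
    p = clip(p - lr * dLdp, 0.0, 1.0)
    T = clip(T - lr * dLdT, 0.1, 5.0)
    lam = max(0.0, lam + lr * (p * T / 2 - D_max))   # dual ascent on the multiplier

print(f"p = {p:.2f}, T = {T:.2f}, delay = {p * T / 2:.2f} (limit {D_max})")
```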

  6. Analysis of Carbon Policies for Electricity Networks with High Penetration of Green Generation

    NASA Astrophysics Data System (ADS)

    Feijoo, Felipe A.

    In recent decades, climate change has become one of the most crucial challenges for humanity. Climate change has a direct correlation with global warming, caused mainly by greenhouse gas (GHG) emissions. The Environmental Protection Agency in the U.S. (EPA) attributes carbon dioxide to account for approximately 82% of the GHG emissions. Unfortunately, the energy sector is the main producer of carbon dioxide, with China and the U.S. as the highest emitters. Therefore, there is a strong (positive) correlation between energy production, global warming, and climate change. Stringent carbon emissions reduction targets have been established in order to reduce the impacts of GHG. Achieving these emissions reduction goals will require implementation of policies such as cap-and-trade and carbon taxes, together with transformation of the electricity grid into a smarter system with high green energy penetration. However, the consideration of policies solely in view of carbon emissions reduction may adversely impact other market outcomes such as electricity prices and consumption. In this dissertation, a two-layer mathematical-statistical framework is presented that serves to develop carbon policies to reduce emissions levels while minimizing the negative impacts on other market outcomes. The bottom layer of the two-layer model comprises a bi-level optimization problem. The top layer comprises a statistical model and a Pareto analysis. Two related but different problems are studied under this methodology. The first problem looks into the design of cap-and-trade policies for deregulated electricity markets that satisfy the interests of different market constituents. Via the second problem, it is demonstrated how the framework can be used to obtain levels of carbon emissions reduction while minimizing the negative impact on electricity demand and maximizing green penetration from microgrids. In the aforementioned studies, forecasts for electricity prices and production cost are considered. Thus, this dissertation also presents a new forecast model that can be easily integrated in the two-layer framework. It is demonstrated in this dissertation that the proposed framework can be utilized by policy-makers, power companies, consumers, and market regulators in developing emissions policy decisions, bidding strategies, market regulations, and electricity dispatch strategies.

  7. Fermi orbital derivatives in self-interaction corrected density functional theory: Applications to closed shell atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, Mark R., E-mail: mark.pederson@science.doe.gov

    2015-02-14

    A recent modification of the Perdew-Zunger self-interaction-correction to the density-functional formalism has provided a framework for explicitly restoring unitary invariance to the expression for the total energy. The formalism depends upon construction of Löwdin orthonormalized Fermi-orbitals which parametrically depend on variational quasi-classical electronic positions. Derivatives of these quasi-classical electronic positions, required for efficient minimization of the self-interaction corrected energy, are derived and tested, here, on atoms. Total energies and ionization energies in closed-shell singlet atoms, where correlation is less important, using the Perdew-Wang 1992 Local Density Approximation (PW92) functional, are in good agreement with experiment and non-relativistic quantum-Monte-Carlo results, albeit slightly too low.

  8. A Learning Framework for Control-Oriented Modeling of Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubio-Herrero, Javier; Chandan, Vikas; Siegel, Charles M.

    Buildings consume a significant amount of energy worldwide. Several building optimization and control use cases require models of energy consumption which are control oriented, have high predictive capability, impose minimal data pre-processing requirements, and have the ability to be adapted continuously to account for changing conditions as new data becomes available. Data driven modeling techniques, that have been investigated so far, while promising in the context of buildings, have been unable to simultaneously satisfy all the requirements mentioned above. In this context, deep learning techniques such as Recurrent Neural Networks (RNNs) hold promise, empowered by advanced computational capabilities and big data opportunities. In this paper, we propose a deep learning based methodology for the development of control oriented models for building energy management and test it on data from a real building. Results show that the proposed methodology outperforms other data driven modeling techniques significantly. We perform a detailed analysis of the proposed methodology along dimensions such as topology, sensitivity, and downsampling. Lastly, we conclude by envisioning a building analytics suite empowered by the proposed deep framework, that can drive several use cases related to building energy management.
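
    A minimal sketch of the modeling idea, not the authors' architecture: a tiny recurrent cell, written with NumPy only, that maps a sequence of hourly inputs (for example outdoor temperature and hour of day) to a predicted energy value. The weights below are random placeholders; in practice they would be trained on metered building data.

```python
# Toy Elman-style recurrent model for control-oriented energy prediction
# (untrained, with assumed inputs and sizes).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 16
Wx = rng.normal(0, 0.3, (n_hidden, n_in))
Wh = rng.normal(0, 0.3, (n_hidden, n_hidden))
wo = rng.normal(0, 0.3, n_hidden)

def predict(sequence):
    """sequence: array of shape (T, n_in); returns a predicted value for hour T+1."""
    h = np.zeros(n_hidden)
    for x_t in sequence:
        h = np.tanh(Wx @ x_t + Wh @ h)      # simple recurrent state update
    return float(wo @ h)

hours = np.arange(24)
inputs = np.stack([20 + 5 * np.sin(2 * np.pi * hours / 24),   # assumed temperature
                   hours / 23.0], axis=1)                      # normalized hour
print("next-hour prediction (untrained):", round(predict(inputs), 3))
```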

  9. Scalar dark matter, type II seesaw and the DAMPE cosmic ray e+ + e- excess

    NASA Astrophysics Data System (ADS)

    Li, Tong; Okada, Nobuchika; Shafi, Qaisar

    2018-04-01

    The DArk Matter Particle Explorer (DAMPE) has reported a measurement of the flux of high energy cosmic ray electrons plus positrons (CREs) in the energy range between 25 GeV and 4.6 TeV. With unprecedented high energy resolution, the DAMPE data exhibit an excess of the CREs flux at an energy of around 1.4 TeV. In this letter, we discuss how the observed excess can be understood in a minimal framework where the Standard Model (SM) is supplemented by a stable SM singlet scalar as dark matter (DM) and type II seesaw for generating the neutrino mass matrix. In our framework, a pair of DM particles annihilates into a pair of the SM SU(2) triplet scalars (Δs) in type II seesaw, and the subsequent Δ decays create the primary source of the excessive CREs around 1.4 TeV. The lepton flavor structure of the primary source of CREs has a direct relation with the neutrino oscillation data. We find that the DM interpretation of the DAMPE excess determines the pattern of neutrino mass spectrum to be the inverted hierarchy type, taking into account the constraints from the Fermi-LAT observations of dwarf spheroidal galaxies.

  10. Investigation of α-MnO2 Tunneled Structures as Model Cation Hosts for Energy Storage

    DOE PAGES

    Housel, Lisa M.; Wang, Lei; Abraham, Alyson; ...

    2018-02-19

    Future advances in energy storage systems rely on identification of appropriate target materials and deliberate synthesis of the target materials with control of their physiochemical properties in order to disentangle the contributions of distinct properties to the functional electrochemistry. Furthermore, this goal demands systematic inquiry using model materials that provide the opportunity for significant synthetic versatility and control. Ideally, a material family that enables direct manipulation of characteristics including composition, defects and crystallite size while remaining within the defined structural framework would be necessary. Accomplishing this through direct synthetic methods is desirable to minimize the complicating effects of secondary processing.

  11. A search for selectrons and squarks at HERA

    NASA Astrophysics Data System (ADS)

    Aid, S.; Andreev, V.; Andrieu, B.; Appuhn, R.-D.; Arpagaus, M.; Babaev, A.; Bähr, J.; Bán, J.; Ban, Y.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Barth, M.; Bassler, U.; Beck, H. P.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bernet, R.; Bertrand-Coremans, G.; Besançon, M.; Beyer, R.; Biddulph, P.; Bispham, P.; Bizot, J. C.; Blobel, V.; Borras, K.; Botterweck, F.; Boudry, V.; Braemer, A.; Braunschweig, W.; Brisson, V.; Bruel, P.; Bruncko, D.; Brune, C.; Buchholz, R.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Burton, M. J.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charlet, M.; Clarke, D.; Clegg, A. B.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Courau, A.; Cousinou, M.-C.; Cozzika, G.; Criegee, L.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; Davis, C. L.; Delcourt, B.; De Roeck, A.; De Wolf, E. A.; Dirkmann, M.; Dixon, P.; Di Nezza, P.; Dlugosz, W.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Düllmann, D.; Dünger, O.; Duhm, H.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellison, R. J.; Elsen, E.; Erdmann, M.; Erdmann, W.; Evrard, E.; Fahr, A. B.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Fominykh, B.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Gellrich, A.; Genzel, H.; Gerhards, R.; Glazov, A.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goldner, D.; Golec-Biernat, K.; Gonzalez-Pineiro, B.; Gorelov, I.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, A.; Gruber, C.; Haack, J.; Hadig, T.; Haidt, D.; Hajduk, L.; Hampel, M.; Haynes, W. J.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herynek, I.; Hess, M. F.; Hildesheim, W.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. L.; Hütte, M.; Hufnagel, H.; Ibbotson, M.; Itterbeck, H.; Jacholkowska, A.; Jacobsson, C.; Jaffre, M.; Janoth, J.; Jansen, T.; Jönsson, L.; Johannsen, K.; Johnson, D. P.; Johnson, L.; Jung, H.; Kalmus, P. I. P.; Kander, M.; Kant, D.; Kaschowitz, R.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kole, F.; Kolya, S. D.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Krüger, U.; Krüner-Marquis, U.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Lacour, D.; Laforge, B.; Lander, R.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Laporte, J.-F.; Lebedev, A.; Lehner, F.; Leverenz, C.; Levonian, S.; Ley, Ch.; Lindström, G.; Lindstroem, M.; Link, J.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Lohmander, H.; Lomas, J. W.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Magnussen, N.; Malinovski, E.; Mani, S.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Merz, T.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Moreau, F.; Morris, J. 
V.; Mroczko, E.; Müller, D.; Müller, G.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Newman, P. R.; Newton, D.; Neyret, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nisius, R.; Nowak, G.; Noyes, G. W.; Nyberg-Werther, M.; Oakden, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. P.; Pieuchot, A.; Pitzl, D.; Pope, G.; Prell, S.; Prosi, R.; Rabbertz, K.; Rädel, G.; Raupach, F.; Reimer, P.; Reinshagen, S.; Rick, H.; Riech, V.; Riedlberger, J.; Riepenhausen, F.; Riess, S.; Rizvi, E.; Robertson, S. M.; Robmann, P.; Roloff, H. E.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Sahlmann, N.; Sankey, D. P. C.; Schacht, P.; Scharein, S.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schnidt, D.; Schmidt, G.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Sefkow, F.; Seidel, M.; Sell, R.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Smirnov, P.; Smith, J. R.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Squinabol, F.; Starosta, R.; Steenbock, M.; Steffen, P.; Steinberg, R.; Steiner, H.; Stella, B.; Stellberger, A.; Stier, J.; Stiewe, J.; Stößlein, U.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Taševský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thiebaux, C.; Thompson, G.; Truöl, P.; Turnau, J.; Tutas, J.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; Vandenplas, D.; Van Esch, P.; Van Mechelen, P.; Vazdik, Y.; Verrechia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Walther, A.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wengler, T.; Werner, M.; West, L. R.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wünsch, E.; Žáček, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zomer, F.; Zsembery, J.; Zuber, K.; ZurNeden, M.; H1 Collaboration

    1996-02-01

    Data from electron-proton collisions at a center-of-mass energy of 300 GeV are used for a search for selectrons and squarks within the framework of the minimal supersymmetric model. The decays of selectrons and squarks into the lightest supersymmetric particle lead to final states with an electron and hadrons accompanied by large missing energy and transverse momentum. No signal is found and new bounds on the existence of these particles are derived. At 95% confidence level the excluded region extends to 65 GeV for selectron and squark masses, and to 40 GeV for the mass of the lightest supersymmetric particle.

  12. Investigation of α-MnO2 Tunneled Structures as Model Cation Hosts for Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Housel, Lisa M.; Wang, Lei; Abraham, Alyson

    Future advances in energy storage systems rely on identification of appropriate target materials and deliberate synthesis of the target materials with control of their physiochemical properties in order to disentangle the contributions of distinct properties to the functional electrochemistry. Furthermore, this goal demands systematic inquiry using model materials that provide the opportunity for significant synthetic versatility and control. Ideally, a material family that enables direct manipulation of characteristics including composition, defects and crystallite size while remaining within the defined structural framework would be necessary. Accomplishing this through direct synthetic methods is desirable to minimize the complicating effects of secondary processing.

  13. Relativistic fluid dynamics with spin

    NASA Astrophysics Data System (ADS)

    Florkowski, Wojciech; Friman, Bengt; Jaiswal, Amaresh; Speranza, Enrico

    2018-04-01

    Using the conservation laws for charge, energy, momentum, and angular momentum, we derive hydrodynamic equations for the charge density, local temperature, and fluid velocity, as well as for the polarization tensor, starting from local equilibrium distribution functions for particles and antiparticles with spin 1/2. The resulting set of differential equations extends the standard picture of perfect-fluid hydrodynamics with a conserved entropy current in a minimal way. This framework can be used in space-time analyses of the evolution of spin and polarization in various physical systems including high-energy nuclear collisions. We demonstrate that a stationary vortex, which exhibits vorticity-spin alignment, corresponds to a special solution of the spin-hydrodynamical equations.

  14. Augmented Lagrangian formulation of orbital-free density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu; Phanish, Deepa

    We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic Orbital-Free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas–Fermi–von Weizsacker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
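
    The augmented Lagrangian idea of replacing one constrained problem by a sequence of unconstrained ones can be shown on a toy quadratic problem (a stand-in for the electron-number constraint; the functional, sizes, and tolerances below are arbitrary and not the paper's): minimize f(u) = 0.5 u^T A u - b^T u subject to sum(u) = N by repeatedly minimizing L(u) = f(u) + lam*c(u) + (mu/2)*c(u)^2 and updating the multiplier.

```python
# Augmented Lagrangian on a toy equality-constrained quadratic (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n, N_target = 8, 4.0
A = rng.normal(size=(n, n)); A = A @ A.T + n * np.eye(n)   # SPD "Hessian"
b = rng.normal(size=n)

u, lam, mu = np.zeros(n), 0.0, 1.0
for outer in range(25):
    for _ in range(2000):                                   # unconstrained inner solve
        c = u.sum() - N_target
        grad = A @ u - b + (lam + mu * c) * np.ones(n)      # gradient of L(u)
        u -= grad / (np.linalg.norm(A, 2) + mu * n)         # safe step size
    c = u.sum() - N_target
    lam += mu * c                                           # multiplier update
    mu = min(2.0 * mu, 1.0e3)                               # tighten the penalty
    if abs(c) < 1e-6:
        break

print(f"constraint violation {abs(u.sum() - N_target):.1e}, "
      f"f(u) = {0.5 * u @ A @ u - b @ u:.4f}")
```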

  15. ng: What next-generation languages can teach us about HENP frameworks in the manycore era

    NASA Astrophysics Data System (ADS)

    Binet, Sébastien

    2011-12-01

    Current High Energy and Nuclear Physics (HENP) frameworks were written before multicore systems became widely deployed. A 'single-thread' execution model naturally emerged from that environment; however, this no longer fits the processing model at the dawn of the manycore era. Although previous work focused on minimizing the changes to be applied to the LHC frameworks (because of the data taking phase) while still trying to reap the benefits of the parallel-enhanced CPU architectures, this paper explores what new languages could bring to the design of the next-generation frameworks. Parallel programming is still in an intensive phase of R&D and no silver bullet exists despite the 30+ years of literature on the subject. Yet, several parallel programming styles have emerged: actors, message passing, communicating sequential processes, task-based programming, data flow programming, ... to name a few. We present the work of prototyping a next-generation framework in new and expressive languages (Python and Go) to investigate how code clarity and robustness are affected and what the downsides of using languages younger than FORTRAN/C/C++ are.

  16. Observer model optimization of a spectral mammography system

    NASA Astrophysics Data System (ADS)

    Fredenberg, Erik; Åslund, Magnus; Cederström, Björn; Lundqvist, Mats; Danielsson, Mats

    2010-04-01

    Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents from the material-specific energy dependence of x-ray attenuation. Contrast-enhanced spectral imaging has been thoroughly investigated, but unenhanced imaging may be more useful because it comes as a bonus to the conventional non-energy-resolved absorption image at screening; there is no additional radiation dose and no need for contrast medium. We have used a previously developed theoretical framework and system model that include quantum and anatomical noise to characterize the performance of a photon-counting spectral mammography system with two energy bins for unenhanced imaging. The theoretical framework was validated with synthesized images. Optimal combination of the energy-resolved images for detecting large unenhanced tumors corresponded closely, but not exactly, to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved by close to 50% compared to absorption imaging. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, deteriorated detectability. For small microcalcifications or tumors on uniform backgrounds, however, energy subtraction was suboptimal, whereas energy weighting provided a minute improvement. The performance was largely independent of beam quality, detector energy resolution, and bin count fraction. It is clear that inclusion of anatomical noise and the imaging task in spectral optimization may yield completely different results than an analysis based solely on quantum noise.
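
    As a rough numerical illustration of combining two energy bins, the sketch below forms a weighted difference of a low- and a high-energy image: the weight that cancels a structured background ('energy subtraction') generally differs from the weight that maximizes the signal-to-quantum-noise ratio ('energy weighting'). All array names and attenuation numbers are hypothetical.

        # Toy two-bin combination: choose w so the background term cancels in I_hi - w*I_lo.
        import numpy as np

        rng = np.random.default_rng(0)
        background = rng.normal(0.0, 1.0, (64, 64))            # anatomical clutter (synthetic)
        target = np.zeros((64, 64)); target[30:34, 30:34] = 1.0

        a_bg_lo, a_bg_hi = 1.0, 0.6    # assumed background contrasts in the two bins
        a_tg_lo, a_tg_hi = 1.0, 0.8    # assumed target contrasts in the two bins

        I_lo = a_bg_lo * background + a_tg_lo * target + rng.normal(0, 0.05, (64, 64))
        I_hi = a_bg_hi * background + a_tg_hi * target + rng.normal(0, 0.05, (64, 64))

        w = a_bg_hi / a_bg_lo          # cancels the background term exactly
        combined = I_hi - w * I_lo     # residual: attenuated target plus quantum noise only
        print("background std before:", np.std(a_bg_hi * background))
        print("background std after :", np.std(combined - (a_tg_hi - w * a_tg_lo) * target))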

  17. Powering Up: Assessing the growing municipal energy resilience building efforts in North America

    NASA Astrophysics Data System (ADS)

    Schimmelfing, Kara

    Energy related shortages and price volatilities can impact all levels of society. With coming fossil fuel depletion related to peak oil, it is expected these shortages and volatilities will increase in frequency, duration, and intensity. Resilience building is a strategy to minimize the effects of these events by modifying systems so they are less impacted and/or recover more quickly from disruptive events. Resilience building is being used, particularly at the municipal scale, to prepare for these coming energy related changes. These municipal efforts have only been in existence for five to ten years, and full implementation is still in progress. Little evaluation has been done of these municipal efforts to date, particularly in North America. Despite this, it is important to begin to assess the effectiveness of these efforts now, so that future efforts can be redirected to address weak areas and lessons learned by vanguard communities can be applied in other communities attempting to build energy resilience in the future. This thesis involved the creation of a hybrid framework to evaluate municipal energy resilience building efforts. The framework drew primarily from planning process and factors identified as important to build resilience in social-ecological systems. It consisted of the following categories to group resilience building efforts: Economy, Resource Systems & Infrastructure, Public Awareness, Social Services, Transportation, Built Environment, and Natural Environment. Within these categories the following process steps should be observed: Context, Goals, Needs, Processes, and Outcomes. This framework was then tested through application to four case-study communities (Bloomington, IN, Hamilton, ON, Oakland, CA, Victoria, BC) currently pursuing energy resilience building efforts in North America. This qualitative research involved document analysis, primarily of municipal documents related to energy planning efforts. Supplementary interviews were also conducted to verify the findings from the documents and illuminate anything not captured by them. Once the data were collected, categorized, and analyzed using the framework, comparisons were made between case studies. Results showed the framework to be a successful, but time consuming, tool for assessing municipal energy resilience building. Four revisions are recommended for the framework before further research. Analysis of the case study communities' efforts also identified five factors recommended for other communities attempting energy resilience planning at the municipal scale: consistent support from within the municipality, integration and information sharing, presence of key resources, access to information on energy use, and a two-tier planning process. Ultimately, as this is a preliminary attempt to address a young and growing area of municipal effort, there are many avenues for further research to build on the work of this thesis.

  18. Stochastic dark energy from inflationary quantum fluctuations

    NASA Astrophysics Data System (ADS)

    Glavan, Dražen; Prokopec, Tomislav; Starobinsky, Alexei A.

    2018-05-01

    We study the quantum backreaction from inflationary fluctuations of a very light, non-minimally coupled spectator scalar and show that it is a viable candidate for dark energy. The problem is solved by suitably adapting the formalism of stochastic inflation. This allows us to self-consistently account for the backreaction on the background expansion rate of the Universe where its effects are large. This framework is equivalent to that of semiclassical gravity in which matter vacuum fluctuations are included at the one loop level, but purely quantum gravitational fluctuations are neglected. Our results show that dark energy in our model can be characterized by a distinct effective equation of state parameter (as a function of redshift) which allows for testing of the model at the level of the background.

  19. On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies

    PubMed Central

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier

    2013-01-01

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582
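
    A minimal illustration of the assignment idea, not the paper's optimization model: greedily grant a renewable supply to the battery-powered node with the heaviest drain until every remaining battery node meets a target lifetime. Loads, capacities, and the lifetime rule below are hypothetical.

        # Greedy stand-in for the "primary node" assignment (hypothetical energy model).
        BATTERY_CAPACITY = 100.0    # energy units per battery node
        TARGET_LIFETIME = 50.0      # required lifetime in time units

        def assign_primaries(loads):
            """loads: dict node -> energy drain per time unit (relaying + sensing)."""
            primaries = set()
            lifetime = lambda n: BATTERY_CAPACITY / loads[n]
            while True:
                battery_nodes = [n for n in loads if n not in primaries]
                worst = min(battery_nodes, key=lifetime, default=None)
                if worst is None or lifetime(worst) >= TARGET_LIFETIME:
                    return primaries
                primaries.add(worst)    # grant this node a renewable supply

        loads = {"n1": 4.0, "n2": 1.5, "n3": 0.8, "n4": 2.5}
        print(assign_primaries(loads))   # nodes selected to receive renewable supplies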

  20. Data collection framework for energy efficient privacy preservation in wireless sensor networks having many-to-many structures.

    PubMed

    Bahşi, Hayretdin; Levi, Albert

    2010-01-01

    Wireless sensor networks (WSNs) generally have a many-to-one structure, so that event information flows from sensors to a unique sink. In recent WSN applications, many-to-many structures have evolved due to the need for conveying collected event information to multiple sinks. Privacy-preserving data collection models in the literature do not solve the problems of WSN applications in which the network has multiple untrusted sinks with different levels of privacy requirements. This study proposes a data collection framework based on k-anonymity for preventing record disclosure of collected event information in WSNs. The proposed method takes the anonymity requirements of multiple sinks into consideration by providing a different level of privacy for each destination sink. Attributes that may identify an event owner are generalized or encrypted in order to meet the different anonymity requirements of the sinks in the same anonymized output. If the same output is formed, it can be multicasted to all sinks. The other, trivial solution is to produce a different anonymized output for each sink and send it to the related sink. Multicasting is an energy-efficient data-sending alternative for some sensor nodes. Since minimization of energy consumption is an important design criterion for WSNs, multicasting the same event information to multiple sinks reduces the energy consumption of the overall network.
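
    The per-sink anonymity idea can be sketched as generalizing a quasi-identifier until each sink's k requirement is met; the generalization hierarchy and records below are hypothetical, and a real deployment would also encrypt suppressed attributes as the abstract describes.

        # Toy k-anonymity generalization of an "age" quasi-identifier; different sinks may demand different k.
        from collections import Counter

        def generalize_age(age, level):
            # level 0: exact age, level 1: 10-year band, level 2: suppressed
            if level == 0:
                return str(age)
            if level == 1:
                return f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"
            return "*"

        def anonymize(ages, k):
            for level in range(3):
                classes = Counter(generalize_age(a, level) for a in ages)
                if min(classes.values()) >= k:
                    return [generalize_age(a, level) for a in ages]
            return ["*"] * len(ages)

        ages = [23, 25, 41, 44, 47]
        print(anonymize(ages, k=2))   # output acceptable to a sink requiring k = 2
        print(anonymize(ages, k=3))   # coarser output for a sink requiring k = 3

    If the coarser output happens to satisfy every sink, it can be multicast once, which is where the energy saving described above comes from.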

  1. On maximizing the lifetime of Wireless Sensor Networks by optimally assigning energy supplies.

    PubMed

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; González-Castano, Francisco Javier

    2013-08-09

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively.

  2. Organic semiconductor density of states controls the energy level alignment at electrode interfaces

    PubMed Central

    Oehzelt, Martin; Koch, Norbert; Heimel, Georg

    2014-01-01

    Minimizing charge carrier injection barriers and extraction losses at interfaces between organic semiconductors and metallic electrodes is critical for optimizing the performance of organic (opto-) electronic devices. Here, we implement a detailed electrostatic model, capable of reproducing the alignment between the electrode Fermi energy and the transport states in the organic semiconductor both qualitatively and quantitatively. Covering the full phenomenological range of interfacial energy level alignment regimes within a single, consistent framework and continuously connecting the limiting cases described by previously proposed models allows us to resolve conflicting views in the literature. Our results highlight the density of states in the organic semiconductor as a key factor. Its shape and, in particular, the energy distribution of electronic states tailing into the fundamental gap is found to determine both the minimum value of practically achievable injection barriers as well as their spatial profile, ranging from abrupt interface dipoles to extended band-bending regions. PMID:24938867

  3. Optimizing Wellfield Operation in a Variable Power Price Regime.

    PubMed

    Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus

    2016-01-01

    Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets, and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated the EFP of pumped water as a function of wellfield pumping rate (EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a Stochastic Dynamic Programming (SDP) framework to minimize the total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period, based on a time series of observed prices on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between the scenario and a constant-rate pumping benchmark. Minor savings up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings up to 40%. Key factors determining potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate and the capacity of available storage facilities. © 2015 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
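
    The cost-minimizing scheduling idea can be illustrated with a small dynamic program over discretized storage levels and hourly prices; the constant energy footprint, demands, and prices below are hypothetical placeholders rather than the Danish case-study data.

        # Toy DP: choose hourly pumping to meet demand from storage at minimum electricity cost.
        import math

        prices = [30, 10, 50, 20]      # electricity price per MWh in each hour (hypothetical)
        demand = [2, 2, 2, 2]          # water units withdrawn each hour
        EFP = 0.5                      # MWh per water unit (constant energy footprint)
        MAX_PUMP, MAX_STORE = 4, 6     # per-hour pumping limit and storage capacity

        def plan(storage0=2):
            cost = {storage0: 0.0}                     # cost[s]: cheapest way to end the hour at storage s
            for h, price in enumerate(prices):
                nxt = {}
                for s, c in cost.items():
                    for q in range(MAX_PUMP + 1):      # pumping decision this hour
                        s_new = s + q - demand[h]
                        if 0 <= s_new <= MAX_STORE:
                            c_new = c + q * EFP * price
                            if c_new < nxt.get(s_new, math.inf):
                                nxt[s_new] = c_new
                cost = nxt
            return min(cost.values())

        print("minimal pumping cost:", plan())   # pumping shifts toward the cheap hours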

  4. Optimal Management and Design of Energy Systems under Atmospheric Uncertainty

    NASA Astrophysics Data System (ADS)

    Anitescu, M.; Constantinescu, E. M.; Zavala, V.

    2010-12-01

    The generation and dispatch of electricity while maintaining high reliability levels are two of the most daunting engineering problems of the modern era. This was demonstrated by the Northeast blackout of August 2003, which caused the loss of 6.2 gigawatts serving more than 50 million people and economic losses on the order of $10 billion. In addition, there are strong socioeconomic pressures to improve the efficiency of the grid. The most prominent solution to this problem is a substantial increase in the use of renewable energy such as wind and solar. In turn, its uncertain availability, which is due to the intrinsic weather variability, will increase the likelihood of disruptions. In these endeavors of current and next-generation power systems, forecasting atmospheric conditions with uncertainty can and will play a central role, at both the demand and the generation ends. User demands are strongly correlated with physical conditions such as temperature, humidity, and solar radiation, because the ambient temperature and solar radiation dictate the amount of air conditioning and lighting needed in residential and commercial buildings. But these potential benefits would come at the expense of increased variability in the dynamics of both production and demand, which would become even more dependent on the weather state and its uncertainty. One of the important challenges for energy in our time is how to harness these benefits while “keeping the lights on”: ensuring that the demand is satisfied at all times and that no blackout occurs while all energy sources are optimally used. If we are to meet this challenge, accounting for uncertainty in the atmospheric conditions is essential, since this will allow minimizing the effects of false positives: committing too little baseline power in anticipation of demand that is underestimated or renewable energy levels that fail to materialize. In this work we describe a framework for the optimal management and design of energy systems, such as the power grid or building systems, under atmospheric uncertainty. The framework is defined in terms of a mathematical paradigm called stochastic programming: minimization of the expected value of the decision-maker's objective function subject to physical and operational constraints, such as a low blackout probability, that are enforced on each scenario. We report results on testing the framework on the optimal management of power grid systems under high wind penetration scenarios, a problem whose time horizon is on the order of days. We discuss the computational effort of scenario generation, which involves running WRF at the high spatio-temporal resolution dictated by the operational constraints, as well as solving the optimal dispatch problem. We demonstrate that accounting for uncertainty in atmospheric conditions results in blackout prevention, whereas decisions using only the mean forecast do not. We discuss issues in using the framework for planning problems, whose time horizons span several decades, and the requirements such problems would impose on climate simulation systems.
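
    In generic two-stage form (our notation, not the authors'), the stochastic programming paradigm described here is

        \min_{x \in X} \; c^{\top} x \; + \; \mathbb{E}_{\xi}\big[ Q(x, \xi) \big],
        \qquad
        Q(x, \xi) \; = \; \min_{y \ge 0} \big\{ q(\xi)^{\top} y \; : \; W y \ge h(\xi) - T x \big\},

    where x collects the here-and-now commitment decisions, ξ indexes the atmospheric scenarios (e.g., weather-model ensemble members), and y the recourse dispatch decisions; constraints such as demand satisfaction and a low blackout probability are imposed scenario by scenario.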

  5. Worst-Case Cooperative Jamming for Secure Communications in CIoT Networks.

    PubMed

    Li, Zhen; Jing, Tao; Ma, Liran; Huo, Yan; Qian, Jin

    2016-03-07

    The Internet of Things (IoT) is a significant branch of the ongoing advances in the Internet and mobile communications. The use of a large number of IoT devices makes the spectrum scarcity problem even more serious. The usable spectrum resources are almost entirely occupied, and thus, the increasing radio access demands of IoT devices cannot be met. To tackle this problem, the Cognitive Internet of Things (CIoT) has been proposed. In a CIoT network, secondary users, i.e., sensors and actuators, can access the licensed spectrum bands provided by licensed primary users (such as telephones). Security is a major concern in CIoT networks. However, the traditional encryption method at upper layers (such as symmetric cryptography and asymmetric cryptography) may be compromised in CIoT networks, since these types of networks are heterogeneous. In this paper, we address the security issue in spectrum-leasing-based CIoT networks using physical layer methods. Considering that CIoT networks are cooperative networks, we propose to employ cooperative jamming to achieve secrecy transmission. In the cooperative jamming scheme, a certain secondary user is employed as the helper to harvest energy transmitted by the source and then uses the harvested energy to generate an artificial noise that jams the eavesdropper without interfering with the legitimate receivers. The goal is to minimize the signal to interference plus noise ratio (SINR) at the eavesdropper subject to the quality of service (QoS) constraints of the primary traffic and the secondary traffic. We formulate the considered minimization problem into a two-stage robust optimization problem based on the worst-case Channel State Information of the Eavesdropper. By using semi-definite programming (SDP), the optimal transmit covariance matrices can be obtained. Moreover, in order to build an incentive mechanism for the secondary users, we propose an auction framework based on the cooperative jamming scheme. The proposed auction framework jointly formulates the helper selection and the corresponding energy allocation problems under the constraint of the eavesdropper's SINR. By adopting the Vickrey auction, truthfulness and individual rationality can be guaranteed. Simulation results demonstrate the good performance of the cooperative jamming scheme and the auction framework.
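
    In generic form (our notation, not the paper's), the worst-case design problem sketched above can be written as

        \min_{\mathbf{Q}_s \succeq 0, \, \mathbf{Q}_j \succeq 0} \;
        \max_{\mathbf{h}_e \in \mathcal{H}_e} \;
        \frac{\mathbf{h}_e^{H} \mathbf{Q}_s \mathbf{h}_e}
             {\mathbf{h}_e^{H} \mathbf{Q}_j \mathbf{h}_e + \sigma_e^{2}}
        \quad \mathrm{s.t.} \quad
        \mathrm{SINR}_p \ge \gamma_p, \;
        \mathrm{SINR}_s \ge \gamma_s, \;
        \mathrm{tr}(\mathbf{Q}_s) + \mathrm{tr}(\mathbf{Q}_j) \le P_{\max},

    where Q_s and Q_j are the source and jamming covariance matrices and H_e is the uncertainty set for the eavesdropper's channel; problems of this type are commonly relaxed to semidefinite programs, consistent with the SDP solution described in the record.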

  6. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori (MAP)-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of the MRF optimization, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
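
    A minimal simulated annealing loop over a toy pixel-labeling energy (a plain data term plus 4-neighbour smoothness, not the paper's GR-constrained model) illustrates the MAP-MRF search; every parameter here is hypothetical.

        # Toy simulated annealing for an MRF energy on a binary labeling.
        import numpy as np

        rng = np.random.default_rng(1)
        obs = rng.random((32, 32))                   # observed image, values in [0, 1]
        labels = (obs > 0.5).astype(float)           # initial binary labeling
        beta = 1.0                                   # smoothness weight (hypothetical)

        def energy_delta(lab, obs, i, j, new):
            old = lab[i, j]
            d = (obs[i, j] - new) ** 2 - (obs[i, j] - old) ** 2       # data term change
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):         # smoothness change
                ni, nj = i + di, j + dj
                if 0 <= ni < lab.shape[0] and 0 <= nj < lab.shape[1]:
                    d += beta * (float(new != lab[ni, nj]) - float(old != lab[ni, nj]))
            return d

        T = 1.0
        for sweep in range(50):
            for i in range(32):
                for j in range(32):
                    new = 1.0 - labels[i, j]
                    dE = energy_delta(labels, obs, i, j, new)
                    if dE < 0 or rng.random() < np.exp(-dE / T):      # Metropolis acceptance
                        labels[i, j] = new
            T *= 0.9                                                  # cooling schedule
        print("fraction labeled 1:", labels.mean())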

  7. Green Buildings and Health.

    PubMed

    Allen, Joseph G; MacNaughton, Piers; Laurent, Jose Guillermo Cedeno; Flanigan, Skye S; Eitland, Erika Sita; Spengler, John D

    2015-09-01

    Green building design is becoming broadly adopted, with one green building standard reporting over 3.5 billion square feet certified to date. By definition, green buildings focus on minimizing impacts to the environment through reductions in energy usage, water usage, and minimizing environmental disturbances from the building site. Also by definition, but perhaps less widely recognized, green buildings aim to improve human health through design of healthy indoor environments. The benefits related to reduced energy and water consumption are well-documented, but the potential human health benefits of green buildings are only recently being investigated. The objective of our review was to examine the state of evidence on green building design as it specifically relates to indoor environmental quality and human health. Overall, the initial scientific evidence indicates better indoor environmental quality in green buildings versus non-green buildings, with direct benefits to human health for occupants of those buildings. A limitation of much of the research to date is the reliance on indirect, lagging and subjective measures of health. To address this, we propose a framework for identifying direct, objective and leading "Health Performance Indicators" for use in future studies of buildings and health.

  8. Sigma decomposition: the CP-odd Lagrangian

    NASA Astrophysics Data System (ADS)

    Hierro, I. M.; Merlo, L.; Rigolin, S.

    2016-04-01

    In Alonso et al., JHEP 12 (2014) 034, the CP-even sector of the effective chiral Lagrangian for a generic composite Higgs model with a symmetric coset has been constructed, up to four momenta. In this paper, the CP-odd couplings are studied within the same context. If only the Standard Model bosonic sources of custodial symmetry breaking are considered, then at most six independent operators form a basis. One of them is the weak-θ term linked to non-perturbative sources of CP violation, while the others describe CP-odd perturbative couplings between the Standard Model gauge bosons and a Higgs-like scalar belonging to the Goldstone boson sector. The procedure is then applied to three distinct exemplifying frameworks: the original SU(5)/SO(5) Georgi-Kaplan model, the minimal custodial-preserving SO(5)/SO(4) model and the minimal SU(3)/(SU(2) × U(1)) model, which intrinsically breaks custodial symmetry. Moreover, the projection of the high-energy electroweak effective theory to the low-energy chiral effective Lagrangian for a dynamical Higgs is performed, uncovering strong relations between the operator coefficients and pinpointing the differences with the elementary Higgs scenario.

  9. Supergravity contributions to inflation in models with non-minimal coupling to gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Kumar; Dutta, Koushik; Domcke, Valerie, E-mail: kumar.das@saha.ac.in, E-mail: valerie.domcke@apc.univ-paris7.fr, E-mail: koushik.dutta@saha.ac.in

    2017-03-01

    This paper provides a systematic study of supergravity contributions relevant for inflationary model building in Jordan frame supergravity. In this framework, canonical kinetic terms in the Jordan frame result in the separation of the Jordan frame scalar potential into a tree-level term and a supergravity contribution which is potentially dangerous for sustaining inflation. We show that if the vacuum energy necessary for driving inflation originates dominantly from the F-term of an auxiliary field (i.e. not the inflaton), the supergravity corrections to the Jordan frame scalar potential are generically suppressed. Moreover, these supergravity contributions identically vanish if the superpotential vanishes along the inflationary trajectory. On the other hand, if the F-term associated with the inflaton dominates the vacuum energy, the supergravity contributions are generically comparable to the globally supersymmetric contributions. In addition, the non-minimal coupling to gravity inherent to Jordan frame supergravity significantly impacts the inflationary model depending on the size and sign of this coupling. We discuss the phenomenology of some representative inflationary models, and point out the relation to the recently much discussed cosmological 'attractor' models.

  10. Semiautomatic tumor segmentation with multimodal images in a conditional random field framework.

    PubMed

    Hu, Yu-Chi; Grossberg, Michael; Mageras, Gikas

    2016-04-01

    Volumetric medical images of a single subject can be acquired using different imaging modalities, such as computed tomography, magnetic resonance imaging (MRI), and positron emission tomography. In this work, we present a semiautomatic segmentation algorithm that can leverage the synergies between different image modalities while integrating interactive human guidance. The algorithm provides a statistical segmentation framework partly automating the segmentation task while still maintaining critical human oversight. The statistical models presented are trained interactively using simple brush strokes to indicate tumor and nontumor tissues and using intermediate results within a patient's image study. To accomplish the segmentation, we construct the energy function in the conditional random field (CRF) framework. For each slice, the energy function is set using the estimated probabilities from both user brush stroke data and prior approved segmented slices within a patient study. The progressive segmentation is obtained using a graph-cut-based minimization. Although no similar semiautomated algorithm is currently available, we evaluated our method with an MRI data set from the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society multimodal brain segmentation challenge (BRATS 2012 and 2013) against a similar fully automatic method based on CRF and a semiautomatic method based on grow-cut, and our method shows superior performance.
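
    A compact illustration of the graph-cut step, assuming the third-party PyMaxflow package; the unary terms here are simple intensity costs rather than the interactively trained probabilities described in the record.

        # Toy binary segmentation by graph cut: intensity-based unary terms plus
        # a 4-connected smoothness term. Assumes the PyMaxflow package is installed.
        import numpy as np
        import maxflow

        rng = np.random.default_rng(2)
        img = np.clip(rng.normal(0.3, 0.1, (64, 64)), 0.0, 1.0)
        img[20:40, 20:40] += 0.4                      # bright "tumor-like" region (synthetic)

        g = maxflow.Graph[float]()
        nodes = g.add_grid_nodes(img.shape)
        g.add_grid_edges(nodes, 2.0)                  # pairwise smoothness weight (hypothetical)
        g.add_grid_tedges(nodes, (img - 0.3) ** 2, (img - 0.7) ** 2)   # terminal capacities

        g.maxflow()
        segmentation = g.get_grid_segments(nodes)     # boolean segment membership per pixel
        print("pixels in one segment:", int(segmentation.sum()))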

  11. Energy contribution of NOVA food groups and sociodemographic determinants of ultra-processed food consumption in the Mexican population.

    PubMed

    Marrón-Ponce, Joaquín A; Sánchez-Pimienta, Tania G; Louzada, Maria Laura da Costa; Batis, Carolina

    2018-01-01

    To identify the energy contributions of NOVA food groups in the Mexican diet and the associations between individual sociodemographic characteristics and the energy contribution of ultra-processed foods (UPF). We classified foods and beverages reported in a 24 h recall according to the NOVA food framework into: (i) unprocessed or minimally processed foods; (ii) processed culinary ingredients; (iii) processed foods; and (iv) UPF. We estimated the energy contribution of each food group and ran a multiple linear regression to identify the associations between sociodemographic characteristics and UPF energy contribution. Mexican National Health and Nutrition Survey 2012. Individuals ≥1 year old (n = 10 087). Unprocessed or minimally processed foods had the highest dietary energy contribution (54·0 % of energy), followed by UPF (29·8 %), processed culinary ingredients (10·2 %) and processed foods (6·0 %). The energy contribution of UPF was higher in: pre-school-aged children v. other age groups (3·8 to 12·5 percentage points difference (pp)); urban areas v. rural (5·6 pp); the Central and North regions v. the South (2·7 and 8·4 pp, respectively); medium and high socio-economic status v. low (4·5 pp, in both); and with higher head of household educational level v. without education (3·4 to 7·8 pp). In 2012, about 30 % of energy in the Mexican diet came from UPF. Our results showed that younger ages, urbanization, living in the North region, high socio-economic status and high head of household educational level are sociodemographic factors related to higher consumption of UPF in Mexico.

  12. Catching the right wave: evaluating wave energy resources and potential compatibility with existing marine and coastal uses.

    PubMed

    Kim, Choong-Ki; Toft, Jodie E; Papenfus, Michael; Verutes, Gregory; Guerry, Anne D; Ruckelshaus, Marry H; Arkema, Katie K; Guannel, Gregory; Wood, Spencer A; Bernhardt, Joanna R; Tallis, Heather; Plummer, Mark L; Halpern, Benjamin S; Pinsky, Malin L; Beck, Michael W; Chan, Francis; Chan, Kai M A; Levin, Phil S; Polasky, Stephen

    2012-01-01

    Many hope that ocean waves will be a source for clean, safe, reliable and affordable energy, yet wave energy conversion facilities may affect marine ecosystems through a variety of mechanisms, including competition with other human uses. We developed a decision-support tool to assist siting wave energy facilities, which allows the user to balance the need for profitability of the facilities with the need to minimize conflicts with other ocean uses. Our wave energy model quantifies harvestable wave energy and evaluates the net present value (NPV) of a wave energy facility based on a capital investment analysis. The model has a flexible framework and can be easily applied to wave energy projects at local, regional, and global scales. We applied the model and compatibility analysis on the west coast of Vancouver Island, British Columbia, Canada to provide information for ongoing marine spatial planning, including potential wave energy projects. In particular, we conducted a spatial overlap analysis with a variety of existing uses and ecological characteristics, and a quantitative compatibility analysis with commercial fisheries data. We found that wave power and harvestable wave energy gradually increase offshore as wave conditions intensify. However, areas with high economic potential for wave energy facilities were closer to cable landing points because of the cost of bringing energy ashore and thus in nearshore areas that support a number of different human uses. We show that the maximum combined economic benefit from wave energy and other uses is likely to be realized if wave energy facilities are sited in areas that maximize wave energy NPV and minimize conflict with existing ocean uses. Our tools will help decision-makers explore alternative locations for wave energy facilities by mapping expected wave energy NPV and helping to identify sites that provide maximal returns yet avoid spatial competition with existing ocean uses.
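
    The capital-investment side of such a siting analysis reduces, per candidate site, to a net present value calculation; the sketch below is a generic NPV with hypothetical numbers, not the tool's actual cost model.

        # Generic NPV for a candidate wave energy site: discounted net revenue minus a capital
        # cost that grows with distance to the cable landing point (all numbers hypothetical).
        def site_npv(annual_energy_mwh, price_per_mwh, distance_to_landing_km,
                     years=25, discount_rate=0.07,
                     capex_base=2.0e7, capex_per_km=5.0e5, opex_fraction=0.03):
            capex = capex_base + capex_per_km * distance_to_landing_km
            annual_net = annual_energy_mwh * price_per_mwh - opex_fraction * capex
            discounted = sum(annual_net / (1.0 + discount_rate) ** t for t in range(1, years + 1))
            return discounted - capex

        # Nearshore: less harvestable energy but cheaper cabling; offshore: the reverse.
        print("nearshore NPV:", round(site_npv(30_000, 80, 5)))
        print("offshore  NPV:", round(site_npv(40_000, 80, 40)))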

  13. Catching the Right Wave: Evaluating Wave Energy Resources and Potential Compatibility with Existing Marine and Coastal Uses

    PubMed Central

    Kim, Choong-Ki; Toft, Jodie E.; Papenfus, Michael; Verutes, Gregory; Guerry, Anne D.; Ruckelshaus, Marry H.; Arkema, Katie K.; Guannel, Gregory; Wood, Spencer A.; Bernhardt, Joanna R.; Tallis, Heather; Plummer, Mark L.; Halpern, Benjamin S.; Pinsky, Malin L.; Beck, Michael W.; Chan, Francis; Chan, Kai M. A.; Levin, Phil S.; Polasky, Stephen

    2012-01-01

    Many hope that ocean waves will be a source for clean, safe, reliable and affordable energy, yet wave energy conversion facilities may affect marine ecosystems through a variety of mechanisms, including competition with other human uses. We developed a decision-support tool to assist siting wave energy facilities, which allows the user to balance the need for profitability of the facilities with the need to minimize conflicts with other ocean uses. Our wave energy model quantifies harvestable wave energy and evaluates the net present value (NPV) of a wave energy facility based on a capital investment analysis. The model has a flexible framework and can be easily applied to wave energy projects at local, regional, and global scales. We applied the model and compatibility analysis on the west coast of Vancouver Island, British Columbia, Canada to provide information for ongoing marine spatial planning, including potential wave energy projects. In particular, we conducted a spatial overlap analysis with a variety of existing uses and ecological characteristics, and a quantitative compatibility analysis with commercial fisheries data. We found that wave power and harvestable wave energy gradually increase offshore as wave conditions intensify. However, areas with high economic potential for wave energy facilities were closer to cable landing points because of the cost of bringing energy ashore and thus in nearshore areas that support a number of different human uses. We show that the maximum combined economic benefit from wave energy and other uses is likely to be realized if wave energy facilities are sited in areas that maximize wave energy NPV and minimize conflict with existing ocean uses. Our tools will help decision-makers explore alternative locations for wave energy facilities by mapping expected wave energy NPV and helping to identify sites that provide maximal returns yet avoid spatial competition with existing ocean uses. PMID:23144824

  14. Minimal but non-minimal inflation and electroweak symmetry breaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzola, Luca; Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu; Racioppi, Antonio

    2016-10-07

    We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet, that plays the rôle of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10⁻³, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  15. Joint Entropy Minimization for Learning in Nonparametric Framework

    DTIC Science & Technology

    2006-06-09

  16. Critical Watersheds: Climate Change, Tipping Points, and Energy-Water Impacts

    NASA Astrophysics Data System (ADS)

    Middleton, R. S.; Brown, M.; Coon, E.; Linn, R.; McDowell, N. G.; Painter, S. L.; Xu, C.

    2014-12-01

    Climate change, extreme climate events, and climate-induced disturbances will have a substantial and detrimental impact on terrestrial ecosystems. How ecosystems respond to these impacts will, in turn, have a significant effect on the quantity, quality, and timing of water supply for energy security, agriculture, industry, and municipal use. As a community, we lack sufficient quantitative and mechanistic understanding of the complex interplay between climate extremes (e.g., drought, floods), ecosystem dynamics (e.g., vegetation succession), and disruptive events (e.g., wildfire) to assess ecosystem vulnerabilities and to design mitigation strategies that minimize or prevent catastrophic ecosystem impacts. Through a combination of experimental and observational science and modeling, we are developing a unique multi-physics ecohydrologic framework for understanding and quantifying feedbacks between novel climate and extremes, surface and subsurface hydrology, ecosystem dynamics, and disruptive events in critical watersheds. The simulation capability integrates and advances coupled surface-subsurface hydrology from the Advanced Terrestrial Simulator (ATS), dynamic vegetation succession from the Ecosystem Demography (ED) model, and QUICFIRE, a novel wildfire behavior model developed from the FIRETEC platform. These advances are expected to make extensive contributions to the literature and to earth system modeling. The framework is designed to predict, quantify, and mitigate the impacts of climate change on vulnerable watersheds, with a focus on the US Mountain West and the energy-water nexus. This emerging capability is used to identify tipping points in watershed ecosystems, quantify impacts on downstream users, and formally evaluate mitigation efforts including forest treatments (e.g., thinning, prescribed burns) and watershed treatments (e.g., slope stabilization). The framework is being trained, validated, and demonstrated using field observations and remote data collections in the Valles Caldera National Preserve, including pre- and post-wildfire and infestation observations. Ultimately, the framework will be applied to the upper Colorado River basin. Here, we present an overview of the framework development strategy and latest field and modeling results.

  17. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technologies for capturing panoramic (360-degree) three-dimensional information in a real environment have many applications in fields such as virtual and mixed reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama that has spatial resolution equal to that of the original images acquired using the regular camera, and we also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded into an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those achievable with conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
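
    Schematically (our notation, not the authors'), such a fusion energy over the panoramic depth map d can be written as

        E(d) = \sum_{p} \rho_{\mathrm{stereo}}(d_p)
             + \lambda_{\mathrm{laser}} \sum_{p \in \mathcal{L}} \big( d_p - d_p^{\mathrm{laser}} \big)^{2}
             + \lambda_{\mathrm{smooth}} \sum_{(p,q) \in \mathcal{N}} \psi(d_p - d_q),

    where L is the set of pixels with laser returns, N a neighbourhood system, and ρ and ψ data and smoothness penalties; the reconstructed depth map is the minimizer of E.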

  18. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection.

    PubMed

    Ding, Hong; Dwaraknath, Shyam S; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first principle calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy above hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for the experimental efforts stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.
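
    The geometric matching and strain-energy criteria can be illustrated with a simple isotropic estimate; the lattice parameters, modulus, and substrate names below are placeholders, not values from the study.

        # Toy substrate screening: biaxial misfit strain from the in-plane lattice mismatch and
        # an isotropic strain energy density estimate u = E/(1 - nu) * eps**2 (placeholder values).
        def misfit_strain(a_film, a_substrate):
            return (a_substrate - a_film) / a_film

        def strain_energy_density(eps, youngs_modulus=1.5e11, poisson=0.3):
            return youngs_modulus / (1.0 - poisson) * eps ** 2     # J/m^3 for biaxial strain

        a_film = 4.55                                   # hypothetical film lattice parameter (angstrom)
        candidates = {"substrate_A": 4.59, "substrate_B": 4.74, "substrate_C": 5.05}

        for name, a_sub in sorted(candidates.items(),
                                  key=lambda kv: abs(misfit_strain(a_film, kv[1]))):
            eps = misfit_strain(a_film, a_sub)
            print(f"{name}: misfit {eps:+.3%}, strain energy ~{strain_energy_density(eps):.2e} J/m^3")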

  19. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    DOE PAGES

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; ...

    2016-05-04

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first principle calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. Here, we find that common polymorph statistics, lattice matching, and energy above hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. Our criteria serve as preliminary guidance for the experimental efforts stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.

  20. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first principle calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy above hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for the experimental efforts stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.

  1. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first principle calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. Here, we find that common polymorph statistics, lattice matching, and energy above hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. Our criteria serve as preliminary guidance for the experimental efforts stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.

  2. Energy aware path planning in complex four dimensional environments

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Anjan

    This dissertation addresses the problem of energy-aware path planning for small autonomous vehicles. While small autonomous vehicles can perform missions that are too risky (or infeasible) for larger vehicles, the missions are limited by the amount of energy that can be carried on board the vehicle. Path planning techniques that either minimize energy consumption or exploit energy available in the environment can thus increase range and endurance. Path planning is complicated by significant spatial (and potentially temporal) variations in the environment. While the main focus is on autonomous aircraft, this research also addresses autonomous ground vehicles. Range and endurance of small unmanned aerial vehicles (UAVs) can be greatly improved by utilizing energy from the atmosphere. Wind can be exploited to minimize the energy consumption of a small UAV. But wind, like any other atmospheric component, is a space- and time-varying phenomenon. To effectively use wind for long range missions, both exploration and exploitation of wind are critical. This research presents a kinematics-based tree algorithm which efficiently handles the four-dimensional (three spatial dimensions plus time) path planning problem. The Kinematic Tree algorithm provides a sequence of waypoints, airspeeds, heading and bank angle commands for each segment of the path. The planner is shown to be resolution complete and computationally efficient. Global optimality of the cost function cannot be claimed, as energy is gained from the atmosphere, making the cost function inadmissible. However the Kinematic Tree is shown to be optimal up to resolution if the cost function is admissible. Simulation results show the efficacy of this planning method for a glider in complex real wind data. Simulation results verify that the planner is able to extract energy from the atmosphere, enabling long range missions. The Kinematic Tree planning framework, developed to minimize energy consumption of UAVs, is applied to path planning in ground robots. In the traditional path planning problem, the focus is on obstacle avoidance and navigation. The optimal Kinematic Tree algorithm, named Kinematic Tree*, is shown to find optimal paths to reach the destination while avoiding obstacles. A more challenging path planning scenario arises for planning in complex terrain. This research shows how the Kinematic Tree* algorithm can be extended to find minimum energy paths for a ground vehicle in difficult mountainous terrain.
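
    The energy-aware expansion idea, where the cost of a move is the energy spent minus any energy recovered from the wind field, can be illustrated with a simple grid search; the wind field, energy model, and grid below are hypothetical and far simpler than the Kinematic Tree planner described here.

        # Toy energy-aware planner: Dijkstra with edge cost = baseline energy per move minus
        # assistance from a hypothetical tailwind band, clipped so costs stay non-negative.
        import heapq

        WIND = {(x, y): 0.9 if y >= 1 else 0.0 for x in range(5) for y in range(5)}
        BASE_COST = 1.0

        def neighbors(cell):
            x, y = cell
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < 5 and 0 <= ny < 5:
                    yield (nx, ny)

        def energy_cost(a, b):
            assist = WIND[b] if b[0] > a[0] else 0.0   # wind helps only eastward moves here
            return max(BASE_COST - assist, 0.0)

        def plan(start=(0, 0), goal=(4, 0)):
            dist, frontier = {start: 0.0}, [(0.0, start)]
            while frontier:
                d, cell = heapq.heappop(frontier)
                if cell == goal:
                    return d
                if d > dist.get(cell, float("inf")):
                    continue
                for nb in neighbors(cell):
                    nd = d + energy_cost(cell, nb)
                    if nd < dist.get(nb, float("inf")):
                        dist[nb] = nd
                        heapq.heappush(frontier, (nd, nb))

        print("minimum energy to goal:", plan())   # the cheapest route detours through the tailwind band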

  3. Infusing Climate and Energy Literacy Throughout the Curriculum: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    McCaffrey, M. S.

    2012-12-01

    Climate change and human activities, particularly fossil fuel energy consumption (both related and crosscutting concepts vital to addressing 21st century societal challenges), are largely missing from traditional science education curriculum and standards. Whether due to deliberate misinformation, efforts to "teach the controversy", lack of teacher training and professional development, or the limited availability of engaging resources, students have for decades graduated from high school and even college without learning the basics of how human activities, particularly our reliance on fossil fuels, impact the environment in general and the climate system in particular. The Climate Literacy and Energy Literacy frameworks and related curricula, as well as the Next Generation Science Standards (NGSS) and other innovative initiatives, provide new tools for educators and learners that hold strong potential for helping infuse these important topics across the curriculum and thereby better prepare society to minimize human impacts on the planet and prepare for changes that are already well underway.

  4. Echo of interactions in the dark sector

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh; Nunes, Rafael C.

    2017-11-01

    We investigate the observational constraints on an interacting vacuum energy scenario with two different neutrino schemes (with and without a sterile neutrino) using the most recent data from cosmic microwave background (CMB) temperature and polarization anisotropy, baryon acoustic oscillations (BAO), type Ia supernovae from the JLA sample, and structure growth inferred from cluster counts. We find that inclusion of the galaxy cluster data with the minimal data combination CMB + BAO + JLA suggests an interaction in the dark sector, implying the decay of dark matter particles into dark energy, since the constraints obtained by including the galaxy cluster data yield a nonzero and negative coupling parameter between the dark components at the 99% confidence level. We deduce that the current tensions on the parameters H0 and σ8 can be alleviated within the framework of the interacting as well as noninteracting vacuum energy models with sterile neutrinos.
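
    One common way to parameterize such a dark-sector energy exchange (schematic, not necessarily the exact form used in the paper) is through coupled continuity equations for dark matter and vacuum energy,

        \dot{\rho}_{\mathrm{dm}} + 3 H \rho_{\mathrm{dm}} = Q, \qquad
        \dot{\rho}_{\mathrm{de}} = -Q, \qquad
        Q = \xi H \rho_{\mathrm{de}},

    where a negative coupling ξ corresponds to energy flowing from dark matter into the vacuum component, matching the sign preference reported above.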

  5. Generalized virial theorem for massless electrons in graphene and other Dirac materials

    NASA Astrophysics Data System (ADS)

    Sokolik, A. A.; Zabolotskiy, A. D.; Lozovik, Yu. E.

    2016-05-01

    The virial theorem for a system of interacting electrons in a crystal, which is described within the framework of the tight-binding model, is derived. We show that, in the particular case of interacting massless electrons in graphene and other Dirac materials, the conventional virial theorem is violated. Starting from the tight-binding model, we derive the generalized virial theorem for Dirac electron systems, which contains an additional term associated with a momentum cutoff at the bottom of the energy band. Additionally, we derive the generalized virial theorem within the Dirac model using the minimization of the variational energy. The obtained theorem is illustrated by many-body calculations of the ground-state energy of an electron gas in graphene carried out in Hartree-Fock and self-consistent random-phase approximations. Experimental verification of the theorem in the case of graphene is discussed.
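
    For orientation, the conventional virial theorem that fails here reads, for Coulomb-interacting particles (schematic notation),

        2 \langle T \rangle = - \langle V_{\mathrm{Coulomb}} \rangle,

    whereas for massless Dirac electrons the kinetic term scales with inverse length in the same way as the Coulomb interaction, so this balance no longer fixes the ratio of the two averages; the generalized theorem described in the record restores a meaningful statement by adding the band-bottom momentum-cutoff term.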

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquez, Andres; Manzano Franco, Joseph B.; Song, Shuaiwen

    With Exascale performance and its challenges in mind, one ubiquitous concern among architects is energy efficiency. Petascale systems projected to Exascale systems are unsustainable at current power consumption rates. One major contributor to system-wide power consumption is the number of memory operations leading to data movement and management techniques applied by the runtime system. To address this problem, we present the concept of the Architected Composite Data Types (ACDT) framework. The framework is made aware of data composites, assigning them a specific layout, transformations and operators. Data manipulation overhead is amortized over a larger number of elements, and program performance and power efficiency can be significantly improved. We developed the fundamentals of an ACDT framework on a massively multithreaded adaptive runtime system geared towards Exascale clusters. Showcasing the capability of ACDT, we exercised the framework with two representative processing kernels, Matrix Vector Multiply and Cholesky Decomposition, applied to sparse matrices. As transformation modules, we applied optimized compress/decompress engines and configured invariant operators for maximum energy/performance efficiency. Additionally, we explored two different approaches based on transformation opaqueness in relation to the application. Under the first approach, the application is agnostic to compression and decompression activity. Such an approach entails minimal changes to the original application code, but leaves out potential application-specific optimizations. The second approach exposes the decompression process to the application, thereby exposing optimization opportunities that can only be exploited with application knowledge. The experimental results show that the two approaches have their strengths in HW and SW respectively, where the SW approach can yield performance and power improvements that are an order of magnitude better than ACDT-oblivious, hand-optimized implementations. We consider the ACDT runtime framework an important component of compute nodes that will lead towards power-efficient Exascale clusters.
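
    The core idea of operating directly on a compressed composite data type can be illustrated with a compressed sparse row (CSR) matrix-vector multiply, where the operator consumes the compressed layout instead of a decompressed dense matrix; this is a generic illustration, not the ACDT runtime's actual interface.

        # Generic "operate on the compressed representation": CSR encoding plus a
        # matrix-vector multiply that never materializes the dense matrix.
        def dense_to_csr(rows):
            values, col_idx, row_ptr = [], [], [0]
            for row in rows:
                for j, v in enumerate(row):
                    if v != 0.0:
                        values.append(v)
                        col_idx.append(j)
                row_ptr.append(len(values))
            return values, col_idx, row_ptr

        def csr_matvec(values, col_idx, row_ptr, x):
            y = [0.0] * (len(row_ptr) - 1)
            for i in range(len(y)):
                for k in range(row_ptr[i], row_ptr[i + 1]):
                    y[i] += values[k] * x[col_idx[k]]   # touches stored nonzeros only
            return y

        A = [[4.0, 0.0, 0.0],
             [0.0, 0.0, 2.0],
             [1.0, 0.0, 3.0]]
        print(csr_matvec(*dense_to_csr(A), [1.0, 1.0, 1.0]))   # -> [4.0, 2.0, 4.0]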

  7. Justifying quasiparticle self-consistent schemes via gradient optimization in Baym-Kadanoff theory.

    PubMed

    Ismail-Beigi, Sohrab

    2017-09-27

    The question of which non-interacting Green's function 'best' describes an interacting many-body electronic system is both of fundamental interest as well as of practical importance in describing electronic properties of materials in a realistic manner. Here, we study this question within the framework of Baym-Kadanoff theory, an approach where one locates the stationary point of a total energy functional of the one-particle Green's function in order to find the total ground-state energy as well as all one-particle properties such as the density matrix, chemical potential, or the quasiparticle energy spectrum and quasiparticle wave functions. For the case of the Klein functional, our basic finding is that minimizing the length of the gradient of the total energy functional over non-interacting Green's functions yields a set of self-consistent equations for quasiparticles that is identical to those of the quasiparticle self-consistent GW (QSGW) (van Schilfgaarde et al 2006 Phys. Rev. Lett. 96 226402-4) approach, thereby providing an a priori justification for such an approach to electronic structure calculations. In fact, this result is general, applies to any self-energy operator, and is not restricted to any particular approximation, e.g., the GW approximation for the self-energy. The approach also shows that, when working in the basis of quasiparticle states, solving the diagonal part of the self-consistent Dyson equation is of primary importance while the off-diagonals are of secondary importance, a common observation in the electronic structure literature of self-energy calculations. Finally, numerical tests and analytical arguments show that when the Dyson equation produces multiple quasiparticle solutions corresponding to a single non-interacting state, minimizing the length of the gradient translates into choosing the solution with largest quasiparticle weight.

  8. Collaboration across disciplines for sustainability: green chemistry as an emerging multistakeholder community.

    PubMed

    Iles, Alastair; Mulvihill, Martin J

    2012-06-05

    Sustainable solutions to our nation's material and energy needs must consider environmental, health, and social impacts while developing new technologies. Building a framework to support interdisciplinary interactions and incorporate sustainability goals into the research and development process will benefit green chemistry and other sciences. This paper explores the contributions that diverse disciplines can provide to the design of greener technologies. These interactions have the potential to create technologies that simultaneously minimize environmental and health impacts by drawing on the combined expertise of students and faculty in chemical sciences, engineering, environmental health, social sciences, public policy, and business.

  9. Production of a Scalar Boson and a Fermion Pair in Arbitrarily Polarized e⁻e⁺ Beams

    NASA Astrophysics Data System (ADS)

    Abdullayev, S. K.; Gojayev, M. Sh.; Nasibova, N. A.

    2018-05-01

    Within the framework of the Standard Model (Minimal Supersymmetric Standard Model) we consider the production of the scalar boson H_SM (h; H) and a fermion pair f f̄ in arbitrarily polarized, counterpropagating electron-positron beams e⁻e⁺ ⇒ H_SM (h; H) f f̄. Characteristic features of the behavior of the cross sections and polarization characteristics (right-left spin asymmetry, degree of longitudinal polarization of the fermion, and transverse spin asymmetry) are investigated and elucidated as functions of the energy of the electron-positron beams and the mass of the scalar boson.

  10. Integrating atlas and graph cut methods for right ventricle blood-pool segmentation from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Dangi, Shusil; Linte, Cristian A.

    2017-03-01

    Segmentation of the right ventricle from cardiac MRI images can be used to build pre-operative anatomical heart models to precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of the right heart, such as right ventricular volume, ejection fraction, myocardial mass and thickness, can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of the right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem in a graph. A shape prior, obtained by propagating labels from an average atlas using affine registration, is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation, corresponding to the labeling with the minimum-energy configuration of the graph, is obtained via graph cuts and is iteratively refined to produce the final right ventricle blood-pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentation for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including Dice coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
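
    A schematic form of the energy typically minimized in such atlas-plus-graph-cut segmentations (the exact terms and weights used by the authors may differ) is

        E(L) = \sum_{p} D_p(l_p) + \mu \sum_{p} S_p(l_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(l_p, l_q),

    where D_p is the image-driven data term for pixel p, S_p encodes the atlas-propagated shape prior, and V_pq penalizes label disagreement between neighboring pixels; the minimum-energy labeling L is found with a graph cut and then iteratively refined.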

  11. [Can the local energy minimization refine the PDB structures of different resolution universally?].

    PubMed

    Godzi, M G; Gromova, A P; Oferkin, I V; Mironov, P V

    2009-01-01

    The local energy minimization was statistically validated as a refinement strategy for PDB structure pairs of different resolution. Thirteen pairs of structures differing only in resolution were extracted from the PDB, representing 11 identical proteins solved by different X-ray diffraction techniques. The distribution of the RMSD value was calculated for these pairs before and after the local energy minimization of each structure. The MMFF94 force field was used for energy calculations, and the quasi-Newton method was used for local energy minimization. Comparison of these two RMSD distributions showed that local energy minimization statistically increases the structural differences within pairs, so it cannot be used for refinement purposes. To explore the prospects of complex refinement strategies based on energy minimization, randomized structures were obtained by moving the initial PDB structures as far as the minimized structures had been moved in the multidimensional space of atomic coordinates. For these randomized structures, the RMSD distribution was calculated and compared with that for the minimized structures. The significant difference in their mean values showed that the energy surface of the protein has only a few minima near the conformations of different resolution obtained by X-ray diffraction for the PDB. Some other results obtained by exploring the energy surface near these conformations are also presented. These results are expected to be useful for the development of new protein refinement strategies based on energy minimization.
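
    The comparison metric used above is the RMSD between paired conformations; the Python sketch below is an illustrative implementation only (it assumes the structures are already superposed and the atoms are listed in the same order, whereas a full comparison would first perform an optimal alignment such as the Kabsch algorithm).

        import numpy as np

        def rmsd(coords_a, coords_b):
            """Root-mean-square deviation between two (N, 3) coordinate arrays."""
            diff = np.asarray(coords_a) - np.asarray(coords_b)
            return np.sqrt((diff ** 2).sum() / len(diff))

        # Toy example: three atoms displaced by 0.1 Angstrom along x.
        a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
        b = a + np.array([0.1, 0.0, 0.0])
        print(round(rmsd(a, b), 3))  # 0.1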

  12. A Framework for Multi-Stakeholder Decision-Making and ...

    EPA Pesticide Factsheets

    This contribution describes the implementation of the conditional value-at-risk (CVaR) metric to create a general multi-stakeholder decision-making framework. It is observed that stakeholder dissatisfactions (distances to their individual ideal solutions) can be interpreted as random variables. We thus shape the dissatisfaction distribution and find an optimal compromise solution by solving a CVaR minimization problem parameterized in the probability level. This enables us to generalize multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework. We demonstrate the framework in a bio-waste processing facility location case study, where we seek compromise solutions (facility locations) that balance stakeholder priorities on transportation, safety, water quality, and capital costs. This conference presentation abstract explains a new decision-making framework that computes compromise solution alternatives (i.e., reaches consensus) by mitigating dissatisfactions among stakeholders, as needed for the SHC Decision Science and Support Tools project.
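
    The underlying optimization can be sketched with the standard Rockafellar-Uryasev form of CVaR (the notation here is illustrative: D(x) is the random stakeholder dissatisfaction under decision x and alpha the probability level):

        \min_{x, t} \; t + \frac{1}{1 - \alpha} \, \mathbb{E}\big[ \max(D(x) - t, \, 0) \big],

    so that alpha near 0 recovers the average-dissatisfaction compromise and alpha near 1 approaches the worst-case compromise, consistent with the settings the framework generalizes.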

  13. Barriers to Building Energy Efficiency (BEE) promotion: A transaction costs perspective

    NASA Astrophysics Data System (ADS)

    Qian Kun, Queena

    Worldwide, buildings account for a surprisingly high 40% of global energy consumption, and the resulting carbon footprint significantly exceeds that of all forms of transportation combined. Large and attractive opportunities exist to reduce buildings' energy use at lower costs and higher returns than in other sectors. This thesis analyzes the concerns of the market stakeholders, mainly real estate developers and end-users, in terms of transaction costs as they make decisions about investing in Building Energy Efficiency (BEE). It provides a detailed analysis of the current situation and future prospects for BEE adoption by the market's stakeholders. It delineates the market and lays out the economic and institutional barriers to the large-scale deployment of energy-efficient building techniques. The aim of this research is to investigate the barriers raised by transaction costs that hinder market stakeholders from investing in BEES. It explains interactions among stakeholders in general and in the specific case of Hong Kong as they consider transaction costs. It focuses on the influence of transaction costs on the decision-making of the stakeholders during the entire process of real estate development. The objectives are: 1) To establish an analytical framework for understanding the barriers to BEE investment with consideration of transaction costs; 2) To build a theoretical game model of decision making among the BEE market stakeholders; 3) To study the empirical data from questionnaire surveys of building designers and from focused interviews with real estate developers in Hong Kong; 4) To triangulate the study's empirical findings with those of the theoretical model and analytical framework. The study shows that a coherent institutional framework needs to be established to ensure that the design and implementation of BEE policies acknowledge the concerns of market stakeholders by taking transaction costs into consideration. Regulatory and incentive options should be integrated into BEE policies to minimize efficiency gaps and to realize a sizeable increase in the number of energy-efficient buildings in the next decades. Specifically, the analysis shows that a thorough understanding of the transaction costs borne by particular stakeholders could improve the energy efficiency of buildings, even without improvements in currently available technology.

  14. Multi-objective optimal dispatch of distributed energy resources

    NASA Astrophysics Data System (ADS)

    Longe, Ayomide

    This thesis is composed of two papers which investigate the optimal dispatch of distributed energy resources. In the first paper, an economic dispatch problem for a community microgrid is studied. In this microgrid, each agent pursues an economic dispatch for its personal resources. In addition, each agent is capable of trading electricity with other agents through a local energy market. In this paper, a simple market structure is introduced as a framework for energy trades in a small community microgrid such as the Solar Village. It was found that both sellers and buyers benefited by participating in this market. In the second paper, Semidefinite Programming (SDP) for convex relaxation of the power flow equations is used for optimal active and reactive dispatch of Distributed Energy Resources (DER). Various objective functions, including voltage regulation, reduced transmission line power losses, and minimized reactive power charges for a microgrid, are introduced. Combinations of these goals are attained by solving a multiobjective optimization for the proposed optimal reactive power dispatch (ORPD) problem. Both centralized and distributed versions of this optimal dispatch are investigated. It was found that SDP made the optimal dispatch faster and that the distributed solution allowed for scalability.

  15. Electric vehicle (EV) storage supply chain risk and the energy market: A micro and macroeconomic risk management approach

    NASA Astrophysics Data System (ADS)

    Aguilar, Susanna D.

    As a cost-effective storage technology for renewable energy sources, Electric Vehicles can be integrated into energy grids. Integration must be optimized to ascertain that renewable energy is available through storage when demand exists, so that the cost of electricity is minimized. Optimization models can address economic risks associated with the EV supply chain, particularly the volatility in availability and cost of critical materials used in the manufacturing of EV motors and batteries. Supply chain risk can reflect itself in a shortage of storage, which can increase the price of electricity. We propose a micro- and macroeconomic framework for managing supply chain risk through utilization of a cost optimization model in combination with risk management strategies at the microeconomic and macroeconomic levels. The study demonstrates, qualitatively and quantitatively, how risk from the EV critical-material supply chain affects manufacturers, smart grid performance, and energy markets. Our results illustrate how risk in the EV supply chain affects EV availability and the cost of ancillary services, and how EV critical-material supply chain risk can be mitigated through managerial strategies and policy.

  16. Self-organization, free energy minimization, and optimal grip on a field of affordances

    PubMed Central

    Bruineberg, Jelle; Rietveld, Erik

    2014-01-01

    In this paper, we set out to develop a theoretical and conceptual framework for the new field of Radical Embodied Cognitive Neuroscience. This framework should be able to integrate insights from several relevant disciplines: theory on embodied cognition, ecological psychology, phenomenology, dynamical systems theory, and neurodynamics. We suggest that the main task of Radical Embodied Cognitive Neuroscience is to investigate the phenomenon of skilled intentionality from the perspective of the self-organization of the brain-body-environment system, while doing justice to the phenomenology of skilled action. In previous work, we have characterized skilled intentionality as the organism's tendency toward an optimal grip on multiple relevant affordances simultaneously. Affordances are possibilities for action provided by the environment. In the first part of this paper, we introduce the notion of skilled intentionality and the phenomenon of responsiveness to a field of relevant affordances. Second, we use Friston's work on neurodynamics, but embed a very minimal version of his Free Energy Principle in the ecological niche of the animal. Thus amended, this principle is helpful for understanding the embeddedness of neurodynamics within the dynamics of the system “brain-body-landscape of affordances.” Next, we show how we can use this adjusted principle to understand the neurodynamics of selective openness to the environment: interacting action-readiness patterns at multiple timescales contribute to the organism's selective openness to relevant affordances. In the final part of the paper, we emphasize the important role of metastable dynamics in both the brain and the brain-body-environment system for adequate affordance-responsiveness. We exemplify our integrative approach by presenting research on the impact of Deep Brain Stimulation on affordance responsiveness of OCD patients. PMID:25161615

  17. Self-organization, free energy minimization, and optimal grip on a field of affordances.

    PubMed

    Bruineberg, Jelle; Rietveld, Erik

    2014-01-01

    In this paper, we set out to develop a theoretical and conceptual framework for the new field of Radical Embodied Cognitive Neuroscience. This framework should be able to integrate insights from several relevant disciplines: theory on embodied cognition, ecological psychology, phenomenology, dynamical systems theory, and neurodynamics. We suggest that the main task of Radical Embodied Cognitive Neuroscience is to investigate the phenomenon of skilled intentionality from the perspective of the self-organization of the brain-body-environment system, while doing justice to the phenomenology of skilled action. In previous work, we have characterized skilled intentionality as the organism's tendency toward an optimal grip on multiple relevant affordances simultaneously. Affordances are possibilities for action provided by the environment. In the first part of this paper, we introduce the notion of skilled intentionality and the phenomenon of responsiveness to a field of relevant affordances. Second, we use Friston's work on neurodynamics, but embed a very minimal version of his Free Energy Principle in the ecological niche of the animal. Thus amended, this principle is helpful for understanding the embeddedness of neurodynamics within the dynamics of the system "brain-body-landscape of affordances." Next, we show how we can use this adjusted principle to understand the neurodynamics of selective openness to the environment: interacting action-readiness patterns at multiple timescales contribute to the organism's selective openness to relevant affordances. In the final part of the paper, we emphasize the important role of metastable dynamics in both the brain and the brain-body-environment system for adequate affordance-responsiveness. We exemplify our integrative approach by presenting research on the impact of Deep Brain Stimulation on affordance responsiveness of OCD patients.

  18. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method using a video subsequence. This video can be captured by a rigidly moving camera aimed at the static target object or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and building on the framework of classical template-based methods, we solve an energy minimization problem to obtain the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy terms combine a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that enables elastic deformation. In this paper, an easy and controllable way to generate the semi-dense template for complex objects is presented. Besides, we use an effective iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared against results obtained with other templates as input, the reconstructions based on our template are more accurate and detailed in certain regions. The experimental results also show that the linear solver we use is more efficient than a traditional conjugate-gradient-based solver.
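
    In schematic form (the symbols and weights below are illustrative, not the authors' exact notation), the per-frame minimization combines the four costs named above,

        E(V_t) = E_{photo}(V_t) + \lambda_t E_{temporal}(V_t, V_{t-1}) + \lambda_s E_{spatial}(V_t) + \lambda_r E_{ARAP}(V_t, V_0),

    where V_t is the mesh at frame t and V_0 the reference template; each frame's mesh is obtained by minimizing E with the iterative Schur-based linear solver mentioned above.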

  19. Common origin of 3.55 keV x-ray line and gauge coupling unification with left-right dark matter

    NASA Astrophysics Data System (ADS)

    Borah, Debasish; Dasgupta, Arnab; Patra, Sudhanwa

    2017-12-01

    We present a minimal left-right dark matter framework that can simultaneously explain the recently observed 3.55 keV x-ray line from several galaxy clusters and gauge coupling unification at a high energy scale. Adopting a minimal dark matter strategy, we consider both left- and right-handed triplet fermionic dark matter candidates which are stable by virtue of a remnant Z_2 ≃ (-1)^(B-L) symmetry arising after the spontaneous symmetry breaking of the left-right gauge symmetry to that of the standard model. A scalar bitriplet field is incorporated whose first role is to allow radiative decay of the right-handed triplet dark matter into the left-handed one and a photon with energy 3.55 keV. The other role this bitriplet field at the TeV scale plays is to assist in achieving gauge coupling unification at a high energy scale within a nonsupersymmetric SO(10) model while keeping the scale of left-right gauge symmetry around the TeV corner. Apart from solving the neutrino mass problem and giving verifiable new contributions to neutrinoless double beta decay and charged lepton flavor violation, the model with TeV-scale gauge bosons can also give rise to interesting collider signatures like the diboson excess and the dilepton plus two jets excess reported recently in the Large Hadron Collider data.

  20. Trends in entropy production during ecosystem development in the Amazon Basin.

    PubMed

    Holdaway, Robert J; Sparrow, Ashley D; Coomes, David A

    2010-05-12

    Understanding successional trends in energy and matter exchange across the ecosystem-atmosphere boundary layer is an essential focus in ecological research; however, a general theory describing the observed pattern remains elusive. This paper examines whether the principle of maximum entropy production could provide the solution. A general framework is developed for calculating entropy production using data from terrestrial eddy covariance and micrometeorological studies. We apply this framework to data from eight tropical forest and pasture flux sites in the Amazon Basin and show that forest sites had consistently higher entropy production rates than pasture sites (0.461 versus 0.422 W m⁻² K⁻¹, respectively). It is suggested that during development, changes in canopy structure minimize surface albedo, and development of deeper root systems optimizes access to soil water and thus potential transpiration, resulting in lower surface temperatures and increased entropy production. We discuss our results in the context of a theoretical model of entropy production versus ecosystem developmental stage. We conclude that, although further work is required, entropy production could potentially provide a much-needed theoretical basis for understanding the effects of deforestation and land-use change on the land-surface energy balance.

  1. "Martinizing" the Variational Implicit Solvent Method (VISM): Solvation Free Energy for Coarse-Grained Proteins.

    PubMed

    Ricci, Clarisse G; Li, Bo; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J Andrew

    2017-07-13

    Solvation is a fundamental driving force in many biological processes including biomolecular recognition and self-assembly, not to mention protein folding, dynamics, and function. The variational implicit solvent method (VISM) is a theoretical tool currently developed and optimized to estimate solvation free energies for systems of very complex topology, such as biomolecules. VISM's theoretical framework makes it unique because it couples hydrophobic, van der Waals, and electrostatic interactions as a functional of the solvation interface. By minimizing this functional, VISM produces the solvation interface as an output of the theory. In this work, we push VISM to larger-scale applications by combining it with coarse-grained solute Hamiltonians adapted from the MARTINI framework, a well-established mesoscale force field for modeling large-scale biomolecule assemblies. We show how MARTINI-VISM (MVISM) compares with atomistic VISM (AVISM) for a small set of proteins differing in size, shape, and charge distribution. We also demonstrate MVISM's suitability to study the solvation properties of an interesting encounter complex, barnase-barstar. The promising results suggest that coarse-graining the protein with the MARTINI force field is indeed a valuable step to broaden VISM's and MARTINI's applications in the near future.

  2. Development of Chemical Process Design and Control for ...

    EPA Pesticide Factsheets

    This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy. The implemented control strategy combines a biologically inspired method with optimal control concepts for finding more sustainable operating trajectories. The sustainability assessment of process operating points is carried out by using the U.S. E.P.A.’s Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Objective Process Evaluator (GREENSCOPE) tool that provides scores for the selected indicators in the economic, material efficiency, environmental and energy areas. The indicator scores describe process performance on a sustainability measurement scale, effectively determining which operating point is more sustainable if there are more than several steady states for one specific product manufacturing. Through comparisons between a representative benchmark and the optimal steady-states obtained through implementation of the proposed controller, a systematic decision can be made in terms of whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous fermentation process for fuel production, whose materi

  3. An experimental and theoretical investigation into the electronically excited states of para-benzoquinone

    NASA Astrophysics Data System (ADS)

    Jones, D. B.; Limão-Vieira, P.; Mendes, M.; Jones, N. C.; Hoffmann, S. V.; da Costa, R. F.; Varella, M. T. do N.; Bettega, M. H. F.; Blanco, F.; García, G.; Ingólfsson, O.; Lima, M. A. P.; Brunger, M. J.

    2017-05-01

    We report on a combination of experimental and theoretical investigations into the structure of electronically excited para-benzoquinone (pBQ). Here synchrotron photoabsorption measurements are reported over the 4.0-10.8 eV range. The higher resolution obtained reveals previously unresolved pBQ spectral features. Time-dependent density functional theory calculations are used to interpret the spectrum and resolve discrepancies relating to the interpretation of the Rydberg progressions. Electron-impact energy loss experiments are also reported. These are combined with elastic electron scattering cross section calculations performed within the framework of the independent atom model-screening corrected additivity rule plus interference (IAM-SCAR + I) method to derive differential cross sections for electronic excitation of key spectral bands. A generalized oscillator strength analysis is also performed, with the obtained results demonstrating that a cohesive and reliable quantum chemical structure and cross section framework has been established. Within this context, we also discuss some issues associated with the development of a minimal orbital basis for the single configuration interaction strategy to be used for our high-level low-energy electron scattering calculations that will be carried out as a subsequent step in this joint experimental and theoretical investigation.

  4. Energy Efficient Medium Access Control Protocol for Clustered Wireless Sensor Networks with Adaptive Cross-Layer Scheduling.

    PubMed

    Sefuba, Maria; Walingo, Tom; Takawira, Fambirai

    2015-09-18

    This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs adaptive cross-layer intra-cluster scheduling and inter-cluster relay selection diversity. The scheduling is based on available data packets and the remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) for the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay for the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides improved performance compared to traditional cluster-based MAC protocols.
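
    As a rough illustration of the relay-selection idea (the weights and the linear score below are assumptions made here, not the protocol's actual rule), one plausible way to rank candidate relay cluster heads by the three criteria named above is:

        # Illustrative sketch only: rank candidate relay cluster heads (RCHs)
        # by normalized distance, residual energy and CQI with assumed weights.
        def relay_score(distance_m, residual_energy_j, cqi,
                        max_distance_m, max_energy_j, max_cqi,
                        w_d=0.4, w_e=0.3, w_q=0.3):
            d_norm = 1.0 - distance_m / max_distance_m   # closer is better
            e_norm = residual_energy_j / max_energy_j    # more energy is better
            q_norm = cqi / max_cqi                       # better channel is better
            return w_d * d_norm + w_e * e_norm + w_q * q_norm

        candidates = {
            "RCH-1": relay_score(80.0, 4.2, 11, 150.0, 5.0, 15),
            "RCH-2": relay_score(120.0, 4.9, 14, 150.0, 5.0, 15),
        }
        best = max(candidates, key=candidates.get)
        print(best, round(candidates[best], 3))  # RCH-1 0.659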

  5. Energy Efficient Medium Access Control Protocol for Clustered Wireless Sensor Networks with Adaptive Cross-Layer Scheduling

    PubMed Central

    Sefuba, Maria; Walingo, Tom; Takawira, Fambirai

    2015-01-01

    This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs adaptive cross-layer intra-cluster scheduling and inter-cluster relay selection diversity. The scheduling is based on available data packets and the remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) for the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay for the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides improved performance compared to traditional cluster-based MAC protocols. PMID:26393608

  6. Application of Framework for Integrating Safety, Security and Safeguards (3Ss) into the Design Of Used Nuclear Fuel Storage Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badwan, Faris M.; Demuth, Scott F

    Department of Energy’s Office of Nuclear Energy, Fuel Cycle Research and Development develops options to the current commercial fuel cycle management strategy to enable the safe, secure, economic, and sustainable expansion of nuclear energy while minimizing proliferation risks, by conducting research and development focused on used nuclear fuel recycling and waste management to meet U.S. needs. Used nuclear fuel is currently stored onsite in either wet pools or in dry storage systems, with disposal envisioned in an interim storage facility and, ultimately, in a deep-mined geologic repository. The safe management and disposition of used nuclear fuel and/or nuclear waste is a fundamental aspect of any nuclear fuel cycle. Integrating safety, security, and safeguards (3Ss) fully in the early stages of the design process for a new nuclear facility has the potential to effectively minimize safety, proliferation, and security risks. The 3Ss integration framework could become the new national and international norm and the standard process for designing future nuclear facilities. The purpose of this report is to develop a framework for integrating the safety, security and safeguards concept into the design of a Used Nuclear Fuel Storage Facility (UNFSF). The primary focus is on integration of safeguards and security into the UNFSF based on the existing Nuclear Regulatory Commission (NRC) approach to addressing the safety/security interface (10 CFR 73.58 and Regulatory Guide 5.73) for nuclear power plants. The methodology used for adaptation of the NRC safety/security interface will be used as the basis for development of the safeguards/security interface and later as the basis for development of the safety and safeguards interface, completing the integration cycle of safety, security, and safeguards. The overall methodology for integration of the 3Ss will be proposed, but only the integration of safeguards and security will be applied to the design of the UNFSF. The framework for integration of safeguards and security into the UNFSF will include 1) identification of applicable regulatory requirements, 2) selection of a common system that shares dual safeguards and security functions, 3) development of functional design criteria and design requirements for the selected system, 4) identification and integration of the dual safeguards and security design requirements, and 5) assessment of the integration and potential benefit.

  7. A lightweight sensor network management system design

    USGS Publications Warehouse

    Yuan, F.; Song, W.-Z.; Peterson, N.; Peng, Y.; Wang, L.; Shirazi, B.; LaHusen, R.

    2008-01-01

    In this paper, we propose a lightweight and transparent management framework for TinyOS sensor networks, called L-SNMS, which minimizes the overhead of management functions, including memory usage overhead, network traffic overhead, and integration overhead. We accomplish this by making L-SNMS virtually transparent to other applications, hence requiring minimal integration. The proposed L-SNMS framework has been successfully tested on various sensor node platforms, including TelosB, MICAz and IMote2. © 2008 IEEE.

  8. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    PubMed

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization to iteratively improve the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare with state-of-the-art techniques to verify the validity of the proposed techniques.

  9. Understanding the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions

    NASA Astrophysics Data System (ADS)

    Dahms, Rainer N.

    2016-04-01

    A generalized framework for multi-component liquid injections is presented to understand and predict the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions. The analysis focuses on the thermodynamic structure and the immiscibility state of representative gas-liquid interfaces. The most modern form of Helmholtz energy mixture state equation is utilized which exhibits a unique and physically consistent behavior over the entire two-phase regime of fluid densities. It is combined with generalized models for non-linear gradient theory and for liquid injections to quantify multi-component two-phase interface structures in global thermal equilibrium. Then, the Helmholtz free energy is minimized which determines the interfacial species distribution as a consequence. This minimal free energy state is demonstrated to validate the underlying assumptions of classic two-phase theory and spray atomization. However, under certain engine-relevant conditions for which corroborating experimental data are presented, this requirement for interfacial thermal equilibrium becomes unsustainable. A rigorously derived probability density function quantifies the ability of the interface to develop internal spatial temperature gradients in the presence of significant temperature differences between injected liquid and ambient gas. Then, the interface can no longer be viewed as an isolated system at minimal free energy. Instead, the interfacial dynamics become intimately connected to those of the separated homogeneous phases. Hence, the interface transitions toward a state in local equilibrium whereupon it becomes a dense-fluid mixing layer. A new conceptual view of a transitional liquid injection process emerges from a transition time scale analysis. Close to the nozzle exit, the two-phase interface still remains largely intact and more classic two-phase processes prevail as a consequence. Further downstream, however, the transition to dense-fluid mixing generally occurs before the liquid length is reached. The significance of the presented modeling expressions is established by a direct comparison to a reduced model, which utilizes widely applied approximations but fundamentally fails to capture the physical complexity discussed in this paper.

  10. Fine-resolution Modeling of Urban-Energy Systems' Water Footprint in River Networks

    NASA Astrophysics Data System (ADS)

    McManamay, R.; Surendran Nair, S.; Morton, A.; DeRolph, C.; Stewart, R.

    2015-12-01

    Characterizing the interplay between urbanization, energy production, and water resources is essential for ensuring sustainable population growth. In order to balance limited water supplies, competing users must account for their realized and virtual water footprint, i.e. the total direct and indirect amount of water used, respectively. Unfortunately, publicly reported US water use estimates are spatially coarse, temporally static, and completely ignore returns of water to rivers after use. These estimates are insufficient to account for the high spatial and temporal heterogeneity of water budgets in urbanizing systems. Likewise, urbanizing areas are supported by competing sources of energy production, which also have heterogeneous water footprints. Hence, a fundamental challenge of planning for sustainable urban growth and decision-making across disparate policy sectors lies in characterizing inter-dependencies among urban systems, energy producers, and water resources. A modeling framework is presented that provides a novel approach to integrate urban-energy infrastructure into a spatial accounting network that accurately measures water footprints as changes in the quantity and quality of river flows. River networks (RNs), i.e. networks of branching tributaries nested within larger rivers, provide a spatial structure to measure water budgets by modeling hydrology and accounting for use and returns from urbanizing areas and energy producers. We quantify urban-energy water footprints for Atlanta, GA and Knoxville, TN (USA) based on changes in hydrology in RNs. Although water intakes providing supply to metropolitan areas were proximate to metropolitan areas, power plants contributing to energy demand in Knoxville and Atlanta, occurred 30 and 90km outside the metropolitan boundary, respectively. Direct water footprints from urban landcover primarily comprised smaller streams whereas indirect footprints from water supply reservoirs and energy producers included larger river systems. By using projections in urban populations for 2030 and 2050, we estimated scenarios of expansion in water footprints depending on urban growth policies and energy production technology. We provide examples of how this framework can be used to minimize water footprints and impacts to aquatic biodiversity.

  11. Machine learning techniques for energy optimization in mobile embedded systems

    NASA Astrophysics Data System (ADS)

    Donohoo, Brad Kyoshi

    Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
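
    As an illustration of the prediction step (toy data and hypothetical features, using one of the classifier families named above), a k-nearest-neighbor model could map spatiotemporal context to an energy-optimal interface state:

        # Illustrative sketch: predict whether WiFi should stay on from
        # [hour_of_day, location_cluster_id, is_weekday] context features.
        from sklearn.neighbors import KNeighborsClassifier

        X_train = [
            [9, 0, 1], [10, 0, 1], [13, 1, 1], [14, 1, 1],
            [20, 2, 0], [21, 2, 0], [23, 2, 0], [8, 0, 1],
        ]
        y_train = [1, 1, 0, 0, 1, 1, 0, 1]  # 1 = keep WiFi on, 0 = turn it off

        model = KNeighborsClassifier(n_neighbors=3)
        model.fit(X_train, y_train)
        print(model.predict([[22, 2, 0]])[0])  # predicted state at 22:00, cluster 2, weekend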

  12. Interpretation of ALARA in the Canadian regulatory framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utting, R.

    1995-03-01

    The Atomic Energy Control Board (AECB) is responsible for the regulation of all aspects of atomic energy in Canada. This includes the complete nuclear fuel cycle from uranium mining to long-term disposal of nuclear fuel, as well as the medical and industrial utilization of radioisotopes. Clearly, the regulatory approach will differ from practice to practice but, as far as possible, the AECB has attempted to minimize the degree of prescription of regulatory requirements. The traditional modus operandi of the AECB has been to have broad general principles enshrined in regulations with the requirement that licensees submit specific operating policies and procedures to the AECB for approval. In the large nuclear facilities with their sophisticated technical infrastructures, this policy has been largely successful although in a changing legal and political milieu the AECB is finding that a greater degree of proactive regulation is becoming necessary. With the smaller users, the AECB has for a long time found it necessary to have a greater degree of prescription in its regulatory function. Forthcoming General Amendments to the Atomic Energy Control Regulations will, amongst other things, formally incorporate the concept of ALARA into the Canadian regulatory framework. Within the broad range of practices licensed by the AECB it is not practical to provide detailed guidance on optimization that will be relevant and appropriate to all licensees, however the following general principles are proposed.

  13. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  14. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  15. What energy functions can be minimized via graph cuts?

    PubMed

    Kolmogorov, Vladimir; Zabih, Ramin

    2004-02-01

    In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.
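
    For reference, the characterization for energies that decompose into terms of at most two binary variables is the regularity (submodularity) condition: every pairwise term E^{i,j} must satisfy

        E^{i,j}(0,0) + E^{i,j}(1,1) \le E^{i,j}(0,1) + E^{i,j}(1,0),

    and, roughly, for terms of three binary variables the requirement is that every two-variable projection of the term be regular in this sense.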

  16. Evaluating the effectiveness of risk minimisation measures: the application of a conceptual framework to Danish real-world dabigatran data.

    PubMed

    Nyeland, Martin Erik; Laursen, Mona Vestergaard; Callréus, Torbjörn

    2017-06-01

    For both marketing authorization holders and regulatory authorities, evaluating the effectiveness of risk minimization measures is now an integral part of pharmacovigilance in the European Union. The overall aim of activities in this area is to assess the performance of risk minimization measures implemented in order to ensure a positive benefit-risk balance in patients treated with a medicinal product. Following a review of the relevant literature, we developed a conceptual framework consisting of four domains (data, knowledge, behaviour and outcomes) intended for the evaluation of risk minimization measures put into practice in the Danish health-care system. For the implementation of the framework, four classes of monitoring variables can be named and defined: patient descriptors, performance-related indicators of knowledge, behaviour and outcomes. We reviewed the features of the framework when applied to historical, real-world data following the introduction of dabigatran in Denmark for the prophylactic treatment of patients with non-valvular atrial fibrillation. The application of the framework provided useful graphical displays and an opportunity for a statistical evaluation (interrupted time series analysis) of a regulatory intervention. © 2017 Commonwealth of Australia. Pharmacoepidemiology & Drug Safety © 2017 John Wiley & Sons, Ltd.

  17. Monitoring of the stability of underground workings in Polish copper mines conditions

    NASA Astrophysics Data System (ADS)

    Fuławka, Krzysztof; Mertuszka, Piotr; Pytel, Witold

    2018-01-01

    One of the problems associated with deposit excavation in underground mines is that a local disturbance of a state of unstable equilibrium results in a sudden release of energy, mainly in the form of roof falls. The scale and intensity of this type of event depend on a number of factors. To minimize the risk of instability occurrence, continuous observation of the roof strata condition is recommended. Different roof strata observation methods used in the Polish copper mines have been analysed within the framework of the presented paper. In addition, selected prospective methods, which could significantly increase the efficiency of rock fall prevention, are presented.

  18. Life cycle assessment of urban waste management: Energy performances and environmental impacts. The case of Rome, Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherubini, Francesco; Bargigli, Silvia; Ulgiati, Sergio

    2008-12-15

    Landfilling is nowadays the most common practice of waste management in Italy in spite of enforced regulations aimed at increasing waste pre-sorting as well as energy and material recovery. In this work we analyse selected alternative scenarios aimed at minimizing the unused material fraction to be delivered to the landfill. The methodological framework of the analysis is the life cycle assessment, in a multi-method form developed by our research team. The approach was applied to the case of municipal solid waste (MSW) management in Rome, with a special focus on energy and material balance, including global and local scale airborne emissions. Results, provided in the form of indices and indicators of efficiency, effectiveness and environmental impacts, point out landfill activities as the worst waste management strategy at a global scale. On the other hand, the investigated waste treatments with energy and material recovery allow important benefits of greenhouse gas emission reduction (among others) but are still affected by non-negligible local emissions. Furthermore, waste treatments leading to energy recovery provide an energy output that, in the best case, is able to meet 15% of the Rome electricity consumption.

  19. Life cycle assessment of urban waste management: energy performances and environmental impacts. The case of Rome, Italy.

    PubMed

    Cherubini, Francesco; Bargigli, Silvia; Ulgiati, Sergio

    2008-12-01

    Landfilling is nowadays the most common practice of waste management in Italy in spite of enforced regulations aimed at increasing waste pre-sorting as well as energy and material recovery. In this work we analyse selected alternative scenarios aimed at minimizing the unused material fraction to be delivered to the landfill. The methodological framework of the analysis is the life cycle assessment, in a multi-method form developed by our research team. The approach was applied to the case of municipal solid waste (MSW) management in Rome, with a special focus on energy and material balance, including global and local scale airborne emissions. Results, provided in the form of indices and indicators of efficiency, effectiveness and environmental impacts, point out landfill activities as the worst waste management strategy at a global scale. On the other hand, the investigated waste treatments with energy and material recovery allow important benefits of greenhouse gas emission reduction (among others) but are still affected by non-negligible local emissions. Furthermore, waste treatments leading to energy recovery provide an energy output that, in the best case, is able to meet 15% of the Rome electricity consumption.

  20. A Unified Framework for Street-View Panorama Stitching

    PubMed Central

    Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei

    2016-01-01

    In this paper, we propose a unified framework to generate a pleasant and high-quality street-view panorama by stitching multiple panoramic images captured from cameras mounted on a mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Since the input images are captured without a precisely common projection center, and the scenes exhibit depth differences with respect to the cameras to varying extents, such images cannot be precisely aligned in geometry. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignment. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm via matching extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate the stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481

  1. Shifts in wind energy potential following land-use driven vegetation dynamics in complex terrain.

    PubMed

    Fang, Jiannong; Peringer, Alexander; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Buttler, Alexandre; Golay, Francois; Porté-Agel, Fernando

    2018-10-15

    Many mountainous regions with high wind energy potential are characterized by multi-scale variabilities of vegetation in both spatial and time dimensions, which strongly affect the spatial distribution of wind resource and its time evolution. To this end, we developed a coupled interdisciplinary modeling framework capable of assessing the shifts in wind energy potential following land-use driven vegetation dynamics in complex mountain terrain. It was applied to a case study area in the Romanian Carpathians. The results show that the overall shifts in wind energy potential following the changes of vegetation pattern due to different land-use policies can be dramatic. This suggests that the planning of wind energy project should be integrated with the land-use planning at a specific site to ensure that the expected energy production of the planned wind farm can be reached over its entire lifetime. Moreover, the changes in the spatial distribution of wind and turbulence under different scenarios of land-use are complex, and they must be taken into account in the micro-siting of wind turbines to maximize wind energy production and minimize fatigue loads (and associated maintenance costs). The proposed new modeling framework offers, for the first time, a powerful tool for assessing long-term variability in local wind energy potential that emerges from land-use change driven vegetation dynamics over complex terrain. Following a previously unexplored pathway of cause-effect relationships, it demonstrates a new linkage of agro- and forest policies in landscape development with an ultimate trade-off between renewable energy production and biodiversity targets. Moreover, it can be extended to study the potential effects of micro-climatic changes associated with wind farms on vegetation development (growth and patterning), which could in turn have a long-term feedback effect on wind resource distribution in mountainous regions. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embed the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is coupled with the regularization term (the length functional) via multiplication as well. As a matter of fact, our proposed geometric model is actually the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions, the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework involves important contributions. Firstly, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically being accounted for. Second, this method is, to the best of our knowledge, the first reparametrization invariant registration method introduced in the literature. Thirdly, the multiplicative coupling between the registration term, i.e. local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.
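
    For orientation only (the notation below is mine and only approximates the formulation summarized above): in Beltrami/Polyakov-type frameworks the deformation field u : Omega -> R^n is treated as an embedding X(x) = (x, beta u(x)), and a weighted area (Polyakov) energy of the general form

        E[u] \;=\; \int_{\Omega} f\bigl(x,\, I_1(x),\, I_2(x + u(x))\bigr)\,
                   \sqrt{\det g(x)}\;\mathrm{d}x,
        \qquad
        g_{\mu\nu}(x) \;=\; \delta_{\mu\nu} + \beta^{2}\sum_{i}\partial_{\mu}u^{i}(x)\,\partial_{\nu}u^{i}(x),

    is minimized, where f is the image-discrepancy weight (squared error, approximated absolute error, or local joint entropy in the cases listed above) and beta is an aspect-ratio parameter of the embedding assumed for this sketch. Because f multiplies the area element, the regularization is strongest where the images still disagree, which is the data-dependent tuning of the regularization strength mentioned in the abstract.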

  3. Minimal excitation states for heat transport in driven quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Vannucci, Luca; Ronetti, Flavio; Rech, Jérôme; Ferraro, Dario; Jonckheere, Thibaut; Martin, Thierry; Sassetti, Maura

    2017-06-01

    We investigate minimal excitation states for heat transport into a fractional quantum Hall system driven out of equilibrium by means of time-periodic voltage pulses. A quantum point contact allows for tunneling of fractional quasiparticles between opposite edge states, thus acting as a beam splitter in the framework of the electron quantum optics. Excitations are then studied through heat and mixed noise generated by the random partitioning at the barrier. It is shown that levitons, the single-particle excitations of a filled Fermi sea recently observed in experiments, represent the cleanest states for heat transport since excess heat and mixed shot noise both vanish only when Lorentzian voltage pulses carrying integer electric charge are applied to the conductor. This happens in the integer quantum Hall regime and for Laughlin fractional states as well, with no influence of fractional physics on the conditions for clean energy pulses. In addition, we demonstrate the robustness of such excitations to the overlap of Lorentzian wave packets. Even though mixed and heat noise have nonlinear dependence on the voltage bias, and despite the noninteger power-law behavior arising from the fractional quantum Hall physics, an arbitrary superposition of levitons always generates minimal excitation states.

  4. Wave Energy Converter (WEC) Array Effects on Wave Current and Sediment Circulation: Monterey Bay CA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Jesse D.; Jones, Craig; Magalen, Jason

    2014-09-01

    The goals of this study were to develop tools to quantitatively characterize environments where wave energy converter (WEC) devices may be installed and to assess effects on hydrodynamics and local sediment transport. A large hypothetical WEC array was investigated using wave, hydrodynamic, and sediment transport models and site-specific average and storm conditions as input. The results indicated that there were significant changes in sediment sizes adjacent to and in the lee of the WEC array due to reduced wave energy. The circulation in the lee of the array was also altered; more intense onshore currents were generated in the lee of the WECs. In general, the storm case and the average case showed the same qualitative patterns, suggesting that these trends would be maintained throughout the year. The framework developed here can be used to design more efficient arrays while minimizing impacts on nearshore environments.

  5. Multifold paths of neutrons in the three-beam interferometer detected by a tiny energy kick

    NASA Astrophysics Data System (ADS)

    Geppert-Kleinrath, Hermann; Denkmayr, Tobias; Sponar, Stephan; Lemmel, Hartmut; Jenke, Tobias; Hasegawa, Yuji

    2018-05-01

    A neutron optical experiment is presented to investigate the paths taken by neutrons in a three-beam interferometer. In various beam paths of the interferometer, the energy of the neutrons is partially shifted so that the faint traces are left along the beam path. By ascertaining an operational meaning to "the particle's path," which-path information is extracted from these faint traces with minimal perturbations. Theory is derived by simply following the time evolution of the wave function of the neutrons, which clarifies the observation in the framework of standard quantum mechanics. Which-way information is derived from the intensity, sinusoidally oscillating in time at different frequencies, which is considered to result from the interfering cross terms between stationary main component and the energy-shifted which-way signals. Final results give experimental evidence that the (partial) wave functions of the neutrons in each beam path are superimposed and present in multiple locations in the interferometer.

  6. The impact of long-range electron-hole interaction on the charge separation yield of molecular photocells

    NASA Astrophysics Data System (ADS)

    Nemati Aram, Tahereh; Ernzerhof, Matthias; Asgari, Asghar; Mayou, Didier

    2017-01-01

    We discuss the effects of charge carrier interaction and recombination on the operation of molecular photocells. Molecular photocells are devices where the energy conversion process takes place in a single molecular donor-acceptor complex attached to electrodes. Our investigation is based on the quantum scattering theory, in particular on the Lippmann-Schwinger equation; this minimizes the complexity of the problem while providing useful and non-trivial insight into the mechanism governing photocell operation. In this study, both exciton pair creation and dissociation are treated in the energy domain, and therefore there is access to detailed spectral information, which can be used as a framework to interpret the charge separation yield. We demonstrate that the charge carrier separation is a complex process that is affected by different parameters, such as the strength of the electron-hole interaction and the non-radiative recombination rate. Our analysis helps to optimize the charge separation process and the energy transfer in organic solar cells and in molecular photocells.

  7. Active edge maps for medical image registration

    NASA Astrophysics Data System (ADS)

    Kerwin, William; Yuan, Chun

    2001-07-01

    Applying edge detection prior to performing image registration yields several advantages over raw intensity- based registration. Advantages include the ability to register multicontrast or multimodality images, immunity to intensity variations, and the potential for computationally efficient algorithms. In this work, a common framework for edge-based image registration is formulated as an adaptation of snakes used in boundary detection. Called active edge maps, the new formulation finds a one-to-one transformation T(x) that maps points in a source image to corresponding locations in a target image using an energy minimization approach. The energy consists of an image component that is small when edge features are well matched in the two images, and an internal term that restricts T(x) to allowable configurations. The active edge map formulation is illustrated here with a specific example developed for affine registration of carotid artery magnetic resonance images. In this example, edges are identified using a magnitude of gradient operator, image energy is determined using a Gaussian weighted distance function, and the internal energy includes separate, adjustable components that control volume preservation and rigidity.
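
    A minimal Python sketch of the image-energy part (assumptions: only an affine transform and the Gaussian-weighted distance term are modeled; the adjustable volume-preservation and rigidity terms of the internal energy are omitted, and all names and test data are invented):

        import numpy as np
        from scipy import ndimage, optimize

        def register_affine(edges_src, edges_tgt, sigma=3.0):
            """Toy affine registration of two binary edge maps.

            Image energy: mean Gaussian-weighted distance between transformed
            source edge points and the nearest target edge.
            """
            dist = ndimage.distance_transform_edt(~edges_tgt)   # distance to target edges
            ys, xs = np.nonzero(edges_src)
            pts = np.stack([ys, xs]).astype(float)              # 2 x N source edge points

            def energy(p):
                a, b, c, d, ty, tx = p
                A = np.array([[1 + a, b], [c, 1 + d]])
                q = A @ pts + np.array([[ty], [tx]])
                d_at_q = ndimage.map_coordinates(dist, q, order=1, mode="nearest")
                return np.mean(1.0 - np.exp(-d_at_q ** 2 / (2 * sigma ** 2)))

            return optimize.minimize(energy, np.zeros(6), method="Nelder-Mead").x

        # synthetic test: a square contour registered to a shifted copy of itself
        img = np.zeros((64, 64), bool)
        img[20:40, 20] = img[20:40, 39] = img[20, 20:40] = img[39, 20:40] = True
        shifted = np.roll(img, (3, -2), axis=(0, 1))
        print(np.round(register_affine(img, shifted), 2))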

  8. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
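
    A toy sketch in the spirit of this bounded-rational, product-distribution approach (this is not the paper's exact Lagrangian or update rule; the constraint problem, sample sizes and annealing schedule below are invented): each agent keeps a mixed strategy over its own variable, repeatedly applies a Boltzmann reweighting against the estimated conditional expected cost, and the temperature is annealed toward a pure strategy.

        import numpy as np

        rng = np.random.default_rng(0)

        # toy constraint problem: three binary variables; each clause is
        # satisfied when its two variables differ.  G(x) counts violations.
        clauses = [(0, 1), (1, 2)]
        def G(x):
            return sum(int(x[i] == x[j]) for i, j in clauses)

        n_vars, n_vals = 3, 2
        q = np.full((n_vars, n_vals), 1.0 / n_vals)   # one mixed strategy per agent

        def sample(q, n):
            return (rng.random((n, n_vars)) < q[:, 1]).astype(int)

        T = 2.0
        for step in range(50):
            for i in range(n_vars):
                g = np.zeros(n_vals)
                for v in range(n_vals):               # Monte-Carlo estimate of E[G | x_i = v]
                    xs = sample(q, 400)
                    xs[:, i] = v
                    g[v] = np.mean([G(x) for x in xs])
                w = np.exp(-g / T)                    # Boltzmann update of agent i's strategy
                q[i] = w / w.sum()
            T *= 0.92                                 # annealing toward a pure strategy
        best = q.argmax(axis=1)
        print("assignment:", best, "violations:", G(best))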

  9. QuickFF: A program for a quick and easy derivation of force fields for metal-organic frameworks from ab initio input.

    PubMed

    Vanduyfhuys, Louis; Vandenbrande, Steven; Verstraelen, Toon; Schmid, Rochus; Waroquier, Michel; Van Speybroeck, Veronique

    2015-05-15

    QuickFF is a software package to derive accurate force fields for isolated and complex molecular systems in a quick and easy manner. Apart from its general applicability, the program has been designed to generate force fields for metal-organic frameworks in an automated fashion. The force field parameters for the covalent interaction are derived from ab initio data. The mathematical expression of the covalent energy is kept simple to ensure robustness and to avoid fitting deficiencies as much as possible. The user needs to produce an equilibrium structure and a Hessian matrix for one or more building units. Afterward, a force field is generated for the system using a three-step method implemented in QuickFF. The first two steps of the methodology are designed to minimize correlations among the force field parameters. In the last step, the parameters are refined by requiring the force field to reproduce the ab initio Hessian matrix in Cartesian coordinate space as accurately as possible. The method is applied to a set of 1000 organic molecules to show the ease of use of the software protocol. To illustrate its application to metal-organic frameworks (MOFs), QuickFF is used to determine force fields for MIL-53(Al) and MOF-5. For both materials, accurate force fields had already been generated in the literature, but they required a lot of manual intervention. QuickFF is a tool that can easily be used by anyone with a basic knowledge of performing ab initio calculations. As a result, accurate force fields are generated with minimal effort. © 2015 Wiley Periodicals, Inc.
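
    The core idea of fitting covalent terms to ab initio curvature information can be illustrated with a deliberately tiny example (this is not QuickFF's three-step protocol; the scan data and units below are invented):

        import numpy as np

        # synthetic "ab initio" bond-stretch scan standing in for the
        # equilibrium structure and Hessian input that QuickFF consumes
        r = np.linspace(1.00, 1.20, 9)                       # angstrom
        rng = np.random.default_rng(1)
        E_ref = 0.5 * 35.0 * (r - 1.10) ** 2 + 0.002 * rng.standard_normal(r.size)

        # least-squares fit of a harmonic bond term E = 0.5*k*(r - r0)^2
        # via the quadratic coefficients: k = 2*c2, r0 = -c1/(2*c2)
        c2, c1, c0 = np.polyfit(r, E_ref, 2)
        k, r0 = 2 * c2, -c1 / (2 * c2)
        print(f"fitted k = {k:.1f} eV/A^2, r0 = {r0:.3f} A")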

  10. Cutting Materials in Half: A Graph Theory Approach for Generating Crystal Surfaces and Its Prediction of 2D Zeolites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witman, Matthew; Ling, Sanliang; Boyd, Peter

    Scientific interest in two-dimensional (2D) materials, ranging from graphene and other single layer materials to atomically thin crystals, is quickly increasing for a large variety of technological applications. While in silico design approaches have made a large impact in the study of 3D crystals, algorithms designed to discover atomically thin 2D materials from their parent 3D materials are by comparison more sparse. Here, we hypothesize that determining how to cut a 3D material in half (i.e., which Miller surface is formed) by severing a minimal number of bonds or a minimal amount of total bond energy per unit area can yield insight into preferred crystal faces. We answer this question by implementing a graph theory technique to mathematically formalize the enumeration of minimum cut surfaces of crystals. While the algorithm is generally applicable to different classes of materials, we focus on zeolitic materials due to their diverse structural topology and because 2D zeolites have promising catalytic and separation performance compared to their 3D counterparts. We report here a simple descriptor based only on structural information that predicts whether a zeolite is likely to be synthesizable in the 2D form and correctly identifies the expressed surface in known layered 2D zeolites. The discovery of this descriptor allows us to highlight other zeolites that may also be synthesized in the 2D form that have not been experimentally realized yet. Finally, our method is general since the mathematical formalism can be applied to find the minimum cut surfaces of other crystallographic materials such as metal-organic frameworks, covalent-organic frameworks, zeolitic-imidazolate frameworks, metal oxides, etc.
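
    The minimum-cut idea translates directly into a few lines of graph code. The sketch below (not the authors' implementation; the lattice, bond energies and networkx usage are illustrative choices) cuts a small block of unit-energy bonds in half along z by computing a minimum s-t cut:

        import networkx as nx

        # toy "crystal": a 4x4x2 block of lattice sites with nearest-neighbour
        # bonds of unit energy; the bottom layer is tied to a source S and the
        # top layer to a sink T, so the minimum s-t cut is the cheapest way to
        # separate the block into two halves along z
        G = nx.DiGraph()
        NX, NY, NZ = 4, 4, 2
        for x in range(NX):
            for y in range(NY):
                for z in range(NZ):
                    for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
                        u, v = (x, y, z), (x + dx, y + dy, z + dz)
                        if v[0] < NX and v[1] < NY and v[2] < NZ:
                            G.add_edge(u, v, capacity=1.0)   # one bond, both directions
                            G.add_edge(v, u, capacity=1.0)
        for x in range(NX):
            for y in range(NY):
                G.add_edge("S", (x, y, 0))           # missing capacity = infinite
                G.add_edge((x, y, NZ - 1), "T")

        cut_energy, _ = nx.minimum_cut(G, "S", "T")
        print("bond energy cut per surface cell:", cut_energy / (NX * NY))

    On a real structure the same computation would run on the periodic bond network, with bond energies as capacities, which is the kind of quantity the descriptor mentioned above is built from.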

  11. Cutting Materials in Half: A Graph Theory Approach for Generating Crystal Surfaces and Its Prediction of 2D Zeolites.

    PubMed

    Witman, Matthew; Ling, Sanliang; Boyd, Peter; Barthel, Senja; Haranczyk, Maciej; Slater, Ben; Smit, Berend

    2018-02-28

    Scientific interest in two-dimensional (2D) materials, ranging from graphene and other single layer materials to atomically thin crystals, is quickly increasing for a large variety of technological applications. While in silico design approaches have made a large impact in the study of 3D crystals, algorithms designed to discover atomically thin 2D materials from their parent 3D materials are by comparison more sparse. We hypothesize that determining how to cut a 3D material in half (i.e., which Miller surface is formed) by severing a minimal number of bonds or a minimal amount of total bond energy per unit area can yield insight into preferred crystal faces. We answer this question by implementing a graph theory technique to mathematically formalize the enumeration of minimum cut surfaces of crystals. While the algorithm is generally applicable to different classes of materials, we focus on zeolitic materials due to their diverse structural topology and because 2D zeolites have promising catalytic and separation performance compared to their 3D counterparts. We report here a simple descriptor based only on structural information that predicts whether a zeolite is likely to be synthesizable in the 2D form and correctly identifies the expressed surface in known layered 2D zeolites. The discovery of this descriptor allows us to highlight other zeolites that may also be synthesized in the 2D form that have not been experimentally realized yet. Finally, our method is general since the mathematical formalism can be applied to find the minimum cut surfaces of other crystallographic materials such as metal-organic frameworks, covalent-organic frameworks, zeolitic-imidazolate frameworks, metal oxides, etc.

  12. Cutting Materials in Half: A Graph Theory Approach for Generating Crystal Surfaces and Its Prediction of 2D Zeolites

    PubMed Central

    2018-01-01

    Scientific interest in two-dimensional (2D) materials, ranging from graphene and other single layer materials to atomically thin crystals, is quickly increasing for a large variety of technological applications. While in silico design approaches have made a large impact in the study of 3D crystals, algorithms designed to discover atomically thin 2D materials from their parent 3D materials are by comparison more sparse. We hypothesize that determining how to cut a 3D material in half (i.e., which Miller surface is formed) by severing a minimal number of bonds or a minimal amount of total bond energy per unit area can yield insight into preferred crystal faces. We answer this question by implementing a graph theory technique to mathematically formalize the enumeration of minimum cut surfaces of crystals. While the algorithm is generally applicable to different classes of materials, we focus on zeolitic materials due to their diverse structural topology and because 2D zeolites have promising catalytic and separation performance compared to their 3D counterparts. We report here a simple descriptor based only on structural information that predicts whether a zeolite is likely to be synthesizable in the 2D form and correctly identifies the expressed surface in known layered 2D zeolites. The discovery of this descriptor allows us to highlight other zeolites that may also be synthesized in the 2D form that have not been experimentally realized yet. Finally, our method is general since the mathematical formalism can be applied to find the minimum cut surfaces of other crystallographic materials such as metal–organic frameworks, covalent-organic frameworks, zeolitic-imidazolate frameworks, metal oxides, etc. PMID:29532024

  13. Cutting Materials in Half: A Graph Theory Approach for Generating Crystal Surfaces and Its Prediction of 2D Zeolites

    DOE PAGES

    Witman, Matthew; Ling, Sanliang; Boyd, Peter; ...

    2018-02-06

    Scientific interest in two-dimensional (2D) materials, ranging from graphene and other single layer materials to atomically thin crystals, is quickly increasing for a large variety of technological applications. While in silico design approaches have made a large impact in the study of 3D crystals, algorithms designed to discover atomically thin 2D materials from their parent 3D materials are by comparison more sparse. Here, we hypothesize that determining how to cut a 3D material in half (i.e., which Miller surface is formed) by severing a minimal number of bonds or a minimal amount of total bond energy per unit area can yield insight into preferred crystal faces. We answer this question by implementing a graph theory technique to mathematically formalize the enumeration of minimum cut surfaces of crystals. While the algorithm is generally applicable to different classes of materials, we focus on zeolitic materials due to their diverse structural topology and because 2D zeolites have promising catalytic and separation performance compared to their 3D counterparts. We report here a simple descriptor based only on structural information that predicts whether a zeolite is likely to be synthesizable in the 2D form and correctly identifies the expressed surface in known layered 2D zeolites. The discovery of this descriptor allows us to highlight other zeolites that may also be synthesized in the 2D form that have not been experimentally realized yet. Finally, our method is general since the mathematical formalism can be applied to find the minimum cut surfaces of other crystallographic materials such as metal-organic frameworks, covalent-organic frameworks, zeolitic-imidazolate frameworks, metal oxides, etc.

  14. A constrained registration problem based on Ciarlet-Geymonat stored energy

    NASA Astrophysics Data System (ADS)

    Derfoul, Ratiba; Le Guyader, Carole

    2014-03-01

    In this paper, we address the issue of designing a theoretically well-motivated registration model capable of handling large deformations and including geometrical constraints, namely landmark points to be matched, in a variational framework. The theory of linear elasticity being unsuitable in this case, since it assumes small strains and the validity of Hooke's law, the introduced functional is based on nonlinear elasticity principles. More precisely, the shapes to be matched are viewed as Ciarlet-Geymonat materials. We demonstrate the existence of minimizers of the related functional minimization problem and prove a convergence result when the number of geometric constraints increases. We then describe and analyze a numerical method of resolution based on the introduction of an associated decoupled problem under inequality constraint in which an auxiliary variable simulates the Jacobian matrix of the deformation field. A theoretical result of Γ-convergence is established. We then provide preliminary 2D results of the proposed matching model for the registration of mouse brain gene expression data to a neuroanatomical mouse atlas.

  15. Wireless Power Transfer for Distributed Estimation in Sensor Networks

    NASA Astrophysics Data System (ADS)

    Mai, Vien V.; Shin, Won-Yong; Ishibashi, Koji

    2017-04-01

    This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where wireless sensors are equipped with radio-frequency based energy harvesting technology. The sensors' observations are locally processed using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, and are coherently combined at the FC, at which the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of mean-squared error of the BLUE. Two iterative algorithms are developed to solve the non-convex problems, which converge at least to a local optimum. In particular, the above algorithms are designed to jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for colocated sensors, are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.
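
    The BLUE at the heart of the distortion measure is simple to write down; a minimal numerical sketch follows (only the estimator itself, with invented channel gains and noise powers; the amplify-and-forward, energy-beamforming and semidefinite-relaxation machinery of the paper is not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)

        # linear observation model at the fusion center: y_i = h_i * theta + n_i
        theta = 1.7
        h = np.array([0.9, 1.2, 0.7, 1.0])           # effective channel gains
        sigma2 = np.array([0.10, 0.05, 0.20, 0.08])  # noise powers
        y = h * theta + rng.normal(0.0, np.sqrt(sigma2))

        # best linear unbiased estimator and its mean-squared error:
        #   theta_hat = (h^T C^-1 y) / (h^T C^-1 h),   MSE = 1 / (h^T C^-1 h)
        w = h / sigma2
        theta_hat = w @ y / (w @ h)
        mse = 1.0 / (w @ h)
        print(f"theta_hat = {theta_hat:.3f}, predicted MSE = {mse:.4f}")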

  16. Responsible gambling: general principles and minimal requirements.

    PubMed

    Blaszczynski, Alex; Collins, Peter; Fong, Davis; Ladouceur, Robert; Nower, Lia; Shaffer, Howard J; Tavares, Hermano; Venisse, Jean-Luc

    2011-12-01

    Many international jurisdictions have introduced responsible gambling programs. These programs intend to minimize negative consequences of excessive gambling, but vary considerably in their aims, focus, and content. Many responsible gambling programs lack a conceptual framework and, in the absence of empirical data, their components are based only on general considerations and impressions. This paper outlines the consensus viewpoint of an international group of researchers suggesting fundamental responsible gambling principles, roles of key stakeholders, and minimal requirements that stakeholders can use to frame and inform responsible gambling programs across jurisdictions. Such a framework does not purport to offer value statements regarding the legal status of gambling or its expansion. Rather, it proposes gambling-related initiatives aimed at government, industry, and individuals to promote responsible gambling and consumer protection. This paper argues that there is a set of basic principles and minimal requirements that should form the basis for every responsible gambling program.

  17. Charging, power management, and battery degradation mitigation in plug-in hybrid electric vehicles: A unified cost-optimal approach

    NASA Astrophysics Data System (ADS)

    Hu, Xiaosong; Martinez, Clara Marina; Yang, Yalian

    2017-03-01

    Holistic energy management of plug-in hybrid electric vehicles (PHEVs) in smart grid environment constitutes an enormous control challenge. This paper responds to this challenge by investigating the interactions among three important control tasks, i.e., charging, on-road power management, and battery degradation mitigation, in PHEVs. Three notable original contributions distinguish our work from existing endeavors. First, a new convex programming (CP)-based cost-optimal control framework is constructed to minimize the daily operational expense of a PHEV, which seamlessly integrates costs of the three tasks. Second, a straightforward but useful sensitivity assessment of the optimization outcome is executed with respect to price changes of battery and energy carriers. The potential impact of vehicle-to-grid (V2G) power flow on the PHEV economy is eventually analyzed through a multitude of comparative studies.

  18. Method and apparatus for minimizing multiple degree of freedom vibration transmission between two regions of a structure

    NASA Technical Reports Server (NTRS)

    Silcox, Richard J. (Inventor); Fuller, Chris R. (Inventor); Gibbs, Gary P. (Inventor)

    1992-01-01

    Arrays of actuators are affixed to structural elements to impede the transmission of vibrational energy. A single pair is used to provide control of bending and extensional waves and two pairs are used to control torsional motion. The arrays are applied to a wide variety of structural elements such as a beam structure that is part of a larger framework that may or may not support a rigid or non-rigid skin. Electrical excitation is applied to the actuators that generate forces on the structure. These electrical inputs may be adjusted in their amplitude and phase by a controller in communication with appropriate vibrational wave sensors to impede the flow of vibrational power in all of the above mentioned wave forms beyond the actuator location. Additional sensor elements can be used to monitor the performance and adjust the electrical inputs to maximize the attenuation of vibrational energy.

  19. Regional allocation of biomass to U.S. energy demands under a portfolio of policy scenarios.

    PubMed

    Mullins, Kimberley A; Venkatesh, Aranya; Nagengast, Amy L; Kocoloski, Matt

    2014-01-01

    The potential for widespread use of domestically available energy resources, in conjunction with climate change concerns, suggest that biomass may be an essential component of U.S. energy systems in the near future. Cellulosic biomass in particular is anticipated to be used in increasing quantities because of policy efforts, such as federal renewable fuel standards and state renewable portfolio standards. Unfortunately, these independently designed biomass policies do not account for the fact that cellulosic biomass can equally be used for different, competing energy demands. An integrated assessment of multiple feedstocks, energy demands, and system costs is critical for making optimal decisions about a unified biomass energy strategy. This study develops a spatially explicit, best-use framework to optimally allocate cellulosic biomass feedstocks to energy demands in transportation, electricity, and residential heating sectors, while minimizing total system costs and tracking greenhouse gas emissions. Comparing biomass usage across three climate policy scenarios suggests that biomass used for space heating is a low cost emissions reduction option, while biomass for liquid fuel or for electricity becomes attractive only as emissions reduction targets or carbon prices increase. Regardless of the policy approach, study results make a strong case for national and regional coordination in policy design and compliance pathways.
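
    The best-use allocation can be pictured as a small transportation-style linear program; the sketch below (invented regions, demands and costs, far simpler than the spatially explicit model with greenhouse gas tracking described above) shows the structure of such an allocation:

        import numpy as np
        from scipy.optimize import linprog

        cost = np.array([[60.0, 45.0, 30.0],    # $/t: region 0 biomass to fuel, power, heat
                         [70.0, 50.0, 35.0]])   # $/t: region 1 biomass
        supply = np.array([100.0, 80.0])        # t available per region
        demand = np.array([50.0, 60.0, 40.0])   # t required per energy use

        n_r, n_d = cost.shape                   # variables x[i, j] >= 0, flattened row-major
        A_ub = np.zeros((n_r, n_r * n_d))       # sum_j x[i, j] <= supply_i
        for i in range(n_r):
            A_ub[i, i * n_d:(i + 1) * n_d] = 1.0
        A_eq = np.zeros((n_d, n_r * n_d))       # sum_i x[i, j] == demand_j
        for j in range(n_d):
            A_eq[j, j::n_d] = 1.0

        res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                      bounds=(0, None), method="highs")
        print(res.x.reshape(n_r, n_d), res.fun)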

  20. Origin of high Li⁺ conduction in doped Li₇La₃Zr₂O₁₂ garnets

    DOE PAGES

    Chen, Yan; Rangasamy, Ezhiylmurugan; Liang, Chengdu; ...

    2015-08-06

    Substitution of a native ion in the crystals with a foreign ion that differs in valence (aliovalent doping) has been widely attempted to upgrade solid-state ionic conductors for various charge carriers including O²⁻, H⁺, Li⁺, Na⁺, etc. The doping helps promote the high-conductive framework and dredge the tunnel for fast ion transport. The garnet-type Li₇La₃Zr₂O₁₂ (LLZO) is a fast Li⁺ solid conductor, which received much attention as an electrolyte candidate for all-solid-state lithium ion batteries, showing great potential to offer high energy density and minimize battery safety concerns to meet extensive applications in large energy storage systems such as those for electric vehicles and aerospace. In the Li-stuffed garnet framework of LLZO, the 3D pathway formed by the incompletely occupied tetrahedral sites bridged by a single octahedron enables the superior Li⁺ conductivity. For optimal performance, many aliovalent-doping efforts have been made throughout metal elements (Al³⁺, Ta⁵⁺) and metalloid elements (Ga³⁺, Te⁶⁺) in the periodic table with various valences to stabilize the high-conductive phase and increase the Li vacancy concentration.

  1. “Martinizing” the Variational Implicit Solvent Method (VISM): Solvation Free Energy for Coarse-Grained Proteins

    PubMed Central

    2017-01-01

    Solvation is a fundamental driving force in many biological processes including biomolecular recognition and self-assembly, not to mention protein folding, dynamics, and function. The variational implicit solvent method (VISM) is a theoretical tool currently developed and optimized to estimate solvation free energies for systems of very complex topology, such as biomolecules. VISM’s theoretical framework makes it unique because it couples hydrophobic, van der Waals, and electrostatic interactions as a functional of the solvation interface. By minimizing this functional, VISM produces the solvation interface as an output of the theory. In this work, we push VISM to larger scale applications by combining it with coarse-grained solute Hamiltonians adapted from the MARTINI framework, a well-established mesoscale force field for modeling large-scale biomolecule assemblies. We show how MARTINI-VISM (MVISM) compares with atomistic VISM (AVISM) for a small set of proteins differing in size, shape, and charge distribution. We also demonstrate MVISM’s suitability to study the solvation properties of an interesting encounter complex, barnase–barstar. The promising results suggest that coarse-graining the protein with the MARTINI force field is indeed a valuable step to broaden VISM’s and MARTINI’s applications in the near future. PMID:28613904

  2. A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.

    PubMed

    Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K

    2018-04-21

    Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to the concept of triple-energy CT (TECT) for enhanced material differentiation and classification as well as dual-energy CT (DECT). We express the polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated by using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector and in the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images in realistic configurations for both DECT and TECT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations. A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique promises to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved. © 2018 American Association of Physicists in Medicine.
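
    The least-squares decomposition step can be sketched for the dual-energy case with a toy forward model (the spectra and attenuation curves below are invented, and the real framework handles triple-energy data, realistic spectra and iterative line-integral updates):

        import numpy as np
        from scipy.optimize import least_squares

        E = np.linspace(20, 120, 51)                         # keV grid
        mu_water = 0.4 * (30.0 / E) ** 1.5 + 0.18            # 1/cm, invented
        mu_iodine = 6.0 * (30.0 / E) ** 3.0 + 0.2            # 1/cm, invented
        spectra = [np.exp(-0.5 * ((E - 55) / 12) ** 2),      # "low-kV" weighting
                   np.exp(-0.5 * ((E - 85) / 15) ** 2)]      # "high-kV" weighting
        spectra = [w / w.sum() for w in spectra]

        def forward(A):
            """Polychromatic projections p_s = -ln sum_E w_s(E) exp(-mu(E).A)."""
            a_w, a_i = A
            t = np.exp(-(mu_water * a_w + mu_iodine * a_i))
            return np.array([-np.log(w @ t) for w in spectra])

        A_true = np.array([10.0, 0.05])                      # cm of water, cm of iodine
        p_meas = forward(A_true)

        # decomposition: minimize the least-squares mismatch between measured
        # and estimated projections with respect to the material line integrals
        sol = least_squares(lambda A: forward(A) - p_meas, x0=[5.0, 0.0])
        print(sol.x)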

  3. Guest–host interactions of a rigid organic molecule in porous silica frameworks

    PubMed Central

    Wu, Di; Hwang, Son-Jong; Zones, Stacey I.; Navrotsky, Alexandra

    2014-01-01

    Molecular-level interactions at organic–inorganic interfaces play crucial roles in many fields including catalysis, drug delivery, and geological mineral precipitation in the presence of organic matter. To seek insights into organic–inorganic interactions in porous framework materials, we investigated the phase evolution and energetics of confinement of a rigid organic guest, N,N,N-trimethyl-1-adamantammonium iodide (TMAAI), in inorganic porous silica frameworks (SSZ-24, MCM-41, and SBA-15) as a function of pore size (0.8 nm to 20.0 nm). We used hydrofluoric acid solution calorimetry to obtain the enthalpies of interaction between silica framework materials and TMAAI, and the values range from −56 to −177 kJ per mole of TMAAI. The phase evolution as a function of pore size was investigated by X-ray diffraction, IR, thermogravimetric differential scanning calorimetry, and solid-state NMR. The results suggest the existence of three types of inclusion depending on the pore size of the framework: single-molecule confinement in a small pore, multiple-molecule confinement/adsorption of an amorphous and possibly mobile assemblage of molecules near the pore walls, and nanocrystal confinement in the pore interior. These changes in structure probably represent equilibrium and minimize the free energy of the system for each pore size, as indicated by trends in the enthalpy of interaction and differential scanning calorimetry profiles, as well as the reversible changes in structure and mobility seen by variable temperature NMR. PMID:24449886

  4. Minimal energy configurations of gravitationally interacting rigid bodies

    NASA Astrophysics Data System (ADS)

    Moeckel, Richard

    2017-05-01

    Consider a collection of n rigid, massive bodies interacting according to their mutual gravitational attraction. A relative equilibrium motion is one where the entire configuration rotates rigidly and uniformly about a fixed axis in R^3. Such a motion is possible only for special positions and orientations of the bodies. A minimal energy motion is one which has the minimum possible energy in its fixed angular momentum level. While every minimal energy motion is a relative equilibrium motion, the main result here is that a relative equilibrium motion of n≥3 disjoint rigid bodies is never an energy minimizer. This generalizes a known result about point masses to the case of rigid bodies.

  5. Search for selectron and squark production in collisions at HERA

    NASA Astrophysics Data System (ADS)

    ZEUS Collaboration; Breitweg, J.; Derrick, M.; Krakauer, D.; Magill, S.; Mikunas, D.; Musgrave, B.; Repond, J.; Stanek, R.; Talaga, R. L.; Yoshida, R.; Zhang, H.; Mattingly, M. C. K.; Anselmo, F.; Antonioli, P.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, A.; Bruni, G.; Cara Romeo, G.; Castellini, G.; Cifarelli, L.; Cindolo, F.; Contin, A.; Coppola, N.; Corradi, M.; de Pasquale, S.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Margotti, A.; Massam, T.; Nania, R.; Palmonari, F.; Pesci, A.; Polini, A.; Sartorelli, G.; Zamora Garcia, Y.; Zichichi, A.; Amelung, C.; Bornheim, A.; Brock, I.; Coböken, K.; Crittenden, J.; Deffner, R.; Eckert, M.; Grothe, M.; Hartmann, H.; Heinloth, K.; Heinz, L.; Hilger, E.; Jakob, H.-P.; Kappes, A.; Katz, U. F.; Kerger, R.; Paul, E.; Pfeiffer, M.; Stamm, J.; Wieber, H.; Bailey, D. S.; Campbell-Robson, S.; Cottingham, W. N.; Foster, B.; Hall-Wilton, R.; Heath, G. P.; Heath, H. F.; McFall, J. D.; Piccioni, D.; Roff, D. G.; Tapper, R. J.; Capua, M.; Iannotti, L.; Schioppa, M.; Susinno, G.; Kim, J. Y.; Lee, J. H.; Lim, I. T.; Pac, M. Y.; Caldwell, A.; Cartiglia, N.; Jing, Z.; Liu, W.; Mellado, B.; Parsons, J. A.; Ritz, S.; Sampson, S.; Sciulli, F.; Straub, P. B.; Zhu, Q.; Borzemski, P.; Chwastowski, J.; Eskreys, A.; Figiel, J.; Klimek, K.; Przybycień , M. B.; Zawiejski, L.; Adamczyk, L.; Bednarek, B.; Bukowy, M.; Czermak, A. M.; Jeleń , K.; Kisielewska, D.; Kowalski, T.; Przybycień , M.; Rulikowska-Zarȩ Bska, E.; Suszycki, L.; Zaja C, J.; Duliń Ski, Z.; Kotań Ski, A.; Abbiendi, G.; Bauerdick, L. A. T.; Behrens, U.; Beier, H.; Bienlein, J. K.; Desler, K.; Drews, G.; Fricke, U.; Gialas, I.; Goebel, F.; Göttlicher, P.; Graciani, R.; Haas, T.; Hain, W.; Hartner, G. F.; Hasell, D.; Hebbel, K.; Johnson, K. F.; Kasemann, M.; Koch, W.; Kötz, U.; Kowalski, H.; Lindemann, L.; Löhr, B.; Martínez, M.; Milewski, J.; Milite, M.; Monteiro, T.; Notz, D.; Pellegrino, A.; Pelucchi, F.; Piotrzkowski, K.; Rohde, M.; Roldán, J.; Ryan, J. J.; Saull, P. R. B.; Savin, A. A.; Schneekloth, U.; Schwarzer, O.; Selonke, F.; Stonjek, S.; Surrow, B.; Tassi, E.; Westphal, D.; Wolf, G.; Wollmer, U.; Youngman, C.; Zeuner, W.; Burow, B. D.; Coldewey, C.; Grabosch, H. J.; Meyer, A.; Schlenstedt, S.; Barbagli, G.; Gallo, E.; Pelfer, P.; Maccarrone, G.; Votano, L.; Bamberger, A.; Eisenhardt, S.; Markun, P.; Raach, H.; Trefzger, T.; Wölfle, S.; Bromley, J. T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Lee, S. W.; MacDonald, N.; McCance, G. J.; Saxon, D. H.; Sinclair, L. E.; Skillicorn, I. O.; Strickland, E.; Waugh, R.; Bohnet, I.; Gendner, N.; Holm, U.; Meyer-Larsen, A.; Salehi, H.; Wick, K.; Garfagnini, A.; Gladilin, L. K.; Kçira, D.; Klanner, R.; Lohrmann, E.; Poelz, G.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Cole, J. E.; Howell, G.; Lamberti, L.; Long, K. R.; Miller, D. B.; Pavel, N.; Prinias, A.; Sedgbeer, J. K.; Sideris, D.; Walker, R.; Mallik, U.; Wang, S. M.; Wu, J. T.; Cloth, P.; Filges, D.; Fleck, J. I.; Ishii, T.; Kuze, M.; Suzuki, I.; Tokushuku, K.; Yamada, S.; Yamauchi, K.; Yamazaki, Y.; Hong, S. J.; Lee, S. B.; Nam, S. W.; Park, S. K.; Lim, H.; Park, I. H.; Son, D.; Barreiro, F.; Fernández, J. P.; García, G.; Glasman, C.; Hernández, J. M.; Hervás, L.; Labarga, L.; del Peso, J.; Puga, J.; Terrón, J.; de Trocóniz, J. F.; Corriveau, F.; Hanna, D. S.; Hartmann, J.; Hung, L. W.; Murray, W. N.; Ochs, A.; Riveline, M.; Stairs, D. G.; St-Laurent, M.; Ullmann, R.; Tsurugai, T.; Bashkirov, V.; Dolgoshein, B. A.; Stifutkin, A.; Bashindzhagyan, G. L.; Ermolov, P. 
F.; Golubkov, Yu. A.; Khein, L. A.; Korotkova, N. A.; Korzhavina, I. A.; Kuzmin, V. A.; Lukina, O. Yu.; Proskuryakov, A. S.; Shcheglova, L. M.; Solomin, A. N.; Zotkin, S. A.; Bokel, C.; Botje, M.; Brümmer, N.; Engelen, J.; Koffeman, E.; Kooijman, P.; van Sighem, A.; Tiecke, H.; Tuning, N.; Verkerke, W.; Vossebeld, J.; Wiggers, L.; de Wolf, E.; Acosta, D.; Bylsma, B.; Durkin, L. S.; Gilmore, J.; Ginsburg, C. M.; Kim, C. L.; Ling, T. Y.; Nylander, P.; Romanowski, T. A.; Blaikley, H. E.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Devenish, R. C. E.; Edmonds, J. K.; Große-Knetter, J.; Harnew, N.; Nath, C.; Noyes, V. A.; Quadt, A.; Ruske, O.; Tickner, J. R.; Walczak, R.; Waters, D. S.; Bertolin, A.; Brugnera, R.; Carlin, R.; dal Corso, F.; Dosselli, U.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Bulmahn, J.; Oh, B. Y.; Okrasiń Ski, J. R.; Toothacker, W. S.; Whitmore, J. J.; Iga, Y.; D'Agostini, G.; Marini, G.; Nigro, A.; Raso, M.; Hart, J. C.; McCubbin, N. A.; Shah, T. P.; Epperson, D.; Heusch, C.; Rahn, J. T.; Sadrozinski, H. F.-W.; Seiden, A.; Wichmann, R.; Williams, D. C.; Abramowicz, H.; Briskin, G.; Dagan, S.; Kananov, S.; Levy, A.; Abe, T.; Fusayasu, T.; Inuzuka, M.; Nagano, K.; Umemori, K.; Yamashita, T.; Hamatsu, R.; Hirose, T.; Homma, K.; Kitamura, S.; Matsushita, T.; Arneodo, M.; Cirio, R.; Costa, M.; Ferrero, M. I.; Maselli, S.; Monaco, V.; Peroni, C.; Petrucci, M. C.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Fagerstroem, C.-P.; Galea, R.; Joo, K. K.; Levman, G. M.; Martin R. S. Orr, J. F.; Polenz, S.; Sabetfakhri, A.; Simmons, D.; Butterworth, J. M.; Catterall, C. D.; Hayes, M. E.; Jones, T. W.; Lane, J. B.; Saunders, R. L.; Sutton, M. R.; Wing, M.; Ciborowski, J.; Grzelak, G.; Kasprzak, M.; Nowak, R. J.; Pawlak, J. M.; Pawlak, R.; Smalska, B.; Tymieniecka, T.; Wróblewski, A. K.; Zakrzewski, J. A.; Zsolararnecki, A. F.; Adamus, M.; Deppe, O.; Eisenberg, Y.; Hochman, D.; Karshon, U.; Badgett, W. F.; Chapin, D.; Cross, R.; Dasu, S.; Foudas, C.; Loveless, R. J.; Mattingly, S.; Reeder, D. D.; Smith, W. H.; Vaiciulis, A.; Wodarczyk, M.; Deshpande, A.; Dhawan, S.; Hughes, V. W.; Bhadra, S.; Frisken, W. R.; Khakzad, M.; Schmidke, W. B.

    1998-08-01

    We have searched for the production of a selectron and a squark in collisions at a center-of-mass energy of 300 GeV using the ZEUS detector at HERA. The selectron and squark are sought in the direct decay into the lightest neutralino in the framework of supersymmetric extensions to the Standard Model which conserve R-parity. No evidence for the production of supersymmetric particles has been found in a data sample corresponding to 46.6 pb of integrated luminosity. We express upper limits on the product of the cross section times the decay branching ratios as excluded regions in the parameter space of the Minimal Supersymmetric Standard Model.

  6. Liver vessels segmentation using a hybrid geometrical moments/graph cuts method

    PubMed Central

    Esneault, Simon; Lafon, Cyril; Dillenseger, Jean-Louis

    2010-01-01

    This paper describes a fast and fully-automatic method for liver vessel segmentation on CT scan pre-operative images. The basis of this method is the introduction of a 3-D geometrical moment-based detector of cylindrical shapes within the min-cut/max-flow energy minimization framework. This method represents an original way to introduce a data term as a constraint into the widely used Boykov’s graph cuts algorithm and hence, to automate the segmentation. The method is evaluated and compared with others on a synthetic dataset. Finally, the relevancy of our method regarding the planning of a -necessarily accurate- percutaneous high intensity focused ultrasound surgical operation is demonstrated with some examples. PMID:19783500
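
    For readers less familiar with the min-cut/max-flow formulation, the binary labeling energy usually minimized in this setting has the generic textbook form (not the paper's exact functional)

        E(L) \;=\; \sum_{p \in \mathcal{P}} D_{p}(L_{p})
        \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} w_{pq}\,[\,L_{p} \neq L_{q}\,],
        \qquad L_{p} \in \{\text{vessel},\ \text{background}\},

    which is submodular in the binary case and can therefore be minimized exactly by a single min-cut/max-flow computation; the paper's contribution is to build the data term D_p from the 3-D geometrical-moment cylinder detector.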

  7. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at a competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed, has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.
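
    As a point of reference for the comparison, min-sum message passing (max-product BP in the log domain) is exact on a chain, where it reduces to the dynamic program below (invented unary costs and a Potts pairwise term; this is only a baseline illustration, not the DAS algorithm itself):

        import numpy as np

        rng = np.random.default_rng(3)
        n_nodes, n_labels, lam = 5, 3, 0.6
        unary = rng.random((n_nodes, n_labels))
        pair = lam * (1 - np.eye(n_labels))          # Potts penalty for unequal labels

        # forward pass: m[x] = best cost of the prefix ending with label x
        m = unary[0].copy()
        back = np.zeros((n_nodes, n_labels), dtype=int)
        for i in range(1, n_nodes):
            cand = m[:, None] + pair                 # (label at i-1, label at i)
            back[i] = np.argmin(cand, axis=0)
            m = unary[i] + np.min(cand, axis=0)

        # backward pass: recover the minimizing labeling
        labels = np.zeros(n_nodes, dtype=int)
        labels[-1] = int(np.argmin(m))
        for i in range(n_nodes - 1, 0, -1):
            labels[i - 1] = back[i, labels[i]]
        print("MAP labeling:", labels, "energy:", float(m.min()))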

  8. Light clusters and pasta phases in warm and dense nuclear matter

    NASA Astrophysics Data System (ADS)

    Avancini, Sidney S.; Ferreira, Márcio; Pais, Helena; Providência, Constança; Röpke, Gerd

    2017-04-01

    The pasta phases are calculated for warm stellar matter in a framework of relativistic mean-field models, including the possibility of light cluster formation. Results from three different semiclassical approaches are compared with a quantum statistical calculation. Light clusters are considered as point-like particles, and their abundances are determined from the minimization of the free energy. The couplings of the light clusters to mesons are determined from experimental chemical equilibrium constants and many-body quantum statistical calculations. The effect of these light clusters on the chemical potentials is also discussed. It is shown that, by including heavy clusters, light clusters are present up to larger nucleonic densities, although with smaller mass fractions.

  9. Initial Results from the Miniature Imager for Neutral Ionospheric Atoms and Magnetospheric Electrons (MINI-ME) on the FASTSAT Spacecraft

    NASA Technical Reports Server (NTRS)

    Collier, Michael R.; Rowland, Douglas; Keller, John W.; Chornay, Dennis; Khazanov, George; Herrero, Federico; Moore, Thomas E.; Kujawski, Joseph; Casas, Joseph C.; Wilson, Gordon

    2011-01-01

    The MINI-ME instrument is a collaborative effort between NASA's Goddard Space Flight Center (GSFC) and the U.S. Naval Academy, funded solely through GSFC Internal Research and Development (IRAD) awards. It detects neutral atoms from about 10 eV to about 700 eV (in 30 energy steps) in its current operating configuration with an approximately 10 degree by 360 degree field-of-view, divided into six sectors. The instrument was delivered on August 3, 2009 to Marshall Space Flight Center (MSFC) for integration with the FASTSAT-HSV01 small spacecraft bus developed by MSFC and a commercial partner, one of six Space Experiment Review Board (SERB) experiments on FASTSAT and one of three GSFC instruments (PISA and TTI being the other two). The FASTSAT spacecraft was launched on November 21, 2010 from Kodiak, Alaska on a Minotaur IV as a secondary payload and inserted into a 650 km, 72 degree inclination orbit, very nearly circular. MINI-ME has been collecting science data, as spacecraft resources would permit, in "optimal science mode" since January 20, 2011. In this presentation, we report initial science results including the potential first observations of neutral molecular ionospheric outflow. At the time of this abstract, we have identified 15 possible molecular outflow events. All these events occur between about 65 and 82 degrees geomagnetic latitude and most map to the auroral oval. The MINI-ME results provide an excellent framework for interpretation of the MILENA data, two instruments almost identical to MINI-ME that will launch on the VISIONS suborbital mission (PI: Douglas Rowland).

  10. Initial Results from the Miniature Imager for Neutral Ionospheric atoms and Magnetospheric Electrons (MINI-ME) on the FASTSAT Spacecraft

    NASA Astrophysics Data System (ADS)

    Collier, M. R.; Rowland, D. E.; Keller, J. W.; Chornay, D. J.; Khazanov, G. V.; Herrero, F.; Moore, T. E.; Kujawski, J. T.; Casas, J. C.; Wilson, G. R.

    2011-12-01

    The MINI-ME instrument is a collaborative effort between NASA's Goddard Space Flight Center (GSFC) and the U.S. Naval Academy, funded solely through GSFC Internal Research and Development (IRAD) awards. It detects neutral atoms from about 10 eV to about 700 eV (in 30 energy steps) in its current operating configuration with an approximately 10 degree by 360 degree field-of-view, divided into six sectors. The instrument was delivered on August 3, 2009 to Marshall Space Flight Center (MSFC) for integration with the FASTSAT-HSV01 small spacecraft bus developed by MSFC and a commercial partner, one of six Space Experiments Review Board (SERB) experiments on FASTSAT and one of three GSFC instruments (PISA and TTI being the other two). The FASTSAT spacecraft was launched on November 21, 2010 from Kodiak, Alaska on a Minotaur IV as a secondary payload and inserted into a 650 km, 72 degree inclination orbit, very nearly circular. MINI-ME has been collecting science data, as spacecraft resources would permit, in "optimal science mode" since January 20, 2011. In this presentation, we report initial science results including the potential first observations of neutral molecular ionospheric outflow. At the time of this abstract, we have identified 15 possible molecular outflow events. All these events occur between about 65 and 82 degrees geomagnetic latitude and most map to the auroral oval. The MINI-ME results provide an excellent framework for interpretation of the MILENA data, two instruments almost identical to MINI-ME that will launch on the VISIONS suborbital mission (PI: Douglas Rowland).

  11. Causal Set Approach to a Minimal Invariant Length

    NASA Astrophysics Data System (ADS)

    Raut, Usha

    2007-04-01

    Any attempt to quantize gravity would necessarily introduce a minimal observable length scale of the order of the Planck length. This conclusion is based on several different studies and thought experiments and appears to be an inescapable feature of all quantum gravity theories, irrespective of the method used to quantize gravity. Over the last few years there has been growing concern that such a minimal length might lead to a contradiction with the basic postulates of special relativity, in particular the Lorentz-Fitzgerald contraction. A few years ago, Rovelli et al. attempted to reconcile an invariant minimal length with Special Relativity, using the framework of loop quantum gravity. However, the inherently canonical formalism of the loop quantum approach is plagued by a variety of problems, many brought on by the separation of space and time co-ordinates. In this paper we use a completely different approach. Using the framework of the causal set paradigm, along with a statistical measure of closeness between Lorentzian manifolds, we re-examine the issue of introducing a minimal observable length that is not at odds with Special Relativity postulates.

  12. Methods for associating or dissociating guest materials with a metal organic framework, systems for associating or dissociating guest materials within a series of metal organic frameworks, thermal energy transfer assemblies, and methods for transferring thermal energy

    DOEpatents

    McGrail, B. Peter; Brown, Daryl R.; Thallapally, Praveen K.

    2016-08-02

    Methods for releasing associated guest materials from a metal organic framework are provided. Methods for associating guest materials with a metal organic framework are also provided. Methods are provided for selectively associating or dissociating guest materials with a metal organic framework. Systems for associating or dissociating guest materials within a series of metal organic frameworks are provided. Thermal energy transfer assemblies are provided. Methods for transferring thermal energy are also provided.

  13. Methods for associating or dissociating guest materials with a metal organic framework, systems for associating or dissociating guest materials within a series of metal organic frameworks, thermal energy transfer assemblies, and methods for transferring thermal energy

    DOEpatents

    McGrail, B. Peter; Brown, Daryl R.; Thallapally, Praveen K.

    2014-08-05

    Methods for releasing associated guest materials from a metal organic framework are provided. Methods for associating guest materials with a metal organic framework are also provided. Methods are provided for selectively associating or dissociating guest materials with a metal organic framework. Systems for associating or dissociating guest materials within a series of metal organic frameworks are provided. Thermal energy transfer assemblies are provided. Methods for transferring thermal energy are also provided.

  14. A segmentation editing framework based on shape change statistics

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen

    2017-02-01

    Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by only drawing a sparse set of contours would be needed. This paper describes such a framework as applied to a single object. Constrained by the additional information enabled by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation to a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising in greatly decreasing the work expected from the user.

  15. On the thermomechanical coupling in dissipative materials: A variational approach for generalized standard materials

    NASA Astrophysics Data System (ADS)

    Bartels, A.; Bartel, T.; Canadija, M.; Mosler, J.

    2015-09-01

    This paper deals with the thermomechanical coupling in dissipative materials. The focus lies on finite strain plasticity theory and the temperature increase resulting from plastic deformation. For this type of problem, two fundamentally different modeling approaches can be found in the literature: (a) models based on thermodynamical considerations and (b) models based on the so-called Taylor-Quinney factor. While a naive, straightforward implementation of thermodynamically consistent approaches usually leads to an over-prediction of the temperature increase due to plastic deformation, models relying on the Taylor-Quinney factor often violate fundamental physical principles such as the first and the second law of thermodynamics. In this paper, a thermodynamically consistent framework is elaborated which indeed allows the realistic prediction of the temperature evolution. In contrast to previously proposed frameworks, it is based on a fully three-dimensional, finite strain setting and it naturally covers coupled isotropic and kinematic hardening - also based on non-associative evolution equations. Considering a variationally consistent description based on incremental energy minimization, it is shown that the aforementioned problem (thermodynamical consistency and a realistic temperature prediction) is essentially equivalent to correctly defining the decomposition of the total energy into stored and dissipative parts. Interestingly, this decomposition shows strong analogies to the Taylor-Quinney factor. In this respect, the Taylor-Quinney factor can be well motivated from a physical point of view. Furthermore, certain intervals for this factor can be derived in order to guarantee that fundamental physical principles are fulfilled a priori. Representative examples demonstrate the predictive capabilities of the final constitutive modeling framework.

  16. Ecosystem service tradeoff analysis reveals the value of marine spatial planning for multiple ocean uses

    PubMed Central

    White, Crow; Halpern, Benjamin S.; Kappel, Carrie V.

    2012-01-01

    Marine spatial planning (MSP) is an emerging responsibility of resource managers around the United States and elsewhere. A key proposed advantage of MSP is that it makes tradeoffs in resource use and sector (stakeholder group) values explicit, but doing so requires tools to assess tradeoffs. We extended tradeoff analyses from economics to simultaneously assess multiple ecosystem services and the values they provide to sectors using a robust, quantitative, and transparent framework. We used the framework to assess potential conflicts among offshore wind energy, commercial fishing, and whale-watching sectors in Massachusetts and identify and quantify the value from choosing optimal wind farm designs that minimize conflicts among these sectors. Most notably, we show that using MSP over conventional planning could prevent >$1 million in losses to the incumbent fishery and whale-watching sectors and could generate >$10 billion in extra value to the energy sector. The value of MSP increased with the number of sectors considered and with the size of the area under management. Importantly, the framework can be applied even when sectors are not measured in dollars (e.g., conservation). Making tradeoffs explicit improves transparency in decision-making, helps avoid unnecessary conflicts attributable to perceived but weak tradeoffs, and focuses debate on finding the most efficient solutions to mitigate real tradeoffs and maximize sector values. Our analysis demonstrates the utility, feasibility, and value of MSP and provides timely support for the management transitions needed for society to address the challenges of an increasingly crowded ocean environment. PMID:22392996

  17. Computational methods for reactive transport modeling: An extended law of mass-action, xLMA, method for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.

    2016-10-01

    We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the proposed xLMA method inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a unified open-source framework for modeling chemically reactive systems.
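
    As a rough illustration of the Gibbs energy minimization problem from which the xLMA equations are derived, the sketch below minimizes the Gibbs energy of a tiny ideal-gas mixture subject to element-balance constraints. The species list, standard chemical potentials, and formula matrix are made-up toy values; this is not Reaktoro's API or the paper's solver.

      # Minimal GEM sketch: minimize G(n) = sum_i n_i*(mu0_i + R*T*ln(n_i/n_tot))
      # subject to element balance A @ n = b and n > 0.  Toy ideal-gas data.
      import numpy as np
      from scipy.optimize import minimize

      R, T = 8.314, 298.15
      species = ["H2", "O2", "H2O"]
      mu0 = np.array([0.0, 0.0, -228600.0])      # standard chemical potentials, J/mol (illustrative)
      A = np.array([[2, 0, 2],                   # H balance
                    [0, 2, 1]])                  # O balance
      b = A @ np.array([1.0, 0.5, 0.0])          # elements supplied by 1 mol H2 + 0.5 mol O2

      def gibbs(n):
          n = np.maximum(n, 1e-12)
          return float(np.sum(n * (mu0 + R * T * np.log(n / n.sum()))))

      res = minimize(gibbs, x0=np.full(3, 0.3),
                     bounds=[(1e-12, None)] * 3,
                     constraints={"type": "eq", "fun": lambda n: A @ n - b},
                     method="SLSQP")
      print(dict(zip(species, res.x)))           # nearly all H2O at these conditions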

  18. Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    In quantum gravity theories, when the scattering energy is comparable to the Planck energy the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation on one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally-periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies to the positive direction. Scattering through a locally-periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows down when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and Gaussian barrier is smaller in the minimal length cases compared to the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases compared to the non-minimal length case. The approach is exact and applicable to many types of scattering potential.

  19. A framework for multi-stakeholder decision-making and ...

    EPA Pesticide Factsheets

    We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives, as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.
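
    As a rough numerical illustration of how the probability level lets CVaR interpolate between the average-dissatisfaction and worst-case formulations mentioned above, the sketch below evaluates CVaR on a sampled dissatisfaction distribution; the sample and the levels are hypothetical, not the article's case-study data.

      # CVaR of a dissatisfaction sample: mean of the worst (1 - alpha) fraction.
      # alpha -> 0 recovers the average; alpha -> 1 approaches the worst case.
      import numpy as np

      def cvar(dissatisfaction, alpha):
          d = np.sort(np.asarray(dissatisfaction, dtype=float))
          var = np.quantile(d, alpha)            # value-at-risk at level alpha
          return d[d >= var].mean()              # average of the tail beyond the VaR

      rng = np.random.default_rng(0)
      d = rng.beta(2.0, 5.0, size=1000)          # hypothetical dissatisfaction scores in [0, 1]
      for a in (0.0, 0.5, 0.9, 0.99):
          print(f"alpha={a:4.2f}  CVaR={cvar(d, a):.3f}")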

  20. 77 FR 18963 - Energy Conservation Program: Public Meeting and Availability of the Framework Document for High...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-29

    ... the Framework Document for High-Intensity Discharge Lamps AGENCY: Office of Energy Efficiency and... availability of framework document for high-intensity discharge (HID) lamps, initiating the rulemaking and data... Availability of Framework Document Regarding Energy Conservation Standards for High-Intensity Discharge (HID...

  1. Minimal Reduplication

    ERIC Educational Resources Information Center

    Kirchner, Jesse Saba

    2010-01-01

    This dissertation introduces Minimal Reduplication, a new theory and framework within generative grammar for analyzing reduplication in human language. I argue that reduplication is an emergent property in multiple components of the grammar. In particular, reduplication occurs independently in the phonology and syntax components, and in both cases…

  2. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernán A.

    2015-08-01

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
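
    The adaptive Collective Influence heuristic associated with this optimal-percolation approach scores a node as CI_l(i) = (k_i - 1) times the sum of (k_j - 1) over the nodes j on the boundary of the ball of radius l around i, and repeatedly removes the highest-scoring node. The sketch below implements that scoring on a toy graph; the graph, the radius, and the use of networkx are illustrative choices, not the authors' large-scale implementation.

      # Collective Influence sketch: score nodes by CI_l and remove them adaptively.
      import networkx as nx

      def collective_influence(G, node, radius=2):
          dist = nx.single_source_shortest_path_length(G, node, cutoff=radius)
          frontier = [j for j, d in dist.items() if d == radius]
          return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

      def top_influencers(G, n_remove=5, radius=2):
          G = G.copy()
          removed = []
          for _ in range(n_remove):
              best = max(G.nodes, key=lambda i: collective_influence(G, i, radius))
              removed.append(best)
              G.remove_node(best)                # adaptive: rescore after each removal
          return removed

      G = nx.barabasi_albert_graph(500, 3, seed=1)
      print(top_influencers(G))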

  3. Influence maximization in complex networks through optimal percolation.

    PubMed

    Morone, Flaviano; Makse, Hernán A

    2015-08-06

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.

  4. Understanding the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahms, Rainer N.

    A generalized framework for multi-component liquid injections is presented to understand and predict the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions. The analysis focuses on the thermodynamic structure and the immiscibility state of representative gas-liquid interfaces. The most modern form of Helmholtz energy mixture state equation is utilized which exhibits a unique and physically consistent behavior over the entire two-phase regime of fluid densities. It is combined with generalized models for non-linear gradient theory and for liquid injections to quantify multi-component two-phase interface structures in global thermal equilibrium. Then, the Helmholtz free energy is minimized which determines the interfacial species distribution as a consequence. This minimal free energy state is demonstrated to validate the underlying assumptions of classic two-phase theory and spray atomization. However, under certain engine-relevant conditions for which corroborating experimental data are presented, this requirement for interfacial thermal equilibrium becomes unsustainable. A rigorously derived probability density function quantifies the ability of the interface to develop internal spatial temperature gradients in the presence of significant temperature differences between injected liquid and ambient gas. Then, the interface can no longer be viewed as an isolated system at minimal free energy. Instead, the interfacial dynamics become intimately connected to those of the separated homogeneous phases. Hence, the interface transitions toward a state in local equilibrium whereupon it becomes a dense-fluid mixing layer. A new conceptual view of a transitional liquid injection process emerges from a transition time scale analysis. Close to the nozzle exit, the two-phase interface still remains largely intact and more classic two-phase processes prevail as a consequence. Further downstream, however, the transition to dense-fluid mixing generally occurs before the liquid length is reached. As a result, the significance of the presented modeling expressions is established by a direct comparison to a reduced model, which utilizes widely applied approximations but fundamentally fails to capture the physical complexity discussed in this paper.

  5. Understanding the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahms, Rainer N., E-mail: Rndahms@sandia.gov

    A generalized framework for multi-component liquid injections is presented to understand and predict the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions. The analysis focuses on the thermodynamic structure and the immiscibility state of representative gas-liquid interfaces. The most modern form of Helmholtz energy mixture state equation is utilized which exhibits a unique and physically consistent behavior over the entire two-phase regime of fluid densities. It is combined with generalized models for non-linear gradient theory and for liquid injections to quantify multi-component two-phase interface structures in global thermal equilibrium. Then, the Helmholtz free energy is minimized which determines the interfacial species distribution as a consequence. This minimal free energy state is demonstrated to validate the underlying assumptions of classic two-phase theory and spray atomization. However, under certain engine-relevant conditions for which corroborating experimental data are presented, this requirement for interfacial thermal equilibrium becomes unsustainable. A rigorously derived probability density function quantifies the ability of the interface to develop internal spatial temperature gradients in the presence of significant temperature differences between injected liquid and ambient gas. Then, the interface can no longer be viewed as an isolated system at minimal free energy. Instead, the interfacial dynamics become intimately connected to those of the separated homogeneous phases. Hence, the interface transitions toward a state in local equilibrium whereupon it becomes a dense-fluid mixing layer. A new conceptual view of a transitional liquid injection process emerges from a transition time scale analysis. Close to the nozzle exit, the two-phase interface still remains largely intact and more classic two-phase processes prevail as a consequence. Further downstream, however, the transition to dense-fluid mixing generally occurs before the liquid length is reached. The significance of the presented modeling expressions is established by a direct comparison to a reduced model, which utilizes widely applied approximations but fundamentally fails to capture the physical complexity discussed in this paper.

  6. Understanding the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions

    DOE PAGES

    Dahms, Rainer N.

    2016-04-26

    A generalized framework for multi-component liquid injections is presented to understand and predict the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions. The analysis focuses on the thermodynamic structure and the immiscibility state of representative gas-liquid interfaces. The most modern form of Helmholtz energy mixture state equation is utilized which exhibits a unique and physically consistent behavior over the entire two-phase regime of fluid densities. It is combined with generalized models for non-linear gradient theory and for liquid injections to quantify multi-component two-phase interface structures in global thermal equilibrium. Then, the Helmholtz free energy is minimized which determines the interfacial species distribution as a consequence. This minimal free energy state is demonstrated to validate the underlying assumptions of classic two-phase theory and spray atomization. However, under certain engine-relevant conditions for which corroborating experimental data are presented, this requirement for interfacial thermal equilibrium becomes unsustainable. A rigorously derived probability density function quantifies the ability of the interface to develop internal spatial temperature gradients in the presence of significant temperature differences between injected liquid and ambient gas. Then, the interface can no longer be viewed as an isolated system at minimal free energy. Instead, the interfacial dynamics become intimately connected to those of the separated homogeneous phases. Hence, the interface transitions toward a state in local equilibrium whereupon it becomes a dense-fluid mixing layer. A new conceptual view of a transitional liquid injection process emerges from a transition time scale analysis. Close to the nozzle exit, the two-phase interface still remains largely intact and more classic two-phase processes prevail as a consequence. Further downstream, however, the transition to dense-fluid mixing generally occurs before the liquid length is reached. As a result, the significance of the presented modeling expressions is established by a direct comparison to a reduced model, which utilizes widely applied approximations but fundamentally fails to capture the physical complexity discussed in this paper.

  7. Free energy minimization to predict RNA secondary structures and computational RNA design.

    PubMed

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
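
    Production folding tools in this literature minimize the Turner nearest-neighbor free energy with dynamic programming; the sketch below is only a simplified Nussinov-style recursion that maximizes base pairs, included to show the structure such dynamic programs share. The minimum-loop constraint and the toy sequence are illustrative choices.

      # Nussinov-style dynamic program: maximize Watson-Crick/GU pairs as a
      # simplified stand-in for free energy minimization.
      def max_pairs(seq, min_loop=3):
          n = len(seq)
          ok = lambda a, b: {a, b} in ({"A", "U"}, {"C", "G"}, {"G", "U"})
          dp = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):
              for i in range(n - span):
                  j = i + span
                  best = dp[i][j - 1]                      # j left unpaired
                  for k in range(i, j - min_loop):         # j pairs with k
                      if ok(seq[k], seq[j]):
                          left = dp[i][k - 1] if k > i else 0
                          best = max(best, left + dp[k + 1][j - 1] + 1)
                  dp[i][j] = best
          return dp[0][n - 1]

      print(max_pairs("GGGAAAUCCC"))               # 3 pairs for this toy hairpin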

  8. 75 FR 24824 - Energy Efficiency Program for Consumer Products: Public Meeting and Availability of the Framework...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-06

    ... Availability of the Framework Document for Commercial Refrigeration Equipment AGENCY: Office of Energy... data collection process to consider amended energy conservation standards for commercial refrigeration... Energy, Building Technologies Program, Mailstop EE-2J, Framework Document for Commercial Refrigeration...

  9. [Possible changes in energy-minimizer mechanisms of locomotion due to chronic low back pain - a literature review].

    PubMed

    de Carvalho, Alberito Rodrigo; Andrade, Alexandro; Peyré-Tartaruga, Leonardo Alexandre

    2015-01-01

    One goal of locomotion is to move the body through space in the most economical way possible. However, little is known about the mechanical and energetic aspects of locomotion that are affected by low back pain, and, when impairment occurs, about how the mechanical and energetic characteristics of locomotion are manifested in functional activities, especially with respect to the energy-minimizer mechanisms of locomotion. This study aimed: a) to describe the main energy-minimizer mechanisms of locomotion; b) to check whether chronic low back pain (CLBP) shows signs of impairing the mechanical and energetic characteristics of locomotion in ways that may compromise the energy-minimizer mechanisms. This study is characterized as a narrative literature review. The main theory that explains the minimization of energy expenditure during locomotion is the inverted pendulum mechanism, by which kinetic energy is converted into potential energy of the center of mass and vice versa during the step. This mechanism is strongly influenced by spatio-temporal gait (locomotion) parameters such as step length and preferred walking speed, which, in turn, may be severely altered in patients with chronic low back pain. However, much remains to be understood about the effects of chronic low back pain on the individual's ability to walk economically, because functional impairment may compromise the mechanical and energetic characteristics of this type of gait, making it more costly. Thus, there are indications that such changes may compromise the functional energy-minimizer mechanisms. Copyright © 2014 Elsevier Editora Ltda. All rights reserved.

  10. Smart City Energy Interconnection Technology Framework Preliminary Research

    NASA Astrophysics Data System (ADS)

    Zheng, Guotai; Zhao, Baoguo; Zhao, Xin; Li, Hao; Huo, Xianxu; Li, Wen; Xia, Yu

    2018-01-01

    To improve urban energy efficiency, increase the share of new and renewable energy sources that can be absorbed, and reduce environmental pollution, energy supply and consumption technology frameworks matched to future energy constraints and to the available level of applied technology need to be studied. Compared with a traditional energy supply system, an "Energy Internet" technical framework based on advanced information technology can exploit the advantages of integrated energy applications and load-side interaction, optimizing energy supply and consumption as a whole and improving the overall utilization efficiency of energy.

  11. Partners | Integrated Energy Solutions | NREL

    Science.gov Websites

    Partnership Develops Off-Grid Energy Access through Quality Assurance Framework for Mini-Grids: NREL has teamed with partners in Africa to develop a Quality Assurance Framework for isolated mini-grids. NREL Enhances Energy Resiliency.

  12. NREL Partnership Develops Off-Grid Energy Access through Quality Assurance

    Science.gov Websites

    NREL Partnership Develops Off-Grid Energy Access through Quality Assurance Framework for Mini-Grids: NREL has teamed with the Global Lighting

  13. Analytical solution of Schrödinger equation in minimal length formalism for trigonometric potential using hypergeometry method

    NASA Astrophysics Data System (ADS)

    Nurhidayati, I.; Suparmi, A.; Cari, C.

    2018-03-01

    The Schrödinger equation has been extended by applying the minimal length formalism to a trigonometric potential. The wave function and energy spectra were used to describe the behavior of a subatomic particle. The wave function and energy spectra were obtained by using the hypergeometry method. The results showed that the energy increases with both the minimal length parameter and the potential parameter. The energies were calculated numerically using MATLAB.

  14. Energy Efficiency Optimization in Relay-Assisted MIMO Systems With Perfect and Statistical CSI

    NASA Astrophysics Data System (ADS)

    Zappone, Alessio; Cao, Pan; Jorswieck, Eduard A.

    2014-01-01

    A framework for energy-efficient resource allocation in a single-user, amplify-and-forward relay-assisted MIMO system is devised in this paper. Previous results in this area have focused on rate maximization or sum power minimization problems, whereas fewer results are available when bits/Joule energy efficiency (EE) optimization is the goal. The performance metric to optimize is the ratio between the system's achievable rate and the total consumed power. The optimization is carried out with respect to the source and relay precoding matrices, subject to QoS and power constraints. Such a challenging non-convex problem is tackled by means of fractional programming and alternating maximization algorithms, for various CSI assumptions at the source and relay. In particular, the scenarios of perfect CSI and those of statistical CSI for either the source-relay or the relay-destination channel are addressed. Moreover, sufficient conditions for beamforming optimality are derived, which is useful in simplifying the system design. Numerical results are provided to corroborate the validity of the theoretical findings.
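
    Fractional programs of this kind are commonly handled with Dinkelbach's method, which replaces the ratio maximization max f(x)/g(x) by a sequence of parametric subproblems max f(x) - lambda*g(x). The sketch below applies it to a scalar single-link rate/power toy problem with made-up constants; it is not the paper's MIMO precoder optimization.

      # Dinkelbach's method: iterate lambda = f(x*)/g(x*), where x* maximizes
      # f(x) - lambda*g(x).  Toy model: f(p) = log2(1 + g0*p), g(p) = p + Pc.
      import numpy as np
      from scipy.optimize import minimize_scalar

      g0, Pc, Pmax = 5.0, 0.5, 10.0    # hypothetical channel gain, circuit power, power budget
      f = lambda p: np.log2(1.0 + g0 * p)
      g = lambda p: p + Pc

      lam = 0.0
      for _ in range(50):
          res = minimize_scalar(lambda p: -(f(p) - lam * g(p)),
                                bounds=(0.0, Pmax), method="bounded")
          p_star = res.x
          new_lam = f(p_star) / g(p_star)
          if abs(new_lam - lam) < 1e-9:
              break
          lam = new_lam

      print(f"optimal power {p_star:.3f}, energy efficiency {lam:.3f} bit/s/Hz per unit power")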

  15. Modeling Hydrodynamic Changes Due to Marine Hydrokinetic Power Production: Community Outreach and Education

    NASA Astrophysics Data System (ADS)

    James, S. C.; Jones, C.; Roberts, J.

    2013-12-01

    Power generation with marine hydrokinetic (MHK) turbines is receiving growing global interest. Because of reasonable investment, maintenance, reliability, and environmental friendliness, this technology can contribute to national (and global) energy markets and is worthy of research investment. Furthermore, in remote areas, small-scale MHK energy from river, tidal, or ocean currents can provide a local power supply. The power-generating capacity of MHK turbines will depend, among other factors, upon the turbine type and number and the local flow velocities. There is an urgent need for deployment of practical, accessible tools and techniques to help the industry optimize MHK array layouts while establishing best siting and design practices that minimize environmental impacts. Sandia National Laboratories (SNL) has modified the open-source flow and transport Environmental Fluid Dynamics Code (EFDC) to include the capability of simulating the effects of MHK power production. Upon removing energy (momentum) from the system, changes to the local and far-field flow dynamics can be estimated (e.g., flow speeds, tidal ranges, flushing rates, etc.). The effects of these changes on sediment dynamics and water quality can also be simulated using this model. Moreover, the model can be used to optimize MHK array layout to maximize power capture and minimize environmental impacts. Both a self-paced tutorial and in-depth training course have been developed as part of an outreach program to train academics, technology developers, and regulators in the use and application of this software. This work outlines SNL's outreach efforts using this modeling framework as applied to two specific sites where MHK turbines have been deployed.

  16. Massively parallel GPU-accelerated minimization of classical density functional theory

    NASA Astrophysics Data System (ADS)

    Stopper, Daniel; Roth, Roland

    2017-08-01

    In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavily parallelized minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.
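
    The functional minimized in the paper is a fundamental measure theory functional evaluated on GPUs; the sketch below is only a CPU illustration of the fixed-point (Picard) iteration that such minimizations typically use, applied to a much simpler one-dimensional toy functional with an ideal-gas term and a soft mean-field repulsion near a wall. All parameter values are made up.

      # Picard iteration for a toy 1D density profile: rho is updated from the
      # Euler-Lagrange condition of a simple grand-potential functional and mixed
      # with the previous iterate for stability.  Not the FMT functional.
      import numpy as np

      L, N = 10.0, 400
      x = np.linspace(0.0, L, N)
      dx = x[1] - x[0]
      rho_b, beta, eps = 0.5, 1.0, 1.0             # bulk density, 1/kT, repulsion strength (toy)

      Vext = np.where(x < 1.0, 50.0, 0.0)          # soft stand-in for a hard wall
      w = np.exp(-((x - L / 2) ** 2))              # smoothing kernel for the mean field
      w /= w.sum() * dx                            # normalize so the bulk limit is consistent

      rho = np.full(N, rho_b)
      for it in range(2000):
          smoothed = np.convolve(rho, w, mode="same") * dx
          c1 = -beta * eps * (smoothed - rho_b)    # excess term measured relative to the bulk
          rho_new = rho_b * np.exp(-beta * Vext + c1)
          if np.max(np.abs(rho_new - rho)) < 1e-8:
              break
          rho = 0.9 * rho + 0.1 * rho_new          # Picard mixing

      print(f"converged after {it} iterations; density just past the wall: {rho[x >= 1.0][0]:.3f}")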

  17. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    PubMed

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a large amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. At first, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploring the multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can obtain a higher compression ratio than the state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy saving in both the encoding process and wireless transmission.
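
    The reconstruction step described here solves a weighted ℓ1-regularized least-squares problem; a standard way to do that is the iterative shrinkage-thresholding algorithm (ISTA) with a weighted soft-threshold. The sketch below runs generic weighted ISTA on synthetic data; it does not reproduce the paper's binary sensing matrices or its wavelet-domain prior, which would only change how A and the weights w are chosen.

      # Weighted ISTA for min_x 0.5*||y - A x||^2 + sum_i w_i*|x_i|, synthetic data.
      import numpy as np

      def weighted_ista(A, y, w, n_iter=500):
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - A.T @ (A @ x - y) / L        # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)   # weighted soft threshold
          return x

      rng = np.random.default_rng(0)
      n, m, k = 256, 96, 10
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      y = A @ x_true
      w = np.full(n, 0.02)                         # uniform weights; a prior would make them vary
      x_hat = weighted_ista(A, y, w)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))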

  18. 10 CFR 20.1406 - Minimization of contamination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Minimization of contamination. 20.1406 Section 20.1406 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Radiological Criteria for License Termination § 20.1406 Minimization of contamination. (a) Applicants for licenses, other than early...

  19. 10 CFR 20.1406 - Minimization of contamination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Minimization of contamination. 20.1406 Section 20.1406 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Radiological Criteria for License Termination § 20.1406 Minimization of contamination. (a) Applicants for licenses, other than early...

  20. Mobile Edge Computing Empowers Internet of Things

    NASA Astrophysics Data System (ADS)

    Ansari, Nirwan; Sun, Xiang

    In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging the fiber-wireless access technology, the cloudlet concept, and the software defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices (belonging to the same user) are associated to a specific proxy Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the IoT data (generated by its IoT devices) in real-time. Moreover, we introduce the semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problem in the IoT system. In addition, we propose two dynamic proxy VM migration methods to minimize the end-to-end delay between proxy VMs and their IoT devices and to minimize the total on-grid energy consumption of the cloudlets, respectively. The performance of the proposed methods is validated via extensive simulations.

  1. DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Han; Zhang, Linfeng; Han, Jiequn; E, Weinan

    2018-07-01

    Recent developments in many-body potential energy representation via deep learning have brought new hopes to addressing the accuracy-versus-efficiency dilemma in molecular simulations. Here we describe DeePMD-kit, a package written in Python/C++ that has been designed to minimize the effort required to build deep learning based representations of potential energy and force fields and to perform molecular dynamics. Potential applications of DeePMD-kit span from finite molecules to extended systems and from metallic systems to chemically bonded systems. DeePMD-kit is interfaced with TensorFlow, one of the most popular deep learning frameworks, making the training process highly automatic and efficient. On the other end, DeePMD-kit is interfaced with high-performance classical molecular dynamics and quantum (path-integral) molecular dynamics packages, i.e., LAMMPS and i-PI, respectively. Thus, upon training, the potential energy and force field models can be used to perform efficient molecular simulations for different purposes. As an example of the many potential applications of the package, we use DeePMD-kit to learn the interatomic potential energy and forces of a water model using data obtained from density functional theory. We demonstrate that the resulting molecular dynamics model accurately reproduces the structural information contained in the original model.

  2. MRF energy minimization and beyond via dual decomposition.

    PubMed

    Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios

    2011-03-01

    This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.
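
    To make the projected subgradient scheme concrete, the toy sketch below applies dual decomposition to the smallest possible case: one discrete variable whose cost is split between two "slave" subproblems that must agree on the label. The costs and step-size rule are made up, and the slaves here are trivial; in the paper the slaves are full MRF subproblems solved by dedicated inference routines such as message passing or graph cuts.

      # Dual decomposition on a toy problem: min_x theta1(x) + theta2(x), x in {0..K-1}.
      # Each slave minimizes its own cost plus/minus a Lagrangian term; the subgradient
      # update on the multipliers pushes the slaves toward agreement.
      import numpy as np

      theta1 = np.array([3.0, 1.0, 4.0])
      theta2 = np.array([0.5, 2.0, 0.5])
      K = len(theta1)

      lam = np.zeros(K)                            # one multiplier per label
      for t in range(1, 200):
          x1 = int(np.argmin(theta1 + lam))        # slave 1
          x2 = int(np.argmin(theta2 - lam))        # slave 2
          if x1 == x2:
              break
          lam += (1.0 / t) * (np.eye(K)[x1] - np.eye(K)[x2])   # subgradient ascent on the dual

      dual = (theta1 + lam).min() + (theta2 - lam).min()
      print("label:", x1, "dual bound:", round(dual, 3), "primal cost:", theta1[x1] + theta2[x1])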

  3. Text-line extraction in handwritten Chinese documents based on an energy minimization framework.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2012-03-01

    Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.

  4. Origin of the spike-timing-dependent plasticity rule

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won; Choi, M. Y.

    2016-08-01

    A biological synapse changes its efficacy depending on the difference between pre- and post-synaptic spike timings. Formulating spike-timing-dependent interactions in terms of the path integral, we establish a neural-network model, which makes it possible to predict relevant quantities rigorously by means of standard methods in statistical mechanics and field theory. In particular, the biological synaptic plasticity rule is shown to emerge as the optimal form for minimizing the free energy. It is further revealed that maximization of the entropy of neural activities gives rise to the competitive behavior of biological learning. This demonstrates that statistical mechanics helps to understand rigorously key characteristic behaviors of a neural network, thus providing the possibility of physics serving as a useful and relevant framework for probing life.

  5. Representations in Dynamical Embodied Agents: Re-Analyzing a Minimally Cognitive Model Agent

    ERIC Educational Resources Information Center

    Mirolli, Marco

    2012-01-01

    Understanding the role of "representations" in cognitive science is a fundamental problem facing the emerging framework of embodied, situated, dynamical cognition. To make progress, I follow the approach proposed by an influential representational skeptic, Randall Beer: building artificial agents capable of minimally cognitive behaviors and…

  6. A framework for energy use indicators and their reporting in life cycle assessment.

    PubMed

    Arvidsson, Rickard; Svanström, Magdalena

    2016-07-01

    Energy use is a common impact category in life cycle assessment (LCA). Many different energy use indicators are used in LCA studies, accounting for energy use in different ways. Often, however, the choice of which energy use indicator to apply is poorly described and motivated. To contribute to a more purposeful selection of energy use indicators and to ensure consistent and transparent reporting of energy use in LCA, a general framework for energy use indicator construction and reporting in LCA studies is presented in this article. The framework differentiates between 1) renewable and nonrenewable energies, 2) primary and secondary energies, and 3) energy intended for energy purposes versus energy intended for material purposes. This framework is described both graphically and mathematically. Furthermore, the framework is illustrated through application to a number of energy use indicators that are frequently used in LCA studies: cumulative energy demand (CED), nonrenewable cumulative energy demand (NRCED), fossil energy use (FEU), primary fossil energy use (PFEU), and secondary energy use (SEU). To illustrate how the application of different energy use indicators may lead to different results, the cradle-to-gate energy use of the bionanomaterial cellulose nanofibrils (CNF) is assessed using 5 different indicators, showing a factor of 3 difference between the highest and lowest results. The relevance of different energy use indicators to different actors and contexts is discussed, and further developments of the framework are suggested. Integr Environ Assess Manag 2016;12:429-436. © 2015 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of SETAC.
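
    As a rough illustration of how the framework's three distinctions combine into different indicators, the sketch below tallies a hypothetical cradle-to-gate inventory of energy flows. The flow values are invented and the indicator definitions are simplified stand-ins for the article's formal definitions (for example, "nonrenewable" is used here as a crude proxy for "fossil").

      # Energy-use indicators from a tagged inventory of energy flows (MJ per
      # functional unit).  Each flow is tagged renewable/nonrenewable,
      # primary/secondary, and by purpose (energy vs. material feedstock).
      flows = [
          # (name, MJ, renewable, primary, purpose)
          ("hydropower electricity", 12.0, True,  False, "energy"),
          ("hard coal",              30.0, False, True,  "energy"),
          ("crude oil (fuel)",       18.0, False, True,  "energy"),
          ("crude oil (feedstock)",   9.0, False, True,  "material"),
          ("wood (feedstock)",        6.0, True,  True,  "material"),
      ]

      def total(flows, **match):
          return sum(mj for _, mj, ren, pri, purpose in flows
                     if all({"renewable": ren, "primary": pri, "purpose": purpose}[k] == v
                            for k, v in match.items()))

      CED = total(flows)                              # cumulative energy demand
      NRCED = total(flows, renewable=False)           # nonrenewable CED
      FEU_like = total(flows, renewable=False, purpose="energy")  # crude FEU proxy
      SEU = total(flows, primary=False)               # secondary energy use
      print(CED, NRCED, FEU_like, SEU)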

  7. Evaluating and optimizing the operation of the hydropower system in the Upper Yellow River: A general LINGO-based integrated framework.

    PubMed

    Si, Yuan; Li, Xiang; Yin, Dongqin; Liu, Ronghua; Wei, Jiahua; Huang, Yuefei; Li, Tiejian; Liu, Jiahong; Gu, Shenglong; Wang, Guangqian

    2018-01-01

    The hydropower system in the Upper Yellow River (UYR), one of the largest hydropower bases in China, plays a vital role in the energy structure of the Qinghai Power Grid. Due to management difficulties, there is still considerable room for improvement in the joint operation of this system. This paper presents a general LINGO-based integrated framework to study the operation of the UYR hydropower system. The framework is easy to use for operators with little experience in mathematical modeling, takes full advantage of LINGO's capabilities (such as its solving capacity and multi-threading ability), and packs its three layers (the user layer, the coordination layer, and the base layer) together into an integrated solution that is robust and efficient and represents an effective tool for data/scenario management and analysis. The framework is general and can be easily transferred to other hydropower systems with minimal effort, and it can be extended as the base layer is enriched. The multi-objective model that represents the trade-off between power quantity (i.e., maximum energy production) and power reliability (i.e., firm output) of hydropower operation has been formulated. With equivalent transformations, the optimization problem can be solved by the nonlinear programming (NLP) solvers embedded in the LINGO software, such as the General Solver, the Multi-start Solver, and the Global Solver. Both simulation and optimization are performed to verify the model's accuracy and to evaluate the operation of the UYR hydropower system. A total of 13 hydropower plants currently in operation are involved, including two pivotal storage reservoirs on the Yellow River, which are the Longyangxia Reservoir and the Liujiaxia Reservoir. Historical hydrological data from multiple years (2000-2010) are provided as input to the model for analysis. The results are as follows. 1) Assuming that the reservoirs are all in operation (in fact, some reservoirs were not operational or did not collect all of the relevant data during the study period), the energy production is estimated as 267.7, 357.5, and 358.3×10⁸ kWh for the Qinghai Power Grid during dry, normal, and wet years, respectively. 2) Assuming that the hydropower system is operated jointly, the firm output can reach 3110 MW (reliability of 100%) and 3510 MW (reliability of 90%). Moreover, a decrease in energy production from the Longyangxia Reservoir can bring about a very large increase in firm output from the hydropower system. 3) The maximum energy production can reach 297.7, 363.9, and 411.4×10⁸ kWh during dry, normal, and wet years, respectively. The trade-off curve between maximum energy production and firm output is also provided for reference.

  8. Evaluating and optimizing the operation of the hydropower system in the Upper Yellow River: A general LINGO-based integrated framework

    PubMed Central

    Si, Yuan; Liu, Ronghua; Wei, Jiahua; Huang, Yuefei; Li, Tiejian; Liu, Jiahong; Gu, Shenglong; Wang, Guangqian

    2018-01-01

    The hydropower system in the Upper Yellow River (UYR), one of the largest hydropower bases in China, plays a vital role in the energy structure of the Qinghai Power Grid. Due to management difficulties, there is still considerable room for improvement in the joint operation of this system. This paper presents a general LINGO-based integrated framework to study the operation of the UYR hydropower system. The framework is easy to use for operators with little experience in mathematical modeling, takes full advantage of LINGO’s capabilities (such as its solving capacity and multi-threading ability), and packs its three layers (the user layer, the coordination layer, and the base layer) together into an integrated solution that is robust and efficient and represents an effective tool for data/scenario management and analysis. The framework is general and can be easily transferred to other hydropower systems with minimal effort, and it can be extended as the base layer is enriched. The multi-objective model that represents the trade-off between power quantity (i.e., maximum energy production) and power reliability (i.e., firm output) of hydropower operation has been formulated. With equivalent transformations, the optimization problem can be solved by the nonlinear programming (NLP) solvers embedded in the LINGO software, such as the General Solver, the Multi-start Solver, and the Global Solver. Both simulation and optimization are performed to verify the model’s accuracy and to evaluate the operation of the UYR hydropower system. A total of 13 hydropower plants currently in operation are involved, including two pivotal storage reservoirs on the Yellow River, which are the Longyangxia Reservoir and the Liujiaxia Reservoir. Historical hydrological data from multiple years (2000–2010) are provided as input to the model for analysis. The results are as follows. 1) Assuming that the reservoirs are all in operation (in fact, some reservoirs were not operational or did not collect all of the relevant data during the study period), the energy production is estimated as 267.7, 357.5, and 358.3×10⁸ kWh for the Qinghai Power Grid during dry, normal, and wet years, respectively. 2) Assuming that the hydropower system is operated jointly, the firm output can reach 3110 MW (reliability of 100%) and 3510 MW (reliability of 90%). Moreover, a decrease in energy production from the Longyangxia Reservoir can bring about a very large increase in firm output from the hydropower system. 3) The maximum energy production can reach 297.7, 363.9, and 411.4×10⁸ kWh during dry, normal, and wet years, respectively. The trade-off curve between maximum energy production and firm output is also provided for reference. PMID:29370206

  9. Energy minimization for self-organized structure formation and actuation

    NASA Astrophysics Data System (ADS)

    Kofod, Guggi; Wirges, Werner; Paajanen, Mika; Bauer, Siegfried

    2007-02-01

    An approach for creating complex structures with embedded actuation in planar manufacturing steps is presented. Self-organization and energy minimization are central to this approach, illustrated with a model based on minimization of the hyperelastic free energy strain function of a stretched elastomer and the bending elastic energy of a plastic frame. A tulip-shaped gripper structure illustrates the technological potential of the approach. Advantages are simplicity of manufacture, complexity of final structures, and the ease with which any electroactive material can be exploited as means of actuation.

  10. Graduate Student-Run Course Framework for Comprehensive Professional Development

    ERIC Educational Resources Information Center

    Needelman, Brian A.; Ruppert, David E.

    2006-01-01

    Comprehensive professional development is rarely offered to graduate students, yet would assist students to obtain employment and prosper in their careers. Our objective was to design a course framework to provide professional development training to graduate students that is comprehensive, minimizes faculty workload, and provides enculturation…

  11. When the lowest energy does not induce native structures: parallel minimization of multi-energy values by hybridizing searching intelligences.

    PubMed

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy has luckily been found by the searching procedure, the correct protein structure is not guaranteed to be obtained. A general parallel metaheuristic approach is presented to tackle the above two problems. Multi-energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of the heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. 16 classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both searching intelligences and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different, not-so-accurate objective functions that are usually derived from domain expertise.

  12. When the Lowest Energy Does Not Induce Native Structures: Parallel Minimization of Multi-Energy Values by Hybridizing Searching Intelligences

    PubMed Central

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy has luckily been found by the searching procedure, the correct protein structure is not guaranteed to be obtained. Results A general parallel metaheuristic approach is presented to tackle the above two problems. Multi-energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. 16 classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both searching intelligences and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different, not-so-accurate objective functions that are usually derived from domain expertise. PMID:23028708

  13. The strength and dislocation microstructure evolution in superalloy microcrystals

    NASA Astrophysics Data System (ADS)

    Hussein, Ahmed M.; Rao, Satish I.; Uchic, Michael D.; Parthasarathay, Triplicane A.; El-Awady, Jaafar A.

    2017-02-01

    In this work, the evolution of the dislocation microstructure in single crystal two-phase superalloy microcrystals under monotonic loading has been studied using the three-dimensional discrete dislocation dynamics (DDD) method. The DDD framework has been extended to properly handle the collective behavior of dislocations and their interactions with large collections of arbitrary shaped precipitates. Few constraints are imposed on the initial distribution of the dislocations or the precipitates, and the extended DDD framework can support experimentally-obtained precipitate geometries. Full tracking of the creation and destruction of anti-phase boundaries (APB) is accounted for. The effects of the precipitate volume fraction, APB energy, precipitate size, and crystal size on the deformation of superalloy microcrystals have been quantified. Correlations between the precipitate microstructure and the dominant deformation features, such as dislocation looping versus precipitate shearing, are also discussed. It is shown that the mechanical strength is independent of the crystal size, increases linearly with increasing the volume fraction, follows a near square-root relationship with the APB energy and an inverse square-root relationship with the precipitate size. Finally, simulations having initial dislocation pair sources show a flow strength that is about one half of that predicted from simulations starting with single dislocation sources. The method developed can be used, with minimal extensions, to simulate dislocation microstructure evolution in general multiphase materials.

  14. Detecting higher-order interactions among the spiking events in a group of neurons.

    PubMed

    Martignon, L; Von Hasseln, H; Grün, S; Aertsen, A; Palm, G

    1995-06-01

    We propose a formal framework for the description of interactions among groups of neurons. This framework is not restricted to the common case of pair interactions, but also incorporates higher-order interactions, which cannot be reduced to lower-order ones. We derive quantitative measures to detect the presence of such interactions in experimental data, by statistical analysis of the frequency distribution of higher-order correlations in multiple neuron spike train data. Our first step is to represent a frequency distribution as a Markov field on the minimal graph it induces. We then show the invariance of this graph with regard to changes of state. Clearly, only linear Markov fields can be adequately represented by graphs. Higher-order interdependencies, which are reflected by the energy expansion of the distribution, require more complex graphical schemes, like constellations or assembly diagrams, which we introduce and discuss. The coefficients of the energy expansion not only point to the interactions among neurons but are also a measure of their strength. We investigate the statistical meaning of detected interactions in an information theoretic sense and propose minimum relative entropy approximations as null hypotheses for significance tests. We demonstrate the various steps of our method in the situation of an empirical frequency distribution on six neurons, extracted from data on simultaneous multineuron recordings from the frontal cortex of a behaving monkey and close with a brief outlook on future work.
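
    The "energy expansion" referred to here is commonly written as a log-linear model of the binary spike pattern; in generic notation (not the paper's), with x_i in {0,1} indicating whether neuron i fires in a bin,

        \log p(x_1,\dots,x_N) \;=\; \theta_0 + \sum_i \theta_i x_i + \sum_{i<j} \theta_{ij} x_i x_j + \sum_{i<j<k} \theta_{ijk} x_i x_j x_k + \cdots ,

    where a nonzero third- or higher-order coefficient signals an interaction that cannot be reduced to pairwise terms, and its magnitude measures the strength of that interaction.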

  15. Solubility prediction of naphthalene in carbon dioxide from crystal microstructure

    NASA Astrophysics Data System (ADS)

    Sang, Jiarong; Jin, Junsu; Mi, Jianguo

    2018-03-01

    Crystals dissolved in solvents are ubiquitous in both natural and artificial systems. Due to the complicated structures and asymmetric interactions between the crystal and solvent, it is difficult to interpret the dissolution mechanism and predict solubility using traditional theories and models. Here we use the classical density functional theory (DFT) to describe the crystal dissolution behavior. As an example, naphthalene dissolved in carbon dioxide (CO2) is considered within the DFT framework. The unit cell dimensions and microstructure of crystalline naphthalene are determined by minimizing the free energy of the crystal. According to the microstructure, the solubilities of naphthalene in CO2 are predicted based on the equality of naphthalene's chemical potential in the crystal and solution phases, and the interfacial structures and free energies between different crystal planes and the solution are determined to investigate the dissolution mechanism at the molecular level. The theoretical predictions are in general agreement with the available experimental data, implying that the present model is quantitatively reliable in describing crystal dissolution.
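
    In generic notation (not taken from the paper), the solubility condition described is the phase-equilibrium equality

        \mu_{\mathrm{naph}}^{\mathrm{crystal}}(T,P) \;=\; \mu_{\mathrm{naph}}^{\mathrm{solution}}(T,P,x_{\mathrm{naph}}) ,

    which is solved for the equilibrium mole fraction x_naph of naphthalene in CO2 at each temperature and pressure.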

  16. Constraining the top-Higgs sector of the standard model effective field theory

    NASA Astrophysics Data System (ADS)

    Cirigliano, V.; Dekens, W.; de Vries, J.; Mereghetti, E.

    2016-08-01

    Working in the framework of the Standard Model effective field theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization-group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model effective field theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings.

  17. Optimum electric utility spot price determinations for small power producing facilities operating under PURPA provisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoudjehbaklou, H.; Puttgen, H.B.

    This paper outlines an optimum spot price determination procedure in the general context of the Public Utility Regulatory Policies Act, PURPA, provisions. PURPA stipulates that local utilities must offer to purchase all available excess electric energy from Qualifying Facilities, QF, at fair market prices. As a direct consequence of these PURPA regulations, a growing number of owners are installing power producing facilities and optimizing their operational schedules to minimize their utility-related costs or, in some cases, actually maximize their revenues from energy sales to the local utility. In turn, the utility strives to use spot prices which maximize its revenues from any given Small Power Producing Facility, SPPF, schedule while respecting the general regulatory and contractual framework. The proposed optimum spot price determination procedure fully models the SPPF operation, enforces the contractual and regulatory restrictions, and ensures the uniqueness of the optimum SPPF schedule.

  18. Optimum electric utility spot price determinations for small power producing facilities operating under PURPA provisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoudjehbaklou, H.; Puttgen, H.B.

    The present paper outlines an optimum spot price determination procedure in the general context of the Public Utility Regulatory Policies Act, PURPA, provisions. PURPA stipulates that local utilities must offer to purchase all available excess electric energy from Qualifying Facilities, QF, at fair market prices. As a direct consequence of these PURPA regulations, a growing number of owners are installing power producing facilities and optimizing their operational schedules to minimize their utility-related costs or, in some cases, actually maximize their revenues from energy sales to the local utility. In turn, the utility will strive to use spot prices which maximize its revenues from any given Small Power Producing Facility, SPPF, schedule while respecting the general regulatory and contractual framework. The proposed optimum spot price determination procedure fully models the SPPF operation, enforces the contractual and regulatory restrictions, and ensures the uniqueness of the optimum SPPF schedule.

  19. Sleep Deprivation Attack Detection in Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Bhattasali, Tapalina; Chaki, Rituparna; Sanyal, Sugata

    2012-02-01

    Deploying a sensor network in a hostile environment makes it especially vulnerable to battery drainage attacks, because it is impossible to recharge or replace the batteries of sensor nodes. Among the different types of security threats, low-power sensor nodes are most affected by attacks that cause random drainage of their energy, leading to the death of nodes. The most dangerous attack in this category is sleep deprivation, where the intruder's goal is to maximize the power consumption of sensor nodes so that their lifetime is minimized. Most existing work on sleep deprivation attack detection involves considerable overhead, leading to poor throughput. What is needed is a model that detects intrusions accurately and in an energy-efficient manner. This paper proposes a hierarchical framework based on a distributed collaborative mechanism for efficiently detecting sleep deprivation torture in wireless sensor networks. The proposed model uses a two-step anomaly detection technique to reduce the probability of false intrusion detection.

  20. Free Energy Minimization Calculation of Complex Chemical Equilibria. Reduction of Silicon Dioxide with Carbon at High Temperature.

    ERIC Educational Resources Information Center

    Wai, C. M.; Hutchinson, S. G.

    1989-01-01

    Discusses the calculation of free energy in reactions between silicon dioxide and carbon. Describes several computer programs for calculating the free energy minimization and their uses in chemistry classrooms. Lists 16 references. (YP)
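
    The underlying technique, minimizing the total Gibbs free energy over species amounts subject to element balance, can be sketched generically. The species, standard chemical potentials, and element balances below are invented for illustration; this is not one of the programs cited in the record, nor the silicon dioxide/carbon system itself.

        # Generic Gibbs free-energy minimization sketch (illustrative data only).
        import numpy as np
        from scipy.optimize import minimize

        R, T = 8.314, 2000.0                        # J/(mol K), K
        species = ["A", "B", "AB"]                  # hypothetical gas-phase species
        g0 = np.array([-50e3, -80e3, -150e3])       # assumed standard chemical potentials, J/mol

        # Element-balance matrix: rows = elements, columns = species
        A = np.array([[1, 0, 1],                    # moles of element "a" per mole of species
                      [0, 1, 1]])                   # moles of element "b" per mole of species
        b = np.array([1.0, 1.0])                    # total moles of each element

        def gibbs(n):
            n = np.clip(n, 1e-12, None)             # keep the logarithms finite
            return float(np.sum(n * (g0 + R * T * np.log(n / n.sum()))))

        res = minimize(gibbs,
                       x0=np.full(len(species), 0.5),
                       bounds=[(1e-12, None)] * len(species),
                       constraints={"type": "eq", "fun": lambda n: A @ n - b},
                       method="SLSQP")
        for name, amount in zip(species, res.x):
            print(f"{name}: {amount:.4f} mol")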

  1. In-situ monitoring and assessment of post barge-bridge collision damage for minimizing traffic delay and detour : final report.

    DOT National Transportation Integrated Search

    2016-07-31

    This report presents a novel framework for promptly assessing the probability of barge-bridge : collision damage of piers based on probabilistic-based classification through machine learning. The main : idea of the presented framework is to divide th...

  2. Millicharge or decay: a critical take on Minimal Dark Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nobile, Eugenio Del; Dipartimento di Fisica e Astronomia “G. Galilei”, Università di Padova and INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova; Nardecchia, Marco

    2016-04-26

    Minimal Dark Matter (MDM) is a theoretical framework highly appreciated for its minimality and yet its predictivity. Of the two only viable candidates singled out in the original analysis, the scalar eptaplet has been found to decay too quickly to be around today, while the fermionic quintuplet is now being probed by indirect Dark Matter (DM) searches. It is therefore timely to critically review the MDM paradigm, possibly pointing out generalizations of this framework. We propose and explore two distinct directions. One is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates which are part of SU(2)_L multiplets with integer isospin. Another possibility is to lower the cutoff of the model, which was originally fixed at the Planck scale, to allow for DM decays. We find new viable MDM candidates and study their phenomenology in detail.

  3. Millicharge or decay: a critical take on Minimal Dark Matter

    DOE PAGES

    Nobile, Eugenio Del; Nardecchia, Marco; Panci, Paolo

    2016-04-26

    Minimal Dark Matter (MDM) is a theoretical framework highly appreciated for its minimality and yet its predictivity. Of the two only viable candidates singled out in the original analysis, the scalar eptaplet has been found to decay too quickly to be around today, while the fermionic quintuplet is now being probed by indirect Dark Matter (DM) searches. It is therefore timely to critically review the MDM paradigm, possibly pointing out generalizations of this framework. We propose and explore two distinct directions. One is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates which are part of SU(2)_L multiplets with integer isospin. Another possibility is to lower the cutoff of the model, which was originally fixed at the Planck scale, to allow for DM decays. We find new viable MDM candidates and study their phenomenology in detail.

  4. Weakly dynamic dark energy via metric-scalar couplings with torsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sur, Sourav; Bhatia, Arshdeep Singh, E-mail: sourav.sur@gmail.com, E-mail: arshdeepsb@gmail.com

    We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features; for example, it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore come in support of our analysis.

  5. Weakly dynamic dark energy via metric-scalar couplings with torsion

    NASA Astrophysics Data System (ADS)

    Sur, Sourav; Singh Bhatia, Arshdeep

    2017-07-01

    We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features; for example, it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore come in support of our analysis.

  6. Integrated Framework for Patient Safety and Energy Efficiency in Healthcare Facilities Retrofit Projects.

    PubMed

    Mohammadpour, Atefeh; Anumba, Chimay J; Messner, John I

    2016-07-01

    There is a growing focus on enhancing energy efficiency in healthcare facilities, many of which are decades old. Since replacement of all aging healthcare facilities is not economically feasible, the retrofitting of these facilities is an appropriate path, which also provides an opportunity to incorporate energy efficiency measures. In undertaking energy efficiency retrofits, it is vital that the safety of the patients in these facilities is maintained or enhanced. However, the interactions between patient safety and energy efficiency have not been adequately addressed to realize the full benefits of retrofitting healthcare facilities. To address this, an innovative integrated framework, the Patient Safety and Energy Efficiency (PATSiE) framework, was developed to simultaneously enhance patient safety and energy efficiency. The framework includes a step-by-step procedure for enhancing both patient safety and energy efficiency. It provides a structured overview of the different stages involved in retrofitting healthcare facilities and improves understanding of the intricacies associated with integrating patient safety improvements with energy efficiency enhancements. Evaluation of the PATSiE framework was conducted through focus groups with the key stakeholders in two case study healthcare facilities. The feedback from these stakeholders was generally positive, as they considered the framework useful and applicable to retrofit projects in the healthcare industry. © The Author(s) 2016.

  7. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.
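
    In generic notation (introduced here for orientation, not taken from the paper), the parameter update described amounts to gradient descent on the energy reached by the evolved probe:

        \beta^{(k+1)} \;=\; \beta^{(k)} - \eta \, \nabla_{\beta} \langle V \rangle_{\beta} ,

    where η is a step size and ⟨V⟩_β is the potential energy averaged over the time-dependent density of states generated by the controls m(t;β), μ(r,t;β), and E(t;β).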

  8. A Framework for Optimizing the Placement of Tidal Turbines

    NASA Astrophysics Data System (ADS)

    Nelson, K. S.; Roberts, J.; Jones, C.; James, S. C.

    2013-12-01

    Power generation with marine hydrokinetic (MHK) current energy converters (CECs), often in the form of underwater turbines, is receiving growing global interest. Because of its reasonable investment, maintenance, reliability, and environmental friendliness, this technology can contribute to national (and global) energy markets and is worthy of research investment. Furthermore, in remote areas, small-scale MHK energy from river, tidal, or ocean currents can provide a local power supply. However, little is known about the potential environmental effects of CEC operation in coastal embayments, estuaries, or rivers, or of the cumulative impacts of these devices on aquatic ecosystems over years or decades of operation. There is an urgent need for practical, accessible tools and peer-reviewed publications to help industry and regulators evaluate environmental impacts and mitigation measures, while establishing best siting and design practices. Sandia National Laboratories (SNL) and Sea Engineering, Inc. (SEI) have investigated the potential environmental impacts and performance of individual tidal energy converters (TECs) in Cobscook Bay, ME; TECs are a subset of CECs that are specifically deployed in tidal channels. Cobscook Bay is the first deployment location of Ocean Renewable Power Company's (ORPC) TidGen™ unit. One unit is currently in place with four more to follow. Together, SNL and SEI built a coarse-grid, regional-scale model that included Cobscook Bay and all other landward embayments using the modeling platform SNL-EFDC. Within SNL-EFDC, tidal turbines are represented using a unique set of momentum extraction, turbulence generation, and turbulence dissipation equations at TEC locations. The global model was then coupled to a local-scale model centered on the proposed TEC deployment locations. An optimization framework was developed that used the refined model to determine optimal device placement locations that maximize array performance. Within the framework, environmental effects are considered to minimize the possibility of altering flows to an extent that would affect fish-swimming behavior and sediment-transport trends. Simulation results were compared between model runs with the optimized array configuration and the originally proposed deployment locations; the optimized array showed a 17% increase in power generation. The developed framework can provide regulators and developers with a tool for assessing environmental impacts and device-performance parameters for the deployment of MHK devices. The more thoroughly this promising technology is understood, the more likely it is to become a viable source of alternative energy.

  9. A Bayesian Framework for Coupled Estimation of Key Unknown Parameters of Land Water and Energy Balance Equations

    NASA Astrophysics Data System (ADS)

    Farhadi, L.; Abdolghafoorian, A.

    2015-12-01

    The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of the fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology, and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function aggregates errors of the states (i.e., moisture and temperature) with respect to observations and of the parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function approximates the posterior uncertainty of the parameter estimates. The uncertainty of the estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the First Order Second Moment (FOSM) method. Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at the point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions at different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
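
    In generic variational data assimilation notation (not the abstract's own symbols), the cost function described has the form

        J(\theta) \;=\; \tfrac{1}{2} \sum_{k} \bigl\| y_k - h\bigl(x_k(\theta)\bigr) \bigr\|^{2}_{R^{-1}} \;+\; \tfrac{1}{2} \bigl\| \theta - \theta_b \bigr\|^{2}_{B^{-1}}, \qquad \operatorname{Cov}(\hat\theta) \;\approx\; \bigl[ \nabla^{2}_{\theta} J(\hat\theta) \bigr]^{-1},

    where x_k(θ) are the modeled moisture and temperature states, y_k the observations, θ_b the prior parameter values, and R and B the observation and prior error covariances; the adjoint model supplies the gradient of J efficiently.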

  10. Minimizing the influence of unconscious bias in evaluations: a practical guide.

    PubMed

    Goldyne, Adam J

    2007-01-01

    The forensic psychiatrist's efforts to strive for objectivity may be impaired by unrecognized unconscious biases. The author presents a framework for understanding such biases. He then offers a practical approach for individual forensic psychiatrists who want to identify and minimize the influence of previously unrecognized biases on their evaluations.

  11. Amber Plug-In for Protein Shop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliva, Ricardo

    2004-05-10

    The Amber Plug-in for ProteinShop has two main components: an AmberEngine library to compute the protein energy models, and a module to solve the energy minimization problem using an optimization algorithm in the OPT++ library. Together, these components allow the visualization of the protein folding process in ProteinShop. AmberEngine is an object-oriented library to compute molecular energies based on the Amber model. The main class is called ProteinEnergy. Its main interface methods are (1) "init", to initialize internal variables needed to compute the energy, and (2) "eval", to evaluate the total energy given a vector of coordinates. Additional methods allow the user to evaluate the individual components of the energy model (bond, angle, dihedral, non-bonded 1-4, and non-bonded energies) and to obtain the energy of each individual atom. The AmberEngine library source code includes examples and test routines that illustrate the use of the library in stand-alone programs. The energy minimization module uses the AmberEngine library and the nonlinear optimization library OPT++. OPT++ is open source software available under the GNU Lesser General Public License. The minimization module currently makes use of the LBFGS optimization algorithm in OPT++ to perform the energy minimization. Future releases may give the user a choice of other algorithms available in OPT++.
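
    The general workflow described, an energy evaluator driven by an LBFGS optimizer, can be sketched generically. The code below uses SciPy's L-BFGS-B on a toy pairwise energy; it does not reproduce the AmberEngine or OPT++ interfaces.

        # Generic L-BFGS-driven energy minimization sketch (toy 12-6 pair energy;
        # not the AmberEngine/OPT++ APIs described in the record).
        import numpy as np
        from scipy.optimize import minimize

        def total_energy(coords_flat, n_atoms):
            """Sum of pairwise 12-6 interactions over all atom pairs."""
            x = coords_flat.reshape(n_atoms, 3)
            e = 0.0
            for i in range(n_atoms):
                for j in range(i + 1, n_atoms):
                    r = np.linalg.norm(x[i] - x[j])
                    e += 4.0 * (r ** -12 - r ** -6)
            return e

        n_atoms = 5
        x0 = np.random.default_rng(0).uniform(-1.5, 1.5, size=3 * n_atoms)
        res = minimize(total_energy, x0, args=(n_atoms,), method="L-BFGS-B")
        print("minimized energy:", res.fun)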

  12. Real options and asset valuation in competitive energy markets

    NASA Astrophysics Data System (ADS)

    Oduntan, Adekunle Richard

    The focus of this work is to develop a robust valuation framework for physical power assets operating in competitive markets such as peaking or mid-merit thermal power plants and baseload power plants. The goal is to develop a modeling framework that can be adapted to different energy assets with different types of operating flexibilities and technical constraints and which can be employed for various purposes such as capital budgeting, business planning, risk management and strategic bidding planning among others. The valuation framework must also be able to capture the reality of power market rules and opportunities, as well as technical constraints of different assets. The modeling framework developed conceptualizes operating flexibilities of power assets as "switching options' whereby the asset operator decides at every decision point whether to switch from one operating mode to another mutually exclusive mode, within the limits of the equipment constraints of the asset. As a current decision to switch operating modes may affect future operating flexibilities of the asset and hence cash flows, a dynamic optimization framework is employed. The developed framework accounts for the uncertain nature of key value drivers by representing them with appropriate stochastic processes. Specifically, the framework developed conceptualizes the operation of a power asset as a multi-stage decision making problem where the operator has to make a decision at every stage to alter operating mode given currently available information about key value drivers. The problem is then solved dynamically by decomposing it into a series of two-stage sub-problems according to Bellman's optimality principle. The solution algorithm employed is the Least Squares Monte Carlo (LSM) method. The developed valuation framework was adapted for a gas-fired thermal power plant, a peaking hydroelectric power plant and a baseload power plant. This work built on previously published real options valuation methodologies for gas-fired thermal power plants by factoring in uncertainty from gas supply/consumption imbalance which is usually faced by gas-fired power generators. This source of uncertainty arises because of mismatch between natural gas and electricity wholesale markets. Natural gas markets in North America operate on a day-ahead basis while power plants are dispatched in real time. Inability of a power generator to match its gas supply and consumption in real time, leading to unauthorized gas over-run or under-run, attracts penalty charges from the gas supplier to the extent that the generator can not manage the imbalance through other means. By considering an illustrative power plant operating in Ontario, we show effects of gas-imbalance on dispatch strategies on a daily cycling operation basis and the resulting impact on net revenue. Similarly, we employ the developed valuation framework to value a peaking hydroelectric power plant. This application also builds on previous real options valuation work for peaking hydroelectric power plants by considering their operations in a joint energy and ancillary services market. Specifically, the valuation model is developed to capture the value of a peaking power plant whose owner has the flexibility to participate in a joint operating reserve market and an energy market, which is currently the case in the Ontario wholesale power market. 
The model factors in water inflow uncertainty into the reservoir forebay of a hydroelectric facility and also considers uncertain energy and operating reserve prices. The switching options considered include (i) a joint energy and operating reserve bid (ii) an energy only bid and (iii) a do nothing (idle) strategy. Being an energy limited power plant, by doing nothing at a decision interval, the power asset operator is able to timeshift scarce water for use at a future period when market situations are expected to be better. Finally, the developed valuation framework was employed to optimize life-cycle management decisions of a baseload power plant, such as a nuclear power plant. Given uncertainty of long-term value drivers, including power prices, equipment performance and the relationship between current life cycle spending and future equipment degradation, optimization is carried out with the objective of minimizing overall life-cycle related costs. These life-cycle costs include (i) lost revenue during planned and unplanned outages, (ii) potential costs of future equipment degradation due to inadequate preventative maintenance, and (iii) the direct costs of implementing the life-cycle projects. The switching options in this context include the option to shutdown the power plant in order to execute a given preventative maintenance and inspection project and the option to keep the option "alive" by choosing to delay a planned life-cycle activity.
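
    The regression-based backward induction at the core of the Least Squares Monte Carlo method can be sketched on a standard American put (illustrative parameters; the authors' power-asset switching model with multiple operating modes and technical constraints is considerably richer).

        # Minimal Longstaff-Schwartz (LSM) sketch on a plain American put.
        import numpy as np

        rng = np.random.default_rng(1)
        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
        n_steps, n_paths = 50, 20000
        dt = T / n_steps
        disc = np.exp(-r * dt)

        # Simulate geometric Brownian motion price paths
        z = rng.standard_normal((n_paths, n_steps))
        S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1))
        S = np.hstack([np.full((n_paths, 1), S0), S])

        payoff = lambda s: np.maximum(K - s, 0.0)
        cash = payoff(S[:, -1])                    # value along each path if held to maturity

        for t in range(n_steps - 1, 0, -1):
            cash *= disc                           # discount continuation values one step back
            itm = payoff(S[:, t]) > 0              # regress only on in-the-money paths
            if itm.any():
                X = S[itm, t]
                basis = np.vander(X, 3)            # quadratic basis for the continuation value
                coeff, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
                exercise = payoff(X) > basis @ coeff
                idx = np.where(itm)[0][exercise]
                cash[idx] = payoff(S[idx, t])      # switch to immediate exercise where optimal

        print(f"LSM American put estimate: {disc * cash.mean():.3f}")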

  13. A Framework for the Development of Automatic DFA Method to Minimize the Number of Components and Assembly Reorientations

    NASA Astrophysics Data System (ADS)

    Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa

    2018-03-01

    Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method for evaluating a product design in order to make it simpler, easier, and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to help product designers extract data, evaluate the assembly process, and provide recommendations for design improvement. Ideally, these three tasks are performed without interactive processes or user intervention, so that product design evaluation can be done automatically. The input to the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by minimizing the number of components, generating assembly sequence alternatives, selecting the best assembly sequence based on the minimum number of assembly reorientations, and providing suggestions for design improvement.

  14. Leveraging advances in biology to design biomaterials

    NASA Astrophysics Data System (ADS)

    Darnell, Max; Mooney, David J.

    2017-12-01

    Biomaterials have dramatically increased in functionality and complexity, allowing unprecedented control over the cells that interact with them. From these engineering advances arises the prospect of improved biomaterial-based therapies, yet practical constraints favour simplicity. Tools from the biology community are enabling high-resolution and high-throughput bioassays that, if incorporated into a biomaterial design framework, could help achieve unprecedented functionality while minimizing the complexity of designs by identifying the most important material parameters and biological outputs. However, to avoid data explosions and to effectively match the information content of an assay with the goal of the experiment, material screens and bioassays must be arranged in specific ways. By borrowing methods to design experiments and workflows from the bioprocess engineering community, we outline a framework for the incorporation of next-generation bioassays into biomaterials design to effectively optimize function while minimizing complexity. This framework can inspire biomaterials designs that maximize functionality and translatability.

  15. Design and architecture of the Mars relay network planning and analysis framework

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Lee, C. H.

    2002-01-01

    In this paper we describe the design and architecture of the Mars Network planning and analysis framework, which supports the generation and validation of efficient planning and scheduling strategies. The goals are to minimize transmission time, minimize delay time, and/or maximize network throughput. The proposed framework requires (1) a client-server architecture to support interactive, batch, web, and distributed analysis and planning applications for the relay network analysis scheme; (2) a high-fidelity modeling and simulation environment that expresses spacecraft-to-spacecraft and spacecraft-to-Earth-station link capabilities as time-varying resources, and spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints; and (3) an optimization methodology that casts the resource and constraint models into standard linear and nonlinear constrained optimization problems amenable to commercial off-the-shelf (COTS) planning and scheduling algorithms.

  16. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    NASA Astrophysics Data System (ADS)

    Heidarinejad, Mohammad

    This dissertation develops rapid and accurate building energy simulations based on a building classification that identifies and focuses modeling efforts on the most significant heat transfer processes. The building classification identifies energy use patterns and their contributing parameters for a portfolio of buildings. The dissertation hypothesis is "Building classification can provide minimal required inputs for rapid and accurate energy simulations for a large number of buildings". The critical literature review indicated that there is a lack of studies that (1) consider a synoptic point of view rather than a case-study approach, (2) analyze the influence of different granularities of energy use, (3) identify key variables based on the heat transfer processes, and (4) automate the procedure for quantifying model complexity against accuracy. Therefore, three dissertation objectives are designed to test the dissertation hypothesis: (1) develop different classes of buildings based on their energy use patterns, (2) develop different building energy simulation approaches for the identified classes of buildings to quantify tradeoffs between model accuracy and complexity, and (3) demonstrate the building simulation approaches on case studies. Penn State's and Harvard's campus buildings as well as high performance LEED NC office buildings are the test beds used to develop the different classes of buildings. The campus buildings include detailed chilled water, electricity, and steam data, enabling buildings to be classified as externally-load, internally-load, or mixed-load dominated. The energy use of internally-load dominated buildings is primarily a function of the internal loads and their schedules. Externally-load dominated buildings tend to have an energy use pattern that is a function of building construction materials and outdoor weather conditions. However, most commercial medium-sized office buildings have a mixed-load pattern, meaning the HVAC system and operation schedule dictate the indoor condition regardless of the contribution of internal and external loads. To deploy the methodology to another portfolio of buildings, simulated LEED NC office buildings are selected. The advantage of this approach is that it isolates energy performance due to inherent building characteristics and location, rather than operational and maintenance factors that can contribute to significant variation in building energy use. A framework for detailed building energy databases with annual energy end-uses is developed to select variables and omit outliers. The results show that the high performance office buildings are internally-load dominated, with three distinct clusters of low-intensity, medium-intensity, and high-intensity energy use patterns among the reviewed office buildings. Low-intensity cluster buildings benefit from small building area, while the medium- and high-intensity clusters have a similar range of floor areas and different energy use intensities. Half of the energy use in the low-intensity buildings is associated with internal loads, such as lighting and plug loads, indicating that there are opportunities to save energy by using lighting or plug load management systems. A comparison between the frameworks developed for the campus buildings and the LEED NC office buildings indicates that these two frameworks are complementary to each other.
    The availability of the information yielded two different procedures, suggesting that future studies of building portfolios, such as city benchmarking and disclosure ordinances, should collect and disclose the minimal required inputs suggested by this study, with at least monthly energy consumption granularity. This dissertation developed automated methods using the OpenStudio API (Application Programming Interface) to create energy models based on the building class. ASHRAE Guideline 14 defines well-accepted criteria for measuring the accuracy of energy simulations; however, there is no well-accepted methodology for quantifying model complexity without the influence of the energy modeler's judgment. This study developed a novel method using two weighting factors, based on (1) computational time and (2) the ease of on-site data collection, to measure the complexity of the energy models. Therefore, this dissertation enables measurement of both model complexity and accuracy as well as assessment of the inherent tradeoffs between energy simulation model complexity and accuracy. The results of this methodology suggest that for most internal load contributors, such as operation schedules, on-site data collection adds more complexity to the model than computational time does. Overall, this study provided specific data on tradeoffs between accuracy and model complexity that point to critical inputs for different building classes, rather than an increase in the volume and detail of model inputs, as current research and consulting practice indicates. (Abstract shortened by UMI.).

  17. Copy Hiding Application Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Holger; Poliakoff, David; Robinson, Peter

    2016-10-06

    CHAI is a lightweight framework that abstracts the automated movement of data (e.g., to/from host/device) via RAJA-like performance portability programming model constructs. It can be viewed as a utility framework and an adjunct to RAJA (a performance portability framework). Performance portability is a technique that abstracts the complexities of modern heterogeneous architectures while allowing the original program to undergo incremental, minimally invasive code changes in order to adapt to newer architectures.

  18. Functional Relationship between Skull Form and Feeding Mechanics in Sphenodon, and Implications for Diapsid Skull Development

    PubMed Central

    Curtis, Neil; Jones, Marc E. H.; Shi, Junfen; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2011-01-01

    The vertebrate skull evolved to protect the brain and sense organs, but with the appearance of jaws and associated forces there was a remarkable structural diversification. This suggests that the evolution of skull form may be linked to these forces, but an important area of debate is whether bone in the skull is minimised with respect to these forces, or whether skulls are mechanically “over-designed” and constrained by phylogeny and development. Mechanical analysis of diapsid reptile skulls could shed light on this longstanding debate. Compared to those of mammals, the skulls of many extant and extinct diapsids comprise an open framework of fenestrae (window-like openings) separated by bony struts (e.g., lizards, tuatara, dinosaurs and crocodiles), a cranial form thought to be strongly linked to feeding forces. We investigated this link by utilising the powerful engineering approach of multibody dynamics analysis to predict the physiological forces acting on the skull of the diapsid reptile Sphenodon. We then ran a series of structural finite element analyses to assess the correlation between bone strain and skull form. With comprehensive loading we found that the distribution of peak von Mises strains was particularly uniform throughout the skull, although specific regions were dominated by tensile strains while others were dominated by compressive strains. Our analyses suggest that the frame-like skulls of diapsid reptiles are probably optimally formed (mechanically ideal: sufficient strength with the minimal amount of bone) with respect to functional forces; they are efficient in terms of having minimal bone volume, minimal weight, and also minimal energy demands in maintenance. PMID:22216358

  19. Conversion of laser energy to gas kinetic energy

    NASA Technical Reports Server (NTRS)

    Caledonia, G. E.

    1976-01-01

    Techniques for the gas phase absorption of laser radiation for ultimate conversion to gas kinetic energy are discussed. Particular emphasis is placed on absorption by the vibration rotation bands of diatomic molecules at high pressures. This high pressure absorption appears to offer efficient conversion of laser energy to gas translational energy. Bleaching and chemical effects are minimized and the variation of the total absorption coefficient with temperature is minimal.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Jun Hyung; Lee, Soo bin; Hodge, Bri-Mathias

    The energy systems of the process industry face a new, unprecedented challenge. Renewable energies should be incorporated, but no single renewable source can meet the industry's large, high-grade energy demand. This paper investigates a simulation framework to compute the capacity of multiple energy sources, including solar, wind power, diesel, and batteries. The framework involves generating actual renewable energy supply and demand profiles and matching supply to demand. Eight configurations of different supply options are evaluated to illustrate the applicability of the proposed framework, with some concluding remarks.

  1. The Model of Transformational Change for Moral Action: A Conceptual Framework to Elevate Student Conduct Practice in Higher Education

    ERIC Educational Resources Information Center

    Neumeister, James R.

    2017-01-01

    Higher education faces heightened scrutiny regarding student misconduct, but collegiate disciplinary processes often have minimal impact on students. Their ineffectiveness is partially attributable to the absence of a conceptual framework that guides conduct administration by linking theory, practice, and outcomes. This article presents a…

  2. Minimizing inappropriate medications in older populations: a 10-step conceptual framework.

    PubMed

    Scott, Ian A; Gray, Leonard C; Martin, Jennifer H; Mitchell, Charles A

    2012-06-01

    The increasing burden of harm resulting from the use of multiple drugs in older patient populations represents a major health problem in developed countries. Approximately 1 in 4 older patients admitted to hospitals are prescribed at least 1 inappropriate medication, and up to 20% of all inpatient deaths are attributable to potentially preventable adverse drug reactions. To minimize this drug-related iatrogenesis, we propose a quality use of medicine framework that comprises 10 sequential steps: 1) ascertain all current medications; 2) identify patients at high risk of or experiencing adverse drug reactions; 3) estimate life expectancy in high-risk patients; 4) define overall care goals in the context of life expectancy; 5) define and confirm current indications for ongoing treatment; 6) determine the time until benefit for disease-modifying medications; 7) estimate the magnitude of benefit versus harm in relation to each medication; 8) review the relative utility of different drugs; 9) identify drugs that may be discontinued; and 10) implement and monitor a drug minimization plan with ongoing reappraisal of drug utility and patient adherence by a single nominated clinician. The framework aims to reduce drug use in older patients to the minimum number of essential drugs, and its utility is demonstrated in reference to a hypothetic case study. Further studies are warranted in validating this framework as a means for assisting clinicians to make more appropriate prescribing decisions in at-risk older patients. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Reproducing the Ensemble Average Polar Solvation Energy of a Protein from a Single Structure: Gaussian-Based Smooth Dielectric Function for Macromolecular Modeling.

    PubMed

    Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil

    2018-02-13

    Typically, the ensemble average polar component of the solvation energy (ΔG_solv^polar) of a macromolecule is computed using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble, and then a single/rigid conformation solvation energy calculation is performed on each snapshot. The primary objective of this work is to demonstrate that the Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136) can reproduce the ensemble average ΔG_solv^polar of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces the ensemble average ΔG_solv^polar (⟨ΔG_solv^polar⟩) from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, in implicit or explicit waters, or the crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In the other minimization environments (implicit or explicit waters or the crystal structure), the traditional two-dielectric model can still be selected, with which the model produces correct solvation energies. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt bridge residues, influences a dielectric model's ability to reproduce the ensemble average value of the polar solvation free energy from a single in vacuo-minimized structure.
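
    The Gaussian-based smooth dielectric referred to here is commonly written in a form like the following (a generic paraphrase in our own notation, not quoted from the paper): each atom i at position r_i with radius R_i contributes a Gaussian density, and the dielectric interpolates between the solute and solvent values,

        \rho_i(\mathbf r) = \exp\!\left( -\frac{|\mathbf r - \mathbf r_i|^2}{\sigma^2 R_i^2} \right), \qquad \rho(\mathbf r) = 1 - \prod_i \bigl( 1 - \rho_i(\mathbf r) \bigr), \qquad \epsilon(\mathbf r) = \rho(\mathbf r)\,\epsilon_{\mathrm{in}} + \bigl( 1 - \rho(\mathbf r) \bigr)\,\epsilon_{\mathrm{out}},

    with σ a width parameter; ε(r) then replaces the sharp two-dielectric boundary in the Poisson-Boltzmann calculation.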

  4. High-accuracy phase-field models for brittle fracture based on a new family of degradation functions

    NASA Astrophysics Data System (ADS)

    Sargado, Juan Michael; Keilegavlen, Eirik; Berre, Inga; Nordbotten, Jan Martin

    2018-02-01

    Phase-field approaches to fracture based on energy minimization principles have been rapidly gaining popularity in recent years, and are particularly well-suited for simulating crack initiation and growth in complex fracture networks. In the phase-field framework, the surface energy associated with crack formation is calculated by evaluating a functional defined in terms of a scalar order parameter and its gradients. These in turn describe the fractures in a diffuse sense following a prescribed regularization length scale. Imposing stationarity of the total energy leads to a coupled system of partial differential equations that enforce stress equilibrium and govern phase-field evolution. These equations are coupled through an energy degradation function that models the loss of stiffness in the bulk material as it undergoes damage. In the present work, we introduce a new parametric family of degradation functions aimed at increasing the accuracy of phase-field models in predicting critical loads associated with crack nucleation as well as the propagation of existing fractures. An additional goal is the preservation of linear elastic response in the bulk material prior to fracture. Through the analysis of several numerical examples, we demonstrate the superiority of the proposed family of functions to the classical quadratic degradation function that is used most often in the literature.
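
    For orientation, the regularized fracture functional that such models build on can be written (in common notation, not the paper's specific parametrization) as

        E(u, d) \;=\; \int_\Omega g(d)\, \psi_e\bigl(\varepsilon(u)\bigr)\, \mathrm d\Omega \;+\; G_c \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2} |\nabla d|^2 \right) \mathrm d\Omega ,

    where u is the displacement, d the phase field, ψ_e the elastic energy density, G_c the fracture toughness, and ℓ the regularization length; the classical quadratic choice is g(d) = (1 - d)^2, and the paper's contribution is a new parametric family replacing this degradation function.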

  5. Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks.

    PubMed

    Arunraja, Muruganantham; Malathi, Veluchamy; Sakthivel, Erulappan

    2015-11-01

    Wireless sensor networks are used in various data gathering applications. The major bottleneck in wireless data gathering systems is the finite energy of sensor nodes. By conserving the on-board energy, the life span of a wireless sensor network can be considerably extended. Since data communication is the dominant energy-consuming activity of a wireless sensor network, data reduction is an effective way to conserve nodal energy. Spatial and temporal correlation among the sensor data is exploited to reduce data communications. Forming clusters of nodes with similar data is an effective way to exploit spatial correlation among neighboring sensors, while sending only a subset of the data and estimating the rest from that subset is the contemporary way of exploiting temporal correlation. In Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks, we construct data-similar iso-clusters with minimal communication overhead. Intra-cluster communication is reduced using an adaptive normalized least-mean-squares (NLMS) based dual prediction framework. The cluster head reduces the inter-cluster data payload using a lossless compressive forwarding technique. The proposed work achieves significant data reduction in both intra-cluster and inter-cluster communications while maintaining the accuracy of the collected data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
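
    A dual prediction scheme of the kind described can be sketched as follows: the sensor and the cluster head run identical NLMS predictors, and the sensor transmits a reading only when the shared prediction misses by more than a tolerance. The signal, filter order, and thresholds below are invented for illustration and are not the paper's parameters.

        # Minimal NLMS dual-prediction sketch: identical predictors at sensor and sink;
        # a reading is transmitted only when the shared prediction misses by more than
        # a tolerance (illustrative data and parameters).
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(500)
        readings = 25 + 2 * np.sin(2 * np.pi * t / 100) + 0.05 * rng.standard_normal(t.size)

        order, mu, eps, tol = 4, 0.5, 1e-6, 0.1
        w = np.zeros(order)                 # shared filter weights (kept in sync)
        history = list(readings[:order])    # both ends know the first few samples
        transmitted = 0

        for x in readings[order:]:
            u = np.array(history[-order:])  # regressor of recent values
            y_hat = w @ u                   # prediction made identically at sensor and sink
            if abs(x - y_hat) > tol:        # prediction not good enough: send the real value
                transmitted += 1
                value_at_sink = x
            else:                           # sink uses the prediction; nothing is sent
                value_at_sink = y_hat
            # NLMS update with the value available at both ends, keeping filters in sync
            e_sync = value_at_sink - y_hat
            w = w + mu * e_sync * u / (eps + u @ u)
            history.append(value_at_sink)

        print(f"transmitted {transmitted} of {readings.size - order} samples")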

  6. Strategies to Reduce Greenhouse Gas Emissions from Laparoscopic Surgery.

    PubMed

    Thiel, Cassandra L; Woods, Noe C; Bilec, Melissa M

    2018-04-01

    To determine the carbon footprint of various sustainability interventions used for laparoscopic hysterectomy. We designed interventions for laparoscopic hysterectomy from approaches that sustainable health care organizations advocate. We used a hybrid environmental life cycle assessment framework to estimate greenhouse gas emissions from the proposed interventions. We conducted the study from September 2015 to December 2016 at the University of Pittsburgh (Pittsburgh, Pennsylvania). The largest carbon footprint savings came from selecting specific anesthetic gases and minimizing the materials used in surgery. Energy-related interventions resulted in a 10% reduction in carbon footprint per case but would result in larger savings for the whole facility. Commonly implemented approaches, such as recycling surgical waste, resulted in less than a 5% reduction in greenhouse gases. To reduce the environmental emissions of surgeries, health care providers need to implement a combination of approaches, including minimizing materials, moving away from certain heat-trapping anesthetic gases, maximizing instrument reuse or single-use device reprocessing, and reducing off-hour energy use in the operating room. These strategies can reduce the carbon footprint of an average laparoscopic hysterectomy by up to 80%. Recycling alone does very little to reduce environmental footprint. Public Health Implications. Health care services are a major source of environmental emissions and reducing their carbon footprint would improve environmental and human health. Facilities seeking to reduce environmental footprint should take a comprehensive systems approach to find safe and effective interventions and should identify and address policy barriers to implementing more sustainable practices.

  7. Strategies to Reduce Greenhouse Gas Emissions from Laparoscopic Surgery

    PubMed Central

    Thiel, Cassandra L.; Woods, Noe C.

    2018-01-01

    Objectives. To determine the carbon footprint of various sustainability interventions used for laparoscopic hysterectomy. Methods. We designed interventions for laparoscopic hysterectomy from approaches that sustainable health care organizations advocate. We used a hybrid environmental life cycle assessment framework to estimate greenhouse gas emissions from the proposed interventions. We conducted the study from September 2015 to December 2016 at the University of Pittsburgh (Pittsburgh, Pennsylvania). Results. The largest carbon footprint savings came from selecting specific anesthetic gases and minimizing the materials used in surgery. Energy-related interventions resulted in a 10% reduction in carbon footprint per case but would result in larger savings for the whole facility. Commonly implemented approaches, such as recycling surgical waste, resulted in less than a 5% reduction in greenhouse gases. Conclusions. To reduce the environmental emissions of surgeries, health care providers need to implement a combination of approaches, including minimizing materials, moving away from certain heat-trapping anesthetic gases, maximizing instrument reuse or single-use device reprocessing, and reducing off-hour energy use in the operating room. These strategies can reduce the carbon footprint of an average laparoscopic hysterectomy by up to 80%. Recycling alone does very little to reduce environmental footprint. Public Health Implications. Health care services are a major source of environmental emissions and reducing their carbon footprint would improve environmental and human health. Facilities seeking to reduce environmental footprint should take a comprehensive systems approach to find safe and effective interventions and should identify and address policy barriers to implementing more sustainable practices. PMID:29698098

  8. Transaction-Based Building Controls Framework, Volume 1: Reference Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somasundaram, Sriram; Pratt, Robert G.; Akyol, Bora A.

    This document proposes a framework concept to achieve the objectives of raising buildings' efficiency and energy savings potential, benefitting building owners and operators. We call it a transaction-based framework, wherein mutually beneficial and cost-effective market-based transactions can be enabled between multiple players across different domains. Transaction-based building controls are one part of the transactional energy framework. While these controls realize benefits by enabling automatic, market-based intra-building efficiency optimizations, the transactional energy framework provides similar benefits using the same market-based structure, yet on a larger scale and beyond just buildings, to society at large.

  9. Radial Symmetry of p-Harmonic Minimizers

    NASA Astrophysics Data System (ADS)

    Koski, Aleksis; Onninen, Jani

    2018-03-01

    “It is still not known if the radial cavitating minimizers obtained by Ball (Philos Trans R Soc Lond A 306:557-611, 1982) (and subsequently by many others) are global minimizers of any physically reasonable nonlinearly elastic energy". This quotation is from Sivaloganathan and Spector (Ann Inst Henri Poincaré Anal Non Linéaire 25(1):201-213, 2008) and seems to be still accurate. The model case of the p-harmonic energy is considered here. We prove that the planar radial minimizers are indeed the global minimizers provided we prescribe the admissible deformations on the boundary. In the traction free setting, however, even the identity map need not be a global minimizer.
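
    For reference, the p-harmonic energy of a mapping f : Ω → R² considered in this setting is, in standard notation,

        \mathcal E_p[f] \;=\; \int_\Omega |Df(x)|^p \, \mathrm dx , \qquad 1 < p < \infty ,

    and the result states that, once the boundary values are prescribed, the planar radial map minimizes this energy among admissible deformations.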

  10. Sociological Perspectives on Energy and Rural Development: A Review of Major Frameworks for Research on Developing Countries.

    ERIC Educational Resources Information Center

    Koppel, Bruce; Schlegel, Charles

    The principal sociological frameworks used in energy research on developing countries can be appraised in terms of the view of the energy-rural development problem that each framework implies. "Socio-Technical Analysis," which is used most in industrial and organizational sociology and in ecological anthropology, is oriented to the decomposition…

  11. Seismic waves in a self-gravitating planet

    NASA Astrophysics Data System (ADS)

    Brazda, Katharina; de Hoop, Maarten V.; Hörmann, Günther

    2013-04-01

    The elastic-gravitational equations describe the propagation of seismic waves including the effect of self-gravitation. We rigorously derive and analyze this system of partial differential equations and boundary conditions for a general, uniformly rotating, elastic, but aspherical, inhomogeneous, and anisotropic, fluid-solid earth model, under minimal assumptions concerning the smoothness of material parameters and geometry. For this purpose we first establish a consistent mathematical formulation of the low regularity planetary model within the framework of nonlinear continuum mechanics. Using calculus of variations in a Sobolev space setting, we then show how the weak form of the linearized elastic-gravitational equations directly arises from Hamilton's principle of stationary action. Finally we prove existence and uniqueness of weak solutions by the method of energy estimates and discuss additional regularity properties.

  12. A neural network for controlling the configuration of frame structure with elastic members

    NASA Technical Reports Server (NTRS)

    Tsutsumi, Kazuyoshi

    1989-01-01

    A neural network for controlling the configuration of frame structure with elastic members is proposed. In the present network, the structure is modeled not by using the relative angles of the members but by using the distances between the joint locations alone. The relationship between the environment and the joints is also defined by their mutual distances. The analog neural network attains the reaching motion of the manipulator as a minimization problem of the energy constructed by the distances between the joints, the target, and the obstacles. The network can generate not only the final but also the transient configurations and the trajectory. This framework with flexibility and parallelism is very suitable for controlling the Space Telerobotic systems with many degrees of freedom.

  13. Context-dependent logo matching and recognition.

    PubMed

    Sahbi, Hichem; Ballan, Lamberto; Serra, Giuseppe; Del Bimbo, Alberto

    2013-03-01

    We contribute, through this paper, to the design of a novel variational framework able to match and recognize multiple instances of multiple reference logos in image archives. Reference logos and test images are seen as constellations of local features (interest points, regions, etc.) and matched by minimizing an energy function mixing: 1) a fidelity term that measures the quality of feature matching, 2) a neighborhood criterion that captures feature co-occurrence/geometry, and 3) a regularization term that controls the smoothness of the matching solution. We also introduce a detection/recognition procedure and study its theoretical consistency. Finally, we show the validity of our method through extensive experiments on the challenging MICC-Logos dataset. Our method outperforms baseline as well as state-of-the-art matching/recognition procedures by 20%.

  14. A Data Driven Pre-cooling Framework for Energy Cost Optimization in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishwanath, Arun; Chandan, Vikas; Mendoza, Cameron

    Commercial buildings consume a significant amount of energy. Facility managers are increasingly grappling with the problem of reducing their buildings’ peak power, overall energy consumption and energy bills. In this paper, we first develop an optimization framework – based on a gray box model for zone thermal dynamics – to determine a pre-cooling strategy that simultaneously shifts the peak power to low energy tariff regimes, and reduces both the peak power and overall energy consumption by exploiting the flexibility in a building’s thermal comfort range. We then evaluate the efficacy of the pre-cooling optimization framework by applying it to building management system data, spanning several days, obtained from a large commercial building located in a tropical region of the world. The results from simulations show that optimal pre-cooling reduces peak power by over 50%, energy consumption by up to 30% and energy bills by up to 37%. Next, to enable ease of use of our framework, we also propose a shortest path based heuristic algorithm for solving the optimization problem and show that it has comparable performance with the optimal solution. Finally, we describe an application of the proposed optimization framework for developing countries to reduce the dependency on expensive fossil fuels, which are often used as a source for energy backup. We conclude by highlighting our real world deployment of the optimal pre-cooling framework via a software service on the cloud platform of a major provider. Our pre-cooling methodology, based on the gray box optimization framework, incurs no capital expense and relies on data readily available from a building management system, thus enabling facility managers to take informed decisions for improving the energy and cost footprints of their buildings.
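
    As a rough illustration of the gray-box pre-cooling idea described above, the sketch below simulates a single zone with a first-order resistance-capacitance (RC) model and brute-forces the pre-cooling start hour that minimizes electricity cost under a two-tier tariff while respecting a comfort ceiling. Every parameter and the search strategy are hypothetical stand-ins, not the paper's model, algorithm, or data.

        # Illustrative sketch only: first-order gray-box (RC) zone model plus brute-force
        # search over pre-cooling start hours. All parameters below are made-up values.
        import numpy as np

        R, C = 2.0, 5.0                      # thermal resistance (K/kW) and capacitance (kWh/K)
        dt = 1.0                             # time step (hours)
        hours = np.arange(24)
        t_out = 28 + 4 * np.sin((hours - 9) / 24 * 2 * np.pi)      # synthetic outdoor temp (deg C)
        tariff = np.where((hours >= 9) & (hours < 21), 0.30, 0.10)  # $/kWh, assumed two-tier
        occupied = (hours >= 8) & (hours < 18)
        comfort_hi, precool_to = 26.0, 22.0  # comfort ceiling and pre-cooling target (deg C)
        cop, max_kw = 3.0, 10.0              # cooling COP and capacity

        def simulate(precool_start):
            """Return the daily cooling cost for a given pre-cooling start hour (None if infeasible)."""
            T, cost = 26.0, 0.0
            for h in hours:
                setpoint = precool_to if precool_start <= h < 8 else comfort_hi
                # thermal power that lands the zone exactly on the setpoint next step
                needed = (T - setpoint) * C / dt + (t_out[h] - T) / R
                cooling = min(max_kw, max(0.0, needed))
                T += dt / C * ((t_out[h] - T) / R - cooling)
                cost += tariff[h] * cooling / cop * dt
                if occupied[h] and T > comfort_hi + 0.5:
                    return None              # comfort violated (capacity exceeded)
            return cost

        costs = {s: simulate(s) for s in range(0, 9)}       # s = 8 means "no pre-cooling"
        feasible = {s: c for s, c in costs.items() if c is not None}
        best = min(feasible, key=feasible.get)
        print("best pre-cooling start hour:", best, "daily cost: $%.2f" % feasible[best])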

  15. A duality framework for stochastic optimal control of complex systems

    DOE PAGES

    Malikopoulos, Andreas A.

    2016-01-01

    In this study, we address the problem of minimizing the long-run expected average cost of a complex system consisting of interactive subsystems. We formulate a multiobjective optimization problem of the one-stage expected costs of the subsystems and provide a duality framework to prove that the control policy yielding the Pareto optimal solution minimizes the average cost criterion of the system. We provide the conditions of existence and a geometric interpretation of the solution. For practical situations having constraints consistent with those studied here, our results imply that the Pareto control policy may be of value when we seek to derive online the optimal control policy in complex systems.

  16. Derisking Renewable Energy Investment. A Framework to Support Policymakers in Selecting Public Instruments to Promote Renewable Energy Investment in Developing Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waissbein, Oliver; Glemarec, Yannick; Bayraktar, Hande

    2013-03-15

    This report introduces an innovative framework to assist policymakers to quantitatively compare the impact of different public instruments to promote renewable energy. The report identifies the need to reduce the high financing costs for renewable energy in developing countries as an important task for policymakers acting today. The framework is structured in four stages: (i) risk environment, (ii) public instruments, (iii) levelised cost and (iv) evaluation. To illustrate how the framework can support decision-making in practice, the report presents findings from illustrative case studies in four developing countries. It then draws on these results to discuss possible directions for enhancing public interventions to scale-up renewable energy investment. UNDP is also releasing a financial tool for policymakers to accompany the framework. The financial tool is available for download on the UNDP website.

  17. Energy stability of droplets and dry spots in a thin film model of hanging drops

    NASA Astrophysics Data System (ADS)

    Cheung, Ka-Luen; Chou, Kai-Seng

    2017-10-01

    The 2-D thin film equation describing the evolution of hanging drops is studied. All radially symmetric steady states are classified, and their energy stability is determined. It is shown that the droplet with zero contact angle is the only global energy minimizer and the dry spot with zero contact angle is a strict local energy minimizer.

  18. The EU sustainable energy policy indicators framework.

    PubMed

    Streimikiene, Dalia; Sivickas, Gintautas

    2008-11-01

    The article deals with an indicators framework to monitor implementation of the main EU (European Union) directives and other policy documents targeting sustainable energy development. The main EU directives that have an impact on sustainable energy development are those promoting energy efficiency and the use of renewable energy sources, those implementing greenhouse gas mitigation and atmospheric pollution reduction policies, and other policy documents and strategies targeting the energy sector. Promotion of renewable energy sources and energy efficiency improvements is among the priorities of EU energy policy because both have a positive impact on energy security and climate change mitigation. The framework of indicators can be developed around the main targets set by EU energy and environmental policies, connecting indicators via a chain of mutual impacts and helping define the policies and measures necessary to achieve the established targets, based on an assessment of their impact on the indicators representing sustainable energy development aims. The article discusses the application of the indicators framework for EU sustainable energy policy analysis and presents a case study of this policy tool applied to the Baltic States. The article also discusses the use of biomass in the Baltic States and future considerations in this field.

  19. Minimum energy information fusion in sensor networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapline, G

    1999-05-11

    In this paper we consider how to organize the sharing of information in a distributed network of sensors and data processors so as to provide explanations for sensor readings with minimal expenditure of energy. We point out that the Minimum Description Length principle provides an approach to information fusion that is more naturally suited to energy minimization than traditional Bayesian approaches. In addition we show that for networks consisting of a large number of identical sensors Kohonen self-organization provides an exact solution to the problem of combining the sensor outputs into minimal description length explanations.
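
    A minimal sketch of the Minimum Description Length idea invoked above: among candidate explanations of a set of sensor readings, prefer the one that minimizes parameter-coding bits plus residual-coding bits. The noise model, bit costs, and readings below are assumptions for illustration, not the paper's formulation.

        # MDL-style comparison of two explanations of 50 similar sensor readings.
        # All numbers (noise level, bits per parameter, data) are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        readings = 20.0 + 0.3 * rng.standard_normal(50)   # one phenomenon seen by 50 sensors
        sigma, param_bits = 0.3, 32                       # assumed noise level and bits per parameter

        def residual_bits(residuals):
            # Gaussian (differential) code length in bits; may be negative, which is fine for comparison.
            nll = 0.5 * np.sum((residuals / sigma) ** 2) + residuals.size * np.log(sigma * np.sqrt(2 * np.pi))
            return nll / np.log(2)

        lengths = {
            "one shared value":     param_bits * 1 + residual_bits(readings - readings.mean()),
            "one value per sensor": param_bits * readings.size + residual_bits(np.zeros_like(readings)),
        }
        best = min(lengths, key=lengths.get)
        print({k: round(v, 1) for k, v in lengths.items()}, "->", best)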

  20. Flexible 2D Crystals of Polycyclic Aromatics Stabilized by Static Distortion Waves.

    PubMed

    Meissner, Matthias; Sojka, Falko; Matthes, Lars; Bechstedt, Friedhelm; Feng, Xinliang; Müllen, Klaus; Mannsfeld, Stefan C B; Forker, Roman; Fritz, Torsten

    2016-07-26

    The epitaxy of many organic films on inorganic substrates can be classified within the framework of rigid lattices which helps to understand the origin of energy gain driving the epitaxy of the films. Yet, there are adsorbate-substrate combinations with distinct mutual orientations for which this classification fails and epitaxy cannot be explained within a rigid lattice concept. It has been proposed that tiny shifts in atomic positions away from ideal lattice points, so-called static distortion waves (SDWs), are responsible for the observed orientational epitaxy in such cases. Using low-energy electron diffraction and scanning tunneling microscopy, we provide direct experimental evidence for SDWs in organic adsorbate films, namely hexa-peri-hexabenzocoronene on graphite. They manifest as wave-like sub-Ångström molecular displacements away from an ideal adsorbate lattice which is incommensurate with graphite. By means of a density-functional-theory based model, we show that, due to the flexibility in the adsorbate layer, molecule-substrate energy is gained by straining the intermolecular bonds and that the resulting total energy is minimal for the observed domain orientation, constituting the orientational epitaxy. While structural relaxation at an interface is a common assumption, the combination of the precise determination of the incommensurate epitaxial relation, the direct observation of SDWs in real space, and their identification as the sole source of epitaxial energy gain constitutes a comprehensive proof of this effect.

  1. Rigorous Statistical Bounds in Uncertainty Quantification for One-Layer Turbulent Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Qi, Di; Majda, Andrew J.

    2018-04-01

    Statistical bounds controlling the total fluctuations in mean and variance about a basic steady-state solution are developed for the truncated barotropic flow over topography. Statistical ensemble prediction is an important topic in weather and climate research. Here, the evolution of an ensemble of trajectories is considered using statistical instability analysis and is compared and contrasted with the classical deterministic instability for the growth of perturbations in one pointwise trajectory. The maximum growth of the total statistics in fluctuations is derived relying on the statistical conservation principle of the pseudo-energy. The saturation bound of the statistical mean fluctuation and variance in the unstable regimes with non-positive-definite pseudo-energy is achieved by linking with a class of stable reference states and minimizing the stable statistical energy. Two cases with dependence on initial statistical uncertainty and on external forcing and dissipation are compared and unified under a consistent statistical stability framework. The flow structures and statistical stability bounds are illustrated and verified by numerical simulations among a wide range of dynamical regimes, where subtle transient statistical instability exists in general with positive short-time exponential growth in the covariance even when the pseudo-energy is positive-definite. Among the various scenarios in this paper, there exist strong forward and backward energy exchanges between different scales which are estimated by the rigorous statistical bounds.

  2. Dissecting jets and missing energy searches using $n$-body extended simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Timothy; Dolan, Matthew J.; El Hedri, Sonia

    Simplified Models are a useful way to characterize new physics scenarios for the LHC. Particle decays are often represented using non-renormalizable operators that involve the minimal number of fields required by symmetries. Generalizing to a wider class of decay operators allows one to model a variety of final states. This approach, which we dub the $n$-body extension of Simplified Models, provides a unifying treatment of the signal phase space resulting from a variety of signals. In this paper, we present the first application of this framework in the context of multijet plus missing energy searches. The main result of this work is a global performance study with the goal of identifying which set of observables yields the best discriminating power against the largest Standard Model backgrounds for a wide range of signal jet multiplicities. Our analysis compares combinations of one, two and three variables, placing emphasis on the enhanced sensitivity gain resulting from non-trivial correlations. Utilizing boosted decision trees, we compare and classify the performance of missing energy, energy scale and energy structure observables. We demonstrate that including an observable from each of these three classes is required to achieve optimal performance. In conclusion, this work additionally serves to establish the utility of $n$-body extended Simplified Models as a diagnostic for unpacking the relative merits of different search strategies, thereby motivating their application to new physics signatures beyond jets and missing energy.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umino, Satoru; Takahashi, Hideaki, E-mail: hideaki@m.tohoku.ac.jp; Morita, Akihiro

    In a recent work, we developed a method [H. Takahashi et al., J. Chem. Phys. 143, 084104 (2015)] referred to as exchange-core function (ECF) approach, to compute exchange repulsion E_ex between solute and solvent in the framework of the quantum mechanical (QM)/molecular mechanical (MM) method. The ECF, represented with a Slater function, plays an essential role in determining E_ex on the basis of the overlap model. In the work of Takahashi et al. [J. Chem. Phys. 143, 084104 (2015)], it was demonstrated that our approach is successful in computing the hydrogen bond energies of minimal QM/MM systems including a cationic QM solute. We provide in this paper the extension of the ECF approach to the free energy calculation in condensed phase QM/MM systems by combining the ECF and the QM/MM-ER approach [H. Takahashi et al., J. Chem. Phys. 121, 3989 (2004)]. By virtue of the theory of solutions in energy representation, the free energy contribution δμ_ex from the exchange repulsion was naturally formulated. We found that the ECF approach in combination with QM/MM-ER gives a substantial improvement on the calculation of the hydration free energy of a hydronium ion. This can be attributed to the fact that the ECF reasonably realizes the contraction of the electron density of the cation due to the deficit of an electron.

  4. Flexible 2D Crystals of Polycyclic Aromatics Stabilized by Static Distortion Waves

    PubMed Central

    2016-01-01

    The epitaxy of many organic films on inorganic substrates can be classified within the framework of rigid lattices which helps to understand the origin of energy gain driving the epitaxy of the films. Yet, there are adsorbate–substrate combinations with distinct mutual orientations for which this classification fails and epitaxy cannot be explained within a rigid lattice concept. It has been proposed that tiny shifts in atomic positions away from ideal lattice points, so-called static distortion waves (SDWs), are responsible for the observed orientational epitaxy in such cases. Using low-energy electron diffraction and scanning tunneling microscopy, we provide direct experimental evidence for SDWs in organic adsorbate films, namely hexa-peri-hexabenzocoronene on graphite. They manifest as wave-like sub-Ångström molecular displacements away from an ideal adsorbate lattice which is incommensurate with graphite. By means of a density-functional-theory based model, we show that, due to the flexibility in the adsorbate layer, molecule–substrate energy is gained by straining the intermolecular bonds and that the resulting total energy is minimal for the observed domain orientation, constituting the orientational epitaxy. While structural relaxation at an interface is a common assumption, the combination of the precise determination of the incommensurate epitaxial relation, the direct observation of SDWs in real space, and their identification as the sole source of epitaxial energy gain constitutes a comprehensive proof of this effect. PMID:27014920

  5. A revised energy-balance framework for the Earth

    NASA Astrophysics Data System (ADS)

    Dessler, A. E.

    2017-12-01

    Some of the most important conclusions of climate science are based on energy balance calculations, in which solar energy absorbed by the Earth system is set equal to infrared energy radiated to space. Traditionally, energy radiated to space is assumed to be proportional to surface temperature. We show here problems with this framework, including potential biases in estimates of climate sensitivity based on the 20th-century historical record. This could potentially explain why estimates of equilibrium climate sensitivity (ECS) using observations over the 20th century yield values lower than other estimates. We then present a modified version of the energy balance framework in which energy radiated to space is assumed to be proportional to tropical atmospheric temperature. We use this new framework to estimate ECS and obtain an estimate of 3°C, with a likely range (66% confidence interval) of 2.2-4.1°C.

  6. An Asset-Based Approach to Tribal Community Energy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Rachael A.; Martino, Anthony; Begay, Sandra K.

    Community energy planning is a vital component of successful energy resource development and project implementation. Planning can help tribes develop a shared vision and strategies to accomplish their energy goals. This paper explores the benefits of an asset-based approach to tribal community energy planning. While a framework for community energy planning and federal funding already exists, some areas of difficulty in the planning cycle have been identified. This paper focuses on developing a planning framework that offsets those challenges. The asset-based framework described here takes inventory of a tribe’s capital assets, such as land capital, human capital, financial capital, and political capital. Such an analysis evaluates how being rich in a specific type of capital can offer a tribe unique advantages in implementing their energy vision. Finally, a tribal case study demonstrates the practical application of an asset-based framework.

  7. Energy minimization on manifolds for docking flexible molecules

    PubMed Central

    Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima

    2015-01-01

    In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
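
    The search space described above combines rigid-body moves with internal bond rotations. The sketch below, which is not the authors' code, shows the basic ingredients on toy coordinates: a rotation generated through the SO(3) exponential map (Rodrigues formula), a translation, and a torsion about a rotatable bond.

        # Toy illustration of a combined rigid-body + torsional move; not the paper's implementation.
        import numpy as np

        def exp_so3(omega):
            """Rodrigues formula: map a rotation vector in R^3 to a rotation matrix."""
            theta = np.linalg.norm(omega)
            if theta < 1e-12:
                return np.eye(3)
            k = omega / theta
            K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
            return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

        def move(coords, omega, t, bond, chi):
            """Apply a rigid move (omega, t), then rotate the atoms listed in `bond` by torsion chi."""
            x = coords @ exp_so3(omega).T + t
            a, b, rotated = bond                           # bond axis a -> b, indices of atoms past the bond
            axis = x[b] - x[a]
            R = exp_so3(chi * axis / np.linalg.norm(axis))
            x[rotated] = (x[rotated] - x[b]) @ R.T + x[b]
            return x

        # four-atom toy "ligand"; atoms 2 and 3 rotate about the 0-1 bond
        lig = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.3, 1.0, 0.0], [3.4, 1.1, 0.3]])
        new = move(lig, omega=np.array([0.1, 0.2, 0.0]), t=np.array([0.5, 0.0, 0.0]),
                   bond=(0, 1, [2, 3]), chi=0.4)
        print(new.round(3))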

  8. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    PubMed

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantifications. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. © 2012 American Association of Physicists in Medicine.
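
    To make the bin-wise decomposition step concrete, the following toy inversion recovers two material thicknesses from two energy bins, assuming one effective attenuation coefficient per material per bin; the coefficients and counts are invented, and the paper's z-score minimization and BEAM validation are not reproduced.

        # Toy two-bin, two-material decomposition (water + iodine); all numbers are made up.
        import numpy as np

        # effective linear attenuation coefficients (1/cm): rows = energy bins, columns = materials
        A = np.array([[0.40, 4.0],    # low-energy bin:  water, iodine (assumed)
                      [0.25, 1.5]])   # high-energy bin: water, iodine (assumed)
        I0 = np.array([1.0e5, 8.0e4]) # open-field counts per bin (assumed)

        true_t = np.array([10.0, 0.05])        # 10 cm water and 0.05 cm iodine
        I = I0 * np.exp(-A @ true_t)           # ideal, noise-free bin measurements

        # decomposition: solve A t = -ln(I / I0) for the thickness vector t
        t_hat = np.linalg.solve(A, -np.log(I / I0))
        print("estimated thicknesses (cm):", t_hat.round(4))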

  9. Ideas That Work! The Midnight Audit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Steven A.

    The midnight audit provides valuable insight toward identifying opportunities to reduce energy consumption—insight that can be easily overlooked during the normal (daytime) energy auditing process. The purpose of the midnight audit is to observe after-hour operation with the mindset of seeking ways to further minimize energy consumption during the unoccupied mode and minimize energy waste by reducing unnecessary operation. The midnight audit should be used to verify that equipment is off when it is supposed to be, or operating in set-back mode when applicable. Even a facility that operates 2 shifts per day, 5 days per week experiences fewer annual hours in occupied mode than it does during unoccupied mode. Minimizing energy loads during unoccupied hours can save significant energy, which is why the midnight audit is an Idea That Works.

  10. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
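
    Schematically, and with notation that may differ from the paper's, the continuous problem behind such metric-based adaptation can be stated as

        \min_{\mathcal{M}(x) \succ 0} \; E(\mathcal{M}) \qquad \text{subject to} \qquad \int_{\Omega} \sqrt{\det \mathcal{M}(x)} \, dx \;\le\; N,

    where E is the localizable error estimate viewed as a function of the Riemannian metric field M (which encodes element size and anisotropy) and N fixes the degrees of freedom; the gradient of E with respect to M is approximated from the local solves and followed by descent in metric space.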

  11. Verbal Working Memory and Language Production: Common Approaches to the Serial Ordering of Verbal Information

    ERIC Educational Resources Information Center

    Acheson, Daniel J.; MacDonald, Maryellen C.

    2009-01-01

    Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by…

  12. Overcoming double-step CO2 adsorption and minimizing water co-adsorption in bulky diamine-appended variants of Mg2(dobpdc).

    PubMed

    Milner, Phillip J; Martell, Jeffrey D; Siegelman, Rebecca L; Gygi, David; Weston, Simon C; Long, Jeffrey R

    2018-01-07

    Alkyldiamine-functionalized variants of the metal-organic framework Mg2(dobpdc) (dobpdc4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary,secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.

  13. Overcoming double-step CO2 adsorption and minimizing water co-adsorption in bulky diamine-appended variants of Mg2(dobpdc)

    DOE PAGES

    Milner, Phillip J.; Martell, Jeffrey D.; Siegelman, Rebecca L.; ...

    2017-10-26

    Alkyldiamine-functionalized variants of the metal–organic framework Mg2(dobpdc) (dobpdc4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary,secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.

  14. Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Aziz, H M Abdul; Young, Stan

    Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues either at controlled intersections or highway segments reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are controlled by traffic signal control settings, and effective traffic light control would both smooth traffic flow and minimize fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation on the modelling and control framework in the context of an urban network of signalized intersections. Specifically, we developed recursive input-output traffic queueing models. The queue formation can be modeled as a stochastic process where the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-Spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model would provide the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we have proposed a preliminary closed loop framework on stochastic distribution control for the traffic queueing system to make the traffic queueing length PDF follow a target PDF that potentially realizes the smooth traffic flow distribution in a concerned corridor.
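
    In standard B-spline stochastic distribution control, on which the preliminary model described above appears to draw (the report's exact formulation may differ), the queue-length PDF is approximated with fixed basis functions and control-dependent weights and is driven toward a target PDF:

        \gamma(y, u_k) \;\approx\; \sum_{i=1}^{n} w_i(u_k) \, B_i(y), \qquad J(u_k) \;=\; \int_{\Omega_y} \big( \gamma(y, u_k) - g(y) \big)^2 \, dy,

    where y is the queue length, u_k the signal-setting parameters at step k, B_i the B-spline basis, g the target PDF, and J the tracking cost to be minimized by the closed-loop controller.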

  15. Minimizing center of mass vertical movement increases metabolic cost in walking.

    PubMed

    Ortega, Justus D; Farley, Claire T

    2005-12-01

    A human walker vaults up and over each stance limb like an inverted pendulum. This similarity suggests that the vertical motion of a walker's center of mass reduces metabolic cost by providing a mechanism for pendulum-like mechanical energy exchange. Alternatively, some researchers have hypothesized that minimizing vertical movements of the center of mass during walking minimizes the metabolic cost, and this view remains prevalent in clinical gait analysis. We examined the relationship between vertical movement and metabolic cost by having human subjects walk normally and with minimal center of mass vertical movement ("flat-trajectory walking"). In flat-trajectory walking, subjects reduced center of mass vertical displacement by an average of 69% (P = 0.0001) but consumed approximately twice as much metabolic energy over a range of speeds (0.7-1.8 m/s) (P = 0.0001). In flat-trajectory walking, passive pendulum-like mechanical energy exchange provided only a small portion of the energy required to accelerate the center of mass because gravitational potential energy fluctuated minimally. Thus, despite the smaller vertical movements in flat-trajectory walking, the net external mechanical work needed to move the center of mass was similar in both types of walking (P = 0.73). Subjects walked with more flexed stance limbs in flat-trajectory walking (P < 0.001), and the resultant increase in stance limb force generation likely helped cause the doubling in metabolic cost compared with normal walking. Regardless of the cause, these findings clearly demonstrate that human walkers consume substantially more metabolic energy when they minimize vertical motion.
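
    Pendulum-like exchange of this kind is commonly quantified in the locomotion literature (not necessarily the exact metric used in this study) by the percentage recovery

        R \;=\; \frac{W_{\mathrm{f}} + W_{\mathrm{v}} - W_{\mathrm{ext}}}{W_{\mathrm{f}} + W_{\mathrm{v}}} \times 100\%,

    where W_f and W_v are the positive work required to produce the forward and vertical (kinetic plus gravitational potential) energy fluctuations of the center of mass taken separately and W_ext is the positive external work when they are combined; flattening the trajectory removes most of the gravitational potential-energy fluctuation and hence most of the available exchange, consistent with the findings above.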

  16. Energy minimization strategies and renewable energy utilization for desalination: a review.

    PubMed

    Subramani, Arun; Badruzzaman, Mohammad; Oppenheimer, Joan; Jacangelo, Joseph G

    2011-02-01

    Energy is a significant cost in the economics of desalinating waters, but water scarcity is driving the rapid expansion in global installed capacity of desalination facilities. Conventional fossil fuels have been utilized as their main energy source, but recent concerns over greenhouse gas (GHG) emissions have promoted global development and implementation of energy minimization strategies and cleaner energy supplies. In this paper, a comprehensive review of energy minimization strategies for membrane-based desalination processes and utilization of lower GHG emission renewable energy resources is presented. The review covers the utilization of energy efficient design, high efficiency pumping, energy recovery devices, advanced membrane materials (nanocomposite, nanotube, and biomimetic), innovative technologies (forward osmosis, ion concentration polarization, and capacitive deionization), and renewable energy resources (solar, wind, and geothermal). Utilization of energy efficient design combined with high efficiency pumping and energy recovery devices have proven effective in full-scale applications. Integration of advanced membrane materials and innovative technologies for desalination show promise but lack long-term operational data. Implementation of renewable energy resources depends upon geography-specific abundance, a feasible means of handling renewable energy power intermittency, and solving technological and economic scale-up and permitting issues. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Internet Civil Defense: Feasibility Study

    DTIC Science & Technology

    2002-12-09

    Internet Civil Defense (ICD) can help to quell the spread of false rumors that induce fear and to manage fear in various ways: continuously polling ... delivering timely information to minimize rumors and accelerate recovery from disasters ... creating an “umbrella” framework to raise the overall ... Overall, ICD will collect and deliver actionable information to enhance public ...

  18. General squark flavour mixing: constraints, phenomenology and benchmarks

    DOE PAGES

    De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...

    2015-11-19

    Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.

  19. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution to the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  20. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning.

    PubMed

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-07-20

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution to the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
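
    For reference, the two families of group penalties being contrasted can be written, for beam-angle groups x_g, as the convex ℓ2,1 penalty versus a folded concave group penalty; the minimax concave penalty (MCP) is shown as one representative concave choice and may differ from the paper's exact gFCP:

        \Omega_{\ell_{2,1}}(x) = \lambda \sum_{g} \lVert x_g \rVert_2, \qquad
        \Omega_{\mathrm{gFCP}}(x) = \sum_{g} \rho_{\lambda}\big( \lVert x_g \rVert_2 \big), \qquad
        \rho_{\lambda}(t) =
        \begin{cases}
          \lambda t - t^{2}/(2\gamma), & 0 \le t \le \gamma \lambda, \\
          \gamma \lambda^{2}/2, & t > \gamma \lambda,
        \end{cases}

    the concave penalty flattens for large group norms, so strongly used beams are penalized less and the convex surrogate's bias toward many weak beams is reduced.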

  1. Lattice analysis for the energy scale of QCD phenomena.

    PubMed

    Yamamoto, Arata; Suganuma, Hideo

    2008-12-12

    We formulate a new framework in lattice QCD to study the relevant energy scale of QCD phenomena. By considering the Fourier transformation of the link variable, we can investigate the intrinsic energy scale of a physical quantity nonperturbatively. This framework is broadly applicable to all lattice QCD calculations. We apply this framework to the quark-antiquark potential and meson masses in quenched lattice QCD. The gluonic energy scale relevant for the confinement is found to be less than 1 GeV in the Landau or Coulomb gauge.
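
    The momentum-space link variable underlying this analysis is, in the usual convention (normalization and lattice details may differ from the paper),

        \tilde{U}_\mu(p) \;=\; \sum_{x} e^{-i p \cdot x} \, U_\mu(x),

    and the relevant energy scale of a quantity is probed by removing momentum components of the link variable above or below a cut Λ, transforming back to coordinate space, and recomputing the quantity from the filtered links.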

  2. Quality Assurance Framework for Mini-Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esterly, Sean; Baring-Gould, Ian; Booth, Samuel

    To address the root challenges of providing quality power to remote consumers through financially viable mini-grids, the Global Lighting and Energy Access Partnership (Global LEAP) initiative of the Clean Energy Ministerial and the U.S. Department of Energy teamed with the National Renewable Energy Laboratory (NREL) and Power Africa to develop a Quality Assurance Framework (QAF) for isolated mini-grids. The framework addresses both alternating current (AC) and direct current (DC) mini-grids, and is applicable to renewable, fossil-fuel, and hybrid systems.

  3. Rapid development of Proteomic applications with the AIBench framework.

    PubMed

    López-Fernández, Hugo; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Méndez Reboredo, José R; Santos, Hugo M; Carreira, Ricardo J; Capelo-Martínez, José L; Fdez-Riverola, Florentino

    2011-09-15

    In this paper we present two case studies of Proteomics applications development using the AIBench framework, a Java desktop application framework mainly focused in scientific software development. The applications presented in this work are Decision Peptide-Driven, for rapid and accurate protein quantification, and Bacterial Identification, for Tuberculosis biomarker search and diagnosis. Both tools work with mass spectrometry data, specifically with MALDI-TOF spectra, minimizing the time required to process and analyze the experimental data. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.

  4. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, as one of the typical applications supported by IoT, denoted as a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid.

  5. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, as one of the typical applications supported by IoT, denoted as a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid. PMID:29354654
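
    The network-calculus bounds referred to above take the standard form (the notation here is generic and may not match the paper's): if cumulative user demand D satisfies D(t) - D(s) ≤ α(t - s) for an arrival curve α, and the supply side guarantees a service curve β, i.e.

        S(t) \;\ge\; \inf_{0 \le s \le t} \big\{ D(s) + \beta(t - s) \big\},

    then the unserved demand (backlog) is bounded by sup_{s ≥ 0} { α(s) - β(s) } and the service delay by the maximal horizontal deviation between α and β, which is how bounds on supply and demand translate into a service-reliability guarantee.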

  6. SEE Action Guide for States: Evaluation, Measurement, and Verification Frameworks - Guidance for Energy Efficiency Portfolios Funded by Utility Customers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Michael; Dietsch, Niko

    2018-01-01

    This guide describes frameworks for evaluation, measurement, and verification (EM&V) of utility customer–funded energy efficiency programs. The authors reviewed multiple frameworks across the United States and gathered input from experts to prepare this guide. This guide provides the reader with both the contents of an EM&V framework and the processes used to develop and update these frameworks.

  7. Energy Technology Investments: Maximizing Efficiency Through a Maritime Energy Portfolio Interface and Decision Aid

    DTIC Science & Technology

    2012-02-09

    Return on Investment (ROI) and Break Even Point (BEP) metrics are essential for determining whether an initiative would be worth pursuing. The record also outlines an Energy Decision Framework whose steps include improving energy efficiency, identifying inefficiencies, performing analyses, and examining technology candidates.

  8. Contrast-enhanced spectral mammography with a photon-counting detector.

    PubMed

    Fredenberg, Erik; Hemmendorff, Magnus; Cederström, Björn; Aslund, Magnus; Danielsson, Mats

    2010-05-01

    Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. The authors have investigated a photon-counting spectral imaging system with two energy bins for contrast-enhanced mammography. System optimization and the potential benefit compared to conventional non-energy-resolved absorption imaging was studied. A framework for system characterization was set up that included quantum and anatomical noise and a theoretical model of the system was benchmarked to phantom measurements. Optimal combination of the energy-resolved images corresponded approximately to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved close to 50% compared to absorption imaging in the phantom study. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, yielded only a minute improvement. In a simulation of a clinically more realistic case, spectral imaging was predicted to perform approximately 30% better than absorption imaging for an average glandularity breast with an average level of anatomical noise. For dense breast tissue and a high level of anatomical noise, however, a rise in detectability by a factor of 6 was predicted. Another approximately 70%-90% improvement was found to be within reach for an optimized system. Contrast-enhanced spectral mammography is feasible and beneficial with the current system, and there is room for additional improvements. Inclusion of anatomical noise is essential for optimizing spectral imaging systems.
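
    The "energy weighting" optimization mentioned above is conventionally expressed as follows (a textbook form, not necessarily the exact system model used here): for a weighted sum of bin counts, the squared signal-difference-to-noise ratio is

        \mathrm{SNR}^2 \;=\; \frac{\big( \sum_b w_b \, \Delta \bar{n}_b \big)^{2}}{\sum_b w_b^{2} \, \sigma_b^{2}}, \qquad w_b^{\star} \;\propto\; \frac{\Delta \bar{n}_b}{\sigma_b^{2}},

    where Δn̄_b is the mean contrast signal and σ_b² the count variance in bin b; "energy subtraction", by contrast, combines log-normalized bin images with weights chosen to cancel the anatomical background rather than to maximize quantum SNR, which is why it aligns with minimizing anatomical noise in the study above.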

  9. Dissecting jets and missing energy searches using $n$-body extended simplified models

    DOE PAGES

    Cohen, Timothy; Dolan, Matthew J.; El Hedri, Sonia; ...

    2016-08-04

    Simplified Models are a useful way to characterize new physics scenarios for the LHC. Particle decays are often represented using non-renormalizable operators that involve the minimal number of fields required by symmetries. Generalizing to a wider class of decay operators allows one to model a variety of final states. This approach, which we dub the $n$-body extension of Simplified Models, provides a unifying treatment of the signal phase space resulting from a variety of signals. In this paper, we present the first application of this framework in the context of multijet plus missing energy searches. The main result of this work is a global performance study with the goal of identifying which set of observables yields the best discriminating power against the largest Standard Model backgrounds for a wide range of signal jet multiplicities. Our analysis compares combinations of one, two and three variables, placing emphasis on the enhanced sensitivity gain resulting from non-trivial correlations. Utilizing boosted decision trees, we compare and classify the performance of missing energy, energy scale and energy structure observables. We demonstrate that including an observable from each of these three classes is required to achieve optimal performance. In conclusion, this work additionally serves to establish the utility of $n$-body extended Simplified Models as a diagnostic for unpacking the relative merits of different search strategies, thereby motivating their application to new physics signatures beyond jets and missing energy.

  10. Dirac δ -function potential in quasiposition representation of a minimal-length scenario

    NASA Astrophysics Data System (ADS)

    Gusson, M. F.; Gonçalves, A. Oakes O.; Francisco, R. O.; Furtado, R. G.; Fabris, J. C.; Nogueira, J. A.

    2018-03-01

    A minimal-length scenario can be considered as an effective description of quantum gravity effects. In quantum mechanics the introduction of a minimal length can be accomplished through a generalization of Heisenberg's uncertainty principle. In this scenario, state eigenvectors of the position operator are no longer physical states and the representation in momentum space or a representation in a quasiposition space must be used. In this work, we solve the Schroedinger equation with a Dirac δ-function potential in quasiposition space. We calculate the bound state energy and the coefficients of reflection and transmission for the scattering states. We show that leading corrections are of order of the minimal length (O(√β)) and the coefficients of reflection and transmission are no longer the same for the Dirac delta well and barrier as in ordinary quantum mechanics. Furthermore, assuming that the equivalence of the 1s state energy of the hydrogen atom and the bound state energy of the Dirac δ-function potential in the one-dimensional case is kept in a minimal-length scenario, we also find that the leading correction term for the ground state energy of the hydrogen atom is of the order of the minimal length and Δx_min ≤ 10^-25 m.
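
    The minimal-length scenario alluded to above is typically implemented through a Kempf-type modification of the commutator (the paper's precise convention may differ):

        [\hat{x}, \hat{p}] = i\hbar \, (1 + \beta \hat{p}^{2})
        \quad\Longrightarrow\quad
        \Delta x \, \Delta p \ge \frac{\hbar}{2} \big( 1 + \beta (\Delta p)^{2} \big)
        \quad\Longrightarrow\quad
        \Delta x_{\min} = \hbar \sqrt{\beta},

    which is why the leading corrections quoted above scale as √β, that is, with the minimal length itself.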

  11. Consolidation of hydrophobic transition criteria by using an approximate energy minimization approach.

    PubMed

    Patankar, Neelesh A

    2010-06-01

    Recent experimental work has successfully revealed pressure-induced transition from Cassie to Wenzel state on rough hydrophobic substrates. Formulas, based on geometric considerations and imposed pressure, have been developed as transition criteria. In the past, transition has also been considered as a process of overcoming the energy barrier between the Cassie and Wenzel states. A unified understanding of the various considerations of transition has not been apparent. To address this issue, in this work, we consolidate the transition criteria with a homogenized energy minimization approach. This approach decouples the problem of minimizing the energy to wet the rough substrate from the energy of the macroscopic drop. It is seen that the transition from Cassie to Wenzel state, due to depinning of the liquid-air interface, emerges from the approximate energy minimization approach if the pressure-volume energy associated with the impaled liquid in the roughness is included. This transition can be viewed as a process in which the work done by the pressure force is greater than the barrier due to the surface energy associated with wetting the roughness. It is argued that another transition mechanism, due to a sagging liquid-air interface that touches the bottom of the roughness grooves, is not typically relevant if the substrate roughness is designed such that the Cassie state is at lower energy compared to the Wenzel state.
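
    The depinning mechanism summarized above can be written as a simple energy balance: the liquid-air interface advances into the roughness when the pressure-volume work exceeds the surface-energy cost of wetting the sidewalls. For a roughness cavity of cross-sectional area A and wetted sidewall perimeter L on a surface with intrinsic contact angle θ_Y > 90°, one common sketch of the criterion (the exact geometric factors depend on the roughness model and are not taken from the paper) is

```latex
p\,A\,\mathrm{d}z \;>\; -\gamma\cos\theta_Y\,L\,\mathrm{d}z
\qquad\Longrightarrow\qquad
p_c \;\approx\; -\gamma\cos\theta_Y\,\frac{L}{A},
```

    so that a larger sidewall perimeter relative to cavity area raises the pressure needed to trigger the transition.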

  12. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
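
    For reference, the two energies being contrasted here are, in a standard continuous formulation (generic notation; the paper's discrete graph-cut construction is not reproduced),

```latex
E_{\mathrm{TV}}(u) \;=\; \int_{\Omega}\lvert\nabla u\rvert\,\mathrm{d}x \;+\; \frac{\lambda}{2}\int_{\Omega}(u-f)^{2}\,\mathrm{d}x,
\qquad
E_{\mathrm{elastica}}(u) \;=\; \int_{\Omega}\big(a + b\,\kappa^{2}\big)\lvert\nabla u\rvert\,\mathrm{d}x \;+\; \frac{\lambda}{2}\int_{\Omega}(u-f)^{2}\,\mathrm{d}x,
\qquad
\kappa = \nabla\!\cdot\!\frac{\nabla u}{\lvert\nabla u\rvert},
```

    where f is the noisy image, u the denoised image, κ the curvature of the level lines, and a, b, λ weighting parameters; the curvature term is what suppresses the staircasing seen with pure TV.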

  13. Canadian consensus conference on the development of training and practice standards in advanced minimally invasive surgery

    PubMed Central

    Birch, Daniel W.; Bonjer, H. Jaap; Crossley, Claire; Burnett, Gayle; de Gara, Chris; Gomes, Anthony; Hagen, John; Maciver, Angus G.; Mercer, C. Dale; Panton, O. Neely; Schlachta, Chris M.; Smith, Andy J.; Warnock, Garth L.

    2009-01-01

    Despite the complexities of minimally invasive surgery (MIS), a Canadian approach to training surgeons in this field does not exist. Whereas a limited number of surgeons are fellowship-trained in the specialty, guidelines are still clearly needed to implement advanced MIS. Leaders in the field of gastrointestinal surgery and MIS attended a consensus conference where they proposed a comprehensive mentoring program that may evolve into a framework for a national mentoring and training system. Leadership and commitment from national experts to define the most appropriate template for introducing new surgical techniques into practice is required. This national framework should also provide flexibility for truly novel procedures such as natural orifice translumenal endoscopic surgery. PMID:19680520

  14. A task scheduler framework for self-powered wireless sensors.

    PubMed

    Nordman, Mikael M

    2003-10-01

    The cost and inconvenience of cabling limit the widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making the development of wireless sensors feasible. However, for these sensors energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered as lasting a lifetime, thus saving the user from concerns related to energy management. The problem, however, is the unpredictability and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is handled reliably without violating the priority scheme that has been assigned to it.
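
    As an illustration of the kind of mechanism described (not the paper's actual implementation), the sketch below gates task priorities on the current stored-energy level using a small threshold table standing in for the state machine; all task names, costs, and thresholds are invented for the example.

```python
import heapq

TASKS = [  # (priority, name, energy_cost_mJ) -- lower number = higher priority
    (0, "report_alarm", 5.0),
    (1, "sample_sensor", 2.0),
    (2, "send_housekeeping", 8.0),
]

# Threshold table standing in for the state machine: the stored-energy level
# determines the lowest priority that is currently allowed to run.
STATES = [
    (20.0, 2),  # plenty of harvested energy: run everything
    (10.0, 1),  # medium: only priorities 0-1
    (0.0,  0),  # low: only priority-0 (critical) tasks
]

def allowed_priority(stored_mJ):
    for threshold, max_prio in STATES:
        if stored_mJ >= threshold:
            return max_prio
    return -1  # nothing may run

def run_cycle(stored_mJ, queue):
    """Pop and 'execute' queued tasks in priority order while energy permits."""
    while queue:
        prio, name, cost = queue[0]
        if prio > allowed_priority(stored_mJ) or cost > stored_mJ:
            break  # defer lower-priority or too-expensive work
        heapq.heappop(queue)
        stored_mJ -= cost
        print(f"ran {name} (priority {prio}), {stored_mJ:.1f} mJ left")
    return stored_mJ

queue = list(TASKS)
heapq.heapify(queue)
# With 14 mJ stored, the critical task runs; lower priorities are deferred
# once the remaining energy drops below their threshold.
run_cycle(14.0, queue)
```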

  15. Functional relationship between skull form and feeding mechanics in Sphenodon, and implications for diapsid skull development.

    PubMed

    Curtis, Neil; Jones, Marc E H; Shi, Junfen; O'Higgins, Paul; Evans, Susan E; Fagan, Michael J

    2011-01-01

    The vertebrate skull evolved to protect the brain and sense organs, but with the appearance of jaws and associated forces there was a remarkable structural diversification. This suggests that the evolution of skull form may be linked to these forces, but an important area of debate is whether bone in the skull is minimised with respect to these forces, or whether skulls are mechanically "over-designed" and constrained by phylogeny and development. Mechanical analysis of diapsid reptile skulls could shed light on this longstanding debate. Compared to those of mammals, the skulls of many extant and extinct diapsids comprise an open framework of fenestrae (window-like openings) separated by bony struts (e.g., lizards, tuatara, dinosaurs and crocodiles), a cranial form thought to be strongly linked to feeding forces. We investigated this link by utilising the powerful engineering approach of multibody dynamics analysis to predict the physiological forces acting on the skull of the diapsid reptile Sphenodon. We then ran a series of structural finite element analyses to assess the correlation between bone strain and skull form. With comprehensive loading we found that the distribution of peak von Mises strains was particularly uniform throughout the skull, although specific regions were dominated by tensile strains while others were dominated by compressive strains. Our analyses suggest that the frame-like skulls of diapsid reptiles are probably optimally formed (mechanically ideal: sufficient strength with the minimal amount of bone) with respect to functional forces; they are efficient in terms of having minimal bone volume, minimal weight, and also minimal energy demands in maintenance. © 2011 Curtis et al.

  16. Ground-state densities from the Rayleigh-Ritz variation principle and from density-functional theory.

    PubMed

    Kvaal, Simen; Helgaker, Trygve

    2015-11-14

    The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh-Ritz variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg-Kohn variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the Hohenberg-Kohn variation principle, necessary and sufficient conditions on such functionals are established to ensure that the Rayleigh-Ritz ground-state densities and the Hohenberg-Kohn ground-state densities are identical. We apply the results to molecular systems in the Born-Oppenheimer approximation. For any given potential v ∈ L(3/2)(ℝ(3)) + L(∞)(ℝ(3)), we establish a one-to-one correspondence between the mixed ground-state densities of the Rayleigh-Ritz variation principle and the mixed ground-state densities of the Hohenberg-Kohn variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the Rayleigh-Ritz variation principle and the pure ground-state densities obtained using the Hohenberg-Kohn variation principle with the Levy-Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.
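
    In standard constrained-search notation (used here only to fix ideas; the paper's precise functional-analytic setting is more general), the variation principle and the two functionals referred to are

```latex
E(v) \;=\; \inf_{\rho}\Big\{ F[\rho] + \int v(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r} \Big\},
\qquad
F_{\mathrm{Lieb}}[\rho] \;=\; \inf_{\Gamma \mapsto \rho} \operatorname{Tr}\!\big[\Gamma(\hat{T}+\hat{W})\big],
\qquad
F_{\mathrm{LL}}[\rho] \;=\; \inf_{\Psi \mapsto \rho} \langle\Psi\rvert \hat{T}+\hat{W}\lvert\Psi\rangle,
```

    with the Lieb functional searching over mixed states (density matrices Γ) and the Levy-Lieb functional over pure states Ψ that yield the density ρ.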

  17. Bacc to the Future: Why We Urgently Need a More Coherent and Exciting Framework for Learning

    ERIC Educational Resources Information Center

    Benn, Melissa

    2015-01-01

    Our current curriculum and qualifications framework is a "fragmented mess" according to many of those who teach in, and lead, our schools. How can we change it with minimal disruption, particularly after four years of often destructive meddling from above? A number of individuals and groups at school level have been working to develop a…

  18. A Lightweight Encryption Scheme Combined with Trust Management for Privacy-Preserving in Body Sensor Networks.

    PubMed

    Guo, Ping; Wang, Jin; Ji, Sai; Geng, Xue Hua; Xiong, Neal N

    2015-12-01

    With the pervasiveness of smart phones and the advance of wireless body sensor networks (BSN), mobile Healthcare (m-Healthcare), which extends the operation of healthcare providers into a pervasive environment for better health monitoring, has attracted considerable interest recently. However, the flourishing of m-Healthcare still faces many challenges, including information security and privacy preservation. In this paper, we propose a secure and privacy-preserving framework combined with multilevel trust management. In our scheme, smart phone resources including computing power and energy can be opportunistically gathered to process the computing-intensive PHI (personal health information) during an m-Healthcare emergency with minimal privacy disclosure. Specifically, to balance PHI privacy disclosure against the high reliability of PHI processing and transmission in an m-Healthcare emergency, we introduce an efficient lightweight encryption scheme for users whose trust level is low, based on mixed cipher algorithms and pairs of plaintexts and ciphertexts, and we allow a medical user to decide who can participate in the opportunistic computing to assist in processing his overwhelming PHI data. Detailed security analysis and simulations show that the proposed framework can efficiently achieve user-centric privacy protection in the m-Healthcare system.

  19. Increasing organizational energy conservation behaviors: Comparing the theory of planned behavior and reasons theory for identifying specific motivational factors to target for change

    NASA Astrophysics Data System (ADS)

    Finlinson, Scott Michael

    Social scientists frequently assess factors thought to underlie behavior for the purpose of designing behavioral change interventions. Researchers commonly identify these factors by examining relationships between specific variables and the focal behaviors being investigated. Variables with the strongest relationships to the focal behavior are then assumed to be the most influential determinants of that behavior, and therefore often become the targets for change in a behavioral change intervention. In the current proposal, multiple methods are used to compare the effectiveness of two theoretical frameworks for identifying influential motivational factors. Assessing the relative influence of all factors and sets of factors for driving behavior should clarify which framework and methodology is the most promising for identifying effective change targets. Results indicated each methodology adequately predicted the three focal behaviors examined. However, the reasons theory approach was superior for predicting factor influence ratings compared to the TpB approach. While common method variance contamination had minimal impact on the results or conclusions derived from the present study's findings, there were substantial differences in conclusions depending on the questionnaire design used to collect the data. Examples of applied uses of the present study are discussed.

  20. A Simulation Framework for Battery Cell Impact Safety Modeling Using LS-DYNA

    DOE PAGES

    Marcicki, James; Zhu, Min; Bartlett, Alexander; ...

    2017-02-04

    The development process of electrified vehicles can benefit significantly from computer-aided engineering tools that predict the multiphysics response of batteries during abusive events. A coupled structural, electrical, electrochemical, and thermal model framework has been developed within the commercially available LS-DYNA software. The finite element model leverages a three-dimensional mesh structure that fully resolves the unit cell components. The mechanical solver predicts the distributed stress and strain response with failure thresholds leading to the onset of an internal short circuit. In this implementation, an arbitrary compressive strain criterion is applied locally to each unit cell. A spatially distributed equivalent circuit model provides an empirical representation of the electrochemical response with minimal computational complexity. The thermal model provides state information to index the electrical model parameters, while simultaneously accepting irreversible and reversible sources of heat generation. The spatially distributed models of the electrical and thermal dynamics allow for the localization of current density and corresponding temperature response. The ability to predict the distributed thermal response of the cell as its stored energy is completely discharged through the short circuit enables an engineering safety assessment. A parametric analysis of an exemplary model is used to demonstrate the simulation capabilities.
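
    The "equivalent circuit model" mentioned above is an empirical stand-in for the electrochemistry; a common single-RC (Thevenin) form is sketched below with illustrative parameters that are not the paper's, simply to show how terminal voltage follows from open-circuit voltage, ohmic drop, and one RC branch.

```python
import numpy as np

def ocv(soc):
    return 3.0 + 1.2 * soc             # crude open-circuit-voltage curve [V]

R0, R1, C1  = 0.02, 0.015, 2000.0      # ohmic resistance and RC branch (assumed)
capacity_As = 5.0 * 3600.0             # 5 Ah cell (assumed)

def terminal_voltage(current_A, dt=1.0, t_end=60.0, soc0=0.9):
    """Integrate SOC and the RC-branch voltage under a constant discharge current."""
    soc, v1 = soc0, 0.0
    for _ in np.arange(0.0, t_end, dt):
        soc -= current_A * dt / capacity_As            # coulomb counting
        v1  += dt * (current_A / C1 - v1 / (R1 * C1))  # RC branch dynamics
    return ocv(soc) - current_A * R0 - v1              # V = OCV - I*R0 - V1

# A hard internal short can be caricatured as a large sustained discharge current:
print(f"terminal voltage after 1 min at 50 A: {terminal_voltage(50.0):.2f} V")
```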

  1. MSSM A-funnel and the galactic center excess: prospects for the LHC and direct detection experiments

    DOE PAGES

    Freese, Katherine; López, Alejandro; Shah, Nausheen R.; ...

    2016-04-11

    The pseudoscalar resonance or “A-funnel” in the Minimal Supersymmetric Standard Model (MSSM) is a widely studied framework for explaining dark matter that can yield interesting indirect detection and collider signals. The well-known Galactic Center excess (GCE) at GeV energies in the gamma ray spectrum, consistent with annihilation of a ≲ 40 GeV dark matter particle, has more recently been shown to be compatible with significantly heavier masses following reanalysis of the background. For this study, we explore the LHC and direct detection implications of interpreting the GCE in this extended mass window within the MSSM A-funnel framework. We find that compatibility with relic density, signal strength, collider constraints, and Higgs data can be simultaneously achieved with appropriate parameter choices. The compatible regions give very sharp predictions of 200-600 GeV CP-odd/even Higgs bosons at low tan β at the LHC and spin-independent cross sections ≈ 10⁻¹¹ pb at direct detection experiments. Finally, regardless of consistency with the GCE, this study serves as a useful template of the strong correlations between indirect, direct, and LHC signatures of the MSSM A-funnel region.

  2. 10 CFR 20.1406 - Minimization of contamination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Minimization of contamination. 20.1406 Section 20.1406 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Radiological Criteria for... subsurface, in accordance with the existing radiation protection requirements in subpart B and radiological...

  3. 10 CFR 20.1406 - Minimization of contamination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Minimization of contamination. 20.1406 Section 20.1406 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Radiological Criteria for... subsurface, in accordance with the existing radiation protection requirements in subpart B and radiological...

  4. Minimally flavored colored scalar in and the mass matrices constraints

    NASA Astrophysics Data System (ADS)

    Doršner, Ilja; Fajfer, Svjetlana; Košnik, Nejc; Nišandžić, Ivan

    2013-11-01

    The presence of a colored scalar that is a weak doublet with fractional electric charges of | Q| = 2 /3 and | Q| = 5 /3 with mass below 1 TeV can provide an explanation of the observed branching ratios in decays. The required combination of scalar and tensor operators in the effective Hamiltonian for is generated through the t-channel exchange. We focus on a scenario with a minimal set of Yukawa couplings that can address a semitauonic puzzle and show that its resolution puts a nontrivial bound on the product of the scalar couplings to and . We also derive additional constraints posed by , muon magnetic moment, lepton flavor violating decays μ → eγ, τ → μγ, τ → eγ, and τ electric dipole moment. The minimal set of Yukawa couplings is not only compatible with the mass generation in an SU(5) unification framework, a natural environment for colored scalars, but specifies all matter mixing parameters except for one angle in the up-type quark sector. We accordingly spell out predictions for the proton decay signatures through gauge boson exchange and show that p → π0 e + is suppressed with respect to and even p → K 0 e + in some parts of available parameter space. Impact of the colored scalar embedding in 45-dimensional representation of SU(5) on low-energy phenomenology is also presented. Finally, we make predictions for rare top and charm decays where presence of this scalar can be tested independently.

  5. Minimizing energy dissipation of matrix multiplication kernel on Virtex-II

    NASA Astrophysics Data System (ADS)

    Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook

    2002-07-01

    In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total energy (system-wide energy) dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.
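
    The flavor of the "system-wide energy" trade-off can be conveyed by a toy model of the form E = Σ_i P_i·t_i, in which larger on-chip storage reduces off-chip I/O energy but adds its own cost. The numbers below are invented for illustration; they are not the paper's model or Virtex-II figures.

```python
# Toy system-wide energy model: off-chip I/O, multipliers, and on-chip storage.
# All per-operation energies are assumptions, not measured Virtex-II values.
def energy_uniprocessor(n, cache_words):
    io_accesses = 2 * n ** 3 / max(cache_words, 1) + n ** 2  # blocked-algorithm I/O
    mults       = n ** 3
    E_io    = 5e-9   * io_accesses        # J per off-chip access (assumed)
    E_mult  = 0.5e-9 * mults              # J per multiply-accumulate (assumed)
    E_cache = 0.1e-9 * cache_words * n    # storage cost growing with cache size
    return E_io + E_mult + E_cache

n = 6
for cache in (4, 16, 64, 256):
    print(f"cache={cache:4d} words  E={energy_uniprocessor(n, cache) * 1e9:.0f} nJ")
# The minimum over cache sizes illustrates the kind of parameter sweep such a
# high-level model enables, not an actual Virtex-II result.
```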

  6. On the Minimal Length Uncertainty Relation and the Foundations of String Theory

    DOE PAGES

    Chang, Lay Nam; Lewis, Zachary; Minic, Djordje; ...

    2011-01-01

    We review our work on the minimal length uncertainty relation as suggested by perturbative string theory. We discuss simple phenomenological implications of the minimal length uncertainty relation and then argue that the combination of the principles of quantum theory and general relativity allow for a dynamical energy-momentum space. We discuss the implication of this for the problem of vacuum energy and the foundations of nonperturbative string theory.

  7. Search for the supersymmetric partner of the top quark in ppbar collisions at sqrt(s) = 1.96 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.; /Helsinki Inst. of Phys.; Alvarez Gonzalez, B.

    We present a search for the lightest supersymmetric partner of the top quark in proton-antiproton collisions at a center-of-mass energy √s = 1.96 TeV. This search was conducted within the framework of the R-parity conserving minimal supersymmetric extension of the standard model, assuming the stop decays dominantly to a lepton, a sneutrino, and a bottom quark. We searched for events with two oppositely-charged leptons, at least one jet, and missing transverse energy in a data sample corresponding to an integrated luminosity of 1 fb⁻¹ collected by the CDF experiment. No significant evidence of a stop quark signal was found. Exclusion limits at 95% confidence level in the stop quark versus sneutrino mass plane are set. Stop quark masses up to 180 GeV/c² are excluded for sneutrino masses around 45 GeV/c², and sneutrino masses up to 116 GeV/c² are excluded for stop quark masses around 150 GeV/c².

  8. Health costs of reproduction are minimal despite high fertility, mortality and subsistence lifestyle.

    PubMed

    Gurven, Michael; Costa, Megan; Ben Trumble; Stieglitz, Jonathan; Beheim, Bret; Eid Rodriguez, Daniel; Hooper, Paul L; Kaplan, Hillard

    2016-07-20

    Women exhibit greater morbidity than men despite higher life expectancy. An evolutionary life history framework predicts that energy invested in reproduction trades off against investments in maintenance and survival. Direct costs of reproduction may therefore contribute to higher morbidity, especially for women given their greater direct energetic contributions to reproduction. We explore multiple indicators of somatic condition among Tsimane forager-horticulturalist women (Total Fertility Rate = 9.1; n = 592 aged 15-44 years, n = 277 aged 45+). We test whether cumulative live births and the pace of reproduction are associated with nutritional status and immune function using longitudinal data spanning 10 years. Higher parity and faster reproductive pace are associated with lower nutritional status (indicated by weight, body mass index, body fat) in a cross-section, but longitudinal analyses show improvements in women's nutritional status with age. Biomarkers of immune function and anemia vary little with parity or pace of reproduction. Our findings demonstrate that even under energy-limited and infectious conditions, women are buffered from the potential depleting effects of rapid reproduction and compound offspring dependency characteristic of human life histories.

  9. Supernatural supersymmetry: Phenomenological implications of anomaly-mediated supersymmetry breaking

    NASA Astrophysics Data System (ADS)

    Feng, Jonathan L.; Moroi, Takeo

    2000-05-01

    We discuss the phenomenology of supersymmetric models in which supersymmetry breaking terms are induced by the super-Weyl anomaly. Such a scenario is envisioned to arise when supersymmetry breaking takes place in another world, i.e., on another brane. We review the anomaly-mediated framework and study in detail the minimal anomaly-mediated model parametrized by only 3+1 parameters: Maux, m0, tan β, and sgn(μ). The renormalization group equations exhibit a novel "focus point" (as opposed to fixed point) behavior, which allows squark and slepton masses far above their usual naturalness bounds. We present the superparticle spectrum and highlight several implications for high energy colliders. Three lightest supersymmetric particle (LSP) candidates exist: the W-ino, the stau, and the tau sneutrino. For the W-ino LSP scenario, light W-ino triplets with the smallest possible mass splittings are preferred; such W-inos are within reach of run II Fermilab Tevatron searches. Finally, we study a variety of sensitive low energy probes, including b → sγ, the anomalous magnetic moment of the muon, and the electric dipole moments of the electron and neutron.

  10. Life cycle study of different constructive solutions for building enclosures.

    PubMed

    Garcia-Ceballos, Luz; de Andres-Díaz, Jose Ramon; Contreras-Lopez, Miguel A

    2018-06-01

    The construction sector must advance in a more sustainable way, and to achieve this goal, the application of global methodologies is needed. These methodologies should take into account all life stages of a building: planning, design, construction, use and demolition. The quantity and variety of the materials used in building construction strongly influence the buildings' environmental and energy impacts. Life Cycle Assessment offers a standardized framework to evaluate the environmental loads of a product, process or activity. This work aims to demonstrate the feasibility of using Life Cycle Assessment (LCA) to select facilities in the construction sector that minimize environmental and energy impacts. To facilitate the understanding of the proposed methodology, a comparative LCA is performed to determine the type and thickness of thermal insulating material in a double-sheet ceramic façade that reduce the environmental impacts associated with the enclosure. The three enclosure types most commonly used in the city of Malaga (Spain) were selected for this study. The results show the adequacy of the procedure used. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness, and naturalness. This paper describes the design of a compact, wearable, low-power HCI device for gesture recognition. The system combines multi-modal sensing signals, a vision signal and a motion signal, and is equipped with a depth camera and a motion sensor. After tight integration, the dimensions (40 mm × 30 mm) and structure are compact and portable. The system is built on a modular, layered framework, which supports real-time collection (60 fps), processing, and transmission via synchronous fusion of asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize energy consumption, the system uses low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes algorithms using the motion sensor. To test the device's function and performance, a gesture recognition algorithm was applied to the system. The results show that overall energy consumption can be as low as 0.5 W.

  12. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D).

    PubMed

    van de Streek, Jacco; Neumann, Marcus A

    2014-12-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.
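
    The RMSCD used as the yardstick above is simply the root-mean-square displacement of matched (typically non-H) atoms between the experimental and the energy-minimized structures; a minimal sketch, ignoring the symmetry and overlay handling a real comparison needs, is

```python
import numpy as np

def rmscd(xyz_experimental, xyz_minimized):
    """Root mean square Cartesian displacement between matched atom lists (Å).

    Minimal sketch: assumes both structures are expressed in the same Cartesian
    frame and list atoms in the same order; a full comparison would also handle
    space-group symmetry and any cell relaxation.
    """
    d = np.asarray(xyz_experimental) - np.asarray(xyz_minimized)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

expt = [[0.00, 0.00, 0.00], [1.40, 0.00, 0.00], [2.10, 1.20, 0.00]]  # toy coords
mini = [[0.05, 0.02, 0.00], [1.38, 0.06, 0.03], [2.05, 1.15, 0.08]]
print(f"RMSCD = {rmscd(expt, mini):.3f} Å")  # flag XRPD structures above ~0.35 Å
```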

  13. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D)

    PubMed Central

    van de Streek, Jacco; Neumann, Marcus A.

    2014-01-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom. PMID:25449625

  14. Overview of legislation on sewage sludge management in developed countries worldwide.

    PubMed

    Christodoulou, A; Stamatelatou, K

    2016-01-01

    The need to apply innovative technologies for maximizing the efficiency and minimizing the carbon footprint of sewage treatment plants has upgraded sewage sludge management to a highly sophisticated research and development sector. Sewage sludge cannot be regarded solely as 'waste'; it is a renewable resource for energy and material recovery. From this perspective, legislation on sewage sludge management tends to incorporate issues related to environmental protection, public health, climate change impacts and socio-economic benefits. This paper reviews the existing legislative frameworks and policies on sewage sludge management in various countries, highlighting the common ground as well as the different priorities in all cases studied. More specifically, the key features of legislation regarding sludge management in developed countries such as the USA, Japan, Australia, New Zealand and the European Union (EU27) are discussed.

  15. From Cylindrical to Stretching Ridges and Wrinkles in Twisted Ribbons

    NASA Astrophysics Data System (ADS)

    Pham Dinh, Huy; Démery, Vincent; Davidovitch, Benny; Brau, Fabian; Damman, Pascal

    2016-09-01

    Twisted ribbons under tension exhibit a remarkably rich morphology, from smooth and wrinkled helicoids, to cylindrical or faceted patterns. This complexity emanates from the instability of the natural, helicoidal symmetry of the system, which generates both longitudinal and transverse stresses, thereby leading to buckling of the ribbon. Here, we focus on the tessellation patterns made of triangular facets. Our experimental observations are described within an "asymptotic isometry" approach that brings together geometry and elasticity. The geometry consists of parametrized families of surfaces, isometric to the undeformed ribbon in the singular limit of vanishing thickness and tensile load. The energy, whose minimization selects the favored structure among those families, is governed by the tensile work and bending cost of the pattern. This framework describes the coexistence lines in a morphological phase diagram, and determines the domain of existence of faceted structures.

  16. Lorentz violation, gravitoelectromagnetism and Bhabha scattering at finite temperature

    NASA Astrophysics Data System (ADS)

    Santos, A. F.; Khanna, Faqir C.

    2018-04-01

    Gravitoelectromagnetism (GEM) is an approach to the gravitational field that is described using a formulation and terminology similar to those of electromagnetism. The Lorentz violation is considered in the formulation of GEM that is covariant in its form. In practice, such a small violation of the Lorentz symmetry may be expected in a unified theory at very high energy. In this paper, a non-minimal coupling term, which exhibits Lorentz violation, is added as a new term in the covariant form. The differential cross-section for Bhabha scattering in the GEM framework at finite temperature, including Lorentz violation, is calculated. The Thermo Field Dynamics (TFD) formalism is used to calculate the total differential cross-section at finite temperature. The contribution due to Lorentz violation is isolated from the total cross-section. It is found to be small in magnitude.

  17. Integrating pro-environmental behavior with transportation network modeling: User and system level strategies, implementation, and evaluation

    NASA Astrophysics Data System (ADS)

    Aziz, H. M. Abdul

    Personal transport is a leading contributor to fossil fuel consumption and greenhouse gas (GHG) emissions in the U.S. The U.S. Energy Information Administration (EIA) reports that light-duty vehicles (LDV) were responsible for 61% of all transportation-related energy consumption in 2012, which is equivalent to 8.4 million barrels of oil (fossil fuel) per day. The carbon content of fossil fuels is the primary source of the GHG emissions linked to climate change. Evidently, it is high time to develop actionable and innovative strategies to reduce fuel consumption and GHG emissions from road transportation networks. This dissertation integrates the broader goal of minimizing energy and emissions into the transportation planning process using novel systems modeling approaches. This research aims to find, investigate, and evaluate strategies that minimize carbon-based fuel consumption and emissions for a transportation network. We propose user and system level strategies that can influence travel decisions and can reinforce pro-environmental attitudes of road users. Further, we develop strategies that system operators can implement to optimize traffic operations with an emissions minimization goal. To complete the framework, we develop an integrated traffic-emissions (EPA-MOVES) simulation framework that can assess the effectiveness of the strategies with computational efficiency and reasonable accuracy. The dissertation begins with exploring the trade-off between emissions and travel time in the context of daily travel decisions and its heterogeneous nature. Data are collected from a web-based survey, and the trade-off values indicating the average additional travel minutes a person is willing to accept for reducing a pound of GHG emissions are estimated from random parameter models. Results indicate different trade-off values for male and female groups. Further, participants from high-income households are found to have higher trade-off values compared with other groups. Next, we propose a personal mobility carbon allowance (PMCA) scheme to reduce emissions from personal travel. PMCA is a market-based scheme that allocates carbon credits to users at no cost based on the emissions reduction goal of the system. Users can spend carbon credits for travel, and a marketplace exists where users can buy or sell credits. This dissertation addresses two primary dimensions: the change in travel behavior of the users and the impact at network level in terms of travel time and emissions when PMCA is implemented. To understand this process, a real-time experimental game tool is developed where players are asked to make travel decisions within the carbon budget set by PMCA and they are allowed to trade carbon credits in a market modeled as a double auction game. Random parameter models are estimated to examine the impact of PMCA on short-term travel decisions. Further, to assess the impact at system level, a multi-class dynamic user equilibrium model is formulated that captures the travel behavior under the PMCA scheme. The equivalent variational inequality problem is solved using a projection method. Results indicate that the PMCA scheme is able to reduce GHG emissions from transportation networks. Individuals with a high value of travel time (VOTT) are less sensitive to the PMCA scheme in the context of work trips. High- and medium-income users are more likely to choose non-work trips with lower carbon cost (higher travel time) to save carbon credits for work trips.
Next, we focus on strategies from the perspective of system operators in transportation networks. Learning-based signal control schemes are developed that can reduce emissions from signalized urban networks. The algorithms are implemented and tested in the VISSIM microsimulator. Finally, an integrated emissions-traffic simulator framework is outlined that can be used to evaluate the effectiveness of the strategies. The integrated framework uses MOVES2010b as the emissions simulator. To estimate emissions efficiently, we propose a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the link driving schedules for MOVES2010b. Test results using data from a five-intersection corridor show that the HC-DTW technique can significantly reduce emissions estimation time without compromising accuracy. The benefits are found to be most significant when the level of congestion variation is high. In addition to finding novel strategies for reducing emissions from transportation networks, this dissertation has broader impacts on behavior-based energy policy design and transportation network modeling research. The trade-off values can be a useful indicator to identify which policies are most effective at reinforcing pro-environmental travel choices. For instance, the model can estimate the distribution of the trade-off between emissions and travel time, and provide insight into the effectiveness of policies for New York City if we are able to collect data to construct a representative sample. The probability of route choice decisions varies across population groups and trip contexts. The probability as a function of travel and demographic attributes can be used as behavior rules for agents in an agent-based traffic simulation. Finally, the dynamic user equilibrium-based network model provides a general framework for energy policies such as carbon taxes, tradable permits, and emissions credit systems.

  18. Real-Time Load-Side Control of Electric Power Systems

    NASA Astrophysics Data System (ADS)

    Zhao, Changhong

    Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems. (1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control. (2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
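
    The optimal load control (OLC) problem described above can be written, in one common form (notation assumed here, not copied from the thesis), as

```latex
\min_{\underline{d}_i \le d_i \le \overline{d}_i}\ \sum_{i} c_i(d_i)
\qquad \text{subject to} \qquad \sum_{i} d_i \;=\; P_m,
```

    where d_i is the power deviation of load i, c_i its disutility, and P_m the supply-demand imbalance to be absorbed; the frequency deviation plays the role of the Lagrange multiplier of the balance constraint, which is what permits a decentralized, frequency-measurement-based implementation.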

  19. Energy-Water Microgrid Opportunity Analysis at the University of Arizona's Biosphere 2 Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daw, Jennifer A; Kandt, Alicen J; Macknick, Jordan E

    Microgrids provide reliable and cost-effective energy services in a variety of conditions and locations. There has been minimal effort invested in developing energy-water microgrids that demonstrate the feasibility and leverage synergies of operating renewable energy and water systems in a coordinated framework. Water systems can be operated in ways to provide ancillary services to the electrical grid and renewable energy can be utilized to power water-related infrastructure, but the potential for co-managed systems has not yet been quantified or fully characterized. Energy-water microgrids could be a promising solution to improve energy and water resource management for islands, rural communities, distributed generation, Defense operations, and many parts of the world lacking critical infrastructure. NREL and the University of Arizona have been jointly researching energy-water microgrid opportunities at the University's Biosphere 2 (B2) research facility. B2 is an ideal case study for an energy-water microgrid test site, given its size, its unique mission and operations, the criticality of water and energy infrastructure, and its ability to operate connected to or disconnected from the local electrical grid. Moreover, the B2 is a premier facility for undertaking agricultural research, providing an excellent opportunity to evaluate connections and tradeoffs at the food-energy-water nexus. In this study, NREL used the B2 facility as a case study for an energy-water microgrid test site, with the potential to catalyze future energy-water system integration research. The study identified opportunities for energy and water efficiency and estimated the sizes of renewable energy and storage systems required to meet remaining loads in a microgrid, identified dispatchable loads in the water system, and laid the foundation for an in-depth energy-water microgrid analysis. The foundational work performed at B2 serves as a model that can be built upon for identifying relevant energy-water microgrid data, analytical requirements, and operational challenges associated with development of future energy-water microgrids.

  20. 75 FR 67637 - Energy Conservation Program for Certain Commercial and Industrial Equipment: Framework Document...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ... Industrial Electric Motors AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy... electric motors. The comment period is extended to November 24, 2010. DATES: The comment period for the framework document for certain commercial and industrial electric motors, referenced in the notice of public...

  1. 2016-2020 Strategic Plan and Implementing Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-11-01

    The 2016-2020 Strategic Plan and Implementing Framework from the Office of Energy Efficiency and Renewable Energy (EERE) is the blueprint for launching the nation’s leadership in the global clean energy economy. This document will guide the organization to build on decades of progress in powering our nation from clean, affordable and secure energy.

  2. State Energy Resilience Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, J.; Finster, M.; Pillon, J.

    2016-12-01

    The energy sector infrastructure’s high degree of interconnectedness with other critical infrastructure systems can lead to cascading and escalating failures that can strongly affect both economic and social activities. The operational goal is to maintain energy availability for customers and consumers. For this body of work, a State Energy Resilience Framework in five steps is proposed.

  3. 10 CFR 20.1406 - Minimization of contamination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Minimization of contamination. 20.1406 Section 20.1406 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Radiological Criteria for..., including the subsurface, in accordance with the existing radiation protection requirements in Subpart B and...

  4. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
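
    In matrix form, the two energy principles investigated are, in standard notation (a sketch, not the report's exact formulation),

```latex
\text{displacement method:}\quad
\min_{\mathbf{u}}\ \Pi(\mathbf{u}) \;=\; \tfrac{1}{2}\,\mathbf{u}^{\mathsf T}\mathbf{K}\,\mathbf{u} \;-\; \mathbf{f}^{\mathsf T}\mathbf{u},
\qquad
\text{force method:}\quad
\min_{\mathbf{F}}\ \tfrac{1}{2}\,\mathbf{F}^{\mathsf T}\mathbf{G}\,\mathbf{F}
\quad \text{s.t.} \quad \mathbf{B}\,\mathbf{F} = \mathbf{f},
```

    where K is the stiffness matrix, G the member flexibility matrix, B the equilibrium matrix, u the displacements, F the member forces, and f the applied loads; setting the gradient of Π to zero recovers the usual system K u = f, while the force-method formulation is the constrained minimization of complementary strain energy referred to above.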

  5. Many-objective Groundwater Monitoring Network Design Using Bias-Aware Ensemble Kalman Filtering and Evolutionary Optimization

    NASA Astrophysics Data System (ADS)

    Kollat, J. B.; Reed, P. M.

    2009-12-01

    This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. The results demonstrate that the many-objective problem formulation provides a tremendous amount of information for decision makers. Specifically our many-objective analysis highlights the limitations and potentially negative design consequences of traditional single and two-objective problem formulations. These consequences become apparent through visual exploration of high-dimensional tradeoffs and the identification of regions with interesting compromise solutions. The prediction characteristics of these compromise designs are explored in detail, as well as their implications for subsequent design decisions in both space and time.
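
    The EnKF conditioning step at the core of the framework is, in its standard stochastic form (common notation; the bias-aware variant additionally augments the state with an estimated bias term, as the abstract notes),

```latex
\mathbf{x}^{a}_{j} \;=\; \mathbf{x}^{f}_{j} \;+\; \mathbf{K}\big(\mathbf{y} + \boldsymbol{\epsilon}_{j} - \mathbf{H}\,\mathbf{x}^{f}_{j}\big),
\qquad
\mathbf{K} \;=\; \mathbf{P}^{f}\mathbf{H}^{\mathsf T}\big(\mathbf{H}\,\mathbf{P}^{f}\mathbf{H}^{\mathsf T} + \mathbf{R}\big)^{-1},
```

    where x^f_j and x^a_j are the forecast and analysis states of ensemble member j, P^f the ensemble sample covariance, H the observation operator, R the measurement-error covariance, y the observations, and ε_j perturbed-observation noise.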

  6. Bi-scalar modified gravity and cosmology with conformal invariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saridakis, Emmanuel N.; Tsoukalas, Minas, E-mail: Emmanuel_Saridakis@baylor.edu, E-mail: minasts@central.ntua.gr

    2016-04-01

    We investigate the cosmological applications of a bi-scalar modified gravity that exhibits partial conformal invariance, which could become full conformal invariance in the absence of the usual Einstein-Hilbert term and introducing additionally either the Weyl derivative or properly rescaled fields. Such a theory is constructed by considering the action of a non-minimally conformally-coupled scalar field, and adding a second scalar allowing for a nonminimal derivative coupling with the Einstein tensor and the energy-momentum tensor of the first field. In a cosmological framework we obtain an effective dark-energy sector constituted from both scalars. In the absence of an explicit matter sector we extract analytical solutions, which for some parameter regions correspond to an effective matter era and/or to an effective radiation era, thus the two scalars give rise to 'mimetic dark matter' or to 'dark radiation' respectively. In the case where an explicit matter sector is included we obtain a cosmological evolution in agreement with observations, that is a transition from matter to dark energy era, with the onset of cosmic acceleration. Furthermore, for particular parameter regions, the effective dark-energy equation of state can transit to the phantom regime at late times. These behaviors reveal the capabilities of the theory, since they arise purely from the novel, bi-scalar construction and the involved couplings between the two fields.

  7. Complications of laryngeal framework surgery (phonosurgery).

    PubMed

    Tucker, H M; Wanamaker, J; Trott, M; Hicks, D

    1993-05-01

    The rising popularity of surgery involving the laryngeal framework (surgical medialization of immobile vocal folds, vocal fold tightening, pitch variation, etc.) has resulted in increasing case experience. Little has appeared in the literature regarding complications or long-term results of this type of surgery. Several years' experience in a major referral center with various types of laryngeal framework surgery has led to a small number of complications. These have included late extrusion of the prosthesis and delayed hemorrhage. A review of these complications and recommendations for modification of technique to minimize them in the future are discussed.

  8. Boom Minimization Framework for Supersonic Aircraft Using CFD Analysis

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Rallabhandi, Sriram K.

    2010-01-01

    A new framework is presented for shape optimization using analytical shape functions and high-fidelity computational fluid dynamics (CFD) via Cart3D. The focus of the paper is the system-level integration of several key enabling analysis tools and automation methods to perform shape optimization and reduce sonic boom footprint. A boom mitigation case study subject to performance, stability and geometrical requirements is presented to demonstrate a subset of the capabilities of the framework. Lastly, a design space exploration is carried out to assess the key parameters and constraints driving the design.

  9. Energy performance evaluation of AAC

    NASA Astrophysics Data System (ADS)

    Aybek, Hulya

The U.S. building industry constitutes the largest consumer of energy (i.e., electricity, natural gas, petroleum) in the world. The building sector uses almost 41 percent of the primary energy and approximately 72 percent of the available electricity in the United States. As global energy-generating resources are being depleted at exponential rates, the amount of energy consumed and wasted cannot be ignored. Professionals concerned about the environment have placed a high priority on finding solutions that reduce energy consumption while maintaining occupant comfort. Sustainable design and the judicious combination of building materials comprise one solution to this problem. A future including sustainable energy may result from using energy simulation software to accurately estimate energy consumption and from applying building materials that achieve the potential results derived through simulation analysis. Energy-modeling tools assist professionals with making informed decisions about energy performance during the early planning phases of a design project, such as determining the most advantageous combination of building materials, choosing mechanical systems, and determining building orientation on the site. By implementing energy simulation software to estimate the effect of these factors on the energy consumption of a building, designers can make adjustments to their designs during the design phase when the effect on cost is minimal. The primary objective of this research consisted of identifying a method with which to properly select energy-efficient building materials and involved evaluating the potential of these materials to earn LEED credits when properly applied to a structure. In addition, this objective included establishing a framework that provides suggestions for improvements to currently available simulation software that enhance the viability of the estimates concerning energy efficiency and the achievements of LEED credits. The primary objective was accomplished by conducting several simulation models to determine the relative energy efficiency of wood-framed, metal-framed, and Autoclaved Aerated Concrete (AAC) wall structures for both commercial and residential buildings.

  10. Energy conservation through sealing technology

    NASA Technical Reports Server (NTRS)

    Stair, W. K.; Ludwig, L. P.

    1978-01-01

    Improvements in fluid film sealing resulting from a proposed research program could lead to an annual energy saving, on a national basis, equivalent to about 37 million bbl of oil or 0.3% of the total U.S. energy consumption. Further, the application of known sealing technology can result in an annual saving of an additional 10 million bbl of oil. The energy saving would be accomplished by reduction in process heat energy loss, reduction of frictional energy generated, and minimization of energy required to operate ancillary equipment associated with the seal system. In addition to energy saving, cost effectiveness is further enhanced by reduction in maintenance and in minimization of equipment for collecting leakage and for meeting environmental pollution standards.

  11. A Scatter-Based Prototype Framework and Multi-Class Extension of Support Vector Machines

    PubMed Central

    Jenssen, Robert; Kloft, Marius; Zien, Alexander; Sonnenburg, Sören; Müller, Klaus-Robert

    2012-01-01

We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm that is on par with its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results. PMID:23118845

  12. Developing an Analytical Framework for Argumentation on Energy Consumption Issues

    ERIC Educational Resources Information Center

    Jin, Hui; Mehl, Cathy E.; Lan, Deborah H.

    2015-01-01

    In this study, we aimed to develop a framework for analyzing the argumentation practice of high school students and high school graduates. We developed the framework in a specific context--how energy consumption activities such as changing diet, converting forests into farmlands, and choosing transportation modes affect the carbon cycle. The…

  13. Food waste and the food-energy-water nexus: A review of food waste management alternatives.

    PubMed

    Kibler, Kelly M; Reinhart, Debra; Hawkins, Christopher; Motlagh, Amir Mohaghegh; Wright, James

    2018-04-01

    Throughout the world, much food produced is wasted. The resource impact of producing wasted food is substantial; however, little is known about the energy and water consumed in managing food waste after it has been disposed. Herein, we characterize food waste within the Food-Energy-Water (FEW) nexus and parse the differential FEW effects of producing uneaten food and managing food loss and waste. We find that various food waste management options, such as waste prevention, landfilling, composting, anaerobic digestion, and incineration, present variable pathways for FEW impacts and opportunities. Furthermore, comprehensive sustainable management of food waste will involve varied mechanisms and actors at multiple levels of governance and at the level of individual consumers. To address the complex food waste problem, we therefore propose a "food-waste-systems" approach to optimize resources within the FEW nexus. Such a framework may be applied to devise strategies that, for instance, minimize the amount of edible food that is wasted, foster efficient use of energy and water in the food production process, and simultaneously reduce pollution externalities and create opportunities from recycled energy and nutrients. Characterization of FEW nexus impacts of wasted food, including descriptions of dynamic feedback behaviors, presents a significant research gap and a priority for future work. Large-scale decision making requires more complete understanding of food waste and its management within the FEW nexus, particularly regarding post-disposal impacts related to water. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. The design of a wireless batteryless biflash installation with high power LEDs

    NASA Astrophysics Data System (ADS)

    Cappelle, J.; De Geest, W.; Hanselaer, P.

    2011-05-01

Adding flashlights at crosswalks may make these weak traffic points safer. Unfortunately, connecting traffic lights to the electrical grid is expensive and complex. This paper reports on the energy, electronic and optical design and construction of a wireless and batteryless biflash installation developed in the framework of a Flemish SME support program. The energy is supplied by a small solar panel and is buffered by supercapacitors instead of batteries. This has the advantage of being maintenance free: the number of charge-discharge cycles is almost unlimited because there is no chemical reaction involved in the storage mechanism. On the other hand, the limited energy storage capacity of supercapacitors requires a new approach for the system design. Based on the EN-12352 standard for warning light devices, all design choices were made to be as energy efficient as possible. The duty cycle and the light output of the high-power LED flashlights are minimized. The components of the electronic circuits for the LED driver, the control unit and the RF communication are selected based on their energy consumption, and power management techniques are implemented. Considerable energy is saved by making the biflash system demand-driven: the LEDs flash only on demand or at preprogrammed moments. A biflash installation is typically installed on both sides of a crosswalk, and a call on one side should result in flashing on both sides. To maintain the drag-and-drop principle, a wireless RF communication system is designed.

  15. A holistic framework for design of cost-effective minimum water utilization network.

    PubMed

    Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N

    2008-07-01

Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses the water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for the synthesis of MWR networks, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process change options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.

  16. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
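
    As a brief illustration of the underlying idea (not the article's program), the sketch below minimizes the ideal-gas Gibbs free energy of a small reacting mixture subject to element-balance constraints; the species set and the mu0/RT values are illustrative placeholders, not data from the article.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Illustrative system: water-gas shift species CO, H2O, CO2, H2 (ideal gas, 1 bar)
    species = ["CO", "H2O", "CO2", "H2"]
    mu0_RT = np.array([-47.0, -92.2, -94.6, 0.0])   # hypothetical mu0/RT values
    A = np.array([[1, 0, 1, 0],    # C balance
                  [0, 2, 0, 2],    # H balance
                  [1, 1, 2, 0]])   # O balance
    b = A @ np.array([1.0, 1.0, 0.0, 0.0])          # feed: 1 mol CO + 1 mol H2O

    def gibbs(n):
        """Dimensionless Gibbs energy G/RT of an ideal mixture with mole numbers n."""
        n = np.clip(n, 1e-12, None)
        return np.sum(n * (mu0_RT + np.log(n / n.sum())))

    res = minimize(gibbs, x0=np.full(4, 0.5), method="SLSQP",
                   bounds=[(1e-10, None)] * 4,
                   constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
    print(dict(zip(species, res.x.round(4))))       # equilibrium mole numbers
    ```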

  17. Hack's relation and optimal channel networks: The elongation of river basins as a consequence of energy minimization

    NASA Astrophysics Data System (ADS)

    Ijjasz-Vasquez, Ede J.; Bras, Rafael L.; Rodriguez-Iturbe, Ignacio

    1993-08-01

As pointed out by Hack (1957), river basins tend to become longer and narrower as their size increases. This work shows that this property may be partially regarded as the consequence of competition and minimization of energy expenditure in river basins.

  18. High energy KrCl electric discharge laser

    DOEpatents

    Sze, Robert C.; Scott, Peter B.

    1981-01-01

A high energy KrCl laser for producing coherent radiation at 222 nm. Output energies on the order of 100 mJ per pulse are produced utilizing a discharge excitation source to minimize formation of molecular ions, thereby minimizing absorption of laser radiation by the active medium. Additionally, HCl is used as a halogen donor which undergoes a harpooning reaction with metastable Kr_M* to form KrCl.

  19. High energy KrCl electric discharge laser

    DOEpatents

    Sze, R.C.; Scott, P.B.

A high energy KrCl laser is presented for producing coherent radiation at 222 nm. Output energies on the order of 100 mJ per pulse are produced utilizing a discharge excitation source to minimize formation of molecular ions, thereby minimizing absorption of laser radiation by the active medium. Additionally, HCl is used as a halogen donor which undergoes a harpooning reaction with metastable Kr_M to form KrCl.

  20. A Framework to Survey the Energy Efficiency of Installed Motor Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Prakash; Hasanbeigi, Ali; McKane, Aimee

    2013-08-01

While motors are ubiquitous throughout the globe, there is insufficient data to properly assess their level of energy efficiency across regional boundaries. Furthermore, many of the existing data sets focus on motor efficiency and neglect the connected drive and system. Without a comprehensive survey of the installed motor system base, a baseline energy efficiency of a country or region’s motor systems cannot be developed. The lack of data impedes government agencies, utilities, manufacturers, distributors, and energy managers when identifying where to invest resources to capture potential energy savings, creating programs aimed at reducing electrical energy consumption, or quantifying the impacts of such programs. This paper will outline a data collection framework for use when conducting a survey under a variety of execution models to characterize motor system energy efficiency within a country or region. The framework is intended to standardize the data collected, ensuring consistency across independently conducted surveys. Consistency allows for the surveys to be leveraged against each other, enabling comparisons to motor system energy efficiencies from other regions. In creating the framework, an analysis of various motor driven systems, including compressed air, pumping, and fan systems, was conducted and relevant parameters characterizing the efficiency of these systems were identified. A database using the framework will enable policymakers and industry to better assess the improvement potential of their installed motor system base, particularly with respect to other regions, assisting in efforts to promote improvements to the energy efficiency of motor driven systems.

  1. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

In this paper we treat a gradient constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  2. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  3. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-07

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.
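
    The split Bregman step at the heart of both records above is easiest to see in a stripped-down setting. The sketch below applies split Bregman to 1D total-variation denoising (a simplified analogue of the idea, not the spectral diffusion algorithm itself); parameter values are arbitrary.

    ```python
    import numpy as np

    def tv_denoise_split_bregman(y, lam=2.0, mu=5.0, n_iter=100):
        """Minimize 0.5*||x - y||^2 + lam*||D x||_1 with split Bregman (1D signal)."""
        n = len(y)
        D = np.diff(np.eye(n), axis=0)            # forward-difference operator, (n-1, n)
        x, d, b = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
        A = np.eye(n) + mu * D.T @ D              # constant system matrix for the x-step
        for _ in range(n_iter):
            # x-subproblem: (I + mu D^T D) x = y + mu D^T (d - b)
            x = np.linalg.solve(A, y + mu * D.T @ (d - b))
            # d-subproblem: soft shrinkage of D x + b
            Dx = D @ x
            d = np.sign(Dx + b) * np.maximum(np.abs(Dx + b) - lam / mu, 0.0)
            # Bregman variable update
            b = b + Dx - d
        return x

    # Usage: denoise a noisy piecewise-constant signal
    rng = np.random.default_rng(0)
    signal = np.repeat([0.0, 1.0, 0.3], 60) + 0.1 * rng.standard_normal(180)
    clean = tv_denoise_split_bregman(signal)
    ```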

  4. THE ENVIRONMENT AND DISTRIBUTION OF EMITTING ELECTRONS AS A FUNCTION OF SOURCE ACTIVITY IN MARKARIAN 421

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo

    2011-05-20

For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automatized χ²-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from the optical to very-high-energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.

  5. Electronic Chemical Potentials of Porous Metal–Organic Frameworks

    PubMed Central

    2014-01-01

    The binding energy of an electron in a material is a fundamental characteristic, which determines a wealth of important chemical and physical properties. For metal–organic frameworks this quantity is hitherto unknown. We present a general approach for determining the vacuum level of porous metal–organic frameworks and apply it to obtain the first ionization energy for six prototype materials including zeolitic, covalent, and ionic frameworks. This approach for valence band alignment can explain observations relating to the electrochemical, optical, and electrical properties of porous frameworks. PMID:24447027

  6. Hovering efficiency comparison of rotary and flapping flight for rigid rectangular wings via dimensionless multi-objective optimization.

    PubMed

    Bayiz, Yagiz; Ghanaatpishe, Mohammad; Fathy, Hosam; Cheng, Bo

    2018-05-08

    In this work, a multi-objective optimization framework is developed for optimizing low Reynolds number ([Formula: see text]) hovering flight. This framework is then applied to compare the efficiency of rigid revolving and flapping wings with rectangular shape under varying [Formula: see text] and Rossby number ([Formula: see text], or aspect ratio). The proposed framework is capable of generating sets of optimal solutions and Pareto fronts for maximizing the lift coefficient and minimizing the power coefficient in dimensionless space, explicitly revealing the trade-off between lift generation and power consumption. The results indicate that revolving wings are more efficient when the required average lift coefficient [Formula: see text] is low (<1 for [Formula: see text] and  <1.6 for [Formula: see text]), while flapping wings are more efficient in achieving higher [Formula: see text]. With the dimensionless power loading as the single-objective performance measure to be maximized, rotary flight is more efficient than flapping wings for [Formula: see text] regardless of the amount of energy storage assumed in the flapping wing actuation mechanism, while flapping flight is more efficient for [Formula: see text]. It is observed that wings with low [Formula: see text] perform better when higher [Formula: see text] is needed, whereas higher [Formula: see text] cases are more efficient at [Formula: see text] regions. However, for the selected geometry and [Formula: see text], the efficiency is weakly dependent on [Formula: see text] when the dimensionless power loading is maximized.
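
    For readers unfamiliar with the Pareto-front concept used above, the following minimal sketch extracts the non-dominated set from a list of (lift coefficient, power coefficient) samples, maximizing lift while minimizing power; the sample values are made up and unrelated to the study.

    ```python
    import numpy as np

    def pareto_front(lift, power):
        """Return indices of non-dominated designs: maximize lift, minimize power."""
        pts = np.column_stack([-np.asarray(lift), np.asarray(power)])  # both "minimize"
        keep = []
        for i, p in enumerate(pts):
            # p is dominated if some other point is no worse everywhere and better somewhere
            dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    # Usage on hypothetical (lift coefficient, power coefficient) samples
    lift = [0.8, 1.2, 1.5, 1.1, 1.6]
    power = [0.9, 1.0, 1.8, 1.5, 1.7]
    print(pareto_front(lift, power))   # indices of designs on the trade-off curve
    ```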

  7. Automated antibody structure prediction using Accelrys tools: Results and best practices

    PubMed Central

    Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa

    2014-01-01

We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions either using a single, chimeric or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. The analysis of the models submitted shows that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models shows that the models are of quite high quality, with local geometry assessment scores similar to those of the target X-ray structures. Proteins 2014; 82:1583–1598. © 2014 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:24833271

  8. An enhanced mobile-healthcare emergency system based on extended chaotic maps.

    PubMed

    Lee, Cheng-Chi; Hsu, Che-Wei; Lai, Yan-Ming; Vasilakos, Athanasios

    2013-10-01

Mobile Healthcare (m-Healthcare) systems, namely smartphone applications of pervasive computing that utilize wireless body sensor networks (BSNs), have recently been proposed to provide smartphone users with health monitoring services and have received great attention. An m-Healthcare system with flaws, however, may leak out the smartphone user's personal information and cause security, privacy preservation, or user anonymity problems. In 2012, Lu et al. proposed a secure and privacy-preserving opportunistic computing (SPOC) framework for mobile-Healthcare emergency. The brilliant SPOC framework can opportunistically gather resources on the smartphone such as computing power and energy to process the computing-intensive personal health information (PHI) in case of an m-Healthcare emergency with minimal privacy disclosure. To balance between the hazard of PHI privacy disclosure and the necessity of PHI processing and transmission in an m-Healthcare emergency, in their SPOC framework, Lu et al. introduced an efficient user-centric privacy access control system which they built on the basis of an attribute-based access control mechanism and a new privacy-preserving scalar product computation (PPSPC) technique. However, we found that Lu et al.'s protocol still has security flaws concerning user anonymity and mutual authentication. To fix those problems and further enhance the computation efficiency of Lu et al.'s protocol, in this article we present an improved mobile-Healthcare emergency system based on extended chaotic maps. The new system is capable of not only providing flawless user anonymity and mutual authentication but also reducing the computation cost.

  9. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
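
    The iteratively reweighted least-squares idea behind the majorization-minimization scheme can be sketched in one dimension with a discrete second-derivative penalty (a simplified stand-in for the paper's Hessian-norm regularizers and preconditioned solver; parameters are illustrative):

    ```python
    import numpy as np

    def irls_second_order(y, lam=5.0, n_iter=50, eps=1e-3):
        """Approximately minimize 0.5*||x - y||^2 + lam*sum|D2 x| by IRLS (1D signal)."""
        n = len(y)
        D2 = np.diff(np.eye(n), n=2, axis=0)      # discrete second derivative, (n-2, n)
        x = y.copy()
        for _ in range(n_iter):
            # Majorize |t| at the current iterate: |t| <= t^2/(2 max(|t_k|, eps)) + const
            w = 1.0 / np.maximum(np.abs(D2 @ x), eps)
            # Solve the resulting weighted least-squares problem
            A = np.eye(n) + lam * D2.T @ (w[:, None] * D2)
            x = np.linalg.solve(A, y)
        return x
    ```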

  10. Mitigating direct detection bounds in non-minimal Higgs portal scalar dark matter models

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Subhaditya; Ghosh, Purusottam; Maity, Tarak Nath; Ray, Tirtha Sankar

    2017-10-01

The minimal Higgs portal dark matter model is increasingly in tension with recent results from direct detection experiments like LUX and XENON. In this paper we make a systematic study of simple extensions of the Z_2 stabilized singlet scalar Higgs portal scenario in terms of their prospects at direct detection experiments. We consider both enlarging the stabilizing symmetry to Z_3 and incorporating multipartite features in the dark sector. We demonstrate that in these non-minimal models the interplay of annihilation, co-annihilation and semi-annihilation processes considerably relaxes constraints from present and proposed direct detection experiments while simultaneously saturating the observed dark matter relic density. We explore in particular the resonant semi-annihilation channel within the multipartite Z_3 framework, which results in new unexplored regions of parameter space that would be difficult to constrain by direct detection experiments in the near future. The role of dark matter exchange processes within a multi-component Z_3 × Z_3' framework is illustrated. We make quantitative estimates to elucidate the role of various annihilation processes in the different allowed regions of parameter space within these models.

  11. Establishing a commercial building energy data framework for India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iyer, Maithili; Kumar, Satish; Mathew, Sangeeta

Buildings account for over 40% of the world’s energy consumption and are therefore a key contributor to a country’s energy as well as carbon budget. Understanding how buildings use energy is critical to understanding how related policies may impact energy use. Data enables decision making, and good quality data arms consumers with the tools to compare their energy performance to their peers, allowing them to differentiate their buildings in the real estate market on the basis of their energy footprint. Good quality data are also essential for policy makers to prioritize their energy saving strategies and track implementation. The United States’ Commercial Building Energy Consumption Survey (CBECS) is an example of a successful data framework that is highly useful for governmental and nongovernmental initiatives related to benchmarking, energy forecasting, rating systems and metrics, and more. The Bureau of Energy Efficiency (BEE) in India developed the Energy Conservation Building Code (ECBC) and launched the Star Labeling program for a few energy-intensive building segments as a significant first step. However, a data driven policy framework for systematically targeting energy efficiency in both new construction and existing buildings has largely been missing. There is no quantifiable mechanism currently in place to track the impact of code adoption through regular reporting/survey of energy consumption in the commercial building stock. In this paper we present findings from our study that explored use cases and approaches for establishing a commercial buildings data framework for India.

  12. Market-Based Decision Guidance Framework for Power and Alternative Energy Collaboration

    NASA Astrophysics Data System (ADS)

    Altaleb, Hesham

With the deregulation of electric power markets, innovations have transformed a once-static network into a more flexible grid. Microgrids have also been deployed to serve various purposes (e.g., reliability, sustainability, etc.). With the rapid deployment of smart grid technologies, it has become possible to measure and record both the quantity and the time of consumption of electrical power. In addition, capabilities for controlling distributed supply and demand have resulted in complex systems where inefficiencies are possible and where improvements can be made. Electric power, like other volatile resources, cannot be stored efficiently; therefore, managing such a resource requires considerable attention. Such complex systems present a need for decisions that can streamline consumption, delay infrastructure investments, and reduce costs. When renewable power resources and the need for limiting harmful emissions are added to the equation, the search space for decisions becomes increasingly complex. As a result, the need for a comprehensive decision guidance system for electrical power consumption and production becomes evident. In this dissertation, I formulate and implement a comprehensive framework that addresses different aspects of electrical power generation and consumption using optimization models and utilizing collaboration concepts. Our solution presents a two-pronged approach: managing interaction in real-time for the short-term immediate consumption of already allocated resources; and managing the operational planning for the long-run consumption. More specifically, in real-time, we present and implement a model of how to organize a secondary market for peak-demand allocation and describe the properties of the market that guarantee efficient execution and a method for the fair distribution of collaboration gains. We also propose and implement a primary market for the peak-demand bounds determination problem with the assumption that participants of this market have the ability to collaborate in real-time. Moreover, proposed in this dissertation is an extensible framework to facilitate C&I entities forming a consortium to collaborate on their electric power supply and demand. The collaborative framework includes the structure of market setting, bids, and market resolution that produces a schedule of how power components are controlled as well as the resulting payment. The market resolution must satisfy a number of desirable properties (i.e., feasibility, Nash equilibrium, Pareto optimality, and equal collaboration profitability) which are formally defined in the dissertation. Furthermore, to support the extensible framework components' library, power components such as utility contract, back-up power generator, renewable resource, and power consuming service are formally modeled. Finally, the validity of this framework is evaluated by a case study using simulated load scenarios to examine the ability of the framework to efficiently operate at the specified time intervals with minimal overhead cost.

  13. 78 FR 7304 - Energy Efficiency Program for Commercial and Industrial Equipment: Public Meeting and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-01

    ... Efficiency Program for Commercial and Industrial Equipment: Public Meeting and Availability of the Framework Document for Commercial and Industrial Pumps AGENCY: Office of Energy Efficiency and Renewable Energy... industrial pumps. To inform interested parties and to facilitate this process, DOE has prepared a Framework...

  14. Wind energy on the horizon in British Columbia. A review and evaluation of the British Columbia wind energy planning framework

    NASA Astrophysics Data System (ADS)

    Day, Jason

This study examines the wind energy planning frameworks from ten North American jurisdictions, drawing important lessons that British Columbia could use to build on its current model, which has been criticized for its limited scope and restriction of local government powers. This study contributes to similar studies conducted by Kimrey (2006), Longston (2006), and Eriksen (2009). This study concludes that inclusion of wind resource zones delineated through strategic environmental assessment, programme assessment, and research-oriented studies could improve the current British Columbia planning framework. The framework should also strengthen its bat impact assessment practices and incorporate habitat compensation. This research also builds upon Rosenberg's (2008) wind energy planning framework typologies. I conclude that the typology utilized in Texas should be employed in British Columbia in order to facilitate the use of wind power. The only adaptation needed is the establishment of a cross-jurisdictional review committee for project assessment to address concerns about local involvement and site-specific environmental and social concerns.

  15. A Confidence Paradigm for Classification Systems

    DTIC Science & Technology

    2008-09-01

methodology to determine how much confidence one should have in a classifier output. This research proposes a framework to determine the level of...theoretical framework that attempts to unite the viewpoints of the classification system developer (or engineer) and the classification system user (or...operating point. An algorithm is developed that minimizes a “confidence” measure called Binned Error in the Posterior (BEP). Then, we prove that training a

  16. First-Order Frameworks for Managing Models in Engineering Optimization

    NASA Technical Reports Server (NTRS)

Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.

  17. On evolutionary systems.

    PubMed

    Alvarez de Lorenzana, J M; Ward, L M

    1987-01-01

    This paper develops a metatheoretical framework for understanding evolutionary systems (systems that develop in ways that increase their own variety). The framework addresses shortcomings seen in other popular systems theories. It concerns both living and nonliving systems, and proposes a metahierarchy of hierarchical systems. Thus, it potentially addresses systems at all descriptive levels. We restrict our definition of system to that of a core system whose parts have a different ontological status than the system, and characterize the core system in terms of five global properties: minimal length interval, minimal time interval, system cycle, total receptive capacity, and system potential. We propose two principles through the interaction of which evolutionary systems develop. The Principle of Combinatorial Expansion describes how a core system realizes its developmental potential through a process of progressive differentiation of the single primal state up to a limit stage. The Principle of Generative Condensation describes how the components of the last stage of combinatorial expansion condense and become the environment for and components of new, enriched systems. The early evolution of the Universe after the "big bang" is discussed in light of these ideas as an example of the application of the framework.

  18. Spatial Optimization of Future Urban Development with Regards to Climate Risk and Sustainability Objectives.

    PubMed

    Caparros-Midwood, Daniel; Barr, Stuart; Dawson, Richard

    2017-11-01

    Future development in cities needs to manage increasing populations, climate-related risks, and sustainable development objectives such as reducing greenhouse gas emissions. Planners therefore face a challenge of multidimensional, spatial optimization in order to balance potential tradeoffs and maximize synergies between risks and other objectives. To address this, a spatial optimization framework has been developed. This uses a spatially implemented genetic algorithm to generate a set of Pareto-optimal results that provide planners with the best set of trade-off spatial plans for six risk and sustainability objectives: (i) minimize heat risks, (ii) minimize flooding risks, (iii) minimize transport travel costs to minimize associated emissions, (iv) maximize brownfield development, (v) minimize urban sprawl, and (vi) prevent development of greenspace. The framework is applied to Greater London (U.K.) and shown to generate spatial development strategies that are optimal for specific objectives and differ significantly from the existing development strategies. In addition, the analysis reveals tradeoffs between different risks as well as between risk and sustainability objectives. While increases in heat or flood risk can be avoided, there are no strategies that do not increase at least one of these. Tradeoffs between risk and other sustainability objectives can be more severe, for example, minimizing heat risk is only possible if future development is allowed to sprawl significantly. The results highlight the importance of spatial structure in modulating risks and other sustainability objectives. However, not all planning objectives are suited to quantified optimization and so the results should form part of an evidence base to improve the delivery of risk and sustainability management in future urban development. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.

  19. Examination of the consumer decision process for residential energy use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinan, T.M.

    1987-01-01

Numerous studies have examined the factors that influence consumers' energy-using behavior. A comprehensive review of these studies was conducted in which articles from different research disciplines (economics, sociology, psychology, and marketing) were examined. This paper provides a discussion of a subset of these studies, and based on findings of the review, offers recommendations for future research. The literature review revealed a need to develop an integrated framework for examining consumers' energy-using behavior. This integrated framework should simultaneously consider both price- and nonprice-related factors which underlie energy use decisions. It should also examine the process by which decisions are made, as well as the factors that affect the decision outcome. This paper provides a suggested integrated framework for future research and discusses the data required to support this framework. 23 references, 3 figures.

  20. Developing a Novel Hydrogen Sponge with Ideal Binding Energy and High Surface Area for Practical Hydrogen Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, T. C. Mike

This Phase I (5 quarters) research project aimed to examine the validity of a new class of boron-containing polymer (B-polymer) frameworks serving as adsorbents for practical onboard H2 storage applications. Three B-polymer frameworks were synthesized and investigated: a B-poly(butenylstyrene) (B-PBS) framework (A), a B-poly(phenyldiacetylene) (B-PPDA) framework (B), and a B-poly(phenyltriacetylene) (B-PPTA) framework (C). They are 2-D polymer structures with repeating cyclic units that spontaneously form an open morphology and B-doped (p-type) π-electron-delocalized surfaces. The ideal B-polymer framework shall exhibit open micropores (pore size in the range of 1-1.5 nm) with high surface area (>3000 m²/g), and the B-dopants in the conjugated framework shall provide high surface energy for interacting with H2 molecules (an ideal H2 binding energy in the range of 15-25 kJ/mol). The pore size distribution and H2 binding energy were investigated at both Penn State and NREL laboratories. So far, the experimental results show the successful synthesis of B-polymer frameworks with relatively well-defined planar (2-D) structures. The intrinsically formed porous morphology exhibits a broad pore size distribution (in the range of 0.5-10 nm) with a specific surface area of about 1000 m²/g. The misalignment between 2-D layers may block some micropore channels and limit gas diffusion throughout the entire matrix. In addition, the 2-D planar conjugated structure may also allow free π-electron delocalization throughout the framework, which significantly reduces the acidity of the B-moieties (electron deficiency). The resulting 2-D B-polymer frameworks only exhibit a small increase of H2 binding energy, in the range of 8-9 kJ/mol (quite constant over the whole sorption range).

  1. Mapping the Energy Cascade in the North Atlantic Ocean: The Coarse-graining Approach

    DOE PAGES

    Aluie, Hussein; Hecht, Matthew; Vallis, Geoffrey K.

    2017-11-14

    A coarse-graining framework is implemented to analyze nonlinear processes, measure energy transfer rates and map out the energy pathways from simulated global ocean data. Traditional tools to measure the energy cascade from turbulence theory, such as spectral flux or spectral transfer rely on the assumption of statistical homogeneity, or at least a large separation between the scales of motion and the scales of statistical inhomogeneity. The coarse-graining framework allows for probing the fully nonlinear dynamics simultaneously in scale and in space, and is not restricted by those assumptions. This study describes how the framework can be applied to ocean flows.

  2. Mapping the Energy Cascade in the North Atlantic Ocean: The Coarse-graining Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aluie, Hussein; Hecht, Matthew; Vallis, Geoffrey K.

    A coarse-graining framework is implemented to analyze nonlinear processes, measure energy transfer rates and map out the energy pathways from simulated global ocean data. Traditional tools to measure the energy cascade from turbulence theory, such as spectral flux or spectral transfer rely on the assumption of statistical homogeneity, or at least a large separation between the scales of motion and the scales of statistical inhomogeneity. The coarse-graining framework allows for probing the fully nonlinear dynamics simultaneously in scale and in space, and is not restricted by those assumptions. This study describes how the framework can be applied to ocean flows.
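
    A minimal sketch of the coarse-graining idea described in the two records above (illustrative only, not the authors' code): filter the velocity at scale ell, form the subfilter stress, and contract it with the filtered strain rate to obtain the scale-to-scale kinetic-energy flux. A Gaussian filter and a doubly periodic 2D grid are assumed for simplicity.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def energy_flux(u, v, dx, ell):
        """Scale-to-scale kinetic energy flux Pi_ell = -tau_ij * Sbar_ij on a 2D grid.

        u, v : velocity components indexed as field[x, y], uniform spacing dx
        ell  : coarse-graining length scale
        """
        sigma = ell / dx                           # filter scale in grid units
        bar = lambda f: gaussian_filter(f, sigma, mode="wrap")

        ub, vb = bar(u), bar(v)
        # Subfilter stress tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j)
        t11 = bar(u * u) - ub * ub
        t12 = bar(u * v) - ub * vb
        t22 = bar(v * v) - vb * vb

        # Filtered strain rate S_ij = 0.5 (d_i u_j + d_j u_i)
        dudx, dudy = np.gradient(ub, dx, dx)
        dvdx, dvdy = np.gradient(vb, dx, dx)
        s11, s22 = dudx, dvdy
        s12 = 0.5 * (dudy + dvdx)

        return -(t11 * s11 + 2.0 * t12 * s12 + t22 * s22)
    ```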

  3. New approach to wireless data communication in a propagation environment

    NASA Astrophysics Data System (ADS)

    Hunek, Wojciech P.; Majewski, Paweł

    2017-10-01

This paper presents a new idea for perfect signal reconstruction in multivariable wireless communication systems with different numbers of transmitting and receiving antennas. The proposed approach is based on the polynomial matrix S-inverse associated with the Smith factorization. Crucially, the above-mentioned inverse implements the so-called degrees of freedom. It has been confirmed by simulation study that the degrees of freedom make it possible to minimize the negative impact of the propagation environment and to increase the robustness of the whole signal reconstruction process. The parasitic drawbacks in the form of dynamic ISI and ICI effects can now be eliminated in a framework described by polynomial calculus. Therefore, the new method not only reduces cost but, more importantly, potentially yields systems with lower energy consumption than classical ones. In order to show the potential of the new approach, simulation studies were performed with the authors' simulator based on the well-known OFDM technique.

  4. A low-dissipation monotonicity-preserving scheme for turbulent flows in hydraulic turbines

    NASA Astrophysics Data System (ADS)

    Yang, L.; Nadarajah, S.

    2016-11-01

    The objective of this work is to improve the inherent dissipation of the numerical schemes under the framework of a Reynolds-averaged Navier-Stokes (RANS) simulation. The governing equations are solved by the finite volume method with the k-ω SST turbulence model. Instead of the van Albada limiter, a novel eddy-preserving limiter is employed in the MUSCL reconstructions to minimize the dissipation of the vortex. The eddy-preserving procedure inactivates the van Albada limiter in the swirl plane and reduces the artificial dissipation to better preserve vortical flow structures. Steady and unsteady simulations of turbulent flows in a straight channel and a straight asymmetric diffuser are demonstrated. Profiles of velocity, Reynolds shear stress and turbulent kinetic energy are presented and compared against large eddy simulation (LES) and/or experimental data. Finally, comparisons are made to demonstrate the capability of the eddy-preserving limiter scheme.
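
    As a small illustration of the MUSCL/van Albada machinery the scheme builds on (not the eddy-preserving implementation of the paper), the following sketch reconstructs limited face states from 1D cell averages; disabling the limiter, as the eddy-preserving procedure does in the swirl plane, is mimicked here by the `limit` flag. The limiter form used is one common van Albada variant.

    ```python
    import numpy as np

    def van_albada(a, b, eps=1e-12):
        """van Albada-limited slope built from neighbouring slopes a and b."""
        return (a * b * (a + b)) / (a * a + b * b + eps) * (a * b > 0)

    def muscl_faces(q, limit=True):
        """Reconstructed states for interior cells from 1D cell averages q.

        qL[i]: state extrapolated to the right face of cell i+1 (left state of that face)
        qR[i]: state extrapolated to the left face of cell i+1 (right state of that face)
        """
        dq_minus = q[1:-1] - q[:-2]       # backward slope in cell i
        dq_plus = q[2:] - q[1:-1]         # forward slope in cell i
        slope = van_albada(dq_minus, dq_plus) if limit else 0.5 * (dq_minus + dq_plus)
        qL = q[1:-1] + 0.5 * slope
        qR = q[1:-1] - 0.5 * slope
        return qL, qR
    ```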

  5. Improved cardiac motion detection from ultrasound images using TDIOF: a combined B-mode/ tissue Doppler approach

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Stoddard, Marcus F.; Amini, Amir A.

    2013-03-01

    Quantitative motion analysis of echocardiographic images helps clinicians with the diagnosis and therapy of patients suffering from cardiac disease. Quantitative analysis is usually based on TDI (Tissue Doppler Imaging) or speckle tracking. These methods are based on two independent techniques - the Doppler Effect and image registration, respectively. In order to increase the accuracy of the speckle tracking technique and cope with the angle dependency of TDI, herein, a combined approach dubbed TDIOF (Tissue Doppler Imaging Optical Flow) is proposed. TDIOF is formulated based on the combination of B-mode and Doppler energy terms in an optical flow framework and minimized using algebraic equations. In this paper, we report on validations with simulated, physical cardiac phantom, and in-vivo patient data. It is shown that the additional Doppler term is able to increase the accuracy of speckle tracking, the basis for several commercially available echocardiography analysis techniques.

  6. Ultrasound tissue analysis and characterization

    NASA Astrophysics Data System (ADS)

    Kaufhold, John; Chan, Ray C.; Karl, William C.; Castanon, David A.

    1999-07-01

    On the battlefield of the future, it may become feasible for medics to perform, via application of new biomedical technologies, more sophisticated diagnoses and surgery than is currently practiced. Emerging biomedical technology may enable the medic to perform laparoscopic surgical procedures to remove, for example, shrapnel from injured soldiers. Battlefield conditions constrain the types of medical image acquisition and interpretation which can be performed. Ultrasound is the only viable biomedical imaging modality appropriate for deployment on the battlefield -- which leads to image interpretation issues because of the poor quality of ultrasound imagery. To help overcome these issues, we develop and implement a method of image enhancement which could aid non-experts in the rapid interpretation and use of ultrasound imagery. We describe an energy minimization approach to finding boundaries in medical images and show how prior information on edge orientation can be incorporated into this framework to detect tissue boundaries oriented at a known angle.

  7. Trading strategies for distribution company with stochastic distributed energy resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chunyu; Wang, Qi; Wang, Jianhui

    2016-09-01

This paper proposes a methodology to address the trading strategies of a proactive distribution company (PDISCO) engaged in the transmission-level (TL) markets. A one-leader multi-follower bilevel model is presented to formulate the gaming framework between the PDISCO and markets. The lower-level (LL) problems include the TL day-ahead market and scenario-based real-time markets, respectively with the objectives of maximizing social welfare and minimizing operation cost. The upper-level (UL) problem is to maximize the PDISCO’s profit across these markets. The PDISCO’s strategic offers/bids interactively influence the outcomes of each market. Since the LL problems are linear and convex, while the UL problem is non-linear and non-convex, an equivalent primal–dual approach is used to reformulate this bilevel model to a solvable mathematical program with equilibrium constraints (MPEC). The effectiveness of the proposed model is verified by case studies.

  8. Replicator equations, maximal cliques, and graph isomorphism.

    PubMed

    Pelillo, M

    1999-11-15

    We present a new energy-minimization framework for the graph isomorphism problem that is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. The attractive feature of this formulation is that a clear one-to-one correspondence exists between the solutions of the quadratic program and those in the original, combinatorial problem. To solve the program we use the so-called replicator equations--a class of straightforward continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show how, despite their inherent inability to escape from local solutions, they nevertheless provide experimental results that are competitive with those obtained using more elaborate mean-field annealing heuristics.
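
    A minimal sketch of the approach (illustrative only; the payoff matrix here is a toy adjacency matrix, and the unregularized dynamics can occasionally settle on spurious, non-clique solutions): iterate the discrete-time replicator equation on the Motzkin-Straus program and read the clique off the support of the fixed point.

    ```python
    import numpy as np

    def replicator_clique(adj, n_iter=2000, tol=1e-10):
        """Approximate a maximal clique via replicator dynamics on x^T A x (Motzkin-Straus)."""
        A = np.asarray(adj, dtype=float)
        n = A.shape[0]
        x = np.full(n, 1.0 / n)                    # start at the simplex barycenter
        for _ in range(n_iter):
            Ax = A @ x
            x_new = x * Ax / (x @ Ax)              # discrete-time replicator update
            if np.linalg.norm(x_new - x, 1) < tol:
                x = x_new
                break
            x = x_new
        return np.flatnonzero(x > 1.0 / (2 * n))   # support of x ~ clique members

    # Usage: a 5-node graph whose largest clique is {0, 1, 2}
    adj = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0],
                    [1, 1, 0, 0, 1],
                    [0, 1, 0, 0, 1],
                    [0, 0, 1, 1, 0]])
    print(replicator_clique(adj))
    ```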

  9. A modified method for MRF segmentation and bias correction of MR image with intensity inhomogeneity.

    PubMed

    Xie, Mei; Gao, Jingjing; Zhu, Chongjin; Zhou, Yan

    2015-01-01

The Markov random field (MRF) model is an effective method for brain tissue classification, which has been applied in MR image segmentation for decades. However, it falls short of the expected classification in MR images with intensity inhomogeneity because the bias field is not considered in the formulation. In this paper, we propose an interleaved method joining a modified MRF classification and bias field estimation in an energy minimization framework, whose initial estimation is based on the k-means algorithm in view of prior information on MRI. The proposed method has the salient advantage of overcoming the misclassifications produced by non-interleaved MRF classification for MR images with intensity inhomogeneity. In contrast to other baseline methods, experimental results have also demonstrated the effectiveness and advantages of our algorithm via its applications to real and synthetic MR images.
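
    To make the interleaving idea concrete in a stripped-down form (k-means initialization followed by parallel ICM-style Potts updates, omitting the paper's bias-field estimation; the class count and smoothness weight are arbitrary):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def mrf_segment(img, n_classes=3, beta=1.0, n_iter=5):
        """K-means initialization followed by parallel ICM-style updates of a Potts MRF."""
        h, w = img.shape
        km = KMeans(n_clusters=n_classes, n_init=10).fit(img.reshape(-1, 1))
        labels = km.labels_.reshape(h, w)
        means = km.cluster_centers_.ravel().copy()
        var = np.full(n_classes, img.var() + 1e-6)

        for _ in range(n_iter):
            # Unary energy: negative Gaussian log-likelihood of each class at each pixel
            unary = ((img[..., None] - means) ** 2) / (2 * var) + 0.5 * np.log(var)
            # Pairwise Potts energy: beta times the number of disagreeing 4-neighbours
            # (periodic boundaries via np.roll, for brevity)
            for k in range(n_classes):
                disagree = np.zeros((h, w))
                for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                    disagree += np.roll(labels, shift, axis=axis) != k
                unary[..., k] += beta * disagree
            labels = unary.argmin(axis=-1)
            # Re-estimate class statistics from the current labeling (skip empty classes)
            for k in range(n_classes):
                mask = labels == k
                if mask.any():
                    means[k], var[k] = img[mask].mean(), img[mask].var() + 1e-6
        return labels
    ```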

  10. Equilibrium Conformations of Concentric-tube Continuum Robots

    PubMed Central

    Rucker, D. Caleb; Webster, Robert J.; Chirikjian, Gregory S.; Cowan, Noah J.

    2013-01-01

    Robots consisting of several concentric, preshaped, elastic tubes can work dexterously in narrow, constrained, and/or winding spaces, as are commonly found in minimally invasive surgery. Previous models of these “active cannulas” assume piecewise constant precurvature of component tubes and neglect torsion in curved sections of the device. In this paper we develop a new coordinate-free energy formulation that accounts for general preshaping of an arbitrary number of component tubes, and which explicitly includes both bending and torsion throughout the device. We show that previously reported models are special cases of our formulation, and then explore in detail the implications of torsional flexibility for the special case of two tubes. Experiments demonstrate that this framework is more descriptive of physical prototype behavior than previous models; it reduces model prediction error by 82% over the calibrated bending-only model, and 17% over the calibrated transmissional torsion model in a set of experiments. PMID:25125773

  11. A Generalized Formulation of Demand Response under Market Environments

    NASA Astrophysics Data System (ADS)

    Nguyen, Minh Y.; Nguyen, Duc M.

    2015-06-01

This paper presents a generalized formulation of Demand Response (DR) under deregulated electricity markets. The problem is to schedule and control the consumption of electrical loads according to the market price so as to minimize the energy cost over a day. By taking into account a model of customers' comfort (i.e., preference), the formulation can be applied to various types of loads, including those traditionally classified as critical loads (e.g., air conditioning, lights). The proposed DR scheme is based on a Dynamic Programming (DP) framework and solved by a backward DP algorithm, in which stochastic optimization is used to treat any uncertainty occurring in the problem. The proposed formulation is examined on the DR problem of different loads, including Heating, Ventilation and Air Conditioning (HVAC), Electric Vehicles (EVs) and a new DR application to the water supply systems of commercial buildings. Simulation results show that significant savings can be achieved in comparison with the traditional (on/off) scheme.
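
    A bare-bones backward-DP sketch in the same spirit (deterministic prices, a single deferrable load, no comfort term; all numbers are hypothetical):

    ```python
    import numpy as np

    def schedule_load(prices, energy_needed, max_rate=1):
        """Backward DP: choose hourly consumption (0..max_rate kWh) delivering
        `energy_needed` kWh over the horizon at minimum cost."""
        T = len(prices)
        states = energy_needed + 1                      # kWh still to deliver: 0..energy_needed
        INF = 1e18
        V = np.full((T + 1, states), INF)
        V[T, 0] = 0.0                                   # terminal: all energy must be delivered
        policy = np.zeros((T, states), dtype=int)

        for t in range(T - 1, -1, -1):                  # backward recursion
            for e in range(states):
                for a in range(min(max_rate, e) + 1):   # consume a kWh this hour
                    cost = prices[t] * a + V[t + 1, e - a]
                    if cost < V[t, e]:
                        V[t, e], policy[t, e] = cost, a

        # Roll the optimal policy forward to recover the hourly schedule
        schedule, e = [], energy_needed
        for t in range(T):
            a = policy[t, e]
            schedule.append(a)
            e -= a
        return schedule, V[0, energy_needed]

    # Usage: deliver 3 kWh over 6 hours with hypothetical prices ($/kWh);
    # the schedule concentrates consumption in the cheapest hours.
    print(schedule_load([0.30, 0.12, 0.25, 0.10, 0.40, 0.15], energy_needed=3))
    ```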

  12. Minimal string theories and integrable hierarchies

    NASA Astrophysics Data System (ADS)

    Iyer, Ramakrishnan

    Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4 m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. In some cases, the framework endows the theories with a non-perturbative definition for the first time. Notably, we discover that the Painleve IV equation plays a key role in organizing the string theory physics, joining its siblings, Painleve I and II, whose roles have previously been identified in this minimal string context. We then present evidence that the conjectured type II theories have smooth non-perturbative solutions, connecting two perturbative asymptotic regimes, in a 't Hooft limit. Our technique also demonstrates evidence for new minimal string theories that are not apparent in a perturbative analysis.

  13. Error minimizing algorithms for nearest neighbor classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Hush, Don; Zimmer, G. Beate

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  14. A framework for determining improved placement of current energy converters subject to environmental constraints

    DOE PAGES

    Nelson, Kurt; James, Scott C.; Roberts, Jesse D.; ...

    2017-06-05

    A modelling framework identifies deployment locations for current-energy-capture devices that maximise power output while minimising potential environmental impacts. The framework, based on the Environmental Fluid Dynamics Code, can incorporate site-specific environmental constraints. Over a 29-day period, energy outputs from three array layouts were estimated for: (1) the preliminary configuration (baseline), (2) an updated configuration that accounted for environmental constraints, and (3) an improved configuration subject to no environmental constraints. Of these layouts, array placement that did not consider environmental constraints extracted the most energy from flow (4.38 MW-hr/day), 19% higher than output from the baseline configuration (3.69 MW-hr/day). Array placement that considered environmental constraints removed 4.27 MW-hr/day of energy (16% more than baseline). In conclusion, this analysis framework accounts for bathymetry and flow-pattern variations that typical experimental studies cannot, demonstrating that it is a valuable tool for identifying improved array layouts for field deployments.

  15. Building a framework for ergonomic research on laparoscopic instrument handles.

    PubMed

    Li, Zheng; Wang, Guohui; Tan, Juan; Sun, Xulong; Lin, Hao; Zhu, Shaihong

    2016-06-01

    Laparoscopic surgery carries the advantage of minimal invasiveness, but ergonomic design of the instruments used has progressed slowly. Previous studies have demonstrated that the handle of laparoscopic instruments is vital for both surgical performance and surgeon's health. This review provides an overview of the sub-discipline of handle ergonomics, including an evaluation framework, objective and subjective assessment systems, data collection and statistical analyses. Furthermore, a framework for ergonomic research on laparoscopic instrument handles is proposed to standardize work on instrument design. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  16. A framework for targeting household energy savings through habitual behavioural change

    NASA Astrophysics Data System (ADS)

    Pothitou, Mary; Kolios, Athanasios J.; Varga, Liz; Gu, Sai

    2016-08-01

    This paper reviews existing up-to-date literature related to individual household energy consumption. How and why individual behaviour affects energy use is discussed, together with the principles and perspectives that have so far been considered to explain habitual consumption behaviour. The research gaps revealed by previous studies, in terms of the limitations or assumptions of the methodologies used to alter individuals' energy usage, give insights for a conceptual framework defining a comprehensive approach. The proposed framework suggests that individual energy perception gaps are affected by psychological, habitual, structural and cultural variables in a wider-contextual, meso-societal and micro-individual spectrum. All of these factors need to be considered so that a variety of combined intervention methods, which are discussed and recommended, can introduce a more effective shift in conventional energy-consuming behaviour, advancing insights for successful energy policies.

  17. An Open Source Extensible Smart Energy Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rankin, Linda

    Aggregated distributed energy resources are the subject of much interest in the energy industry and are expected to play an important role in meeting our future energy needs by changing how we use, distribute and generate electricity. This energy future includes an increased amount of energy from renewable resources, load management techniques to improve resiliency and reliability, and distributed energy storage and generation capabilities that can be managed to meet the needs of the grid as well as individual customers. These energy assets are commonly referred to as Distributed Energy Resources (DER). DERs rely on a means to communicate information between an energy provider and multitudes of devices. Today DER control systems are typically vendor-specific, using custom hardware and software solutions. As a result, customers are locked into communication transport protocols, applications, tools, and data formats. Today’s systems are often difficult to extend to meet new application requirements, resulting in stranded assets when business requirements or energy management models evolve. By partnering with industry advisors and researchers, a DER research implementation platform, called the Smart Energy Framework (SEF), was developed. The hypothesis of this research was that an open source Internet of Things (IoT) framework could play a role in creating a commodity-based eco-system for DER assets that would reduce costs and provide interoperable products. SEF is based on the AllJoyn™ IoT open source framework. The demonstration system incorporated DER assets, specifically batteries and smart water heaters. To verify the behavior of the distributed system, models of water heaters and batteries were also developed. An IoT interface for communicating between the assets and a control server was defined. This interface supports a series of “events” and telemetry reporting, similar to those defined by current smart grid communication standards. The results of this effort demonstrated the feasibility and application potential of using IoT frameworks for the creation of commodity-based DER systems. All of the identified commodity-based system requirements were met by the AllJoyn framework. By having commodity solutions, small vendors can enter the market and the cost of implementation for all parties is reduced. Utilities and aggregators can choose from multiple interoperable products, reducing the risk of stranded assets. Based on this research it is recommended that interfaces based on existing smart grid communication protocol standards be created for these emerging IoT frameworks. These interfaces should be standardized as part of the IoT framework allowing for interoperability testing and certification. Similarly, IoT frameworks are introducing application level security. This type of security is needed for protecting applications and platforms and will be important moving forward. Recommendations are that along with DER-based data model interfaces, platform and application security requirements also be prescribed when IoT devices support DER applications.

  18. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

    The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.
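
    For reference, the deformed commutator and uncertainty relation that such minimal-length calculations commonly start from (the Kempf–Mangano–Mann form, with deformation parameter β) are sketched below; the precise conventions used in the paper may differ.

```latex
% Kempf-Mangano-Mann-type deformation commonly used in minimal-length work;
% beta > 0 is the deformation parameter (the paper's conventions may differ).
\begin{align}
  [\hat{x}, \hat{p}] &= i\hbar \left(1 + \beta \hat{p}^{2}\right), &
  \Delta x\, \Delta p &\ge \frac{\hbar}{2}\left(1 + \beta (\Delta p)^{2}\right)
  \;\Rightarrow\; (\Delta x)_{\min} = \hbar \sqrt{\beta}.
\end{align}
```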

  19. Parametric study of minimum converter loss in an energy-storage dc-to-dc converter

    NASA Technical Reports Server (NTRS)

    Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.

    1982-01-01

    Through a combination of analytical and numerical minimization procedures, a converter design that results in the minimum total converter loss (including core loss, winding loss, capacitor and energy-storage-reactor loss, and various losses in the semiconductor switches) is obtained. Because the initial phase involves analytical minimization, the computation time required by the subsequent phase of numerical minimization is considerably reduced in this combination approach. The effects of various loss parameters on the optimum values of the design variables are also examined.

  20. Radical covalent organic frameworks: a general strategy to immobilize open-accessible polyradicals for high-performance capacitive energy storage.

    PubMed

    Xu, Fei; Xu, Hong; Chen, Xiong; Wu, Dingcai; Wu, Yang; Liu, Hao; Gu, Cheng; Fu, Ruowen; Jiang, Donglin

    2015-06-01

    Ordered π-columns and open nanochannels found in covalent organic frameworks (COFs) could render them able to store electric energy. However, the synthetic difficulty in achieving redox-active skeletons has thus far restricted their potential for energy storage. A general strategy is presented for converting a conventional COF into an outstanding platform for energy storage through post-synthetic functionalization with organic radicals. The radical frameworks with openly accessible polyradicals immobilized on the pore walls undergo rapid and reversible redox reactions, leading to capacitive energy storage with high capacitance, high-rate kinetics, and robust cycle stability. The results suggest that channel-wall functional engineering with redox-active species will be a facile and versatile strategy to explore COFs for energy storage. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Energy and time determine scaling in biological and computer designs.

    PubMed

    Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-08-19

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).

  2. The Oseen-Frank Limit of Onsager's Molecular Theory for Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Liu, Yuning; Wang, Wei

    2018-03-01

    We study the relationship between Onsager's molecular theory, which involves the effects of nonlocal molecular interactions and the Oseen-Frank theory for nematic liquid crystals. Under the molecular setting, we prove the existence of global minimizers for the generalized Onsager's free energy, subject to a nonlocal boundary condition which prescribes the second moment of the number density function near the boundary. Moreover, when the re-scaled interaction distance tends to zero, the global minimizers will converge to a uniaxial distribution predicted by a minimizing harmonic map. This is achieved through the investigations of the compactness property and the boundary behaviors of the corresponding second moments. A similar result is established for critical points of the free energy that fulfill a natural energy bound.
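
    A schematic form of the kind of Onsager-type free energy at issue, written for an orientational distribution f with a nonlocal interaction kernel K_ε at re-scaled interaction distance ε, is sketched below; the notation is generic rather than the paper's.

```latex
% Schematic Onsager-type free energy (generic notation): entropy plus a
% nonlocal pairwise interaction with kernel K_eps at re-scaled interaction
% distance eps; f(x, m) is a number density over positions x in Omega and
% orientations m on the unit sphere.
\begin{equation}
  \mathcal{F}_{\varepsilon}[f] \;=\;
  \int_{\Omega}\!\int_{\mathbb{S}^{2}} f \ln f \,\mathrm{d}m\,\mathrm{d}x
  \;+\; \frac{1}{2}\iint\!\!\iint
  K_{\varepsilon}(x - x', m, m')\, f(x, m)\, f(x', m')\,
  \mathrm{d}m\,\mathrm{d}m'\,\mathrm{d}x\,\mathrm{d}x'
\end{equation}
```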

  3. Trajectory-Oriented Approach to Managing Traffic Complexity: Operational Concept and Preliminary Metrics Definition

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert; Garcia-Chico, Jose L.

    2008-01-01

    This document describes preliminary research on a distributed, trajectory-oriented approach for traffic complexity management. The approach is to manage traffic complexity in a distributed control environment, based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents an analytical framework to study trajectory flexibility and the impact of trajectory constraints on it. The document proposes preliminary flexibility metrics that can be interpreted and measured within the framework.

  4. Priors in perception: Top-down modulation, Bayesian perceptual learning rate, and prediction error minimization.

    PubMed

    Hohwy, Jakob

    2017-01-01

    I discuss top-down modulation of perception in terms of a variable Bayesian learning rate, revealing a wide range of prior hierarchical expectations that can modulate perception. I then switch to the prediction error minimization framework and seek to conceive cognitive penetration specifically as prediction error minimization deviations from a variable Bayesian learning rate. This approach retains cognitive penetration as a category somewhat distinct from other top-down effects, and carves a reasonable route between penetrability and impenetrability. It prevents rampant, relativistic cognitive penetration of perception and yet is consistent with the continuity of cognition and perception. Copyright © 2016 Elsevier Inc. All rights reserved.
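
    A standard precision-weighted update, sketched below, illustrates what a variable Bayesian learning rate amounts to in this setting: the effective rate κ_t rises when sensory (prediction-error) precision dominates prior precision. The symbols are generic, not the author's notation.

```latex
% Precision-weighted prediction-error update (generic notation): the effective
% learning rate kappa_t is the ratio of sensory precision pi_eps to total
% precision, so top-down expectations modulate how much each prediction error
% y_t - mu_t revises the estimate mu_t.
\begin{equation}
  \mu_{t+1} \;=\; \mu_{t} + \kappa_{t}\,(y_{t} - \mu_{t}),
  \qquad
  \kappa_{t} \;=\; \frac{\pi_{\varepsilon}}{\pi_{\varepsilon} + \pi_{\mu}} .
\end{equation}
```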

  5. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  6. Evaluation Framework and Analyses for Thermal Energy Storage Integrated with Packaged Air Conditioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kung, F.; Deru, M.; Bonnema, E.

    2013-10-01

    Few third-party guidance documents or tools are available for evaluating thermal energy storage (TES) integrated with packaged air conditioning (AC), as this type of TES is relatively new compared to TES integrated with chillers or hot water systems. To address this gap, researchers at the National Renewable Energy Laboratory conducted a project to improve the ability of potential technology adopters to evaluate TES technologies. Major project outcomes included: development of an evaluation framework to describe key metrics, methodologies, and issues to consider when assessing the performance of TES systems integrated with packaged AC; application of multiple concepts from the evaluation framework to analyze performance data from four demonstration sites; and production of a new simulation capability that enables modeling of TES integrated with packaged AC in EnergyPlus. This report includes the evaluation framework and analysis results from the project.

  7. Energy Storage Applications in Power Systems with Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Ghofrani, Mahmoud

    In this dissertation, we propose new operational and planning methodologies for power systems with renewable energy sources. A probabilistic optimal power flow (POPF) is developed to model wind power variations and evaluate the power system operation with intermittent renewable energy generation. The methodology is used to calculate the operating and ramping reserves that are required to compensate for power system uncertainties. Distributed wind generation is introduced as an operational scheme to take advantage of the spatial diversity of renewable energy resources and reduce wind power fluctuations using low or uncorrelated wind farms. The POPF is demonstrated using the IEEE 24-bus system where the proposed operational scheme reduces the operating and ramping reserve requirements and operation and congestion cost of the system as compared to operational practices available in the literature. A stochastic operational-planning framework is also proposed to adequately size, optimally place and schedule storage units within power systems with high wind penetrations. The method is used for different applications of energy storage systems for renewable energy integration. These applications include market-based opportunities such as renewable energy time-shift, renewable capacity firming, and transmission and distribution upgrade deferral in the form of revenue or reduced cost and storage-related societal benefits such as integration of more renewables, reduced emissions and improved utilization of grid assets. A power-pool model which incorporates the one-sided auction market into POPF is developed. The model considers storage units as market participants submitting hourly price bids in the form of marginal costs. This provides an accurate market-clearing process as compared to the 'price-taker' analysis available in the literature where the effects of large-scale storage units on the market-clearing prices are neglected. Different case studies are provided to demonstrate our operational-planning framework and economic justification for different storage applications. A new reliability model is proposed for security and adequacy assessment of power networks containing renewable resources and energy storage systems. The proposed model is used in combination with the operational-planning framework to enhance the reliability and operability of wind integration. The proposed framework optimally utilizes the storage capacity for reliability applications of wind integration. This is essential for justification of storage deployment within regulated utilities where the absence of market opportunities limits the economic advantage of storage technologies over gas-fired generators. A control strategy is also proposed to achieve the maximum reliability using energy storage systems. A cost-benefit analysis compares storage technologies and conventional alternatives to reliably and efficiently integrate different wind penetrations and determines the most economical design. Our simulation results demonstrate the necessity of optimal storage placement for different wind applications. This dissertation also proposes a new stochastic framework to optimally charge and discharge electric vehicles (EVs) to mitigate the effects of wind power uncertainties. Vehicle-to-grid (V2G) service for hedging against wind power imbalances is introduced as a novel application for EVs. 
This application enhances the predictability of wind power and reduces the power imbalances between the scheduled output and actual power. An Auto Regressive Moving Average (ARMA) wind speed model is developed to forecast the wind power output. Driving patterns of EVs are stochastically modeled and the EVs are clustered in the fleets of similar daily driving patterns. Monte Carlo Simulation (MCS) simulates the system behavior by generating samples of system states using the wind ARMA model and EVs driving patterns. A Genetic Algorithm (GA) is used in combination with MCS to optimally coordinate the EV fleets for their V2G services and minimize the penalty cost associated with wind power imbalances. The economic characteristics of automotive battery technologies and costs of V2G service are incorporated into a cost-benefit analysis which evaluates the economic justification of the proposed V2G application. Simulation results demonstrate that the developed algorithm enhances wind power utilization and reduces the penalty cost for wind power under-/over-production. This offers potential revenues for the wind producer. Our cost-benefit analysis also demonstrates that the proposed algorithm will provide the EV owners with economic incentives to participate in V2G services. The proposed smart scheduling strategy develops a sustainable integrated electricity and transportation infrastructure.
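
    The ARMA-plus-power-curve step can be sketched briefly. The code below is an illustrative stand-in, not the dissertation's fitted model: the ARMA coefficients, noise level and turbine power-curve parameters are assumed values chosen only to show the structure of the simulation.

```python
import numpy as np

def simulate_arma_wind(phi, theta, mean_speed, sigma, hours, seed=0):
    """Simulate hourly wind speed with an ARMA(p, q) model:
    y_t = sum(phi_i * y_{t-i}) + e_t + sum(theta_j * e_{t-j}),
    speed_t = mean_speed + y_t (clipped at zero)."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    y = np.zeros(hours + max(p, q))
    e = rng.normal(0.0, sigma, size=hours + max(p, q))
    for t in range(max(p, q), len(y)):
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p))
        ma = sum(theta[j] * e[t - 1 - j] for j in range(q))
        y[t] = ar + e[t] + ma
    return np.clip(mean_speed + y[max(p, q):], 0.0, None)

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    """Generic turbine power curve (MW): cubic between cut-in and rated speed."""
    v = np.asarray(v)
    frac = np.clip((v**3 - v_in**3) / (v_rated**3 - v_in**3), 0.0, 1.0)
    return np.where((v >= v_in) & (v <= v_out), p_rated * frac, 0.0)

speeds = simulate_arma_wind(phi=[0.9, -0.1], theta=[0.3], mean_speed=8.0,
                            sigma=1.5, hours=24)
print(power_curve(speeds).round(2))   # hourly wind power forecast in MW
```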

  8. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
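
    The offline relationship described above (solve a min-energy problem, then use it as a building block for the min-completion-time problem) can be illustrated in a toy setting. The sketch below assumes a single up-front energy budget rather than a sequence of energy arrivals, and a convex rate-power law p(r) = 2^r - 1; it is not the paper's optimal algorithm.

```python
import math

def min_energy(bits, T):
    """Minimum energy to deliver `bits` within time T when transmitting at
    rate r costs power 2**r - 1; by convexity a constant rate bits/T is
    optimal, so the cost is T * (2**(bits/T) - 1)."""
    return T * (2.0 ** (bits / T) - 1.0)

def min_completion_time(bits, energy_budget, tol=1e-6):
    """Smallest completion time whose min-energy requirement fits the budget:
    a toy version of using the min-energy problem as a building block for the
    min-completion-time problem."""
    if energy_budget <= bits * math.log(2.0):   # even T -> infinity needs > B*ln 2
        raise ValueError("energy budget too small to ever finish")
    lo, hi = tol, 1.0
    while min_energy(bits, hi) > energy_budget:  # grow until feasible
        hi *= 2.0
    while hi - lo > tol:                         # bisect on the deadline
        mid = 0.5 * (lo + hi)
        if min_energy(bits, mid) <= energy_budget:
            hi = mid                             # feasible: try an earlier deadline
        else:
            lo = mid
    return hi

print(round(min_completion_time(bits=10.0, energy_budget=20.0), 3))
```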

  9. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  10. Cost minimization in a full-scale conventional wastewater treatment plant: associated costs of biological energy consumption versus sludge production.

    PubMed

    Sid, S; Volant, A; Lesage, G; Heran, M

    2017-11-01

    Energy consumption and sludge production minimization represent rising challenges for wastewater treatment plants (WWTPs). The goal of this study is to investigate how energy is consumed throughout the whole plant and how operating conditions affect this energy demand. A WWTP based on the activated sludge process was selected as a case study. Simulations were performed using a pre-compiled model implemented in GPS-X simulation software. Model validation was carried out by comparing experimental and modeling data of the dynamic behavior of the mixed liquor suspended solids (MLSS) concentration and nitrogen compounds concentration, energy consumption for aeration, mixing and sludge treatment and annual sludge production over a three-year period. In this plant, the energy required for bioreactor aeration was calculated at approximately 44% of the total energy demand. A cost optimization strategy was applied by varying the MLSS concentrations (from 1 to 8 gTSS/L) while recording energy consumption, sludge production and effluent quality. An increase in MLSS led to an increase in the oxygen requirement for biomass aeration, but it also reduced total sludge production. The results identify a key MLSS concentration that offers the best compromise between the required level of treatment, biological energy demand and sludge production while minimizing the overall costs.
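
    The trade-off that drives the optimization can be illustrated with a toy cost sweep over MLSS; the aeration-energy and sludge-production curves and the prices below are made-up placeholders, not the plant's calibrated GPS-X model.

```python
import numpy as np

# Illustrative (made-up) cost model: aeration energy grows with MLSS because a
# larger biomass raises the oxygen demand, while sludge production, and hence
# disposal cost, falls as the solids concentration rises.
mlss = np.linspace(1.0, 8.0, 71)                 # gTSS/L
aeration_kwh_per_day = 2000.0 + 450.0 * mlss     # assumed linear increase
sludge_tons_per_day = 9.0 / mlss + 1.5           # assumed hyperbolic decrease

energy_price = 0.12     # EUR/kWh (assumed)
disposal_price = 60.0   # EUR/ton of sludge (assumed)

total_cost = (energy_price * aeration_kwh_per_day
              + disposal_price * sludge_tons_per_day)
best = np.argmin(total_cost)
print(f"cost-minimising MLSS ~ {mlss[best]:.1f} gTSS/L, "
      f"{total_cost[best]:.0f} EUR/day")
```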

  11. Materials and Techniques for Implantable Nutrient Sensing Using Flexible Sensors Integrated with Metal-Organic Frameworks.

    PubMed

    Ling, Wei; Liew, Guoguang; Li, Ya; Hao, Yafeng; Pan, Huizhuo; Wang, Hanjie; Ning, Baoan; Xu, Hang; Huang, Xian

    2018-06-01

    The combination of novel materials with flexible electronic technology may yield new concepts of flexible electronic devices that effectively detect various biological chemicals to facilitate understanding of biological processes and conduct health monitoring. This paper demonstrates single- or multichannel implantable flexible sensors that are surface modified with conductive metal-organic frameworks (MOFs) such as copper-MOF and cobalt-MOF with large surface area, high porosity, and tunable catalysis capability. The sensors can monitor important nutrients such as ascorbic acid, glycine, l-tryptophan (l-Trp), and glucose with detection resolutions of 14.97, 0.71, 4.14, and 54.60 × 10⁻⁶ m, respectively. In addition, they offer sensing capability even under extreme deformation and in complex surrounding environments, with continuous monitoring capability for 20 d due to minimized use of biologically active chemicals. Experiments using live cells and animals indicate that the MOF-modified sensors are biologically safe to cells, and can detect l-Trp in blood and interstitial fluid. This work represents the first effort in integrating MOFs with flexible sensors to achieve highly specific and sensitive implantable electrochemical detection and may inspire the appearance of more flexible electronic devices with enhanced capability in sensing, energy storage, and catalysis using various properties of MOFs. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Green Urbanism for the Greener Future of Metropolitan Areas

    NASA Astrophysics Data System (ADS)

    Zaręba, Anna; Krzemińska, Alicja; Widawski, Krzysztof

    2016-10-01

    Intensive urbanization is swallowing municipal green areas, which causes intensification of erosion, decrease in biodiversity and permanent fragmentation of habitats. In the face of these changes, the risk of irreversible damage to urban ecosystems is growing. That is why planning of solutions within the framework of Green Urbanism in metropolitan areas, inhabited by over 55% of the global population, is of extraordinary importance. The task of the paper is to present patterns of Green Urbanism using selected examples of metropolitan areas as case studies. The main goal of the research is to compare GU practices in different countries, in various spatial settings. The principles of the triple-zero framework: zero fossil-fuel energy use, zero waste, zero emissions (from low-to-no-carbon emissions) not only reflect contemporary trends in theoretical urban planning but are also dictated by practical considerations to create a healthy environment for a healthy society with a minimized environmental footprint. The research results help to identify Green Urbanism techniques used for multiple functions, including ecological, recreational, cultural, aesthetic and other uses, and present opportunities for implementation of Green Urbanism solutions in metropolitan areas. To achieve a healthier society and environment, highly congested and polluted cities have to be recreated through working with the existing landscape, topography and natural resources particular to the site.

  13. Data assimilation and prognostic whole ice sheet modelling with the variationally derived, higher order, open source, and fully parallel ice sheet model VarGlaS

    NASA Astrophysics Data System (ADS)

    Brinkerhoff, D. J.; Johnson, J. V.

    2013-07-01

    We introduce a novel, higher order, finite element ice sheet model called VarGlaS (Variational Glacier Simulator), which is built on the finite element framework FEniCS. Contrary to standard procedure in ice sheet modelling, VarGlaS formulates ice sheet motion as the minimization of an energy functional, conferring advantages such as a consistent platform for making numerical approximations, a coherent relationship between motion and heat generation, and implicit boundary treatment. VarGlaS also solves the equations of enthalpy rather than temperature, avoiding the solution of a contact problem. Rather than include a lengthy model spin-up procedure, VarGlaS possesses an automated framework for model inversion. These capabilities are brought to bear on several benchmark problems in ice sheet modelling, as well as a 500 yr simulation of the Greenland ice sheet at high resolution. VarGlaS performs well in benchmarking experiments and, given a constant climate and a 100 yr relaxation period, predicts a mass evolution of the Greenland ice sheet that matches present-day observations of mass loss. VarGlaS predicts a thinning in the interior and thickening of the margins of the ice sheet.

  14. Environmental management system for transportation maintenance operations : [technical brief].

    DOT National Transportation Integrated Search

    2014-04-01

    This report provides the framework for the environmental management system to analyze greenhouse gas emissions from transportation maintenance operations. The system enables users to compare different scenarios and make informed decisions to minim...

  15. Merriam's kangaroo rats (Dipodomys merriami) voluntarily select temperatures that conserve energy rather than water.

    PubMed

    Banta, Marilyn R

    2003-01-01

    Desert endotherms such as Merriam's kangaroo rat (Dipodomys merriami) use both behavioral and physiological means to conserve energy and water. The energy and water needs of kangaroo rats are affected by their thermal environment. Animals that choose temperatures within their thermoneutral zone (TNZ) minimize energy expenditure but may impair water balance because the ratio of water loss to water gain is high. At temperatures below the TNZ, water balance may be improved because animals generate more oxidative water and reduce evaporative water loss; however, they must also increase energy expenditure to maintain a normal body temperature. Hence, it is not possible for kangaroo rats to choose thermal environments that simultaneously minimize energy expenditure and increase water conservation. I used a thermal gradient to test whether water stress, energy stress, simultaneous water and energy stress, or no water/energy stress affected the thermal environment selected by D. merriami. During the night (i.e., active phase), animals in all four treatments chose temperatures near the bottom of their TNZ. During the day (i.e., inactive phase), animals in all four treatments settled at temperatures near the top of their TNZ. Thus, kangaroo rats chose thermal environments that minimized energy requirements, not water requirements. Because kangaroo rats have evolved high water use efficiency, energy conservation may be more important than water conservation to the fitness of extant kangaroo rats.

  16. Dynamic management of integrated residential energy systems

    NASA Astrophysics Data System (ADS)

    Muratori, Matteo

    This study combines principles of energy systems engineering and statistics to develop integrated models of residential energy use in the United States, to include residential recharging of electric vehicles. These models can be used by government, policymakers, and the utility industry to provide answers and guidance regarding the future of the U.S. energy system. Currently, electric power generation must match the total demand at each instant, following seasonal patterns and instantaneous fluctuations. Thus, one of the biggest drivers of costs and capacity requirement is the electricity demand that occurs during peak periods. These peak periods require utility companies to maintain operational capacity that often is underutilized, outdated, expensive, and inefficient. In light of this, flattening the demand curve has long been recognized as an effective way of cutting the cost of producing electricity and increasing overall efficiency. The problem is exacerbated by expected widespread adoption of non-dispatchable renewable power generation. The intermittent nature of renewable resources and their non-dispatchability substantially limit the ability of electric power generation of adapting to the fluctuating demand. Smart grid technologies and demand response programs are proposed as a technical solution to make the electric power demand more flexible and able to adapt to power generation. Residential demand response programs offer different incentives and benefits to consumers in response to their flexibility in the timing of their electricity consumption. Understanding interactions between new and existing energy technologies, and policy impacts therein, is key to driving sustainable energy use and economic growth. Comprehensive and accurate models of the next-generation power system allow for understanding the effects of new energy technologies on the power system infrastructure, and can be used to guide policy, technology, and economic decisions. This dissertation presents a bottom-up highly resolved model of a generic residential energy eco-system in the United States. The model is able to capture the entire energy footprint of an individual household, to include all appliances, space conditioning systems, in-home charging of plug-in electric vehicles, and any other energy needs, viewing residential and transportation energy needs as an integrated continuum. The residential energy eco-system model is based on a novel bottom-up approach that quantifies consumer energy use behavior. The incorporation of stochastic consumer behaviors allows capturing the electricity consumption of each residential specific end-use, providing an accurate estimation of the actual amount of available controllable resources, and for a better understanding of the potential of residential demand response programs. A dynamic energy management framework is then proposed to manage electricity consumption inside each residential energy eco-system. Objective of the dynamic energy management framework is to optimize the scheduling of all the controllable appliances and in-home charging of plug-in electric vehicles to minimize cost. Such an automated energy management framework is used to simulate residential demand response programs, and evaluate their impact on the electric power infrastructure. 
For instance, time-varying electricity pricing might lead to synchronization of the individual residential demands, creating pronounced rebound peaks in the aggregate demand that are higher and steeper than the original demand peaks that the time-varying electricity pricing structure intended to eliminate. The modeling tools developed in this study can serve as a virtual laboratory for investigating fundamental economic and policy-related questions regarding the interplay of individual consumers with energy use. The models developed allow for evaluating the impact of different energy policies, technology adoption, and electricity price structures on the total residential electricity demand. In particular, two case studies are reported in this dissertation to illustrate application of the tools developed. The first considers the impact of market penetration of plug-in electric vehicles on the electric power infrastructure. The second provides a quantitative comparison of the impact of different electricity price structures on residential demand response. Simulation results and an electricity price structure, called Multi-TOU, aimed at solving the rebound peak issue, are presented.

  17. University of Maryland Energy Research Center |

    Science.gov Websites

    Research areas: energy, micro power systems, energy efficiency, smart grid, power electronics, renewable energy, nuclear energy. Mission excerpt: the most efficient use of our natural resources while minimizing environmental impacts and our dependence ...

  18. Energy prediction using spatiotemporal pattern networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhanhong; Liu, Chao; Akintayo, Adedotun

    This paper presents a novel data-driven technique based on the spatiotemporal pattern network (STPN) for energy/power prediction for complex dynamical systems. Built on symbolic dynamical filtering, the STPN framework is used to capture not only the individual system characteristics but also the pair-wise causal dependencies among different sub-systems. To quantify causal dependencies, a mutual information based metric is presented and an energy prediction approach is subsequently proposed based on the STPN framework. To validate the proposed scheme, two case studies are presented, one involving wind turbine power prediction (supply side energy) using the Western Wind Integration data set generated by the National Renewable Energy Laboratory (NREL) for identifying spatiotemporal characteristics, and the other, residential electric energy disaggregation (demand side energy) using the Building America 2010 data set from NREL for exploring temporal features. In the energy disaggregation context, convex programming techniques beyond the STPN framework are developed and applied to achieve improved disaggregation performance.
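
    A mutual-information metric of the kind used to quantify pair-wise dependency between symbolized sub-system series can be sketched as follows; the quantile-based symbolization and the toy data are illustrative assumptions, not the STPN implementation.

```python
import numpy as np

def symbolize(x, n_bins=4):
    """Coarse-grain a real-valued series into symbols via quantile binning."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_information(a, b):
    """Mutual information (nats) between two symbol sequences of equal length."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for s, t in zip(a, b):
        joint[s, t] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz])))

rng = np.random.default_rng(1)
wind = rng.normal(size=2000).cumsum()            # stand-in for one sub-system
power = 0.7 * wind + rng.normal(size=2000)       # a dependent second sub-system
print(mutual_information(symbolize(wind), symbolize(power)))
```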

  19. AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.

    PubMed

    Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A

    2017-07-03

    AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics

    PubMed Central

    Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza

    2017-01-01

    Abstract AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703

  1. A distributed algorithm for demand-side management: Selling back to the grid.

    PubMed

    Latifi, Milad; Khalili, Azam; Rastegarnia, Amir; Zandi, Sajad; Bazzi, Wael M

    2017-11-01

    Demand side energy consumption scheduling is a well-known issue in the smart grid research area. However, there is a lack of a comprehensive method to manage the demand side and consumer behavior in order to obtain an optimum solution. The method needs to address several aspects, including the scale-free requirement and distributed nature of the problem, consideration of renewable resources, allowing consumers to sell electricity back to the main grid, and adaptivity to a local change in the solution point. In addition, the model should allow compensation for consumers and ensure certain satisfaction levels. To tackle these issues, this paper proposes a novel autonomous demand side management technique which minimizes consumer utility costs and maximizes consumer comfort levels in a fully distributed manner. The technique uses a new logarithmic cost function and allows consumers to sell excess electricity (e.g. from renewable resources) back to the grid in order to reduce their electric utility bill. To develop the proposed scheme, we first formulate the problem as a constrained convex minimization problem. Then, it is converted to an unconstrained version using the segmentation-based penalty method. At each consumer location, we deploy an adaptive diffusion approach to obtain the solution in a distributed fashion. The use of adaptive diffusion makes it possible for consumers to find the optimum energy consumption schedule with a small number of information exchanges. Moreover, the proposed method is able to track drifts resulting from changes in the price parameters and consumer preferences. Simulations and numerical results show that our framework can reduce the total load demand peaks, lower the consumer utility bill, and improve the consumer comfort level.
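
    The penalty-method step can be illustrated for a single consumer: the daily-energy and box constraints are folded into an unconstrained objective that plain gradient descent then minimizes. The tariff, comfort weights and quadratic penalties below are assumptions chosen for illustration; the paper's logarithmic cost and its diffusion-based multi-consumer solver are not reproduced.

```python
import numpy as np

# Single-consumer sketch of the penalty-method step: the daily-energy target and
# the per-hour box limits are folded into an unconstrained objective, which is
# then minimized by plain gradient descent.  All numbers are illustrative.
T = 24
price = 0.10 + 0.20 * np.sin(np.linspace(0.0, np.pi, T)) ** 2   # assumed tariff
pref = np.full(T, 1.0)            # preferred hourly consumption (comfort), kWh
E_target, x_max = 24.0, 2.5       # daily energy need and per-hour limit
alpha, rho = 0.5, 20.0            # comfort weight and penalty weight

def grad(x):
    g = price + 2.0 * alpha * (x - pref)                   # bill + discomfort
    g += 2.0 * rho * (x.sum() - E_target)                  # daily-energy penalty
    g += 2.0 * rho * (np.maximum(x - x_max, 0.0) - np.maximum(-x, 0.0))  # box
    return g

x = pref.copy()
for _ in range(20000):
    x -= 1e-3 * grad(x)           # gradient step on the penalized objective

print(x.round(2))                 # consumption shifts toward the cheaper hours
print(round(x.sum(), 2))          # total stays close to E_target
```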

  2. Patient-controlled sharing of medical imaging data across unaffiliated healthcare organizations

    PubMed Central

    Ahn, David K; Unde, Bhagyashree; Gage, H Donald; Carr, J Jeffrey

    2013-01-01

    Background Current image sharing is carried out by manual transportation of CDs by patients or organization-coordinated sharing networks. The former places a significant burden on patients and providers. The latter faces challenges to patient privacy. Objective To allow healthcare providers efficient access to medical imaging data acquired at other unaffiliated healthcare facilities while ensuring strong protection of patient privacy and minimizing burden on patients, providers, and the information technology infrastructure. Methods An image sharing framework is described that involves patients as an integral part of, and with full control of, the image sharing process. Central to this framework is the Patient Controlled Access-key REgistry (PCARE) which manages the access keys issued by image source facilities. When digitally signed by patients, the access keys are used by any requesting facility to retrieve the associated imaging data from the source facility. A centralized patient portal, called a PCARE patient control portal, allows patients to manage all the access keys in PCARE. Results A prototype of the PCARE framework has been developed by extending open-source technology. The results for feasibility, performance, and user assessments are encouraging and demonstrate the benefits of patient-controlled image sharing. Discussion The PCARE framework is effective in many important clinical cases of image sharing and can be used to integrate organization-coordinated sharing networks. The same framework can also be used to realize a longitudinal virtual electronic health record. Conclusion The PCARE framework allows prior imaging data to be shared among unaffiliated healthcare facilities while protecting patient privacy with minimal burden on patients, providers, and infrastructure. A prototype has been implemented to demonstrate the feasibility and benefits of this approach. PMID:22886546

  3. Oven wall panel construction

    DOEpatents

    Ellison, Kenneth; Whike, Alan S.

    1980-04-22

    An oven roof or wall is formed from modular panels, each of which comprises an inner fabric and an outer fabric. Each such fabric is formed with an angle iron framework and somewhat resilient tie-bars welded at their ends to flanges of the angle irons to maintain the inner and outer frameworks in spaced disposition while minimizing heat transfer by conduction and permitting some degree of relative movement on expansion and contraction of the module components. Suitable thermal insulation is provided within the module. Panels or skins are secured to the fabric frameworks, and each such skin is secured to a framework and projects laterally so as to slidingly overlie the adjacent frame member of an adjacent panel, in turn permitting relative movement during expansion and contraction.

  4. A generalized framework for nucleosynthesis calculations

    NASA Astrophysics Data System (ADS)

    Sprouse, Trevor; Mumpower, Matthew; Aprahamian, Ani

    2014-09-01

    Simulating astrophysical events is a difficult process, requiring a detailed pairing of knowledge from both astrophysics and nuclear physics. Astrophysics guides the thermodynamic evolution of an astrophysical event. We present a nucleosynthesis framework written in Fortran that combines as inputs a thermodynamic evolution and nuclear data to time evolve the abundances of nuclear species. Through our coding practices, we have emphasized the applicability of our framework to any astrophysical event, including those involving nuclear fission. Because these calculations are often very complicated, our framework dynamically optimizes itself based on the conditions at each time step in order to greatly minimize total computation time. To highlight the power of this new approach, we demonstrate the use of our framework to simulate both Big Bang nucleosynthesis and r-process nucleosynthesis with speeds competitive with current solutions dedicated to either process alone.
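
    The core task such a framework performs, integrating stiff abundance equations dY/dt along a prescribed thermodynamic trajectory, can be illustrated with a toy network; the framework itself is written in Fortran, and the sketch below uses Python and SciPy purely for illustration, with made-up species and rates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-species chain A -> B -> C with temperature-dependent (made-up) rates,
# integrated along a prescribed thermodynamic trajectory T9(t).  A real
# network couples thousands of species through reaction-rate libraries.
def T9(t):
    return 5.0 * np.exp(-t / 0.5)              # adiabatically cooling trajectory

def rates(t):
    lam_ab = 1.0 * T9(t) ** 2                  # A -> B rate, illustrative T-dependence
    lam_bc = 0.3 * T9(t) ** 3                  # B -> C rate
    return lam_ab, lam_bc

def rhs(t, Y):
    Ya, Yb, Yc = Y
    lam_ab, lam_bc = rates(t)
    return [-lam_ab * Ya,
            lam_ab * Ya - lam_bc * Yb,
            lam_bc * Yb]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-12)   # stiff (implicit) solver
print(sol.y[:, -1], sol.y[:, -1].sum())                # abundances conserve mass
```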

  5. Health costs of reproduction are minimal despite high fertility, mortality and subsistence lifestyle

    PubMed Central

    Gurven, Michael; Costa, Megan; Trumble, Ben; Stieglitz, Jonathan; Beheim, Bret; Eid Rodriguez, Daniel; Hooper, Paul L.; Kaplan, Hillard

    2016-01-01

    Women exhibit greater morbidity than men despite higher life expectancy. An evolutionary life history framework predicts that energy invested in reproduction trades off against investments in maintenance and survival. Direct costs of reproduction may therefore contribute to higher morbidity, especially for women given their greater direct energetic contributions to reproduction. We explore multiple indicators of somatic condition among Tsimane forager-horticulturalist women (Total Fertility Rate = 9.1; n = 592 aged 15–44 years, n = 277 aged 45+). We test whether cumulative live births and the pace of reproduction are associated with nutritional status and immune function using longitudinal data spanning 10 years. Higher parity and faster reproductive pace are associated with lower nutritional status (indicated by weight, body mass index, body fat) in a cross-section, but longitudinal analyses show improvements in women’s nutritional status with age. Biomarkers of immune function and anemia vary little with parity or pace of reproduction. Our findings demonstrate that even under energy-limited and infectious conditions, women are buffered from the potential depleting effects of rapid reproduction and compound offspring dependency characteristic of human life histories. PMID:27436412

  6. Porous coordination polymers as novel sorption materials for heat transformation processes.

    PubMed

    Janiak, Christoph; Henninger, Stefan K

    2013-01-01

    Porous coordination polymers (PCPs)/metal-organic frameworks (MOFs) are inorganic-organic hybrid materials with a permanent three-dimensional porous metal-ligand network. PCPs or MOFs are inorganic-organic analogs of zeolites in terms of porosity and reversible guest exchange properties. Microporous water-stable PCPs with high water uptake capacity are gaining attention for low temperature heat transformation applications in thermally driven adsorption chillers (TDCs) or adsorption heat pumps (AHPs). TDCs or AHPs are an alternative to traditional air conditioners or heat pumps operating on electricity or fossil fuels. By using solar or waste heat as the operating energy TDCs or AHPs can significantly help to minimize primary energy consumption and greenhouse gas emissions generated by industrial or domestic heating and cooling processes. TDCs and AHPs are based on the evaporation and consecutive adsorption of coolant liquids, preferably water, under specific conditions. The process is driven and controlled by the microporosity and hydrophilicity of the employed sorption material. Here we summarize the current investigations, developments and possibilities of PCPs/MOFs for use in low-temperature heat transformation applications as alternative materials for the traditional inorganic porous substances like silica gel, aluminophosphates or zeolites.

  7. A test of local Lorentz invariance with Compton scattering asymmetry

    DOE PAGES

    Mohanmurthy, Prajwal; Narayan, Amrendra; Dutta, Dipangkar

    2016-12-14

    Here, we report on a measurement of the constancy and anisotropy of the speed of light relative to the electrons in photon-electron scattering. We also used the Compton scattering asymmetry measured by the new Compton polarimeter in Hall C at Jefferson Lab to test for deviations from unity of the vacuum refractive index $n$. For photon energies in the range of 9-46 MeV, we obtain a new limit of $1 - n < 1.4 \times 10^{-8}$. In addition, the absence of sidereal variation over the six month period of the measurement constrains any anisotropies in the speed of light. These constitute the first study of Lorentz invariance using the Compton asymmetry. Within the minimal standard model extension framework, our results yield limits on the photon and electron coefficients $\tilde{\kappa}_{0^+}^{YZ}$, $c_{TX}$, $\tilde{\kappa}_{0^+}^{ZX}$, and $c_{TY}$. Though these limits are several orders of magnitude larger than the current best limits, they demonstrate the feasibility of using the Compton asymmetry for tests of Lorentz invariance. Future parity-violating electron scattering experiments at Jefferson Lab will use higher energy electrons, enabling better constraints.

  8. Supernatural supersymmetry: Phenomenological implications of anomaly-mediated supersymmetry breaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Jonathan L.; Moroi, Takeo

    2000-05-01

    We discuss the phenomenology of supersymmetric models in which supersymmetry breaking terms are induced by the super-Weyl anomaly. Such a scenario is envisioned to arise when supersymmetry breaking takes place in another world, i.e., on another brane. We review the anomaly-mediated framework and study in detail the minimal anomaly-mediated model parametrized by only 3+1 parameters: $M_{\mathrm{aux}}$, $m_0$, $\tan\beta$, and sgn($\mu$). The renormalization group equations exhibit a novel "focus point" (as opposed to fixed point) behavior, which allows squark and slepton masses far above their usual naturalness bounds. We present the superparticle spectrum and highlight several implications for high energy colliders. Three lightest supersymmetric particle (LSP) candidates exist: the W-ino, the stau, and the tau sneutrino. For the W-ino LSP scenario, light W-ino triplets with the smallest possible mass splittings are preferred; such W-inos are within reach of Run II Fermilab Tevatron searches. Finally, we study a variety of sensitive low energy probes, including $b \to s\gamma$, the anomalous magnetic moment of the muon, and the electric dipole moments of the electron and neutron. (c) 2000 The American Physical Society.

  9. New technology based on clamping for high gradient radio frequency photogun

    NASA Astrophysics Data System (ADS)

    Alesini, David; Battisti, Antonio; Ferrario, Massimo; Foggetta, Luca; Lollo, Valerio; Ficcadenti, Luca; Pettinacci, Valerio; Custodio, Sean; Pirez, Eylene; Musumeci, Pietro; Palumbo, Luigi

    2015-09-01

    High gradient rf photoguns have been a key development to enable several applications of high quality electron beams. They allow the generation of beams with very high peak current and low transverse emittance, satisfying the tight demands for free-electron lasers, energy recovery linacs, Compton/Thomson sources and high-energy linear colliders. In the present paper we present the design of a new rf photogun recently developed in the framework of the SPARC_LAB photoinjector activities at the laboratories of the National Institute of Nuclear Physics in Frascati (LNF-INFN, Italy). This design implements several new features from the electromagnetic point of view and, more important, a novel technology for its realization that does not involve any brazing process. From the electromagnetic point of view the gun presents high mode separation, low peak surface electric field at the iris and minimized pulsed heating on the coupler. For the realization, we have implemented a novel fabrication design that, avoiding brazing, strongly reduces the cost, the realization time and the risk of failure. Details on the electromagnetic design, low power rf measurements and high power radiofrequency and beam tests performed at the University of California in Los Angeles (UCLA) are discussed in the paper.

  10. Supporting Effective Feed-in Tariff Development in Malaysia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Since 2011, Malaysia's overarching policy framework for clean energy development, the New Energy Policy, has led to significant deployment of renewable energy and energy efficiency. Building on the New Energy Policy, Malaysia mandated adoption of a renewable energy feed-in tariff (FiT) mechanism under the 2011 Renewable Energy Act. In 2013, Malaysia's Sustainable Energy Development Authority partnered with the Clean Energy Solutions Center and the Clean Energy Regulators Initiative (CERI), via the Ask an Expert service, to implement FiT policies and expand renewable energy development. Through collaboration between the government of Malaysia and the Clean Energy Solutions Center, concrete policy action was supported and implemented, building a strong framework to expand and catalyze clean energy development.

  11. Analyzing the effect of homogeneous frustration in protein folding.

    PubMed

    Contessoto, Vinícius G; Lima, Debora T; Oliveira, Ronaldo J; Bruni, Aline T; Chahine, Jorge; Leite, Vitor B P

    2013-10-01

    The energy landscape theory has been an invaluable theoretical framework in the understanding of biological processes such as protein folding, oligomerization, and functional transitions. According to the theory, the energy landscape of protein folding is funneled toward the native state, a conformational state that is consistent with the principle of minimal frustration. It has been accepted that real proteins are selected through natural evolution, satisfying the minimum frustration criterion. However, there is evidence that a low degree of frustration accelerates folding. We examined the interplay between topological and energetic protein frustration. We employed a Cα structure-based model for simulations with a controlled nonspecific energetic frustration added to the potential energy function. Thermodynamics and kinetics of a group of 19 proteins are completely characterized as a function of increasing level of energetic frustration. We observed two well-separated groups of proteins: one group where a little frustration enhances folding rates to an optimal value and another where any energetic frustration slows down folding. Protein energetic frustration regimes and their mechanisms are explained by the role of non-native contact interactions in different folding scenarios. These findings strongly correlate with the protein free-energy folding barrier and the absolute contact order parameters. These computational results are corroborated by principal component analysis and partial least square techniques. One simple theoretical model is proposed as a useful tool for experimentalists to predict the limits of improvements in real proteins. Copyright © 2013 Wiley Periodicals, Inc.

  12. Semiconductor color-center structure and excitation spectra: Equation-of-motion coupled-cluster description of vacancy and transition-metal defect photoluminescence

    NASA Astrophysics Data System (ADS)

    Lutz, Jesse J.; Duan, Xiaofeng F.; Burggraf, Larry W.

    2018-03-01

    Valence excitation spectra are computed for deep-center silicon-vacancy defects in 3C, 4H, and 6H silicon carbide (SiC), and comparisons are made with literature photoluminescence measurements. Optimizations of nuclear geometries surrounding the defect centers are performed within a Gaussian basis-set framework using many-body perturbation theory or density functional theory (DFT) methods, with computational expenses minimized by a QM/MM technique called SIMOMM. Vertical excitation energies are subsequently obtained by applying excitation-energy, electron-attached, and ionized equation-of-motion coupled-cluster (EOMCC) methods, where appropriate, as well as time-dependent (TD) DFT, to small models including only a few atoms adjacent to the defect center. We consider the relative quality of various EOMCC and TD-DFT methods for (i) energy-ordering potential ground states differing incrementally in charge and multiplicity, (ii) accurately reproducing experimentally measured photoluminescence peaks, and (iii) energy-ordering defects of different types occurring within a given polytype. The extensibility of this approach to transition-metal defects is also tested by applying it to silicon-substituted chromium defects in SiC and comparing with measurements. It is demonstrated that, when used in conjunction with SIMOMM-optimized geometries, EOMCC-based methods can provide a reliable prediction of the ground-state charge and multiplicity, while also giving a quantitative description of the photoluminescence spectra, accurate to within 0.1 eV of measurement for all cases considered.

  13. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    NASA Astrophysics Data System (ADS)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  14. Anisotropic strange stars under simplest minimal matter-geometry coupling in the f (R ,T ) gravity

    NASA Astrophysics Data System (ADS)

    Deb, Debabrata; Guha, B. K.; Rahaman, Farook; Ray, Saibal

    2018-04-01

    We study strange stars in the framework of the f(R,T) theory of gravity. To provide exact solutions of the field equations, the gravitational Lagrangian is taken to be a linear function of the Ricci scalar R and the trace of the stress-energy tensor T, i.e. f(R,T) = R + 2χT, where χ is a constant. We also assume that the strange quark matter (SQM) distribution inside the stellar system is governed by the phenomenological MIT bag model equation of state (EOS), p_r = (1/3)(ρ - 4B), where B is the bag constant. For a specific value of B and the observed masses of strange star candidates, we obtain the exact solution of the modified Tolman-Oppenheimer-Volkoff (TOV) equation in the framework of f(R,T) gravity and study in detail how the different physical parameters, such as the metric potentials, energy density, radial and tangential pressures and anisotropy, depend on the chosen values of χ. As in GR, and as shown in our previous work [Deb et al., Ann. Phys. (Amsterdam) 387, 239 (2017), 10.1016/j.aop.2017.10.010], we find maximum anisotropy at the surface, which seems to be an inherent property of strange stars in the modified f(R,T) theory of gravity. To check the physical acceptability and stability of the stellar system based on the obtained solutions, we perform several physical tests, viz., the energy conditions, the Herrera cracking concept and the adiabatic index. We also explain the effects on the anisotropic compact stellar system that arise from the interaction between the matter and the curvature terms in f(R,T) gravity. Interestingly, as the value of χ increases the strange stars become more massive and their radii increase, so that they gradually turn into less dense compact objects. The present study suggests that modified f(R,T) gravity is a suitable theory to explain massive stellar systems such as recent magnetars, massive pulsars and super-Chandrasekhar stars, which cannot be explained in the framework of GR. For χ = 0 the standard results of Einsteinian gravity are recovered.
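
    For reference, the two modeling choices quoted in this abstract can be written compactly as $$f(R,T) = R + 2\chi T, \qquad p_r = \tfrac{1}{3}\,(\rho - 4B),$$ where χ is the matter-geometry coupling constant and B is the MIT bag constant; both expressions are taken directly from the abstract above.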

  15. Effective Techniques for Augmenting Heat Transfer: An Application of Entropy Generation Minimization Principles.

    DTIC Science & Technology

    1980-12-01

    Keywords: augmentation techniques, entropy generation, irreversibility, exergy. Only fragments of this record's abstract and table of contents survive; they indicate that the report surveys heat transfer augmentation techniques (including internally finned and internally roughened tubes) and adopts irreversibility and entropy generation as the fundamental criterion for evaluating and, eventually, minimizing the waste of usable energy (exergy).

  16. High energy XeBr electric discharge laser

    DOEpatents

    Sze, Robert C.; Scott, Peter B.

    1981-01-01

    A high energy XeBr laser for producing coherent radiation at 282 nm. The XeBr laser utilizes an electric discharge as the excitation source to minimize formation of molecular ions thereby minimizing absorption of laser radiation by the active medium. Additionally, HBr is used as the halogen donor which undergoes harpooning reactions with Xe_M* to form XeBr*.

  17. High energy XeBr electric discharge laser

    DOEpatents

    Sze, R.C.; Scott, P.B.

    A high energy XeBr laser for producing coherent radiation at 282 nm is disclosed. The XeBr laser utilizes an electric discharge as the excitation source to minimize formation of molecular ions thereby minimizing absorption of laser radiation by the active medium. Additionally, HBr is used as the halogen donor which undergoes harpooning reactions with Xe_M to form XeBr.

  18. Energy-efficient skylight structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dame, J.V.

    1988-03-29

    This patent describes an energy-efficient skylight structure for attaching to a ceiling having a hole therein. The structure includes a roof membrane of light translucent material. The improvement comprises: a framework being larger in size than the hole in the ceiling, the framework adapted to receive a light-diffusing panel; means for attaching the framework over the hole in the ceiling to support beams for the ceiling; gasket means between the framework and the ceiling for sealing the framework to the ceiling around the hole; a light-diffusing panel held by the framework; sealing means between the light-diffusing panel and the framework for sealing the perimeter of the light diffusing panel to the framework; and a light-channeling means attached at one end to the ceiling around the opening on the side opposite the framework and at the other end around the light translucent material of the roof membrane.

  19. Acoustic transducer apparatus with reduced thermal conduction

    NASA Technical Reports Server (NTRS)

    Lierke, Ernst G. (Inventor); Leung, Emily W. (Inventor); Bhat, Balakrishna T. (Inventor)

    1990-01-01

    A horn is described for transmitting sound from a transducer to a heated chamber containing an object which is levitated by acoustic energy while it is heated to a molten state, which minimizes heat transfer to thereby minimize heating of the transducer, minimize temperature variation in the chamber, and minimize loss of heat from the chamber. The forward portion of the horn, which is the portion closest to the chamber, has holes that reduce its cross-sectional area to minimize the conduction of heat along the length of the horn, with the entire front portion of the horn being rigid and having an even front face to efficiently transfer high frequency acoustic energy to fluid in the chamber. In one arrangement, the horn has numerous rows of holes extending perpendicular to the length of horn, with alternate rows extending perpendicular to one another to form a sinuous path for the conduction of heat along the length of the horn.

  20. Minimal Model of Quantum Kinetic Clusters for the Energy-Transfer Network of a Light-Harvesting Protein Complex.

    PubMed

    Wu, Jianlan; Tang, Zhoufei; Gong, Zhihao; Cao, Jianshu; Mukamel, Shaul

    2015-04-02

    The energy absorbed in a light-harvesting protein complex is often transferred collectively through aggregated chromophore clusters. For population evolution of chromophores, the time-integrated effective rate matrix allows us to construct quantum kinetic clusters quantitatively and determine the reduced cluster-cluster transfer rates systematically, thus defining a minimal model of energy-transfer kinetics. For Fenna-Matthews-Olson (FMO) and light-harvesting complex II (LHCII) monomers, quantum Markovian kinetics of clusters can accurately reproduce the overall energy-transfer process on long time scales. The dominant energy-transfer pathways are identified in the picture of aggregated clusters. The chromophores distributed extensively in various clusters can assist a fast and long-range energy transfer.
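
    The construction below is a generic population-weighted lumping of a site-basis rate matrix into cluster-cluster rates, offered only as a sketch of the idea of reduced cluster kinetics; it is not the authors' time-integrated effective rate matrix, and the 4-site matrix and two-cluster partition are hypothetical.

```python
import numpy as np

def stationary_distribution(K):
    # Null-space vector of the rate matrix (columns of K sum to zero).
    w, v = np.linalg.eig(K)
    p = np.real(v[:, np.argmin(np.abs(w))])
    return p / p.sum()

def cluster_rates(K, clusters):
    """Lump a site-basis rate matrix into cluster-cluster rates,
    weighting donor sites by their stationary populations."""
    p = stationary_distribution(K)
    n = len(clusters)
    Kc = np.zeros((n, n))
    for a, A in enumerate(clusters):
        for b, B in enumerate(clusters):
            if a == b:
                continue
            pB = p[B].sum()
            if pB > 0:
                Kc[a, b] = sum(K[i, j] * p[j] for i in A for j in B) / pB
    # Restore zero column sums so cluster populations are conserved.
    np.fill_diagonal(Kc, -Kc.sum(axis=0))
    return Kc

# Hypothetical 4-site rate matrix (column j holds rates out of site j),
# lumped into two 2-site clusters.
K = np.array([[-1.0, 0.5, 0.1, 0.0],
              [ 0.8, -1.0, 0.2, 0.1],
              [ 0.1, 0.3, -0.5, 0.6],
              [ 0.1, 0.2, 0.2, -0.7]])
print(cluster_rates(K, [np.array([0, 1]), np.array([2, 3])]))
```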

  1. New insights gained on mechanisms of low-energy proton-induced SEUs by minimizing energy straggle

    DOE PAGES

    Dodds, Nathaniel Anson; Dodd, Paul E.; Shaneyfelt, Marty R.; ...

    2015-12-01

    In this study, we present low-energy proton single-event upset (SEU) data on a 65 nm SOI SRAM whose substrate has been completely removed. Since the protons only had to penetrate a very thin buried oxide layer, these measurements were affected by far less energy loss, energy straggle, flux attrition, and angular scattering than previous datasets. The minimization of these common sources of experimental interference allows more direct interpretation of the data and deeper insight into SEU mechanisms. The results show a strong angular dependence, demonstrate that energy straggle, flux attrition, and angular scattering affect the measured SEU cross sections, and prove that proton direct ionization is the dominant mechanism for low-energy proton-induced SEUs in these circuits.

  2. Development of a Neural Network-Based Renewable Energy Forecasting Framework for Process Industries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Soobin; Ryu, Jun-Hyung; Hodge, Bri-Mathias

    2016-06-25

    This paper presents a neural network-based forecasting framework for photovoltaic power (PV) generation as a decision-supporting tool to employ renewable energies in the process industry. The applicability of the proposed framework is illustrated by comparing its performance against other methodologies such as linear and nonlinear time series modelling approaches. A case study of an actual PV power plant in South Korea is presented.
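
    A minimal sketch of such a neural-network PV forecaster, using scikit-learn and synthetic data (the features, plant response and noise model below are assumptions, not the paper's case-study data), could be:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical hourly features: plane-of-array irradiance (W/m^2),
# ambient temperature (deg C) and a smooth encoding of the hour of day.
n = 2000
irradiance = rng.uniform(0, 1000, n)
temperature = rng.uniform(0, 35, n)
hour = rng.integers(0, 24, n)
X = np.column_stack([irradiance, temperature, np.sin(2 * np.pi * hour / 24)])

# Synthetic PV output (kW) with a mild high-temperature derating plus noise.
y = 0.15 * irradiance * (1 - 0.004 * np.maximum(temperature - 25, 0)) \
    + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale inputs, then fit a small feed-forward network.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```

    In practice the comparison described in the abstract would replace the synthetic series with measured plant output and weather forecasts, and benchmark the network against linear and nonlinear time series models.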

  3. A framework for automatic feature extraction from airborne light detection and ranging data

    NASA Astrophysics Data System (ADS)

    Yan, Jianhua

    Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings and trees. In the past decade, LIDAR has attracted growing interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, voluminous data pose a new challenge for automated extraction of geometrical information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract different kinds of geometrical objects, such as terrain and buildings, from LIDAR data. Such products are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust the topology. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrated that the proposed framework achieves very good performance.
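
    The progressive morphological filter described above can be sketched on a gridded minimum-elevation surface as follows. This is a simplified illustration — the window-growth rule, thresholds and the synthetic "building" are assumptions — not the dissertation's implementation:

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(dem, cell_size=1.0, max_window=33,
                                     slope=0.3, dh0=0.5, dh_max=3.0):
    """Flag non-ground cells in a gridded minimum-elevation surface.

    Apply morphological openings with growing windows and mark cells whose
    elevation drops by more than a window-dependent threshold.
    """
    surface = dem.copy()
    nonground = np.zeros(dem.shape, dtype=bool)
    window = 3
    while window <= max_window:
        opened = grey_opening(surface, size=(window, window))
        # Elevation-difference threshold grows with the window (terrain slope).
        dh = min(dh0 + slope * (window - 1) * cell_size, dh_max)
        nonground |= (surface - opened) > dh
        surface = opened
        window = 2 * window - 1   # grow the window progressively
    return nonground

# Hypothetical 100 x 100 gridded surface: gently sloping terrain plus a "building".
dem = np.fromfunction(lambda i, j: 0.02 * i + 0.01 * j, (100, 100))
dem[40:60, 40:60] += 8.0
mask = progressive_morphological_filter(dem)
print("non-ground cells flagged:", int(mask.sum()))
```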

  4. Energy-Water Microgrid Case Study at the University of Arizona's BioSphere 2

    NASA Astrophysics Data System (ADS)

    Daw, J.; Macknick, J.; Kandt, A.; Giraldez, J.

    2016-12-01

    Microgrids can provide reliable and cost-effective energy services in a variety of conditions and locations. To date, there has been minimal effort invested in developing energy-water microgrids that demonstrate the feasibility and leverage the synergies associated with designing and operating renewable energy and water systems in a coordinated framework. Water and wastewater treatment equipment can be operated in ways to provide ancillary services to the electrical grid, and renewable energy can be utilized to power water-related infrastructure, but the potential for co-managed systems has not yet been quantified or fully characterized. Co-management and optimization of energy and water resources could lead to improved reliability and economic operating conditions. Energy-water microgrids could be a promising solution to improve energy and water resource management for islands, rural communities, distributed generation, Defense operations, and many parts of the world lacking critical infrastructure. The National Renewable Energy Laboratory (NREL) and the University of Arizona have been jointly researching energy-water microgrid opportunities through an effort at the university's BioSphere 2 (B2) Earth systems science research facility. B2 is an ideal case study for an energy-water microgrid test site, given its size, its unique mission and operations, the existence and criticality of water and energy infrastructure, and its ability to operate connected to or disconnected from the local electrical grid. Moreover, B2 is a premier facility for undertaking agricultural research, providing an excellent opportunity to evaluate connections and tradeoffs in the food-energy-water nexus. The research effort at B2 identified the technical potential and associated benefits of an energy-water microgrid through the evaluation of energy ancillary services and peak load reductions, and quantified the potential for B2 water-related loads to be utilized and modified to provide grid services in the context of an optimized energy-water microgrid. The foundational work performed at B2 also serves as a model that can be built upon for identifying relevant energy-water microgrid data, analytical requirements, and operational challenges associated with the development of future energy-water microgrids.

  5. Energy conditions of high quality laser-oxygen cutting of mild steel

    NASA Astrophysics Data System (ADS)

    Shulyatyev, V. B.; Orishich, A. M.; Malikov, A. G.

    2011-02-01

    In our previous work we experimentally established scaling laws for the oxygen-assisted laser cutting of low-carbon steel of 5-25 mm thickness. Absence of dross and minimal roughness of the cut surface were chosen as the quality criteria. Formulas were obtained to determine the optimum values of the laser power and cutting speed for a given sheet thickness. In the present paper, the energy balance of oxygen-assisted laser cutting is studied experimentally at these optimum parameters. The absorbed laser energy, the heat conduction losses and the cut width were measured experimentally, and the energy of the exothermic oxidation reaction was then found from the balance equation. To determine the integral absorption coefficient, the laser power was measured at the exit of the cutting channel during cutting. The heat conduction losses were measured by the calorimetric method. It has been established that the absorbed laser energy, oxidation energy, thermal losses and melting enthalpy, each related to a unit of sheet thickness, do not depend on the sheet thickness when cutting with minimal roughness. The results make it possible to determine the fraction of oxidized iron in the melt and the thermal efficiency when cutting with minimal roughness. The oxidation reaction contributes 50-60% of the total energy input.
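
    A schematic form of the balance referred to above, with assumed notation (not taken from the paper), is $$\eta\,P_{\mathrm{laser}} + P_{\mathrm{ox}} = Q_{\mathrm{cond}} + \dot m\,\Delta H_{\mathrm{melt}},$$ where η is the integral absorption coefficient, P_laser the incident laser power, P_ox the power released by iron oxidation, Q_cond the heat conduction losses and the last term the enthalpy flux carried away by the melt; in the study the oxidation term is the one inferred from the measured quantities.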

  6. A Framework for Engaging Navajo Women in Clean Energy Development through Applied Theatre

    ERIC Educational Resources Information Center

    Osnes, Beth; Manygoats, Adrian; Weitkamp, Lindsay

    2015-01-01

    Through applied theatre, Navajo women can participate in authoring a new story for how energy is mined, produced, developed, disseminated and used in the Navajo Nation. This article is an analysis of a creative process that was utilised with primarily Navajo women to create a Navajo Women's Energy Project (NWEP). The framework for this creative…

  7. Building energy simulation in real time through an open standard interface

    DOE PAGES

    Pang, Xiufeng; Nouidui, Thierry S.; Wetter, Michael; ...

    2015-10-20

    Building energy models (BEMs) are typically used for design and code compliance for new buildings and in the renovation of existing buildings to predict energy use. The increasing adoption of BEM as standard practice in the building industry presents an opportunity to extend the use of BEMs into construction, commissioning and operation. In 2009, the authors developed a real-time simulation framework to execute an EnergyPlus model in real time to improve building operation. This paper reports an enhancement of that real-time energy simulation framework. The previous version only works with software tools that implement the custom co-simulation interface of the Building Controls Virtual Test Bed (BCVTB), such as EnergyPlus, Dymola and TRNSYS. The new version uses an open standard interface, the Functional Mockup Interface (FMI), to provide a generic interface to any application that supports the FMI protocol. In addition, the new version utilizes the Simple Measurement and Actuation Profile (sMAP) tool as the data acquisition system to acquire, store and present data. Lastly, this paper introduces the updated architecture of the real-time simulation framework using FMI and presents proof-of-concept demonstration results which validate the new framework.
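
    The enhanced framework communicates through the FMI standard. As an illustration of what driving an FMU from a scripting environment can look like — using the third-party FMPy package rather than the authors' BCVTB/sMAP tool chain, and a hypothetical FMU name and variables — one might write:

```python
from fmpy import simulate_fmu

# Hypothetical FMU exported from a building energy model; the file name is a
# placeholder, not an artifact of the paper's framework.
result = simulate_fmu('building_model.fmu',
                      start_time=0.0,
                      stop_time=24 * 3600.0)

# simulate_fmu returns a structured array of the recorded time series.
print(result.dtype.names)
print('final time:', result['time'][-1])
```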

  8. An Ethics Framework for Public Health

    PubMed Central

    Kass, Nancy E.

    2001-01-01

    More than 100 years ago, public health began as an organized discipline, its purpose being to improve the health of populations rather than of individuals. Given its population-based focus, however, public health perennially faces dilemmas concerning the appropriate extent of its reach and whether its activities infringe on individual liberties in ethically troublesome ways. In this article a framework for ethics analysis of public health programs is proposed. To advance traditional public health goals while maximizing individual liberties and furthering social justice, public health interventions should reduce morbidity or mortality; data must substantiate that a program (or the series of programs of which a program is a part) will reduce morbidity or mortality; burdens of the program must be identified and minimized; the program must be implemented fairly and must, at times, minimize preexisting social injustices; and fair procedures must be used to determine which burdens are acceptable to a community. PMID:11684600

  9. Evaluation of the carotid artery stenosis based on minimization of mechanical energy loss of the blood flow.

    PubMed

    Sia, Sheau Fung; Zhao, Xihai; Li, Rui; Zhang, Yu; Chong, Winston; He, Le; Chen, Yu

    2016-11-01

    Internal carotid artery stenosis requires an accurate risk assessment for the prevention of stroke. Although the internal carotid artery area stenosis ratio at the common carotid artery bifurcation can be used as one of the diagnostic methods for internal carotid artery stenosis, the accuracy of the results still depends on the measurement techniques. The purpose of this study is to propose a novel method to estimate the effect of internal carotid artery stenosis on the blood flow based on the concept of minimization of energy loss. Eight internal carotid arteries from different medical centers were diagnosed as stenosed internal carotid arteries, with plaques found at different locations on the vessel. A computational fluid dynamics solver was developed based on an open-source code (OpenFOAM) to compute the flow ratio and energy loss of these stenosed internal carotid arteries. For comparison, a healthy internal carotid artery and an idealized internal carotid artery model were also tested and compared with the stenosed internal carotid arteries in terms of flow ratio and energy loss. We found that, at a given common carotid artery bifurcation, there must be a certain flow distribution between the internal and external carotid arteries for which the total energy loss at the bifurcation is at a minimum; for a given common carotid artery flow rate, an irregularly shaped plaque at the bifurcation consistently resulted in a large value of the minimized energy loss. Thus, minimization of energy loss can be used as an indicator for the assessment of internal carotid artery stenosis.
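
    The following sketch illustrates the underlying idea of an energy-loss-minimizing flow split at a bifurcation using a crude Poiseuille-type loss model; the geometry, viscosity and total flow are assumed values, and this is not the OpenFOAM-based solver used in the study:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Viscous dissipation in a straight vessel of radius r and length L carrying
# flow q (Poiseuille): 8 * mu * L * q**2 / (pi * r**4).
MU = 3.5e-3                      # blood viscosity, Pa*s (assumed)
ICA = dict(r=2.5e-3, L=0.10)     # hypothetical internal carotid geometry (m)
ECA = dict(r=2.0e-3, L=0.10)     # hypothetical external carotid geometry (m)

def vessel_loss(q, r, L):
    return 8.0 * MU * L * q**2 / (np.pi * r**4)

def total_loss(q_ica, q_cca):
    # Energy loss of splitting the common carotid flow between ICA and ECA.
    return vessel_loss(q_ica, **ICA) + vessel_loss(q_cca - q_ica, **ECA)

q_cca = 6.0e-6                   # total common carotid flow, m^3/s (assumed)
res = minimize_scalar(total_loss, bounds=(0.0, q_cca), args=(q_cca,),
                      method='bounded')
print(f"energy-loss-minimizing ICA share: {res.x / q_cca:.2f}")
```

    A stenosed or irregularly shaped vessel would be represented by a larger effective resistance, raising the minimized energy loss — which is the quantity the abstract proposes as an indicator.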

  10. Effective energy data management for low-carbon growth planning: An analytical framework for assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bo; Evans, Meredydd; Yu, Sha

    Readily available and reliable energy data is fundamental to effective analysis and policymaking for the energy sector. Energy statistics of high quality, systematically compiled and effectively disseminated, not only support governments in ensuring national security and evaluating energy policies, but also guide investment decisions in both the private and public sectors. Because of energy’s close link to greenhouse gas emissions, energy data has a particularly important role in assessing emissions and strategies to reduce them. In this study, energy data management in four countries – Canada, Germany, the United Kingdom and the United States – is examined from both organizational and operational perspectives. With insights from these best practices, we present a framework for the evaluation of national energy data management systems. It can be used by national statistics compilers to assess their chosen model and to identify areas for improvement. We then use India as a test case for this framework. Its government is working to enhance India’s energy data management to improve sustainable growth planning.

  11. Minimizing medical litigation, part 2.

    PubMed

    Harold, Tan Keng Boon

    2006-01-01

    Provider-patient disputes are inevitable in the healthcare sector. Healthcare providers and regulators should recognize this and plan opportunities to enforce alternative dispute resolution (ADR) as early as possible in the care delivery process. Negotiation is often the main dispute resolution method used by local healthcare providers, failing which litigation usually follows. The role of mediation in resolving malpractice disputes has been minimal. Healthcare providers, administrators, and regulators should therefore look toward a post-event communication-cum-mediation framework as the key national strategy for resolving malpractice disputes.

  12. Geometric constrained variational calculus. II: The second variation (Part I)

    NASA Astrophysics Data System (ADS)

    Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico

    2016-10-01

    Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.

  13. A Method for Testing the Dynamic Accuracy of Micro-Electro-Mechanical Systems (MEMS) Magnetic, Angular Rate, and Gravity (MARG) Sensors for Inertial Navigation Systems (INS) and Human Motion Tracking Applications

    DTIC Science & Technology

    2010-06-01

    Only fragments of this record's abstract and outline survive (sections on a low-cost framework and a low magnetic field environment). They indicate that materials with a significant impact on the magnetic field measured by a MARG could add errors due entirely to the test apparatus; the apparatus was therefore designed to minimize its impact on the local magnetic field and was made as rigid as possible, using 2 x 4s, to minimize any out-of-plane motions.

  14. Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Kudo, Shuhei; Bian, Xin; Li, Zhen; Karniadakis, George Em

    2015-09-01

    Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, i.e. the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamics properties on the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).

  15. Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Yu-Hang, E-mail: yuhang_tang@brown.edu; Kudo, Shuhei, E-mail: shuhei-kudo@outlook.jp; Bian, Xin, E-mail: xin_bian@brown.edu

    2015-09-15

    Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, i.e. the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamics properties on the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).

  16. Multiscale Simulation of Microbe Structure and Dynamics

    PubMed Central

    Joshi, Harshad; Singharoy, Abhishek; Sereda, Yuriy V.; Cheluvaraja, Srinath C.; Ortoleva, Peter J.

    2012-01-01

    A multiscale mathematical and computational approach is developed that captures the hierarchical organization of a microbe. It is found that a natural perspective for understanding a microbe is in terms of a hierarchy of variables at various levels of resolution. This hierarchy starts with the N-atom description and terminates with order parameters characterizing a whole microbe. This conceptual framework is used to guide the analysis of the Liouville equation for the probability density of the positions and momenta of the N atoms constituting the microbe and its environment. Using multiscale mathematical techniques, we derive equations for the co-evolution of the order parameters and the probability density of the N-atom state. This approach yields a rigorous way to transfer information between variables on different space-time scales. It elucidates the interplay between equilibrium and far-from-equilibrium processes underlying microbial behavior. It also provides a framework for using coarse-grained nanocharacterization data to guide microbial simulation. It enables a methodical search for free-energy minimizing structures, many of which are typically supported by the set of macromolecules and membranes constituting a given microbe. This suite of capabilities provides a natural framework for arriving at a fundamental understanding of microbial behavior, the analysis of nanocharacterization data, and the computer-aided design of nanostructures for biotechnical and medical purposes. Selected features of the methodology are demonstrated using our multiscale bionanosystem simulator DeductiveMultiscaleSimulator. Systems used to demonstrate the approach are structural transitions in the cowpea chlorotic mottle virus, RNA of satellite tobacco mosaic virus, virus-like particles related to human papillomavirus, and the iron-binding protein lactoferrin. PMID:21802438

  17. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.
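
    In geometric frameworks of this kind the variational problem can be stated schematically (with assumed notation, not quoted from the paper) as follows: the average entropy production of a slow protocol λ(t) of duration τ is approximated by a quadratic form in the control velocities, $$\Sigma \approx \int_0^{\tau} \dot{\lambda}^{T}\, \zeta(\lambda)\, \dot{\lambda}\; dt,$$ with ζ(λ) a positive-semidefinite metric (friction) tensor on the parameter manifold. Curves minimizing Σ at fixed τ are geodesics of ζ, which is the sense in which geodesics correspond to optimal protocols in the abstract above.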

  18. Patients Should Define Value in Health Care: A Conceptual Framework.

    PubMed

    Kamal, Robin N; Lindsay, Sarah E; Eppler, Sara L

    2018-05-10

    The main tenet of value-based health care is delivering high-quality care that is centered on the patient, improving health, and minimizing cost. Collaborative decision-making frameworks have been developed to help facilitate delivering care based on patient preferences (patient-centered care). The current value-based health care model, however, focuses on improving population health and overlooks the individuality of patients and their preferences for care. We highlight the importance of eliciting patient preferences in collaborative decision making and describe a conceptual framework that incorporates individual patients' preferences when defining value. Copyright © 2018 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  19. Intelligent demand side management of residential building energy systems

    NASA Astrophysics Data System (ADS)

    Sinha, Maruti N.

    The advent of modern sensing technologies, data processing capabilities and the rising cost of energy are driving the implementation of intelligent systems in buildings and houses, which constitute 41% of total energy consumption. The primary motivation has been to provide a framework for demand-side management and to improve overall reliability. The entire formulation is to be implemented on a NILM (Non-Intrusive Load Monitoring) system, a smart meter. This is going to play a vital role in the future of demand-side management. Utilities have started deploying smart meters throughout the world, which will essentially help to establish communication between utility and consumers. This research is focused on investigating a suitable thermal model of a residential house, building up the control system and developing diagnostic and energy usage forecast tools. The present work follows a measurement-based approach. Identification of building thermal parameters is the very first step towards developing performance measurement and controls. The proposed identification technique is a PEM (Prediction Error Method) based, discrete state-space model. Two different models have been devised. The first model is aimed at energy usage forecasting and diagnostics. Here, a novel idea is investigated that uses the integral of thermal capacity to identify the thermal model of the house. The purpose of the second identification is to build a model for the control strategy. The controller should be able to take into account weather forecast information, deal with the operating point constraints and at the same time minimize energy consumption. To design an optimal controller, an MPC (Model Predictive Control) scheme has been implemented instead of the present thermostatic/hysteretic control. This is a receding horizon approach. The capability of the proposed schemes has also been investigated.
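
    A minimal sketch of the receding-horizon idea for a single thermal zone, formulated with CVXPY (the model coefficients, comfort band, tariff and heater capacity are all assumed values, not the parameters identified in this work), might look like:

```python
import numpy as np
import cvxpy as cp

# Hypothetical identified discrete-time thermal model: x[k+1] = a*x[k] + b*u[k] + c*T_out[k]
a, b, c = 0.95, 0.3, 0.05            # assumed single-zone parameters
horizon = 24                         # hours
T_out = 5.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, horizon))   # assumed outdoor forecast
price = np.where(np.arange(horizon) % 24 < 18, 0.10, 0.25)       # assumed tariff ($/kWh)

x = cp.Variable(horizon + 1)           # indoor temperature (deg C)
u = cp.Variable(horizon, nonneg=True)  # heating power (kW)

constraints = [x[0] == 21.0]           # current measured temperature
for k in range(horizon):
    constraints += [x[k + 1] == a * x[k] + b * u[k] + c * T_out[k],
                    x[k + 1] >= 20.0, x[k + 1] <= 23.0,   # comfort band
                    u[k] <= 5.0]                          # heater capacity

# Minimize energy cost over the horizon subject to the thermal model and comfort.
problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(price, u))), constraints)
problem.solve()
print("optimal heating schedule (kW):", np.round(u.value, 2))
```

    In a receding-horizon implementation only the first control move is applied; the horizon is then shifted and the problem re-solved with updated measurements and forecasts.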

  20. Multielectron-Transfer-based Rechargeable Energy Storage of Two-Dimensional Coordination Frameworks with Non-Innocent Ligands.

    PubMed

    Wada, Keisuke; Sakaushi, Ken; Sasaki, Sono; Nishihara, Hiroshi

    2018-04-19

    The metallically conductive bis(diimino)nickel framework (NiDI), an emerging class of metal-organic framework (MOF) analogues consisting of two-dimensional (2D) coordination networks, was found to have an energy storage principle that uses both cation and anion insertion. This principle gives high energy led by a multielectron transfer reaction: Its specific capacity is one of the highest among MOF-based cathode materials in rechargeable energy storage devices, with stable cycling performance up to 300 cycles. This mechanism was studied by a wide spectrum of electrochemical techniques combined with density-functional calculations. This work shows that a rationally designed material system of conductive 2D coordination networks can be promising electrode materials for many types of energy devices. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Integration of agricultural and energy system models for biofuel assessment

    EPA Science Inventory

    This paper presents a coupled modeling framework to capture the dynamic linkages between agricultural and energy markets that have been enhanced through the expansion of biofuel production, as well as the environmental impacts resulting from this expansion. The framework incorpor...

  2. On post-inflation validity of perturbation theory in Horndeski scalar-tensor models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germani, Cristiano; Kudryashova, Nina; Watanabe, Yuki, E-mail: germani@icc.ub.edu, E-mail: nina.kudryashova@campus.lmu.de, E-mail: yuki.watanabe@nat.gunma-ct.ac.jp

    By using the Newtonian gauge, we re-confirm that, as in the minimal case, the re-scaled Mukhanov-Sasaki variable is conserved, leading to a constraint equation for the Newtonian potential. However, conversely to the minimal case, in Horndeski theories the super-horizon Newtonian potential can potentially grow to very large values after inflation exit. If that happens, inflationary predictability is lost during the oscillating period. When this does not happen, the perturbations generated during inflation can be standardly related to the CMB, if the theory chosen is minimal at low energies. As a concrete example, we analytically and numerically discuss the new Higgs inflationary case. There, the inflaton is the Higgs boson that is non-minimally kinetically coupled to gravity. During the high-energy part of the post-inflationary oscillations, the system is anisotropic and the Newtonian potential is largely amplified. Thanks to the smallness of today's amplitude of curvature perturbations, however, the system stays in the linear regime, so that inflationary predictions are not lost. At low energies, when the system relaxes to the minimal case, the anisotropies disappear and the Newtonian potential converges to a constant value. We show that the constant value to which the Newtonian potential converges is related to the frozen part of curvature perturbations during inflation, precisely like in the minimal case.

  3. System integration of wind and solar power in integrated assessment models: A cross-model evaluation of new approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pietzcker, Robert C.; Ueckerdt, Falko; Carrara, Samuel

    Mitigation-Process Integrated Assessment Models (MP-IAMs) are used to analyze long-term transformation pathways of the energy system required to achieve stringent climate change mitigation targets. Due to their substantial temporal and spatial aggregation, IAMs cannot explicitly represent all detailed challenges of integrating the variable renewable energies (VRE) wind and solar in power systems, but rather rely on parameterized modeling approaches. In the ADVANCE project, six international modeling teams have developed new approaches to improve the representation of power sector dynamics and VRE integration in IAMs. In this study, we qualitatively and quantitatively evaluate the last years' modeling progress and study the impact of VRE integration modeling on VRE deployment in IAM scenarios. For a comprehensive and transparent qualitative evaluation, we first develop a framework of 18 features of power sector dynamics and VRE integration. We then apply this framework to the newly-developed modeling approaches to derive a detailed map of strengths and limitations of the different approaches. For the quantitative evaluation, we compare the IAMs to the detailed hourly-resolution power sector model REMIX. We find that the new modeling approaches manage to represent a large number of features of the power sector, and the numerical results are in reasonable agreement with those derived from the detailed power sector model. Updating the power sector representation and the cost and resources of wind and solar substantially increased wind and solar shares across models: under a carbon price of $30/tCO2 in 2020 (increasing by 5% per year), the model-average cost-minimizing VRE share over the period 2050-2100 is 62% of electricity generation, 24 percentage points higher than with the old model version.

  4. Finite-deformation phase-field chemomechanics for multiphase, multicomponent solids

    NASA Astrophysics Data System (ADS)

    Svendsen, Bob; Shanthraj, Pratheek; Raabe, Dierk

    2018-03-01

    The purpose of this work is the development of a framework for the formulation of geometrically non-linear inelastic chemomechanical models for a mixture of multiple chemical components diffusing among multiple transforming solid phases. The focus here is on general model formulation. No specific model or application is pursued in this work. To this end, basic balance and constitutive relations from non-equilibrium thermodynamics and continuum mixture theory are combined with a phase-field-based description of multicomponent solid phases and their interfaces. Solid phase modeling is based in particular on a chemomechanical free energy and stress relaxation via the evolution of phase-specific concentration fields, order-parameter fields (e.g., related to chemical ordering, structural ordering, or defects), and local internal variables. At the mixture level, differences or contrasts in phase composition and phase local deformation in phase interface regions are treated as mixture internal variables. In this context, various phase interface models are considered. In the equilibrium limit, phase contrasts in composition and local deformation in the phase interface region are determined via bulk energy minimization. On the chemical side, the equilibrium limit of the current model formulation reduces to a multicomponent, multiphase, generalization of existing two-phase binary alloy interface equilibrium conditions (e.g., KKS). On the mechanical side, the equilibrium limit of one interface model considered represents a multiphase generalization of Reuss-Sachs conditions from mechanical homogenization theory. Analogously, other interface models considered represent generalizations of interface equilibrium conditions consistent with laminate and sharp-interface theory. In the last part of the work, selected existing models are formulated within the current framework as special cases and discussed in detail.

  5. Real-time geometry-aware augmented reality in minimally invasive surgery.

    PubMed

    Chen, Long; Tang, Wen; John, Nigel W

    2017-10-01

    The potential of augmented reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this Letter, the authors present a novel real-time AR framework for MIS that achieves interactive geometry-aware AR in endoscopic surgery with stereo views. The authors' framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve a fast tracking performance, while the three-dimensional mesh is incrementally built by a dense zero mean normalised cross-correlation stereo-matching method to improve the accuracy of the surface reconstruction. The proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, the proposed AR framework is able to interactively add annotations, localisation of tumours and vessels, and measurement labelling with greater precision and accuracy compared with the state-of-the-art approaches.
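
    Camera tracking by re-projection error minimisation, as mentioned above, can be sketched generically as a small nonlinear least-squares problem. The intrinsics, points and noise below are synthetic, and this is not the authors' stereo tracking pipeline:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical pinhole intrinsics and a small set of reconstructed 3-D points.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
points_3d = np.random.default_rng(1).uniform(-0.2, 0.2, (30, 3)) + [0, 0, 1.0]

def project(pose, pts):
    # pose = (rotation vector, translation); project points into pixel coordinates.
    rvec, t = pose[:3], pose[3:]
    cam = Rotation.from_rotvec(rvec).apply(pts) + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

# Synthetic "observed" pixels from a ground-truth pose, plus measurement noise.
true_pose = np.array([0.05, -0.02, 0.01, 0.03, 0.01, 0.02])
observed = project(true_pose, points_3d) + np.random.default_rng(2).normal(0, 0.5, (30, 2))

def residuals(pose):
    return (project(pose, points_3d) - observed).ravel()

# Estimate the camera pose by minimising the re-projection error.
est = least_squares(residuals, x0=np.zeros(6))
print("estimated pose:", np.round(est.x, 3))
```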

  6. Exploiting the spatial locality of electron correlation within the parametric two-electron reduced-density-matrix method

    NASA Astrophysics Data System (ADS)

    DePrince, A. Eugene; Mazziotti, David A.

    2010-01-01

    The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.

  7. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and the security inspection field. Material decomposition is an important issue in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of the general optimization problem, total variation minimization is imposed as a constraint on the coefficient images in our overall objective function, with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging is performed. Both numerical and physical experiments give visually better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
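
    As a crude stand-in for the role the TV constraint plays (not the paper's constrained ADMM formulation), the sketch below decomposes a synthetic two-channel measurement by direct per-pixel inversion and then applies TV denoising to each coefficient image; the mixing matrix, phantom and noise level are assumptions:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Hypothetical 2x2 spectral mixing matrix: attenuation of (bone, soft tissue)
# in the low- and high-energy channels.
A = np.array([[0.9, 0.5],
              [0.4, 0.6]])

rng = np.random.default_rng(0)
true_bone = np.zeros((64, 64)); true_bone[20:44, 20:44] = 1.0
true_soft = np.ones((64, 64)) - 0.5 * true_bone
coeffs = np.stack([true_bone, true_soft], axis=-1)           # (H, W, 2)
channels = coeffs @ A.T + rng.normal(0, 0.02, (64, 64, 2))   # noisy two-channel data

# Stage 1: direct per-pixel inversion (amplifies noise).
raw = channels @ np.linalg.inv(A).T

# Stage 2: TV denoising of each coefficient image, standing in for the TV
# constraint the paper enforces inside its ADMM iterations.
smooth = np.stack([denoise_tv_chambolle(raw[..., k], weight=0.05) for k in range(2)],
                  axis=-1)
print("MAE before TV:", np.abs(raw - coeffs).mean(),
      " after TV:", np.abs(smooth - coeffs).mean())
```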

  8. Overcoming double-step CO2 adsorption and minimizing water co-adsorption in bulky diamine-appended variants of Mg2(dobpdc)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milner, Phillip J.; Martell, Jeffrey D.; Siegelman, Rebecca L.

    Alkyldiamine-functionalized variants of the metal–organic framework Mg2(dobpdc) (dobpdc4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary, secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.

  9. Nonlinear transient analysis via energy minimization

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    The formulation basis for nonlinear transient analysis of finite element models of structures using energy minimization is provided. Geometric and material nonlinearities are included. The development is restricted to simple one and two dimensional finite elements which are regarded as being the basic elements for modeling full aircraft-like structures under crash conditions. The results indicate the effectiveness of the technique as a viable tool for this purpose.

  10. Inertial Sea Wave Energy Converter from Mediterranean Sea to Ocean - Design Optimization

    NASA Astrophysics Data System (ADS)

    Calleri, Marco

    The number of gyroscopes and the flywheel rotational speed of a Wave Energy Converter are optimized so that the device delivers a nominal power of 725 kW at the chosen installation site, while respecting imposed constraints and dimensions carried over from the previous design. The optimization minimizes the cost of the device and the bearing power losses through minimization of the device's levelized cost of energy (LCOE).

  11. Providing Focus for Financial Management.

    ERIC Educational Resources Information Center

    Falender, Andrew J.

    1983-01-01

    A case study of financial turnaround at the highly specialized New England Conservatory of Music describes five strategies to balance costs and resources within the framework of the school's objectives. Areas of cost minimizing and revenue maximizing are outlined and discussed. (MSE)

  12. Energy Performance Monitoring and Optimization System for DoD Campuses

    DTIC Science & Technology

    2014-02-01

    EPMO system exceeded the energy consumption reduction target of 20% and improved occupant thermal comfort by reducing the number of instances outside... thermal comfort constraints, and plant efficiency in the same framework [30-33]. In this framework, 4-hour... conjunction with information such as: thermal comfort constraints, equipment constraints, energy performance objectives. All the information is...

  13. Spatial optimization of cropping pattern for sustainable food and biofuel production with minimal downstream pollution.

    PubMed

    Femeena, P V; Sudheer, K P; Cibin, R; Chaubey, I

    2018-04-15

    Biofuel has emerged as a substantial source of energy in many countries. In order to avoid the 'food versus fuel' competition arising from grain-based ethanol production, the United States has passed regulations that require second-generation or cellulosic biofeedstocks to be used for the majority of biofuel production by 2022. Agricultural residue, such as corn stover, is currently the largest source of cellulosic feedstock. However, increased harvesting of crop residue may lead to increased application of fertilizers in order to recover the soil nutrients lost through residue removal. Alternatively, the introduction of less fertilizer-intensive perennial grasses such as switchgrass (Panicum virgatum L.) and Miscanthus (Miscanthus x giganteus Greef et Deu.) can be a viable source for biofuel production. Even though these grasses are shown to reduce nutrient loads to a great extent, high production costs have constrained their wide adoption as a viable feedstock. Nonetheless, there is an opportunity to optimize feedstock production to meet bioenergy demand while improving water quality. This study presents a multi-objective simulation-optimization framework using the Soil and Water Assessment Tool (SWAT) and the Multi-Algorithm Genetically Adaptive Method (AMALGAM) to develop optimal cropping patterns with minimum nutrient delivery and minimum biomass production cost. Computational time required for optimization was significantly reduced by loosely coupling SWAT with an external in-stream solute transport model. Optimization was constrained by food security and biofuel production targets that ensured not more than a 10% reduction in grain yield and at least 100 million gallons of ethanol production. A case study was carried out in the St. Joseph River Watershed, which covers a 280,000 ha area in the Midwest U.S. Results of the study indicated that the introduction of corn stover removal and perennial grass production reduces nitrate and total phosphorus loads without compromising food and biofuel production. Optimization runs yielded an optimal cropping pattern with 32% of the watershed area in stover removal, 15% in switchgrass and 2% in Miscanthus. The optimal scenario resulted in a 14% reduction in nitrate and a 22% reduction in total phosphorus from the baseline. This framework can be used as an effective tool to make decisions regarding environmentally and economically sustainable strategies that minimize nutrient delivery at minimal biomass production cost, while simultaneously meeting food and biofuel production targets. Copyright © 2018 Elsevier Ltd. All rights reserved.
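
    One way to picture how the food-security and ethanol constraints can enter such a bi-objective search is to fold them into the fitness evaluation as large penalties, as in the sketch below. The `simulate` callable, its output fields and the penalty values are hypothetical stand-ins for the SWAT-based workflow described in the abstract, not the authors' code.

```python
def evaluate(pattern, simulate):
    """Bi-objective fitness of a candidate cropping pattern with constraints as penalties.

    `pattern` is a candidate land-use allocation; `simulate` is a stand-in for the
    watershed/economic model and is assumed to return a dict of annual outputs.
    """
    out = simulate(pattern)
    nutrient_load = out["nitrate"] + out["total_phosphorus"]   # e.g. kg/yr
    biomass_cost = out["production_cost"]                      # e.g. $/yr

    penalty = 0.0
    if out["grain_yield"] < 0.90 * out["baseline_grain_yield"]:   # <=10% grain-yield loss
        penalty += 1e9
    if out["ethanol_gal"] < 100e6:                                # >=100 million gallons
        penalty += 1e9

    # Both objectives are minimised by the multi-objective search (e.g. AMALGAM/NSGA-II).
    return nutrient_load + penalty, biomass_cost + penalty
```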

  14. Overcoming double-step CO2 adsorption and minimizing water co-adsorption in bulky diamine-appended variants of Mg2(dobpdc)

    PubMed Central

    Milner, Phillip J.; Martell, Jeffrey D.; Siegelman, Rebecca L.; Gygi, David; Weston, Simon C.

    2017-01-01

    Alkyldiamine-functionalized variants of the metal–organic framework Mg2(dobpdc) (dobpdc4– = 4,4′-dioxidobiphenyl-3,3′-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary, secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc4– = 4,4′′-dioxido-[1,1′:4′,1′′-terphenyl]-3,3′′-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc4– = 3,3′-dioxidobiphenyl-4,4′-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications. PMID:29629084

  15. GLIMPSE: a rapid decision framework for energy and environmental policy

    EPA Science Inventory

    Over the coming decades, new energy production technologies and the policies that oversee them will affect human health, the vitality of our ecosystems, and the stability of the global climate. The GLIMPSE decision model framework provides insights about the implications of techn...

  16. Most energetic passive states.

    PubMed

    Perarnau-Llobet, Martí; Hovhannisyan, Karen V; Huber, Marcus; Skrzypczyk, Paul; Tura, Jordi; Acín, Antonio

    2015-10-01

    Passive states are defined as those states that do not allow for work extraction in a cyclic (unitary) process. Within the set of passive states, thermal states are the most stable ones: they maximize the entropy for a given energy, and similarly they minimize the energy for a given entropy. Here we find the passive states lying in the other extreme, i.e., those that maximize the energy for a given entropy, which we show also minimize the entropy when the energy is fixed. These extremal properties make these states useful to obtain fundamental bounds for the thermodynamics of finite-dimensional quantum systems, which we show in several scenarios.

  17. Waste biomass toward hydrogen fuel supply chain management for electricity: Malaysia perspective

    NASA Astrophysics Data System (ADS)

    Zakaria, Izatul Husna; Ibrahim, Jafni Azhan; Othman, Abdul Aziz

    2016-08-01

    Green energy is becoming an important aspect of energy security for every country in the world, reducing dependence on fossil fuel imports and improving quality of life through a healthier environment. This conceptual paper sets out an approach for determining the physical flow characteristics of waste wood biomass from large-scale plantations used to produce gaseous fuel for electricity via gasification. The scope of this study is supply chain management of syngas fuel from wood waste biomass using direct gasification conversion technology. The literature reviewed covers energy security, Malaysia's energy mix, biomass supply chain management and conversion technology. The paper draws on the theoretical framework of a transportation model (Lumsden, 2006) and the function of the terminal (Hulten, 1997). To incorporate the unique properties of biomass, Biomass Element Life Cycle Analysis (BELCA), a novel technique developed to understand the behaviour of biomass supply, is adopted. The theoretical frameworks used to answer the research questions are the Supply Chain Operations Reference (SCOR) framework and the sustainable strategy development in supply chain management framework.

  18. On Reliable and Efficient Data Gathering Based Routing in Underwater Wireless Sensor Networks.

    PubMed

    Liaqat, Tayyaba; Akbar, Mariam; Javaid, Nadeem; Qasim, Umar; Khan, Zahoor Ali; Javaid, Qaisar; Alghamdi, Turki Ali; Niaz, Iftikhar Azim

    2016-08-30

    This paper presents a cooperative routing scheme to improve data reliability. The proposed protocol achieves its objective, however, at the cost of surplus energy consumption. Sink mobility is therefore introduced to minimize the energy consumption of nodes, as the mobile sink collects data directly from the network nodes over reduced communication distances. We also present delay- and energy-optimized versions of our proposed RE-AEDG to further enhance its performance. Simulation results prove the effectiveness of our proposed RE-AEDG in terms of the selected performance metrics.

  19. SU-F-I-41: Calibration-Free Material Decomposition for Dual-Energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, W; Xing, L; Zhang, Q

    2016-06-15

    Purpose: To eliminate the tedious phantom calibration or manual region-of-interest (ROI) selection required in dual-energy CT material decomposition, we establish a new projection-domain material decomposition framework that incorporates the energy spectrum. Methods: Similar to the case of dual-energy CT, the integral of the basis material image in our model is expressed as a linear combination of basis functions, which are polynomials of the high- and low-energy raw projection data. To yield the unknown coefficients of the linear combination, the proposed algorithm minimizes the quadratic error between the high- and low-energy raw projection data and the projections calculated using the material images. We evaluate the algorithm with an iodine concentration numerical phantom at different dose and iodine concentration levels. The x-ray energy spectra of the high and low energy are estimated using an indirect transmission method. The derived monochromatic images are compared with the high- and low-energy CT images to demonstrate beam hardening artifact reduction. Quantitative results were measured and compared to the true values. Results: The differences between the true density values used for simulation and those obtained from the monochromatic images are 1.8%, 1.3%, 2.3%, and 2.9% for the dose levels from standard dose to 1/8 dose, and 0.4%, 0.7%, 1.5%, and 1.8% for the four iodine concentration levels from 6 mg/mL to 24 mg/mL. For all of the cases, beam hardening artifacts, especially streaks between dense inserts, are almost completely removed in the monochromatic images. Conclusion: The proposed algorithm provides an effective way to yield material images and artifact-free monochromatic images at different dose levels without the need for phantom calibration or ROI selection. Furthermore, the approach also yields accurate results when the concentration of the iodine insert is very low, suggesting the algorithm is robust with respect to the low-contrast scenario.
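
    The polynomial expansion of a basis-material line integral mentioned above can be sketched as follows: monomials of the high- and low-energy raw projections are combined linearly, with coefficients that would be found by minimising the reprojection error against the measured data. The function names and the second-order polynomial are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def poly_basis(p_high, p_low, order=2):
    """Monomials p_H**i * p_L**j (i + j <= order) built from the raw dual-energy projections."""
    terms = [p_high**i * p_low**j
             for i in range(order + 1) for j in range(order + 1 - i)]
    return np.stack(terms, axis=-1)                 # shape (..., n_terms)

def material_line_integral(p_high, p_low, coeffs, order=2):
    """Line integral of one basis material as a linear combination of the monomials.
    `coeffs` would be obtained by minimising the quadratic mismatch between the measured
    projections and those re-computed from the material images, as described above."""
    return poly_basis(p_high, p_low, order) @ coeffs
```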

  20. Understanding Energy Impacts of Oversized Air Conditioners; NREL Highlights, Research & Development, NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-06-01

    This NREL highlight describes a simulation-based study that analyzes the energy impacts of oversized residential air conditioners. Researchers found that, if parasitic power losses are minimal, there is very little increase in energy use for oversizing an air conditioner. The research demonstrates that new residential air conditioners can be sized primarily based on comfort considerations, because capacity typically has minimal impact on energy efficiency. The results of this research can be useful for contractors and homeowners when choosing a new air conditioner or heat pump during retrofits of existing homes. If the selected unit has a crankcase heater, performing proper load calculations to be sure the new unit is not oversized will help avoid excessive energy use.

  1. Multi-threaded Event Processing with DANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Lawrence; Elliott Wolin

    2007-05-14

    The C++ data analysis framework DANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab commensurate with the anticipated 12GeV upgrade. The DANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software. This document describes how DANA implements multi-threaded event processing and compares it to simply running multiple instances of a program. Also presented are relative reconstruction rates for Pentium4, Xeon, and Opteron based machines.

  2. Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Wu, Di; Kalsi, Karanjit

    This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources in distribution systems in order to more reliably manage the future U.S. electric power grid under the high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of the future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by the features of being open, flexible and interoperable with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. The real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages stable within acceptable ranges during real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.

  3. The simplest non-minimal matter-geometry coupling in the f(R, T) cosmology

    NASA Astrophysics Data System (ADS)

    Moraes, P. H. R. S.; Sahoo, P. K.

    2017-07-01

    f(R, T) gravity is an extended theory of gravity in which the gravitational action contains general terms of both the Ricci scalar R and the trace of the energy-momentum tensor T. In this way, f(R, T) models are capable of describing a non-minimal coupling between geometry (through terms in R) and matter (through terms in T). In this article we construct a cosmological model from the simplest non-minimal matter-geometry coupling within the f(R, T) gravity formalism, by means of an effective energy-momentum tensor, given by the sum of the usual matter energy-momentum tensor and a dark energy contribution, with the latter coming from the matter-geometry coupling terms. We apply the energy conditions to our solutions in order to obtain a range of values for the free parameters of the model which yield a healthy and well-behaved scenario. For some values of the free parameters that satisfy the energy conditions, it is possible to predict a transition from a decelerated period of the expansion of the universe to a period of acceleration (dark energy era). We also propose further applications of this particular case of the f(R, T) formalism in order to check its reliability in fields other than cosmology.

  4. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained using the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where the adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  5. Energy and time determine scaling in biological and computer designs

    PubMed Central

    Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-01-01

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524

  6. The Dominant Folding Route Minimizes Backbone Distortion in SH3

    PubMed Central

    Lammert, Heiko; Noel, Jeffrey K.; Onuchic, José N.

    2012-01-01

    Energetic frustration in protein folding is minimized by evolution to create a smooth and robust energy landscape. As a result the geometry of the native structure provides key constraints that shape protein folding mechanisms. Chain connectivity in particular has been identified as an essential component for realistic behavior of protein folding models. We study the quantitative balance of energetic and geometrical influences on the folding of SH3 in a structure-based model with minimal energetic frustration. A decomposition of the two-dimensional free energy landscape for the folding reaction into relevant energy and entropy contributions reveals that the entropy of the chain is not responsible for the folding mechanism. Instead the preferred folding route through the transition state arises from a cooperative energetic effect. Off-pathway structures are penalized by excess distortion in local backbone configurations and contact pair distances. This energy cost is a new ingredient in the malleable balance of interactions that controls the choice of routes during protein folding. PMID:23166485

  7. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
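
    A schematic of the constrained minimisation described above: a single-particle operator Q is added to the mean-field Hamiltonian with a Lagrange multiplier, and the multiplier is tuned until the expectation value hits the requested target. The symbols below are generic placeholders, not the notation of the HFgradZ.py/HFgradT.py codes.

```latex
% Constrained Hartree-Fock minimisation (schematic):
\hat{h}' \;=\; \hat{h} \;-\; \lambda\,\hat{Q},
\qquad
\min_{\rho}\Big( E_{\mathrm{HF}}[\rho] \;-\; \lambda\,\langle \hat{Q} \rangle_{\rho} \Big),
\qquad
\lambda \ \text{adjusted until}\ \langle \hat{Q} \rangle_{\rho} = q_{\mathrm{target}} .
```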

  8. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals

    PubMed Central

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y.; Miller, Patrick J. O.

    2013-01-01

    Flying and terrestrial animals should spend energy to move while supporting their weight against gravity. On the other hand, supported by buoyancy, aquatic animals can minimize the energy cost for supporting their body weight and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save energy cost for locomotion. We manipulated the body density of seals using detachable weights and floats, and compared stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals had smaller stroke efforts to swim a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density to minimize the cost of transport in horizontal swimming by seals. PMID:23857645

  9. Energy Minimization of Molecular Features Observed on the (110) Face of Lysozyme Crystals

    NASA Technical Reports Server (NTRS)

    Perozzo, Mary A.; Konnert, John H.; Li, Huayu; Nadarajah, Arunan; Pusey, Marc

    1999-01-01

    Molecular dynamics and energy minimization have been carried out using the program XPLOR to check the plausibility of a model lysozyme crystal surface. The molecular features of the (110) face of lysozyme were observed using atomic force microscopy (AFM). A model of the crystal surface was constructed using the PDB file 193L and was used to simulate an AFM image. Molecule translations, van der Waals radii, and the assumed AFM tip shape were adjusted to maximize the correlation coefficient between the experimental and simulated images. The highest degree of correlation (0.92) was obtained with the molecules displaced over 6 Å from their positions within the bulk of the crystal. The quality of this starting model, the extent of energy minimization, and the correlation coefficient between the final model and the experimental data will be discussed.
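
    The correlation score used to compare the simulated and experimental images can be sketched as a plain Pearson correlation over the two height maps; the model parameters (molecule translations, radii, tip shape) are then adjusted to maximise this value. The function name is illustrative, not the authors' code.

```python
import numpy as np

def image_correlation(experimental, simulated):
    """Pearson correlation coefficient between an experimental AFM image and a
    simulated image of the model surface (both given as 2-D height arrays)."""
    a = experimental - experimental.mean()
    b = simulated - simulated.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```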

  10. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals.

    PubMed

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y; Miller, Patrick J O

    2013-01-01

    Flying and terrestrial animals should spend energy to move while supporting their weight against gravity. On the other hand, supported by buoyancy, aquatic animals can minimize the energy cost for supporting their body weight and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save energy cost for locomotion. We manipulated the body density of seals using detachable weights and floats, and compared stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals had smaller stroke efforts to swim a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density to minimize the cost of transport in horizontal swimming by seals.

  11. Green Energy in New Construction: Maximize Energy Savings and Minimize Cost

    ERIC Educational Resources Information Center

    Ventresca, Joseph

    2010-01-01

    People often use the term "green energy" to refer to alternative energy technologies. But green energy doesn't guarantee maximum energy savings at a minimum cost--a common misconception. For school business officials, green energy means getting the lowest energy bills for the lowest construction cost, which translates into maximizing green energy…

  12. Full open-framework batteries for stationary energy storage

    NASA Astrophysics Data System (ADS)

    Pasta, Mauro; Wessells, Colin D.; Liu, Nian; Nelson, Johanna; McDowell, Matthew T.; Huggins, Robert A.; Toney, Michael F.; Cui, Yi

    2014-01-01

    New types of energy storage are needed in conjunction with the deployment of renewable energy sources and their integration with the electrical grid. We have recently introduced a family of cathodes involving the reversible insertion of cations into materials with the Prussian Blue open-framework crystal structure. Here we report a newly developed manganese hexacyanomanganate open-framework anode that has the same crystal structure. By combining it with the previously reported copper hexacyanoferrate cathode we demonstrate a safe, fast, inexpensive, long-cycle life aqueous electrolyte battery, which involves the insertion of sodium ions. This high rate, high efficiency cell shows a 96.7% round trip energy efficiency when cycled at a 5C rate and an 84.2% energy efficiency at a 50C rate. There is no measurable capacity loss after 1,000 deep-discharge cycles. Bulk quantities of the electrode materials can be produced by a room temperature chemical synthesis from earth-abundant precursors.

  13. Full open-framework batteries for stationary energy storage.

    PubMed

    Pasta, Mauro; Wessells, Colin D; Liu, Nian; Nelson, Johanna; McDowell, Matthew T; Huggins, Robert A; Toney, Michael F; Cui, Yi

    2014-01-01

    New types of energy storage are needed in conjunction with the deployment of renewable energy sources and their integration with the electrical grid. We have recently introduced a family of cathodes involving the reversible insertion of cations into materials with the Prussian Blue open-framework crystal structure. Here we report a newly developed manganese hexacyanomanganate open-framework anode that has the same crystal structure. By combining it with the previously reported copper hexacyanoferrate cathode we demonstrate a safe, fast, inexpensive, long-cycle life aqueous electrolyte battery, which involves the insertion of sodium ions. This high rate, high efficiency cell shows a 96.7% round trip energy efficiency when cycled at a 5C rate and an 84.2% energy efficiency at a 50C rate. There is no measurable capacity loss after 1,000 deep-discharge cycles. Bulk quantities of the electrode materials can be produced by a room temperature chemical synthesis from earth-abundant precursors.

  14. A framework to analyze emissions implications of ...

    EPA Pesticide Factsheets

    Future-year emissions depend highly on the evolution of the economy, technology, and current and future regulatory drivers. A scenario framework was adopted to analyze various technology development pathways and societal changes, while considering existing regulations and future regulatory uncertainty, and to evaluate the resulting emissions growth patterns. The framework integrates EPA's energy systems model with an economic Input-Output (I/O) Life Cycle Assessment model. The EPAUS9r MARKAL database is assembled from a set of technologies to represent the U.S. energy system within the MARKAL bottom-up, technology-rich energy modeling framework. The general state of the economy and consequent demands for goods and services from these sectors are taken exogenously in MARKAL. It is important to characterize exogenous inputs about the economy to appropriately represent the industrial sector outlook for each of the scenarios and case studies evaluated. An economic input-output (I/O) model of the US economy is constructed to link up with MARKAL. The I/O model enables users to change input requirements (e.g. energy intensity) for different sectors or the share of consumer income expended on a given good. This gives end-users a mechanism for modeling change in the two dimensions of technological progress and consumer preferences that define the future scenarios. The framework will then be extended to include an environmental I/O framework to track life cycle emissions associated

  15. Beyond the Standard Model: The pragmatic approach to the gauge hierarchy problem

    NASA Astrophysics Data System (ADS)

    Mahbubani, Rakhi

    The current favorite solution to the gauge hierarchy problem, the Minimal Supersymmetric Standard Model (MSSM), is looking increasingly fine-tuned, as recent results from LEP-II have pushed it to regions of its parameter space where a light Higgs seems unnatural. Given this fact it seems sensible to explore other approaches to this problem; we study three alternatives here. The first is a Little Higgs theory, in which the Higgs particle is realized as the pseudo-Goldstone boson of an approximate global chiral symmetry and so is naturally light. We analyze precision electroweak observables in the Minimal Moose model, one example of such a theory, and look for regions in its parameter space that are consistent with current limits on these. It is also possible to find a solution within a supersymmetric framework by adding to the MSSM superpotential a λS H_u H_d term and UV completing with new strong dynamics under which S is a composite before λ becomes non-perturbative. This allows us to increase the MSSM tree-level Higgs mass bound to a value that alleviates the supersymmetric fine-tuning problem with elementary Higgs fields, maintaining gauge coupling unification in a natural way. Finally we try an entirely different tack, in which we do not attempt to solve the hierarchy problem, but rather assume that the tuning of the Higgs can be explained in some unnatural way, from environmental considerations for instance. With this philosophy in mind we study in detail the low-energy phenomenology of the minimal extension to the Standard Model with a dark matter candidate and gauge coupling unification, consisting of additional fermions with the quantum numbers of SUSY higgsinos, and a singlet.

  16. A technical framework to describe occupant behavior for building energy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, William; Hong, Tianzhen

    2013-12-20

    Green buildings that fail to meet expected design performance criteria indicate that technology alone does not guarantee high performance. Human influences are quite often simplified and ignored in the design, construction, and operation of buildings. Energy-conscious human behavior has been demonstrated to be a significant positive factor for improving the indoor environment while reducing the energy use of buildings. In our study we developed a new technical framework to describe energy-related human behavior in buildings. The energy-related behavior includes accounting for individuals and groups of occupants and their interactions with building energy services systems, appliances and facilities. The technical framework consists of four key components: (i) the drivers behind energy-related occupant behavior, which are biological, societal, environmental, physical, and economical in nature; (ii) the needs of the occupants, based on satisfying criteria that are either physical (e.g. thermal, visual and acoustic comfort) or non-physical (e.g. entertainment, privacy, and social reward); (iii) the actions that building occupants perform when their needs are not fulfilled; and (iv) the systems with which an occupant can interact to satisfy their needs. The technical framework aims to provide a standardized description of a complete set of human energy-related behaviors in the form of an XML schema. For each type of behavior (e.g., occupants opening/closing windows, switching on/off lights, etc.) we identify a set of common behaviors based on a literature review, survey data, and our own field study and analysis. Stochastic models are adopted or developed for each type of behavior to enable the evaluation of the impact of human behavior on energy use in buildings, during either the design or operation phase. We will also demonstrate the use of the technical framework in assessing the impact of occupant behavior on energy-saving technologies. The technical framework presented is part of our human behavior research, a 5-year program under the U.S.-China Clean Energy Research Center for Building Energy Efficiency.

  17. Shape optimization of self-avoiding curves

    NASA Astrophysics Data System (ADS)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.
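
    A much-simplified discrete analogue of the objective described above: bending energy of a closed polyline plus a penalty that activates when non-neighbouring nodes come closer than a minimum distance. This is a sketch under those assumptions, not the paper's shape-differential/SQP formulation; names and the penalty form are illustrative.

```python
import numpy as np

def curve_energy(points, kappa=1.0, penalty=1.0, r_min=0.1):
    """Bending energy of a closed polyline plus a soft self-avoidance penalty."""
    p = np.asarray(points, dtype=float)
    n = len(p)
    # Discrete bending: squared turning angle per segment length (schematic).
    e_bend = 0.0
    for i in range(n):
        a, b, c = p[i - 1], p[i], p[(i + 1) % n]
        t1, t2 = b - a, c - b
        cos_t = np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2))
        theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
        e_bend += kappa * theta**2 / np.linalg.norm(t2)
    # Proximity (self-avoidance) penalty between non-adjacent nodes.
    e_prox = 0.0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:      # these two are adjacent on a closed curve
                continue
            d = np.linalg.norm(p[i] - p[j])
            if d < r_min:
                e_prox += penalty * (r_min - d)**2
    return e_bend + e_prox
```

    Such an energy could be handed to any gradient-based or derivative-free minimizer; the paper instead uses shape calculus to obtain exact sensitivities before applying SQP.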

  18. Simulation of minimally invasive vascular interventions for training purposes.

    PubMed

    Alderliesten, Tanja; Konings, Maurits K; Niessen, Wiro J

    2004-01-01

    To master the skills required to perform minimally invasive vascular interventions, proper training is essential. A computer simulation environment has been developed to provide such training. The simulation is based on an algorithm specifically developed to simulate the motion of a guide wire--the main instrument used during these interventions--in the human vasculature. In this paper, the design and model of the computer simulation environment is described and first results obtained with phantom and patient data are presented. To simulate minimally invasive vascular interventions, a discrete representation of a guide wire is used which allows modeling of guide wires with different physical properties. An algorithm for simulating the propagation of a guide wire within a vascular system, on the basis of the principle of minimization of energy, has been developed. Both longitudinal translation and rotation are incorporated as possibilities for manipulating the guide wire. The simulation is based on quasi-static mechanics. Two types of energy are introduced: internal energy related to the bending of the guide wire, and external energy resulting from the elastic deformation of the vessel wall. A series of experiments were performed on phantom and patient data. Simulation results are qualitatively compared with 3D rotational angiography data. The results indicate plausible behavior of the simulation.

  19. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, the required processing power is unlikely to be achieved simply by aggregating emerging heterogeneous many-core platforms consisting of CPU, Field Programmable Gate Array and Graphics Processor cores under these power and performance constraints. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). These DNN models are typically trained on GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
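
    The two compression techniques named above can be illustrated in a few lines: unstructured magnitude pruning and uniform symmetric quantization of a weight tensor. This is a generic sketch, not the authors' pipeline; the chosen sparsity and bit width are arbitrary examples.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights (unstructured magnitude pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def quantize_symmetric(weights, n_bits=8):
    """Uniform symmetric quantization of float weights to signed integers (n_bits <= 8)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale            # dequantize as q * scale
```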

  20. Eating patterns, diet quality and energy balance: a perspective about applications and future directions for the food industry.

    PubMed

    Layman, Donald K

    2014-07-01

    The food industry is the point of final integration of consumer food choices with dietary guidelines. For more than 40 years, nutrition recommendations emphasized reducing dietary intake of animal fats, cholesterol, and protein and increasing intake of cereal grains. The food industry responded by creating a convenient, low cost and diverse food supply that featured fat-free cookies, cholesterol-free margarines, and spaghetti with artificial meat sauce. However, research focused on obesity, aging, and Metabolic Syndrome has demonstrated merits of increased dietary protein and reduced amounts of carbohydrates. Dietary guidelines have changed from a conceptual framework of a daily balance of food groups represented as building blocks in a pyramid designed to encourage consumers to avoid fat, to a plate design that creates a meal approach to nutrition and highlights protein and vegetables and minimizes grain carbohydrates. Coincident with the changing dietary guidelines, consumers are placing higher priority on foods for health and seeking foods with more protein, less sugars and minimal processing that are fresh, natural, and with fewer added ingredients. Individual food companies must adapt to changing nutrition knowledge, dietary guidelines, and consumer priorities. The impact on the food industry will be specific to each company based on their products, culture and capacity to adapt. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Cargo tank incident study (CTIS) : rollover data and risk framework.

    DOT National Transportation Integrated Search

    2017-03-01

    It is critical to our nation's safety to minimize the risk of accidents involving the transportation of hazardous materials on our nation's roadways via commercial cargo tank trucks. This research included a detailed human factors analysis of car...

  2. Exposure Reconstruction: A Framework of Advancing Exposure Assessment

    EPA Science Inventory

    The U.S. Environmental Protection Agency’s (EPA) primary goal for environmental protection is to eliminate or minimize the exposure of humans and ecosystems to potential contaminants. With the number of environmental contaminants increasing annually – more than 2000 new chemical...

  3. Inherent Structure versus Geometric Metric for State Space Discretization

    PubMed Central

    Liu, Hanzhong; Li, Minghai; Fan, Jue; Huo, Shuanghong

    2016-01-01

    Inherent structure (IS) and geometry-based clustering methods are commonly used for analyzing molecular dynamics trajectories. ISs are obtained by minimizing the sampled conformations into local minima on the potential/effective energy surface. The conformations that are minimized into the same energy basin belong to one cluster. We investigate the influence of the applications of these two methods of trajectory decomposition on our understanding of the thermodynamics and kinetics of alanine tetrapeptide. We find that at the micro cluster level, the IS approach and the root-mean-square deviation (RMSD) based clustering method give totally different results. Depending on the local features of the energy landscape, conformations with close RMSDs can be minimized into different minima, while conformations with large RMSDs can be minimized into the same basin. However, the relaxation timescales calculated based on the transition matrices built from the micro clusters are similar. The discrepancy at the micro cluster level leads to different macro clusters. Although the dynamic models established through both clustering methods are validated as approximately Markovian, the IS approach seems to give a meaningful state space discretization at the macro cluster level. PMID:26915811

  4. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models.

    PubMed

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson-Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on the previous works on the problem of minimizing surface areas and on our observations on the coupling between solute molecular mechanical interactions with the continuum solvent. Our studies justify rigorously the self consistency of the proposed diffuse-interface variational models of implicit solvation.

  5. Theory of Disk-to-Vesicle Transformation

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Shi, An-Chang

    2009-03-01

    Self-assembled membranes from amphiphilic molecules, such as lipids and block copolymers, can assume a variety of morphologies dictated by energy minimization of the system. The membrane energy is characterized by a bending modulus (κ), a Gaussian modulus (κG), and the line tension (γ) of the edge. Two basic morphologies of membranes are flat disks, which minimize the bending energy at the cost of the edge energy, and enclosed vesicles, which minimize the edge energy at the cost of bending energy. In our work, the transition from disk to vesicle is studied theoretically using the string method, which is designed to find the minimum energy path (MEP), or the most probable transition path, between two local minima of an energy landscape. Previous studies of the disk-to-vesicle transition usually approximate the transitional states by a series of spherical cups, and found that the spherical cups do not correspond to stable or metastable states of the system. Our calculation demonstrates that the intermediate shapes along the MEP are very different from spherical cups. Furthermore, some of these transitional states can be metastable. The disk-to-vesicle transition pathways are governed by two scaled parameters, κG/κ and γR0/4κ, where R0 is the radius of the disk. In particular, a metastable intermediate state is predicted, which may correspond to the open morphologies observed in experiments and simulations.
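
    A back-of-the-envelope energy balance consistent with the two scaled parameters quoted above (and standard Helfrich theory), comparing a flat disk of radius R_0 with a closed spherical vesicle of the same area; this is a schematic endpoint comparison, not the string-method calculation of the transition path itself.

```latex
% Disk: only edge energy; sphere: only bending + Gaussian energy (scale-invariant).
E_{\mathrm{disk}} = 2\pi R_0 \gamma , \qquad
E_{\mathrm{vesicle}} = 8\pi\kappa + 4\pi\kappa_G ,
\qquad\Longrightarrow\qquad
E_{\mathrm{vesicle}} < E_{\mathrm{disk}}
\;\;\text{when}\;\;
\frac{\gamma R_0}{4\kappa} \;>\; 1 + \frac{\kappa_G}{2\kappa}.
```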

  6. 78 FR 12251 - Energy Efficiency Program for Commercial and Industrial Equipment: Public Meeting and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ... Efficiency Program for Commercial and Industrial Equipment: Public Meeting and Availability of the Framework... the notice of public meeting and availability of the Framework Document pertaining to the development of energy conservation standards for commercial and industrial fan and blower equipment published on...

  7. Industrial Sector Energy Efficiency Modeling (ISEEM) Framework Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karali, Nihan; Xu, Tengfang; Sathaye, Jayant

    2012-12-12

    The goal of this study is to develop a new bottom-up industry-sector energy-modeling framework aimed at addressing least-cost regional and global carbon reduction strategies and at improving on the capabilities and limitations of existing models by allowing trading across regions and countries as an alternative.

  8. Establishing a Commercial Buildings Energy Data Framework for India: A Comprehensive Look at Data Collection Approaches, Use Cases and Institutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iyer, Maithili; Kumar, Satish; Mathew, Sangeeta

    Enhancing energy efficiency of the commercial building stock is an important aspect of any national energy policy. Understanding how buildings use energy is critical to formulating any new policy that may impact energy use, underscoring the importance of credible data. Data enables informed decision making and good quality data is essential for policy makers to prioritize energy saving strategies and track implementation. Given the uniqueness of the buildings sector and challenges to collecting relevant energy data, this study characterizes various elements involved in pertinent data collection and management, with the specific focus on well-defined data requirements, appropriate methodologies and processes, feasible data collection mechanisms, and approaches to institutionalizing the collection process. This report starts with a comprehensive review of available examples of energy data collection frameworks for buildings across different countries. The review covers the U.S. experience in the commercial buildings sector, the European experience in the buildings sector and other data collection initiatives in Singapore and China to capture the more systematic efforts in Asia in the commercial sector. To provide context, the review includes a summary and status of disparate efforts in India to collect and use commercial building energy data. Using this review as a key input, the study developed a data collection framework for India with specific consideration to relevant use cases. Continuing with the framework for data collection, this study outlines the key performance indicators applicable to the use cases and their collection feasibility, as well as immediate priorities of the participating stakeholders. It also discusses potential considerations for data collection and the possible approaches for survey design. With the specific purpose of laying out the possible ways to structure and organize data collection institutionally, the study collates existing mechanisms to analyze building energy performance in India and opportunities for standardizing data collection. This report describes the existing capacities and resources for establishing an institutional framework for data collection, the legislation and mandates that support such activity, and identifies roles and responsibilities of the relevant ministries and organizations. Finally, the study presents conclusions and identifies two major data collection strategies within the existing legal framework.

  9. Contributions of metabolic and temporal costs to human gait selection.

    PubMed

    Summerside, Erik M; Kram, Rodger; Ahmed, Alaa A

    2018-06-01

    Humans naturally select several parameters within a gait that correspond with minimizing metabolic cost. Much less is understood about the role of metabolic cost in selecting between gaits. Here, we asked participants to decide between walking or running out and back to different gait-specific markers. The distance of the walking marker was adjusted after each decision to identify relative distances where individuals switched gait preferences. We found that neither minimizing solely metabolic energy nor minimizing solely movement time could predict how the group decided between gaits. Of our twenty participants, six behaved in a way that tended towards minimizing metabolic energy, while eight favoured strategies that tended more towards minimizing movement time. The remaining six participants could not be explained by minimizing a single cost. We provide evidence that humans consider not just a single movement cost, but instead a weighted combination of these conflicting costs with their relative contributions varying across participants. Individuals who placed a higher relative value on time ran faster than individuals who placed a higher relative value on metabolic energy. Sensitivity to temporal costs also explained variability in an individual's preferred velocity as a function of increasing running distance. Interestingly, these differences in velocity both within and across participants were absent in walking, possibly due to a steeper metabolic cost of transport curve. We conclude that metabolic cost plays an essential, but not exclusive role in gait decisions. © 2018 The Author(s).
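
    One simple way to express the weighted trade-off described above (not the authors' fitted model) is a convex combination of metabolic energy and movement time, with an individual weight per participant; the symbols below are illustrative.

```latex
% Each participant i chooses the gait that minimises a weighted cost,
% where c converts time into energy-equivalent units.
J_i(\text{gait}) \;=\; w_i\, E(\text{gait}) \;+\; (1 - w_i)\, c\, T(\text{gait}),
\qquad 0 \le w_i \le 1 .
```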

  10. Minimizers with Bounded Action for the High-Dimensional Frenkel-Kontorova Model

    NASA Astrophysics Data System (ADS)

    Miao, Xue-Qing; Wang, Ya-Nan; Qin, Wen-Xin

    In Aubry-Mather theory for monotone twist maps or for the one-dimensional Frenkel-Kontorova (FK) model with nearest-neighbor interactions, each global minimizer (minimal energy configuration) is naturally Birkhoff. However, this is not true for the one-dimensional FK model with non-nearest-neighbor interactions or for the high-dimensional FK model. In this paper, we study the Birkhoff property of minimizers with bounded action for the high-dimensional FK model.

  11. Correlated natural transition orbital framework for low-scaling excitation energy calculations (CorNFLEx).

    PubMed

    Baudin, Pablo; Kristensen, Kasper

    2017-06-07

    We present a new framework for calculating coupled cluster (CC) excitation energies at a reduced computational cost. It relies on correlated natural transition orbitals (NTOs), denoted CIS(D')-NTOs, which are obtained by diagonalizing generalized hole and particle density matrices determined from configuration interaction singles (CIS) information and additional terms that represent correlation effects. A transition-specific reduced orbital space is determined based on the eigenvalues of the CIS(D')-NTOs, and a standard CC excitation energy calculation is then performed in that reduced orbital space. The new method is denoted CorNFLEx (Correlated Natural transition orbital Framework for Low-scaling Excitation energy calculations). We calculate second-order approximate CC singles and doubles (CC2) excitation energies for a test set of organic molecules and demonstrate that CorNFLEx yields excitation energies of CC2 quality at a significantly reduced computational cost, even for relatively small systems and delocalized electronic transitions. In order to illustrate the potential of the method for large molecules, we also apply CorNFLEx to calculate CC2 excitation energies for a series of solvated formamide clusters (up to 4836 basis functions).
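
    The orbital-space truncation step described above can be pictured as a pair of eigendecompositions followed by an eigenvalue cutoff. The sketch below is schematic: how the generalised hole/particle density matrices are built from CIS(D') information is not shown, and the function name and threshold are illustrative.

```python
import numpy as np

def truncated_orbital_space(hole_density, particle_density, threshold=1e-3):
    """Diagonalise generalised hole/particle density matrices and keep the natural
    transition orbitals whose eigenvalues exceed `threshold`."""
    occ_eigs, occ_vecs = np.linalg.eigh(hole_density)
    vir_eigs, vir_vecs = np.linalg.eigh(particle_density)
    keep_occ = occ_vecs[:, occ_eigs > threshold]   # occupied-space transformation
    keep_vir = vir_vecs[:, vir_eigs > threshold]   # virtual-space transformation
    return keep_occ, keep_vir
```

    The subsequent CC2 calculation would then be carried out in the reduced space spanned by these retained orbitals, which is where the cost reduction comes from.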

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A

    Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the trade-offs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
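
    To make the bi-objective view concrete, the helper below extracts the non-dominated (energy, utility) pairs from a set of candidate resource allocations. It is a generic sketch of Pareto filtering, not the authors' genetic-algorithm implementation, and the sample numbers are invented.

      def pareto_front(points):
          """Return the non-dominated (energy, utility) pairs:
          lower energy is better, higher utility is better."""
          front = []
          for e, u in points:
              dominated = any(e2 <= e and u2 >= u and (e2, u2) != (e, u)
                              for e2, u2 in points)
              if not dominated:
                  front.append((e, u))
          return sorted(front)

      # candidate allocations scored as (energy consumed, utility earned)
      allocations = [(100, 40), (120, 60), (90, 35), (150, 61), (130, 55), (110, 59)]
      print(pareto_front(allocations))   # (130, 55) is dominated by (120, 60)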

  13. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
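
    A minimal sketch of the chance-constraint step, under generic assumptions (the launch-vehicle model and its resource variables are not reproduced): run independent replications of a terminating stochastic simulation and accept a candidate resource level only if the estimated probability of meeting a performance requirement reaches a target level.

      import random

      def meets_chance_constraint(simulate, requirement, prob_target=0.95, n_reps=500):
          """Estimate P(simulated response <= requirement) from independent
          replications and compare it with the required probability."""
          hits = sum(simulate() <= requirement for _ in range(n_reps))
          return hits / n_reps >= prob_target

      # toy terminating simulation: turnaround delay that shrinks with more servers
      def toy_delay(servers):
          return random.gauss(10.0 / servers, 1.0)

      print(meets_chance_constraint(lambda: toy_delay(servers=3), requirement=6.0))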

  14. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    PubMed

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and (2) they help the user understand how different energy terms interact to stabilize a given conformation. The Sculpt paradigm combines many of the best features of interactive graphical modeling, energy minimization, and actual physical models, and we propose it as an especially productive way to use current and future increases in computer speed.
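
    The constraint handling can be pictured with a generic augmented Lagrangian loop, shown below on a toy problem: approximately minimize the energy plus multiplier and quadratic penalty terms on the constraint residuals, then update the multipliers. This is only a schematic of the method class; Sculpt's sparse banded solver and molecular force-field terms are not reproduced.

      import numpy as np

      def augmented_lagrangian(x, grad_energy, constraints, constraint_jac,
                               mu=10.0, step=1e-2, outer=30, inner=200):
          """Minimize E(x) subject to c(x) = 0 via L(x) = E + lam.c + (mu/2)|c|^2."""
          lam = np.zeros(len(constraints(x)))
          for _ in range(outer):
              for _ in range(inner):   # approximate inner minimization by gradient descent
                  c = constraints(x)
                  grad_L = grad_energy(x) + constraint_jac(x).T @ (lam + mu * c)
                  x = x - step * grad_L
              lam = lam + mu * constraints(x)   # multiplier update
          return x, lam

      # toy problem: minimize |x|^2 subject to x0 + x1 - 1 = 0
      x_opt, lam_opt = augmented_lagrangian(
          np.zeros(2),
          grad_energy=lambda x: 2.0 * x,
          constraints=lambda x: np.array([x[0] + x[1] - 1.0]),
          constraint_jac=lambda x: np.array([[1.0, 1.0]]))
      print(np.round(x_opt, 3))   # approaches [0.5, 0.5]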

  15. LIFE CYCLE DESIGN FRAMEWORK AND DEMONSTRATION PROJECTS - PROFILES OF AT&T AND ALLIED SIGNAL

    EPA Science Inventory

    This document offers guidance and practical experience for integrating environmental considerations into product system development. Life cycle design seeks to minimize the environmental burden associated with a product's life cycle from raw materials acquisition through manufact...

  16. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.
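
    A generic sketch in the spirit of the varying-objective special case mentioned above: take steps along the gradient of the inner objective g plus a vanishing multiple of the gradient of the outer objective f, so the iterates are driven into the solution set of the inner problem while f is gradually minimized over that set. The step-size and weight schedules here are illustrative and are not the epsilon-subgradient rules analyzed in the paper.

      import numpy as np

      def bilevel_gradient(x, grad_f, grad_g, n_iter=20000):
          """Minimize f over argmin g via x <- x - a_k (grad_g(x) + eta_k grad_f(x)),
          with a_k and eta_k slowly decreasing to zero."""
          for k in range(1, n_iter + 1):
              a_k = 0.2 / k ** 0.6
              eta_k = 1.0 / k ** 0.2
              x = x - a_k * (grad_g(x) + eta_k * grad_f(x))
          return x

      # inner problem: minimize g(x) = (x0 + x1 - 1)^2  (minimizers: the line x0 + x1 = 1)
      # outer problem: minimize f(x) = |x|^2 over that line  ->  (0.5, 0.5)
      grad_g = lambda x: 2.0 * (x[0] + x[1] - 1.0) * np.array([1.0, 1.0])
      grad_f = lambda x: 2.0 * x
      print(np.round(bilevel_gradient(np.array([2.0, -1.0]), grad_f, grad_g), 1))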

  17. Minimal Left-Right Symmetric Dark Matter.

    PubMed

    Heeck, Julian; Patra, Sudhanwa

    2015-09-18

    We show that left-right symmetric models can easily accommodate stable TeV-scale dark matter particles without the need for an ad hoc stabilizing symmetry. The stability of a newly introduced multiplet either arises accidentally as in the minimal dark matter framework or comes courtesy of the remaining unbroken Z_{2} subgroup of B-L. Only one new parameter is introduced: the mass of the new multiplet. As minimal examples, we study left-right fermion triplets and quintuplets and show that they can form viable two-component dark matter. This approach is, in particular, valid for SU(2)×SU(2)×U(1) models that explain the recent diboson excess at ATLAS in terms of a new charged gauge boson of mass 2 TeV.

  18. An intertemporal decision framework for electrochemical energy storage management

    NASA Astrophysics Data System (ADS)

    He, Guannan; Chen, Qixin; Moutis, Panayiotis; Kar, Soummya; Whitacre, Jay F.

    2018-05-01

    Dispatchable energy storage is necessary to enable renewable-based power systems that have zero or very low carbon emissions. The inherent degradation behaviour of electrochemical energy storage (EES) is a major concern for both EES operational decisions and EES economic assessments. Here, we propose a decision framework that addresses the intertemporal trade-offs in terms of EES degradation by deriving, implementing and optimizing two metrics: the marginal benefit of usage and the average benefit of usage. These metrics are independent of the capital cost of the EES system, and, as such, separate the value of EES use from the initial cost, which provides a different perspective on storage valuation and operation. Our framework is proved to produce the optimal solution for EES life-cycle profit maximization. We show that the proposed framework offers effective ways to assess the economic values of EES, to make investment decisions for various applications and to inform related subsidy policies.
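
    As a toy illustration of the two metrics (hypothetical numbers; the paper's degradation model and life-cycle profit maximization are not reproduced), the marginal benefit of usage can be computed per dispatch opportunity as revenue earned per unit of degradation incurred, and compared against the average benefit of usage:

      def usage_metrics(revenues, degradations):
          """Marginal benefit of usage for each candidate dispatch (revenue per unit of
          capacity loss) and the average benefit of usage over all of them."""
          marginal = [r / d for r, d in zip(revenues, degradations)]
          average = sum(revenues) / sum(degradations)
          return marginal, average

      # hypothetical arbitrage opportunities: (revenue in $, capacity loss in %)
      revenues = [12.0, 9.5, 4.0]
      degradations = [0.010, 0.012, 0.011]
      marginal, average = usage_metrics(revenues, degradations)
      # a simple (not necessarily optimal) rule: dispatch only the opportunities whose
      # marginal benefit of usage exceeds the average benefit of usage
      print([r for r, m in zip(revenues, marginal) if m >= average])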

  19. Exploring methodological frameworks for a mental task-based near-infrared spectroscopy brain-computer interface.

    PubMed

    Weyand, Sabine; Takehara-Nishiuchi, Kaori; Chau, Tom

    2015-10-30

    Near-infrared spectroscopy (NIRS) brain-computer interfaces (BCIs) enable users to interact with their environment using only cognitive activities. This paper presents the results of a comparison of four methodological frameworks used to select a pair of tasks to control a binary NIRS-BCI; specifically, three novel personalized task paradigms and the state-of-the-art prescribed task framework were explored. Three types of personalized task selection approaches were compared, including: user-selected mental tasks using weighted slope scores (WS-scores), user-selected mental tasks using pair-wise accuracy rankings (PWAR), and researcher-selected mental tasks using PWAR. These paradigms, along with the state-of-the-art prescribed mental task framework, where mental tasks are selected based on the most commonly used tasks in literature, were tested by ten able-bodied participants who took part in five NIRS-BCI sessions. The frameworks were compared in terms of their accuracy, perceived ease-of-use, computational time, user preference, and length of training. Most notably, researcher-selected personalized tasks resulted in significantly higher accuracies, while user-selected personalized tasks resulted in significantly higher perceived ease-of-use. It was also concluded that PWAR minimized the amount of data that needed to be collected, while WS-scores maximized user satisfaction and minimized computational time. In comparison to the state-of-the-art prescribed mental tasks, our findings show that overall, personalized tasks appear to be superior to prescribed tasks with respect to accuracy and perceived ease-of-use. The deployment of personalized rather than prescribed mental tasks ought to be considered and further investigated in future NIRS-BCI studies. Copyright © 2015 Elsevier B.V. All rights reserved.
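
    The pair-wise accuracy ranking (PWAR) idea can be sketched as follows: for every pair of candidate mental tasks, estimate a cross-validated binary classification accuracy on that participant's NIRS features and rank the pairs by accuracy. The feature extraction, the classifier choice (scikit-learn logistic regression here), and the toy data are assumptions for illustration; the WS-score computation is not shown.

      from itertools import combinations
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      def rank_task_pairs(features_by_task, cv=5):
          """Rank mental-task pairs by cross-validated classification accuracy;
          the top pair would be the personalized pair used to drive the BCI."""
          ranking = []
          for task_a, task_b in combinations(features_by_task, 2):
              X = np.vstack([features_by_task[task_a], features_by_task[task_b]])
              y = np.array([0] * len(features_by_task[task_a])
                           + [1] * len(features_by_task[task_b]))
              acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv).mean()
              ranking.append(((task_a, task_b), acc))
          return sorted(ranking, key=lambda item: item[1], reverse=True)

      # toy NIRS feature matrices (trials x features) for three candidate tasks
      rng = np.random.default_rng(1)
      tasks = ["mental arithmetic", "word generation", "rest"]
      data = {task: rng.normal(loc=i, size=(20, 6)) for i, task in enumerate(tasks)}
      print(rank_task_pairs(data)[0])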

  20. ENERGY AND SCIENCE: Five-Year Bibliography 1990-1994

    DTIC Science & Technology

    1995-12-01

    reviews the U.S. government's efforts to support Venezuela's energy sector (Sector de Energía en Venezuela: La Producción Petrolera y las Condiciones...). Moreover, funding to renovate existing laboratories or build new ones is often minimal; four of the eight agencies recently started up task forces to reexamine their research...laboratory repairs.
