Sample records for energy minimization algorithm

  1. Efficient data communication protocols for wireless networks

    NASA Astrophysics Data System (ADS)

    Zeydan, Engin

    In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate the energy minimization and throughput maximization problems separately, using multi-hop aggregation of correlated data. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges iteratively in a finite number of steps. For throughput maximization, we consider both the interference distribution across the network and the correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput, and the algorithm is guaranteed to converge in a finite number of steps using best response dynamics. For multiple antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference-and-noise ratio at each receiver. In the cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure strategy Nash equilibrium with high probability throughout the iterations in the interference-impaired network. The regret-matching learning algorithm, by contrast, is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.

  2. Transformation of general binary MRF minimization to the first-order case.

    PubMed

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction that limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
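
    The reduction idea can be checked directly for the classic negative-monomial case; the sketch below is a brute-force verification of that standard identity (Ishikawa's transformation additionally handles positive higher-order terms), not the paper's full algorithm.

      # Verify the standard reduction of a negative higher-order monomial to
      # pairwise terms via one auxiliary binary variable w:
      #   -x1*x2*...*xk == min over w in {0,1} of -w*(x1 + ... + xk - (k-1))
      from itertools import product

      def check_reduction(k):
          for xs in product((0, 1), repeat=k):
              lhs = -int(all(xs))
              rhs = min(-w * (sum(xs) - (k - 1)) for w in (0, 1))
              assert lhs == rhs, (xs, lhs, rhs)

      for k in range(1, 8):
          check_reduction(k)
      print("reduction verified for orders 1 through 7")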

  3. Amber Plug-In for Protein Shop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliva, Ricardo

    2004-05-10

    The Amber Plug-in for ProteinShop has two main components: an AmberEngine library to compute the protein energy models, and a module that solves the energy minimization problem using an optimization algorithm from the OPT++ library. Together, these components allow the visualization of the protein folding process in ProteinShop. AmberEngine is an object-oriented library to compute molecular energies based on the Amber model. The main class is called ProteinEnergy. Its main interface methods are (1) "init", to initialize internal variables needed to compute the energy, and (2) "eval", to evaluate the total energy given a vector of coordinates. Additional methods allow the user to evaluate the individual components of the energy model (bond, angle, dihedral, non-bonded 1-4, and non-bonded energies) and to obtain the energy of each individual atom. The AmberEngine library source code includes examples and test routines that illustrate the use of the library in stand-alone programs. The energy minimization module uses the AmberEngine library and the nonlinear optimization library OPT++. OPT++ is open source software available under the GNU Lesser General Public License. The minimization module currently uses the LBFGS optimization algorithm in OPT++ to perform the energy minimization. Future releases may give the user a choice of other algorithms available in OPT++.
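
    The init/eval interface described above pairs naturally with a quasi-Newton minimizer. Below is a minimal sketch of that pattern using SciPy's L-BFGS-B in place of OPT++, with a toy Lennard-Jones-style pair potential standing in for the Amber energy model; none of the AmberEngine names are reproduced.

      # Energy minimization in the init/eval style: a flat coordinate vector
      # goes in, a scalar energy comes out, and L-BFGS drives it downhill.
      import numpy as np
      from scipy.optimize import minimize

      def energy(flat_coords):
          """eval-style callback: total pairwise energy of N particles."""
          x = flat_coords.reshape(-1, 3)
          e = 0.0
          for i in range(len(x)):
              for j in range(i + 1, len(x)):
                  r = np.linalg.norm(x[i] - x[j])
                  e += 4.0 * (r**-12 - r**-6)  # toy non-bonded term
          return e

      rng = np.random.default_rng(0)
      x0 = rng.uniform(-1.5, 1.5, size=(5, 3)).ravel()  # init-style setup
      result = minimize(energy, x0, method="L-BFGS-B")
      print(result.fun, result.nit)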

  4. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
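
    A hedged sketch of the conversion described: the Lagrange necessary conditions of a small constrained problem (min x^2 + y^2 subject to x + y = 1) are turned into an unconstrained minimization of their squared residual, which a simple genetic algorithm then searches; the GA operators here are generic, not those of the paper.

      # Unconstrained reformulation: the residual of the necessary conditions
      #   2x + lambda = 0,  2y + lambda = 0,  x + y - 1 = 0
      # vanishes exactly at the constrained minimum x = y = 0.5, lambda = -1.
      import random

      def residual(ind):
          x, y, lam = ind
          return (2*x + lam)**2 + (2*y + lam)**2 + (x + y - 1)**2

      def ga(pop_size=60, gens=200, sigma=0.3):
          pop = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=residual)
              elite = pop[:pop_size // 4]                          # selection
              children = []
              while len(children) < pop_size - len(elite):
                  a, b = random.sample(elite, 2)
                  child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
                  child = [c + random.gauss(0, sigma) for c in child]  # mutation
                  children.append(child)
              pop = elite + children
          return min(pop, key=residual)

      best = ga()
      print(best, residual(best))  # expect roughly x = y = 0.5, lambda = -1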

  5. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods on unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
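
    For reference, the optimization problem behind such solvers can be stated in its standard textbook form (the paper's exact formulation and notation may differ):

      \min_{n \ge 0} \; G(n) = \sum_i n_i \, \mu_i(n, T, P)
      \quad \text{subject to} \quad A n = b,

    where n is the vector of species amounts, \mu_i are chemical potentials, A is the formula matrix, and b holds the elemental abundances.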

  6. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.

  7. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance energy utilization efficiency, given that the energy harvested from the environment is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies that respond to data requests as soon as possible by encouraging data sharing among data requests and reducing redundancy. We formulate the problem as a transmission completion time minimization problem under the constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we uncover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
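
    One natural way to use a min-energy solver as a building block for the min-completion-time problem, sketched under the assumption that the minimum energy needed to finish by time T, min_energy(T), is non-increasing in T; this bisection scheme is an illustration, not necessarily the paper's exact construction.

      # Earliest feasible completion time under an energy budget, found by
      # bisection over T with a min-energy oracle as the feasibility test.
      def min_completion_time(min_energy, budget, t_hi=1e6, eps=1e-6):
          t_lo = 0.0
          while t_hi - t_lo > eps:
              t_mid = 0.5 * (t_lo + t_hi)
              if min_energy(t_mid) <= budget:
                  t_hi = t_mid   # feasible: try to finish earlier
              else:
                  t_lo = t_mid   # infeasible: allow more time
          return t_hi

      # Toy oracle: sending B bits in time T at a constant rate costs
      # T * (2**(B/T) - 1) under an AWGN-style rate-power model.
      B = 10.0
      oracle = lambda T: T * (2.0 ** (B / T) - 1.0)
      print(min_completion_time(oracle, budget=8.0))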

  8. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance energy utilization efficiency, given that the energy harvested from the environment is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies that respond to data requests as soon as possible by encouraging data sharing among data requests and reducing redundancy. We formulate the problem as a transmission completion time minimization problem under the constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we uncover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  9. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    NASA Astrophysics Data System (ADS)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle lies at the heart of the dynamics of flow systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.

  10. Energy minimization on manifolds for docking flexible molecules

    PubMed Central

    Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima

    2015-01-01

    In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six-dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First, we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications, such as docking multidomain proteins with flexible hinges. The code is available under an open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
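
    A hedged sketch of one descent step on the kind of search space described, rotation x translation x torsions: the translation and torsion components update additively, while the rotation is updated through the matrix exponential so it stays on SO(3); the gradients and step size here are placeholders, not the paper's energy model.

      # One gradient-style step on the pose manifold SO(3) x R^3 x T^k.
      import numpy as np
      from scipy.linalg import expm

      def skew(w):
          return np.array([[0.0, -w[2], w[1]],
                           [w[2], 0.0, -w[0]],
                           [-w[1], w[0], 0.0]])

      def manifold_step(R, t, phi, grad_w, grad_t, grad_phi, step=1e-2):
          R_new = R @ expm(skew(-step * grad_w))  # retraction keeps R in SO(3)
          t_new = t - step * grad_t               # ordinary Euclidean update
          phi_new = (phi - step * grad_phi) % (2 * np.pi)  # torsion angles
          return R_new, t_new, phi_new

      R, t, phi = np.eye(3), np.zeros(3), np.zeros(4)  # e.g. 4 rotatable bonds
      R, t, phi = manifold_step(R, t, phi,
                                grad_w=np.array([0.1, 0.0, -0.2]),
                                grad_t=np.array([0.0, 0.5, 0.0]),
                                grad_phi=np.full(4, 0.05))
      print(np.allclose(R.T @ R, np.eye(3)))  # True: rotation stays valid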

  11. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in a trivial extension of the method for simple energy models. We then present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold, which guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is designed to generalize to complex prediction algorithms; due to their high space demands, algorithms such as pseudoknot prediction and RNA-RNA interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.

  12. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization.

    PubMed

    Jiang, Ailian; Zheng, Lihong

    2018-03-29

    Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as the energy constraint of sensor nodes, network load balancing and dynamic network topology. We then propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm is particularly strong at searching for the optimal path, balancing the network load and maintaining the network topology. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation results have shown that our algorithm outperforms several other WSN routing algorithms in aspects that include the rate of convergence, the success rate in searching for the global optimal solution, and the network lifetime.
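
    A minimal sketch of how such a hybrid next-hop rule might look, combining an ACO pheromone term with hop-count and residual-energy heuristics; the weights and the probabilistic selection are generic ACO conventions, not the paper's exact design.

      # Hybrid next-hop selection: pheromone (ACO) weighted against a
      # minimum-hop heuristic and neighbor residual energy.
      import random

      def choose_next_hop(neighbors, pheromone, hops_to_sink, energy,
                          alpha=1.0, beta=2.0, gamma=1.0):
          weights = []
          for n in neighbors:
              eta = 1.0 / (1 + hops_to_sink[n])  # favor fewer hops to sink
              weights.append((pheromone[n] ** alpha)
                             * (eta ** beta)
                             * (energy[n] ** gamma))
          total = sum(weights)
          r, acc = random.uniform(0, total), 0.0
          for n, w in zip(neighbors, weights):
              acc += w
              if r <= acc:
                  return n
          return neighbors[-1]

      print(choose_next_hop([1, 2, 3],
                            pheromone={1: 0.5, 2: 1.2, 3: 0.8},
                            hops_to_sink={1: 4, 2: 2, 3: 3},
                            energy={1: 0.9, 2: 0.6, 3: 0.8}))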

  13. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization

    PubMed Central

    2018-01-01

    Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as the energy constraint of sensor nodes, network load balancing and dynamic network topology. We then propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm is particularly strong at searching for the optimal path, balancing the network load and maintaining the network topology. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation results have shown that our algorithm outperforms several other WSN routing algorithms in aspects that include the rate of convergence, the success rate in searching for the global optimal solution, and the network lifetime. PMID:29596336

  14. A Novel Energy Saving Algorithm with Frame Response Delay Constraint in IEEE 802.16e

    NASA Astrophysics Data System (ADS)

    Nga, Dinh Thi Thuy; Kim, Mingon; Kang, Minho

    Sleep-mode operation of a Mobile Subscriber Station (MSS) in IEEE 802.16e effectively saves energy; however, it induces frame response delay. In this letter, we propose an algorithm that quickly finds the optimal value of the final sleep interval in sleep-mode, minimizing energy consumption subject to a given frame response delay constraint. Validation through analytical and simulation results suggests that our algorithm provides practical guidance for energy saving.

  15. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations of the relatively simple TV model, such as staircasing effects, variational models based upon higher-order derivatives have been proposed. Euler's elastica model is one such higher-order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher-order models are complicated and computationally expensive. In this paper, we present an efficient minimization algorithm based upon graph cuts for minimizing the energy in Euler's elastica model, by reducing the problem to solving a sequence of easily graph-representable problems. This sequence has connections to the gradient flow of the energy function and converges to a minimum point. The numerical experiments show that our new approach is more effective at maintaining smooth visual results while preserving sharp features better than TV models.

  16. Seizure Control in a Computational Model Using a Reinforcement Learning Stimulation Paradigm.

    PubMed

    Nagaraj, Vivek; Lamperski, Andrew; Netoff, Theoden I

    2017-11-01

    Neuromodulation technologies, such as vagus nerve stimulation and deep brain stimulation, have shown some efficacy in controlling seizures in medically intractable patients. However, inherent patient-to-patient variability of seizure disorders leads to a wide range of therapeutic efficacy. A patient-specific approach to determining stimulation parameters may increase therapeutic efficacy while minimizing stimulation energy and side effects. This paper presents a reinforcement learning algorithm that optimizes stimulation frequency for controlling seizures with minimum stimulation energy. We apply our method to a computational model called the Epileptor, which simulates inter-ictal and ictal local field potential data. In order to apply reinforcement learning to the Epileptor, we introduce a specialized reward function and state-space discretization. With the reward function and discretization fixed, we test the effectiveness of the temporal difference reinforcement learning algorithm (TD(0)). For periodic pulsatile stimulation, we derive a relation that describes, for any stimulation frequency, the minimal pulse amplitude required to suppress seizures. The TD(0) algorithm is able to quickly identify parameters that control seizures. Additionally, our results show that the TD(0) algorithm refines the stimulation frequency to minimize stimulation energy, thereby converging to optimal parameters reliably. An advantage of the TD(0) algorithm is that it is adaptive, so the parameters necessary to control the seizures can change over time. We show that the algorithm can converge on the optimal solution in simulation with both slow and fast inter-seizure intervals.
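
    For reference, the core of TD(0) is a one-line value update; the sketch below shows it in tabular form (the paper's specialized reward function and Epileptor state discretization are not reproduced here).

      # Tabular TD(0): after observing (state, reward, next_state), move
      # V(state) toward the bootstrapped target r + gamma * V(next_state).
      def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.95):
          target = reward + gamma * V[next_state]
          V[state] += alpha * (target - V[state])
          return V

      V = {"ictal": 0.0, "inter_ictal": 0.0}  # hypothetical two-state example
      V = td0_update(V, "ictal", reward=-1.0, next_state="inter_ictal")
      print(V)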

  17. Energy minimization of mobile video devices with a hardware H.264/AVC encoder based on energy-rate-distortion optimization

    NASA Astrophysics Data System (ADS)

    Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min

    2014-09-01

    In mobile video systems powered by batteries, reducing the encoder's compression energy consumption is critical to prolonging device lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on software codecs are not suitable for practical mobile camera systems because their energy consumption is too large and their encoding rate is too low. In this paper, we propose an E-R-D model for a hardware codec based on a gate-level simulation framework that measures the switching activity and the energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed with the GOP (group of pictures) size and QP (quantization parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy savings while satisfying the rate and distortion constraints.

  18. Beam control in the ETA-II linear induction accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu-Jiuan

    1992-08-21

    Corkscrew beam motion is caused by chromatic aberration and misalignment of a focusing system. We have taken several measures to control the corkscrew motion on the ETA-II induction accelerator. To minimize chromatic aberration, we have developed an energy compensation scheme which reduces energy sweep and differential phase advance within a beam pulse. To minimize the misalignment errors, we have developed a time-independent steering algorithm which minimizes the observed corkscrew amplitude averaged over the beam pulse. The steering algorithm can be used even if the monitor spacing is much greater than the system's cyclotron wavelength and the corkscrew motion caused by a given misaligned magnet is fully developed, i.e., the relative phase advance is greater than 2π.

  20. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose a minimized-Laplacian residual interpolation (MLRI) as an alternative to the color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm can outperform the state-of-the-art algorithms for the 30 images of the IMAX and the Kodak datasets.

  1. A method for generating reliable atomistic models of amorphous polymers based on a random search of energy minima

    NASA Astrophysics Data System (ADS)

    Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos

    2005-07-01

    A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, based on simple minimization and concerted rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.

  2. A convex optimization method for self-organization in dynamic (FSO/RF) wireless networks

    NASA Astrophysics Data System (ADS)

    Llorca, Jaime; Davis, Christopher C.; Milner, Stuart D.

    2008-08-01

    Next generation communication networks are becoming increasingly complex systems. Previously, we presented a novel physics-based approach to model dynamic wireless networks as physical systems which react to local forces exerted on network nodes. We showed that under clear atmospheric conditions the network communication energy can be modeled as the potential energy of an analogous spring system and presented a distributed mobility control algorithm where nodes react to local forces driving the network to energy minimizing configurations. This paper extends our previous work by including the effects of atmospheric attenuation and transmitted power constraints in the optimization problem. We show how our new formulation still results in a convex energy minimization problem. Accordingly, an updated force-driven mobility control algorithm is presented. Forces on mobile backbone nodes are computed as the negative gradient of the new energy function. Results show how in the presence of atmospheric obscuration stronger forces are exerted on network nodes that make them move closer to each other, avoiding loss of connectivity. We show results in terms of network coverage and backbone connectivity and compare the developed algorithms for different scenarios.
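
    A minimal sketch of the force-driven update described, assuming a quadratic spring energy 0.5*k*d^2 per link as a stand-in for the attenuation-aware energy model; each node moves along the negative gradient of the total link energy.

      # Force-driven mobility control on a toy 3-node backbone: forces are
      # the negative gradient of the summed link energies, and nodes drift
      # toward an energy-minimizing configuration.
      import numpy as np

      def step(positions, links, k=1.0, dt=0.05):
          forces = np.zeros_like(positions)
          for i, j in links:
              d = positions[j] - positions[i]
              forces[i] += k * d  # -grad of 0.5*k*||d||^2 w.r.t. node i
              forces[j] -= k * d
          return positions + dt * forces

      pos = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
      links = [(0, 1), (1, 2), (0, 2)]
      for _ in range(100):
          pos = step(pos, links)
      print(pos)  # nodes have contracted, shortening (attenuated) links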

  3. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used as the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface over a broader range than the local quadratic approximation used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures, both in the gas phase and in solution, show that the new algorithm can significantly improve optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
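
    A standard example of the kind of iterative Hessian update referenced is the BFGS formula (the paper's scheme applies such updates to the two latest geometries on the interpolated surface):

      B_{k+1} = B_k + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}
                    - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k},
      \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k.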

  4. The threshold algorithm: Description of the methodology and new developments

    NASA Astrophysics Data System (ADS)

    Neelamraju, Sridhar; Oligschleger, Christina; Schön, J. Christian

    2017-10-01

    Understanding the dynamics of complex systems requires the investigation of their energy landscape. In particular, the flow of probability on such landscapes is a central feature in visualizing the time evolution of complex systems. To obtain such flows, and the concomitant stable states of the systems and the generalized barriers among them, the threshold algorithm has been developed. Here, we describe the methodology of this approach starting from the fundamental concepts in complex energy landscapes and present recent new developments, the threshold-minimization algorithm and the molecular dynamics threshold algorithm. For applications of these new algorithms, we draw on landscape studies of three disaccharide molecules: lactose, maltose, and sucrose.
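
    The distinguishing feature of the threshold algorithm is its acceptance rule: a random move is accepted whenever the new energy lies below a prescribed lid, so the walker ranges freely over a basin and crosses only barriers below the threshold. A minimal sketch on a toy one-dimensional landscape (the actual algorithm operates on full configuration spaces):

      # Threshold-algorithm walker on a 1-D double-well landscape: with a
      # lid of 0.5 the barrier at x = 0 (energy 1.0) cannot be crossed.
      import random

      def energy(x):
          return (x**2 - 1.0)**2 + 0.3 * x

      def threshold_walk(x, lid, steps=20000, move=0.1):
          visited = [x]
          for _ in range(steps):
              x_new = x + random.uniform(-move, move)
              if energy(x_new) < lid:  # the only acceptance criterion
                  x = x_new
                  visited.append(x)
          return visited

      trace = threshold_walk(x=1.0, lid=0.5)
      print(min(trace), max(trace))  # extent of the region explored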

  5. Energy-Efficient Routing and Spectrum Assignment Algorithm with Physical-Layer Impairments Constraint in Flexible Optical Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jijun; Zhang, Nawa; Ren, Danping; Hu, Jinhua

    2017-12-01

    The recently proposed flexible optical network can accommodate multiple data rates more efficiently than current wavelength-routed optical networks. Meanwhile, energy efficiency has become a pressing concern because of growing energy consumption. In this paper, the energy efficiency problem of flexible optical networks with a physical-layer impairments constraint is studied. We propose a combined impairment-aware and energy-efficient routing and spectrum assignment (RSA) algorithm based on link availability, in which the impact of power consumption minimization on signal quality is considered. By applying the proposed algorithm, connection requests are established on a subset of the network topology, reducing the number of transitions from sleep to active state. The simulation results demonstrate that our proposed algorithm can improve energy efficiency and spectrum resource utilization with acceptable blocking probability and average delay.

  6. Lung fissure detection in CT images using global minimal paths

    NASA Astrophysics Data System (ADS)

    Appia, Vikram; Patil, Uday; Das, Bipul

    2010-03-01

    Pulmonary fissures separate human lungs into five distinct regions called lobes. Detection of fissures is essential for localization of the lobar distribution of lung diseases, surgical planning and follow-up. Treatment planning also requires calculation of lobe volumes, and this volume estimation mandates accurate segmentation of the fissures. The presence of other structures (such as vessels) near the fissure, along with its high variability in position, shape, etc., makes lobe segmentation a challenging task. False or incomplete fissures and the presence of disease add further complications to fissure detection. In this paper, we propose a semi-automated fissure segmentation algorithm using a minimal path approach on CT images. An energy function is defined such that the path integral over the fissure is the global minimum. Based on a few user-defined points on a single slice of the CT image, the proposed algorithm minimizes a 2D energy function on the sagittal slice computed using (a) intensity, (b) distance from the vasculature, (c) curvature in 2D, and (d) continuity in 3D. The fissure is the infimum energy path between a representative point on the fissure and the nearest lung boundary point in this energy domain. The algorithm has been tested on 10 CT volume datasets acquired from GE scanners at multiple clinical sites. The datasets span different pathological conditions and varying imaging artifacts.
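
    A hedged sketch of a global minimal path on a discrete grid via Dijkstra's algorithm; a generic per-pixel cost stands in for the paper's combined energy terms (intensity, distance from the vasculature, 2D curvature, 3D continuity).

      # Globally minimal path between two pixels on a cost grid (Dijkstra).
      import heapq

      def minimal_path(cost, start, goal):
          rows, cols = len(cost), len(cost[0])
          dist = {start: cost[start[0]][start[1]]}
          prev = {}
          heap = [(dist[start], start)]
          while heap:
              d, (r, c) = heapq.heappop(heap)
              if (r, c) == goal:
                  break
              if d > dist.get((r, c), float("inf")):
                  continue  # stale queue entry
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + cost[nr][nc]
                      if nd < dist.get((nr, nc), float("inf")):
                          dist[(nr, nc)] = nd
                          prev[(nr, nc)] = (r, c)
                          heapq.heappush(heap, (nd, (nr, nc)))
          path, node = [], goal
          while node != start:
              path.append(node)
              node = prev[node]
          return [start] + path[::-1]

      grid = [[1, 9, 1, 1],
              [1, 9, 1, 9],
              [1, 1, 1, 9]]
      print(minimal_path(grid, (0, 0), (0, 3)))  # routes around the 9s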

  7. Resolving the 180-degree ambiguity in vector magnetic field measurements: The 'minimum' energy solution

    NASA Technical Reports Server (NTRS)

    Metcalf, Thomas R.

    1994-01-01

    I present a robust algorithm that resolves the 180-deg ambiguity in measurements of the solar vector magnetic field. The technique simultaneously minimizes both the divergence of the magnetic field and the electric current density using a simulated annealing algorithm. This results in the field orientation with approximately minimum free energy. The technique is well-founded physically and is simple to implement.

  8. Simulation of minimally invasive vascular interventions for training purposes.

    PubMed

    Alderliesten, Tanja; Konings, Maurits K; Niessen, Wiro J

    2004-01-01

    To master the skills required to perform minimally invasive vascular interventions, proper training is essential. A computer simulation environment has been developed to provide such training. The simulation is based on an algorithm specifically developed to simulate the motion of a guide wire--the main instrument used during these interventions--in the human vasculature. In this paper, the design and model of the computer simulation environment is described and first results obtained with phantom and patient data are presented. To simulate minimally invasive vascular interventions, a discrete representation of a guide wire is used which allows modeling of guide wires with different physical properties. An algorithm for simulating the propagation of a guide wire within a vascular system, on the basis of the principle of minimization of energy, has been developed. Both longitudinal translation and rotation are incorporated as possibilities for manipulating the guide wire. The simulation is based on quasi-static mechanics. Two types of energy are introduced: internal energy related to the bending of the guide wire, and external energy resulting from the elastic deformation of the vessel wall. A series of experiments were performed on phantom and patient data. Simulation results are qualitatively compared with 3D rotational angiography data. The results indicate plausible behavior of the simulation.

  9. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand of a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, minimizing cost and minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology using a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.

  10. A Distance-based Energy Aware Routing algorithm for wireless sensor networks.

    PubMed

    Wang, Jin; Kim, Jeong-Uk; Shu, Lei; Niu, Yu; Lee, Sungyoung

    2010-01-01

    Energy efficiency and balancing is one of the primary challenges for wireless sensor networks (WSNs), since the tiny sensor nodes cannot easily be recharged once they are deployed. Up to now, many energy-efficient routing algorithms or protocols have been proposed, with techniques such as clustering, data aggregation and location tracking. However, many of them aim to minimize parameters such as total energy consumption and latency, which causes hotspot nodes and a partitioned network due to the overuse of certain nodes. In this paper, a Distance-based Energy Aware Routing (DEAR) algorithm is proposed to ensure energy efficiency and energy balancing, based on theoretical analysis of different energy and traffic models. During the routing process, we consider individual distance as the primary parameter in order to adjust and equalize the energy consumption among the involved sensors. The residual energy is also considered as a secondary factor. In this way, all the intermediate nodes consume their energy at a similar rate, which maximizes network lifetime. Simulation results show that the DEAR algorithm can reduce and balance the energy consumption of all sensor nodes, so network lifetime is greatly prolonged compared to other routing algorithms.

  11. Recovery Act: Energy Efficiency of Data Networks through Rate Adaptation (EEDNRA) - Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Andrews; Spyridon Antonakopoulos; Steve Fortune

    2011-07-12

    This Concept Definition Study focused on developing a scientific understanding of methods to reduce energy consumption in data networks using rate adaptation. Rate adaptation is a collection of techniques that reduce energy consumption when traffic is light, and only require full energy when traffic is at full provisioned capacity. Rate adaptation is a very promising technique for saving energy: modern data networks are typically operated at average rates well below capacity, but network equipment has not yet been designed to incorporate rate adaptation. The Study concerns packet-switching equipment, routers and switches; such equipment forms the backbone of the modern Internet. The focus of the study is on algorithms and protocols that can be implemented in software or firmware to exploit hardware power-control mechanisms. Hardware power-control mechanisms are widely used in the computer industry, and are beginning to be available for networking equipment as well. Network equipment has different performance requirements than computer equipment because of the very fast rate of packet arrival; hence novel power-control algorithms are required for networking. This study resulted in five published papers, one internal report, and two patent applications, documented below. The specific technical accomplishments are the following:
    • A model for the power consumption of switching equipment used in service-provider telecommunication networks as a function of operating state, and measured power-consumption values for typical current equipment.
    • An algorithm for use in a router that adapts packet processing rate and hence power consumption to traffic load while maintaining performance guarantees on delay and throughput.
    • An algorithm that performs network-wide traffic routing with the objective of minimizing energy consumption, assuming that routers have less-than-ideal rate adaptivity.
    • An estimate of the potential energy savings in service-provider networks using feasibly-implementable rate adaptivity.
    • A buffer-management algorithm that is designed to reduce the size of router buffers, and hence energy consumed.
    • A packet-scheduling algorithm designed to minimize packet-processing energy requirements.
    Additional research is recommended in at least two areas: further exploration of rate-adaptation in network switching equipment, including incorporation of rate-adaptation in actual hardware, allowing experimentation in operational networks; and development of control protocols that allow parts of networks to be shut down while minimizing disruption to traffic flow in the network. The research is an integral part of a large effort within Bell Laboratories, Alcatel-Lucent, aimed at dramatic improvements in the energy efficiency of telecommunication networks. This Study did not explicitly consider any commercialization opportunities.

  12. Energy-Efficient Deadline-Aware Data-Gathering Scheme Using Multiple Mobile Data Collectors.

    PubMed

    Dasgupta, Rumpa; Yoon, Seokhoon

    2017-04-01

    In wireless sensor networks, the data collected by sensors are usually forwarded to the sink through multi-hop forwarding. However, multi-hop forwarding can be inefficient due to the energy hole problem and high communications overhead. Moreover, when the monitored area is large and the number of sensors is small, sensors cannot send the data via multi-hop forwarding due to the lack of network connectivity. In order to address those problems of multi-hop forwarding, in this paper, we consider a data collection scheme that uses mobile data collectors (MDCs), which visit sensors and collect data from them. Due to the recent breakthroughs in wireless power transfer technology, MDCs can also be used to recharge the sensors to keep them from draining their energy. In MDC-based data-gathering schemes, a big challenge is how to find the MDCs' traveling paths in a balanced way, such that their energy consumption is minimized and the packet-delay constraint is satisfied. Therefore, in this paper, we aim at finding the MDCs' paths, taking energy efficiency and delay constraints into account. We first define an optimization problem, named the delay-constrained energy minimization (DCEM) problem, to find the paths for MDCs. An integer linear programming problem is formulated to find the optimal solution. We also propose a two-phase path-selection algorithm to efficiently solve the DCEM problem. Simulations are performed to compare the performance of the proposed algorithms with two heuristic algorithms for the vehicle routing problem under various scenarios. The simulation results show that the proposed algorithms can outperform existing algorithms in terms of energy efficiency and packet delay.

  13. Energy-Efficient Deadline-Aware Data-Gathering Scheme Using Multiple Mobile Data Collectors

    PubMed Central

    Dasgupta, Rumpa; Yoon, Seokhoon

    2017-01-01

    In wireless sensor networks, the data collected by sensors are usually forwarded to the sink through multi-hop forwarding. However, multi-hop forwarding can be inefficient due to the energy hole problem and high communications overhead. Moreover, when the monitored area is large and the number of sensors is small, sensors cannot send the data via multi-hop forwarding due to the lack of network connectivity. In order to address those problems of multi-hop forwarding, in this paper, we consider a data collection scheme that uses mobile data collectors (MDCs), which visit sensors and collect data from them. Due to the recent breakthroughs in wireless power transfer technology, MDCs can also be used to recharge the sensors to keep them from draining their energy. In MDC-based data-gathering schemes, a big challenge is how to find the MDCs' traveling paths in a balanced way, such that their energy consumption is minimized and the packet-delay constraint is satisfied. Therefore, in this paper, we aim at finding the MDCs' paths, taking energy efficiency and delay constraints into account. We first define an optimization problem, named the delay-constrained energy minimization (DCEM) problem, to find the paths for MDCs. An integer linear programming problem is formulated to find the optimal solution. We also propose a two-phase path-selection algorithm to efficiently solve the DCEM problem. Simulations are performed to compare the performance of the proposed algorithms with two heuristic algorithms for the vehicle routing problem under various scenarios. The simulation results show that the proposed algorithms can outperform existing algorithms in terms of energy efficiency and packet delay. PMID:28368300

  14. Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.

    PubMed

    Chang, Joshua; Paydarfar, David

    2014-12-01

    Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.

  15. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants utilize only about 30 percent of the primary energy they consume; the rest is wasted in generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; and (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.

  16. Optimal trajectories for aeroassisted orbital transfer

    NASA Technical Reports Server (NTRS)

    Miele, A.; Venkataraman, P.

    1983-01-01

    Consideration is given to classical and minimax problems involved in aeroassisted transfer from high earth orbit (HEO) to low earth orbit (LEO). The transfer is restricted to coplanar operation, with trajectory control effected by means of lift modulation. The performance of the maneuver is indexed to the energy expenditure or, alternatively, the time integral of the heating rate. First-order optimality conditions are defined for the classical approach, as are a sequential gradient-restoration algorithm and a combined gradient-restoration algorithm. Minimization techniques are presented for the aeroassisted transfer energy consumption and the time integral of the heating rate, as well as minimization of the pressure. It is shown that the differential system is both stiff and unstable, as indicated by the eigenvalues of its Jacobian matrix, implying that the sequential gradient-restoration algorithm in its present version is unsuitable. A new method, involving a multipoint approach to the two-point boundary value problem, is recommended.

  17. Sequential and parallel image restoration: neural network implementations.

    PubMed

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem onto the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
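
    The convex objective referred to above has the standard MAP/regularization form: for blur operator H, observation y, and regularization operator D,

      \hat{x} = \arg\min_x \; \tfrac{1}{2} \| y - Hx \|^2 + \tfrac{\lambda}{2} \| Dx \|^2,

    a high-dimensional convex quadratic whose descent dynamics map naturally onto the energy-minimizing behavior of Hopfield-type networks.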

  18. An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks

    PubMed Central

    Penumalli, Chakradhar; Palanichamy, Yogesh

    2015-01-01

    A new energy-efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem [BSP] while also considering each node's remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks [MANETs] or Wireless Sensor Networks [WSNs]. The CDS of a graph representing a network has a significant impact on the efficient design of routing protocols in wireless networks. Here the CDS is constructed by a distributed algorithm with activity scheduling based on the unit disk graph [UDG] model. Node mobility and residual energy (RE) are considered as parameters in the construction of a stable, optimal, energy-efficient CDS. The performance is evaluated at various node densities, transmission ranges, and mobility rates. Theoretical analysis and simulation results of this algorithm are also presented and show improved performance. PMID:26221627

  19. A Low Power Consumption Algorithm for Efficient Energy Consumption in ZigBee Motes

    PubMed Central

    Muñoz, Pablo; R-Moreno, María D.; F. Barrero, David

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming increasingly popular since they can gather information from different locations without wires. This advantage is exploited in applications such as robotic systems, telecare, domotics or smart cities, among others. To gain independence from the electricity grid, WSN devices are equipped with batteries, so their operational time is determined by how long the batteries can power the device. As a consequence, engineers must consider low energy consumption as a critical objective when designing WSNs. Several approaches can be taken to make efficient use of energy in WSNs, for instance low-duty-cycling sensor networks (LDC-WSNs). Based on LDC-WSNs, we present LOKA, a LOw power Konsumption Algorithm to minimize WSN energy consumption using different power modes in a sensor mote. The contribution of this work is a novel algorithm called LOKA that implements two duty-cycling mechanisms using the end-device of the ZigBee protocol (of the Application Support Sublayer) and an external microcontroller (Cortex M0+) in order to minimize the energy consumption of delay-tolerant networking. Experiments show that using LOKA, the energy required by the sensor device is reduced by half with respect to the same sensor device without LOKA. PMID:28937660

  20. A Low Power Consumption Algorithm for Efficient Energy Consumption in ZigBee Motes.

    PubMed

    Vaquerizo-Hdez, Daniel; Muñoz, Pablo; R-Moreno, María D; F Barrero, David

    2017-09-22

    Wireless Sensor Networks (WSNs) are becoming increasingly popular since they can gather information from different locations without wires. This advantage is exploited in applications such as robotic systems, telecare, domotics and smart cities, among others. To gain independence from the electricity grid, WSN devices are equipped with batteries, so their operational time is determined by how long the batteries can power the device. As a consequence, engineers must consider low energy consumption a critical objective when designing WSNs. Several approaches can be taken to make efficient use of energy in WSNs, for instance low-duty-cycling sensor networks (LDC-WSNs). Building on LDC-WSNs, we present LOKA, a LOw power Konsumption Algorithm that minimizes WSN energy consumption using different power modes of a sensor mote. The contribution of this work is a novel algorithm called LOKA that implements two duty-cycling mechanisms using the end-device of the ZigBee protocol (via the Application Support Sublayer) and an external microcontroller (Cortex M0+) in order to minimize the energy consumption of delay-tolerant networking. Experiments show that using LOKA, the energy required by the sensor device is reduced by half with respect to the same sensor device without LOKA.
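
    The scale of such duty-cycling savings can be seen from a back-of-envelope average-current model (the currents and duty cycle below are illustrative, not measurements from the paper):

      def avg_current_ma(i_active_ma, i_sleep_ma, duty):
          # average current is the duty-weighted mix of active and sleep currents
          return duty * i_active_ma + (1.0 - duty) * i_sleep_ma

      always_on = avg_current_ma(30.0, 0.005, duty=1.00)  # radio always on
      cycled = avg_current_ma(30.0, 0.005, duty=0.10)     # awake 10% of the time
      print(f"{always_on:.2f} mA -> {cycled:.2f} mA "
            f"({100 * (1 - cycled / always_on):.0f}% saving)")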

  1. Shape-based diffeomorphic registration on hippocampal surfaces using Beltrami holomorphic flow.

    PubMed

    Lui, Lok Ming; Wong, Tsz Wai; Thompson, Paul; Chan, Tony; Gu, Xianfeng; Yau, Shing-Tung

    2010-01-01

    We develop a new algorithm to automatically register hippocampal (HP) surfaces with complete geometric matching, avoiding the need to manually label landmark features. A good registration depends on a reasonable choice of shape energy that measures the dissimilarity between surfaces. In our work, we first propose a complete shape index using the Beltrami coefficient and curvatures, which measures subtle local differences. The proposed shape energy is zero if and only if two shapes are identical up to a rigid motion. We then seek the best surface registration by minimizing the shape energy. We propose a simple representation of surface diffeomorphisms using Beltrami coefficients, which simplifies the optimization process. We then iteratively minimize the shape energy using the proposed Beltrami holomorphic flow (BHF) method. Experimental results on 212 HP surfaces from normal and diseased (Alzheimer's disease) subjects show that our proposed algorithm is effective in registering HP surfaces with complete geometric matching. The proposed shape energy can also capture local shape differences between HP surfaces for disease analysis.

  2. What energy functions can be minimized via graph cuts?

    PubMed

    Kolmogorov, Vladimir; Zabih, Ramin

    2004-02-01

    In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.
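
    The heart of this characterization for pairwise terms is the regularity (submodularity) condition E(0,0) + E(1,1) <= E(0,1) + E(1,0); a minimal checker, with 2x2 tables invented for the demo:

      def is_graph_representable(pairwise_tables):
          # each table t satisfies t[x1][x2] = E(x1, x2) for binary x1, x2
          return all(t[0][0] + t[1][1] <= t[0][1] + t[1][0] for t in pairwise_tables)

      potts = [[0, 1], [1, 0]]        # penalizes disagreement: submodular
      anti = [[1, 0], [0, 1]]         # rewards disagreement: not submodular
      print(is_graph_representable([potts]))  # True  -> exact minimum via min-cut
      print(is_graph_representable([anti]))   # False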

  3. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources to achieve shorter execution times, but it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. Deciding which tasks should be offloaded, and how to offload them efficiently, is vital to MCC. In this paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines whose resources match the requirements of the applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.

  4. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    PubMed

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a huge amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. At first, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploiting multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach obtains a higher compression ratio than state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy savings in both the encoding process and wireless transmission.
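
    A minimal sketch of the reconstruction side, solving min_x 0.5*||y - Ax||^2 + sum_i w_i*|x_i| by iterative soft-thresholding (the matrix, weights, and iteration count are illustrative; the paper derives its weights from multisource prior knowledge in the wavelet domain):

      import numpy as np

      def weighted_ista(A, y, w, iters=300):
          step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, with L = ||A||_2^2
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - step * A.T @ (A @ x - y)      # gradient step on data term
              x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)  # weighted shrinkage
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100)
      x_true[[3, 50, 97]] = [1.0, -2.0, 0.5]
      y = A @ x_true
      w = np.full(100, 0.1)      # in practice, smaller w_i where support is likely
      x_hat = weighted_ista(A, y, w)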

  5. Digital pulse processing for planar TlBr detectors, optimized for ballistic deficit and charge-trapping effect

    NASA Astrophysics Data System (ADS)

    Nakhostin, M.; Hitomi, K.

    2012-05-01

    The energy resolution of thallium bromide (TlBr) detectors is significantly limited by the charge-trapping effect and ballistic deficit caused by the slow charge collection time. A digital pulse processing algorithm has been developed to compensate for the charge-trapping effect while minimizing ballistic deficit. The algorithm is examined using a 1 mm thick TlBr detector, and an excellent energy resolution of 3.37% at 662 keV is achieved at room temperature. The pulse processing algorithms are presented in recursive form, suitable for real-time implementation.

  6. An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.

    PubMed

    Vimalarani, C; Subramanian, R; Sivanandam, S N

    2016-01-01

    A Wireless Sensor Network (WSN) is a network formed by a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example temperature, water level, and pressure, with applications ranging from health care to various military uses. Sensor nodes are mostly equipped with self-supported battery power, through which they perform their operations and communicate with neighboring nodes. To maximize the lifetime of a WSN, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks, in which clustering and cluster head selection are performed using the Particle Swarm Optimization (PSO) algorithm so as to minimize power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.

  7. Free energy minimization to predict RNA secondary structures and computational RNA design.

    PubMed

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.

  8. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former includes the Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers the graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.

  10. Energy aware path planning in complex four dimensional environments

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Anjan

    This dissertation addresses the problem of energy-aware path planning for small autonomous vehicles. While small autonomous vehicles can perform missions that are too risky (or infeasible) for larger vehicles, the missions are limited by the amount of energy that can be carried on board the vehicle. Path planning techniques that either minimize energy consumption or exploit energy available in the environment can thus increase range and endurance. Path planning is complicated by significant spatial (and potentially temporal) variations in the environment. While the main focus is on autonomous aircraft, this research also addresses autonomous ground vehicles. Range and endurance of small unmanned aerial vehicles (UAVs) can be greatly improved by utilizing energy from the atmosphere. Wind can be exploited to minimize energy consumption of a small UAV. But wind, like any other atmospheric component, is a space- and time-varying phenomenon. To effectively use wind for long range missions, both exploration and exploitation of wind are critical. This research presents a kinematics-based tree algorithm which efficiently handles the four-dimensional (three spatial and time) path planning problem. The Kinematic Tree algorithm provides a sequence of waypoints, airspeeds, heading and bank angle commands for each segment of the path. The planner is shown to be resolution complete and computationally efficient. Global optimality of the cost function cannot be claimed, as energy is gained from the atmosphere, making the cost function inadmissible. However the Kinematic Tree is shown to be optimal up to resolution if the cost function is admissible. Simulation results show the efficacy of this planning method for a glider in complex real wind data. Simulation results verify that the planner is able to extract energy from the atmosphere, enabling long range missions. The Kinematic Tree planning framework, developed to minimize energy consumption of UAVs, is applied to path planning for ground robots. In the traditional path planning problem the focus is on obstacle avoidance and navigation. The optimal Kinematic Tree algorithm, named Kinematic Tree*, is shown to find optimal paths that reach the destination while avoiding obstacles. A more challenging path planning scenario arises when planning in complex terrain. This research shows how the Kinematic Tree* algorithm can be extended to find minimum energy paths for a ground vehicle in difficult mountainous terrain.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hen, Itay; Karliner, Marek

    We study the zero-temperature crystalline structure of baby Skyrmions by applying a full-field numerical minimization algorithm to baby Skyrmions placed inside different parallelogramic unit cells and imposing periodic boundary conditions. We find that within this setup, the minimal energy is obtained for the hexagonal lattice, and that in the resulting configuration the Skyrmion splits into quarter Skyrmions. In particular, we find that the energy in the hexagonal case is lower than the one obtained on the well-studied rectangular lattice, in which splitting into half Skyrmions is observed.

  12. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve problems for large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others, such as the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms, for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems to find the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.

  13. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology to find concurrently material spatial distribution and material anisotropy repartition is proposed for orthotropic, linear and elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by its elasticity tensor invariants by change of frame. A global structural stiffness maximization problem written as a compliance minimization problem is treated, and a volume constraint is applied. The compliance minimization can be put into a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of density and anisotropy distribution of a cantilever beam and a bridge are presented.

  14. Energy management and cooperation in microgrids

    NASA Astrophysics Data System (ADS)

    Rahbar, Katayoun

    Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, random and intermittent characteristics of renewable energy generations may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time to achieve the reliable and cost-effective operation. This thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost of the microgrid resulting from drawing conventional energy from the main grid under both the off-line and online setups, where the renewable energy generation/load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. Therefore, it is more practically applicable as compared to solutions based on conventional techniques such as dynamic programming and stochastic programming that require the prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them for both fully and partially cooperative scenarios, where microgrids are of common interests and self-interested, respectively. For the fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, where a distributed algorithm is proposed to minimize the total (sum) energy cost of microgrids. Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for the real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that energy costs of individual microgrids reduce simultaneously over the case without energy cooperation while limited information is shared among the microgrids and the central controller.

  15. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks

    PubMed Central

    Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal

    2015-01-01

    It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show convergence, learning and adaptability of the algorithm to a dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191

  16. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks.

    PubMed

    Mustapha, Ibrahim; Mohd Ali, Borhanuddin; Rasid, Mohd Fadlee A; Sali, Aduwati; Mohamad, Hafizal

    2015-08-13

    It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show convergence, learning and adaptability of the algorithm to a dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach.
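
    For intuition, a stateless (bandit-style) simplification of the cluster-selection learning loop: each member node keeps a running cost estimate per candidate cluster, explores with probability epsilon, and otherwise joins the cheapest cluster (cluster names, costs, and parameters are invented):

      import random

      def choose_cluster(q, eps=0.1):
          if random.random() < eps:
              return random.choice(list(q))     # explore
          return min(q, key=q.get)              # exploit: lowest expected cost

      def update(q, cluster, observed_cost, alpha=0.2):
          q[cluster] += alpha * (observed_cost - q[cluster])

      q = {"A": 0.0, "B": 0.0, "C": 0.0}
      true_cost = {"A": 1.0, "B": 0.4, "C": 0.8}  # energy + sensing cost per round
      for _ in range(500):
          c = choose_cluster(q)
          update(q, c, true_cost[c] + random.gauss(0, 0.05))
      print(min(q, key=q.get))                  # settles on "B"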

  17. Energy-efficient routing, modulation and spectrum allocation in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Tan, Yanxia; Gu, Rentao; Ji, Yuefeng

    2017-07-01

    With tremendous growth in bandwidth demand, the energy consumption problem in elastic optical networks (EONs) has become a hot topic of wide concern. The sliceable bandwidth-variable transponder in EONs, which can transmit/receive multiple optical flows, was recently proposed to improve a transponder's flexibility and save energy. In this paper, energy-efficient routing, modulation and spectrum allocation (EE-RMSA) in EONs with sliceable bandwidth-variable transponders is studied. To decrease the energy consumption, we develop a Mixed Integer Linear Programming (MILP) model with a corresponding EE-RMSA algorithm for EONs. The MILP model jointly considers the modulation format and optical grooming in the process of routing and spectrum allocation with the objective of minimizing the energy consumption. With the help of genetic operators, the EE-RMSA algorithm iteratively optimizes the feasible routing path, modulation format and spectrum resource solutions by exploring the whole search space. In order to save energy, an optical-layer grooming strategy is designed to transmit the lightpath requests. Finally, simulation results verify that the proposed scheme is able to reduce the energy consumption of the network while maintaining blocking probability (BP) performance compared with the existing First-Fit-KSP, Iterative Flipping and EAMGSP algorithms, especially in large network topologies. Our results also demonstrate that the proposed EE-RMSA algorithm achieves almost the same performance as the MILP on an 8-node network.

  18. Adaptive Particle Swarm Optimizer with Varying Acceleration Coefficients for Finding the Most Stable Conformer of Small Molecules.

    PubMed

    Agrawal, Shikha; Silakari, Sanjay; Agrawal, Jitendra

    2015-11-01

    A novel parameter automation strategy for Particle Swarm Optimization called APSO (Adaptive PSO) is proposed. The algorithm is designed to efficiently control the local search and convergence to the global optimum solution. Parameter c1 controls the impact of the cognitive component on the particle trajectory and c2 controls the impact of the social component. Instead of fixing the values of c1 and c2, this paper updates these acceleration coefficients by considering the time variation of the evaluation function along with a varying inertia weight factor in PSO. Here the maximum and minimum values of the evaluation function are used to gradually decrease and increase the values of c1 and c2, respectively. Molecular energy minimization is one of the most challenging unsolved problems and it can be formulated as a global optimization problem. The aim of the present paper is to investigate the effect of the newly developed APSO on the highly complex molecular potential energy function and to check the efficiency of the proposed algorithm in finding the global minimum of the function under consideration. The proposed APSO algorithm is therefore applied in two cases: first, for the minimization of the potential energy of small molecules with up to 100 degrees of freedom, and finally for finding the global minimum energy conformation of the 1,2,3-trichloro-1-fluoro-propane molecule based on a realistic potential energy function. The computational results in all cases show that the proposed method performs significantly better than the other algorithms. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
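
    A sketch of the PSO update with time-varying inertia and acceleration coefficients in the spirit of APSO; the linear schedules below are only illustrative (the paper adapts c1 and c2 from the evaluation function), and the quadratic bowl stands in for a molecular potential energy surface:

      import numpy as np

      def pso_step(x, v, pbest, gbest, t, t_max, rng):
          w = 0.9 - 0.5 * t / t_max     # inertia decreases over time
          c1 = 2.5 - 2.0 * t / t_max    # cognitive coefficient shrinks
          c2 = 0.5 + 2.0 * t / t_max    # social coefficient grows
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          return x + v, v

      rng = np.random.default_rng(0)
      f = lambda x: np.sum(x ** 2)      # toy stand-in for a potential energy
      X = rng.uniform(-5, 5, (20, 3))
      V = np.zeros_like(X)
      P = X.copy()
      G = min(P, key=f).copy()
      for t in range(200):
          for i in range(len(X)):
              X[i], V[i] = pso_step(X[i], V[i], P[i], G, t, 200, rng)
              if f(X[i]) < f(P[i]):
                  P[i] = X[i].copy()
          G = min(P, key=f).copy()
      print(f(G))                       # near 0 at the global minimum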

  19. An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1987-01-01

    An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The accuracy, stability, and versatility of the algorithm have been promising.

  20. Energy-Efficient ZigBee-Based Wireless Sensor Network for Track Bicycle Performance Monitoring

    PubMed Central

    Gharghan, Sadik K.; Nordin, Rosdiadee; Ismail, Mahamod

    2014-01-01

    In a wireless sensor network (WSN), saving power is a vital requirement. In this paper, a simple point-to-point bike WSN was considered. The bike parameters, speed and cadence, were monitored and transmitted via wireless communication based on the ZigBee protocol. Since the bike parameters are monitored and transmitted on every wheel rotation, the sensor node cannot sleep for long periods, causing power consumption to rise. Therefore, a newly proposed algorithm, known as the Redundancy and Converged Data (RCD) algorithm, was implemented for this application to put the sensor node into sleep mode while maintaining the performance measurements. This is achieved by minimizing the number of transmitted data packets and by fusing the speed and cadence data, utilizing the correlation between them, so that the network needs only one sensor node. This results in reduced power consumption, cost, and size, in addition to simpler hardware implementation. Execution of the proposed RCD algorithm shows that this approach can reduce the current consumption to 1.69 mA and save 95% of the sensor node energy. Also, comparison with different wireless standard technologies demonstrates minimal current consumption in the sensor node. PMID:25153141

  1. Energy-efficient ZigBee-based wireless sensor network for track bicycle performance monitoring.

    PubMed

    Gharghan, Sadik K; Nordin, Rosdiadee; Ismail, Mahamod

    2014-08-22

    In a wireless sensor network (WSN), saving power is a vital requirement. In this paper, a simple point-to-point bike WSN was considered. The bike parameters, speed and cadence, were monitored and transmitted via wireless communication based on the ZigBee protocol. Since the bike parameters are monitored and transmitted on every wheel rotation, the sensor node cannot sleep for long periods, causing power consumption to rise. Therefore, a newly proposed algorithm, known as the Redundancy and Converged Data (RCD) algorithm, was implemented for this application to put the sensor node into sleep mode while maintaining the performance measurements. This is achieved by minimizing the number of transmitted data packets and by fusing the speed and cadence data, utilizing the correlation between them, so that the network needs only one sensor node. This results in reduced power consumption, cost, and size, in addition to simpler hardware implementation. Execution of the proposed RCD algorithm shows that this approach can reduce the current consumption to 1.69 mA and save 95% of the sensor node energy. Also, comparison with different wireless standard technologies demonstrates minimal current consumption in the sensor node.

  2. [Preliminary application of an improved Demons deformable registration algorithm in tumor radiotherapy].

    PubMed

    Zhou, Lu; Zhen, Xin; Lu, Wenting; Dou, Jianhong; Zhou, Linghong

    2012-01-01

    To validate the efficiency of an improved Demons deformable registration algorithm and evaluate its application to registration of the treatment image and the planning image in image-guided radiotherapy (IGRT). Based on Brox's gradient constancy assumption and Malis's efficient second-order minimization algorithm, a grey-value gradient similarity term was added to the original energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function, allowing automatic determination of the iteration number. The proposed algorithm was validated using mathematically deformed images, physically deformed phantom images and clinical tumor images. Compared with the original Additive Demons algorithm, the improved Demons algorithm achieved higher precision and faster convergence. Because different scanning conditions in fractionated radiotherapy can make the density ranges of the treatment image and the planning image differ, the improved Demons algorithm enables faster and more accurate registration for radiotherapy.
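
    For orientation, the classic (Thirion) demons displacement update that such algorithms build on; this is not the improved gradient-constancy variant the paper proposes:

      import numpy as np

      def demons_update(f, m, eps=1e-9):
          # u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)
          # f: static (planning) image, m: moving (treatment) image
          gy, gx = np.gradient(f)
          diff = m - f
          denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
          return diff * gx / denom, diff * gy / denom   # x- and y-displacement fields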

  3. Energy-aware virtual network embedding in flexi-grid optical networks

    NASA Astrophysics Data System (ADS)

    Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng; Chen, Bin

    2018-01-01

    The virtual network embedding (VNE) problem is to map multiple heterogeneous virtual networks (VNs) onto a shared substrate network, which mitigates the ossification of the substrate network. Meanwhile, energy efficiency has been widely considered in network design. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the power increment of each arriving VN request. We also propose a polynomial-time heuristic algorithm in which virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain low energy consumption. Numerical results show the functionality of the heuristic algorithm on a 24-node network.

  4. Geometric versus numerical optimal control of a dissipative spin-(1/2) particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapert, M.; Sugny, D.; Zhang, Y.

    2010-12-15

    We analyze the saturation of a nuclear magnetic resonance (NMR) signal using optimal magnetic fields. We consider both the problem of minimizing the duration of the control and that of minimizing its energy for a fixed duration. We solve the optimal control problems by using geometric methods and a purely numerical approach, the GRAPE algorithm, the two methods being based on the application of the Pontryagin maximum principle. A very good agreement is obtained between the two results. The optimal solutions for the energy-minimization problem are finally implemented experimentally with available NMR techniques.

  5. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but with competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  6. Reliable Adaptive Data Aggregation Route Strategy for a Trade-off between Energy and Lifetime in WSNs

    PubMed Central

    Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue

    2014-01-01

    Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs). The data transmission path can be compromised by disabled nodes. To construct a secure and reliable network, designing an adaptive route strategy that optimizes the energy consumption and network lifetime costs of aggregation is of great importance. In this paper, we address the reliable data aggregation route problem for WSNs. First, to ensure nodes work properly, we propose a data aggregation route algorithm that improves energy efficiency in the WSN. The construction process, achieved through discrete particle swarm optimization (DPSO), saves node energy costs. Then, to balance the network load and establish a reliable network, an adaptive route algorithm with minimal energy and maximum lifetime is proposed. Since this is a non-linear constrained multi-objective optimization problem, we propose a DPSO with a multi-objective fitness function combined with a phenotype sharing function and a penalty function to find available routes. Experimental results show that, compared with other tree routing algorithms, our algorithm can effectively reduce energy consumption and trade off energy consumption against network lifetime. PMID:25215944

  7. Voltage scheduling for low power/energy

    NASA Astrophysics Data System (ADS)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power is quadratically related to voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the workload. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadlines, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and provide optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the task size (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining throughput. Such a scheme has the advantage of allowing modules on critical paths to be assigned the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned lower voltage levels (thus reducing power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimized. The algorithm is iterative and utilizes slack based on the Lagrange multiplier method.
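
    The reason lowering and smoothing the supply voltage saves energy can be seen in a toy model: with a convex power-speed curve (here p(s) = s^3), finishing a fixed workload by the deadline at one constant speed costs less energy than alternating fast and slow. All numbers are arbitrary:

      def energy(speeds, dt=1.0):
          return sum(s ** 3 * dt for s in speeds)   # power p(s) = s^3

      work, slots = 10.0, 5
      constant = [work / slots] * slots             # speed 2 in every slot
      bursty = [4, 4, 0.5, 0.5, 1]                  # same total work
      assert sum(constant) == sum(bursty) == work
      print(energy(constant), energy(bursty))       # 40.0 vs 129.25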

  8. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    PubMed

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study develops an algorithm presenting a step-by-step selection method for the location and the size of a waste-to-energy facility, targeting maximum output energy while also considering what is in many cases the basic obstacle: the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, together with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project, investigating both the expenses and the revenues expected for the selected site and outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece (Athens, Thessaloniki, and Central Greece), where the cities of Larisa and Volos were selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors in which these areas were examined. Results reveal that developing a «solid» methodological approach to selecting the site and size of a waste-to-energy (WtE) facility is feasible. However, maximizing the energy efficiency factor R1 requires high utilization factors, while minimizing the final gate fee requires a high R1, high metals recovery from the bottom ash, and economic exploitation of any recovered raw materials.
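
    The R1 factor mentioned above is the EU Waste Framework Directive energy-efficiency criterion; a small helper under the standard formula (inputs in GJ/yr are illustrative, and Ep is normally computed with electricity weighted by 2.6 and heat by 1.1):

      def r1_factor(ep, ef, ei, ew):
          # R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))
          # ep: energy produced; ef: imported fuel energy contributing to steam;
          # ei: other imported energy; ew: energy contained in the treated waste
          return (ep - (ef + ei)) / (0.97 * (ew + ef))

      print(r1_factor(ep=900.0, ef=50.0, ei=40.0, ew=1000.0))  # ~0.80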

  9. Theoretical Bounds of Direct Binary Search Halftoning.

    PubMed

    Liao, Jan-Ray

    2015-11-01

    Direct binary search (DBS) produces the best-quality images among halftoning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations seek the binary state of each pixel that minimizes the total squared perceived error. This error-energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence had not yet been found. In this paper, we present a proof that the bound exists as conjectured, under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel further away from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, with swap operations considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce halftoned images of the same quality.
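
    A brute-force toggle-only pass conveying the energy being minimized (a small binomial kernel stands in for the HVS autocorrelation filter, and no incremental update tricks are used, so this is purely illustrative):

      import numpy as np
      from scipy.signal import convolve2d

      def perceived_error(b, g, h):
          # E = || h * (b - g) ||^2, with b the halftone, g the continuous image
          return np.sum(convolve2d(b - g, h, mode="same") ** 2)

      rng = np.random.default_rng(0)
      g = np.tile(np.linspace(0, 1, 16), (16, 1))     # gray ramp
      b = (rng.random(g.shape) < g).astype(float)     # initial random halftone
      k = np.array([1, 4, 6, 4, 1]) / 16.0
      h = np.outer(k, k)                              # separable "HVS" kernel

      changed = True
      while changed:                                  # sweep until no flip helps
          changed = False
          for i in range(16):
              for j in range(16):
                  e0 = perceived_error(b, g, h)
                  b[i, j] = 1.0 - b[i, j]             # toggle
                  if perceived_error(b, g, h) < e0:
                      changed = True                  # keep the flip
                  else:
                      b[i, j] = 1.0 - b[i, j]         # revert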

  10. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used detection algorithms, constrained energy minimization (CEM) and the matched filter (MF), can usually be expressed as the inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first. However, this difference leads to a difference in the target detection results. That is to say, the selection of the data origin can directly affect the performance of the detector. Therefore, does there exist a data origin other than the zero and mean-vector points that gives better target detection performance? This is a very meaningful issue in the field of target detection, but it has not yet received enough attention. In this study, we propose a novel objective function by introducing the data origin as another variable, and the solution of the function corresponds to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching for the best observing position and direction in the feature space, corresponding to the largest separation between target and background. Therefore, this new algorithm is referred to as the clever eye (CE) algorithm. Based on the Sherman-Morrison formula and the gradient ascent method, CE derives the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
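
    The standard closed form behind both detectors, for a target signature d and sample correlation matrix R, is w = R^{-1}d / (d^T R^{-1} d), with MF obtained by first subtracting the data mean. A minimal sketch on synthetic data:

      import numpy as np

      def cem_detector(X, d, reg=1e-6):
          """X: pixels as rows (N x B); d: target signature (B,)."""
          R = X.T @ X / X.shape[0]              # sample correlation matrix
          Rinv_d = np.linalg.solve(R + reg * np.eye(len(d)), d)
          return Rinv_d / (d @ Rinv_d)          # weight vector w

      rng = np.random.default_rng(0)
      X = rng.standard_normal((1000, 20))       # toy background pixels
      d = rng.random(20)                        # toy target signature
      w = cem_detector(X, d)
      print(w @ d)                              # constrained response: 1.0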

  11. A new mathematical modeling approach for the energy of threonine molecule

    NASA Astrophysics Data System (ADS)

    Sahiner, Ahmet; Kapusuz, Gulden; Yilmaz, Nurullah

    2017-07-01

    In this paper, we propose an improved methodology for finding optimum energy values in energy conformation problems. First, we construct Bezier surfaces near local minimizers based on data obtained from Density Functional Theory (DFT) calculations. Second, we blend the constructed surfaces in order to obtain a single smooth model. Finally, we apply a global optimization algorithm to find the two torsion angles that minimize the energy of the molecule.

  12. FPGA design for constrained energy minimization

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Chang, Chein-I.; Cao, Mang

    2004-02-01

    Constrained Energy Minimization (CEM) has been widely used for hyperspectral detection and classification. The feasibility of implementing the CEM as a real-time processing algorithm in systolic arrays has also been demonstrated. The main challenge of realizing the CEM in a hardware architecture lies in the computation of the inverse of the data correlation matrix used in the CEM, which requires a complete set of data samples. In order to cope with this problem, the data correlation matrix must be calculated in a causal manner, using only the data samples up to the sample being processed. This paper presents a Field Programmable Gate Array (FPGA) design of such a causal CEM. The main feature of the proposed FPGA design is the use of the Coordinate Rotation DIgital Computer (CORDIC) algorithm, which can convert a Givens rotation of a vector into a set of shift-add operations. As a result, the CORDIC algorithm can be easily implemented in a hardware architecture, and therefore in an FPGA. Since the computation of the inverse of the data correlation matrix involves a series of Givens rotations, the use of the CORDIC algorithm allows the causal CEM to perform real-time processing in an FPGA. In this paper, an FPGA implementation of the causal CEM is studied and its detailed architecture is described.
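
    The shift-add decomposition that makes CORDIC hardware-friendly, shown here as a floating-point model of the fixed-point rotation (each iteration uses only a shift by 2^-i, an add, and one final gain correction):

      import math

      def cordic_rotate(x, y, angle, n=24):
          K = 1.0
          for i in range(n):
              s = 1.0 if angle >= 0 else -1.0
              x, y = x - s * y * 2.0 ** -i, y + s * x * 2.0 ** -i
              angle -= s * math.atan(2.0 ** -i)
              K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulate gain correction
          return x * K, y * K

      print(cordic_rotate(1.0, 0.0, math.pi / 4))     # ~(0.7071, 0.7071)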

  13. Spot-shadowing optimization to mitigate damage growth in a high-energy-laser amplifier chain.

    PubMed

    Bahk, Seung-Whan; Zuegel, Jonathan D; Fienup, James R; Widmayer, C Clay; Heebner, John

    2008-12-10

    A spot-shadowing technique to mitigate damage growth in a high-energy laser is studied. Its goal is to minimize the energy loss and undesirable hot spots in intermediate planes of the laser. A nonlinear optimization algorithm solves for the complex fields required to mitigate damage growth in the National Ignition Facility amplifier chain. The method is generally applicable to any large fusion laser.

  14. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.

  15. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    DTIC Science & Technology

    2002-03-07

    (Record excerpt only; the retrieved text is fragmentary. It concerns energy-landscape search methods for the protein structure prediction (PSP) problem, including a symbolic/formalized problem domain description, energy minimization methods based on the idea that a protein's final resting conformation corresponds to a low-energy state, and a simple genetic algorithm (sGA) used to search the PSP energy landscape.)

  16. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-03-28

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.
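
    The core adaptation can be sketched in a few lines: scale the sampling interval with the node's state of charge so sampling speeds up while the harvester keeps the energy buffer full and slows down as it drains (thresholds and rates invented for illustration):

      def next_interval_s(e_now_j, e_full_j, t_min_s=10.0, t_max_s=600.0):
          soc = max(0.0, min(1.0, e_now_j / e_full_j))  # state of charge in [0, 1]
          return t_max_s - soc * (t_max_s - t_min_s)    # full battery -> fast sampling

      print(next_interval_s(90.0, 100.0))   # 69.0 s
      print(next_interval_s(10.0, 100.0))   # 541.0 s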

  17. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to the thermal hyperspectral data and eight pixel-based classifiers are tested: constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. LWIR data contain emissivity and temperature information about an object. The highest overall accuracy, 90.99%, was obtained using the SAM algorithm on the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data. The image is segmented into meaningful objects, with pixels grouped on the basis of properties such as geometry and length using a watershed algorithm, and a supervised classification algorithm, the support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing an accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives its highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.

  18. Energy Efficient Approach in RFID Network

    NASA Astrophysics Data System (ADS)

    Mahdin, Hairulnizam; Abawajy, Jemal; Salwani Yaacob, Siti

    2016-11-01

    Radio Frequency Identification (RFID) technology is among the key technologies of the Internet of Things (IoT). It is a sensing technology that can monitor, identify, locate and track physical objects via their tags. The energy in RFID is commonly used unwisely because readers perform repeated readings of the same tag for as long as it resides in the reader's vicinity. Repeated readings are unnecessary because they only generate duplicate data that contain no new information. The reading process needs to be scheduled accordingly to minimize the chances of repeated readings and save energy. This reduces operational cost and can prolong the lifetime of tag batteries that cannot be replaced. In this paper, we propose an approach named SELECT to minimize the energy spent during reading processes. Experiments show that the proposed algorithm contributes significant energy savings in RFID compared to other approaches.
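
    The duplicate-suppression idea can be sketched as a per-tag time window (this illustrates the principle only, not necessarily the SELECT algorithm itself):

      import time

      class DedupReader:
          def __init__(self, window_s=30.0):
              self.window_s = window_s
              self.last_seen = {}               # tag id -> last accepted read time

          def accept(self, tag_id, now=None):
              now = time.monotonic() if now is None else now
              last = self.last_seen.get(tag_id)
              if last is not None and now - last < self.window_s:
                  return False                  # duplicate read: skip processing
              self.last_seen[tag_id] = now
              return True

      r = DedupReader(window_s=30.0)
      print(r.accept("tag-1", now=0.0))    # True  (first read)
      print(r.accept("tag-1", now=5.0))    # False (within window)
      print(r.accept("tag-1", now=40.0))   # True  (window expired)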

  19. Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles

    NASA Astrophysics Data System (ADS)

    Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi

    2012-09-01

    In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index, which should be minimized, was defined as the cost function. The resulting problem was a two-point boundary value problem (TPBVP). Genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying an Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving some energy sources, was discussed. Near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.

  20. EV Charging Algorithm Implementation with User Price Preference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Hu, Boyang; Qiu, Charlie

    2015-02-17

    In this paper, we propose and implement a smart Electric Vehicle (EV) charging algorithm to control EV charging infrastructure according to users' price preferences. EVSE (Electric Vehicle Supply Equipment), equipped with bidirectional communication devices and smart meters, can be remotely monitored by the proposed charging algorithm applied to the EV control center and a mobile app. On the server side, an ARIMA model is utilized to fit historical charging load data and perform day-ahead prediction. A pricing strategy with an energy bidding policy is proposed and implemented to generate a charging price list that is broadcast to EV users through the mobile app. On the user side, EV drivers can submit their price preferences and daily travel schedules to negotiate with the control center to consume the expected energy and minimize charging cost simultaneously. The proposed algorithm is tested and validated through experimental implementations in UCLA parking lots.

  1. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    PubMed

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7, and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
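
    All of the variants compared above evaluate the same dynamic program; only the traversal order, and hence the cache behavior, changes. A minimal sketch of the underlying recurrence (plain textbook Nussinov with wobble pairs, not the paper's cache-optimized layouts):

        def nussinov(seq):
            # Classical O(n^3) Nussinov DP: dp[i][j] is the maximum number of
            # complementary pairs in seq[i..j]. The cache-efficient ByRow and
            # ByBox variants evaluate this same table in a different order.
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                     ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(1, n):
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]          # base j left unpaired
                    for k in range(i, j):        # base j paired with base k
                        if (seq[k], seq[j]) in pairs:
                            left = dp[i][k - 1] if k > i else 0
                            inner = dp[k + 1][j - 1] if k + 1 <= j - 1 else 0
                            best = max(best, left + 1 + inner)
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov("GGGAAAUCC"))  # 3 pairs for this short strand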

  2. Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.

    PubMed

    Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza

    2012-05-01

    Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task which engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention. For this goal, various techniques have been applied; however, Markov Random Field (MRF) based algorithms have produced reasonable results in noisy images compared to other methods. MRF seeks a label field which minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution, with a heavy computation burden. For this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on MRF and a hybrid of social algorithms, containing an ant colony optimization (ACO) and a Gossiping algorithm, which can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find the better path using neighborhood information, and this interaction causes the algorithm to converge to an optimum solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in speed and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. A multi-populations multi-strategies differential evolution algorithm for structural optimization of metal nanoclusters

    NASA Astrophysics Data System (ADS)

    Fan, Tian-E.; Shao, Gui-Fang; Ji, Qing-Shuang; Zheng, Ji-Wen; Liu, Tun-dong; Wen, Yu-Hua

    2016-11-01

    Theoretically, determining the structure of a cluster means searching for the global minimum on its potential energy surface. The global minimization problem is often nondeterministic-polynomial-time (NP) hard, and the number of local minima grows exponentially with the cluster size. In this article, a multi-populations multi-strategies differential evolution algorithm is proposed to search for the globally stable structures of Fe and Cr nanoclusters. The algorithm combines a multi-populations differential evolution with an elite pool scheme to keep the diversity of the solutions and avoid premature trapping in local optima. Moreover, multiple strategies, such as a growing method in initialization and three differential strategies in mutation, are introduced to improve the convergence speed and lower the computational cost. The accuracy and effectiveness of our algorithm have been verified by comparing the results for Fe clusters with the Cambridge Cluster Database. Meanwhile, the performance of our algorithm has been analyzed by comparing the convergence rate and energy evaluations with those of the classical DE algorithm; the contributions of the multiple populations, the multi-strategy mutation, and the growing initialization method are assessed separately. Furthermore, the structural growth pattern of Cr clusters has been predicted by this algorithm. The results show that the lowest-energy structures of Cr clusters contain many icosahedra, and the number of icosahedral rings rises with increasing size.

  4. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    PubMed

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
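
    The flow-network construction described above can be reproduced in a few lines for a toy system. The sketch below encodes two interacting binary sites with made-up energies (a Potts-style penalty J paid when the sites disagree); the minimum cut value equals the minimum energy, and the cut partition encodes the minimizing state. This illustrates the general technique, not the paper's implementation:

        import networkx as nx

        # Two binary sites; e_off/e_on are made-up state energies and J is a
        # made-up penalty paid when the two sites take different states.
        e_off, e_on, J = {1: 0.0, 2: 0.3}, {1: 0.8, 2: 0.1}, 0.5

        G = nx.DiGraph()
        for i in (1, 2):
            G.add_edge("s", i, capacity=e_on[i])   # cut if i lands on the "on" side
            G.add_edge(i, "t", capacity=e_off[i])  # cut if i lands on the "off" side
        G.add_edge(1, 2, capacity=J)               # crossed only when states differ
        G.add_edge(2, 1, capacity=J)

        energy, (off_side, on_side) = nx.minimum_cut(G, "s", "t")
        print(energy, off_side - {"s"}, on_side - {"t"})  # 0.3: both sites "off"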

  5. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purvine, Emilie AH; Monson, Kyle E.; Jurrus, Elizabeth R.

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of maximum flow-minimum cut graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.

  6. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    PubMed Central

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A.

    2016-01-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of “maximum flow-minimum cut” graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered. PMID:27089174

  7. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path, and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping), respectively.
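
    As a concrete illustration of the greedy principle, here is a minimal Prim's algorithm sketch on a made-up weighted graph; nothing here is specific to the paper beyond the algorithm it names:

        import heapq

        def prim_mst(adj, start=0):
            # Greedily absorb the cheapest edge leaving the current tree. With
            # i.i.d. random weights, the node-arrival order mimics invasion
            # percolation, which is the mapping the abstract refers to.
            visited, tree = {start}, []
            frontier = [(w, start, v) for w, v in adj[start]]
            heapq.heapify(frontier)
            while frontier:
                w, u, v = heapq.heappop(frontier)
                if v in visited:
                    continue
                visited.add(v)
                tree.append((u, v, w))
                for w2, x in adj[v]:
                    if x not in visited:
                        heapq.heappush(frontier, (w2, v, x))
            return tree

        # Toy 4-node graph; weights are arbitrary illustrative values.
        adj = {0: [(0.2, 1), (0.9, 2)], 1: [(0.2, 0), (0.4, 2), (0.7, 3)],
               2: [(0.9, 0), (0.4, 1), (0.1, 3)], 3: [(0.7, 1), (0.1, 2)]}
        print(prim_mst(adj))  # [(0, 1, 0.2), (1, 2, 0.4), (2, 3, 0.1)]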

  8. Simultaneous optimization of the cavity heat load and trip rates in linacs using a genetic algorithm

    DOE PAGES

    Terzić, Balša; Hofler, Alicia S.; Reeves, Cody J.; ...

    2014-10-15

    In this paper, a genetic algorithm-based optimization is used to simultaneously minimize two competing objectives guiding the operation of Jefferson Lab's Continuous Electron Beam Accelerator Facility linacs: cavity heat load and radio frequency cavity trip rates. The results represent a significant improvement over the standard linac energy management tool and thereby could lead to a more efficient Continuous Electron Beam Accelerator Facility configuration. This study also serves as a proof of principle of how a genetic algorithm can be used for optimizing other linac-based machines.

  9. Algorithm For Hypersonic Flow In Chemical Equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because it overcomes the stability limitations encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
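
    The chemical step is a constrained minimization solved point by point over the grid. The sketch below illustrates the idea on a hypothetical three-species system with a dimensionless ideal-mixture Gibbs energy; the species data, stoichiometry, and potentials are all invented for illustration:

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical 3-species mixture (A, B, AB). Dimensionless ideal-mixture
        # Gibbs energy: G(n) = sum_i n_i * (g_i + ln x_i), with made-up g_i.
        g = np.array([-1.0, -0.5, -3.0])
        A_el = np.array([[1, 0, 1],    # element-1 atoms per species
                         [0, 1, 1]])   # element-2 atoms per species
        b = np.array([1.0, 1.0])       # fixed elemental totals

        def gibbs(n):
            n = np.clip(n, 1e-12, None)
            return float(np.dot(n, g + np.log(n / n.sum())))

        res = minimize(gibbs, x0=np.array([0.5, 0.5, 0.5]),
                       bounds=[(1e-12, None)] * 3,
                       constraints={"type": "eq", "fun": lambda n: A_el @ n - b})
        print(res.x)  # mostly AB, with some free A and B kept by mixing entropy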

  10. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    PubMed Central

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, yet they are often expected to remain functional in the field for extended periods of time once deployed. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (from both wind and solar sources) and power hungry sensors (an ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead to perpetual WSN operation and significantly outperforms the state-of-the-art ASA. PMID:27043559
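
    The heart of the energy-aware extension is derating the ASA-chosen sampling rate by the node's energy reserve. A minimal sketch; the linear derating law, the rate floor, and all names are assumptions for illustration rather than the paper's exact rule:

        def easa_rate(asa_rate_hz, e_avail, e_max, f_min_hz=0.01):
            # Derate the ASA-chosen rate by the normalized energy reserve.
            # The linear law and the floor f_min_hz are illustrative choices.
            reserve = max(0.0, min(1.0, e_avail / e_max))
            return max(f_min_hz, asa_rate_hz * reserve)

        # A node at 40% reserve samples at 0.4x the ASA-chosen 1 Hz rate.
        print(easa_rate(1.0, e_avail=0.4, e_max=1.0))  # 0.4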

  11. Alternative definitions of the frozen energy in energy decomposition analysis of density functional theory calculations.

    PubMed

    Horn, Paul R; Head-Gordon, Martin

    2016-02-28

    In energy decomposition analysis (EDA) of intermolecular interactions calculated via density functional theory, the initial supersystem wavefunction defines the so-called "frozen energy" including contributions such as permanent electrostatics, steric repulsions, and dispersion. This work explores the consequences of the choices that must be made to define the frozen energy. The critical choice is whether the energy should be minimized subject to the constraint of fixed density. Numerical results for Ne2, (H2O)2, BH3-NH3, and ethane dissociation show that there can be a large energy lowering associated with constant density orbital relaxation. By far the most important contribution is constant density inter-fragment relaxation, corresponding to charge transfer (CT). This is unwanted in an EDA that attempts to separate CT effects, but it may be useful in other contexts such as force field development. An algorithm is presented for minimizing single determinant energies at constant density both with and without CT by employing a penalty function that approximately enforces the density constraint.

  12. Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zappala, D.; Tavner, P.; Crabtree, C.

    2013-01-01

    Improving the availability of wind turbines (WT) is critical to minimize the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availability, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary, as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represents one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive gear tooth damage during variable speed and load operating conditions of the test rig. Results show that the proposed technique proves efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation, reducing the quantity of information that WT operators must handle.

  13. A Holistic Approach to Networked Information Systems Design and Analysis

    DTIC Science & Technology

    2016-04-15

    attain quite substantial savings. 11. Optimal algorithms for energy harvesting in wireless networks. We use a Markov-decision-process (MDP) based ... approach to obtain optimal policies for transmissions. The key advantage of our approach is that it holistically considers information and energy in a ... coding technique to minimize delays and the number of transmissions in wireless systems. As we approach an era of ubiquitous computing with information

  14. Distributed Optimal Dispatch of Distributed Energy Resources Over Lossy Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Junfeng; Yang, Tao; Wu, Di

    In this paper, we consider the economic dispatch problem (EDP), where a cost function that is assumed to be strictly convex is assigned to each of the distributed energy resources (DERs), over packet-dropping networks. The goal of a standard EDP is to minimize the total generation cost while meeting the total demand and satisfying individual generator output limits. We propose a distributed algorithm for solving the EDP over networks. The proposed algorithm is resilient against packet drops over communication links. Under the assumption that the underlying communication network is strongly connected with a positive probability and the packet drops are independent and identically distributed (i.i.d.), we show that the proposed algorithm is able to solve the EDP. Numerical simulation results are used to validate and illustrate the main results of the paper.

  15. Optimizing Motion Planning for Hyper Dynamic Manipulator

    NASA Astrophysics Data System (ADS)

    Aboura, Souhila; Omari, Abdelhafid; Meguenni, Kadda Zemalache

    2012-01-01

    This paper investigates optimal motion planning for a hyper dynamic manipulator. As a case study, we consider a golf swing robot consisting of two actuated joints and mechanical stoppers. A Genetic Algorithm (GA) technique is proposed to solve for the optimal golf swing motion, which is generated by a Fourier series approximation. The objective function for the GA approach minimizes the intermediate and final state errors, minimizes the robot's energy consumption, and maximizes the robot's speed. The simulation results obtained show the effectiveness of the proposed scheme.

  16. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE PAGES

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.

  17. Stochastic Routing and Scheduling Policies for Energy Harvesting Communication Networks

    NASA Astrophysics Data System (ADS)

    Calvo-Fullana, Miguel; Anton-Haro, Carles; Matamoros, Javier; Ribeiro, Alejandro

    2018-07-01

    In this paper, we study the joint routing-scheduling problem in energy harvesting communication networks. Our policies, which are based on stochastic subgradient methods in the dual domain, act as an energy harvesting variant of the stochastic family of backpressure algorithms. Specifically, we propose two policies: (i) the Stochastic Backpressure with Energy Harvesting (SBP-EH), in which a node's routing-scheduling decisions are determined by the difference between the Lagrange multipliers associated with its queue stability constraints and its neighbors'; and (ii) the Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH), an improved algorithm in which the routing-scheduling decision is of a probabilistic nature. For both policies, we show that given sustainable data and energy arrival rates, the stability of the data queues over all network nodes is guaranteed. Numerical results corroborate the stability guarantees and illustrate the minimal performance gap that our policies offer with respect to classical ones that work with an unlimited energy supply.
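
    The SBP-EH decision rule is a differential rule, exactly as in classical backpressure but with Lagrange multipliers playing the role of queue lengths. A minimal sketch with illustrative data structures and values:

        def sbp_eh_next_hop(multipliers, neighbors, node):
            # Forward to the neighbor with the largest positive multiplier
            # differential; None means hold the packet this slot.
            best, best_gap = None, 0.0
            for nbr in neighbors[node]:
                gap = multipliers[node] - multipliers[nbr]
                if gap > best_gap:
                    best, best_gap = nbr, gap
            return best

        # Illustrative dual variables for three nodes.
        multipliers = {"a": 3.0, "b": 1.2, "c": 2.5}
        neighbors = {"a": ["b", "c"]}
        print(sbp_eh_next_hop(multipliers, neighbors, "a"))  # "b"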

  18. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm is proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey value gradient similarity term and a transformation error term are added to the demons energy function, and a formula is derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm is used to optimize the energy function so that the iteration number can be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and a faster convergence speed. Due to the influence of different scanning conditions in fractionated radiation, the density ranges of the treatment image and the planning image may differ. In such a case, the improved demons algorithm can achieve faster and more accurate registration for radiotherapy.

  19. Novel optimization technique of isolated microgrid with hydrogen energy storage.

    PubMed

    Beshr, Eman Hassan; Abdelghany, Hazem; Eteiba, Mahmoud

    2018-01-01

    This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG), and Photovoltaic (PV) arrays, and is supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization is performed through a non-dominated sorting genetic algorithm to suit the load requirements under the given constraints. A novel multi-objective flower pollination algorithm is utilized to check the results. The pros and cons of the two optimization techniques are compared and evaluated. An isolated microgrid is modelled in the MATLAB software package, and dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed during both summer and winter conditions, and three case studies are presented for each condition. The modified IEEE 15-bus system is used to validate the proposed algorithm.

  20. On Deployment of Multiple Base Stations for Energy-Efficient Communication in Wireless Sensor Networks

    DOE PAGES

    Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...

    2010-01-01

    Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In the multihop communication model, both base station location and data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.

  1. Novel optimization technique of isolated microgrid with hydrogen energy storage

    PubMed Central

    Abdelghany, Hazem; Eteiba, Mahmoud

    2018-01-01

    This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG), and Photovoltaic (PV) arrays, and is supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization is performed through a non-dominated sorting genetic algorithm to suit the load requirements under the given constraints. A novel multi-objective flower pollination algorithm is utilized to check the results. The pros and cons of the two optimization techniques are compared and evaluated. An isolated microgrid is modelled in the MATLAB software package, and dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed during both summer and winter conditions, and three case studies are presented for each condition. The modified IEEE 15-bus system is used to validate the proposed algorithm. PMID:29466433

  2. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.

  3. Low Power S-Box Architecture for AES Algorithm using Programmable Second Order Reversible Cellular Automata: An Application to WBAN.

    PubMed

    Gangadari, Bhoopal Rao; Ahamed, Shaik Rafi

    2016-12-01

    In this paper, we present a novel low-energy-consumption architecture for the S-Box used in the Advanced Encryption Standard (AES) algorithm, based on programmable second-order reversible cellular automata (RCA2). The architecture entails a low-power implementation with minimal delay overhead, and the security of the proposed RCA2-based S-Box is evaluated using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria, and entropy; the proposed architecture is found to be secure enough for cryptographic applications. Moreover, simulation studies of the proposed AES algorithm architecture show an energy consumption of 68.726 nJ and power dissipation of 3.856 mW for 0.18-μm technology at 13.69 MHz, and an energy consumption of 29.408 nJ and power dissipation of 1.65 mW for 0.13-μm technology at 13.69 MHz. The proposed AES algorithm with the RCA2-based S-Box shows a reduction in power consumption of 50% and in energy consumption of 5% compared to the best classical S-Box and composite field arithmetic based AES algorithms. Apart from that, it is also shown that RCA2-based S-Boxes are dynamic in nature, invertible, and exhibit lower power dissipation than LUT-based S-Boxes, and hence are suitable for Wireless Body Area Network (WBAN) applications.
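
    The reversibility that makes such S-Boxes invertible comes from the second-order update: the next configuration is a local rule applied to the current one, XORed with the previous one, so the previous configuration can always be recovered. A minimal sketch of that principle (the kernel rule, register width, and seeds are illustrative; the paper's S-Box wiring is more elaborate):

        def rca2_step(prev, curr, rule):
            # Second-order CA: next[i] = rule(window of curr) XOR prev[i].
            # XOR makes the map invertible: prev[i] = rule(...) XOR next[i].
            n = len(curr)
            nxt = [rule((curr[(i - 1) % n], curr[i], curr[(i + 1) % n])) ^ prev[i]
                   for i in range(n)]
            return curr, nxt

        rule90 = lambda w: w[0] ^ w[2]   # illustrative kernel (elementary rule 90)
        state = ([0, 1, 0, 0, 1, 0, 1, 1], [1, 0, 0, 1, 0, 1, 1, 0])
        for _ in range(4):               # iterate the reversible update
            state = rca2_step(*state, rule90)
        print(state[1])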

  4. Universal Darwinism As a Process of Bayesian Inference.

    PubMed

    Campbell, John O

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an "experiment" in the external world environment, and the results of that "experiment" or the "surprise" entailed by predicted and actual outcomes of the "experiment." Minimization of free energy implies that the implicit measure of "surprise" experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.

  5. Universal Darwinism As a Process of Bayesian Inference

    PubMed Central

    Campbell, John O.

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an “experiment” in the external world environment, and the results of that “experiment” or the “surprise” entailed by predicted and actual outcomes of the “experiment.” Minimization of free energy implies that the implicit measure of “surprise” experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature. PMID:27375438

  6. Smart HVAC Control in IoT: Energy Consumption Minimization with User Comfort Constraints

    PubMed Central

    Verikoukis, Christos

    2014-01-01

    Smart grid is one of the main applications of the Internet of Things (IoT) paradigm. Within this context, this paper addresses the efficient energy consumption management of heating, ventilation, and air conditioning (HVAC) systems in smart grids with variable energy price. To that end, first, we propose an energy scheduling method that minimizes the energy consumption cost for a particular time interval, taking into account the energy price and a set of comfort constraints, that is, a range of temperatures according to user's preferences for a given room. Then, we propose an energy scheduler where the user may select to relax the temperature constraints to save more energy. Moreover, thanks to the IoT paradigm, the user may interact remotely with the HVAC control system. In particular, the user may decide remotely the temperature of comfort, while the temperature and energy consumption information is sent through Internet and displayed at the end user's device. The proposed algorithms have been implemented in a real testbed, highlighting the potential gains that can be achieved in terms of both energy and cost. PMID:25054163

  7. Smart HVAC control in IoT: energy consumption minimization with user comfort constraints.

    PubMed

    Serra, Jordi; Pubill, David; Antonopoulos, Angelos; Verikoukis, Christos

    2014-01-01

    Smart grid is one of the main applications of the Internet of Things (IoT) paradigm. Within this context, this paper addresses the efficient energy consumption management of heating, ventilation, and air conditioning (HVAC) systems in smart grids with variable energy price. To that end, first, we propose an energy scheduling method that minimizes the energy consumption cost for a particular time interval, taking into account the energy price and a set of comfort constraints, that is, a range of temperatures according to user's preferences for a given room. Then, we propose an energy scheduler where the user may select to relax the temperature constraints to save more energy. Moreover, thanks to the IoT paradigm, the user may interact remotely with the HVAC control system. In particular, the user may decide remotely the temperature of comfort, while the temperature and energy consumption information is sent through Internet and displayed at the end user's device. The proposed algorithms have been implemented in a real testbed, highlighting the potential gains that can be achieved in terms of both energy and cost.
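
    In its simplest form, the first scheduling step described above is a small linear program: minimize priced energy subject to a comfort band on temperature. A minimal sketch under an assumed first-order thermal model; every number below (prices, coefficients, bounds) is made up:

        import numpy as np
        from scipy.optimize import linprog

        # Toy 4-slot horizon: choose HVAC energy u_t >= 0 to minimize
        # sum(price_t * u_t) while T_{t+1} = a*T_t + (1-a)*T_out + k*u_t
        # stays inside the comfort band [T_lo, T_hi].
        price = np.array([0.10, 0.25, 0.25, 0.10])        # $/kWh
        a, k, T0, T_out = 0.9, 2.0, 18.0, 10.0
        T_lo, T_hi, u_max = 19.0, 23.0, 3.0
        n = len(price)

        # Unroll the recursion: each T_t is affine in the decision vector u.
        rows, consts = [], []
        for t in range(1, n + 1):
            rows.append([k * a ** (t - 1 - s) if s < t else 0.0 for s in range(n)])
            consts.append(a ** t * T0 + sum(a ** (t - 1 - s) * (1 - a) * T_out
                                            for s in range(t)))
        A, c0 = np.array(rows), np.array(consts)

        res = linprog(price,
                      A_ub=np.vstack([A, -A]),                 # T <= hi, T >= lo
                      b_ub=np.concatenate([T_hi - c0, c0 - T_lo]),
                      bounds=[(0.0, u_max)] * n)
        print(res.x)  # the cheap slots carry most of the heating load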

  8. Design and simulation of a fast-charging station for plug-in hybrid electric vehicle (PHEV) batteries

    NASA Astrophysics Data System (ADS)

    de Leon, Nathalie Pulmones

    2011-12-01

    With the increasing interest in green technologies in transportation, plug-in hybrid electric vehicles (PHEV) have proven to be the best short-term solution to minimize greenhouse gas emissions. Despite such interest, conventional vehicle drivers are still reluctant to adopt such a new technology, mainly because of the long duration (4-8 hours) required to charge PHEV batteries with the currently existing Level I and II chargers. For this reason, Level III fast-charging stations capable of reducing the charging duration to 10-15 minutes are being considered. The present thesis focuses on the design of a fast-charging station that uses, in addition to the electrical grid, two stationary energy storage devices: a flywheel energy storage system and a supercapacitor. The power electronic converters used to interface the energy sources with the charging station are designed. The design also focuses on the energy management that minimizes the PHEV battery charging duration as well as the duration required to recharge the energy storage devices. For this reason, an algorithm that minimizes these durations is proposed along with its mathematical formulation, and its application in a fast-charging environment is illustrated by means of two scenarios.

  9. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
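
    A consolidation pass of this kind pairs a placement heuristic with a shutdown rule: pack VMs tightly while keeping a headroom margin against overload, then power down lightly loaded hosts. The sketch below shows one plausible placement rule, not the paper's exact RUA algorithm; all names and numbers are illustrative:

        def place_vms(vms, hosts, headroom=0.1):
            # Greedy utilization-aware placement sketch: each VM goes on the
            # feasible host it fills tightest, while a headroom margin guards
            # against overload after placement.
            used = {h: 0.0 for h in hosts}
            plan = {}
            for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
                slack = [(hosts[h] - used[h] - demand, h) for h in hosts
                         if hosts[h] - used[h] - demand >= headroom * hosts[h]]
                if not slack:
                    raise RuntimeError("no host can take " + vm)
                _, h = min(slack)
                used[h] += demand
                plan[vm] = h
            return plan

        # v1 and v2 pack onto h1; v3 spills to h2. Hosts left lightly loaded
        # are the candidates a power-aware pass would migrate and shut down.
        print(place_vms({"v1": 0.5, "v2": 0.4, "v3": 0.3}, {"h1": 1.0, "h2": 1.0}))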

  10. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201

  11. Penalized weighted least-squares approach for multienergy computed tomography image reconstruction via structure tensor total variation regularization.

    PubMed

    Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua

    2016-10-01

    Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio and strong streak artifacts. To address this particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
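
    For reference, the generic PWLS objective being minimized has the form below (standard PWLS notation, assumed here rather than quoted from the paper), where y is the log-transformed projection data, H the system matrix, Sigma the diagonal weighting matrix, and beta the regularization strength:

        \hat{\mu} = \arg\min_{\mu \ge 0}\; (y - H\mu)^{\top} \Sigma^{-1} (y - H\mu) + \beta\,\mathrm{STV}(\mu)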

  12. Iterative image reconstruction for multienergy computed tomography via structure tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with the whole energy window, the MECT images reconstructed by the analytical approach often suffer from a poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor of every point in the MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than the existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.

  13. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  14. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-07

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  15. Shape optimization of self-avoiding curves

    NASA Astrophysics Data System (ADS)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.

  16. Invasion Percolation and Global Optimization

    NASA Astrophysics Data System (ADS)

    Barabási, Albert-László

    1996-05-01

    Invasion bond percolation (IBP) is mapped exactly into Prim's algorithm for finding the shortest spanning tree of a weighted random graph. Exploring this mapping, which is valid for arbitrary dimensions and lattices, we introduce a new IBP model that belongs to the same universality class as IBP and generates the minimal energy tree spanning the IBP cluster.

  17. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical approach. Instead of using explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search the state space for the minimal energy state. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
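
    Simulated annealing is the workhorse here: it accepts energy-increasing moves with a temperature-controlled Boltzmann probability so the search can escape local minima of the MRF energy. A minimal generic sketch; the energy and proposal functions below are illustrative stand-ins for the paper's interpolation energy and pixel updates:

        import math, random

        def anneal(energy, propose, state, t0=1.0, cooling=0.995, steps=5000):
            # Accept uphill moves with Boltzmann probability exp(-dE/T) so the
            # search can hop out of local minima; T decays geometrically.
            e, t = energy(state), t0
            for _ in range(steps):
                cand = propose(state)
                de = energy(cand) - e
                if de <= 0 or random.random() < math.exp(-de / t):
                    state, e = cand, e + de
                t *= cooling
            return state

        # Toy stand-in for the MRF energy: a bumpy 1-D landscape on 0..99.
        f = lambda s: (s - 70) ** 2 / 100 + math.cos(s)
        move = lambda s: max(0, min(99, s + random.choice((-1, 1))))
        print(anneal(f, move, state=5))  # typically ends near the basin at ~70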

  18. Energy-efficient algorithm for broadcasting in ad hoc wireless sensor networks.

    PubMed

    Xiong, Naixue; Huang, Xingbo; Cheng, Hongju; Wan, Zheng

    2013-04-12

    Broadcasting is a common and basic operation used to support various network protocols in wireless networks. Achieving energy-efficient broadcasting is especially important for ad hoc wireless sensor networks because sensors are generally powered by batteries with limited lifetimes. Energy consumption for broadcast operations can be reduced by minimizing the number of relay nodes, based on the observation that data transmission consumes more energy than data reception in the sensor nodes, and improving network lifetime is a perennial issue in sensor network research. The minimum-energy broadcast problem is then equivalent to the problem of finding the minimum Connected Dominating Set (CDS) for a connected graph, which is proved NP-complete. In this paper, we introduce an Efficient Minimum CDS algorithm (EMCDS) with the help of a proposed ordered sequence list. EMCDS does not concern itself with node energy, and broadcast operations might fail if relay nodes are out of energy. We therefore also propose a Minimum Energy-consumption Broadcast Scheme (MEBS), with a modified version of EMCDS, aimed at providing an efficient scheduling scheme with maximized network lifetime. The simulation results show that the proposed EMCDS algorithm finds a smaller CDS than related works, and that MEBS helps increase the network lifetime by efficiently balancing energy among nodes in the network.
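
    A connected dominating set can be grown greedily: start from a high-degree node and repeatedly add the frontier node that newly covers the most neighbors. The sketch below shows that generic construction, not the paper's EMCDS (whose ordered sequence list is a refinement of this idea):

        import networkx as nx

        def greedy_cds(G):
            # Grow a connected set, always adding the frontier node that
            # dominates the most still-uncovered neighbors; the result is a
            # relay backbone for broadcasting.
            start = max(G.nodes, key=G.degree)
            cds, covered = {start}, {start} | set(G[start])
            while covered != set(G.nodes):
                frontier = {v for u in cds for v in G[u]} - cds
                best = max(frontier, key=lambda v: len(set(G[v]) - covered))
                cds.add(best)
                covered |= {best} | set(G[best])
            return cds

        G = nx.path_graph(6)      # 0-1-2-3-4-5
        print(greedy_cds(G))      # {1, 2, 3, 4}: the interior relay nodes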

  19. Free-energy analysis of spin models on hyperbolic lattice geometries.

    PubMed

    Serina, Marcel; Genzor, Jozef; Lee, Yoju; Gendiar, Andrej

    2016-04-01

    We investigate relations between spatial properties of the free energy and the radius of Gaussian curvature of the underlying curved lattice geometries. For this purpose, we derive recurrence relations for the analysis of the free energy, normalized per lattice site, of various multistate spin models in thermal equilibrium on distinct non-Euclidean surface lattices of infinite size. Whereas the free energy is calculated numerically by means of the corner transfer matrix renormalization group algorithm, the radius of curvature has an analytic expression. Two tasks are considered in this work. First, we search for the lattice geometry that minimizes the free energy per site; we conjecture that the flat Euclidean geometry results in the minimal free energy per site regardless of the spin model. Second, the relations among the free energy, the radius of curvature, and the phase transition temperatures are analyzed. We find that both the free energy and the phase transition temperature inherit the structure of the lattice geometry and asymptotically approach the profile of the Gaussian radius of curvature. This achievement opens new perspectives in AdS-CFT correspondence theories.

  20. Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks

    PubMed Central

    Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo

    2012-01-01

    Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method yield more efficient performance than existing methods. PMID:23202190

  1. Spectral CT Reconstruction with Image Sparsity and Spectral Mean

    PubMed Central

    Zhang, Yi; Xi, Yan; Yang, Qingsong; Cong, Wenxiang; Zhou, Jiliu

    2017-01-01

    Photon-counting detectors can acquire x-ray intensity data in different energy bins. The signal-to-noise ratio of the resultant raw data in each energy bin is generally low due to the narrow bin width and quantum noise. To address this problem, here we propose an image reconstruction approach for spectral CT that simultaneously reconstructs the x-ray attenuation coefficients in all the energy bins. Because the measured spectral data are highly correlated among the x-ray energy bins, the intra-image sparsity and inter-image similarity are important prior knowledge for image reconstruction. Inspired by this observation, the total variation (TV) and spectral mean (SM) measures are combined to improve the quality of reconstructed images. For this purpose, a linear mapping function is used to minimize image differences between energy bins. The split Bregman technique is applied to perform image reconstruction. Our numerical and experimental results show that the proposed algorithm outperforms competing iterative algorithms in this context. PMID:29034267

  2. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the location of the components in a wind farm so as to maximize the energy output for a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and penalizes the lack of system reliability; the viral system algorithm utilized in this research solves three well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost, and the maximization of system reliability; the final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, introducing the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, yielding a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the wind-wake effect of the turbines upon one another to describe and evaluate the reduction in power production capacity depending on the layout distribution of the wind turbines.
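
    To illustrate the evolutionary machinery in its simplest form, the toy genetic algorithm below places turbines on a coarse grid; the wake penalty, cost model, and operators are invented for illustration and are far simpler than the dissertation's viral system and multi-objective methods.

      import numpy as np

      rng = np.random.default_rng(0)
      GRID, N_TURB, POP, GENS = 25, 6, 40, 150      # 5x5 grid of candidate cells

      def cost_per_energy(layout):
          # Toy objective: unit power per turbine, reduced when another turbine
          # sits within 1.5 cells (a crude stand-in for wake losses).
          xy = np.array([(c // 5, c % 5) for c in layout], float)
          power = 0.0
          for i in range(len(xy)):
              d = np.delete(np.linalg.norm(xy - xy[i], axis=1), i)
              power += 1.0 - 0.5 * np.mean(d < 1.5)
          return N_TURB / max(power, 1e-9)          # flat cost per turbine

      pop = [rng.choice(GRID, N_TURB, replace=False) for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=cost_per_energy)
          elite = pop[: POP // 2]                   # truncation selection
          children = []
          while len(children) < POP - len(elite):
              a, b = rng.choice(len(elite), 2, replace=False)
              genes = np.union1d(elite[a], elite[b])        # crossover: pool cells
              child = rng.choice(genes, N_TURB, replace=False)
              if rng.random() < 0.2:                        # mutation: move one turbine
                  child[rng.integers(N_TURB)] = rng.integers(GRID)
              children.append(child)
          pop = elite + children
      pop.sort(key=cost_per_energy)
      print("best layout:", sorted(pop[0].tolist()))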

  3. Metal implants on CT: comparison of iterative reconstruction algorithms for reduction of metal artifacts with single energy and spectral CT scanning in a phantom model.

    PubMed

    Fang, Jieming; Zhang, Da; Wilcox, Carol; Heidinger, Benedikt; Raptopoulos, Vassilios; Brook, Alexander; Brook, Olga R

    2017-03-01

    To assess single energy metal artifact reduction (SEMAR) and spectral energy metal artifact reduction (MARS) algorithms in reducing artifacts generated by different metal implants. A phantom containing various metal implants was scanned with and without SEMAR (Aquilion One, Toshiba) and with and without MARS (Discovery CT750 HD, GE). Images were evaluated objectively by measuring the standard deviation in regions of interest and subjectively by two independent reviewers grading on a scale of 0 (no artifact) to 4 (severe artifact). Reviewers also graded new artifacts introduced by the metal artifact reduction algorithms. SEMAR and MARS significantly decreased the variability of density measurements adjacent to the metal implant, with a median SD (standard deviation of the density measurement) of 52.1 HU without SEMAR vs. 12.3 HU with SEMAR, p < 0.001. The median SD without MARS of 63.1 HU decreased to 25.9 HU with MARS, p < 0.001. The median SD with SEMAR is significantly lower than the median SD with MARS (p = 0.0011). SEMAR improved subjective image quality, with a reduction in overall artifact grading from 3.2 ± 0.7 to 1.4 ± 0.9, p < 0.001. The improvement of overall image quality by MARS did not reach statistical significance (3.2 ± 0.6 to 2.6 ± 0.8, p = 0.088). New artifacts introduced by the metal artifact reduction algorithm were significant for MARS (2.4 ± 1.0) but minimal for SEMAR (0.4 ± 0.7), p < 0.001. CT iterative reconstruction algorithms with single and spectral energy are both effective in reducing metal artifacts. The single energy-based algorithm provides better overall image quality than the spectral CT-based algorithm. The spectral metal artifact reduction algorithm introduces mild to moderate artifacts in the far field.

  4. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long-range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the global minimum energy conformation (GMEC). This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.

  5. Comparative Analysis of Particle Swarm and Differential Evolution via Tuning on Ultrasmall Titanium Oxide Nanoclusters

    NASA Astrophysics Data System (ADS)

    Inclan, Eric; Lassester, Jack; Geohegan, David; Yoon, Mina

    Optimization algorithms (OA) coupled with numerical methods enable researchers to identify and study (meta)stable nanoclusters without the control restrictions of empirical methods. An algorithm's performance is governed by two factors: (1) its compatibility with an objective function, and (2) the dimension of the design space, which increases with cluster size. Although researchers often tune an algorithm's user-defined parameters (UDP), tuning is not guaranteed to improve performance. In this research, Particle Swarm Optimization (PSO) and Differential Evolution (DE) are compared by tuning their UDP in a multi-objective optimization environment (MOE). Combined with a Kolmogorov-Smirnov test for statistical significance, the MOE enables the study of the Pareto Front (PF), made up of the UDP settings that trade off between best performance in energy minimization (``effectiveness'') based on force-field potential energy, and best convergence rate (``efficiency''). By studying the PF, this research finds that UDP values frequently suggested in the literature do not provide the best effectiveness for these methods. Additionally, monotonic convergence is found to significantly improve efficiency without sacrificing effectiveness for very small systems, suggesting better compatibility. Work is supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
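
    A textbook PSO loop of the kind being tuned is sketched below, applied to a tiny Lennard-Jones cluster as a stand-in for the force-field potential; the UDP values w, c1, and c2 shown are common defaults, not the tuned settings reported by this study.

      import numpy as np

      def lj_energy(x):
          # Lennard-Jones energy of a cluster; x holds flattened 3-D coordinates.
          p = x.reshape(-1, 3)
          e = 0.0
          for i in range(len(p)):
              for j in range(i + 1, len(p)):
                  r = np.linalg.norm(p[i] - p[j])
                  e += 4.0 * (r**-12 - r**-6)
          return e

      def pso(f, dim, n=30, iters=500, w=0.7, c1=1.5, c2=1.5, seed=0):
          # Textbook PSO; w, c1, c2 are the user-defined parameters (UDP)
          # whose tuning the study above investigates.
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1.5, 1.5, (n, dim))
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.array([f(xi) for xi in x])
          g = pbest[pval.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((n, dim)), rng.random((n, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              val = np.array([f(xi) for xi in x])
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              g = pbest[pval.argmin()].copy()
          return g, pval.min()

      # Four-atom cluster; the known LJ4 global minimum energy is -6.0.
      print(pso(lj_energy, dim=4 * 3)[1])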

  6. Force modeling for incisions into various tissues with MRF haptic master

    NASA Astrophysics Data System (ADS)

    Kim, Pyunghwa; Kim, Soomin; Park, Young-Dai; Choi, Seung-Bok

    2016-03-01

    This study proposes a new model to predict the reaction force that occurs in incisions during robot-assisted minimally invasive surgery. The reaction force is fed back to the manipulator by a magneto-rheological fluid (MRF) haptic master, which features a bi-directional clutch actuator. The reaction force feedback provides sensations similar to laparotomy that cannot be provided by a conventional master for surgery. This advantage shortens the training period for robot-assisted minimally invasive surgery and can improve the accuracy of operations. The reaction force modeling of incisions can also be utilized in a surgical simulator that provides a virtual reaction force. In this work, in order to model the reaction force during incisions, the incision process is analyzed from an energy perspective. Each mode of the incision process is classified by the tendency of the energy change and modeled for realistic real-time application. The reaction force model uses actual reaction force information from three types of real tissue: hard, medium, and soft. The modeled force is realized by the MRF haptic master through an algorithm based on the position and velocity of the scalpel, using two different control methods: an open-loop algorithm and a closed-loop algorithm. The reaction forces obtained from the proposed model are compared with a desired force in the time domain.

  7. Design of long induction linacs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caporaso, G.J.; Cole, A.G.

    1990-09-06

    A self-consistent design strategy for induction linacs is presented which addresses the issues of brightness preservation against space charge induced emittance growth, minimization of the beam breakup instability and the suppression of beam centroid motion due to chromatic effects (corkscrew) and misaligned focusing elements. A simple steering algorithm is described that widens the effective energy bandwidth of the transport system.

  8. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing the dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line-search optimization. For comparison, several benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
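
    The tabu search metaheuristic itself is compact; the generic sketch below (applied to a toy binary quadratic problem, not to the Hankel-matrix SI formulation) shows the short-term memory that lets the search climb out of local minima.

      import numpy as np

      def tabu_search(f, x0, neighbors, iters=200, tenure=10):
          # Always move to the best non-tabu neighbor, even uphill, and keep a
          # short memory of recent moves so the search can escape local minima.
          x, fx = x0, f(x0)
          best, fbest = x, fx
          tabu = []
          for _ in range(iters):
              cand = [(f(y), y, m) for y, m in neighbors(x) if m not in tabu]
              if not cand:
                  break
              fx, x, move = min(cand, key=lambda t: t[0])
              tabu.append(move)
              if len(tabu) > tenure:
                  tabu.pop(0)                       # fixed-tenure tabu list
              if fx < fbest:
                  best, fbest = x, fx
          return best, fbest

      # Toy problem: minimize a random quadratic over binary vectors by bit flips.
      rng = np.random.default_rng(1)
      Q = rng.normal(size=(20, 20))
      f = lambda x: x @ (Q + Q.T) @ x
      flips = lambda x: [(np.where(np.arange(len(x)) == i, 1 - x, x), i)
                         for i in range(len(x))]
      print(tabu_search(f, rng.integers(0, 2, 20), flips)[1])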

  9. Accelerating atomic structure search with cluster regularization

    NASA Astrophysics Data System (ADS)

    Sørensen, K. H.; Jørgensen, M. S.; Bruix, A.; Hammer, B.

    2018-06-01

    We present a method for accelerating the global structure optimization of atomic compounds. The method is demonstrated to speed up the finding of the anatase TiO2(001)-(1 × 4) surface reconstruction within a density functional tight-binding theory framework using an evolutionary algorithm. As a key element of the method, we use unsupervised machine learning techniques to categorize atoms present in a diverse set of partially disordered surface structures into clusters of atoms having similar local atomic environments. Analysis of more than 1000 different structures shows that the total energy of the structures correlates with the summed distances of the atomic environments to their respective cluster centers in feature space, where the sum runs over all atoms in each structure. Our method is formulated as a gradient based minimization of this summed cluster distance for a given structure and alternates with a standard gradient based energy minimization. While the latter minimization ensures local relaxation within a given energy basin, the former enables escapes from meta-stable basins and hence increases the overall performance of the global optimization.
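
    The regularizing quantity is straightforward to evaluate; the sketch below computes a summed cluster distance for a toy structure using a crude sorted-distance descriptor and k-means (via scikit-learn). The descriptor and cutoff are invented stand-ins for the paper's feature vectors, and the gradient-based minimization step is omitted.

      import numpy as np
      from sklearn.cluster import KMeans

      def env_features(positions, cutoff=3.0):
          # Crude per-atom descriptor: sorted neighbor distances within a cutoff,
          # zero-padded to fixed length (a stand-in for the paper's features).
          feats = []
          for i in range(len(positions)):
              d = np.sort(np.linalg.norm(positions - positions[i], axis=1))[1:]
              d = d[d < cutoff]
              feats.append(np.pad(d, (0, max(0, 8 - len(d))))[:8])
          return np.array(feats)

      def summed_cluster_distance(positions, k=4):
          # The regularizing quantity: total distance of each atomic environment
          # to its nearest cluster center in feature space.
          f = env_features(positions)
          km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(f)
          return np.linalg.norm(f - km.cluster_centers_[km.labels_], axis=1).sum()

      pos = np.random.default_rng(0).uniform(0, 6, (30, 3))    # toy structure
      print(summed_cluster_distance(pos))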

  10. A Survey on Next-Generation Mixed Line Rate (MLR) and Energy-Driven Wavelength-Division Multiplexed (WDM) Optical Networks

    NASA Astrophysics Data System (ADS)

    Iyer, Sridhar

    2015-06-01

    With ever-increasing traffic demands, the infrastructure of the current 10 Gbps optical network needs to be enhanced. Further, since the energy crisis is drawing increasing concern, new research topics need to be devised and technological solutions for energy conservation need to be investigated. In an all-optical mixed line rate (MLR) network, the feasibility of a lightpath is determined by the physical layer impairment (PLI) accumulation. In contrast to the PLI-aware routing and wavelength assignment (PLIA-RWA) algorithms applicable to a 10 Gbps wavelength-division multiplexed (WDM) network, a new routing, wavelength, and modulation format assignment (RWMFA) algorithm is required for the MLR optical network. With the rapid growth of energy consumption in Information and Communication Technologies (ICT), a lot of attention has recently been devoted to "green" ICT solutions. This article presents a review of the different RWMFA (PLIA-RWA) algorithms for MLR networks, and surveys the most relevant research activities aimed at minimizing energy consumption in optical networks. In essence, this article presents a comprehensive and timely survey of a growing field of research, as it covers most aspects of MLR and energy-driven optical networks. The author thus aims to provide a comprehensive reference for the growing base of researchers who will work on MLR and energy-driven optical networks in the upcoming years. Finally, the article also identifies several open problems for future research.

  11. An algorithm to locate optimal bond breaking points on a potential energy surface for applications in mechanochemistry and catalysis.

    PubMed

    Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang

    2017-10-21

    The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT. This point is called the barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has an eigenvector with zero eigenvalue that coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs there exist optimal BBPs, which indicate the optimal pulling direction and the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm based on the minimization of a positive definite function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.
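
    The Gauss-Newton template underlying such a scheme is shown below on a generic residual (the paper's σ-function is not reproduced); the Jacobian is approximated by forward differences, and the toy residual is chosen to have a known root.

      import numpy as np

      def gauss_newton(r, x0, iters=50, tol=1e-10, eps=1e-6):
          # Minimize f(x) = ||r(x)||^2 by linearizing the residual r at each step.
          x = np.asarray(x0, float)
          for _ in range(iters):
              rx = r(x)
              J = np.empty((len(rx), len(x)))
              for j in range(len(x)):               # forward-difference Jacobian
                  xp = x.copy()
                  xp[j] += eps
                  J[:, j] = (r(xp) - rx) / eps
              step, *_ = np.linalg.lstsq(J, -rx, rcond=None)
              x = x + step
              if np.linalg.norm(step) < tol:
                  break
          return x

      # Toy residual with a root at (1, 2); the iteration drives r(x) to zero.
      r = lambda x: np.array([x[0] - 1.0, (x[1] - 2.0) * x[0], x[0] * x[1] - 2.0])
      print(gauss_newton(r, [3.0, 0.5]))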

  12. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
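
    For comparison with the geodesic variant proposed above, here is the standard Levenberg-Marquardt loop, whose damping parameter interpolates between Gauss-Newton and gradient descent; the exponential-fit example and its parameters are hypothetical.

      import numpy as np

      def levenberg_marquardt(r, jac, x0, iters=100, lam=1e-3):
          # Standard LM: accepted steps shrink the damping (toward Gauss-Newton);
          # rejected steps grow it (toward small gradient-descent steps).
          x = np.asarray(x0, float)
          cost = np.sum(r(x) ** 2)
          for _ in range(iters):
              J, rx = jac(x), r(x)
              A = J.T @ J + lam * np.eye(len(x))
              step = np.linalg.solve(A, -J.T @ rx)
              x_new = x + step
              cost_new = np.sum(r(x_new) ** 2)
              if cost_new < cost:
                  x, cost, lam = x_new, cost_new, lam * 0.3
              else:
                  lam *= 10.0
          return x

      # Fit y = a * exp(b * t) to noisy synthetic data; expect roughly (2, -1.5).
      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 40)
      y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.normal(size=t.size)
      r = lambda p: p[0] * np.exp(p[1] * t) - y
      jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
      print(levenberg_marquardt(r, jac, [1.0, 0.0]))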

  13. Energy-aware virtual network embedding in flexi-grid networks.

    PubMed

    Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng

    2017-11-27

    Network virtualization technology has been proposed to allow multiple heterogeneous virtual networks (VNs) to coexist on a shared substrate network, which increases the utilization of the substrate network. Efficiently mapping VNs onto the substrate network is a major challenge, known as the VN embedding (VNE) problem. Meanwhile, energy efficiency has been widely considered in network design in terms of operating expenses and ecological awareness. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the electricity cost of each arriving VN request. We also propose a polynomial-time heuristic algorithm in which virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain a low electricity cost. Numerical results show that the heuristic algorithm performs close to the ILP for a small network, and we also demonstrate its applicability to larger networks.

  14. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    PubMed Central

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure services as their main mode of operation. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharing its resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with the solutions obtained from the first stage as the initial population. The solution calculated in the second stage is the final result of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872

  15. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—are studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
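
    A minimal discrete-descent routine of this flavor is easy to state; the sketch below performs first-improvement swap descent on a random Euclidean p-median instance. It illustrates the local-search idea analyzed above but does not reproduce the paper's bounds or its exact steepest-descent rule.

      import numpy as np

      def p_median_descent(dist, p, seed=0):
          # Start from a random set of p medians and keep applying cost-reducing
          # single swaps (remove one median, add a non-median) until none exists.
          rng = np.random.default_rng(seed)
          n = dist.shape[0]
          medians = set(rng.choice(n, p, replace=False).tolist())
          cost = lambda m: dist[:, sorted(m)].min(axis=1).sum()
          best = cost(medians)
          improved = True
          while improved:
              improved = False
              for out in list(medians):
                  for inn in set(range(n)) - medians:
                      cand = (medians - {out}) | {inn}
                      c = cost(cand)
                      if c < best:
                          medians, best, improved = cand, c, True
                          break             # restart the scan from the new solution
                  if improved:
                      break
          return sorted(medians), best

      pts = np.random.default_rng(1).uniform(0, 1, (50, 2))
      dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
      print(p_median_descent(dist, p=5))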

  16. Method for non-intrusively identifying a contained material utilizing uncollided nuclear transmission measurements

    DOEpatents

    Morrison, John L.; Stephens, Alan G.; Grover, S. Blaine

    2001-11-20

    An improved nuclear diagnostic method identifies a contained target material by measuring on-axis, mono-energetic uncollided particle radiation transmitted through a target material for two penetrating radiation beam energies, and applying specially developed algorithms to estimate a ratio of macroscopic neutron cross-sections for the uncollided particle radiation at the two energies, where the penetrating radiation is a neutron beam, or a ratio of linear attenuation coefficients for the uncollided particle radiation at the two energies, where the penetrating radiation is a gamma-ray beam. Alternatively, the measurements are used to derive a minimization formula based on the macroscopic neutron cross-sections for the uncollided particle radiation at the two neutron beam energies, or the linear attenuation coefficients for the uncollided particle radiation at the two gamma-ray beam energies. A candidate target material database, including known macroscopic neutron cross-sections or linear attenuation coefficients for target materials at the selected neutron or gamma-ray beam energies, is used to approximate the estimated ratio or to solve the minimization formula, such that the identity of the contained target material is discovered.

  17. A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications

    NASA Astrophysics Data System (ADS)

    Entezari-Maleki, Reza; Movaghar, Ali

    Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid, which requires minimizing the makespan of the grid tasks. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses a genetic approach to find a suitable assignment of tasks to grid resources. The experimental results obtained from applying the proposed algorithm to schedule independent tasks within grid environments demonstrate its ability to achieve schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA, and Sufferage.
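
    Of the baselines mentioned, Min-min is the simplest to sketch; the version below schedules tasks from an estimated time-to-compute (ETC) matrix and reports the makespan. The matrix is random toy data.

      import numpy as np

      def min_min(etc):
          # etc[i, j]: estimated time to compute task i on resource j. Repeatedly
          # schedule the task whose earliest possible completion time is smallest.
          n_tasks, n_res = etc.shape
          ready = np.zeros(n_res)               # when each resource becomes free
          unscheduled = set(range(n_tasks))
          schedule = {}
          while unscheduled:
              _, i, j = min(((ready[j] + etc[i, j], i, j)
                             for i in unscheduled for j in range(n_res)),
                            key=lambda t: t[0])
              schedule[i] = j
              ready[j] += etc[i, j]
              unscheduled.remove(i)
          return schedule, ready.max()          # assignment and makespan

      etc = np.random.default_rng(0).uniform(1, 10, (12, 3))
      print("makespan:", min_min(etc)[1])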

  18. Optimal Design of Wind-PV-Diesel-Battery System using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Suryoatmojo, Heri; Hiyama, Takashi; Elbaset, Adel A.; Ashari, Mochamad

    The use of diesel generators to supply the load demand on isolated islands in Indonesia has become widespread. With increases in the oil price and concerns about global warming, the integration of diesel generators with renewable energy systems has become an attractive option for supplying the load demand. This paper performs an optimal design of an integrated Wind-PV-Diesel-Battery system for an isolated island, with CO2 emission evaluation, using a genetic algorithm. The proposed system has been designed for hybrid power generation in East Nusa Tenggara, Indonesia (latitude 09.30S, longitude 122.0E). Simulation results show that the proposed system is able to minimize the total annual cost of the system under study and reduce the CO2 emissions generated by the diesel generators.

  19. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical-physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America

  1. Naturally Inspired Firefly Controller For Stabilization Of Double Inverted Pendulum

    NASA Astrophysics Data System (ADS)

    Srikanth, Kavirayani; Nagesh, Gundavarapu

    2015-12-01

    A double inverted pendulum, an established benchmark plant, is analyzed in this work under the influence of time delay. The controller is fine-tuned using a firefly algorithm whose fitness function penalizes variation of the cart position, so as to minimize the cart displacement while still stabilizing the system effectively. The nature-inspired algorithm, which imitates the way fireflies respond collectively, proves energy efficient owing to that inherent collective logic, and the study identifies the critical time delays for which the system remains healthy.

  2. Offshore wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Elkinton, Christopher Neil

    Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of the farm. A more thorough optimization highlighted the features of the area that would result in a minimized COE. The results showed reasonable layout designs and COE estimates that are consistent with existing offshore wind farms.

  3. DNASynth: a software application to optimization of artificial gene synthesis

    NASA Astrophysics Data System (ADS)

    Muczyński, Jan; Nowak, Robert M.

    2017-08-01

    DNASynth is a client-server software application in which the client runs in a web browser. The aim of this program is to support and optimize the process of artificial gene synthesis using the Ligase Chain Reaction (LCR). LCR makes it possible to obtain a DNA strand coding for a user-defined peptide. The DNA sequence is calculated by an optimization algorithm that considers optimal codon usage, minimal energy of secondary structures, and a minimal number of required LCR steps. Additionally, the absence of sequences characteristic of a user-defined set of restriction enzymes is guaranteed. The presented software was tested on synthetic and real data.

  4. Thermal energy storage to minimize cost and improve efficiency of a polygeneration district energy system in a real-time electricity market

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Kody M.; Kim, Jong Suk; Cole, Wesley J.

    2016-10-01

    District energy systems can produce low-cost utilities for large energy networks, but can also be a resource for the electric grid through their ability to ramp production or to store thermal energy in response to real-time market signals. In this work, dynamic optimization exploits the flexibility of thermal energy storage by determining optimal times to store and extract excess energy. This concept is applied to a polygeneration distributed energy system with combined heat and power, district heating, district cooling, and chilled water thermal energy storage. The system is a university campus responsible for meeting the energy needs of tens of thousands of people. The objective of the dynamic optimization problem is to minimize cost over a 24-h period while meeting multiple loads in real time. The paper presents a novel algorithm to solve this dynamic optimization problem with energy storage by decomposing the problem into multiple static mixed-integer nonlinear programming (MINLP) problems. Another innovative feature of this work is the study of a large, complex energy network that includes the interrelations of a wide variety of energy technologies. Results indicate that a cost savings of 16.5% is realized when the system can participate in the wholesale electricity market.

  5. Integrating atlas and graph cut methods for right ventricle blood-pool segmentation from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Dangi, Shusil; Linte, Cristian A.

    2017-03-01

    Segmentation of the right ventricle from cardiac MRI images can be used to build pre-operative anatomical heart models that precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of the right heart, such as right ventricular volume, ejection fraction, myocardial mass, and thickness, can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of the right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem on a graph. A shape prior, obtained by propagating labels from an average atlas using affine registration, is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation, corresponding to the labeling with the minimum energy configuration of the graph, is obtained via graph cuts and is iteratively refined to produce the final right ventricle blood pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentations for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including the Dice coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.

  6. Cross-Layer Algorithms for QoS Enhancement in Wireless Multimedia Sensor Networks

    NASA Astrophysics Data System (ADS)

    Saxena, Navrati; Roy, Abhishek; Shin, Jitae

    Many emerging applications, such as advanced telemedicine and surveillance systems, demand that sensors deliver multimedia content with a precise level of QoS. Minimizing energy in sensor networks has been a much-explored research area, but guaranteeing QoS over sensor networks still remains an open issue. In this letter we propose a cross-layer approach combining the network and MAC layers for QoS enhancement in wireless multimedia sensor networks. In the network layer, a statistical estimate of the sensory QoS parameters is performed, and a near-optimal genetic algorithmic solution is proposed to solve the NP-complete QoS-routing problem. The objective of the proposed MAC-layer algorithm, in turn, is to perform QoS-based packet classification and automatic adaptation of the contention window. Simulation results demonstrate that the proposed protocol is capable of providing lower delay and better throughput, at the cost of reasonable energy consumption, in comparison with other existing sensory QoS protocols.

  7. Efficiency of unconstrained minimization techniques in nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms have been categorized as being zeroth, first, or second order depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly suggests using analytically derived gradients instead of finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.

  8. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
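
    In the minimum-error-entropy literature, the Parzen estimate is often written for Renyi's quadratic entropy, because the Gaussian-kernel sum then has a closed form; the sketch below shows that estimator (an assumption in place of the paper's exact measure) and checks that a spread-out, bimodal error distribution scores a higher entropy than a concentrated one.

      import numpy as np

      def quadratic_error_entropy(errors, sigma=0.5):
          # Parzen estimate of Renyi's quadratic entropy H2 = -log V, where the
          # information potential V is a Gaussian-kernel mean over error pairs.
          e = np.asarray(errors, float)
          diff = e[:, None] - e[None, :]
          v = np.mean(np.exp(-diff**2 / (4 * sigma**2))) / np.sqrt(4 * np.pi * sigma**2)
          return -np.log(v)

      unimodal = np.random.default_rng(0).normal(0, 1, 500)
      bimodal = np.concatenate([unimodal[:250] - 2, unimodal[250:] + 2])
      print(quadratic_error_entropy(unimodal), quadratic_error_entropy(bimodal))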

  9. AMLSA Algorithm for Hybrid Precoding in Millimeter Wave MIMO Systems

    NASA Astrophysics Data System (ADS)

    Liu, Fulai; Sun, Zhenxing; Du, Ruiyan; Bai, Xiaoyu

    2017-10-01

    In this paper, an effective algorithm is proposed for hybrid precoding in mmWave MIMO systems, referred to as the alternating minimization algorithm with least squares amendment (AMLSA algorithm). To be specific, for the fully-connected structure, the presented algorithm is exploited to minimize the classical objective function and obtain the hybrid precoding matrix. It introduces an orthogonality constraint on the digital precoding matrix, which is subsequently amended by least squares after the alternating minimization iterations. Simulation results confirm that the achievable spectral efficiency of our proposed algorithm is somewhat better than that of the existing algorithm without the least squares amendment. Furthermore, the number of iterations is reduced slightly by improving the initialization procedure.

  10. Bio-mimic optimization strategies in wireless sensor networks: a survey.

    PubMed

    Adnan, Md Akhtaruzzaman; Abdur Razzaque, Mohammd; Ahmed, Ishtiaque; Isnin, Ismail Fauzi

    2013-12-24

    For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks, such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality of service management, security, and energy harvesting, have been extensively explored. The three most important issues among these are energy efficiency, quality of service, and security management. To get the best possible results in one or more of these issues, optimization is necessary. Furthermore, in a number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues might conflict and require a trade-off amongst them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context, contemporary researchers have started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus only on optimizing one specific issue of the three mentioned above. It is high time that these individual efforts are put into perspective and a more holistic view is taken. In this paper we take a step in that direction by presenting a survey of the literature in the area of wireless sensor network optimization, concentrating especially on the three most widely used bio-mimetic algorithms: particle swarm optimization, ant colony optimization, and the genetic algorithm. In addition, to stimulate new research and development interest in this field, open research issues, challenges, and future research directions are highlighted.

  11. Pre-Scheduled and Self Organized Sleep-Scheduling Algorithms for Efficient K-Coverage in Wireless Sensor Networks

    PubMed Central

    Hwang, I-Shyan

    2017-01-01

    The K-coverage configuration, which guarantees coverage of each location by at least K sensors, is highly popular and extensively used to monitor diversified applications in wireless sensor networks. Long network lifetime and high detection quality are the essentials of such K-covered sleep-scheduling algorithms. However, the existing sleep-scheduling algorithms either incur high cost or cannot preserve the detection quality effectively. In this paper, the Pre-Scheduling-based K-coverage Group Scheduling (PSKGS) and Self-Organized K-coverage Scheduling (SKS) algorithms are proposed to address the problems in existing sleep-scheduling algorithms. Simulation results show that our pre-scheduled KGS approach enhances the detection quality and network lifetime, whereas the self-organized SKS algorithm minimizes the computation and communication cost of the nodes and is thereby energy efficient. Moreover, SKS outperforms PSKGS in terms of network lifetime and detection quality as it is self-organized. PMID:29257078

  12. [Accurate 3D free-form registration between fan-beam CT and cone-beam CT].

    PubMed

    Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun

    2012-06-01

    Because of X-ray scatter, the CT numbers in cone-beam CT cannot exactly correspond to the electron densities. This results in registration error when an intensity-based registration algorithm is used to register planning fan-beam CT and cone-beam CT images. In order to reduce the registration error, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as the minimization of an energy functional. Through the calculus of variations and a Gauss-Seidel finite-difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented on the GPU through the OpenCL framework, which greatly reduced the registration time. Our experimental results showed that the proposed gradient-based registration algorithm registers clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement of online adaptive radiotherapy.

  13. Real time target allocation in cooperative unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kudleppanavar, Ganesh

    The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial, and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed-integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and XBee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
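
    The nearest neighbor step can be made fast with a spatial index; the sketch below uses scipy's cKDTree on hypothetical target coordinates to associate each newly detected target with its closest existing one.

      import numpy as np
      from scipy.spatial import cKDTree

      # Hypothetical coordinates: rows of 'assigned' are already-allocated targets;
      # each new detection is matched to its nearest existing target efficiently.
      assigned = np.random.default_rng(0).uniform(0, 100, (40, 2))
      new_targets = np.random.default_rng(1).uniform(0, 100, (5, 2))
      dist, idx = cKDTree(assigned).query(new_targets)
      for t, (d, i) in enumerate(zip(dist, idx)):
          print(f"new target {t} is {d:.1f} units from existing target {i}")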

  14. redMaGiC: selecting luminous red galaxies from the DES Science Verification data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozo, E.

    We introduce redMaGiC, an automated algorithm for selecting Luminous Red Galaxies (LRGs). The algorithm was developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the color cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. Additionally, we demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine-learning based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalog sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10^-3 (h^-1 Mpc)^-3, and a median photo-z bias (z_spec - z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4%. We also test our algorithm with Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8) and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1% level.

  15. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, computed on the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and a straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME—Automated phase Correction based on Minimization of Entropy.
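
    The objective is simple enough to sketch end to end; the code below phases a synthetic two-line spectrum by minimizing the entropy of the derivative of the real part with a derivative-free optimizer. Note that the published ACME also penalizes negative absorption intensities, which is omitted here, so the entropy objective alone is invariant to a sign flip of the spectrum.

      import numpy as np
      from scipy.optimize import minimize

      def entropy_of_phased(phases, spec):
          # Apply zero- and first-order phase, then return the Shannon entropy of
          # the normalized absolute derivative of the real (absorption) part.
          ph0, ph1 = phases
          n = spec.size
          phased = spec * np.exp(1j * (ph0 + ph1 * np.arange(n) / n))
          h = np.abs(np.diff(phased.real))
          p = h / (h.sum() + 1e-12)
          return -np.sum(p * np.log(p + 1e-12))

      def auto_phase(spec):
          res = minimize(entropy_of_phased, x0=[0.0, 0.0], args=(spec,),
                         method="Nelder-Mead")
          return res.x

      # Toy spectrum: two Lorentzian lines, deliberately mis-phased; the recovered
      # corrections should be roughly (-0.4, 2.4) for this construction.
      x = np.linspace(-1, 1, 1024)
      lorentz = lambda x0, w: w / (w + 1j * (x - x0))
      spec = (lorentz(-0.3, 0.01) + lorentz(0.4, 0.01)) * np.exp(-1j * (0.8 + 1.2 * x))
      print(auto_phase(spec))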

  16. Cooperative and Integrated Vehicle and Intersection Control for Energy Efficiency (CIVIC-E²)

    DOE PAGES

    Hou, Yunfei; Seliman, Salaheldeen M. S.; Wang, Enshu; ...

    2018-02-15

    Recent advances in connected vehicle technologies enable vehicles and signal controllers to cooperate and improve traffic management at intersections. This paper explores the opportunity for cooperative and integrated vehicle and intersection control for energy efficiency (CIVIC-E²) to contribute to a more sustainable transportation system. We propose a two-level approach that jointly optimizes the traffic signal timing and vehicles' approach speed, with the objective being to minimize total energy consumption for all vehicles passing through an isolated intersection. More specifically, at the intersection level, a dynamic programming algorithm is designed to find the optimal signal timing by explicitly considering the arrival time and energy profile of each vehicle. At the vehicle level, a model predictive control strategy is adopted to ensure that vehicles pass through the intersection in a timely fashion. Our simulation study has shown that the proposed CIVIC-E² system can significantly improve intersection performance under various traffic conditions. Compared with conventional fixed-time and actuated signal control strategies, the proposed algorithm can reduce energy consumption and queue length by up to 31% and 95%, respectively.

  17. Knotty: Efficient and Accurate Prediction of Complex RNA Pseudoknot Structures.

    PubMed

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo; Will, Sebastian

    2018-06-01

    The computational prediction of RNA secondary structure by free energy minimization has become an important tool in RNA research. In practice, however, energy minimization is mostly limited to pseudoknot-free structures or rather simple pseudoknots, not covering many biologically important structures such as kissing hairpins. Algorithms capable of predicting sufficiently complex pseudoknots (for sequences of length n) used to have extreme complexities; e.g., Pknots (Rivas and Eddy, 1999) has O(n^6) time and O(n^4) space complexity. The algorithm CCJ (Chen et al., 2009) dramatically improves the asymptotic run time for predicting complex pseudoknots (handling almost all relevant pseudoknots, while being slightly less general than Pknots), but this came at the cost of large constant factors in space and time, which strongly limited its practical application (∼200 bases already require 256 GB of space). We present a CCJ-type algorithm, Knotty, that handles the same comprehensive pseudoknot class of structures as CCJ with improved space complexity of Θ(n^3 + Z); due to the applied technique of sparsification, the number of "candidates", Z, appears to grow significantly more slowly than n^4 on our benchmark set (which includes pseudoknotted RNAs up to 400 nucleotides). In terms of run time over this benchmark, Knotty clearly outperforms Pknots and the original CCJ implementation, CCJ 1.0; Knotty's space consumption fundamentally improves over CCJ 1.0, being on a par with the space-economic Pknots. By comparing to CCJ 2.0, our unsparsified Knotty variant, we demonstrate the isolated effect of sparsification. Moreover, Knotty employs the state-of-the-art energy model of "HotKnots DP09", which results in superior prediction accuracy over Pknots. Our software is available at https://github.com/HosnaJabbari/Knotty. will@tbi.univie.ac.at. Supplementary data are available at Bioinformatics online.

  18. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    PubMed

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
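
    As a baseline for comparison, the classical Moore-style partition refinement (not the paper's backward-depth method) fits in a few lines: start from the accepting/rejecting split and refine states by the blocks their transitions reach until the partition stabilizes.

      def minimize_dfa(states, alphabet, delta, accepting):
          # Moore-style refinement: split by acceptance, then repeatedly refine by
          # the blocks reached under each symbol until the partition is stable.
          partition = {s: int(s in accepting) for s in states}
          while True:
              sig = {s: (partition[s],
                         tuple(partition[delta[s][a]] for a in alphabet))
                     for s in states}
              ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
              refined = {s: ids[sig[s]] for s in states}
              if len(set(refined.values())) == len(set(partition.values())):
                  return refined            # block index per state
              partition = refined

      # Toy complete DFA over {0, 1}: states 1,2 and 3,4 collapse pairwise.
      delta = {0: {0: 1, 1: 2}, 1: {0: 3, 1: 2}, 2: {0: 4, 1: 2},
               3: {0: 3, 1: 2}, 4: {0: 4, 1: 2}}
      print(minimize_dfa(range(5), [0, 1], delta, accepting={3, 4}))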

  19. Smart building temperature control using occupant feedback

    NASA Astrophysics Data System (ADS)

    Gupta, Santosh K.

    This work was motivated by the problem of computing optimal commonly-agreeable thermal settings in spaces with multiple occupants. In this work we propose algorithms that take into account each occupant's preferences along with the thermal correlations between different zones in a building, to arrive at optimal thermal settings for all zones of the building in a coordinated manner. In the first part of this work we incorporate active occupant feedback to minimize aggregate user discomfort and total energy cost. User feedback is used to estimate the users' comfort ranges, taking into account possible inaccuracies in the feedback. The control algorithm takes the energy cost into account, trading it off optimally with the aggregate user discomfort. A lumped heat transfer model based on thermal resistance and capacitance is used to model a multi-zone building. We provide a stability analysis and establish convergence of the proposed solution to a desired temperature that minimizes the sum of energy cost and aggregate user discomfort. However, for convergence to the optimum, sufficient separation between the user feedback frequency and the dynamics of the system is necessary; otherwise, the user feedback provided does not correctly reflect the effect of the current control input value on user discomfort. The algorithm is further extended using singular perturbation theory to determine the minimum time between successive user feedback solicitations. Under sufficient time scale separation, we establish convergence of the proposed solution. A simulation study and experimental runs on the Watervliet-based test facility demonstrate the performance of the algorithm. In the second part we develop a consensus algorithm for attaining a common temperature set-point that is agreeable to all occupants of a zone in a typical multi-occupant space. The information on the comfort range functions is held privately by each occupant. Using occupant-differentiated, dynamically adjusted prices as feedback signals, we propose a distributed solution, which ensures that a consensus is attained among all occupants upon convergence, irrespective of their temperature preferences being coherent or conflicting. Occupants are only assumed to be rational, in that they choose their own temperature set-points so as to minimize their individual energy cost plus discomfort. We use the Alternating Direction Method of Multipliers (ADMM) to solve our consensus problem. We further establish the convergence of the proposed algorithm to the optimal thermal set-point values that minimize the sum of the energy cost and the aggregate discomfort of all occupants in a multi-zone building. For simulating our consensus algorithm we use realistic building parameters based on the Watervliet test facility. The simulation study based on real-world building parameters establishes the validity of our theoretical model and provides insight into the dynamics of the system with a mobile user population. In the third part we present a game-theoretic (auction) mechanism that requires occupants to "purchase" their individualized comfort levels beyond what is provided by default by the building operator. The comfort pricing policy, derived as an extension of Vickrey-Clarke-Groves (VCG) pricing, ensures incentive-compatibility of the mechanism, i.e., an occupant acting in self-interest cannot benefit from declaring their comfort function untruthfully, irrespective of the choices made by other occupants.
The declared (or estimated) occupant comfort ranges (functions) are then utilized by the building operator---along with the energy cost information---to set the environment controls to optimally balance the aggregate discomfort of the occupants and the energy cost of the building operator. We use a realistic building model and parameters based on our test facility to demonstrate the convergence of the actual temperatures in different zones to the desired temperatures, and provide insight into the pricing structure necessary for truthful comfort feedback from the occupants. Finally, we present an end-to-end framework designed to enable occupant feedback collection and to incorporate the feedback data into energy-efficient operation of a building. We have designed a mobile application that occupants can use on their smartphones to provide their thermal preference feedback. When relaying the occupant feedback to the central server, the mobile application also uses indoor localization techniques to tie the occupant preference to their current thermal zone. Texas Instruments SensorTags are used for real-time zonal temperature readings. The mobile application relays the occupant preference, along with the location, to a central server that hosts our learning algorithm, which learns the environment and uses the occupant feedback to calculate the optimal temperature set-point. The entire process is triggered upon a change of occupancy, environmental conditions, and/or occupant preference. The learning algorithm is scheduled to run at regular intervals to respond dynamically to environmental and occupancy changes. We describe results from experimental studies in two different settings: a single-family residential home and a university laboratory space. (Abstract shortened by UMI.)
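
    As a concrete illustration of the consensus machinery in the second part, the sketch below runs an ADMM consensus loop in which each occupant holds a private quadratic discomfort term around a preferred temperature plus a shared linear energy-cost term. The cost forms, weights, and penalty parameter are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np

# Hypothetical per-occupant quadratic discomfort around a preferred temperature,
# plus a shared linear energy-cost term. These forms are illustrative only.
prefs   = np.array([20.5, 22.0, 23.5])   # preferred temperatures (deg C)
weights = np.array([1.0, 2.0, 1.5])      # discomfort sensitivities
c_energy = 0.3                            # marginal energy cost per deg C
rho = 1.0                                 # ADMM penalty parameter

t = prefs.copy()          # local set-point copies, one per occupant
z = t.mean()              # consensus set-point
u = np.zeros_like(t)      # scaled dual variables

for _ in range(100):
    # Local step: each occupant minimizes w_i*(t_i - pref_i)^2 + c*t_i
    #             + (rho/2)*(t_i - z + u_i)^2  (closed form, since quadratic)
    t = (2 * weights * prefs - c_energy + rho * (z - u)) / (2 * weights + rho)
    # Consensus step: average of local variables plus duals
    z = np.mean(t + u)
    # Dual update: accumulate consensus violations
    u += t - z

print(f"consensus set-point: {z:.2f} deg C")
```

    Each iteration alternates a closed-form local update private to each occupant, an averaging step playing the role of the coordinator, and a dual update that accumulates disagreement, which mirrors how the occupant-differentiated prices drive individually chosen set-points to consensus.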

  1. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict the target state; the prediction was used to awaken wireless sensor nodes selectively, so that their sleep time was prolonged. Exploiting the distributed computing capability of the nodes, an optimization approach combining a distributed genetic algorithm with simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme for forwarding nodes was presented to achieve extra energy conservation. Experimental results on target tracking verified that energy efficiency is enhanced by prediction-based dynamic energy management.
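
    The prediction step that drives the wake-up scheduling can be illustrated with a textbook bootstrap particle filter. The sketch below tracks a 1-D target under a constant-velocity model; the motion model, noise levels, and measurements are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal bootstrap particle filter for 1-D target position prediction.
N = 500
particles = rng.normal(0.0, 1.0, size=(N, 2))   # columns: position, velocity
weights = np.full(N, 1.0 / N)
dt, q_pos, r_meas = 1.0, 0.1, 0.5               # assumed noise parameters

def step(particles, weights, measurement):
    # Propagate: x += v*dt plus process noise (this prediction is what
    # would be used to decide which nodes to awaken)
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, q_pos, N)
    # Re-weight by the Gaussian measurement likelihood
    weights *= np.exp(-0.5 * ((measurement - particles[:, 0]) / r_meas) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles[:] = particles[idx]
        weights[:] = 1.0 / N
    return np.average(particles[:, 0], weights=weights)  # predicted position

for z in [0.4, 0.9, 1.5, 2.1]:          # synthetic measurements
    print(f"predicted position: {step(particles, weights, z):.2f}")
```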

  2. A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2008-01-01

    An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.

  3. Model-based minimization algorithm of a supercritical helium loop consumption subject to operational constraints

    NASA Astrophysics Data System (ADS)

    Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.

    2017-12-01

    Supercritical helium loops at 4.2 K are the baseline cooling strategy for the superconducting magnets of tokamaks (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range along their entire length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy the temperature constraints in different ways (by adjusting the bath temperature and the supercritical flow), but that only one is optimal from an energy point of view (every watt dissipated at 4.2 K requires at least 220 W of electrical power). To find the optimal operational conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It shows that a change in the magnet temperature operating point (e.g., in case of a change in the plasma configuration) involves large changes in the optimal operating point of the cryodistribution. Recommendations are made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Programme 2014-2018 under Grant 633053.
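
    A minimal stand-in for this kind of constrained minimization, using a generic SLSQP solver over a made-up two-variable surrogate (bath temperature and supercritical mass flow) with a magnet-temperature constraint, is sketched below. The cost and constraint functions are assumptions for demonstration only, not the Simcryogenics model.

```python
import numpy as np
from scipy.optimize import minimize

# Surrogate 4.2 K consumption over (bath temperature, supercritical mass flow),
# subject to a magnet-temperature constraint. All functions and numbers are
# illustrative assumptions.
def consumption(x):
    t_bath, m_dot = x
    return 50.0 * (5.2 - t_bath) + 30.0 * m_dot**2     # arbitrary surrogate (W)

def magnet_temp(x):
    t_bath, m_dot = x
    return t_bath + 0.8 / m_dot                        # crude heat-removal model

T_MAX = 4.6                                            # temperature limit (K)
res = minimize(
    consumption,
    x0=[4.3, 1.0],
    bounds=[(4.2, 4.5), (0.2, 3.0)],
    constraints=[{"type": "ineq", "fun": lambda x: T_MAX - magnet_temp(x)}],
)
print(res.x, res.fun)   # optimal (bath temperature, flow) and consumption
```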

  4. Active shape models unleashed

    NASA Astrophysics Data System (ADS)

    Kirschner, Matthias; Wesarg, Stefan

    2011-03-01

    Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability to segment anatomical structures quickly and accurately, even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM, and an improved k-Nearest Neighbour (kNN) classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.

  5. Detailed analysis of grid-based molecular docking: A case study of CDOCKER-A CHARMm-based MD docking algorithm.

    PubMed

    Wu, Guosheng; Robertson, Daniel H; Brooks, Charles L; Vieth, Michal

    2003-10-01

    The influence of various factors on the accuracy of protein-ligand docking is examined. The factors investigated include the role of a grid representation of protein-ligand interactions, the initial ligand conformation and orientation, the sampling rate of the energy hypersurface, and the final minimization. A representative docking method is used to study these factors, namely CDOCKER, a molecular dynamics (MD) simulated-annealing-based algorithm. A major emphasis in these studies is to compare the relative performance and accuracy of various grid-based approximations to explicit all-atom force field calculations. In these docking studies, the protein is kept rigid while the ligands are treated as fully flexible, and a final minimization step is used to refine the docked poses. A docking success rate of 74% is observed when an explicit all-atom representation of the protein (full force field) is used, while grid-based methods yield accuracies of 66-76%, depending on how the final minimization is treated. All docking experiments considered a 41-member protein-ligand validation set. A significant improvement in accuracy (76 vs. 66%) for grid-based docking is achieved if the explicit all-atom force field is used in a final minimization step to refine the docking poses. Statistical analysis shows that even lower-accuracy grid-based energy representations can be effectively used when followed with full force field minimization. The results of these grid-based protocols are statistically indistinguishable from the detailed atomic dockings and provide up to a sixfold reduction in computation time. For the test case examined here, improving the docking accuracy did not necessarily enhance the ability to estimate binding affinities using the docked structures. Copyright 2003 Wiley Periodicals, Inc.

  6. A strategy to find minimal energy nanocluster structures.

    PubMed

    Rogan, José; Varas, Alejandro; Valdivia, Juan Alejandro; Kiwi, Miguel

    2013-11-05

    An unbiased strategy to search for the global and local minimal energy structures of free-standing nanoclusters is presented. Our objectives are twofold: to find a diverse set of low-lying local minima, as well as the global minimum. To do so, we make extensive use of the fast inertial relaxation engine (FIRE) algorithm as an efficient local minimizer. This procedure turns out to be quite efficient at reaching the global minimum, and also most of the local minima. We test the method with the Lennard-Jones (LJ) potential, for which abundant literature exists, and obtain novel results, which include a new local minimum for LJ13, 10 new local minima for LJ14, and thousands of new local minima for 15≤N≤65. Insights on how to choose the initial configurations, analyzing the effectiveness of the method in reaching low-energy structures, including the global minimum, are developed as a function of the number of atoms of the cluster. Also, a novel characterization of the potential energy surface, analyzing properties of the local minima basins, is provided. The procedure constitutes a promising tool to generate a diverse set of cluster conformations, both two- and three-dimensional, that can be used as an input for refinement by means of ab initio methods. Copyright © 2013 Wiley Periodicals, Inc.
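
    The inner loop of such a strategy, repeated local minimization from random starting geometries, can be sketched in a few lines. For brevity the sketch below uses a generic quasi-Newton local minimizer rather than FIRE, and the restart count and box size are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def lj_energy(flat, n):
    # Total Lennard-Jones energy in reduced units for n atoms
    pos = flat.reshape(n, 3)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    r = d[np.triu_indices(n, k=1)]          # unique pair distances
    return np.sum(4.0 * (r**-12 - r**-6))

n = 13                                       # e.g. the LJ13 cluster
best = None
for _ in range(50):                          # many random restarts
    x0 = rng.uniform(-1.2, 1.2, size=3 * n)  # random initial configuration
    res = minimize(lj_energy, x0, args=(n,), method="L-BFGS-B")
    if best is None or res.fun < best:
        best = res.fun
print(f"lowest energy found for LJ{n}: {best:.3f}")
```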

  7. Optimal RTP Based Power Scheduling for Residential Load in Smart Grid

    NASA Astrophysics Data System (ADS)

    Joshi, Hemant I.; Pandya, Vivek J.

    2015-12-01

    To match supply and demand, shifting load from the peak period to the off-peak period is one of the effective solutions. Presently, a flat-rate tariff is used in major parts of the world. This type of tariff doesn't give incentives to customers who use electrical energy during the off-peak period. If a real time pricing (RTP) tariff is used, consumers can be encouraged to use energy during the off-peak period. Due to advancements in information and communication technology, two-way communication is possible between consumers and the utility. To implement this technique in a smart grid, a home energy controller (HEC), smart meters, a home area network (HAN) and a communication link between consumers and the utility are required. The HEC interacts automatically by running an algorithm to find the optimal energy consumption schedule for each consumer. However, the consumers are not all allowed to shift their load simultaneously to the off-peak period, so as to avoid a rebound peak condition. The peak-to-average ratio (PAR) is considered while carrying out the minimization. A linear programming problem (LPP) formulation is used for the minimization. The simulation results of this work show the effectiveness of the minimization method adopted. The hardware work is in progress, and the program based on the method described here will be applied to solve real problems.
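
    The LPP formulation can be made concrete with a toy instance: two shiftable household loads scheduled over six hourly RTP price slots, with a per-slot load cap standing in for the PAR restriction. All prices, energy requirements, and the cap are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

T = 6
price = np.array([0.30, 0.28, 0.15, 0.10, 0.12, 0.25])   # $/kWh per slot
energy_req = [3.0, 2.0]            # kWh each appliance must consume in total
cap = 2.0                          # max kW drawn per slot (PAR proxy)

# Decision variables: x[a, t] = kWh of appliance a scheduled in slot t,
# flattened to a vector of length 2*T.
c = np.tile(price, 2)                              # minimize total cost
A_eq = np.zeros((2, 2 * T))                        # per-appliance energy balance
A_eq[0, :T] = 1.0
A_eq[1, T:] = 1.0
A_ub = np.hstack([np.eye(T), np.eye(T)])           # total load per slot <= cap
res = linprog(c, A_ub=A_ub, b_ub=np.full(T, cap),
              A_eq=A_eq, b_eq=energy_req, bounds=(0, None))
print(res.x.reshape(2, T))                         # optimal schedule (kWh)
```

    As expected, the solver packs consumption into the cheap slots only up to the cap, which is exactly the rebound-peak behavior the PAR constraint is meant to suppress.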

  8. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    PubMed

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothing at image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV'. To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.
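
    The TV building block that MSTV modifies can be illustrated with smoothed-TV denoising by gradient descent, as sketched below; the coupling with Mumford-Shah segmentation that defines MSTV is not reproduced here, and the step size and weights are arbitrary.

```python
import numpy as np

def tv_denoise(img, lam=0.15, step=0.2, iters=200, eps=1e-6):
    # Gradient descent on 0.5*||u - img||^2 + lam*TV(u), with TV smoothed
    # by eps to keep the gradient defined where the image is flat.
    u = img.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # Divergence of the normalized gradient (smoothed TV subgradient)
        div = (np.diff(gx / mag, axis=1, prepend=0 * u[:, :1])
               + np.diff(gy / mag, axis=0, prepend=0 * u[:1, :]))
        u -= step * ((u - img) - lam * div)
    return u

rng = np.random.default_rng(0)
noisy = np.clip(np.eye(64) + 0.2 * rng.normal(size=(64, 64)), 0, 1)
print(np.abs(tv_denoise(noisy) - noisy).mean())        # smoothing applied
```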

  9. Fine-grained parallel RNAalifold algorithm for RNA secondary structure prediction on FPGA

    PubMed Central

    Xia, Fei; Dou, Yong; Zhou, Xingming; Yang, Xuejun; Xu, Jiaqing; Zhang, Yang

    2009-01-01

    Background In the field of RNA secondary structure prediction, the RNAalifold algorithm is one of the most popular methods using free energy minimization. However, general-purpose computers, including parallel computers or multi-core computers, exhibit a parallel efficiency of no more than 50%. Field-Programmable Gate Array (FPGA) chips provide a new approach to accelerating RNAalifold by exploiting fine-grained custom design. Results RNAalifold shows complicated data dependences, in which the dependence distance is variable and the dependence direction is also across two dimensions. We propose a systolic array structure including one master Processing Element (PE) and multiple slave PEs for fine-grained hardware implementation on FPGA. We exploit data reuse schemes to reduce the need to load energy matrices from external memory. We also propose several methods to reduce the energy table parameter size by 80%. Conclusion To our knowledge, our implementation with 16 PEs is the only FPGA accelerator implementing the complete RNAalifold algorithm. The experimental results show a factor of 12.2 speedup over the RNAalifold (Vienna Package 1.6.5) software for a group of aligned RNA sequences with 2981 residues running on a Personal Computer (PC) platform with a Pentium 4 2.6 GHz CPU. PMID:19208138

  10. Specimen charging in X-ray absorption spectroscopy: correction of total electron yield data from stabilized zirconia in the energy range 250-915 eV.

    PubMed

    Vlachos, Dimitrios; Craven, Alan J; McComb, David W

    2005-03-01

    The effects of specimen charging on X-ray absorption spectroscopy using total electron yield have been investigated using powder samples of zirconia stabilized by a range of oxides. The stabilized zirconia powder was mixed with graphite to minimize the charging, but significant modifications of the intensities of features in the X-ray absorption near-edge fine structure (XANES) still occurred. The time dependence of the charging was measured experimentally using a time scan, and an algorithm was developed to use this measured time dependence to correct for the effects of the charging. The algorithm assumes that the system approaches the equilibrium state by an exponential decay. The corrected XANES spectra show improved agreement with the electron energy-loss near-edge fine structure obtained from the same samples.

  11. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, which has traditionally been used as an edge detection function, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel's value by its Laplacian weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise. The method is also compared with the state-of-the-art BM3D method on several images. The algorithm appears to be easy, fast and comparable with many classic denoising algorithms for Gaussian noise.
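
    The iteration described above is simple enough to sketch directly. In the code below, the local-variance weight is an illustrative stand-in for the paper's switching noise detector, and the step size and iteration count are arbitrary.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def denoise(img, iters=20, k=0.2):
    u = img.astype(float).copy()
    for _ in range(iters):
        # Local noise estimate from the 3x3 neighborhood variance
        local_mean = uniform_filter(u, size=3)
        local_var = uniform_filter(u**2, size=3) - local_mean**2
        w = local_var / (local_var + local_var.mean() + 1e-12)  # weight in [0, 1)
        # A noisy spike has a negative 3x3 Laplacian, so this step pulls it
        # toward its neighborhood mean (diffusion-like smoothing).
        u += k * w * laplace(u)
    return u

rng = np.random.default_rng(0)
noisy = np.eye(32) + 0.1 * rng.normal(size=(32, 32))
smooth = denoise(noisy)
print(np.abs(laplace(smooth)).mean() < np.abs(laplace(noisy)).mean())  # True
```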

  12. Skylight energy performance and design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arasteh, D.; Johnson, R.; Selkowitz, S.

    1984-02-01

    Proper skylight utilization can significantly lower energy requirements and peak electrical loads for space conditioning and lighting in commercial buildings. In this study we systematically explore the energy effects of skylight systems in a prototypical office building and examine the savings from daylighting. The DOE-2.1B energy analysis computer program with its newly incorporated daylighting algorithms was used to generate more than 2000 parametric simulations for seven US climates. The parameters varied include skylight-to-roof ratio, shading coefficient, visible transmittance, skylight well light loss, electric lighting power density, roof heat transfer coefficient, and type of electric lighting control. For specific climates we identify roof/skylight characteristics that minimize total energy or peak electrical load requirements.

  13. Control strategy optimization of HVAC plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio

    In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performance as a function of their set-points and the environmental and occupied-space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to be applicable in a real-time setting.

  14. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l∞ norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l∞ algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.

  15. Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems

    NASA Technical Reports Server (NTRS)

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-01-01

    High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^-10 for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed-loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region in the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.

  16. Tunnel Ventilation Control Using Reinforcement Learning Methodology

    NASA Astrophysics Data System (ADS)

    Chu, Baeksuk; Kim, Dongnam; Hong, Daehie; Park, Jooyoung; Chung, Jin Taek; Kim, Tae-Hyung

    The main purpose of a tunnel ventilation system is to maintain the CO pollutant concentration and visibility index (VI) at adequate levels to provide drivers with a comfortable and safe driving environment. Moreover, it is necessary to minimize the power consumed to operate the ventilation system. To achieve these objectives, the control algorithm used in this research is the reinforcement learning (RL) method. RL is a goal-directed learning of a mapping from situations to actions without relying on exemplary supervision or complete models of the environment. The goal of RL is to maximize a reward, which is an evaluative feedback from the environment. In constructing the reward for the tunnel ventilation system, both objectives listed above are included, that is, maintaining an adequate level of pollutants and minimizing power consumption. An RL algorithm based on an actor-critic architecture and a gradient-following algorithm is adopted for the tunnel ventilation system. Simulation results based on real data collected from an existing tunnel ventilation system, together with real experimental verification, are provided in this paper. It is confirmed that, with the suggested controller, the pollutant level inside the tunnel was well maintained below the allowable limit and energy consumption was improved compared to the conventional control scheme.

  17. Nonlinear transient analysis by energy minimization: A theoretical basis for the ACTION computer code. [predicting the response of a lightweight aircraft during a crash

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1980-01-01

    The formulation basis for establishing the static or dynamic equilibrium configurations of finite element models of structures which may behave in the nonlinear range is provided. With both geometric and time-independent material nonlinearities included, the development is restricted to simple one- and two-dimensional finite elements which are regarded as the basic elements for modeling full aircraft-like structures under crash conditions. Representations of a rigid link and an impenetrable contact plane are added to the deformation model so that any number of nodes of the finite element model may be connected by a rigid link or may contact the plane. Equilibrium configurations are derived as the stationary conditions of a potential function of the generalized nodal variables of the model. Minimization of the nonlinear potential function is achieved by using the best current variable metric update formula for unconstrained minimization. Powell's conjugate gradient algorithm, which offers very low storage requirements at some slight increase in the total number of calculations, is provided as an alternative for extremely large-scale problems.

  18. Molecular system identification for enzyme directed evolution and design

    NASA Astrophysics Data System (ADS)

    Guan, Xiangying; Chakrabarti, Raj

    2017-09-01

    The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.

  19. A random-key encoded harmony search approach for energy-efficient production scheduling with shared resources

    NASA Astrophysics Data System (ADS)

    Garcia-Santiago, C. A.; Del Ser, J.; Upton, C.; Quilligan, F.; Gil-Lopez, S.; Salcedo-Sanz, S.

    2015-11-01

    When seeking near-optimal solutions for complex scheduling problems, meta-heuristics demonstrate good performance with affordable computational effort. This has resulted in a gravitation towards these approaches when researching industrial use-cases such as energy-efficient production planning. However, much of the previous research makes assumptions about softer constraints that affect planning strategies and about how human planners interact with the algorithm in a live production environment. This article describes a job-shop problem that focuses on minimizing energy consumption across a production facility of shared resources. The application scenario is based on real facilities made available by the Irish Center for Manufacturing Research. The formulated problem is tackled via harmony search heuristics with random-key encoding. Simulation results are compared to a genetic algorithm, a simulated annealing approach and first-come-first-served scheduling. The superior performance obtained by the proposed scheduler paves the way towards its practical implementation over industrial production chains.
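
    The random-key device is the interesting part: each harmony is a vector of keys in [0, 1), and sorting the keys decodes it into a job order. The sketch below shows harmony search over such an encoding with a made-up decoder cost; the memory size, HMCR/PAR rates, and objective are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_jobs, hms, hmcr, par, iters = 8, 10, 0.9, 0.3, 2000
energy_per_job = rng.uniform(1.0, 3.0, n_jobs)

def cost(keys):
    order = np.argsort(keys)                 # decode: random keys -> job order
    # Toy objective: position-weighted energy use (stand-in decoder)
    return float(np.sum(energy_per_job[order] * np.arange(1, n_jobs + 1)))

memory = [rng.random(n_jobs) for _ in range(hms)]   # harmony memory
scores = [cost(h) for h in memory]

for _ in range(iters):
    new = np.empty(n_jobs)
    for j in range(n_jobs):
        if rng.random() < hmcr:              # memory consideration
            new[j] = memory[rng.integers(hms)][j]
            if rng.random() < par:           # pitch adjustment
                new[j] = np.clip(new[j] + rng.normal(0, 0.05), 0, 1)
        else:                                # random selection
            new[j] = rng.random()
    c = cost(new)
    worst = int(np.argmax(scores))
    if c < scores[worst]:                    # replace worst harmony
        memory[worst], scores[worst] = new, c

print(f"best decoded cost: {min(scores):.2f}")
```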

  20. The Multiple-Minima Problem in Protein Folding

    NASA Astrophysics Data System (ADS)

    Scheraga, Harold A.

    1991-10-01

    The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
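
    Approach (c), Monte Carlo-plus-energy minimization, survives in modern libraries as basin hopping: random perturbation, local minimization, then Metropolis acceptance between the resulting minima. The sketch below applies it to a rugged 1-D toy surface standing in for a conformational energy landscape.

```python
import numpy as np
from scipy.optimize import basinhopping

def energy(x):
    x = float(np.ravel(x)[0])                # scalar coordinate
    return x**2 + 10.0 * np.sin(3.0 * x)     # rugged surface with many minima

# Each iteration: random step, local minimization, Metropolis accept/reject.
result = basinhopping(energy, x0=4.0, niter=200, stepsize=1.0, seed=0)
print(f"global minimum estimate: x = {result.x[0]:.3f}, E = {result.fun:.3f}")
```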

  1. SU-F-I-41: Calibration-Free Material Decomposition for Dual-Energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, W; Xing, L; Zhang, Q

    2016-06-15

    Purpose: To eliminate the tedious phantom calibration or manual region-of-interest (ROI) selection required in dual-energy CT material decomposition, we establish a new projection-domain material decomposition framework that incorporates the energy spectrum. Methods: As in dual-energy CT, the integral of the basis material image in our model is expressed as a linear combination of basis functions, which are polynomials of the high- and low-energy raw projection data. To yield the unknown coefficients of the linear combination, the proposed algorithm minimizes the quadratic error between the high- and low-energy raw projection data and the projection calculated using material images. We evaluate the algorithm with an iodine concentration numerical phantom at different dose and iodine concentration levels. The x-ray energy spectra of the high and low energy are estimated using an indirect transmission method. The derived monochromatic images are compared with the high- and low-energy CT images to demonstrate beam hardening artifact reduction. Quantitative results were measured and compared to the true values. Results: The differences between the true density values used for simulation and those obtained from the monochromatic images are 1.8%, 1.3%, 2.3%, and 2.9% for the dose levels from standard dose to 1/8 dose, and 0.4%, 0.7%, 1.5%, and 1.8% for the four iodine concentration levels from 6 mg/mL to 24 mg/mL. For all of the cases, beam hardening artifacts, especially streaks between dense inserts, are almost completely removed in the monochromatic images. Conclusion: The proposed algorithm provides an effective way to yield material images and artifact-free monochromatic images at different dose levels without the need for phantom calibration or ROI selection. Furthermore, the approach also yields accurate results when the concentration of the iodine insert is very low, suggesting the algorithm is robust with respect to low-contrast scenarios.

  2. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
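
    For concreteness, the classical partition-refinement scheme that such algorithms accelerate can be written in a few lines; the sketch below is Moore-style O(n^2) refinement on a made-up DFA, not the paper's backward-depth algorithm.

```python
def minimize_dfa(states, alphabet, delta, accepting):
    # Start from the accepting / non-accepting split
    partition = [set(accepting), set(states) - set(accepting)]
    while True:
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)
        new_partition = []
        for block in partition:
            # Group states whose transitions land in the same blocks
            groups = {}
            for s in block:
                sig = tuple(block_of(delta[s][a]) for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):
            return new_partition        # stable: blocks are the minimal states
        partition = new_partition

# Made-up 5-state DFA over {a, b}
delta = {0: {"a": 1, "b": 2}, 1: {"a": 1, "b": 3},
         2: {"a": 1, "b": 2}, 3: {"a": 1, "b": 4},
         4: {"a": 1, "b": 2}}
print(minimize_dfa([0, 1, 2, 3, 4], "ab", delta, accepting={4}))
```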

  3. On Connected Target k-Coverage in Heterogeneous Wireless Sensor Networks.

    PubMed

    Yu, Jiguo; Chen, Ying; Ma, Liran; Huang, Baogui; Cheng, Xiuzhen

    2016-01-15

    Coverage and connectivity are two important performance evaluation indices for wireless sensor networks (WSNs). In this paper, we focus on the connected target k-coverage (CTCk) problem in heterogeneous wireless sensor networks (HWSNs). A centralized connected target k-coverage algorithm (CCTCk) and a distributed connected target k-coverage algorithm (DCTCk) are proposed so as to generate connected cover sets for energy-efficient connectivity and coverage maintenance. To be specific, our proposed algorithms aim at achieving minimum connected target k-coverage, where each target in the monitored region is covered by at least k active sensor nodes. In addition, these two algorithms strive to minimize the total number of active sensor nodes and guarantee that each sensor node is connected to a sink, such that the sensed data can be forwarded to the sink. Our theoretical analysis and simulation results show that our proposed algorithms outperform a state-of-the-art connected k-coverage protocol for HWSNs.

  4. 3-D shape estimation of DNA molecules from stereo cryo-electron micrographs using a projection-steerable snake.

    PubMed

    Jacob, Mathews; Blu, Thierry; Vaillant, Cedric; Maddocks, John H; Unser, Michael

    2006-01-01

    We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies, along with the image energy derived from the matching process, is minimized using the conjugate gradient algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.

  5. Bio-Mimic Optimization Strategies in Wireless Sensor Networks: A Survey

    PubMed Central

    Adnan, Md. Akhtaruzzaman; Razzaque, Mohammd Abdur; Ahmed, Ishtiaque; Isnin, Ismail Fauzi

    2014-01-01

    For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks, such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality-of-service management, security, and energy harvesting, have been extensively explored. The three most important issues among these are energy efficiency, quality of service and security management. To get the best possible results in one or more of these areas in wireless sensor networks, optimization is necessary. Furthermore, in a number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues might conflict and require a trade-off amongst them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context, contemporary researchers have started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus only on the optimization of one specific issue of the three mentioned above. It is high time that these individual efforts are put into perspective and a more holistic view is taken. In this paper we take a step in that direction by presenting a survey of the literature in the area of wireless sensor network optimization, concentrating especially on the three most widely used bio-mimetic algorithms, namely particle swarm optimization, ant colony optimization and the genetic algorithm. In addition, to stimulate new research and development interests in this field, open research issues, challenges and future research directions are highlighted. PMID:24368702

  6. Automated Dynamic Demand Response Implementation on a Micro-grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Chelmis, Charalampos

    In this paper, we describe a system for real-time automated Dynamic and Sustainable Demand Response with sparse-data consumption prediction, implemented on the University of Southern California campus microgrid. Supply-side approaches to resolving energy supply-load imbalance do not work at high levels of renewable energy penetration. Dynamic Demand Response (D2R) is a widely used demand-side technique to dynamically adjust electricity consumption during peak load periods. Our D2R system consists of accurate machine learning based energy consumption forecasting models that work with sparse data, coupled with fast and sustainable load curtailment optimization algorithms that provide the ability to dynamically adapt to changing supply-load imbalances in near real-time. Our Sustainable DR (SDR) algorithms attempt to distribute customer curtailment evenly across sub-intervals during a DR event and avoid expensive demand peaks during a few sub-intervals. They also ensure that each customer is penalized fairly in order to achieve the targeted curtailment. We develop near-linear-time constant-factor approximation algorithms along with Polynomial Time Approximation Schemes (PTAS) for SDR curtailment that minimize the curtailment error, defined as the difference between the target and achieved curtailment values. Our SDR curtailment problem is formulated as an Integer Linear Program that optimally matches customers to curtailment strategies during a DR event while also explicitly accounting for customer strategy-switching overhead as a constraint. We demonstrate the results of our D2R system using real data from experiments performed on the USC smartgrid and show that 1) our prediction algorithms can very accurately predict energy consumption even with noisy or missing data and 2) our curtailment algorithms deliver DR with extremely low curtailment errors in the 0.01-0.05 kWh range.

  7. A Dynamic Programming Approach for Base Station Sleeping in Cellular Networks

    NASA Astrophysics Data System (ADS)

    Gong, Jie; Zhou, Sheng; Niu, Zhisheng

    The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirements and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of the state space as well as that of the action space. Simulations demonstrate that, with the proposed algorithm, the active BS pattern closely tracks the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.
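
    The flavor of the underlying DP can be shown on a toy instance: choose how many of B base stations are active in each time slot to cover forecast traffic, trading per-slot energy against a mode-switching cost. All loads and costs below are illustrative assumptions, and the real problem couples neighboring BSs far more richly.

```python
B = 4                                   # base stations
load = [1, 1, 3, 4, 2, 1]               # min active BSs needed per slot (QoS)
e_active, e_switch = 5.0, 3.0           # energy per active BS; cost per switch

T = len(load)
INF = float("inf")
cost = [[INF] * (B + 1) for _ in range(T)]   # cost[t][k]: k BSs active in slot t
prev = [[0] * (B + 1) for _ in range(T)]

for k in range(load[0], B + 1):
    cost[0][k] = e_active * k
for t in range(1, T):
    for k in range(load[t], B + 1):          # feasible actions meet QoS
        for j in range(load[t - 1], B + 1):
            c = cost[t - 1][j] + e_active * k + e_switch * abs(k - j)
            if c < cost[t][k]:
                cost[t][k], prev[t][k] = c, j

# Backtrack the cheapest feasible activation pattern
k = min(range(B + 1), key=lambda j: cost[-1][j])
plan = [k]
for t in range(T - 1, 0, -1):
    k = prev[t][k]
    plan.append(k)
print(list(reversed(plan)), min(cost[-1]))
```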

  8. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE PAGES

    Yaw, Sean; Mumey, Brendan

    2017-10-28

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning that once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has previously been shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, as well as an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.
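
    The flexibility-in-start-times idea can be sketched as a simple greedy heuristic: place each non-preemptible job, within its feasible start window, at the start time that minimizes the resulting peak, scheduling the largest jobs first. The jobs and loads are made up, and this mirrors only the flavor of the paper's heuristic, not its exact algorithm.

```python
T = 12                                   # time slots
profile = [0.0] * T                      # accumulated load per slot

# (power draw, duration, earliest start, latest start)
jobs = [(2.0, 3, 0, 6), (1.5, 2, 1, 8), (3.0, 2, 0, 9), (1.0, 4, 2, 7)]

def peak_if(start, power, dur):
    # Peak load if the job were placed at this start time
    return max(profile[t] + power if start <= t < start + dur else profile[t]
               for t in range(T))

for power, dur, lo, hi in sorted(jobs, reverse=True):   # big jobs first
    best = min(range(lo, hi + 1), key=lambda s: peak_if(s, power, dur))
    for t in range(best, best + dur):
        profile[t] += power

print(f"peak demand: {max(profile):.1f}, profile: {profile}")
```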

  10. SU-E-J-02: 4D Digital Tomosynthesis Based On Algebraic Image Reconstruction and Total-Variation Minimization for the Improvement of Image Quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, D; Kang, S; Kim, T

    2014-06-01

    Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and total-variation minimization in order to compensate for the undersampled projection data and improve the image quality. Methods: The projection data were acquired, assuming a cone-beam computed tomography system on a linear accelerator, via Monte Carlo simulation and an in-house 4D digital phantom generation program. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with the total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than the existing filtered-backprojection method. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy. In addition, it could enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP).

  11. Resource Control in Large-Scale Mobile-Agents Systems

    DTIC Science & Technology

    2005-07-01

    wakeup node schedule, much energy can be conserved. We also designed several protocols for global clock synchronization. The most interesting one is...choice as to which remote hosts to visit and in which order. Scheduling mobile-agent migration in a way that minimizes bandwidth and other resource...use, therefore, is both feasible and attractive. Dartmouth considered several variations of the scheduling problem, and developed an algorithm for

  12. Wireless Power Transfer for Distributed Estimation in Sensor Networks

    NASA Astrophysics Data System (ADS)

    Mai, Vien V.; Shin, Won-Yong; Ishibashi, Koji

    2017-04-01

    This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where wireless sensors are equipped with radio-frequency-based energy harvesting technology. The sensors' observations are locally processed using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, where they are coherently combined and the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of the mean-squared error of the BLUE. Two iterative algorithms are developed to solve the non-convex problems, converging at least to a local optimum. In particular, the algorithms are designed to jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for co-located sensors are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.
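
    The BLUE at the heart of the distortion measure is compact enough to state in code: for per-sensor observations x_i = h_i*theta + n_i with independent zero-mean noise, the estimator weights each sensor by its effective SNR. The gains and noise variances below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.7                                   # unknown scalar source
h = np.array([1.0, 0.8, 1.2, 0.5])            # effective channel/amplify gains
sigma2 = np.array([0.1, 0.4, 0.2, 0.8])       # per-sensor noise variances

x = h * theta + rng.normal(0, np.sqrt(sigma2))
# BLUE: theta_hat = (sum h_i x_i / sigma_i^2) / (sum h_i^2 / sigma_i^2)
theta_hat = np.sum(h * x / sigma2) / np.sum(h**2 / sigma2)
var_blue = 1.0 / np.sum(h**2 / sigma2)        # its variance (the MSE distortion)
print(f"estimate: {theta_hat:.3f}, BLUE variance: {var_blue:.3f}")
```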

  14. redMaGiC: Selecting luminous red galaxies from the DES Science Verification data

    DOE PAGES

    Rozo, E.; Rykoff, E. S.; Abate, A.; ...

    2016-05-30

    Here, we introduce redMaGiC, an automated algorithm for selecting luminous red galaxies (LRGs). The algorithm was specifically developed to minimize photometric redshift uncertainties in photometric large-scale structure studies. redMaGiC achieves this by self-training the colour cuts necessary to produce a luminosity-thresholded LRG sample of constant comoving density. We demonstrate that redMaGiC photo-zs are very nearly as accurate as the best machine learning-based methods, yet they require minimal spectroscopic training, do not suffer from extrapolation biases, and are very nearly Gaussian. We apply our algorithm to Dark Energy Survey (DES) Science Verification (SV) data to produce a redMaGiC catalogue sampling the redshift range z ∈ [0.2, 0.8]. Our fiducial sample has a comoving space density of 10^-3 (h^-1 Mpc)^-3, and a median photo-z bias (z_spec - z_photo) and scatter (σ_z/(1 + z)) of 0.005 and 0.017, respectively. The corresponding 5σ outlier fraction is 1.4 per cent. We also test our algorithm with Sloan Digital Sky Survey Data Release 8 and Stripe 82 data, and discuss how spectroscopic training can be used to control photo-z biases at the 0.1 per cent level.

  15. A Comparison of Three Algorithms for Orion Drogue Parachute Release

    NASA Technical Reports Server (NTRS)

    Matz, Daniel A.; Braun, Robert D.

    2015-01-01

    The Orion Multi-Purpose Crew Vehicle is susceptible to flipping apex forward between drogue parachute release and main parachute inflation. A smart drogue release algorithm is required to select a drogue release condition that will not result in an apex-forward main parachute deployment. The baseline algorithm is simple and elegant, but does not perform as well as desired in drogue failure cases. A simple modification to the baseline algorithm can improve performance, but can also sometimes fail to identify a good release condition. A new algorithm employing simplified rotational dynamics and a numeric predictor to minimize a rotational energy metric is proposed. A Monte Carlo analysis of a drogue failure scenario is used to compare the performance of the algorithms. The numeric predictor prevents more of the cases from flipping apex forward, and also results in an improvement in the capsule attitude at main bag extraction. The sensitivity of the numeric predictor to aerodynamic dispersions, errors in the navigated state, and execution rate is investigated, showing little degradation in performance.

  16. Extracting the differential inverse inelastic mean free path and differential surface excitation probability of Tungsten from X-ray photoelectron spectra and electron energy loss spectra

    NASA Astrophysics Data System (ADS)

    Afanas'ev, V. P.; Gryazev, A. S.; Efremenko, D. S.; Kaplya, P. S.; Kuznetcova, A. V.

    2017-12-01

    Precise knowledge of the differential inverse inelastic mean free path (DIIMFP) and differential surface excitation probability (DSEP) of tungsten is essential for many fields of materials science. In this paper, a fitting algorithm is applied to extract the DIIMFP and DSEP from X-ray photoelectron spectra and electron energy loss spectra. The algorithm uses the partial intensity approach as a forward model, in which a spectrum is given as a weighted sum of cross-convolved DIIMFPs and DSEPs. The weights are obtained as solutions of the Riccati and Lyapunov equations derived from the invariant imbedding principle. The inversion algorithm utilizes a parametrization of DIIMFPs and DSEPs on the basis of a classical Lorentz oscillator. Unknown parameters of the model are found using the fitting procedure, which minimizes the residual between measured spectra and forward simulations. It is found that the surface layer of tungsten contains several sublayers with corresponding Langmuir resonances. The thicknesses of these sublayers are proportional to the periods of the corresponding Langmuir oscillations, as predicted by the theory of R.H. Ritchie.

  17. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization over spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors in general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.

  18. A Cost-Effective Electric Vehicle Charging Method Designed For Residential Homes with Renewable Energy

    NASA Astrophysics Data System (ADS)

    Lie, T. T.; Liang, Xiuli; Haque, M. H.

    2015-03-01

    Most of the electrical infrastructure in use around the world today is decades old and may be ill-suited to the widespread proliferation of personal Electric Vehicles (EVs), whose charging requirements will place increasing strain on grid demand. In order to reduce the pressure on the grid and take advantage of off-peak charging, this paper presents a smart and cost-effective EV charging methodology for residential homes equipped with renewable energy resources such as photovoltaic (PV) panels and battery storage. The proposed method ensures slower battery degradation and prevents overcharging. The performance of the proposed algorithm is verified through simulation studies utilizing operating data of the Nissan Altra. From the simulation results, the algorithm is shown to be effective and feasible; it not only minimizes the charging cost but also shifts charging from peak to off-peak times.

  19. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    PubMed

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

    The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, Nudged Elastic Band, and umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.

  20. Determination of optimum allocation and pricing of distributed generation using genetic algorithm methodology

    NASA Astrophysics Data System (ADS)

    Mwakabuta, Ndaga Stanslaus

    Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses and economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. At the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. At the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. At the advanced level, an artificial intelligence technique is applied to minimize cost under Distributed Generation injection from private vendors. Soft computing techniques are finding increasing application in solving optimization problems in large and complex practical systems. The dissertation focuses on the Genetic Algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities decide how to operate their system economically. This would enable modular and flexible investments that have real benefits to the electric distribution system: improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services.

  1. The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers

    NASA Astrophysics Data System (ADS)

    Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.

    1992-01-01

    Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program, with an updated nonlinear programming code (NZSOL), was used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered effective for solving large problems with sparse matrices. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities, and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
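
    As a toy illustration of the SQP workflow (not the OTIS trajectory formulation), the sketch below uses SciPy's SLSQP solver, a sequential least squares QP method, to minimize a quadratic objective subject to a nonlinear equality constraint; the solver internally handles both the feasibility and QP-subproblem phases.

      import numpy as np
      from scipy.optimize import minimize

      # Toy stand-in for the NLP: quadratic objective, nonlinear constraint.
      objective = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2
      constraints = [{"type": "eq", "fun": lambda x: x[0]**2 + x[1]**2 - 4.0}]

      res = minimize(objective, x0=np.array([2.0, 0.0]),
                     method="SLSQP", constraints=constraints)
      print(res.x, res.fun)    # constrained minimizer and objective value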

  2. Improved pressure-velocity coupling algorithm based on minimization of global residual norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatwani, A.U.; Turan, A.

    1991-01-01

    In this paper an improved pressure-velocity coupling algorithm is proposed based on the minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.
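
    The selection step can be sketched as a bounded scalar search: treat the global residual norm after one outer iteration as a function of the underrelaxation factor and minimize it. The residual model below is a placeholder for a real CFD update, so only the structure of the search carries over.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def global_residual_norm(alpha, state):
          # Placeholder for one SIMPLE iteration: apply corrections
          # underrelaxed by `alpha`, then return the residual norm.
          residuals = state["residual"] * (1.0 - alpha) + 0.05 * alpha**2
          return np.linalg.norm(residuals)

      state = {"residual": np.array([1.0, 0.7, 0.4])}   # illustrative residuals

      # Pick the factor minimizing the global residual norm at this iteration.
      best = minimize_scalar(global_residual_norm, bounds=(0.1, 1.0),
                             method="bounded", args=(state,))
      print(best.x)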

  3. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks.

    PubMed

    Khan, Anwar; Ahmedy, Ismail; Anisi, Mohammad Hossein; Javaid, Nadeem; Ali, Ihsan; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan

    2018-01-09

    Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay.
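
    A holding-time rule of this general kind can be sketched in a few lines; the depth-based formula and all parameter names below are illustrative assumptions, not the exact expression from the paper.

      def holding_time(depth_sender, depth_node, max_range, tau, delta):
          # Hypothetical rule: a node offering a larger depth advance toward
          # the surface holds the packet for less time, so it forwards first
          # and its transmission suppresses the other candidates' copies.
          advance = depth_sender - depth_node        # progress toward surface
          return (2.0 * tau / delta) * (max_range - advance)

      # Two candidate forwarders contending for the same packet.
      print(holding_time(90.0, 60.0, 100.0, tau=0.067, delta=10.0))  # short wait
      print(holding_time(90.0, 80.0, 100.0, tau=0.067, delta=10.0))  # long wait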

  4. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks

    PubMed Central

    Khan, Anwar; Anisi, Mohammad Hossein; Javaid, Nadeem; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan

    2018-01-01

    Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay. PMID:29315247

  5. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    The existing research on acceleration control mainly focuses on optimizing the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for battery electric vehicles (EVs) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter, and the battery) is modeled in preference to a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
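
    A minimal dynamic-programming sketch of the fixed-horizon, minimum-energy acceleration problem is given below; the longitudinal dynamics, drag constants, and drivetrain efficiency are toy stand-ins for the paper's PMSM/inverter/battery model.

      import numpy as np

      m, r, dt = 1200.0, 0.3, 0.5            # mass (kg), wheel radius (m), step (s)
      speeds = np.linspace(0.0, 20.0, 81)    # speed grid, m/s
      torques = np.linspace(0.0, 600.0, 31)  # wheel torque grid, N*m
      T_steps = 30                           # horizon: 15 s

      def step_energy(v, tq):
          # Battery energy for one step: traction work / assumed efficiency.
          drag = 0.5 * 1.2 * 0.6 * 2.0 * v**2          # simple aero drag, N
          v_next = max(v + (tq / r - drag) / m * dt, 0.0)
          return v_next, (tq / r) * v * dt / 0.85

      INF = float("inf")
      cost = np.full(len(speeds), INF); cost[-1] = 0.0  # must end at 20 m/s
      for _ in range(T_steps):                          # backward in time
          new = np.full(len(speeds), INF)
          for i, v in enumerate(speeds):
              for tq in torques:
                  v_next, e = step_energy(v, tq)
                  j = np.abs(speeds - v_next).argmin()  # snap to the grid
                  new[i] = min(new[i], e + cost[j])
          cost = new
      print(cost[0])   # minimum energy (J) to reach 20 m/s from rest in 15 s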

  6. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed, based on the constrained concave-convex procedure (CCCP), that achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
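
    Dinkelbach's algorithm itself is compact: repeatedly maximize rate(p) - lam * power(p) and update lam to the achieved ratio until the inner optimum reaches zero. The single-variable rate and power models below are illustrative, not the paper's MIMO-NOMA formulation.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rate = lambda p: np.log2(1.0 + 5.0 * p)      # toy achievable rate
      power = lambda p: 1.0 + p                    # toy circuit + transmit power

      lam, p_max = 0.0, 10.0
      for _ in range(50):
          # Inner problem: maximize rate(p) - lam * power(p) over p.
          inner = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                                  bounds=(0.0, p_max), method="bounded")
          p_star = inner.x
          f_star = rate(p_star) - lam * power(p_star)
          lam = rate(p_star) / power(p_star)       # Dinkelbach update
          if abs(f_star) < 1e-9:
              break
      print(lam, p_star)   # maximum energy efficiency and the optimal power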

  7. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed, based on the constrained concave-convex procedure (CCCP), that achieves convergence to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  8. Optimization of Energy Efficiency and Conservation in Green Building Design Using Duelist, Killer-Whale and Rain-Water Algorithms

    NASA Astrophysics Data System (ADS)

    Biyanto, T. R.; Matradji; Syamsi, M. N.; Fibrianto, H. Y.; Afdanny, N.; Rahman, A. H.; Gunawan, K. S.; Pratama, J. A. D.; Malwindasari, A.; Abdillah, A. I.; Bethiana, T. N.; Putra, Y. A.

    2017-11-01

    The development of green building has been growing in both design and quality, but it remains limited by high investment costs. Green buildings can reduce energy usage, especially for the cooling system. External load plays a major role in reducing the usage of the cooling system, and it is affected by the type of wall sheathing, glass, and roof. The proper selection of wall, glass, and roof materials is therefore very important in reducing external load. Hence, optimization of energy efficiency and conservation in green building design is required. Since this optimization involves integer variables and non-linear equations, the problem is a Mixed-Integer Non-Linear Program (MINLP) that requires global optimization techniques such as stochastic optimization algorithms. In this paper the design variables, i.e., the types of glass and roof, were chosen using the Duelist, Killer-Whale, and Rain-Water Algorithms to obtain the optimum energy while considering minimal investment. The optimization results showed that single Planibel-G glass of 3.2 mm thickness with glass wool insulation provided a maximum ROI of 36.8486%, an EUI reduction of 54 kWh/m2·year, a CO2 emission reduction of 486.8971 tons/year, and an investment reduction of 4,078,905,465 IDR.

  9. Single and multiple objective biomass-to-biofuel supply chain optimization considering environmental impacts

    NASA Astrophysics Data System (ADS)

    Valles Sosa, Claudia Evangelina

    Bioenergy has become an important alternative source of energy to alleviate the reliance on petroleum energy. Bioenergy offers diminished climate change by reducing greenhouse gas emissions, as well as providing energy security and enhancing rural development. The Energy Independence and Security Act mandates the use of 21 billion gallons of advanced biofuels, including 16 billion gallons of cellulosic biofuels, by the year 2022. It is clear that biomass can make a substantial contribution to supplying future energy demand in a sustainable way. However, the supply of sustainable energy is one of the main challenges that mankind will face over the coming decades. For instance, many logistical challenges will be faced in providing an efficient and reliable supply of quality feedstock to biorefineries: 700 million tons of biomass will need to be sustainably delivered to biorefineries annually to meet the projected use of biofuels by 2022. Approaching this complex logistic problem as a multi-commodity network flow structure, the present work proposes a genetic algorithm for the single-objective optimization problem of maximizing profit, and a Multiple Objective Evolutionary Algorithm to simultaneously maximize profit while minimizing global warming potential. Most transportation optimization problems in the literature have considered the maximization of profit or the minimization of total travel time as the objectives to be optimized. In this work, however, we take a more conscious and sustainable approach to this logistic problem. Planners are increasingly expected to adopt a multi-disciplinary approach, especially due to the rising importance of environmental stewardship; the role of the transportation planner and designer is shifting from simple economic analysis to promoting sustainability through the integration of environmental objectives. To respond to these new challenges, a Modified Multiple Objective Evolutionary Algorithm is presented for the design optimization of a biomass-to-biorefinery logistic system that simultaneously maximizes total profit and minimizes three environmental impacts. Sustainability balances economic, social, and environmental goals and objectives. Several works in the literature have considered economic and environmental objectives for the presented supply chain problem; however, there is a lack of research on the social aspect of a sustainable logistics system. This work proposes a methodology to integrate social aspect assessment, based on employment creation. Finally, most assessment methodologies in the literature contemplate only deterministic values, whereas in realistic situations uncertainties in the supply chain are present. In this work, Value-at-Risk, an advanced risk measure commonly used in portfolio optimization, is included to account for uncertainties in biofuel prices, among others.

  10. Stochastic multi-objective model for optimal energy exchange optimization of networked microgrids with presence of renewable generation under risk-based strategies.

    PubMed

    Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad

    2018-02-01

    The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for energy exchange optimization of microgrids (MGs). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In doing so, three risk-based strategies are distinguished using the conditional value at risk (CVaR) approach. The proposed model has two distinct objective functions. The first minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and MGs, and the power loss cost, whereas the second minimizes the energy not supplied (ENS). Furthermore, a stochastic scenario-based approach is incorporated to handle the uncertainty, and the Kantorovich distance scenario reduction method is implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by a fuzzy satisfying method with respect to the risk-based strategies. To demonstrate the performance of the proposed model, it is applied to the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach is an efficient tool for optimal energy exchange among MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
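
    The CVaR ingredient can be illustrated in a few lines: sort scenario costs and average the worst (1 - alpha) tail. This is one common empirical estimator applied to synthetic scenario data, not the paper's exact formulation.

      import numpy as np

      def cvar(costs, alpha=0.95):
          # Conditional value at risk: mean of the worst (1 - alpha) tail.
          costs = np.sort(np.asarray(costs))
          var = np.quantile(costs, alpha)          # value at risk
          return costs[costs >= var].mean()

      # Scenario operation costs, e.g., from sampled renewable/load profiles.
      rng = np.random.default_rng(0)
      scenario_costs = rng.normal(1000.0, 150.0, size=5000)
      print(cvar(scenario_costs, alpha=0.95))      # risk-adjusted cost measure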

  11. Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications

    NASA Astrophysics Data System (ADS)

    Choi, Jinseok; Evans, Brian L.; Gatherer, Alan

    2017-12-01

    In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as a lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
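
    The reported scaling can be sketched as a simple allocation rule: resolution grows with log2 of each chain's SNR to the 1/3 power, shifted to meet a budget. For brevity the sketch budgets total bits, whereas the paper constrains total ADC power; the SNR values and budget are assumed.

      import numpy as np

      snr = np.array([20.0, 8.0, 3.0, 1.0])        # per-RF-chain SNR (linear)
      total_bits = 16                              # stand-in for the power budget

      raw = np.log2(snr ** (1.0 / 3.0))            # reported proportionality
      bits = raw + (total_bits - raw.sum()) / len(raw)   # meet the budget
      bits = np.clip(np.round(bits), 1, None).astype(int)
      print(bits)   # more bits for stronger chains, fewer for weaker ones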

  12. New Additions to the ClusPro Server Motivated by CAPRI

    PubMed Central

    Vajda, Sandor; Yueh, Christine; Beglov, Dmitri; Bohnuud, Tanggis; Mottarella, Scott E.; Xia, Bing; Hall, David R.; Kozakov, Dima

    2016-01-01

    The heavily used protein-protein docking server ClusPro performs three computational steps as follows: (1) rigid body docking, (2) RMSD based clustering of the 1000 lowest energy structures, and (3) the removal of steric clashes by energy minimization. In response to challenges encountered in recent CAPRI targets, we added three new options to ClusPro. These are (1) accounting for Small Angle X-ray Scattering (SAXS) data in docking; (2) considering pairwise interaction data as restraints; and (3) enabling discrimination between biological and crystallographic dimers. In addition, we have developed an extremely fast docking algorithm based on 5D rotational manifold FFT, and an algorithm for docking flexible peptides that include known sequence motifs. We feel that these developments will further improve the utility of ClusPro. However, CAPRI emphasized several shortcomings of the current server, including the problem of selecting the right energy parameters among the five options provided, and the problem of selecting the best models among the 10 generated for each parameter set. In addition, results convinced us that further development is needed for docking homology models. Finally we discuss the difficulties we have encountered when attempting to develop a refinement algorithm that would be computationally efficient enough for inclusion in a heavily used server. PMID:27936493

  13. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    PubMed

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and (2) they help the user understand how different energy terms interact to stabilize a given conformation. The Sculpt paradigm combines many of the best features of interactive graphical modeling, energy minimization, and actual physical models, and we propose it as an especially productive way to use current and future increases in computer speed.
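
    The augmented-Lagrangian idea can be sketched on a toy two-atom system: minimize a user "tug" energy while enforcing a rigid bond length through a multiplier update. This is a generic augmented-Lagrangian loop with invented constants, not Sculpt's sparse banded implementation.

      import numpy as np
      from scipy.optimize import minimize

      bond_len = 1.5                               # rigid constraint target
      tug_target = np.array([2.0, 0.0])            # user drags atom 1 here

      def energy(x):
          return np.sum((x[2:] - tug_target) ** 2)     # spring from the tug

      def constraint(x):                           # must be driven to zero
          return np.linalg.norm(x[2:] - x[:2]) - bond_len

      lam, mu = 0.0, 10.0
      x = np.array([0.0, 0.0, 1.0, 0.0])           # atoms at (0,0) and (1,0)
      for _ in range(20):
          aug = lambda y: (energy(y) + lam * constraint(y)
                           + 0.5 * mu * constraint(y) ** 2)
          x = minimize(aug, x, method="BFGS").x    # unconstrained subproblem
          lam += mu * constraint(x)                # multiplier update
          if abs(constraint(x)) < 1e-8:
              break
      print(x.reshape(2, 2), constraint(x))        # bond length held at 1.5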

  14. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-01-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  15. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-11-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  16. An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1988-01-01

    An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.

  17. Automated Optimization of Potential Parameters

    PubMed Central

    Di Pierro, Michele; Elber, Ron

    2013-01-01

    An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115

  18. A mathematical model for the deformation of the eyeball by an elastic band.

    PubMed

    Keeling, Stephen L; Propst, Georg; Stadler, Georg; Wackernagel, Werner

    2009-06-01

    In a certain kind of eye surgery, the human eyeball is deformed sustainably by the application of an elastic band. This article presents a mathematical model for the mechanics of the combined eye/band structure along with an algorithm to compute the model solutions. These predict the immediate and the lasting indentation of the eyeball. The model is derived from basic physical principles by minimizing a potential energy subject to a volume constraint. Assuming spherical symmetry, this leads to a two-point boundary-value problem for a non-linear second-order ordinary differential equation that describes the minimizing static equilibrium. By comparison with laboratory data, a preliminary validation of the model is given.
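
    Numerically, such models reduce to a two-point boundary-value problem that a standard BVP solver can handle; the sketch below uses a toy nonlinear second-order ODE in place of the actual equilibrium equation of the eye/band energy.

      import numpy as np
      from scipy.integrate import solve_bvp

      def ode(r, y):
          u, du = y
          return np.vstack([du, np.sin(u) + 0.1 * r])   # u'' = sin(u) + 0.1 r

      def bc(ya, yb):
          return np.array([ya[0], yb[0] - 0.5])         # u(0) = 0, u(1) = 0.5

      r = np.linspace(0.0, 1.0, 50)
      y_guess = np.zeros((2, r.size))                   # initial guess
      sol = solve_bvp(ode, bc, r, y_guess)
      print(sol.status, sol.y[0, -1])                   # status 0: converged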

  19. Parallel deterministic neutronics with AMR in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C.; Ferguson, J.; Hendrickson, C.

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.

  20. Monitoring Churn in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Holzer, Stephan; Pignolet, Yvonne Anne; Smula, Jasmin; Wattenhofer, Roger

    Wireless networks often experience a significant amount of churn, the arrival and departure of nodes. In this paper we propose a distributed algorithm for single-hop networks that detects churn and is resilient to a worst-case adversary. The nodes of the network are notified about changes quickly, in asymptotically optimal time up to an additive logarithmic overhead. We establish a trade-off between saving energy and minimizing the delay until notification for single- and multi-channel networks.

  1. Energy and operation management of a microgrid using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Radosavljević, Jordan; Jevtić, Miroljub; Klimenta, Dardan

    2016-05-01

    This article presents an efficient algorithm based on particle swarm optimization (PSO) for energy and operation management (EOM) of a microgrid including different distributed generation units and energy storage devices. The proposed approach employs PSO to minimize the total energy and operating cost of the microgrid via optimal adjustment of the control variables of the EOM, while satisfying various operating constraints. Owing to the stochastic nature of energy produced from renewable sources, i.e. wind turbines and photovoltaic systems, as well as load uncertainties and market prices, a probabilistic approach in the EOM is introduced. The proposed method is examined and tested on a typical grid-connected microgrid including fuel cell, gas-fired microturbine, wind turbine, photovoltaic and energy storage devices. The obtained results prove the efficiency of the proposed approach to solve the EOM of the microgrids.
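
    A generic PSO loop of the kind employed here is sketched below; the cost function is a placeholder for the microgrid's energy-and-operating-cost model, and the swarm parameters are conventional defaults rather than the article's settings.

      import numpy as np

      def cost(x):
          return np.sum((x - 0.3) ** 2, axis=1)        # toy convex surrogate

      rng = np.random.default_rng(1)
      n, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
      x = rng.uniform(0.0, 1.0, (n, dim))              # control variables
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), cost(x)
      gbest = pbest[pbest_f.argmin()]

      for _ in range(200):
          r1, r2 = rng.random((2, n, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, 0.0, 1.0)                 # respect operating bounds
          f = cost(x)
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[pbest_f.argmin()]
      print(gbest)   # best dispatch found by the swarm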

  2. Next Day Building Load Predictions based on Limited Input Features Using an On-Line Laterally Primed Adaptive Resonance Theory Artificial Neural Network.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Christian Birk; Robinson, Matt; Yasaei, Yasser

    Optimal integration of thermal energy storage within commercial building applications requires accurate load predictions. Several methods exist that provide an estimate of a building's future needs, including component-based models and data-driven algorithms. This work implemented a previously untested algorithm for this application called a Laterally Primed Adaptive Resonance Theory (LAPART) artificial neural network (ANN). The LAPART algorithm provided accurate results over a two-month period where minimal historical data and a small number of input types were available. These results are significant because common practice has often overlooked the implementation of an ANN; ANNs have often been perceived to be too complex and to require large amounts of data to provide accurate results. The LAPART neural network was implemented in an on-line learning manner, where on-line learning refers to the continuous updating of training data as time progresses. For this experiment, training began with a single day and grew to two months of data. This approach provides a platform for immediate implementation that requires minimal time and effort. The results from the LAPART algorithm were compared with statistical regression and a component-based model. The comparison was based on the predictions' linear relationship with the measured data, mean squared error, mean bias error, and cost savings achieved by the respective prediction techniques. The results show that the LAPART algorithm provided a reliable and cost-effective means to predict the building load for the next day.

  3. An energy-efficient rate adaptive media access protocol (RA-MAC) for long-lived sensor networks.

    PubMed

    Hu, Wen; Chen, Quanjun; Corke, Peter; O'Rourke, Damien

    2010-01-01

    We introduce an energy-efficient Rate Adaptive Media Access Control (RA-MAC) algorithm for long-lived Wireless Sensor Networks (WSNs). Previous research shows that the dynamic and lossy nature of wireless communication is one of the major challenges to reliable data delivery in WSNs. RA-MAC achieves high link reliability in such situations by dynamically trading off data rate for channel gain. The extra gain that can be achieved reduces the packet loss rate, which contributes to reduced energy expenditure through a reduced number of retransmissions. We achieve this at the expense of raw bit rate, which generally far exceeds the application's link requirement. To minimize communication energy consumption, RA-MAC selects the optimal data rate based on the estimated link quality at each data rate and an analytical model of the energy consumption. Our model shows how the selected data rate depends on different channel conditions in order to minimize energy consumption. We have implemented RA-MAC in TinyOS for an off-the-shelf sensor platform (the TinyNode) on top of a state-of-the-art WSN Media Access Control protocol, SCP-MAC, and evaluated its performance by comparing our implementation with the original SCP-MAC using both simulation and experiment.

  4. FlexStem: improving predictions of RNA secondary structures with pseudoknots by reducing the search space.

    PubMed

    Chen, Xiang; He, Si-Min; Bu, Dongbo; Zhang, Fa; Wang, Zhiyong; Chen, Runsheng; Gao, Wen

    2008-09-15

    RNA secondary structures with pseudoknots are often predicted by minimizing free energy, which is proven to be NP-hard. For kinetic reasons, the real RNA secondary structure often has a local rather than global minimum free energy. This implies that we may improve the performance of RNA secondary structure prediction by taking kinetics into account and minimizing free energy in a local area. We propose a novel algorithm named FlexStem to predict RNA secondary structures with pseudoknots. While still based on the MFE criterion, FlexStem adopts comprehensive energy models that allow complex pseudoknots. Unlike classical thermodynamic methods, our approach aims to simulate the RNA folding process by successive addition of maximal stems, reducing the search space while maintaining or even improving the prediction accuracy. This reduced space is constructed by our maximal stem strategy and stem-adding rule, induced from elaborate statistical experiments on real RNA secondary structures. The strategy and the rule also reflect the folding characteristics of RNA from a new angle and help compensate for the deficiency of relying merely on MFE in RNA structure prediction. We validate FlexStem by applying it to tRNAs, 5S rRNAs and a large number of pseudoknotted structures and compare it with well-known algorithms such as RNAfold, PKNOTS, PknotsRG, HotKnots and ILM according to their overall sensitivities and specificities, as well as positive and negative controls on pseudoknots. The results show that FlexStem significantly increases the prediction accuracy through its local search strategy. Software is available at http://pfind.ict.ac.cn/FlexStem/. Supplementary data are available at Bioinformatics online.

  5. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability, so MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.

  6. Data decomposition of Monte Carlo particle transport simulations via tally servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.

  7. Computational methods for reactive transport modeling: An extended law of mass-action, xLMA, method for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.

    2016-10-01

    We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the here proposed xLMA method inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a unified open-source framework for modeling chemically reactive systems.

  8. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  9. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589

  10. Methods for finding transition states on reduced potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Ayers, Paul W.

    2010-06-01

    Three new algorithms are presented for determining transition state (TS) structures on the reduced potential energy surface, that is, for problems in which a few important degrees of freedom can be isolated. All three methods use constrained optimization to rapidly find the TS without an initial Hessian evaluation. The algorithms highlight how efficiently the TS can be located on a reduced surface, where the rest of the degrees of freedom are minimized. The first method uses a nonpositive definite quasi-Newton update for the reduced degrees of freedom. The second uses Shepard interpolation to fit the Hessian and starts from a set of points that bound the TS. The third directly uses a finite difference scheme to calculate the reduced degrees of freedom of the Hessian of the entire system, and searches for the TS on the full potential energy surface. All three methods are tested on an epoxide hydrolase cluster, and the ring formations of cyclohexane and cyclobutenone. The results indicate that all the methods are able to converge quite rapidly to the correct TS, but that the finite difference approach is the most efficient.

  11. Methods for finding transition states on reduced potential energy surfaces.

    PubMed

    Burger, Steven K; Ayers, Paul W

    2010-06-21

    Three new algorithms are presented for determining transition state (TS) structures on the reduced potential energy surface, that is, for problems in which a few important degrees of freedom can be isolated. All three methods use constrained optimization to rapidly find the TS without an initial Hessian evaluation. The algorithms highlight how efficiently the TS can be located on a reduced surface, where the rest of the degrees of freedom are minimized. The first method uses a nonpositive definite quasi-Newton update for the reduced degrees of freedom. The second uses Shepard interpolation to fit the Hessian and starts from a set of points that bound the TS. The third directly uses a finite difference scheme to calculate the reduced degrees of freedom of the Hessian of the entire system, and searches for the TS on the full potential energy surface. All three methods are tested on an epoxide hydrolase cluster, and the ring formations of cyclohexane and cyclobutenone. The results indicate that all the methods are able to converge quite rapidly to the correct TS, but that the finite difference approach is the most efficient.

  12. Genetic Algorithm-Based Optimization to Match Asteroid Energy Deposition Curves

    NASA Technical Reports Server (NTRS)

    Tarano, Ana; Mathias, Donovan; Wheeler, Lorien; Close, Sigrid

    2018-01-01

    An asteroid entering Earth's atmosphere deposits energy along its path due to thermal ablation and dissipative forces, and this deposition can be measured by ground-based and spaceborne instruments. Inference of pre-entry asteroid properties and characterization of the atmospheric breakup are facilitated by using an analytic fragment-cloud model (FCM) in conjunction with a Genetic Algorithm (GA). This optimization technique is used to inversely solve for the asteroid's entry properties, such as diameter, density, strength, velocity, entry angle, and strength scaling, from simulations using FCM. The fitness evaluation of these parameters involves minimizing the error between the physics-based calculated energy deposition and that observed for meteors. This steady-state GA provided sets of solutions agreeing with the literature for the meteors over Chelyabinsk, Russia in 2013 and Tagish Lake, Canada in 2000, which were used as case studies to validate the optimization routine. The assisted exploration and exploitation of this multi-dimensional search space enables inference and uncertainty analysis that can inform studies of near-Earth asteroids and consequently improve risk assessment.
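
    A stripped-down steady-state GA of this flavor is sketched below. The `simulate` function is an invented two-parameter stand-in for the fragment-cloud model, and the truncation selection and multiplicative mutation are generic choices, not the paper's operators.

      import numpy as np

      rng = np.random.default_rng(2)
      alt = np.linspace(60.0, 20.0, 100)               # altitude grid, km

      def simulate(params):
          diameter, strength = params                  # toy 2-parameter "FCM"
          peak = 50.0 - 3.0 * np.log10(strength)       # burst altitude, km
          return diameter**3 * np.exp(-0.5 * ((alt - peak) / 4.0) ** 2)

      observed = simulate((1.2, 1e6))                  # stand-in observation
      fitness = lambda p: -np.sum((simulate(p) - observed) ** 2)

      pop = np.column_stack([rng.uniform(0.5, 3.0, 60),
                             rng.uniform(1e5, 1e7, 60)])
      for _ in range(100):
          f = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(f)[-20:]]           # keep the fittest
          children = (parents[rng.integers(0, 20, 40)]
                      * rng.normal(1.0, 0.05, (40, 2)))  # mutate copies
          pop = np.vstack([parents, children])
      print(pop[np.argmax([fitness(ind) for ind in pop])])  # best (D, strength)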

  13. Fully implicit Particle-in-cell algorithms for multiscale plasma simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis

    The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to multi-D EM PIC with the Vlasov-Darwin model (review and motivation for the Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt >> 1 and Δx >> λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems, and significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing N_FE, leading to an optimal algorithm.

  14. Optimal apodization design for medical ultrasound using constrained least squares part I: theory.

    PubMed

    Guenther, Drake A; Walker, William F

    2007-02-01

    Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.
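
    The first approach admits a closed form: minimizing the quadratic out-of-boundary energy w^T A w under a linear constraint c^T w = 1 gives w proportional to A^{-1} c. The sketch below uses random stand-ins for the system's propagation matrices.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 16                                       # number of aperture elements
      M = rng.normal(size=(64, n))                 # stand-in system operator
      A = M.T @ M + 1e-6 * np.eye(n)               # out-of-boundary PSF energy
      c = np.ones(n)                               # e.g., unit gain at the focus

      w = np.linalg.solve(A, c)                    # Lagrange solution, w ~ A^-1 c
      w /= c @ w                                   # enforce c^T w = 1
      print(w @ A @ w)                             # minimized off-boundary energy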

  15. Chemotaxis can provide biological organisms with good solutions to the travelling salesman problem.

    PubMed

    Reynolds, A M

    2011-05-01

    The ability to find good solutions to the traveling salesman problem can benefit some biological organisms. Bacterial infection would, for instance, be eradicated most promptly if cells of the immune system minimized the total distance they traveled when moving between bacteria. Similarly, foragers would maximize their net energy gain if the distance that they traveled between multiple dispersed prey items was minimized. The traveling salesman problem is one of the most intensively studied problems in combinatorial optimization. There are no efficient algorithms for even solving the problem approximately (within a guaranteed constant factor from the optimum) because the problem is NP-complete. The best approximate algorithms can typically find solutions within 1%-2% of the optimal, but these are computationally intensive and cannot be implemented by biological organisms. Biological organisms could, in principle, implement the less efficient greedy nearest-neighbor algorithm, i.e., always move to the nearest surviving target. Implementation of this strategy does, however, require quite sophisticated cognitive abilities and prior knowledge of the target locations. Here, with the aid of numerical simulations, it is shown that biological organisms can simply use chemotaxis to solve, or at worst provide good solutions to (comparable to those found by the greedy algorithm), the traveling salesman problem when the targets are sources of a chemoattractant and are modest in number (n < 10). This applies to neutrophils and macrophages in microbial defense and to some predators.
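
    The greedy nearest-neighbor benchmark the abstract refers to is easy to state in code: always move to the closest surviving target. The target positions below are random, with n < 10 as in the study.

      import numpy as np

      rng = np.random.default_rng(4)
      targets = rng.uniform(0.0, 1.0, (9, 2))      # random target positions

      pos, unvisited, tour_len = np.array([0.5, 0.5]), list(range(9)), 0.0
      while unvisited:
          d = [np.linalg.norm(targets[i] - pos) for i in unvisited]
          k = unvisited.pop(int(np.argmin(d)))     # nearest surviving target
          tour_len += min(d)
          pos = targets[k]
      print(tour_len)                              # total distance traveled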

  16. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand of further dose reduction. In this paper, we replace the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, recasting it as a weighted L2-minimization problem solved with IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR into an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more exactly.
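
    The IRLS idea can be sketched generically: replace the L1 penalty with a quadratic whose weights are 1/(|x_i| + eps), re-solve, and repeat. The dictionary and measurements below are random stand-ins for the CT problem, not the paper's reconstruction pipeline.

      import numpy as np

      # Minimize ||Ax - b||^2 + lam * ||x||_1 via iteratively reweighted L2.
      rng = np.random.default_rng(5)
      A = rng.normal(size=(40, 100))               # underdetermined system
      x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
      b = A @ x_true

      lam, eps = 0.05, 1e-6
      x = np.linalg.lstsq(A, b, rcond=None)[0]     # least-norm starting point
      for _ in range(50):
          W = np.diag(lam / (np.abs(x) + eps))     # reweighting step
          x = np.linalg.solve(A.T @ A + W, A.T @ b)
      print(np.round(x[[5, 37, 80]], 2))           # mass concentrates on support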

  17. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    NASA Astrophysics Data System (ADS)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-04-01

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  18. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao; Huang, Xuhui, E-mail: xuhuihuang@ust.hk

    2016-04-21

    Simulated tempering (ST) is a widely used enhanced sampling method for Molecular Dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling across the temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of the different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems show that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm will be particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  19. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    PubMed Central

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless sensor networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution first partitions the sensor nodes that detect objects of similar events (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. Detections of similar events are combined based on the Bayes probabilities and sent to the sink node, thereby minimizing energy consumption. Next, a polynomial regression function is applied to combine the target-object events observed by different sensors; the combined values, based on the minimum and maximum of the object events, are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing communication overhead. PMID:26426701

  20. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    PubMed

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N

    2015-01-01

    Wireless sensor networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution first partitions the sensor nodes that detect objects of similar events (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. Detections of similar events are combined based on the Bayes probabilities and sent to the sink node, thereby minimizing energy consumption. Next, a polynomial regression function is applied to combine the target-object events observed by different sensors; the combined values, based on the minimum and maximum of the object events, are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing communication overhead.

  1. Spectrum Access In Cognitive Radio Using a Two-Stage Reinforcement Learning Approach

    NASA Astrophysics Data System (ADS)

    Raj, Vishnu; Dias, Irene; Tholeti, Thulasi; Kalyani, Sheetal

    2018-02-01

    With the advent of the 5th generation of wireless standards and an increasing demand for higher throughput, methods to improve the spectral efficiency of wireless systems have become very important. In the context of cognitive radio, a substantial increase in throughput is possible if the secondary user can make smart decisions regarding which channel to sense and when or how often to sense. Here, we propose an algorithm not only to select a channel for data transmission but also to predict how long the channel will remain unoccupied, so that the time spent on channel sensing can be minimized. Our algorithm learns in two stages: a reinforcement learning approach for channel selection and a Bayesian approach to determine the optimal duration for which sensing can be skipped. Comparisons with other learning methods are provided through extensive simulations. We show that the number of sensing operations is minimized with a negligible increase in primary interference; this implies that less energy is spent by the secondary user on sensing, while higher throughput is achieved by saving on sensing.
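
    The paper's two stages are not specified beyond the abstract; the sketch below pairs a textbook UCB1 index for channel selection with a hypothetical Beta-posterior rule for how many sensing slots to skip. Both the selection index and the skip criterion are assumptions made for illustration.

      import numpy as np

      def ucb_select(successes, trials, t):
          """Stage 1 (sketch): pick the channel maximizing the UCB1 index,
          where successes[i] counts idle observations on channel i."""
          means = successes / np.maximum(trials, 1)
          bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(trials, 1))
          return int(np.argmax(means + bonus))

      def skip_slots(alpha, beta, p_interfere=0.05):
          """Stage 2 (sketch): with a Beta(alpha, beta) posterior (beta > 0)
          on the channel's per-slot idle probability, skip sensing for the
          largest k such that the expected chance the channel stays idle
          for k slots remains above 1 - p_interfere. Hypothetical rule,
          not the paper's exact Bayesian criterion."""
          p_idle = alpha / (alpha + beta)
          k = int(np.floor(np.log(1.0 - p_interfere) / np.log(p_idle)))
          return max(k, 1)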

  2. Parallel Harmony Search Based Distributed Energy Resource Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.
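
    As a reference point for the metaheuristic named here, the sketch below implements plain (serial) harmony search for box-constrained minimization; the paper's parallel decomposition and the distribution-system objective are not reproduced, and all parameter values are illustrative.

      import numpy as np

      def harmony_search(f, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                         n_iter=2000, rng=np.random.default_rng(0)):
          """Textbook harmony search minimizing f on the box [lb, ub]."""
          dim = len(lb)
          hm = rng.uniform(lb, ub, size=(hms, dim))        # harmony memory
          cost = np.array([f(x) for x in hm])
          for _ in range(n_iter):
              new = np.empty(dim)
              for d in range(dim):
                  if rng.random() < hmcr:                  # memory consideration
                      new[d] = hm[rng.integers(hms), d]
                      if rng.random() < par:               # pitch adjustment
                          new[d] += bw * (ub[d] - lb[d]) * rng.uniform(-1, 1)
                  else:                                    # random selection
                      new[d] = rng.uniform(lb[d], ub[d])
              new = np.clip(new, lb, ub)
              worst = int(np.argmax(cost))
              c = f(new)
              if c < cost[worst]:                          # replace worst harmony
                  hm[worst], cost[worst] = new, c
          best = int(np.argmin(cost))
          return hm[best], cost[best]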

  3. Energy-saving management modelling and optimization for lead-acid battery formation process

    NASA Astrophysics Data System (ADS)

    Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.

    2017-11-01

    In this paper, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model with the objective of minimizing the formation electricity cost over a single period is established. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation is shown using a PSO algorithm to solve this mathematical model, and the proposed optimization strategy is shown to be effective and readily adoptable for energy saving and efficiency optimization in battery production industries.
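
    The abstract names PSO but not a variant; a textbook global-best PSO for a box-constrained cost function is sketched below as a stand-in, with the formation-cost objective left abstract as a callable and all parameters illustrative.

      import numpy as np

      def pso_minimize(f, lb, ub, n_particles=30, n_iter=500,
                       w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
          """Global-best PSO: inertia w, cognitive c1, social c2."""
          dim = len(lb)
          x = rng.uniform(lb, ub, (n_particles, dim))
          v = np.zeros((n_particles, dim))
          pbest, pcost = x.copy(), np.array([f(p) for p in x])
          g = int(np.argmin(pcost))                 # index of global best
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)
              x = np.clip(x + v, lb, ub)
              cost = np.array([f(p) for p in x])
              improved = cost < pcost
              pbest[improved], pcost[improved] = x[improved], cost[improved]
              g = int(np.argmin(pcost))
          return pbest[g], pcost[g]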

  4. Performance of the Sleep-Mode Mechanism of the New IEEE 802.16m Proposal for Correlated Downlink Traffic

    NASA Astrophysics Data System (ADS)

    de Turck, Koen; de Vuyst, Stijn; Fiems, Dieter; Wittevrongel, Sabine; Bruneel, Herwig

    There is considerable interest nowadays in making wireless telecommunication more energy-efficient. The sleep-mode mechanism in WiMAX (IEEE 802.16e) is one such energy-saving measure. Recently, Samsung proposed some modifications to the sleep-mode mechanism, scheduled to appear in the forthcoming IEEE 802.16m standard, aimed at minimizing the signaling overhead. In this work, we present a performance analysis of this proposal and clarify the differences with the standard mechanism included in IEEE 802.16e. We also propose some special algorithms aimed at reducing the computational complexity of the analysis.

  5. Use of the density functional theory method to calculate structures of neutral carbon clusters Cₙ (3 ≤ n ≤ 24) and study the variability of their structural forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yen, T. W.; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw

    2015-02-28

    In this work, we present modifications to the well-known basin hopping (BH) optimization algorithm [D. J. Wales and J. P. Doye, J. Phys. Chem. A 101, 5111 (1997)] by incorporating in it the unique and specific nature of interactions among valence electrons and ions in carbon atoms, through calculating the cluster's total energy by density functional tight-binding (DFTB) theory, using it to find the lowest energy structures of carbon clusters and, from these optimized atomic and electronic structures, studying their varied forms of topological transitions, which include a linear chain, a monocyclic to a polycyclic ring, and a fullerene/cage-like geometry. In this modified BH (MBH) algorithm, we define a spatial volume within which the cluster's lowest energy structure is to be searched, and additionally introduce a cut-and-splice genetic operator to improve the search for the energy minimum relative to the original BH technique. The present MBH/DFTB algorithm is, therefore, characteristically distinguishable from the original BH technique commonly applied to nonmetallic and metallic clusters, technically more thorough and natural in describing the intricate couplings between valence electrons and ions in a carbon cluster, and thus theoretically sound in putting these two charged components on an equal footing. The proposed modified minimization algorithm should be more appropriate, accurate, and precise in the description of a carbon cluster. We evaluate the present algorithm, its energy-minimum searching in particular, by its optimization robustness. Specifically, we first check the MBH/DFTB technique for two representative carbon clusters of larger size, i.e., C₆₀ and C₇₂, against the popular cut-and-splice approach [D. M. Deaven and K. M. Ho, Phys. Rev. Lett. 75, 288 (1995)] that is normally combined with the genetic algorithm method for finding the cluster's energy minimum, before employing it to investigate carbon clusters in the size range C₃-C₂₄ and studying their topological transitions. An effort was also made to compare our MBH/DFTB results, re-optimized by full density functional theory (DFT) calculations, with some early DFT-based studies.
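
    A bare-bones basin-hopping loop helps fix ideas: random perturbation, local relaxation, then a Metropolis test on the relaxed energies. The MBH modifications described above (DFTB energies, a bounded search volume, the cut-and-splice operator) are not reproduced; the energy callable, step size, and temperature below are placeholders.

      import numpy as np
      from scipy.optimize import minimize

      def basin_hopping(energy, x0, step=0.4, n_iter=500, T=1.0,
                        rng=np.random.default_rng(0)):
          """Generic basin hopping over a smooth energy(x) landscape."""
          x = minimize(energy, np.asarray(x0, dtype=float)).x
          e = energy(x)
          best_x, best_e = x, e
          for _ in range(n_iter):
              trial = x + step * rng.normal(size=x.shape)   # random hop
              trial = minimize(energy, trial).x             # local relaxation
              et = energy(trial)
              if et < e or rng.random() < np.exp(-(et - e) / T):
                  x, e = trial, et                          # Metropolis accept
                  if e < best_e:
                      best_x, best_e = x, e
          return best_x, best_e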

  6. An energy-efficient data gathering protocol in large wireless sensor network

    NASA Astrophysics Data System (ADS)

    Wang, Yamin; Zhang, Ruihua; Tao, Shizhong

    2006-11-01

    A wireless sensor network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The collected data must be transmitted to the base station for further processing. Since a network consists of sensors with limited battery energy, the method for data gathering and routing must be energy efficient in order to prolong the lifetime of the network. In this paper, we present an energy-efficient data gathering protocol for wireless sensor networks. The new protocol uses data fusion technology: it clusters nodes into groups and builds a chain among the cluster heads according to a hybrid of residual energy and distance to the base station. Results from stochastic geometry are used to derive the optimum parameters of our algorithm that minimize the total energy spent in the network. Simulation results show the performance superiority of the new protocol.

  7. Natural Aggregation Approach based Home Energy Management System with User Satisfaction Modelling

    NASA Astrophysics Data System (ADS)

    Luo, F. J.; Ranzi, G.; Dong, Z. Y.; Murata, J.

    2017-07-01

    With the prevalence of advanced sensing and two-way communication technologies, the Home Energy Management System (HEMS) has attracted much attention in recent years. This paper proposes a HEMS that optimally schedules controllable Residential Energy Resources (RERs) in a Time-of-Use (TOU) pricing and high solar power penetration environment. The HEMS aims to minimize the overall operational cost of the home, and the user's satisfaction with and requirements on the operation of different household appliances are modelled and considered in the HEMS. Further, a new biological self-aggregation intelligence based optimization technique previously proposed by the authors, the Natural Aggregation Algorithm (NAA), is applied to solve the proposed HEMS optimization model. Simulations are conducted to validate the proposed method.

  8. Site-directed protein recombination as a shortest-path problem.

    PubMed

    Endelman, Jeffrey B; Silberg, Jonathan J; Wang, Zhen-Gang; Arnold, Frances H

    2004-07-01

    Protein function can be tuned using laboratory evolution, in which one rapidly searches through a library of proteins for the properties of interest. In site-directed recombination, n crossovers are chosen in an alignment of p parents to define a set of p(n + 1) peptide fragments. These fragments are then assembled combinatorially to create a library of p^(n+1) proteins. We have developed a computational algorithm to enrich these libraries in folded proteins while maintaining an appropriate level of diversity for evolution. For a given set of parents, our algorithm selects crossovers that minimize the average energy of the library, subject to constraints on the length of each fragment. This problem is equivalent to finding the shortest path between nodes in a network, for which the global minimum can be found efficiently. Our algorithm has a running time of O(N^3 p^2 + N^2 n) for a protein of length N. Adjusting the constraints on fragment length generates a set of optimized libraries with varying degrees of diversity. By comparing these optima for different sets of parents, we rapidly determine which parents yield the lowest energy libraries.
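
    The shortest-path formulation can be sketched as follows: a node (i, k) records that the k-th fragment boundary falls after residue i, an edge (i, k) -> (j, k+1) costs the library-averaged energy of fragment (i, j], and Dijkstra's algorithm returns the crossover set of minimum total cost. The fragment-cost callable and length bounds below are hypothetical stand-ins for the paper's energy model.

      import heapq

      def min_energy_crossovers(n_res, n_cross, frag_cost, min_len, max_len):
          """Dijkstra over nodes (i, k): k fragments placed, the last one
          ending after residue i. frag_cost(i, j) is the (hypothetical)
          library-averaged energy of fragment (i, j]. Returns the best
          total cost and the fragment end positions."""
          dist = {(0, 0): 0.0}
          pq = [(0.0, (0, 0), ())]
          while pq:
              d, (i, k), ends = heapq.heappop(pq)
              if k == n_cross + 1 and i == n_res:
                  return d, ends                     # all fragments placed
              if d > dist.get((i, k), float("inf")):
                  continue                           # stale heap entry
              for j in range(i + min_len, min(i + max_len, n_res) + 1):
                  # only the last fragment may (and must) end at n_res
                  if (k == n_cross) != (j == n_res):
                      continue
                  nd = d + frag_cost(i, j)
                  if nd < dist.get((j, k + 1), float("inf")):
                      dist[(j, k + 1)] = nd
                      heapq.heappush(pq, (nd, (j, k + 1), ends + (j,)))
          return float("inf"), ()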

  9. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  10. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduce the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and develop an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yields performance similar to the l0-norm dictionary learning penalty and outperforms the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
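
    Of the two subproblems, the sparse-coding step has a standard form; the sketch below solves it with ISTA (soft thresholding), assuming a fixed dictionary D. The SIR data-fidelity image update and the balancing-principle parameter choice are not reproduced, and all names are illustrative.

      import numpy as np

      def soft(z, t):
          """Soft thresholding, the proximal operator of the l1-norm."""
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def l1_sparse_code(patches, D, lam=0.1, n_iter=100):
          """ISTA for the sparse-coding subproblem
          min_alpha 0.5*||D alpha - y||^2 + lam*||alpha||_1, columnwise
          over the patch matrix. Returns the codes and the synthesis."""
          step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1/L, L = ||D||_2^2
          alpha = np.zeros((D.shape[1], patches.shape[1]))
          for _ in range(n_iter):
              grad = D.T @ (D @ alpha - patches)     # gradient of the fit term
              alpha = soft(alpha - step * grad, step * lam)
          return alpha, D @ alpha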

  11. Studies of EGRET sources with a novel image restoration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, Hiroyasu; Cohen-Tanugi, Johann; Kamae, Tuneyoshi

    2007-07-12

    We have developed an image restoration technique based on the Richardson-Lucy algorithm optimized for GLAST-LAT image analysis. Our algorithm is original since it utilizes the PSF (point spread function) that is calculated for each event. This is critical for EGRET and GLAST-LAT image analysis since the PSF depends on the energy and angle of incident gamma-rays and varies by more than one order of magnitude. EGRET and GLAST-LAT image analysis also faces Poisson noise due to low photon statistics. Our technique incorporates wavelet filtering to minimize noise effects. We present studies of EGRET sources using this novel image restoration technique for possible identification of extended gamma-ray sources.
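
    For orientation, the core Richardson-Lucy multiplicative update with a single fixed PSF is sketched below; the analysis described above instead uses a per-event PSF and adds wavelet filtering, neither of which is reproduced here. The PSF is assumed to be a 2-D kernel normalized to unit sum.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, n_iter=30):
          """Classic Richardson-Lucy deconvolution for Poisson data."""
          est = np.full_like(image, image.mean(), dtype=float)
          psf_flip = psf[::-1, ::-1]                 # adjoint of the blur
          for _ in range(n_iter):
              blurred = fftconvolve(est, psf, mode="same")
              ratio = image / np.maximum(blurred, 1e-12)
              est *= fftconvolve(ratio, psf_flip, mode="same")
          return est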

  12. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.

  13. When the lowest energy does not induce native structures: parallel minimization of multi-energy values by hybridizing searching intelligences.

    PubMed

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy has been luckily found by the searching procedure, the correct protein structure is not guaranteed to be obtained. A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both searching intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine the different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise.

  14. When the Lowest Energy Does Not Induce Native Structures: Parallel Minimization of Multi-Energy Values by Hybridizing Searching Intelligences

    PubMed Central

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy has been luckily found by the searching procedure, the correct protein structure is not guaranteed to be obtained. Results A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both searching intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine the different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions, which are usually derived from domain expertise. PMID:23028708

  15. Outage Probability Minimization for Energy Harvesting Cognitive Radio Sensor Networks

    PubMed Central

    Zhang, Fan; Jing, Tao; Huo, Yan; Jiang, Kaiwei

    2017-01-01

    The incorporation of cognitive radio (CR) capability in wireless sensor networks yields a promising network paradigm known as CR sensor networks (CRSNs), which is able to provide spectrum-efficient data communication. However, due to the high energy consumption resulting from spectrum sensing, as well as subsequent data transmission, the energy supply for conventional sensor nodes powered by batteries is regarded as a severe bottleneck for sustainable operation. The energy harvesting technique, which gathers energy from the ambient environment, is regarded as a promising solution to perpetually power up energy-limited devices with a continual source of energy. Therefore, applying the energy harvesting (EH) technique in CRSNs is able to facilitate the self-sustainability of the energy-limited sensors. The primary concern of this study is to design sensing-transmission policies to minimize the long-term outage probability of EH-powered CR sensor nodes. We formulate this problem as an infinite-horizon discounted Markov decision process and propose an ϵ-optimal sensing-transmission (ST) policy using the value iteration algorithm. ϵ is the error bound between the ST policy and the optimal policy, which can be pre-defined according to actual needs. Moreover, for the special case that the signal-to-noise (SNR) power ratio is sufficiently high, we present an efficient transmission (ET) policy and prove that the ET policy achieves the same performance as the ST policy. Finally, extensive simulations are conducted to evaluate the performance of the proposed policies and the impact of various network parameters. PMID:28125023
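
    The ϵ-guarantee rests on a standard property of value iteration: stopping when successive value functions differ by less than ϵ(1-γ)/(2γ) makes the greedy policy ϵ-optimal. A generic sketch for a discounted MDP follows; the paper's sensing-transmission state and action model is not reproduced, and the tensor layout is an assumption.

      import numpy as np

      def value_iteration(P, R, gamma=0.95, eps=1e-3):
          """Value iteration with transition tensor P[a, s, s'] and
          reward R[a, s]; returns values and a greedy eps-optimal policy."""
          V = np.zeros(P.shape[1])
          threshold = eps * (1.0 - gamma) / (2.0 * gamma)
          while True:
              Q = R + gamma * (P @ V)          # Q[a, s] via batched matmul
              V_new = Q.max(axis=0)
              if np.max(np.abs(V_new - V)) < threshold:
                  break
              V = V_new
          return V_new, Q.argmax(axis=0)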

  16. Outage Probability Minimization for Energy Harvesting Cognitive Radio Sensor Networks.

    PubMed

    Zhang, Fan; Jing, Tao; Huo, Yan; Jiang, Kaiwei

    2017-01-24

    The incorporation of cognitive radio (CR) capability in wireless sensor networks yields a promising network paradigm known as CR sensor networks (CRSNs), which is able to provide spectrum-efficient data communication. However, due to the high energy consumption resulting from spectrum sensing, as well as subsequent data transmission, the energy supply for conventional sensor nodes powered by batteries is regarded as a severe bottleneck for sustainable operation. The energy harvesting technique, which gathers energy from the ambient environment, is regarded as a promising solution to perpetually power up energy-limited devices with a continual source of energy. Therefore, applying the energy harvesting (EH) technique in CRSNs is able to facilitate the self-sustainability of the energy-limited sensors. The primary concern of this study is to design sensing-transmission policies to minimize the long-term outage probability of EH-powered CR sensor nodes. We formulate this problem as an infinite-horizon discounted Markov decision process and propose an ϵ-optimal sensing-transmission (ST) policy using the value iteration algorithm. ϵ is the error bound between the ST policy and the optimal policy, which can be pre-defined according to actual needs. Moreover, for the special case that the signal-to-noise (SNR) power ratio is sufficiently high, we present an efficient transmission (ET) policy and prove that the ET policy achieves the same performance as the ST policy. Finally, extensive simulations are conducted to evaluate the performance of the proposed policies and the impact of various network parameters.

  17. New additions to the ClusPro server motivated by CAPRI.

    PubMed

    Vajda, Sandor; Yueh, Christine; Beglov, Dmitri; Bohnuud, Tanggis; Mottarella, Scott E; Xia, Bing; Hall, David R; Kozakov, Dima

    2017-03-01

    The heavily used protein-protein docking server ClusPro performs three computational steps as follows: (1) rigid body docking, (2) RMSD based clustering of the 1000 lowest energy structures, and (3) the removal of steric clashes by energy minimization. In response to challenges encountered in recent CAPRI targets, we added three new options to ClusPro. These are (1) accounting for small angle X-ray scattering data in docking; (2) considering pairwise interaction data as restraints; and (3) enabling discrimination between biological and crystallographic dimers. In addition, we have developed an extremely fast docking algorithm based on 5D rotational manifold FFT, and an algorithm for docking flexible peptides that include known sequence motifs. We feel that these developments will further improve the utility of ClusPro. However, CAPRI emphasized several shortcomings of the current server, including the problem of selecting the right energy parameters among the five options provided, and the problem of selecting the best models among the 10 generated for each parameter set. In addition, results convinced us that further development is needed for docking homology models. Finally, we discuss the difficulties we have encountered when attempting to develop a refinement algorithm that would be computationally efficient enough for inclusion in a heavily used server. Proteins 2017; 85:435-444. © 2016 Wiley Periodicals, Inc.

  18. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Stephen; Heaney, Michael; Jin, Xin

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  19. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Stephen; Heaney, Michael; Jin, Xin

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  20. Deployment strategy for battery energy storage system in distribution network based on voltage violation regulation

    NASA Astrophysics Data System (ADS)

    Wu, H.; Zhou, L.; Xu, T.; Fang, W. L.; He, W. G.; Liu, H. M.

    2017-11-01

    In order to improve the situation of voltage violations caused by the grid connection of photovoltaic (PV) systems in a distribution network, a bi-level programming model is proposed for battery energy storage system (BESS) deployment. The objective function of the inner-level program is to minimize voltage violations, with the power of the PV and BESS as the variables. The objective function of the outer-level program is to minimize a comprehensive function combining the inner-level objective and all the BESS operating parameters, with the capacity and rated power of the BESS as the variables. The differential evolution (DE) algorithm is applied to solve the model. Based on distribution network operation scenarios with photovoltaic generation under multiple alternative output modes, simulation results on the IEEE 33-bus system show that the BESS deployment strategy proposed in this paper is well adapted to voltage violation regulation in variable distribution network operation scenarios. It contributes to regulating voltage violations in the distribution network, as well as improving the utilization of PV systems.
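
    The DE solver itself is standard; a classic DE/rand/1/bin sketch for box-constrained minimization is given below as a stand-in, with the bi-level BESS objective left abstract as a cost callable and all parameters illustrative.

      import numpy as np

      def de_minimize(f, lb, ub, pop_size=40, F=0.5, CR=0.9, n_gen=300,
                      rng=np.random.default_rng(0)):
          """DE/rand/1/bin: mutation factor F, crossover rate CR."""
          dim = len(lb)
          pop = rng.uniform(lb, ub, (pop_size, dim))
          cost = np.array([f(x) for x in pop])
          for _ in range(n_gen):
              for i in range(pop_size):
                  a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                       size=3, replace=False)
                  mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lb, ub)
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True      # keep at least one gene
                  trial = np.where(cross, mutant, pop[i])
                  ct = f(trial)
                  if ct <= cost[i]:                    # greedy selection
                      pop[i], cost[i] = trial, ct
          best = int(np.argmin(cost))
          return pop[best], cost[best]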

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu; Phanish, Deepa

    We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic Orbital-Free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas–Fermi–von Weizsacker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
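
    The sequence-of-unconstrained-problems structure can be sketched generically for a single scalar equality constraint (in OF-DFT, the electron-number constraint plays this role): minimize the augmented Lagrangian without constraints, then update the multiplier. The solver choice and tolerances below are illustrative, not the paper's TFW finite-difference implementation.

      import numpy as np
      from scipy.optimize import minimize

      def augmented_lagrangian(f, grad_f, g, grad_g, x0, mu=10.0,
                               lam=0.0, n_outer=20, tol=1e-8):
          """Minimize f(x) subject to g(x) = 0 via a sequence of
          unconstrained solves of L(x) = f + lam*g + (mu/2)*g^2."""
          x = np.asarray(x0, dtype=float)
          for _ in range(n_outer):
              L = lambda y: f(y) + lam * g(y) + 0.5 * mu * g(y) ** 2
              dL = lambda y: grad_f(y) + (lam + mu * g(y)) * grad_g(y)
              x = minimize(L, x, jac=dL, method="CG").x   # unconstrained solve
              if abs(g(x)) < tol:
                  break
              lam += mu * g(x)                            # multiplier update
          return x, lam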

  2. Competitive two-agent scheduling problems to minimize the weighted combination of makespans in a two-machine open shop

    NASA Astrophysics Data System (ADS)

    Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia

    2018-04-01

    In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of the two competing agents. A complexity proof is presented for minimizing the weighted combination of the makespans of the agents when the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented for the case where the weight is equal to one, and another approximation algorithm is presented for the case where the weight is larger than one.

  3. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  4. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder is utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.

  5. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
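
    A minimal sketch of an extrapolated proximal iterative hard-thresholding step is given below for a synthesis-form ℓ0-regularized least-squares problem; the wavelet-frame balanced model, the line-search variant, and the parameter choices in the paper are not reproduced, and all names are illustrative.

      import numpy as np

      def hard_threshold(z, t):
          """Proximal map of t*||x||_0: zero every entry with z_i^2 < 2t."""
          out = z.copy()
          out[z ** 2 < 2.0 * t] = 0.0
          return out

      def epiht(A, b, lam=0.05, n_iter=300, beta=0.5):
          """Extrapolated proximal IHT for min 0.5*||A x - b||^2 + lam*||x||_0:
          gradient step at an extrapolated point, then the l0 proximal map."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the fit term
          x_prev = x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              y = x + beta * (x - x_prev)            # extrapolation
              grad = A.T @ (A @ y - b)
              x_prev, x = x, hard_threshold(y - step * grad, step * lam)
          return x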

  6. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.

  7. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  8. An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption

    NASA Astrophysics Data System (ADS)

    Sun, Yanhua; Hao, Zhe; Zhang, Yanhua

    2018-01-01

    With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance user experience. In this paper, we investigate a joint consideration of delay and energy consumption offloading scheme (JCDE) for an MEC system in 5G heterogeneous networks. An optimization problem is formulated to minimize the delay as well as the energy consumption of the offloading system, in which the delay and energy consumption of transmitting and computing tasks are taken into account. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of our proposed scheme, and the effect of parameter variations on the system is analysed as well. Numerical results demonstrate the delay and energy-efficiency improvements of our proposed scheme compared with a scheme from prior work.

  9. The effective use of virtualization for selection of data centers in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are facilities consisting of networks of remote servers used to store, access, and process data. Cloud computing is a technology where users worldwide submit their tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers need to employ the virtualization concept so that multiple tasks can be executed simultaneously. In this paper, we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of the service provider.
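
    The selection rule reduces to a sum and an argmin; a minimal sketch follows, with the data-structure layout (a mapping from data-center name to per-server lists of VM energies) assumed purely for illustration.

      def select_data_center(data_centers):
          """Route a task to the data center whose total virtualization
          energy (sum over servers of per-VM energies) is least."""
          totals = {
              name: sum(sum(vm_energies) for vm_energies in servers)
              for name, servers in data_centers.items()
          }
          return min(totals, key=totals.get), totals

      # Example with two hypothetical data centers:
      dcs = {"dc_east": [[1.2, 0.8], [2.0]], "dc_west": [[0.9], [1.1, 0.7]]}
      best, totals = select_data_center(dcs)   # best == "dc_west" (2.7 < 4.0)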

  10. Evolution, Energy Landscapes and the Paradoxes of Protein Folding

    PubMed Central

    Wolynes, Peter G.

    2014-01-01

    Protein folding has been viewed as a difficult problem of molecular self-organization. The search problem involved in folding however has been simplified through the evolution of folding energy landscapes that are funneled. The funnel hypothesis can be quantified using energy landscape theory based on the minimal frustration principle. Strong quantitative predictions that follow from energy landscape theory have been widely confirmed both through laboratory folding experiments and from detailed simulations. Energy landscape ideas also have allowed successful protein structure prediction algorithms to be developed. The selection constraint of having funneled folding landscapes has left its imprint on the sequences of existing protein structural families. Quantitative analysis of co-evolution patterns allows us to infer the statistical characteristics of the folding landscape. These turn out to be consistent with what has been obtained from laboratory physicochemical folding experiments signalling a beautiful confluence of genomics and chemical physics. PMID:25530262

  11. Enforcing Memory Policy Specifications in Reconfigurable Hardware

    DTIC Science & Technology

    2008-10-01

    ...we explain the algorithms behind our reference monitor design flow. In Section 4, we describe our access policy language including several example... NFA from this regular expression using Thompson's Algorithm [1] as implemented by Gerzic [19]. Figure 4 shows the NFA for our policy. Notice that the... Algorithm [1] as implemented by Grail [49] to minimize the DFA. Figure 5 shows the minimized DFA for our policy. Processing the Ranges: Before we can...

  12. Globally Deghosting for Marine Streamer with Alternating Minimization Approach in Frequency-slowness Domain

    NASA Astrophysics Data System (ADS)

    Wang, C.; Zhu, Z.; Gu, H.; Liu, C.; Liu, Z.; Jiao, Z.

    2017-12-01

    The ghost effect of the sea surface generates notches in marine towed-streamer data, which results in a narrow bandwidth of the seismic data. Currently, deghosting is widely utilized to increase the bandwidth of the seismic data or the images. However, most conventional deghosting algorithms have not considered the error in streamer depth, which causes a biased ghost-delay time (τ) with respect to the primary reflection, nor the fact that the amplitude difference coefficient (r) between ghost and primary reflection varies with offset due to the rugged seabed and target depth variation. We propose a ghost filtering operator that accounts for the potential biases in the ghost-delay time (τ) and the amplitude difference coefficient (r). The up-going wavefield (u), ghost-delay time (τ), and amplitude difference coefficient (r) can be obtained by using an alternating minimization approach to minimize the difference between the actual wavefield and the theoretical wavefield in the frequency-slowness domain. The main idea is to alternately update u, τ, and r in each iteration: we update u by least squares while keeping τ and r constant; we then keep u constant and optimize over τ and r with a closed-form solution that is closely related to matched filtering. The convergence of the proposed algorithm is guaranteed since we have closed-form solutions for each stage. Experiments on synthetic records confirm the reliability of the proposed algorithm. We also demonstrate our proposed method on a marine VDS shot acquisition. After migration stack processing, our deghosting method significantly increases the bandwidth of the average amplitude and the amplitude energy of the medium- and high-frequency spectrum, improving the resolution of medium and deep reflections and providing a higher signal-to-noise ratio with clear break points. This research is funded by China Important National Science & Technology Specific Projects (2016ZX05026001-001).

  13. Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gang, Lu

    Wireless sensor networks are expected to be used in a wide range of applications from environment monitoring to event detection. The key challenge is to provide energy efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is to design an adaptive low-latency and energy-efficient MAC for data gathering that reduces the sleep latency. We propose a staggered schedule, duty cycle adaptation, data prediction and the use of more-to-send packets to enable seamless packet forwarding under varying traffic load and channel contention. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second research effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst-case end-to-end latency (referred to as the delay diameter). We develop a novel graph-theoretical formulation, derive and analyze optimal solutions for tree and ring topologies, and give heuristics for arbitrary topologies. The third study addresses the problem of minimum-latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be solved by an M node-disjoint paths algorithm under a multiple-channel model. We further extend the algorithm to handle dynamic traffic changes and topology changes. A heuristic solution is proposed for MLSR under single-channel interference. In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control, and present both exponential- and polynomial-complexity solutions. We then investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach which converges rapidly to the optimal parameter settings.

  14. Majorization as a Tool for Optimizing a Class of Matrix Functions.

    ERIC Educational Resources Information Center

    Kiers, Henk A.

    1990-01-01

    General algorithms are presented that can be used for optimizing matrix trace functions subject to certain constraints on the parameters. The parameter set that minimizes the majorizing function also decreases the matrix trace function, providing a monotonically convergent algorithm for minimizing the matrix trace function iteratively. (SLD)

  15. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
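
    In the proximity-function special case, the majorization step has a closed form: the squared distance to each set is majorized by the squared distance to the current projection, and the surrogate's minimizer is simply the average of the projections. The sketch below implements that step; the quasi-Newton acceleration discussed in the paper is omitted, and the example sets are assumptions.

      import numpy as np

      def project_intersection(x0, projections, n_iter=500, tol=1e-9):
          """Distance-majorization (averaged projections) for finding a
          point in the intersection of closed convex sets, given a list
          of projection callables P_i onto the individual sets."""
          x = np.asarray(x0, dtype=float)
          for _ in range(n_iter):
              # MM step: minimize sum_i ||x - P_i(x_k)||^2 exactly.
              x_new = np.mean([P(x) for P in projections], axis=0)
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x

      # Example: intersect the unit ball with the halfspace {x : x[0] >= 0.5}.
      P_ball = lambda x: x / max(np.linalg.norm(x), 1.0)
      def P_half(x):
          y = x.copy()
          y[0] = max(y[0], 0.5)
          return y
      x_star = project_intersection(np.array([2.0, 2.0]), [P_ball, P_half])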

  16. Quantum discord of two-qubit X states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qing; Yu, Sixia

    Quantum discord provides a measure for quantifying quantum correlations beyond entanglement and is very hard to compute even for two-qubit states because of the minimization over all possible measurements. Recently a simple algorithm to evaluate the quantum discord for two-qubit X states was proposed by Ali, Rau, and Alber [Phys. Rev. A 81, 042105 (2010)], with the minimization taken over only a few cases. Here we first identify a class of X states, whose quantum discord can be evaluated analytically without any minimization, for which their algorithm is valid, and also identify a family of X states for which their algorithm fails. We then demonstrate that this special family of X states furthermore provides an explicit example of the inequivalence between the minimization over positive operator-valued measures and that over von Neumann measurements.

  17. Segmentation of large periapical lesions toward dental computer-aided diagnosis in cone-beam CT scans

    NASA Astrophysics Data System (ADS)

    Rysavy, Steven; Flores, Arturo; Enciso, Reyes; Okada, Kazunori

    2008-03-01

    This paper presents an experimental study for assessing the applicability of general-purpose 3D segmentation algorithms for analyzing dental periapical lesions in cone-beam computed tomography (CBCT) scans. In the field of Endodontics, clinical studies have been unable to determine if a periapical granuloma can heal with non-surgical methods. Addressing this issue, Simon et al. recently proposed a diagnostic technique which non-invasively classifies target lesions using CBCT. Manual segmentation exploited in their study, however, is too time consuming and unreliable for real world adoption. On the other hand, many technically advanced algorithms have been proposed to address segmentation problems in various biomedical and non-biomedical contexts, but they have not yet been applied to the field of dentistry. Presented in this paper is a novel application of such segmentation algorithms to the clinically-significant dental problem. This study evaluates three state-of-the-art graph-based algorithms: a normalized cut algorithm based on a generalized eigen-value problem, a graph cut algorithm implementing energy minimization techniques, and a random walks algorithm derived from discrete electrical potential theory. In this paper, we extend the original 2D formulation of the above algorithms to segment 3D images directly and apply the resulting algorithms to the dental CBCT images. We experimentally evaluate quality of the segmentation results for 3D CBCT images, as well as their 2D cross sections. The benefits and pitfalls of each algorithm are highlighted.

  18. Identifying High-redshift Gamma-Ray Bursts with RATIR

    NASA Astrophysics Data System (ADS)

    Littlejohns, O. M.; Butler, N. R.; Cucchiara, A.; Watson, A. M.; Kutyrev, A. S.; Lee, W. H.; Richer, M. G.; Klein, C. R.; Fox, O. D.; Prochaska, J. X.; Bloom, J. S.; Troja, E.; Ramirez-Ruiz, E.; de Diego, J. A.; Georgiev, L.; González, J.; Román-Zúñiga, C. G.; Gehrels, N.; Moseley, H.

    2014-07-01

    We present a template-fitting algorithm for determining photometric redshifts, z_phot, of candidate high-redshift gamma-ray bursts (GRBs). Using afterglow photometry, obtained by the Reionization and Transients InfraRed (RATIR) camera, this algorithm accounts for the intrinsic GRB afterglow spectral energy distribution, host dust extinction, and the effect of neutral hydrogen (local and cosmological) along the line of sight. We present the results obtained by this algorithm and the RATIR photometry of GRB 130606A, finding a range of best-fit solutions, 5.6 < z_phot < 6.0, for models of several host dust extinction laws (none, the Milky Way, Large Magellanic Clouds, and Small Magellanic Clouds), consistent with spectroscopic measurements of the redshift of this GRB. Using simulated RATIR photometry, we find that our algorithm provides precise measures of z_phot in the ranges 4 < z_phot ≲ 8 and 9 < z_phot < 10 and can robustly determine when z_phot > 4. Further testing highlights the required caution in cases of highly dust-extincted host galaxies. These tests also show that our algorithm does not erroneously find z_phot < 4 when z ≳ 4, thereby minimizing false negatives and allowing us to rapidly identify all potential high-redshift events.
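
    The sketch below illustrates the template-fitting idea with a deliberately simplified model: a power-law afterglow spectrum with a hard Lyman-alpha cutoff, chi-squared-minimized over a redshift grid. The band set, fluxes, and errors are toy assumptions, and the full algorithm additionally fits dust laws and IGM absorption:

      import numpy as np

      LYA = 0.1216  # rest-frame Lyman-alpha wavelength, microns

      def model_flux(wavelengths_um, z, beta=-0.7):
          rest = wavelengths_um / (1.0 + z)
          flux = rest ** beta                 # power-law afterglow continuum
          flux[rest < LYA] = 0.0              # absorbed blueward of Ly-alpha
          return flux

      bands = np.array([0.64, 0.79, 1.05, 1.25, 1.64, 2.15])  # r,i,Z,J,H,K (um)
      obs = model_flux(bands, 5.8) * 1.0e-28                  # fake "observed" fluxes
      err = np.full_like(obs, 3e-30)

      zgrid = np.linspace(3.0, 10.0, 701)
      chi2 = []
      for z in zgrid:
          m = model_flux(bands, z)
          # analytic best-fit amplitude for a linear model, then chi^2
          a = np.sum(obs * m / err**2) / max(np.sum(m**2 / err**2), 1e-300)
          chi2.append(np.sum(((obs - a * m) / err) ** 2))
      print("z_phot ~", zgrid[int(np.argmin(chi2))])

    The dropout of the bluest bands is what carries the redshift information, which is also why heavy host dust extinction (which likewise suppresses blue flux) demands the caution noted above.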

  19. Electromagnetic interference-aware transmission scheduling and power control for dynamic wireless access in hospital environments.

    PubMed

    Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio

    2011-11-01

    We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual objective optimization problem which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both the algorithms.

  20. Optimal sensor fusion for land vehicle navigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, J.D.

    1990-10-01

    Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straightforward method for autonomous path-following has been adopted and requires "digitizing" the desired road network by storing x,y coordinates every 2 m along the roads. The position location system used to define the locations consists of a radio beacon system which triangulates position from two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
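
    Under the reading that the "optimal" algorithm treats the two position estimates as unbiased random variables and minimizes the variance of their combination, it corresponds to classical inverse-variance weighting. A minimal sketch with illustrative numbers (not the RSV data):

      import numpy as np

      # Inverse-variance fusion of two unbiased estimates: the weights below
      # minimize Var(w*x1 + (1-w)*x2), giving the classical minimum-variance
      # combination. Positions and variances are illustrative.
      def fuse(x1, var1, x2, var2):
          w1 = var2 / (var1 + var2)
          w2 = var1 / (var1 + var2)
          return w1 * x1 + w2 * x2, (var1 * var2) / (var1 + var2)

      beacon   = np.array([102.3, 48.9])   # radio-triangulated position (m), var 4.0
      deadreck = np.array([101.1, 50.2])   # dead-reckoned position (m), var 1.0
      est, var = fuse(beacon, 4.0, deadreck, 1.0)
      print(est, var)

    Note the fused variance is always below the smaller input variance, which is the payoff of combining the sensors rather than picking one.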

  1. Cross-platform validation and analysis environment for particle physics

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  2. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
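
    A hedged sketch of a gradient-type method for this problem class: with an l1 penalty as a common stand-in for the paper's special penalty functional (not its exact definition), the proximal step under the nonnegativity constraint reduces to a shifted ReLU.

      import numpy as np

      # Gradient-type sketch for min_x 0.5*||A x - y||^2 + lam*||x||_1, x >= 0.
      # The prox of lam*||.||_1 restricted to x >= 0 is max(0, v - eta*lam).
      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 200))
      x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, 2.0, 0.5]
      y = A @ x_true

      lam = 0.1
      eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step below 1/Lipschitz constant
      x = np.zeros(200)
      for _ in range(2000):
          grad = A.T @ (A @ x - y)
          x = np.maximum(0.0, x - eta * (grad + lam))   # gradient step + prox
      print("recovered support:", np.flatnonzero(x > 1e-6))

    The semismooth Newton method studied in the paper accelerates exactly this kind of fixed-point iteration by exploiting the piecewise-smooth structure of the max operation.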

  3. On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies

    PubMed Central

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier

    2013-01-01

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSNs. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582

  4. On maximizing the lifetime of Wireless Sensor Networks by optimally assigning energy supplies.

    PubMed

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; González-Castano, Francisco Javier

    2013-08-09

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSNs. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively.

  5. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches outperform it in terms of both solution quality and execution time.

  6. Real-time minimal-bit-error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  7. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  8. Linking the historical and chemical definitions of diabatic states for charge and excitation energy transfer reactions in condensed phase.

    PubMed

    Pavanello, Michele; Neugebauer, Johannes

    2011-10-07

    Marcus theory of electron transfer (ET) and Förster theory of excitation energy transfer (EET) rely on the Condon approximation and the theoretical availability of initial and final states of ET and EET reactions, often called diabatic states. Recently [Subotnik et al., J. Chem. Phys. 130, 234102 (2009)], diabatic states for practical calculations of ET and EET reactions were defined in terms of their interactions with the surrounding environment. However, from a purely theoretical standpoint, the definition of diabatic states must arise from the minimization of the dynamic couplings between the trial diabatic states. In this work, we show that if the Condon approximation is valid, then a minimization of the derived dynamic couplings leads to corresponding diabatic states for ET reactions taking place in solution by diagonalization of the dipole moment matrix, which is equivalent to a Boys localization algorithm; while for EET reactions in solution, diabatic states are found through the Edmiston-Ruedenberg localization algorithm. In the derivation, we find interesting expressions for the environmental contribution to the dynamic coupling of the adiabatic states in condensed-phase processes. In one of the cases considered, we find that such a contribution is trivially evaluable as a scalar product of the transition dipole moment with a quantity directly derivable from the geometry arrangement of the nuclei in the molecular environment. Possibly, this has applications in the evaluation of dynamic couplings for large scale simulations. © 2011 American Institute of Physics

  9. An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings

    NASA Astrophysics Data System (ADS)

    Karizi, Nasim

    An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% being built each year. Hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research work demonstrates an innovative adaptive integrated lighting control approach which could achieve significant energy savings and increase indoor comfort in high-performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink, which included an innovative sensor optimization approach based on a genetic algorithm to minimize the number of sensors and efficiently place them in the office. The controller was designed using simple integral controllers. The objective of the developed control algorithm was to improve the illuminance conditions in the office by controlling the daylight and electrical lighting. To evaluate the performance of the system, the controller was applied to the experimental office model from Lee et al.'s 1998 study. The results of the developed control approach indicate significantly improved lighting conditions and monthly electrical energy savings of 1-23% and 50-78% in the office model, compared to two static strategies in which the blinds were left open or closed throughout the year, respectively.

  10. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin² observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  11. Use of digital control theory state space formalism for feedback at SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himel, T.; Hendrickson, L.; Rouse, F.

    The algorithms used in the database-driven SLC fast-feedback system are based on the state space formalism of digital control theory. These are implemented as a set of matrix equations which use a Kalman filter to estimate a vector of states from a vector of measurements, and then apply a gain matrix to determine the actuator settings from the state vector. The matrices used in the calculation are derived offline using Linear Quadratic Gaussian minimization. For a given noise spectrum, this procedure minimizes the rms of the states (e.g., the position or energy of the beam). The offline program also allows simulation of the loop's response to arbitrary inputs, and calculates its frequency response. 3 refs., 3 figs.
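
    A minimal sketch of the estimate-then-actuate cycle described above: a Kalman filter updates the state estimate from each measurement vector, and a gain matrix maps states to actuator settings. All matrices here are toy assumptions; in the SLC system the gain matrix comes from offline LQG minimization.

      import numpy as np

      A = np.eye(2)                          # state transition (states persist)
      H = np.array([[1.0, 0.0],
                    [1.0, 1.0]])             # measurement matrix
      Q = 0.01 * np.eye(2)                   # process noise covariance
      R = 0.10 * np.eye(2)                   # measurement noise covariance
      K_lqg = -0.5 * np.eye(2)               # actuator gain (offline LQG in practice)

      x, P = np.zeros(2), np.eye(2)          # state estimate and its covariance
      for z in [np.array([0.3, 0.5]), np.array([0.28, 0.55])]:
          x, P = A @ x, A @ P @ A.T + Q                  # predict
          S = H @ P @ H.T + R                            # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
          x = x + K @ (z - H @ x)                        # measurement update
          P = (np.eye(2) - K @ H) @ P
          u = K_lqg @ x                                  # actuator settings
          print("state:", x, "actuators:", u)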

  12. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    PubMed

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.

  13. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks

    PubMed Central

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-01-01

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636

  14. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks.

    PubMed

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-07-04

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement.

  15. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternating minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are independent, and black pixels can similarly be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures, such as CPU-multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
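
    A hedged sketch of the chessboard (red-black) scheme: pixels of one color depend only on neighbors of the other color, so each half-sweep can run fully in parallel (here vectorized by numpy). The neighbour-averaging relaxation below is a generic stand-in, not the accumulation-of-residual-maps cost function itself, and periodic boundaries are an assumption for brevity.

      import numpy as np

      def redblack_sweep(u):
          v = u.copy()
          i, j = np.indices(u.shape)
          red = (i + j) % 2 == 0
          nbr = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
          v[red] = nbr[red]                  # all red pixels update independently
          nbr = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
          v[~red] = nbr[~red]                # then all black pixels
          return v

      u = np.random.default_rng(1).standard_normal((64, 64))
      for _ in range(100):
          u = redblack_sweep(u)
      print(float(u.std()))                  # relaxation drives the field smooth

    On a GPU, each color class maps naturally onto one kernel launch, which is what makes the chessboard ordering attractive across the CPU, Xeon Phi, and GPU backends compared in the paper.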

  16. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks.

    PubMed

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-07-14

    Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, they do not discuss full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then forms a small spanning tree gradually. Finally, the depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with high network coverage rate, as well as improved network average node degree, thus increasing network reliability.

  17. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    PubMed Central

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-01-01

    Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, they do not discuss full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then forms a small spanning tree gradually. Finally, the depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with high network coverage rate, as well as improved network average node degree, thus increasing network reliability. PMID:27428970

  18. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.

  19. On an algorithmic definition for the components of the minimal cell.

    PubMed

    Martínez, Octavio; Reyes-Valdés, M Humberto

    2018-01-01

    Living cells are highly complex systems comprising a multitude of elements that are engaged in the many convoluted processes observed during the cell cycle. However, not all elements and processes are essential for cell survival and reproduction under steady-state environmental conditions. To distinguish essential from expendable cell components and thus define the 'minimal cell' and the corresponding 'minimal genome', we postulate that the synthesis of all cell elements can be represented as a finite set of binary operators, and within this framework we show that cell elements that depend on their previous existence to be synthesized are those that are essential for cell survival. An algorithm to distinguish essential cell elements is presented and demonstrated within an interactome. Data and functions implementing the algorithm are given as supporting information. We expect that this algorithmic approach will lead to the determination of the complete interactome of the minimal cell, which could then be experimentally validated. The assumptions behind this hypothesis as well as its consequences for experimental and theoretical biology are discussed.

  20. Optimization Methods in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.

    2009-09-01

    Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g., Cash), is minimized in the fitting process to obtain a set of best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g., an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. However, Sherpa, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their use cases, focusing on the application to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
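
    For context, a minimal DE/rand/1/bin loop in the style of Storn and Price (1997): mutate with a + F*(b - c), apply binomial crossover, keep the trial only if it improves. The Rosenbrock objective and the control constants below are toy assumptions, not Sherpa's implementation or a Chandra fit statistic.

      import numpy as np

      def rosenbrock(x):
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

      rng = np.random.default_rng(2)
      NP, D, F, CR = 30, 5, 0.8, 0.9         # population, dimension, DE constants
      pop = rng.uniform(-2, 2, (NP, D))
      fit = np.array([rosenbrock(x) for x in pop])

      for _ in range(500):
          for i in range(NP):
              idx = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
              a, b, c = pop[idx]
              mutant = a + F * (b - c)                 # differential mutation
              cross = rng.random(D) < CR               # binomial crossover mask
              cross[rng.integers(D)] = True            # guarantee one mutated gene
              trial = np.where(cross, mutant, pop[i])
              f = rosenbrock(trial)
              if f <= fit[i]:                          # greedy selection
                  pop[i], fit[i] = trial, f
      print(float(fit.min()))

    The population-based search is what lets the method hop between the multiple local minima of a multimodal fit statistic, at the cost of many more function evaluations than Levenberg-Marquardt or Nelder-Mead.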

  1. Adaptive-weighted Total Variation Minimization for Sparse Data toward Low-dose X-ray Computed Tomography Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-01-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
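
    The abstract describes weights "expressed as an exponential function" of the local image-intensity gradient. A common written form consistent with that description (hedged: delta is a scale parameter controlling edge preservation, and the paper's exact definition may differ in detail) is

      \[
      \mathrm{AwTV}(u)=\sum_{i\sim j} w_{ij}\,\lvert u_i-u_j\rvert,
      \qquad
      w_{ij}=\exp\!\left[-\left(\frac{u_i-u_j}{\delta}\right)^{2}\right],
      \]

    where the sum runs over neighboring voxel pairs. Large local gradients drive w_{ij} toward zero, so edges are penalized less than in plain TV and therefore survive the minimization.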

  2. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    PubMed

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-07

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.

  3. Improved Model for Predicting the Free Energy Contribution of Dinucleotide Bulges to RNA Duplex Stability.

    PubMed

    Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M

    2015-09-01

    Predicting the secondary structure of RNA is an intermediate step in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested that energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of the nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and the new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
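
    The rules above translate directly into a lookup. The sketch below encodes the stated parameters; the function name, argument conventions, and pair encoding are ours, not the authors':

      # Free-energy penalty (kcal/mol) for a dinucleotide bulge under the
      # sequence-dependent model described above. n1, n2 are the bulged
      # nucleotides 5'->3'; adjacent_pairs lists the base pairs flanking
      # the bulge, each as a two-letter string such as "AU" or "GC".
      PURINES, PYRIMIDINES = set("AG"), set("CU")

      def bulge_dg(n1, n2, adjacent_pairs):
          if n1 in PURINES and n2 in PURINES:
              dg = 3.06                       # two purines
          elif n1 in PYRIMIDINES and n2 in PYRIMIDINES:
              dg = 2.93                       # two pyrimidines
          elif n1 in PURINES:
              dg = 2.71                       # 5'-purine-pyrimidine-3'
          else:
              dg = 2.41                       # 5'-pyrimidine-purine-3'
          for pair in adjacent_pairs:
              if set(pair) == set("AU"):
                  dg += 0.45                  # A-U adjacent penalty
              elif set(pair) == set("GU"):
                  dg -= 0.28                  # G-U adjacent bonus
          return dg

      print(bulge_dg("A", "C", ["AU", "GC"]))   # 2.71 + 0.45 = 3.16 kcal/mol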

  4. Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.

    PubMed

    Cvitaš, Marko T

    2018-03-13

    The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface of the exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    March-Leuba, J.A.

    Nuclear plants of the 21st century will employ higher levels of automation and fault tolerance to increase availability, reduce accident risk, and lower operating costs. Key developments in control algorithms, fault diagnostics, fault tolerance, and communication in a distributed system are needed to implement the fully automated plant. Equally challenging will be integrating developments in separate information and control fields into a cohesive system, which collectively achieves the overall goals of improved performance, safety, reliability, maintainability, and cost-effectiveness. Under the Nuclear Energy Research Initiative (NERI), the U.S. Department of Energy is sponsoring a project to address some of the technical issues involved in meeting the long-range goal of 21st century reactor control systems. This project, "A New Paradigm for Automated Development of Highly Reliable Control Architectures for Future Nuclear Plants," involves researchers from Oak Ridge National Laboratory, the University of Tennessee, and North Carolina State University. This paper documents a research effort to develop methods for automated generation of control systems that can be traced directly to the design requirements. Our final goal is to allow the designer to specify only high-level requirements and stress factors that the control system must survive (e.g., a list of transients, or a requirement to withstand a single failure). To this end, the "control engine" automatically selects and validates control algorithms and parameters that are optimized to the current state of the plant and that have been tested under the prescribed stress factors. The control engine then automatically generates the control software from validated algorithms. Examples of stress factors that the control system must "survive" are transient events (e.g., set-point changes, or expected occurrences such as a load rejection) and postulated component failures. These stress factors are specified by the designer and become a database of prescribed transients and component failures. The candidate control systems are tested, and their parameters optimized, for each of these stresses. Examples of high-level requirements are: response time less than xx seconds, or overshoot less than xx%, etc. In mathematical terms, these types of requirements are defined as "constraints," and there are standard mathematical methods to minimize an objective function subject to constraints. Since, in principle, any control design that satisfies all the above constraints is acceptable, the designer must also select an objective function that describes the "goodness" of the control design. Examples of objective functions are: minimize the number or amount of control motions, minimize an energy balance, etc.

  6. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563

  7. Semiautomatic tumor segmentation with multimodal images in a conditional random field framework.

    PubMed

    Hu, Yu-Chi; Grossberg, Michael; Mageras, Gikas

    2016-04-01

    Volumetric medical images of a single subject can be acquired using different imaging modalities, such as computed tomography, magnetic resonance imaging (MRI), and positron emission tomography. In this work, we present a semiautomatic segmentation algorithm that can leverage the synergies between different image modalities while integrating interactive human guidance. The algorithm provides a statistical segmentation framework partly automating the segmentation task while still maintaining critical human oversight. The statistical models presented are trained interactively using simple brush strokes to indicate tumor and nontumor tissues and using intermediate results within a patient's image study. To accomplish the segmentation, we construct the energy function in the conditional random field (CRF) framework. For each slice, the energy function is set using the estimated probabilities from both user brush stroke data and prior approved segmented slices within a patient study. The progressive segmentation is obtained using a graph-cut-based minimization. Although no similar semiautomated algorithm is currently available, we evaluated our method with an MRI data set from Medical Image Computing and Computer Assisted Intervention Society multimodal brain segmentation challenge (BRATS 2012 and 2013) against a similar fully automatic method based on CRF and a semiautomatic method based on grow-cut, and our method shows superior performance.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumuluru, Jaya Shankar; McCulloch, Richard Chet James

    In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
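
    A sketch of the adaptive steepest-ascent component as described above: perturb each variable, step the most beneficial one, and halve the step when the best direction exactly reverses. The objective, constants, and stopping rule are toy choices, not the authors' pelleting model (the original is in MATLAB; Python is used here for consistency with the other sketches):

      import numpy as np

      def adaptive_ascent(f, x, step=0.5, eps=1e-3, iters=1000):
          last_dir = None
          for _ in range(iters):
              base = f(x)
              best_gain, best_move = 0.0, None
              for i in range(len(x)):          # probe each variable both ways
                  for s in (+1, -1):
                      trial = x.copy(); trial[i] += s * eps
                      gain = f(trial) - base
                      if gain > best_gain:
                          best_gain, best_move = gain, (i, s)
              if best_move is None:            # no perturbation helps: stop
                  break
              i, s = best_move
              if last_dir == (i, -s):          # direction reversed: adapt step
                  step /= 2.0
              x[i] += s * step
              last_dir = (i, s)
          return x

      f = lambda x: -np.sum((x - 1.5) ** 2)    # toy objective, maximum at (1.5, 1.5)
      print(adaptive_ascent(f, np.zeros(2)))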

  9. Unified anomaly suppression and boundary extraction in laser radar range imagery based on a joint curve-evolution and expectation-maximization algorithm.

    PubMed

    Feng, Haihua; Karl, William Clem; Castañon, David A

    2008-05-01

    In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.

  10. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
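
    The key computational point above is that a cost quadratic in the fitting parameters reduces minimization to one linear-algebra solve. A hedged sketch: the design matrix below (an intercept, a neighbour-intensity column, and fake sequence-affinity features) is an illustrative stand-in for the paper's model, not its actual feature set.

      import numpy as np

      # Quadratic cost ||X theta - y||^2 minimized by a single least-squares
      # solve; no iterative optimizer is needed.
      rng = np.random.default_rng(3)
      n = 1000
      neighbor_intensity = rng.gamma(2.0, 1.0, n)     # correlated-neighbour feature
      seq_features = rng.random((n, 4))               # stand-in sequence features
      X = np.column_stack([np.ones(n), neighbor_intensity, seq_features])
      true_theta = np.array([0.5, 0.3, 1.0, -0.2, 0.7, 0.1])
      y = X @ true_theta + 0.05 * rng.standard_normal(n)

      theta, *_ = np.linalg.lstsq(X, y, rcond=None)
      background = X @ theta                          # estimated background level
      print(np.round(theta, 2))

    This is what makes the method fast enough to run over hundreds of GeneChips: the per-chip cost is one factorization rather than an iterative fit.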

  11. Optimization of Stability Constrained Geometrically Nonlinear Shallow Trusses Using an Arc Length Sparse Method with a Strain Energy Density Approach

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.; Nguyen, Duc T.

    2008-01-01

    A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.

  12. New Approaches to Minimum-Energy Design of Integer- and Fractional-Order Perfect Control Algorithms

    NASA Astrophysics Data System (ADS)

    Hunek, Wojciech P.; Wach, Łukasz

    2017-10-01

    In this paper new methods for the energy-based minimization of perfect control inputs are presented. For that reason, multivariable integer- and fractional-order models are applied, which can be used to describe various real-world processes. Up to now, the classical approaches have been used in the form of minimum-norm/least-squares inverses. Notwithstanding, the above-mentioned tools do not guarantee the optimal control corresponding to optimal input energy. Therefore a new class of inverse-based methods has been introduced, in particular the new σ- and H-inverses of nonsquare parameter and polynomial matrices. The proposed solution thus remarkably outperforms the typical ones in systems where the control runs can be understood in terms of different physical quantities, for example heat and mass transfer, electricity, etc. A simulation study performed in the Matlab/Simulink environment confirms the potential of the new energy-based approaches.
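
    For context, the classical minimum-norm/least-squares inverse the authors contrast against is the Moore-Penrose right inverse: for a right-invertible nonsquare matrix B and target y (the symbols B, y are ours, used only for illustration),

      \[
      u_0 = B^{\mathrm{T}}\!\left(B B^{\mathrm{T}}\right)^{-1} y
      \]

    is the unique solution of \(Bu = y\) with minimal Euclidean norm, i.e. minimal pointwise input energy \(u^{\mathrm{T}}u\). The σ- and H-inverses discussed above generalize this single fixed choice, which is how they can achieve lower total energy over an entire perfect-control run.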

  13. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption as well as protect the actuator while satisfying the requirements on system performance. First, we introduce a dynamic model of the offshore platform with low-order main modes based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model directly, we use a relaxation method with matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  14. CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications.

    PubMed

    Lei, Guoqing; Dou, Yong; Wan, Wen; Xia, Fei; Li, Rongchun; Ma, Meng; Zou, Dan

    2012-01-01

    Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular methods of free energy minimization for RNA secondary structure prediction. Thus far, few studies have been reported on the acceleration of the Zuker algorithm on general-purpose processors or on accelerators such as Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). To the best of our knowledge, no implementation combines both CPUs and accelerators, such as GPUs, to accelerate Zuker algorithm applications. In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between CPU and GPU for parallel cooperative execution. Performance differences between the CPU and the GPU in the task-allocation scheme are considered to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is optimally implemented with special methods for the CPU and GPU architectures. A speedup of 15.93× over an optimized multi-core SIMD CPU implementation and a performance advantage of 16% over an optimized GPU implementation are shown in the experimental results. More than 14% of the sequences are executed on the CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is proven to be promising and can be applied to other bioinformatics applications.

  15. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The distribution of goods needs a strategy that minimizes the total cost spent on operational activities. However, several constraints have to be satisfied, namely the capacity of the vehicles and the service times of the customers. This Vehicle Routing Problem with Time Windows (VRPTW) is a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm to achieve simpler and faster convergence. From the computational results, these algorithms give good performance in minimizing the total distance. A larger population leads to better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing in dealing with big data.

  16. Harmonic Optimization in Voltage Source Inverter for PV Application using Heuristic Algorithms

    NASA Astrophysics Data System (ADS)

    Kandil, Shaimaa A.; Ali, A. A.; El Samahy, Adel; Wasfi, Sherif M.; Malik, O. P.

    2016-12-01

    The Selective Harmonic Elimination (SHE) technique is a fundamental-switching-frequency scheme that is used to eliminate specific-order harmonics. Its application to minimize low-order harmonics in a three-level inverter is proposed in this paper. The modulation strategy used here is SHEPWM, and the nonlinear equations that characterize the low-order harmonics are solved using the Harmony Search Algorithm (HSA) to obtain the optimal switching angles that minimize the required harmonics and maintain the fundamental at the desired value. The Total Harmonic Distortion (THD) of the output voltage is minimized while maintaining selected harmonics within allowable limits. A comparison has been drawn between HSA, a Genetic Algorithm (GA), and the Newton-Raphson (NR) technique using MATLAB software to determine the effectiveness of obtaining optimized switching angles.
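
    For context, the usual textbook form of the SHE system for M switching angles per quarter-cycle (hedged: the paper's exact system may differ in normalization and in which harmonics are targeted) pins the fundamental to the modulation target m_a while driving selected low-order odd harmonics to zero:

      \[
      \sum_{i=1}^{M}\cos\alpha_i = M\,m_a,
      \qquad
      \sum_{i=1}^{M}\cos\!\left(k\,\alpha_i\right)=0 \quad (k=5,7,\dots),
      \]

    subject to \(0<\alpha_1<\alpha_2<\cdots<\alpha_M<\pi/2\). The system is transcendental and multimodal, which is why stochastic searches such as HSA and GA are compared against Newton-Raphson, whose convergence depends on a good initial guess.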

  17. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.
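
    The bilevel problem studied has the form below; the iteration shown afterwards is only a generic explicit ε-subgradient step of the kind this class contains, with the weighting sequence t_k and the subgradient choices being illustrative assumptions rather than the paper's exact scheme:

    $$
    \min_{x \in S^*} f(x), \qquad S^* = \operatorname*{arg\,min}_{y \in C} g(y),
    $$

    $$
    x^{k+1} = P_C\big(x^k - \lambda_k (v^k + t_k u^k)\big), \qquad v^k \in \partial_{\varepsilon_k} g(x^k),\; u^k \in \partial_{\varepsilon_k} f(x^k),
    $$

    where P_C is the projection onto C, λ_k are step sizes, and t_k ↓ 0 gradually shifts weight from the lower-level objective g to the upper-level objective f.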

  18. Minimization of Delay Costs in the Realization of Production Orders in Two-Machine System

    NASA Astrophysics Data System (ADS)

    Dylewski, Robert; Jardzioch, Andrzej; Dworak, Oliver

    2018-03-01

    The article presents a new algorithm that finds an optimal schedule of production orders in a two-machine system based on the minimum cost of order delays. The formulated algorithm uses the branch-and-bound method and generalises an algorithm for determining the sequence of production orders with the minimal sum of delays. To illustrate the proposed algorithm, the article contains examples accompanied by graphical solution trees. Research analysing the utility of the algorithm was conducted, and the results confirmed its usefulness for order scheduling. The algorithm was implemented in Matlab, and studies for different sets of production orders were carried out.

  19. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

    A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection are included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that on hand-generated pictures the algorithm followed straight lines well but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. The algorithm compares favorably with other inference algorithms. More importantly, it produces an automaton with a rigorously described relationship to the original set of strings, one that does not depend on the algorithm itself.

  20. Modified Shuffled Frog Leaping Optimization Algorithm Based Distributed Generation Rescheduling for Loss Minimization

    NASA Astrophysics Data System (ADS)

    Arya, L. D.; Koshti, Atul

    2018-05-01

    This paper investigates the optimization of Distributed Generation (DG) capacity at locations selected by an incremental voltage sensitivity criterion in a sub-transmission network. The Modified Shuffled Frog Leaping Algorithm (MSFLA) is used to optimize the DG capacity. An induction generator model of DG (wind-based generating units) is considered for the study, which is carried out on the standard IEEE 30-bus test system. The results are validated against the shuffled frog leaping algorithm and a modified version of bare-bones particle swarm optimization (BBExp); MSFLA is found to be more efficient than either of the other two algorithms for the real power loss minimization problem.

  1. A wavelet based method for automatic detection of slow eye movements: a pilot study.

    PubMed

    Magosso, Elisa; Provini, Federica; Montagna, Pasquale; Ursino, Mauro

    2006-11-01

    Electro-oculographic (EOG) activity during the wake-sleep transition is characterized by the appearance of slow eye movements (SEM). The present work describes an algorithm for the automatic localisation of SEM events from EOG recordings. The algorithm is based on a wavelet multiresolution analysis of the difference between the right and left EOG tracings, and includes three main steps: (i) wavelet decomposition down to 10 detail levels (i.e., 10 scales), using the Daubechies order-4 wavelet; (ii) computation of energy in 0.5 s time steps at every level of decomposition; (iii) construction of a non-linear discriminant function expressing the relative energy of high-scale details with respect to both high- and low-scale details. The main assumption is that the value of the discriminant function rises above a given threshold during SEM episodes due to energy redistribution toward higher scales. Ten EOG recordings from ten male patients with obstructive sleep apnea syndrome were used. All tracings included a period from pre-sleep wakefulness to stage 2 sleep. Two experts inspected the tracings separately to score SEMs. A reference set of SEMs (gold standard) was obtained by joint examination by both experts. Parameters of the discriminant function were assigned on three tracings (design set) to minimize the disagreement between the system classification and classification by the two experts; the algorithm was then tested on the remaining seven tracings (test set). Results show that the agreement between the algorithm and the gold standard was 80.44+/-4.09%, the sensitivity of the algorithm was 67.2+/-7.37% and the selectivity 83.93+/-8.65%. Most errors, however, were not caused by an inability of the system to detect intervals with SEM activity against NON-SEM intervals, but were due to a different localisation of the beginning and end of some SEM episodes. The proposed method may be a valuable tool for computerized EOG analysis.
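
    A plausible reading of the discriminant described in step (iii), writing E_high(t) and E_low(t) for the detail energies at the high and low scales in each 0.5 s step (the exact nonlinearity used in the paper may differ), is

    $$
    D(t) = \frac{E_{\text{high}}(t)}{E_{\text{high}}(t) + E_{\text{low}}(t)},
    $$

    with an SEM episode flagged wherever D(t) exceeds the threshold tuned on the design set.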

  2. Passive designs and renewable energy systems optimization of a net zero energy building in Embrun/France

    NASA Astrophysics Data System (ADS)

    Harkouss, F.; Biwole, P. H.; Fardoun, F.

    2018-05-01

    Building optimization is a systematic way to explore the available design choices, starting from passive strategies, moving to energy-efficient systems, and finally selecting the appropriate renewable energy system to implement. This paper outlines the methodology and the cost-effectiveness potential of optimizing the design of a net-zero energy building in the French city of Embrun. The non-dominated sorting genetic algorithm is chosen to minimize thermal demand, electrical demand and life-cycle cost while reaching the net-zero energy balance, thus producing the Pareto front. The Elimination and Choice Expressing the Reality decision-making method is applied to the Pareto front to obtain a single optimal solution. A wide range of energy efficiency measures is investigated, and solar energy systems are employed to produce the required electricity and hot water for domestic purposes. The results indicate that the appropriate selection of the passive parameters is critical in reducing the building energy consumption. The optimum design parameters reduce the building's thermal loads and life-cycle cost by 32.96% and 14.47%, respectively.

  3. Application of independent component analysis for speech-music separation using an efficient score function estimation

    NASA Astrophysics Data System (ADS)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. This requires estimating the score function from samples of the observed signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on Gaussian-mixture-based kernel density estimation. Experimental results on speech-music separation, compared with a separation algorithm based on the Minimum Mean Square Error estimator, show better performance and shorter processing time.
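
    The natural gradient update for mutual information minimization referred to here is the standard one; with demixing matrix W, outputs y = Wx, learning rate μ and score function φ (estimated in this paper via Gaussian-mixture kernel density estimation), one iteration reads

    $$
    W \leftarrow W + \mu \left( I - \varphi(\mathbf{y})\, \mathbf{y}^{\mathsf{T}} \right) W .
    $$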

  4. A population-based evolutionary search approach to the multiple minima problem in de novo protein structure prediction

    PubMed Central

    2013-01-01

    Background Elucidating the native structure of a protein molecule from its sequence of amino acids, a problem known as de novo structure prediction, is a long standing challenge in computational structural biology. Difficulties in silico arise due to the high dimensionality of the protein conformational space and the ruggedness of the associated energy surface. The issue of multiple minima is a particularly troublesome hallmark of energy surfaces probed with current energy functions. In contrast to the true energy surface, these surfaces are weakly-funneled and rich in comparably deep minima populated by non-native structures. For this reason, many algorithms seek to be inclusive and obtain a broad view of the low-energy regions through an ensemble of low-energy (decoy) conformations. Conformational diversity in this ensemble is key to increasing the likelihood that the native structure has been captured. Methods We propose an evolutionary search approach to address the multiple-minima problem in decoy sampling for de novo structure prediction. Two population-based evolutionary search algorithms are presented that follow the basic approach of treating conformations as individuals in an evolving population. Coarse graining and molecular fragment replacement are used to efficiently obtain protein-like child conformations from parents. Potential energy is used both to bias parent selection and determine which subset of parents and children will be retained in the evolving population. The effect on the decoy ensemble of sampling minima directly is measured by additionally mapping a conformation to its nearest local minimum before considering it for retention. The resulting memetic algorithm thus evolves not just a population of conformations but a population of local minima. Results and conclusions Results show that both algorithms are effective in terms of sampling conformations in proximity of the known native structure. The additional minimization is shown to be key to enhancing sampling capability and obtaining a diverse ensemble of decoy conformations, circumventing premature convergence to sub-optimal regions in the conformational space, and approaching the native structure with proximity that is comparable to state-of-the-art decoy sampling methods. The results are shown to be robust and valid when using two representative state-of-the-art coarse-grained energy functions. PMID:24565020

  5. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be addressed with heuristic algorithms. In this paper, Ant Colony Optimization based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment, and the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time and the number of migrations.
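
    A minimal ACO skeleton for the placement problem is sketched below; the pheromone update rule, the parameter values and the `cost` callable (standing in for the paper's cost/response-time model) are assumptions for illustration, not the authors' implementation.

    ```python
    import random

    def aco_vm_placement(vms, hosts, cost, ants=20, iters=100, rho=0.1, alpha=1.0):
        # Pheromone tau[(v, h)] biases assigning VM v to host h; lower cost is better.
        tau = {(v, h): 1.0 for v in vms for h in hosts}
        best, best_cost = None, float("inf")
        for _ in range(iters):
            for _ in range(ants):
                # Each ant builds a full placement, pheromone-proportionally.
                placement = {}
                for v in vms:
                    weights = [tau[(v, h)] ** alpha for h in hosts]
                    placement[v] = random.choices(hosts, weights=weights)[0]
                c = cost(placement)
                if c < best_cost:
                    best, best_cost = placement, c
            for key in tau:                      # pheromone evaporation
                tau[key] *= 1.0 - rho
            for v, h in best.items():            # reinforce the best placement
                tau[(v, h)] += 1.0 / (1.0 + best_cost)
        return best
    ```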

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Joyce Jihyun; Yin, Rongxin; Kiliccote, Sila

    Open Automated Demand Response (OpenADR), an XML-based information exchange model, is used to facilitate continuous price-responsive operation and demand response participation for large commercial buildings in New York that are subject to default day-ahead hourly pricing. We summarize the existing demand response programs in New York and discuss OpenADR communication, prioritization of demand response signals, and control methods. Building energy simulation models are developed and field tests conducted to evaluate the continuous energy management and demand response capabilities of two commercial buildings in New York City. Preliminary results reveal that providing machine-readable prices to commercial buildings can facilitate both demand response participation and continuous energy cost savings. Hence, efforts should be made to develop more sophisticated algorithms for building control systems that minimize the customer's utility bill based on price and reliability information from the electricity grid.

  7. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution.
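
    For reference, the self-consistent WHAM equations being iterated are the standard ones; with S simulations, n_i(x) the histogram count of simulation i in bin x, N_i its total samples, and bias potentials U_i, they read

    $$
    P(x) = \frac{\sum_{i=1}^{S} n_i(x)}{\sum_{j=1}^{S} N_j \, e^{\,f_j - \beta U_j(x)}},
    \qquad
    e^{-f_i} = \sum_{x} P(x) \, e^{-\beta U_i(x)},
    $$

    a fixed-point iteration whose convergence the paper accelerates by treating successive iterates with direct inversion in the iterative subspace (DIIS).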

  8. Diffusion in liquid Germanium using ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kulkarni, R. V.; Aulbur, W. G.; Stroud, D.

    1996-03-01

    We describe the results of calculations of the self-diffusion constant of liquid Ge over a range of temperatures. The calculations are carried out using an ab initio molecular dynamics scheme which combines an LDA model for the electronic structure with the Bachelet-Hamann-Schlüter norm-conserving pseudopotentials^1. The energies associated with electronic degrees of freedom are minimized using the Williams-Soler algorithm, and ionic moves are carried out using the Verlet algorithm. We use an energy cutoff of 10 Ry, which is sufficient to give results for the lattice constant and bulk modulus of crystalline Ge to within 1% and 12% of experiment. The program output includes not only the self-diffusion constant but also the structure factor, electronic density of states, and low-frequency electrical conductivity. We will compare our results with other ab initio and semi-empirical calculations, and discuss extension to impurity diffusion. ^1 We use the ab initio molecular dynamics code fhi94md, developed at the Fritz-Haber Institute, Berlin. ^2 Work supported by NASA, Grant NAG3-1437.

  9. Advanced Energy Storage Management in Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Ceylan, Oguzhan; Xiao, Bailu

    2016-01-01

    With increasing penetration of distributed generation (DG) in distribution networks (DN), the secure and optimal operation of DNs has become an important concern. In this paper, an iterative mixed-integer quadratically constrained quadratic programming model is developed to optimize the operation of a three-phase unbalanced distribution system with high penetration of photovoltaic (PV) panels, DG and energy storage (ES). The proposed model minimizes not only the operating cost, including fuel cost and purchasing cost, but also voltage deviations and power loss. The optimization model is based on linearized sensitivity coefficients between state variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DG and ES). To avoid slow convergence close to the optimum, a golden search method is introduced to control the step size and accelerate the convergence. The proposed algorithm is demonstrated on modified IEEE 13-node test feeders with multiple PV panels, DG and ES. Numerical simulation results validate the proposed algorithm. Various scenarios of system configuration are studied and the critical findings are presented.
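
    The golden search idea can be sketched in a few lines; this is a generic golden-section line search for the step size, not the authors' implementation, and the bracketing interval and tolerance are assumptions.

    ```python
    import math

    INVPHI = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618

    def golden_section_search(f, a, b, tol=1e-6):
        # Minimize a unimodal function f on [a, b] by shrinking the bracket
        # by the golden ratio each iteration.
        c, d = b - INVPHI * (b - a), a + INVPHI * (b - a)
        while abs(b - a) > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - INVPHI * (b - a)
            else:
                a, c = c, d
                d = a + INVPHI * (b - a)
        return (a + b) / 2

    # Example: pick the step size minimizing a 1-D model of the objective.
    step = golden_section_search(lambda t: (t - 0.3) ** 2, 0.0, 1.0)
    ```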

  10. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands of convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact, wearable, low-power HCI device for gesture recognition. The system combines multi-modal sensing signals, a vision signal and a motion signal, using a depth camera and a motion sensor. After tight integration, the device (40 mm × 30 mm) is compact and portable. The system is built on a layered module framework, which supports real-time collection (60 fps), processing and transmission via synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the algorithm with the motion sensor. To test the device's function and performance, a gesture recognition algorithm was run on the system. The results show that overall energy consumption can be as low as 0.5 W.

  11. A supervoxel-based segmentation method for prostate MR images

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation, in which the segmentation problem is considered as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature, while the geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate, and a 3D level set is then used to obtain a smooth surface from the graph cut output. The performance of the proposed segmentation algorithm was evaluated with respect to manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9%+/-3.2%. The segmentation method can be used not only for the prostate but also for other organs.
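
    The energy function described has the standard data-plus-smoothness form; writing l_s for the binary (prostate/background) label of supervoxel s (the weighting λ is an assumption here), it can be stated as

    $$
    E(L) = \sum_{s} D_s(l_s) + \lambda \sum_{(s,t) \in \mathcal{N}} V_{st}(l_s, l_t),
    $$

    where D_s scores the shape-feature likelihood of the label and V_st encodes the geometric relationship between neighboring supervoxels; for submodular V_st this energy is minimized exactly by an s-t minimum cut.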

  12. Pharmit: interactive exploration of chemical space.

    PubMed

    Sunseri, Jocelyn; Koes, David Ryan

    2016-07-08

    Pharmit (http://pharmit.csb.pitt.edu) provides an online, interactive environment for the virtual screening of large compound databases using pharmacophores, molecular shape and energy minimization. Users can import, create and edit virtual screening queries in an interactive browser-based interface. Queries are specified in terms of a pharmacophore, a spatial arrangement of the essential features of an interaction, and molecular shape. Search results can be further ranked and filtered using energy minimization. In addition to a number of pre-built databases of popular compound libraries, users may submit their own compound libraries for screening. Pharmit uses state-of-the-art sub-linear algorithms to provide interactive screening of millions of compounds. Queries typically take a few seconds to a few minutes depending on their complexity. This allows users to iteratively refine their search during a single session. The easy access to large chemical datasets provided by Pharmit simplifies and accelerates structure-based drug design. Pharmit is available under a dual BSD/GPL open-source license. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Minimal spanning tree algorithm for γ-ray source detection in sparse photon images: cluster parameters and selection strategies

    DOE PAGES

    Campana, R.; Bernieri, E.; Massaro, E.; ...

    2013-05-22

    The minimal spanning tree (MST) algorithm is a graph-theoretical cluster-finding method. We previously applied it to γ-ray bidimensional images, showing that it is quite sensitive in finding faint sources. Possible sources are associated with regions where the photon arrival directions clusterize. MST selects clusters starting from a particular "tree" connecting all the points of the image, performing a cut based on the angular distance between photons and retaining groups with a number of events higher than a given threshold. In this paper, we show how a further filtering, based on parameters linked to the cluster properties, can be applied to reduce spurious detections. We find that the most efficient parameter for this secondary selection is the magnitude M of a cluster, defined as the product of its number of events and its clustering degree. We test the sensitivity of the method by means of simulated and real Fermi Large Area Telescope (LAT) fields. Our results show that √M is strongly correlated with other statistical significance parameters, derived from a wavelet-based algorithm and maximum likelihood (ML) analysis, and that it can be used as a good estimator of the statistical significance of MST detections. Finally, we apply the method to a 2-year LAT image at energies higher than 3 GeV, and we show the presence of new clusters, likely associated with BL Lac objects.

  14. Minimal-scan filtered backpropagation algorithms for diffraction tomography.

    PubMed

    Pan, X; Anastasio, M A

    1999-12-01

    The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.

  15. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Tang, Dunbing; Dai, Min

    2015-09-01

    Traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes, while environmentally friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper presents an approach to modify a given schedule generated by a production planning and scheduling system on a job shop floor where machine tools can work at different cutting speeds. The approach adjusts the cutting speeds of the operations, while keeping the original assignment and processing sequence of the operations of each job fixed, in order to obtain energy savings. First, the proposed approach, based on a mixed-integer programming model, changes the total idle time of the given schedule to minimize energy consumption on the job shop floor while retaining the optimal value of the scheduling objective, the makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, as the problem is strongly NP-hard. Finally, the effectiveness of the approach is demonstrated on small- and large-size instances. The experimental results show that the approach can save 5%-10% of the average energy consumption while retaining the optimal makespan in small-size instances, with an average maximum energy saving ratio of up to 13%. It can save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting a near-optimal makespan in large-size instances. The proposed research provides an interesting starting point for exploring energy-aware schedule optimization in traditional production planning and scheduling problems.

  16. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
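
    For reference, Wahba's loss function mentioned here is the standard one: given unit vector observations b_i in the body frame, reference directions r_i and non-negative weights a_i, the optimal attitude matrix minimizes

    $$
    L(A) = \tfrac{1}{2} \sum_{i=1}^{n} a_i \,\bigl\lVert \mathbf{b}_i - A\,\mathbf{r}_i \bigr\rVert^2, \qquad A^{\mathsf{T}} A = I, \; \det A = 1 .
    $$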

  17. Extended scene Shack-Hartmann wavefront sensor algorithm: minimization of scene content dependent shift estimation errors.

    PubMed

    Sidick, Erkin

    2013-09-10

    An adaptive periodic-correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the subimages in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper, we assess the amount of that error and propose a method to minimize it.

  18. Extended Scene SH Wavefront Sensor Algorithm: Minimization of Scene Content Dependent Shift Estimation Errors

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    The Adaptive Periodic-Correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the sub-images in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift-estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper we assess the amount of that error and propose a method to minimize it.

  19. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images

    PubMed Central

    Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae

    2016-01-01

    This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred through the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and accurate under the DSC (Dice Similarity Coefficient) measurements. PMID:26904151

  20. Sensor Transmission Power Schedule for Smart Grids

    NASA Astrophysics Data System (ADS)

    Gao, C.; Huang, Y. H.; Li, J.; Liu, X. D.

    2017-11-01

    The smart grid has attracted much attention owing to the requirements of new-generation renewable energy. Real-time state estimation, with the help of phasor measurement units, plays an important role in keeping the smart grid stable and efficient. However, related work does not consider the limitations of the communication channel. Considering the limited on-board batteries of the wireless sensors commonly used in the smart grid, a transmission power schedule is designed in this paper that minimizes energy consumption under a constraint on EKF filtering performance. Based on event-triggered estimation theory, a filtering algorithm is also provided to utilize the information contained in the power schedule. Finally, its feasibility and performance are demonstrated using the standard IEEE 39-bus system with phasor measurement units (PMUs).

  1. Cross-platform validation and analysis environment for particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  2. Dosimetric Evaluation of Metal Artefact Reduction using Metal Artefact Reduction (MAR) Algorithm and Dual-energy Computed Tomography (CT) Method

    NASA Astrophysics Data System (ADS)

    Laguda, Edcer Jerecho

    Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High density materials, such as metal, tend to erroneously increase the HU values around it due to reconstruction software limitations. This problem of increased HU values due to metal presence is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts on CT images. This study uses Gemstone Spectral Imaging (GSI)-based MAR algorithm, projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, using the Volumetric Modulated Arc therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used projection-based MAR Algorithm and the Dual-Energy Method. Calculated Doses (Mean, Minimum, and Maximum Doses) to the planning treatment volume (PTV) were compared and homogeneity index (HI) calculated. Results: (1) Without the GSI-based MAR application, a percent error between mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error was decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between with and without using the GSI-based MAR algorithm. (2) A range of 0.1-3.2% difference was observed for the maximum dose values, 1.5-10.4% for minimum dose difference, and 1.4-1.7% difference on the mean doses. Homogeneity indexes (HI) ranging from 0.068-0.065 for dual-energy method and 0.063-0.141 with projection-based MAR algorithm were also calculated. Conclusion: (1) Percent error without using the GSI-based MAR algorithm may deviate as high as 5.7%. This error invalidates the goal of Radiation Therapy to provide a more precise treatment. 
Thus, the GSI-based MAR algorithm is desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null values. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than images reconstructed either with or without the GE MAR algorithm.

  3. Energy performance and savings potentials with skylights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arasteh, D.; Johnson, R.; Selkowitz, S.

    1984-12-01

    This study systematically explores the energy effects of skylight systems in a prototypical office building module and examines the savings from daylighting. For specific climates, roof/skylight characteristics are identified that minimize total energy or peak electrical demand. Simplified techniques for energy performance calculation are also presented based on a multiple regression analysis of our data base so that one may easily evaluate daylighting's effects on total and component energy loads and electrical peaks. This provides additional insights into the influence of skylight parameters on energy consumption and electrical peaks. We use the DOE-2.1B energy analysis program with newly incorporated daylighting algorithms to determine hourly, monthly, and annual impacts of daylighting strategies on electrical lighting consumption, cooling, heating, fan power, peak electrical demands, and total energy use. A data base of more than 2000 parametric simulations for 14 US climates has been generated. Parameters varied include skylight-to-roof ratio, shading coefficient, visible transmittance, skylight well light loss, electric lighting power density, roof heat transfer coefficient, and electric lighting control type. 14 references, 13 figures, 4 tables.

  4. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    NASA Astrophysics Data System (ADS)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is suited for nonlinear optimization problems. The parallel computation of the algorithm exploits iterative linear algebra methods for the main linear algebra computations in the algorithm. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy efficient building control. Buildings represent 40% of energy consumption in the United States. Thus, it is significant to improve the energy efficiency of buildings. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators as well as the desired operating constraints (thermal comfort of the occupants), and heat load on the system. In this thesis, we describe the control systems of forced air building systems in practice. We discuss the "Trim and Respond" algorithm which is a distributed control algorithm that is used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades. 
The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. This contrasts to previous work on MPC for ramp metering where such guarantees are absent. We apply a smoothing method to the hybrid model predictive controller and apply the inexact interior point method to this nonlinear non-convex ramp metering problem.

  5. Analysis of counting data: Development of the SATLAS Python package

    NASA Astrophysics Data System (ADS)

    Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.

    2018-01-01

    For the analysis of low-statistics counting experiments, a traditional nonlinear least squares minimization routine may not always provide correct parameter and uncertainty estimates, due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms suited for analyzing low- as well as high-statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine structure data of 203Fr gathered by the CRIS experiment at ISOLDE, CERN.

  6. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
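
    The destruction-construction loop of an iterated greedy method can be sketched as below; this simplifies the discrete DBAP to a single service sequence, and the `cost` callable (total service time of a sequence), the destruction size and the acceptance rule are assumptions for illustration rather than the paper's exact design.

    ```python
    import random

    def iterated_greedy(ships, cost, destroy_k=2, iters=1000):
        # Destruction: remove a few ships; construction: greedily re-insert each
        # at the position minimizing total service time; accept improvements.
        best = list(ships)
        for _ in range(iters):
            candidate = list(best)
            removed = random.sample(candidate, destroy_k)
            for ship in removed:
                candidate.remove(ship)
            for ship in removed:
                best_pos = min(range(len(candidate) + 1),
                               key=lambda p: cost(candidate[:p] + [ship] + candidate[p:]))
                candidate.insert(best_pos, ship)
            if cost(candidate) < cost(best):
                best = candidate
        return best
    ```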

  7. Research on Collection System Optimal Design of Wind Farm with Obstacles

    NASA Astrophysics Data System (ADS)

    Huang, W.; Yan, B. Y.; Tan, R. S.; Liu, L. F.

    2017-05-01

    In the optimal design of an offshore wind farm collection system, the factors to consider include not only the reasonable configuration of cables and switches but also the influence of obstacles on the topology design. This paper presents a concrete topology optimization algorithm that accounts for obstacles. The minimal-area bounding rectangle of each obstacle is obtained using the minimal-area bounding box method. An optimization algorithm combining the advantages of Dijkstra's algorithm and Prim's algorithm is then used to obtain an obstacle-avoiding path plan. Finally, a fuzzy comprehensive evaluation model based on the analytic hierarchy process is constructed to compare the performance of the different topologies. Case studies demonstrate the feasibility of the proposed algorithm and model.

  8. Hamiltonian lattice field theory: Computer calculations using variational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zako, Robert L.

    1991-12-03

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.

  9. A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems. Ph.D. Thesis - Massachusetts Inst. of Technology, Aug. 1991

    NASA Technical Reports Server (NTRS)

    Nachtigal, Noel M.

    1991-01-01

    The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.

  10. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  11. Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Dong, Bin; Wen, Zaiwen

    2017-02-01

    In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm that alternates between minimization with respect to the aperture shapes and minimization with respect to the beam intensities. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotone line search. We further improve the proposed algorithm by incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.

  12. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use the NP-complete set partition problem as an example and obtain an asymptotic expression for the minimal gap separating the ground and excited states of the system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.

  13. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (the border length minimization problem) and reducing the complexity of masks (the mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% compared with a standard array design, under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement that minimizes the border length, and introduce a new idea of threading that significantly reduces the border length compared with standard designs.

  14. An optimization of the FPGA trigger based on the artificial neural network for a detection of neutrino-origin showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szadkowski, Zbigniew; Glas, Dariusz; Pytel, Krzysztof

    Observations of ultra-high-energy neutrinos have become a priority in experimental astroparticle physics. Up to now, the Pierre Auger Observatory has not found any candidate neutrino event. This imposes competitive limits on the diffuse flux of ultra-high-energy neutrinos in the EeV range and above. The very low rate of events potentially generated by neutrinos is a significant challenge for a detection technique and requires both sophisticated algorithms and high-resolution hardware. A trigger based on an artificial neural network was implemented into the Cyclone V E FPGA 5CEFA9F31I7. The prototype Front-End boards for Auger-Beyond-2015 with the Cyclone V E can test the neural network algorithm in real pampas conditions in 2015. Showers for muon and tau neutrino initiating particles at various altitudes, angles and energies were simulated in the CORSIKA and Offline platforms, giving patterns of ADC traces in the Auger water Cherenkov detectors. The 3-layer 12-10-1 neural network was trained in MATLAB on the simulated ADC traces using the Levenberg-Marquardt algorithm. Results show that the probability of generating ADC traces is very low due to the small neutrino cross-section. Nevertheless, ADC traces for 1-10 EeV showers, if they occur, are relatively short and can be analyzed by a 16-point input algorithm. For the 100 EeV range, traces are much longer, but with significantly higher amplitudes, and can be detected by standard threshold algorithms. We optimized the coefficients from MATLAB to obtain a maximal range of potentially registered events and to minimize calculation errors in fixed-point FPGA processing. The currently used Front-End boards, based on no-longer-produced ACEX PLDs and obsolete Cyclone FPGAs, allow an implementation of relatively simple threshold trigger algorithms. The new sophisticated trigger implemented in Cyclone V E FPGAs, with a large amount of DSP blocks and embedded memory running with 120-160 MHz sampling, may help discover neutrino events in the Pierre Auger Observatory. (authors)

  15. Guidance and control of swarms of spacecraft

    NASA Astrophysics Data System (ADS)

    Morgan, Daniel James

    There has been considerable interest in formation flying spacecraft due to their potential to perform certain tasks at a cheaper cost than monolithic spacecraft. Formation flying enables the use of smaller, cheaper spacecraft that distribute the risk of the mission. Recently, the ideas of formation flying have been extended to spacecraft swarms made up of hundreds to thousands of 100-gram-class spacecraft known as femtosatellites. The large number of spacecraft and limited capabilities of each individual spacecraft present a significant challenge in guidance, navigation, and control. This dissertation deals with the guidance and control algorithms required to enable the flight of spacecraft swarms. The algorithms developed in this dissertation are focused on achieving two main goals: swarm keeping and swarm reconfiguration. The objectives of swarm keeping are to maintain bounded relative distances between spacecraft, prevent collisions between spacecraft, and minimize the propellant used by each spacecraft. Swarm reconfiguration requires the transfer of the swarm to a specific shape. Like with swarm keeping, minimizing the propellant used and preventing collisions are the main objectives. Additionally, the algorithms required for swarm keeping and swarm reconfiguration should be decentralized with respect to communication and computation so that they can be implemented on femtosats, which have limited hardware capabilities. The algorithms developed in this dissertation are concerned with swarms located in low Earth orbit. In these orbits, Earth oblateness and atmospheric drag have a significant effect on the relative motion of the swarm. The complicated dynamic environment of low Earth orbits further complicates the swarm-keeping and swarm-reconfiguration problems. To better develop and test these algorithms, a nonlinear, relative dynamic model with J2 and drag perturbations is developed. This model is used throughout this dissertation to validate the algorithms using computer simulations. The swarm-keeping problem can be solved by placing the spacecraft on J2-invariant relative orbits, which prevent collisions and minimize the drift of the swarm over hundreds of orbits using a single burn. These orbits are achieved by energy matching the spacecraft to the reference orbit. Additionally, these conditions can be repeatedly applied to minimize the drift of the swarm when atmospheric drag has a large effect (orbits with an altitude under 500 km). The swarm reconfiguration is achieved using two steps: trajectory optimization and assignment. The trajectory optimization problem can be written as a nonlinear, optimal control problem. This optimal control problem is discretized, decoupled, and convexified so that the individual femtosats can efficiently solve the optimization. Sequential convex programming is used to generate the control sequences and trajectories required to safely and efficiently transfer a spacecraft from one position to another. The sequence of trajectories is shown to converge to a Karush-Kuhn-Tucker point of the nonconvex problem. In the case where many of the spacecraft are interchangeable, a variable-swarm, distributed auction algorithm is used to determine the assignment of spacecraft to target positions. This auction algorithm requires only local communication and all of the bidding parameters are stored locally. The assignment generated using this auction algorithm is shown to be near optimal and to converge in a finite number of bids. 
Additionally, the bidding process is used to modify the number of targets used in the assignment so that the reconfiguration can be achieved even when there is a disconnected communication network or a significant loss of agents. Once the assignment is achieved, the trajectory optimization can be run using the terminal positions determined by the auction algorithm. To implement these algorithms in real time, a model predictive control formulation is used. Model predictive control uses a finite horizon to apply the most up-to-date control sequence while simultaneously calculating a new assignment and trajectory based on updated state information. Using a finite horizon means that collisions need only be considered between spacecraft that are near each other at the current time. This relaxes the all-to-all communication assumption so that only neighboring agents need to communicate. Experimental validation is performed using a formation flying testbed. The swarm-reconfiguration algorithms are tested using multiple quadrotors. Experiments have been performed using sequential convex programming for offline trajectory planning, model predictive control and sequential convex programming for real-time trajectory generation, and the variable-swarm, distributed auction algorithm for optimal assignment. These experiments show that the swarm-reconfiguration algorithms can be implemented in real time using actual hardware. In general, this dissertation presents guidance and control algorithms that maintain and reconfigure swarms of spacecraft while maintaining the shape of the swarm, preventing collisions between the spacecraft, and minimizing the amount of propellant used.
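
    The variable-swarm auction itself is specific to the dissertation, but the flavor of auction-based assignment is easy to sketch. Below is a minimal, centralized simulation of a Bertsekas-style auction in Python; the cost matrix, the bid increment eps, and the single-owner bookkeeping are illustrative assumptions, not the dissertation's algorithm.

        import numpy as np

        def auction_assignment(cost, eps=1e-3):
            """Assign agents to targets by iterative bidding (Bertsekas-style sketch).

            cost[i, j] is agent i's cost to reach target j (equal numbers of agents
            and targets assumed). Each unassigned agent bids for its best target,
            raising that target's price; evicting the previous owner stands in for
            the message exchange of a truly distributed auction.
            """
            n = cost.shape[0]
            prices = np.zeros(n)
            owner = -np.ones(n, dtype=int)          # owner[j] = agent holding target j
            unassigned = list(range(n))
            while unassigned:
                i = unassigned.pop()
                values = -cost[i] - prices          # net value of each target to agent i
                j = int(np.argmax(values))
                second, best = np.partition(values, -2)[-2:]
                prices[j] += (best - second) + eps  # classic auction bid increment
                if owner[j] >= 0:
                    unassigned.append(owner[j])     # evict the previous owner
                owner[j] = i
            return owner, prices

        owner, _ = auction_assignment(np.random.rand(6, 6))
        print(owner)                                # target -> agent assignment

    With an eps-scaled bid increment, the resulting assignment is within n*eps of the optimal cost and terminates after finitely many bids, which mirrors the near-optimality and finite-convergence claims quoted in the abstract.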

  16. Control Allocation with Load Balancing

    NASA Technical Reports Server (NTRS)

    Bodson, Marc; Frost, Susan A.

    2009-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. In illustrative examples, the solution using the l(infinity) norm also results in better robustness to failures and lower sensitivity to nonlinearities.
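
    As a concrete illustration of the linear-program conversion, the sketch below solves a tiny min-max allocation with SciPy's linprog (the paper uses a simplex algorithm; the effectiveness matrix B and desired moment here are hypothetical). The trick is to introduce an auxiliary variable t bounding every |u_i| and minimize t.

        import numpy as np
        from scipy.optimize import linprog

        def minmax_allocation(B, ad):
            """Solve min max_i |u_i| subject to B @ u = ad, as a linear program."""
            k, m = B.shape
            c = np.r_[np.zeros(m), 1.0]                       # objective: minimize t
            A_ub = np.block([[ np.eye(m), -np.ones((m, 1))],  #  u_i - t <= 0
                             [-np.eye(m), -np.ones((m, 1))]]) # -u_i - t <= 0
            b_ub = np.zeros(2 * m)
            A_eq = np.hstack([B, np.zeros((k, 1))])           # B u = ad (exact tracking)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=ad,
                          bounds=[(None, None)] * m + [(0, None)])
            return res.x[:m]

        # three redundant actuators producing one moment: min-max spreads the load
        print(minmax_allocation(np.array([[1.0, 1.0, 1.0]]), np.array([3.0])))

    For this example the min-max solution is u = (1, 1, 1) rather than loading a single actuator with 3, which is exactly the load-balancing behavior described above.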

  17. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  18. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technology for capturing panoramic (360-degree) three-dimensional information in a real environment has many applications in fields such as virtual and mixed reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed from a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired by the camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained by the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  19. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex, but the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and use inherent algebraic structures in the objective functions to rewrite them into quartic forms, and in the case of WISL minimization, to derive an additional alternative quartic form which allows applying the quartic-quadratic transformation. Our algorithms are applicable to large-scale unimodular waveform design problems as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties compared to their counterparts.
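
    For reference, the ISL itself is cheap to evaluate with FFTs, which is also what makes the majorization-minimization iterations fast. The following sketch computes the ISL of a unimodular code (lag conventions vary between papers; here both positive and negative lags are counted); the random-phase code stands in for the starting point such algorithms would then iteratively improve.

        import numpy as np

        def isl(x):
            """Integrated sidelobe level of code x via FFT-based autocorrelation."""
            n = len(x)
            f = np.fft.fft(x, 2 * n)                  # zero-padded length-2n FFT
            r = np.fft.ifft(np.abs(f) ** 2)[:n]       # aperiodic lags r_0 .. r_{n-1}
            return 2.0 * float(np.sum(np.abs(r[1:]) ** 2))   # lags +-1 .. +-(n-1)

        n = 64
        x = np.exp(1j * 2 * np.pi * np.random.rand(n))  # random unimodular code
        print(isl(x))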

  20. Low-power wearable respiratory sound sensing.

    PubMed

    Oletic, Dinko; Arsenali, Bruno; Bilas, Vedran

    2014-04-09

    Building upon the findings from the field of automated recognition of respiratory sound patterns, we propose a wearable wireless sensor implementing on-board respiratory sound acquisition and classification, to enable continuous monitoring of symptoms, such as asthmatic wheezing. Low-power consumption of such a sensor is required in order to achieve long autonomy. Considering that the power consumption of its radio is kept minimal if transmitting only upon (rare) occurrences of wheezing, we focus on optimizing the power consumption of the digital signal processor (DSP). Based on a comprehensive review of asthmatic wheeze detection algorithms, we analyze the computational complexity of common features drawn from short-time Fourier transform (STFT) and decision tree classification. Four algorithms were implemented on a low-power TMS320C5505 DSP. Their classification accuracies were evaluated on a dataset of prerecorded respiratory sounds in two operating scenarios of different detection fidelities. The execution times of all algorithms were measured. The best classification accuracy of over 92%, while occupying only 2.6% of the DSP's processing time, is obtained for the algorithm featuring the time-frequency tracking of shapes of crests originating from wheezing, with spectral features modeled using energy.
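
    A toy version of the STFT-plus-decision-tree pipeline is sketched below; the synthetic "wheeze" (a noisy tone), the broadband "normal" sound, and the two spectral features are stand-ins for the paper's recorded dataset and feature set, chosen only to show the flow from short-time spectra to a tree classifier.

        import numpy as np
        from scipy.signal import stft
        from sklearn.tree import DecisionTreeClassifier

        def features(sound, fs=4000):
            """Per-recording features from the STFT power spectrogram."""
            _, _, Z = stft(sound, fs=fs, nperseg=256)
            p = np.abs(Z) ** 2
            tonality = p.max(axis=0) / (p.sum(axis=0) + 1e-12)  # peaky frames ~ wheeze
            centroid = (np.arange(p.shape[0])[:, None] * p).sum(0) / (p.sum(0) + 1e-12)
            return [tonality.mean(), tonality.max(), centroid.mean()]

        rng = np.random.default_rng(0)
        t = np.arange(4000) / 4000.0
        wheeze = lambda: np.sin(2 * np.pi * 400 * t) + 0.5 * rng.standard_normal(4000)
        normal = lambda: rng.standard_normal(4000)

        X = [features(wheeze()) for _ in range(50)] + [features(normal()) for _ in range(50)]
        y = [1] * 50 + [0] * 50
        clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(clf.score(X, y))                              # training accuracy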

  1. Interferometric synthetic aperture radar phase unwrapping based on sparse Markov random fields by graph cuts

    NASA Astrophysics Data System (ADS)

    Zhou, Lifan; Chai, Dengfeng; Xia, Yu; Ma, Peifeng; Lin, Hui

    2018-01-01

    Phase unwrapping (PU) is one of the key processes in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) data. It is known that two-dimensional (2-D) PU problems can be formulated as maximum a posteriori estimation of Markov random fields (MRFs). However, because the traditional MRF algorithm is usually defined on a rectangular grid, it fails easily if large parts of the wrapped data are dominated by noise caused by large low-coherence areas or rapid topography variation. A PU solution based on sparse MRFs is presented to extend the traditional MRF algorithm to sparse data, which allows the unwrapping of InSAR data dominated by high phase noise. To speed up the graph cuts algorithm for the sparse MRF, we designed dual elementary graphs and merged them to obtain the Delaunay triangle graph, which is used to minimize the energy function efficiently. Experiments on simulated and real data, including comparisons with existing algorithms, confirm the effectiveness of the proposed MRF approach, which suffers less from decorrelation effects caused by large low-coherence areas or rapid topography variation.
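
    The Delaunay construction that underlies the sparse-MRF graph is available off the shelf; the sketch below (with random points standing in for the sparse high-coherence pixels) extracts the triangulation edges on which the MRF smoothness terms would be placed. The graph-cut energy minimization itself is too long for a few lines and is omitted.

        import numpy as np
        from scipy.spatial import Delaunay

        pts = np.random.rand(200, 2) * 512        # stand-in: sparse reliable pixels
        tri = Delaunay(pts)

        edges = set()                              # unique neighbor pairs of the MRF
        for a, b, c in tri.simplices:
            edges |= {tuple(sorted(e)) for e in [(a, b), (b, c), (a, c)]}
        print(len(edges), "edges over", len(pts), "sparse points")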

  2. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  3. Re-scheduling as a tool for the power management on board a spacecraft

    NASA Technical Reports Server (NTRS)

    Albasheer, Omar; Momoh, James A.

    1995-01-01

    The scheduling of events on board a spacecraft is based on forecast energy levels. The real-time values of energy may not coincide with the forecast values; consequently, a dynamic revision of the power allocation is needed. Re-scheduling is also needed for other reasons, such as the addition of a new event that must be scheduled or the failure of an event due to various contingencies. This capability is very important to the survivability of the spacecraft. In this presentation, a re-scheduling tool will be presented as part of an overall scheme for power management on board a spacecraft from the energy-allocation point of view. The overall scheme is based on the optimal use of the energy available on board, using expert systems combined with linear optimization techniques. The system will be able to schedule the maximum number of events while utilizing most of the available energy. The outcome is more events scheduled to share the operating cost of the spacecraft. The system will also be able to re-schedule in case of a contingency with minimal time and minimal disturbance of the original schedule. The end product is a fully integrated planning system capable of producing the right decisions in a short time with less human error. The overall system will be presented with the re-scheduling algorithm discussed in detail, and tests and results will then be presented for validation.

  4. Global optimization of additive potential energy functions: Predicting binary Lennard-Jones clusters

    NASA Astrophysics Data System (ADS)

    Kolossváry, István; Bowers, Kevin J.

    2010-11-01

    We present a method for minimizing additive potential-energy functions. Our hidden-force algorithm can be described as an intricate multiplayer tug-of-war game in which teams try to break an impasse by randomly assigning some players to drop their ropes while the others keep tugging until a partial impasse is reached, and then instructing the dropouts to resume tugging until all teams come to a new overall impasse. Utilizing our algorithm in a non-Markovian parallel Monte Carlo search, we found 17 new putative global minima for binary Lennard-Jones clusters in the size range of 90-100 particles. The method is efficient enough that an unbiased search was possible; no potential-energy surface symmetries were exploited. All new minima are comprised of three nested polyicosahedral or polytetrahedral shells when viewed as a nested set of Connolly surfaces (though the shell structure has previously gone unscrutinized, known minima are often qualitatively similar). Unlike known minima, in which the outer and inner shells are comprised of the larger and smaller atoms, respectively, in 13 of the new minima the atoms are not as clearly separated by size. Furthermore, while some known minima have inner shells stabilized by larger atoms, four of the new minima have outer shells stabilized by smaller atoms.
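
    The hidden-force global search is the authors' contribution, but the objective it minimizes is easy to write down. The sketch below evaluates a binary Lennard-Jones energy and runs a single local quench with BFGS; the 8-atom cluster and the sigma table are illustrative, and a local quench is only one move of a global search, not the hidden-force algorithm itself.

        import numpy as np
        from scipy.optimize import minimize

        def blj_energy(x, types, sigma, eps=1.0):
            """Total binary Lennard-Jones energy; sigma[a][b] is the AB length scale."""
            pos = x.reshape(-1, 3)
            e = 0.0
            for i in range(len(pos)):
                for j in range(i + 1, len(pos)):
                    r = np.linalg.norm(pos[i] - pos[j])
                    s = sigma[types[i]][types[j]]
                    e += 4.0 * eps * ((s / r) ** 12 - (s / r) ** 6)
            return e

        rng = np.random.default_rng(1)
        types = [0, 0, 0, 0, 1, 1, 1, 1]                 # two species, 4 atoms each
        sigma = [[1.0, 0.9], [0.9, 0.8]]                 # illustrative pair scales
        x0 = 1.5 * rng.standard_normal(3 * len(types))   # random starting geometry
        res = minimize(blj_energy, x0, args=(types, sigma), method="BFGS")
        print(res.fun)                                   # locally minimized energy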

  5. CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications

    PubMed Central

    2012-01-01

    Background Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular methods of free energy minimization for RNA secondary structure prediction. Thus far, few studies have been reported on accelerating the Zuker algorithm on general-purpose processors or on extra accelerators such as Field Programmable Gate Arrays (FPGA) and Graphics Processing Units (GPU). To the best of our knowledge, no implementation combines both CPU and extra accelerators, such as GPUs, to accelerate Zuker algorithm applications. Results In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between CPU and GPU for parallel cooperative execution. Performance differences between the CPU and the GPU are considered in the task-allocation scheme to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is optimally implemented with methods specialized for the CPU and GPU architectures. Conclusions A speedup of 15.93× over an optimized multi-core SIMD CPU implementation and a performance advantage of 16% over an optimized GPU implementation are shown in the experimental results. More than 14% of the sequences are executed on the CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is proven to be promising and can be applied to other bioinformatics applications. PMID:22369626
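
    Zuker's full thermodynamic recursion is too long to reproduce here, but the dynamic-programming structure that makes it a target for CPU/GPU parallelization can be shown with the much simpler Nussinov base-pair maximization (a deliberate simplification, not Zuker's energy model): every cell (i, j) depends only on cells of shorter spans, so each anti-diagonal can be filled in parallel.

        def nussinov(seq, min_loop=3):
            """Maximize base pairs; a simplified stand-in for energy minimization."""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                     ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):          # cells on one anti-diagonal
                for i in range(n - span):                # ... are independent
                    j = i + span
                    best = dp[i + 1][j]                  # i left unpaired
                    if (seq[i], seq[j]) in pairs:
                        best = max(best, dp[i + 1][j - 1] + 1)   # pair i with j
                    for k in range(i + 1, j):            # bifurcation into two parts
                        best = max(best, dp[i][k] + dp[k + 1][j])
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov("GGGAAAUCC"))                     # 3 pairs for this toy hairpin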

  6. Wavelength routing beyond the standard graph coloring approach

    NASA Astrophysics Data System (ADS)

    Blankenhorn, Thomas

    2004-04-01

    When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.

  7. User's Manual for the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Cheatwood, F. McNeil

    1996-01-01

    This user's manual provides detailed instructions for the installation and application of version 4.1 of the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), which provides simulation of flow fields in thermochemical nonequilibrium around vehicles traveling at hypersonic velocities through the atmosphere. Earlier versions of LAURA were predominantly research codes, and they had minimal (or no) documentation. This manual describes UNIX-based utilities for customizing the code for special applications that also minimize system resource requirements. The algorithm is reviewed, and the various program options are related to specific equations and variables in the theoretical development.

  8. Multiobjective evolutionary algorithm with many tables for purely ab initio protein structure prediction.

    PubMed

    Brasil, Christiane Regina Soares; Delbem, Alexandre Claudio Botazzo; da Silva, Fernando Luís Barroso

    2013-07-30

    This article focuses on the development of an approach for ab initio protein structure prediction (PSP) without using any earlier knowledge from similar protein structures, such as fragment-based statistics or inference of secondary structures. Such an approach is called purely ab initio prediction. The article shows that well-designed multiobjective evolutionary algorithms can predict relevant protein structures in a purely ab initio way. One challenge for purely ab initio PSP is the prediction of structures with β-sheets. To work with such proteins, this research has also developed procedures to efficiently estimate hydrogen bond and solvation contribution energies. Considering van der Waals, electrostatic, hydrogen bond, and solvation contribution energies, PSP is a problem with four energetic terms to be minimized. Each interaction energy term can be considered an objective of an optimization method. Combinatorial problems with four objectives have been considered too complex for the available multiobjective optimization (MOO) methods. The proposed approach, called "multiobjective evolutionary algorithm with many tables" (MEAMT), can efficiently deal with four objectives through the combination thereof, performing a more adequate sampling of the objective space. Therefore, this method can better map the promising regions in this space, predicting structures in a purely ab initio way. In other words, MEAMT is an efficient optimization method for MOO, which simultaneously explores the search space as well as the objective space. MEAMT can predict structures with one or two domains with RMSDs comparable to values obtained by recently developed ab initio methods (GAPFCG, I-PAES, and Quark) that use different levels of earlier knowledge. Copyright © 2013 Wiley Periodicals, Inc.

  9. An Approach to Quad Meshing Based On Cross Valued Maps and the Ginzburg-Landau Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viertel, Ryan; Osting, Braxton

    2017-08-01

    A generalization of vector fields, referred to as N-direction fields or cross fields when N=4, has been recently introduced and studied for geometry processing, with applications in quadrilateral (quad) meshing, texture mapping, and parameterization. We make the observation that cross field design for two-dimensional quad meshing is related to the well-known Ginzburg-Landau problem from mathematical physics. This identification yields a variety of theoretical tools for efficiently computing boundary-aligned quad meshes, with provable guarantees on the resulting mesh, for example, the number of mesh defects and bounds on the defect locations. The procedure for generating the quad mesh is to (i) find a complex-valued "representation" field that minimizes the Dirichlet energy subject to a boundary constraint, (ii) convert the representation field into a boundary-aligned, smooth cross field, (iii) use separatrices of the cross field to partition the domain into four-sided regions, and (iv) mesh each of these four-sided regions using standard techniques. Under certain assumptions on the geometry of the domain, we prove that this procedure can be used to produce a cross field whose separatrices partition the domain into four-sided regions. To solve the energy minimization problem for the representation field, we use an extension of the Merriman-Bence-Osher (MBO) threshold dynamics method, originally conceived as an algorithm to simulate motion by mean curvature, to minimize the Ginzburg-Landau energy for the optimal representation field. Lastly, we demonstrate the method on a variety of test domains.

  10. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  11. Nuclear Potential Clustering As a New Tool to Detect Patterns in High Dimensional Datasets

    NASA Astrophysics Data System (ADS)

    Tonkova, V.; Paulus, D.; Neeb, H.

    2013-02-01

    We present a new approach for the clustering of high dimensional data without prior assumptions about the structure of the underlying distribution. The proposed algorithm is based on a concept adapted from nuclear physics. To partition the data, we model the dynamic behaviour of nucleons interacting in an N-dimensional space. An adaptive nuclear potential, comprised of a short-range attractive (strong interaction) and a long-range repulsive term (Coulomb force) is assigned to each data point. By modelling the dynamics, nucleons that are densely distributed in space fuse to build nuclei (clusters) whereas single point clusters repel each other. The formation of clusters is completed when the system reaches the state of minimal potential energy. The data are then grouped according to the particles' final effective potential energy level. The performance of the algorithm is tested with several synthetic datasets showing that the proposed method can robustly identify clusters even when complex configurations are present. Furthermore, quantitative MRI data from 43 multiple sclerosis patients were analyzed, showing a reasonable splitting into subgroups according to the individual patients' disease grade. The good performance of the algorithm on such highly correlated non-spherical datasets, which are typical for MRI derived image features, shows that Nuclear Potential Clustering is a valuable tool for automated data analysis, not only in the MRI domain.

  12. Effects of tools inserted through snake-like surgical manipulators.

    PubMed

    Murphy, Ryan J; Otake, Yoshito; Wolfe, Kevin C; Taylor, Russell H; Armand, Mehran

    2014-01-01

    Snake-like manipulators with a large, open lumen can offer improved treatment alternatives for minimally- and less-invasive surgeries. In these procedures, surgeons use the manipulator to introduce and control flexible tools in the surgical environment. This paper describes a predictive algorithm for estimating manipulator configuration given tip position for nonconstant-curvature, cable-driven manipulators using energy minimization. During experimental bending of the manipulator with and without a tool inserted in its lumen, images were recorded from an overhead camera in conjunction with actuation cable tension and length. To investigate the accuracy, the estimated manipulator configuration from the model and the ground-truth configuration measured from the image were compared. Additional analysis focused on the response differences for the manipulator with and without a tool inserted through the lumen. Results indicate that the energy minimization model predicts manipulator configuration with an error of 0.24 ± 0.22 mm without tools in the lumen and 0.24 ± 0.19 mm with tools in the lumen (no significant difference, p = 0.81). Moreover, tools did not introduce noticeable perturbations in the manipulator trajectory; however, there was an increase in the force required to reach a given configuration. These results support the use of the proposed estimation method for calculating the shape of the manipulator with a tool inserted in its lumen when an accuracy range of at least 1 mm is required.

  13. Parameter optimization for transitions between memory states in small arrays of Josephson junctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezac, Jacob D.; Imam, Neena; Braiman, Yehuda

    Coupled arrays of Josephson junctions possess multiple stable zero-voltage states. Such states can store information and consequently can be utilized for cryogenic memory applications. Basic memory operations can be implemented by sending a pulse to one of the junctions and studying transitions between the states. In order to be suitable for memory operations, such transitions between the states have to be fast and energy efficient. In this article, we employed simulated annealing, a stochastic optimization algorithm, to optimize the array parameters so as to minimize the times and energies of transitions between specifically chosen states that can be utilized for memory operations (Read, Write, and Reset). Simulation results show that such transitions occur with access times on the order of 10-100 ps and access energies on the order of 10⁻¹⁹ to 5×10⁻¹⁸ J. Numerical simulations are validated with approximate analytical results.
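
    Simulated annealing itself is generic and compact; a minimal sketch is given below, with a toy multi-minimum objective standing in for the transition-time/energy cost of the Josephson array (which would require the full circuit model).

        import numpy as np

        def anneal(energy, x0, step=0.1, t0=1.0, cooling=0.999, iters=20000, seed=0):
            """Simulated annealing: accept uphill moves with probability exp(-dE/T)."""
            rng = np.random.default_rng(seed)
            x = np.array(x0, dtype=float)
            e, t = energy(x), t0
            best_x, best_e = x.copy(), e
            for _ in range(iters):
                cand = x + step * rng.standard_normal(x.size)
                de = energy(cand) - e
                if de < 0 or rng.random() < np.exp(-de / t):
                    x, e = cand, e + de                  # accept the move
                    if e < best_e:
                        best_x, best_e = x.copy(), e
                t *= cooling                             # geometric cooling schedule
            return best_x, best_e

        f = lambda p: float(np.sum(p ** 2) + 2.0 * np.sum(np.sin(5.0 * p)))
        print(anneal(f, [2.0, -2.0])[1])                 # near the global minimum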

  14. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE PAGES

    Luo, Na; Hong, Tianzhen; Li, Hui; ...

    2017-07-25

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimized control strategy in a real TES system operation.

  15. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Na; Hong, Tianzhen; Li, Hui

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimized control strategy in a real TES system operation.

  16. Intelligent load control algorithm for peak demand reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.

    This report describes how the intelligent load control (ILC) algorithm can be implemented to achieve peak demand reduction while minimizing impacts on occupant comfort. The algorithm was designed to minimize the additional sensor and configuration requirements, to enable a scalable and cost-effective implementation for both large and small-/medium-sized commercial buildings. The ILC algorithm uses an analytic hierarchy process (AHP) to dynamically prioritize the available curtailable loads based on both quantitative (deviation of zone conditions from set point) and qualitative rules (types of zone). Although the ILC algorithm described in this report was highly tailored to work with rooftop units, it can be generalized for application to other building loads such as variable-air-volume (VAV) boxes and lighting systems.

  17. Minimal algorithm for running an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Stoica, V.; Borborean, A.; Ciocan, A.; Manciu, C.

    2018-01-01

    The internal combustion engine control is a well-known topic within the automotive industry and is widely used. However, in research laboratories and universities, the use of a commercial control system is not the best solution, because its operating algorithms and calibrations are predetermined (accessible only by the manufacturer) and allow little intervention from outside. Laboratory solutions on the market are very expensive. Consequently, in this paper we present the minimal algorithm required to start up and run an internal combustion engine. The presented solution can be adapted to run on high-performance microcontrollers available on the market at the present time and at an affordable price. The presented algorithm was implemented in LabView and runs on a CompactRIO hardware platform.

  18. Minimal time change detection algorithm for reconfigurable control system and application to aerospace

    NASA Technical Reports Server (NTRS)

    Kim, Sungwan

    1994-01-01

    System parameters should be tracked on-line to build a reconfigurable control system, even when an abrupt change occurs. For this purpose, a new performance index that we are studying is the speed of adaptation: how quickly does the system determine that a change has occurred? In this paper, a new, robust algorithm is proposed that is optimized to minimize the time delay in detecting a change for a fixed false alarm probability. Simulation results for the aircraft lateral motion with a known or unknown change in control gain matrices, in the presence of a doublet input, indicate that the algorithm works fairly well. One of its distinguishing properties is that its detection delay is superior to that of the Whiteness Test.
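
    The abstract does not spell out the detector's equations, so as a hedged point of comparison the sketch below implements Page's CUSUM test, the classical detector that minimizes detection delay at a fixed false-alarm rate when a Gaussian mean shift is known in advance; the means, variance, and threshold here are illustrative.

        import numpy as np

        def cusum(samples, mu0, mu1, sigma, h):
            """Page's CUSUM for a mean shift mu0 -> mu1 in Gaussian noise."""
            g = 0.0
            for k, x in enumerate(samples):
                # log-likelihood ratio of one sample under mu1 vs mu0
                llr = (mu1 - mu0) / sigma ** 2 * (x - 0.5 * (mu0 + mu1))
                g = max(0.0, g + llr)        # reset at zero, accumulate evidence
                if g > h:                    # h trades delay vs false alarms
                    return k
            return None

        rng = np.random.default_rng(0)
        data = np.r_[rng.normal(0, 1, 200), rng.normal(1, 1, 200)]  # change at k=200
        print(cusum(data, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0))      # alarm shortly after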

  19. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    PubMed Central

    Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734

  20. Metabolic flux estimation using particle swarm optimization with penalty function.

    PubMed

    Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun

    2009-01-01

    Metabolic flux estimation through 13C tracer experiments is crucial for quantifying intracellular metabolic fluxes. In fact, it corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with a penalty function to solve the 13C-based metabolic flux estimation problem. The constrained problem is transformed into an unconstrained one by penalizing the stoichiometric constraints and building a single objective function, which in turn is minimized using the PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. Simulation results show that the proposed algorithm has superior performance and fast convergence when compared to other existing algorithms.
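
    The penalty construction is straightforward to sketch: fold the constraint violation into the objective and run a plain PSO on the result. In the sketch below, the linear "balance" A x = b and the measured vector are toy stand-ins for the stoichiometric constraints and 13C measurements of the paper.

        import numpy as np

        def pso_penalty(obj, cons, dim, n=30, iters=300, w=0.7, c1=1.5, c2=1.5,
                        rho=1e3, seed=0):
            """PSO minimizing obj(x) + rho * ||cons(x)||^2 (quadratic penalty)."""
            rng = np.random.default_rng(seed)
            f = lambda x: obj(x) + rho * np.sum(cons(x) ** 2)
            x = rng.uniform(-1.0, 1.0, (n, dim))
            v = np.zeros((n, dim))
            pbest = x.copy()
            pval = np.array([f(p) for p in x])
            g = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                val = np.array([f(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        A, b = np.array([[1.0, 1.0, 1.0]]), np.array([2.0])   # toy flux balance
        x_meas = np.array([0.5, 1.0, 0.2])                    # toy measurements
        sol, val = pso_penalty(lambda x: np.sum((x - x_meas) ** 2),
                               lambda x: A @ x - b, dim=3)
        print(sol, A @ sol)                                   # near-feasible optimum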

  1. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  2. Polyhedral Interpolation for Optimal Reaction Control System Jet Selection

    NASA Technical Reports Server (NTRS)

    Gefert, Leon P.; Wright, Theodore

    2014-01-01

    An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.

  3. On multiple crack identification by ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Brigante, M.; Sumbatyan, M. A.

    2018-04-01

    The present work develops an approach that reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is akin to some genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneously identifying several linear cracks forming an array in an elastic medium using circular ultrasonic scanning.

  4. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.

  5. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is a sequence of unconstrained minimizations, using Newton's method for each unconstrained subproblem. The use of NEWSUMT and the definition of all parameters are described.

  6. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research into the trellis structure of block codes, by contrast, remained inactive for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.

  7. A finite-element toolbox for the stationary Gross-Pitaevskii equation with rotation

    NASA Astrophysics Data System (ADS)

    Vergez, Guillaume; Danaila, Ionut; Auliac, Sylvain; Hecht, Frédéric

    2016-12-01

    We present a new numerical system using classical finite elements with mesh adaptivity for computing stationary solutions of the Gross-Pitaevskii equation. The programs are written as a toolbox for FreeFem++ (www.freefem.org), a free finite-element software package available for all existing operating systems. This offers the advantage of hiding all technical issues related to the implementation of the finite element method, making it easy to code various numerical algorithms. Two robust and optimized numerical methods were implemented to minimize the Gross-Pitaevskii energy: a steepest descent method based on Sobolev gradients and a minimization algorithm based on the state-of-the-art optimization library Ipopt. For both methods, mesh adaptivity strategies are used to reduce the computational time and increase the local spatial accuracy when vortices are present. Different run cases are made available for 2D and 3D configurations of Bose-Einstein condensates in rotation. An optional graphical user interface is also provided, making it easy to run predefined cases or cases with user-defined parameter files. We also provide several post-processing tools (such as the identification of quantized vortices) that can help in extracting physical features from the simulations. The toolbox is extremely versatile and can be easily adapted to deal with different physical models.

  8. Correction method for stripe nonuniformity.

    PubMed

    Qian, Weixian; Chen, Qian; Gu, Guohua; Guan, Zhiqiang

    2010-04-01

    Stripe nonuniformity is very typical in line infrared focal plane arrays (IR-FPA) and uncooled staring IR-FPA. In this paper, the mechanism of stripe nonuniformity is analyzed, and the gray-scale co-occurrence matrix theory and optimization theory are studied. Through these efforts, the stripe nonuniformity correction problem is translated into an optimization problem. The goal of the optimization is to minimize the energy of the image's line gradient. After solving the constrained nonlinear optimization equation, the parameters of the stripe nonuniformity correction are obtained and the correction is achieved. The experiments indicate that this algorithm is effective and efficient.
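
    The full method solves a constrained nonlinear program, but the underlying idea (choose per-column corrections that flatten the line gradient) can be approximated in closed form. The sketch below is a simple least-squares stand-in, not the paper's algorithm: it subtracts the gap between each column mean and a smoothed column-mean profile.

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        def destripe(img, win=15):
            """Remove per-column offsets so the horizontal gradient energy drops."""
            col_means = img.mean(axis=0)
            smooth = uniform_filter1d(col_means, size=win)   # stripe-free profile
            return img - (col_means - smooth)                # broadcast over rows

        rng = np.random.default_rng(0)
        clean = np.outer(np.linspace(0, 1, 128), np.ones(128))   # smooth scene
        noisy = clean + rng.normal(0, 0.2, 128)                  # per-column offsets
        grad = lambda a: np.abs(np.diff(a, axis=1)).sum()        # line-gradient energy
        print(grad(noisy), grad(destripe(noisy)))                # energy decreases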

  9. Camouflage target reconnaissance based on hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Hua, Wenshen; Guo, Tong; Liu, Xun

    2015-08-01

    Efficient camouflaged-target reconnaissance technology has a great influence on modern warfare. Hyperspectral images provide a large spectral range and high spectral resolution, which are invaluable in discriminating between camouflaged targets and backgrounds. Hyperspectral target detection and classification technologies are utilized to achieve single-class and multi-class camouflaged-target reconnaissance, respectively. Constrained energy minimization (CEM), a widely used algorithm in hyperspectral target detection, is employed to achieve single-class camouflaged-target reconnaissance. Then, the support vector machine (SVM), a classification method, is proposed to achieve multi-class camouflaged-target reconnaissance. Experiments have been conducted to demonstrate the efficiency of the proposed method.
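
    CEM has a closed form, which makes it convenient to sketch: the filter w = R^{-1}d / (d^T R^{-1} d) passes the target signature d with unit gain while minimizing the average output energy over the scene. The synthetic cube and implanted targets below are stand-ins for real hyperspectral data.

        import numpy as np

        def cem(cube, d):
            """Constrained energy minimization scores for each pixel."""
            R = cube.T @ cube / cube.shape[0]      # sample correlation matrix
            rinv_d = np.linalg.solve(R, d)
            w = rinv_d / (d @ rinv_d)              # unit gain on the signature d
            return cube @ w

        rng = np.random.default_rng(0)
        d = rng.random(50)                         # target spectral signature
        cube = rng.normal(0.0, 1.0, (1000, 50))    # background pixels x bands
        cube[:10] += 2.0 * d                       # implant target in 10 pixels
        scores = cem(cube, d)
        print(scores[:10].mean(), scores[10:].mean())   # targets score higher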

  10. Protecting core networks with dual-homing: A study on enhanced network availability, resource efficiency, and energy-savings

    NASA Astrophysics Data System (ADS)

    Abeywickrama, Sandu; Furdek, Marija; Monti, Paolo; Wosinska, Lena; Wong, Elaine

    2016-12-01

    Core network survivability affects the reliability performance of telecommunication networks and remains one of the most important network design considerations. This paper critically examines the benefits arising from utilizing dual-homing in the optical access networks to provide resource-efficient protection against link and node failures in the optical core segment. Four novel, heuristic-based RWA algorithms that provide dedicated path protection in networks with dual-homing are proposed and studied. These algorithms protect against different failure scenarios (i.e. single link or node failures) and are implemented with different optimization objectives (i.e., minimization of wavelength usage and path length). Results obtained through simulations and comparison with baseline architectures indicate that exploiting dual-homed architecture in the access segment can bring significant improvements in terms of core network resource usage, connection availability, and power consumption.

  11. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation Maximization algorithm learn its parameters. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818

  12. Interconnect fatigue design for terrestrial photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1982-01-01

    The results of a comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming, the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous--i.e., quantitative--interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.

  13. Interconnect fatigue design for terrestrial photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1982-03-01

    The results of a comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming, the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous--i.e., quantitative--interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.

  14. A modified method for MRF segmentation and bias correction of MR image with intensity inhomogeneity.

    PubMed

    Xie, Mei; Gao, Jingjing; Zhu, Chongjin; Zhou, Yan

    2015-01-01

    The Markov random field (MRF) model is an effective method for brain tissue classification, which has been applied to MR image segmentation for decades. However, it falls short of the expected classification in MR images with intensity inhomogeneity because the bias field is not considered in the formulation. In this paper, we propose an interleaved method joining a modified MRF classification and bias field estimation in an energy minimization framework, whose initial estimate is based on the k-means algorithm in view of prior information on MRI. The proposed method has the salient advantage of overcoming the misclassifications of non-interleaved MRF classification for MR images with intensity inhomogeneity. In contrast to other baseline methods, experimental results have also demonstrated the effectiveness and advantages of our algorithm via applications to real and synthetic MR images.

  15. MUSiC - A general search for deviations from monte carlo predictions in CMS

    NASA Astrophysics Data System (ADS)

    Biallass, Philipp A.; CMS Collaboration

    2009-06-01

    A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.

  16. MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS

    NASA Astrophysics Data System (ADS)

    Hof, Carsten

    2009-05-01

    We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.

  17. Health-aware Model Predictive Control of Pasteurization Plant

    NASA Astrophysics Data System (ADS)

    Karimi Pour, Fatemeh; Puig, Vicenç; Ocampo-Martinez, Carlos

    2017-01-01

    In order to optimize the trade-off between component life and energy consumption, the integration of system health management and control modules is required. This paper proposes the integration of model predictive control (MPC) with a fatigue estimation approach that minimizes the damage to the components of a pasteurization plant. The fatigue is assessed with the rainflow counting algorithm. Using data from this algorithm, a simplified model that characterizes the health of the system is developed and integrated with MPC. The MPC controller objective is modified by adding an extra criterion that takes into account the accumulated damage. However, adding this extra criterion creates a steady-state offset. Finally, by including an integral action in the MPC controller, the steady-state error for regulation purposes is eliminated. The proposed control scheme is validated in simulation using a simulator of a utility-scale pasteurization plant.
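
    The plant model and the rainflow-derived health model are not given in the abstract, so the following is only a schematic sketch of augmenting an MPC objective with a damage-related term, using a made-up first-order plant and a convex surrogate (penalized control increments) in place of the paper's rainflow-based damage model:

```python
import cvxpy as cp

# Toy first-order plant x+ = a*x + b*u (a stand-in for the pasteurization loop).
a, b, T = 0.9, 0.5, 20
x_ref, rho = 1.0, 0.1   # setpoint and damage-penalty weight (illustrative)

x = cp.Variable(T + 1)
u = cp.Variable(T)

tracking = cp.sum_squares(x[1:] - x_ref)
# Convex surrogate for accumulated component damage: penalize control
# increments, which drive the load cycles that rainflow counting tallies.
damage = cp.sum_squares(u[1:] - u[:-1])

constraints = [x[0] == 0]
for t in range(T):
    constraints += [x[t + 1] == a * x[t] + b * u[t], cp.abs(u[t]) <= 2]

prob = cp.Problem(cp.Minimize(tracking + rho * damage), constraints)
prob.solve()
print("optimal cost:", prob.value)
```

    Increasing rho trades tracking performance for gentler actuation, which is the health-aware compromise the paper formalizes.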

  18. Multiplicative noise removal through fractional order tv-based model and fast numerical schemes for its approximation

    NASA Astrophysics Data System (ADS)

    Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad

    2017-07-01

    This paper introduces a fractional order total variation (FOTV) based model with three different weights in the fractional order derivative definition for multiplicative noise removal purpose. The fractional-order Euler Lagrange equation which is a highly non-linear partial differential equation (PDE) is obtained by the minimization of the energy functional for image restoration. Two numerical schemes namely an iterative scheme based on the dual theory and majorization- minimization algorithm (MMA) are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying the trial and error method. We report numerical simulations which show the validity and state of the art performance of the fractional-order model in visual improvement as well as an increase in the peak signal to noise ratio comparing to corresponding methods. Numerical experiments also demonstrate that MMAbased methodology is slightly better than that of an iterative scheme.

  19. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    NASA Astrophysics Data System (ADS)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
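
    The filtering step at the heart of any Pareto-based method can be illustrated with a short non-dominated filter; the objective values below are made up, and maximization objectives (water delivery, hydropower) would be negated before calling it:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points` (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        others = np.delete(pts, i, axis=0)
        # p is dominated if some other point is <= everywhere and < somewhere.
        dominated = np.any(np.all(others <= p, axis=1) & np.any(others < p, axis=1))
        if not dominated:
            front.append(p)
    return np.array(front)

# Toy usage: two cost objectives; [3, 4] is dominated by [2, 3].
costs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(costs))
```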

  20. Phase unwrapping with graph cuts optimization and dual decomposition acceleration for 3D high-resolution MRI data.

    PubMed

    Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi

    2017-03-01

    The existence of low-SNR regions and rapid phase variations poses challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, and the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared this proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  1. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block codes than for convolutional codes. In this article, we provide a remedy by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  2. Optimal design of solenoid valve to minimize cavitation by numerical analysis

    NASA Astrophysics Data System (ADS)

    Ko, Seungbin; Jang, Ilhoon; Song, Simon

    2012-11-01

    Keeping pace with the development of clean energy, hybrid cars and electric vehicles have received extensive attention recently. In the electronic-control brake systems essential to those vehicles, a solenoid valve is used to control the external hydraulic pressure that boosts the driver's braking force. However, strong cavitation occurs at the narrow passage between the ball and seat of a solenoid valve due to the sudden decrease in pressure, leading to severe damage to the valve. In this study, we investigate the cavitation numerically to identify the geometric parameters that affect it, and we seek an optimal design that minimizes it using an optimization technique. We found four relevant parameters: seat inner radius, seat angle, seat length, and ball radius. Among them, the seat inner radius affects the cavitation most. We also found that preventing a sudden reduction in the flow passage is important for reducing cavitation. Finally, using an evolutionary algorithm for optimization, we minimized cavitation. The optimal design resulted in a maximum vapor volume fraction of 0.04, compared with 0.7 for the reference geometry.

  3. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, two fields that have proven most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization on joint energy and maintenance driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost per part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost per part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27 % in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost per part model, the baseline scenarios can obtain around 20 to 35 % in savings for the cost per part. These savings further increase by 42 to 55 % when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm adopted allows greater diversity and exploration compared to the Genetic Algorithm for the proposed joint model, which makes it more computationally efficient in determining the optimal schedule. The Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, whereas the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half the computational effort required by the Genetic Algorithm.
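
    The baseline global-best PSO that such modified variants extend can be sketched in a few lines; the inertia and acceleration coefficients below are common textbook defaults, not the thesis's tuned values:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for box-constrained minimization.

    f: objective taking an (n_particles, dim) array, returning (n_particles,).
    bounds: (low, high) arrays of length dim.
    """
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    x = rng.uniform(low, high, (n_particles, low.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), f(x)
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        # Velocity blends inertia, attraction to personal best, and to global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, low, high)
        val = f(x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy usage on the sphere function in three dimensions.
best, best_val = pso(lambda X: np.sum(X**2, axis=1),
                     (np.full(3, -5.0), np.full(3, 5.0)))
```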

  4. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency. Operating at multiple voltages involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  5. Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.

    PubMed

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency. Operating at multiple voltages involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
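
    The DVFS trade-off the approach exploits follows from the classic CMOS dynamic-power model, P = C·V²·f with runtime t = cycles/f; a sketch with purely illustrative numbers:

```python
def task_energy_and_time(cycles, voltage, freq_hz, switched_cap=1e-9):
    """Classic CMOS dynamic-power model used in DVFS scheduling studies.

    power  = C * V^2 * f   (dynamic power)
    time   = cycles / f
    energy = power * time = C * V^2 * cycles
    switched_cap is an illustrative effective switched capacitance.
    """
    power = switched_cap * voltage**2 * freq_hz
    time = cycles / freq_hz
    return power * time, time

# The same task at a lower voltage/frequency pair: less energy, longer runtime.
e_hi, t_hi = task_energy_and_time(1e9, voltage=1.2, freq_hz=2.0e9)
e_lo, t_lo = task_energy_and_time(1e9, voltage=0.9, freq_hz=1.0e9)
print(e_hi, t_hi, e_lo, t_lo)
```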

  6. Multi-objective generation scheduling with hybrid energy resources

    NASA Astrophysics Data System (ADS)

    Trivedi, Manas

    In economic dispatch (ED) of electric power generation, the committed generating units are scheduled to meet the load demand at minimum operating cost with satisfying all unit and system equality and inequality constraints. Generation of electricity from the fossil fuel releases several contaminants into the atmosphere. So the economic dispatch objective can no longer be considered alone due to the environmental concerns that arise from the emissions produced by fossil fueled electric power plants. This research is proposing the concept of environmental/economic generation scheduling with traditional and renewable energy sources. Environmental/economic dispatch (EED) is a multi-objective problem with conflicting objectives since emission minimization is conflicting with fuel cost minimization. Production and consumption of fossil fuel and nuclear energy are closely related to environmental degradation. This causes negative effects to human health and the quality of life. Depletion of the fossil fuel resources will also be challenging for the presently employed energy systems to cope with future energy requirements. On the other hand, renewable energy sources such as hydro and wind are abundant, inexhaustible and widely available. These sources use native resources and have the capacity to meet the present and the future energy demands of the world with almost nil emissions of air pollutants and greenhouse gases. The costs of fossil fuel and renewable energy are also heading in opposite directions. The economic policies needed to support the widespread and sustainable markets for renewable energy sources are rapidly evolving. The contribution of this research centers on solving the economic dispatch problem of a system with hybrid energy resources under environmental restrictions. It suggests an effective solution of renewable energy to the existing fossil fueled and nuclear electric utilities for the cheaper and cleaner production of electricity with hourly emission targets. Since minimizing the emissions and fuel cost are conflicting objectives, a practical approach based on multi-objective optimization is applied to obtain compromised solutions in a single simulation run using genetic algorithm. These solutions are known as non-inferior or Pareto-optimal solutions, graphically illustrated by the trade-off curves between criterions fuel cost and pollutant emission. The efficacy of the proposed approach is illustrated with the help of different sample test cases. This research would be useful for society, electric utilities, consultants, regulatory bodies, policy makers and planners.

  7. Rough sets and Laplacian score based cost-sensitive feature selection

    PubMed Central

    Yu, Shenglong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationships among features. In this paper, we propose a new algorithm for minimal cost feature selection called rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationships among features through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Unlike existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms. PMID:29912884

  8. Rough sets and Laplacian score based cost-sensitive feature selection.

    PubMed

    Yu, Shenglong; Zhao, Hong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationships among features. In this paper, we propose a new algorithm for minimal cost feature selection called rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationships among features through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Unlike existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects a predetermined number of "good" features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms.
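
    The Laplacian score half of the method is the standard construction of He, Cai and Niyogi (2005) and can be sketched as follows; the rough-set term and the cost distributions are omitted, and the dense RBF affinity is a simplifying assumption:

```python
import numpy as np

def laplacian_score(X, sigma=1.0):
    """Laplacian score of each feature; lower scores preserve locality better.

    X: (n_samples, n_features). Builds a dense RBF affinity S, the degree
    matrix D, and the graph Laplacian L = D - S, then evaluates
    (f~^T L f~) / (f~^T D f~) for each mean-shifted feature f~.
    """
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    S = np.exp(-d2 / (2 * sigma ** 2))
    D = np.diag(S.sum(axis=1))
    L = D - S
    ones = np.ones(n)
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f_t = f - (f @ D @ ones) / (ones @ D @ ones) * ones
        denom = f_t @ D @ f_t
        scores.append((f_t @ L @ f_t) / denom if denom > 0 else np.inf)
    return np.array(scores)

# Toy usage on random data with 5 candidate features.
print(laplacian_score(np.random.default_rng(0).random((50, 5))))
```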

  9. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
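
    The distinction between output error and equation error is easy to demonstrate on a toy first-order model: the residual is formed from the candidate model's simulated output under the measured input, not from a one-step-ahead equation fit. This sketch uses a generic trust-region least-squares solver rather than the paper's SQP/OKID machinery; the model and noise level are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, u, y0=0.0):
    """Simulate a hypothetical first-order model y[k+1] = a*y[k] + b*u[k]."""
    a, b = theta
    y = np.empty(len(u) + 1)
    y[0] = y0
    for k in range(len(u)):
        y[k + 1] = a * y[k] + b * u[k]
    return y[1:]

rng = np.random.default_rng(1)
u = rng.standard_normal(200)
y_meas = simulate([0.8, 0.5], u) + 0.05 * rng.standard_normal(200)

# Output error: compare measured output with the model's *simulated* output.
fit = least_squares(lambda th: simulate(th, u) - y_meas, x0=[0.5, 0.1])
print(fit.x)   # close to the true parameters [0.8, 0.5]
```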

  10. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be a highly efficient algorithm for solving single-objective optimal power flow problems. The performance of the CSA is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. The CSA gives better results than the genetic algorithm (GA) both with and without the SVC.
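
    A generic cuckoo search (not the paper's OPF-specific implementation) pairs Lévy-flight steps with abandonment of the worst nests; the step scaling and abandonment fraction below are conventional defaults, and the sphere objective is a placeholder for the generation-cost function:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-distributed flight steps."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, low, high, n_nests=15, iters=300, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(low, high, (n_nests, low.size))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        for i in range(n_nests):
            # Levy flight around the current nest, biased toward the best nest.
            step = 0.01 * levy_step(low.size, rng=rng) * (nests[i] - best)
            new = np.clip(nests[i] + step, low, high)
            fn = f(new)
            if fn < fit[i]:
                nests[i], fit[i] = new, fn
        # Abandon a fraction pa of the worst nests; rebuild them at random.
        n_drop = int(pa * n_nests)
        worst = np.argsort(fit)[-n_drop:]
        nests[worst] = rng.uniform(low, high, (n_drop, low.size))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[np.argmin(fit)], fit.min()

best, val = cuckoo_search(lambda v: v @ v, np.full(3, -5.0), np.full(3, 5.0))
```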

  11. Analysis of labor employment assessment on production machine to minimize time production

    NASA Astrophysics Data System (ADS)

    Hernawati, Tri; Suliawati; Sari Gumay, Vita

    2018-03-01

    Every company, in services as well as manufacturing, continually tries to improve the efficiency of its resource use. One resource that has an important role is labor, and each worker has a different efficiency level for each job. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, a special case of linear programming. In this research, an analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results show that the existing labor assignment is suboptimal: its completion time is longer than that of the assignment produced by the Hungarian algorithm, which yields time savings of 16%.
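
    For a cost matrix of completion times, a Hungarian-type optimal assignment is available off the shelf; the numbers below are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical completion times (hours): rows = workers, cols = machines.
times = np.array([[9., 7., 8.],
                  [6., 8., 5.],
                  [7., 6., 9.]])

# Solves the assignment problem (Hungarian-style) in polynomial time.
rows, cols = linear_sum_assignment(times)
print(list(zip(rows, cols)), "total time:", times[rows, cols].sum())
```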

  12. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
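
    The overall structure (outer multiplier updates around an inner derivative-free, bound-constrained minimization whose stopping test is the pattern size) can be sketched for a single equality constraint; this is a toy illustration under stated simplifications, not the Conn-Gould-Toint scheme with its shifts and full update rules:

```python
import numpy as np

def pattern_search(f, x, lo, hi, step=0.5, tol=1e-6, max_iter=500):
    """Coordinate pattern search for bound-constrained minimization.

    Polls +/- step along each axis; halves the step on failure. The pattern
    size doubles as the derivative-free stopping criterion, as in the paper.
    """
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                y = x.copy()
                y[i] = np.clip(y[i] + s, lo[i], hi[i])
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x

def augmented_lagrangian(f, g, x0, lo, hi, mu=10.0, lam=0.0, outer=20):
    """Minimize f(x) s.t. g(x) = 0 and bounds, by successive bound-constrained
    minimization of the augmented Lagrangian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        L = lambda z: f(z) + lam * g(z) + 0.5 * mu * g(z) ** 2
        x = pattern_search(L, x, lo, hi)
        lam += mu * g(x)   # first-order multiplier update
    return x

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1, 0 <= x <= 2.
x = augmented_lagrangian(lambda z: z @ z, lambda z: z.sum() - 1.0,
                         [1.5, 1.5], np.zeros(2), 2 * np.ones(2))
print(x)   # approaches (0.5, 0.5)
```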

  13. a Single-Exposure Dual-Energy Computed Radiography Technique for Improved Nodule Detection and Classification in Chest Imaging

    NASA Astrophysics Data System (ADS)

    Zink, Frank Edward

    The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone -selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single-exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.

  14. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
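
    The inner quantity CCOMP scans, the minimum modulus eigenvalue of the system matrix over the complex domain, is simple to compute; the 2x2 matrix below is a made-up determinantal equation with roots at z = ±i, not one of the paper's microwave problems:

```python
import numpy as np

def min_modulus_eig(M):
    """Smallest |eigenvalue| of M(z); it approaches zero near a root of
    det M(z) = 0, which is what flags candidate points."""
    return np.abs(np.linalg.eigvals(M)).min()

def scan_domain(M_of_z, re, im):
    """Coarse grid scan of the prescribed complex domain; local minima of the
    returned surface are the candidate points refined by minimization."""
    surface = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            surface[i, j] = min_modulus_eig(M_of_z(complex(x, y)))
    return surface

# Toy determinantal equation det M(z) = z^2 + 1 = 0, roots at z = +/- i.
M = lambda z: np.array([[z, 1.0], [-1.0, z]])
surf = scan_domain(M, np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
```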

  15. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    NASA Astrophysics Data System (ADS)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during their curing in closed molds lead to changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence for maximization of flexural stiffness and minimization of springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swap. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.

  16. Prioritizing the Components of Vulnerability: A Genetic Algorithm Minimization of Flood Risk

    NASA Astrophysics Data System (ADS)

    Bongolan, Vena Pearl; Ballesteros, Florencio; Baritua, Karessa Alexandra; Junne Santos, Marie

    2013-04-01

    We define a flood resistant city as an optimal arrangement of communities according to their traits, with the goal of minimizing flooding vulnerability via a genetic algorithm. We prioritize the different components of flooding vulnerability, giving each component a weight, thus expressing vulnerability as a weighted sum. This serves as the fitness function for the genetic algorithm. We also allowed non-linear interactions among related but independent components, viz., poverty and mortality rate, and literacy and radio/TV penetration. The designs produced reflect the relative importance of the components, and we observed a synchronicity between the interacting components, giving us a more consistent design.

  17. NARMAX model identification of a palm oil biodiesel engine using multi-objective optimization differential evolution

    NASA Astrophysics Data System (ADS)

    Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin

    2017-09-01

    This paper presents the black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm for optimization: minimizing the number of terms of a model structure and minimizing the mean square error between the actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied in order to validate the possible models obtained from the MOODE algorithm and to select an optimal model.

  18. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization where we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as northwest corner, Vogel, Russell, and minimal cost have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.

  19. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

  20. Artificial Bee Colony Optimization of Capping Potentials for Hybrid Quantum Mechanical/Molecular Mechanical Calculations.

    PubMed

    Schiffmann, Christoph; Sebastiani, Daniel

    2011-05-10

    We present an algorithmic extension of a numerical optimization scheme for analytic capping potentials for use in mixed quantum-classical (quantum mechanical/molecular mechanical, QM/MM) ab initio calculations. Our goal is to minimize bond-cleavage-induced perturbations in the electronic structure, measured by means of a suitable penalty functional. The optimization algorithm, a variant of the artificial bee colony (ABC) algorithm that relies on swarm intelligence, couples deterministic (downhill gradient) and stochastic elements to avoid trapping in local minima. The ABC algorithm outperforms the conventional downhill gradient approach if the penalty hypersurface exhibits wiggles that prevent a straight minimization pathway. We characterize the optimized capping potentials by computing NMR chemical shifts. This approach will increase the accuracy of QM/MM calculations of complex biomolecules.

  1. A shifted hyperbolic augmented Lagrangian-based artificial fish two-swarm algorithm with guaranteed convergence for constrained global optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.

    2016-12-01

    This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε_k-global minimization of a bound constrained optimization subproblem, where ε_k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.

  2. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This condition is very important in applications of computer vision in which the time to finish the total process is particularly critical, such as quality control in industrial inspection, real-time computer vision, and guided robots. The scheduling algorithm is based on the use of two matrices, obtained from the precedence relationships between tasks, and the data obtained from the two matrices. The developed scheduling algorithm has been tested in an application of quality control using computer vision. The results obtained with different image processing algorithms have been satisfactory.

  3. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
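
    The RLS building block is standard; below is a sketch that identifies a hypothetical quadratic drag polar (not the GTM/VCCTEF aerodynamic model) with exponential forgetting, which is what lets the estimates track drifting coefficients in flight:

```python
import numpy as np

class RecursiveLeastSquares:
    """Exponentially weighted RLS estimator for y = phi @ theta + noise.

    lam < 1 discounts old data, useful when coefficients drift over time.
    """
    def __init__(self, n, lam=0.995, p0=1e3):
        self.theta = np.zeros(n)
        self.P = p0 * np.eye(n)
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta) # innovation step
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Toy usage: identify a hypothetical drag polar CD = cd0 + k*CL^2 from noisy data.
rng = np.random.default_rng(0)
rls = RecursiveLeastSquares(2)
for _ in range(500):
    cl = rng.uniform(0.2, 1.0)
    cd = 0.02 + 0.05 * cl**2 + 1e-4 * rng.standard_normal()
    rls.update([1.0, cl**2], cd)
print(rls.theta)   # approximately [0.02, 0.05]
```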

  4. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256

  5. Development of numerical techniques for the estimation, modeling and prediction of thermodynamic and structural properties of metallic systems with strong chemical ordering

    NASA Astrophysics Data System (ADS)

    Harvey, Jean-Philippe

    In this work, we study the possibility of calculating and evaluating with a high degree of precision the Gibbs energy of complex multiphase equilibria for which chemical ordering is explicitly and simultaneously considered in the thermodynamic description of solid (short range order and long range order) and liquid (short range order) metallic phases. The cluster site approximation (CSA) and the cluster variation method (CVM) are implemented in a new minimization technique of the Gibbs energy of multicomponent and multiphase systems to describe the thermodynamic behaviour of metallic solid solutions showing strong chemical ordering. The modified quasichemical model in the pair approximation (MQMPA) is also implemented in the new minimization algorithm presented in this work to describe the thermodynamic behaviour of metallic liquid solutions. The constrained minimization technique implemented in this work consists of a sequential quadratic programming technique based on an exact Newton's method (i.e., the use of exact second derivatives in the determination of the Hessian of the objective function) combined with a line search method to identify a direction of sufficient decrease of the merit function. The implementation of a new algorithm to perform the constrained minimization of the Gibbs energy is justified by the difficulty of identifying, in specific cases, the correct multiphase assemblage of a system where the thermodynamic behaviour of the equilibrium phases is described by one of the previously quoted models using the FactSage software (e.g., solid_CSA+liquid_MQMPA; solid1_CSA+solid2_CSA). After a rigorous validation of the constrained Gibbs energy minimization algorithm using several assessed binary and ternary systems found in the literature, the CVM and the CSA models used to describe the energetic behaviour of metallic solid solutions present in systems with key industrial applications such as the Cu-Zr and the Al-Zr systems are parameterized using fully consistent thermodynamic and structural data generated from a Monte Carlo (MC) simulator also implemented in the framework of this project. In this MC simulator, the modified embedded atom model in the second nearest neighbour formalism (MEAM-2NN) is used to describe the cohesive energy of each studied structure. A new Al-Zr MEAM-2NN interatomic potential needed to evaluate the cohesive energy of the condensed phases of this system is presented in this work. The thermodynamic integration (TI) method implemented in the MC simulator allows the evaluation of the absolute Gibbs energy of the considered solid or liquid structures. The original implementation of the TI method allowed us to evaluate theoretically for the first time all the thermodynamic mixing contributions (i.e., mixing enthalpy and mixing entropy contributions) of a metallic liquid (Cu-Zr and Al-Zr) and of a solid solution (face-centered cubic (FCC) Al-Zr solid solution) described by the MEAM-2NN. Thermodynamic and structural data obtained from MC and molecular dynamics simulations are then used to parameterize the CVM for the Al-Zr FCC solid solution and the MQMPA for the Al-Zr and the Cu-Zr liquid phases, respectively. The extended thermodynamic study of these systems allows the introduction of a new type of configuration-dependent excess parameters in the definition of the thermodynamic function of solid solutions described by the CVM or the CSA.
These parameters greatly improve the precision of these thermodynamic models based on experimental evidence found in the literature. A new parameterization approach for the MQMPA model of metallic liquid solutions is presented throughout this work. In this new approach, calculated pair fractions obtained from MC/MD simulations are taken into account, as well as configuration-independent volumetric relaxation effects (regular-like excess parameters), in order to parameterize precisely the Gibbs energy function of metallic melts. The generation of a complete set of fully consistent thermodynamic, physical and structural data for solid, liquid, and stoichiometric compounds and the subsequent parameterization of their respective thermodynamic models lead to the first description of the complete Al-Zr phase diagram in the range of composition [0 ≤ XZr ≤ 5/9] based on theoretical and fully consistent thermodynamic properties. MC and MD simulations are performed for the Al-Zr system to define for the first time the precise thermodynamic behaviour of the amorphous phase over its entire range of composition. Finally, all the thermodynamic models for the liquid phase, the FCC solid solution and the amorphous phase are used to define conditions, based on thermodynamic and volumetric considerations, that favor the amorphization of Al-Zr alloys.

  6. Graph cuts and neural networks for segmentation and porosity quantification in Synchrotron Radiation X-ray μCT of an igneous rock sample.

    PubMed

    Meneses, Anderson Alvarenga de Moura; Palheta, Dayara Bastos; Pinheiro, Christiano Jorge Gomes; Barroso, Regina Cely Rodrigues

    2018-03-01

    X-ray Synchrotron Radiation Micro-Computed Tomography (SR-µCT) allows better visualization in three dimensions at higher spatial resolution, contributing to the discovery of aspects that would not be observable through conventional radiography. The automatic segmentation of SR-µCT scans is highly valuable due to its numerous applications in the geological sciences, especially for the morphology, typology, and characterization of rocks. For a great number of µCT scan slices, a manual segmentation process would be impractical, both for the time required and for the accuracy of the results. Aiming at the automatic segmentation of SR-µCT geological sample images, we applied and compared Energy Minimization via Graph Cuts (GC) algorithms and Artificial Neural Networks (ANNs), as well as the well-known K-means and Fuzzy C-Means algorithms. The Dice Similarity Coefficient (DSC), Sensitivity and Precision were the metrics used for comparison. Kruskal-Wallis and Dunn's tests were applied and the best methods were the GC algorithms and ANNs (with Levenberg-Marquardt and Bayesian Regularization). For those algorithms, an approximate Dice Similarity Coefficient of 95% was achieved. Our results confirm that those algorithms can be used for segmentation and subsequent quantification of porosity in an igneous rock sample SR-µCT scan. Copyright © 2017 Elsevier Ltd. All rights reserved.
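
    The comparison metrics are straightforward to reproduce for binary masks; a sketch on tiny made-up masks, with porosity obtained directly from the segmented pore fraction:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, Sensitivity and Precision for binary masks (pore vs. matrix)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return dsc, sensitivity, precision

# Toy masks standing in for one segmented slice and its ground truth.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(segmentation_metrics(pred, truth))
print("porosity:", pred.mean())   # pore fraction of the segmented volume
```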

  7. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  8. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554

  9. Method for Optimal Sensor Deployment on 3D Terrains Utilizing a Steady State Genetic Algorithm with a Guided Walk Mutation Operator Based on the Wavelet Transform

    PubMed Central

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K.

    2012-01-01

    One of the most critical issues of Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors in order to achieve maximum coverage on a terrain. The optimal sensor deployment which enables one to minimize the consumed energy, communication time and manpower for the maintenance of the network has attracted interest with the increased number of studies conducted on the subject in the last decade. Most of the studies in the literature today are proposed for two dimensional (2D) surfaces; however, real world sensor deployments often arise on three dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains, in which the sensor movements are carried out within the mutation phase of the genetic algorithms (GAs) is proposed. The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN via deploying a limited number of sensors on a 3D surface by utilizing a probabilistic sensing model and the Bresenham's line of sight (LOS) algorithm. In addition, the method followed in this paper is novel to the literature and the performance of the proposed algorithm is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method and the results reveal that the proposed method is a more powerful and more successful method for sensor deployment on 3D terrains. PMID:22666078
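
    The line-of-sight ingredient can be sketched with the classic integer Bresenham traversal over a height map; the grid, sensor height and blocking test below are illustrative assumptions, not the paper's probabilistic sensing model:

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the segment (x0,y0)-(x1,y1), Bresenham-style."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def has_los(height, sensor, target, h_sensor=1.0):
    """True if no terrain cell blocks the ray from sensor to target."""
    (x0, y0), (x1, y1) = sensor, target
    z0 = height[y0, x0] + h_sensor   # ray start, sensor mounted above terrain
    z1 = height[y1, x1]              # ray end at the target cell
    path = bresenham(x0, y0, x1, y1)
    for k, (x, y) in enumerate(path[1:-1], start=1):
        t = k / (len(path) - 1)      # fraction along the ray
        if height[y, x] > z0 + t * (z1 - z0):
            return False
    return True

# Toy terrain: a single bump at (2, 2) blocks the diagonal ray.
h = np.zeros((5, 5))
h[2, 2] = 3.0
print(has_los(h, (0, 0), (4, 4)))   # False
```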

  10. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  11. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry.

    PubMed

    Sundararaman, Ravishankar; Goddard, William A; Arias, Tomas A

    2017-03-21

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  12. INFO-RNA--a fast approach to inverse RNA folding.

    PubMed

    Busch, Anke; Backofen, Rolf

    2006-08-01

    The structure of RNA molecules is often crucial for their function. Therefore, secondary structure prediction has gained much interest. Here, we consider the inverse RNA folding problem, which means designing RNA sequences that fold into a given structure. We introduce a new algorithm for the inverse folding problem (INFO-RNA) that consists of two parts: a dynamic programming method for good initial sequences and a following improved stochastic local search that uses an effective neighbor selection method. During the initialization, we design a sequence that, among all sequences, adopts the given structure with the lowest possible energy. For the selection of neighbors during the search, we use a kind of look-ahead of one selection step applying an additional energy-based criterion. Afterwards, the pre-ordered neighbors are tested using the actual optimization criterion of minimizing the structure distance between the target structure and the mfe structure of the considered neighbor. We compared our algorithm to RNAinverse and RNA-SSD for artificial and biological test sets. Using INFO-RNA, we performed better than RNAinverse and, in most cases, we gained better results than RNA-SSD, probably the best inverse RNA folding tool on the market. www.bioinf.uni-freiburg.de/Subpages/software.html.

  13. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    DOE PAGES

    Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.

    2017-03-16

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  14. The trust-region self-consistent field method in Kohn-Sham density-functional theory.

    PubMed

    Thøgersen, Lea; Olsen, Jeppe; Köhn, Andreas; Jørgensen, Poul; Sałek, Paweł; Helgaker, Trygve

    2005-08-15

    The trust-region self-consistent field (TRSCF) method is extended to the optimization of the Kohn-Sham energy. In the TRSCF method, both the Roothaan-Hall step and the density-subspace minimization step are replaced by trust-region optimizations of local approximations to the Kohn-Sham energy, leading to a controlled, monotonic convergence towards the optimized energy. Previously the TRSCF method has been developed for optimization of the Hartree-Fock energy, which is a simple quadratic function in the density matrix. However, since the Kohn-Sham energy is a nonquadratic function of the density matrix, the local energy functions must be generalized for use with the Kohn-Sham model. Such a generalization, which contains the Hartree-Fock model as a special case, is presented here. For comparison, a rederivation of the popular direct inversion in the iterative subspace (DIIS) algorithm is performed, demonstrating that the DIIS method may be viewed as a quasi-Newton method, explaining its fast local convergence. In the global region the convergence behavior of DIIS is less predictable. The related energy DIIS technique is also discussed and shown to be inappropriate for the optimization of the Kohn-Sham energy.

  15. Strategies to Save 50% Site Energy in Grocery and General Merchandise Stores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirsch, A.; Hale, E.; Leach, M.

    2011-03-01

    This paper summarizes the methodology and main results of two recently published Technical Support Documents. These reports explore the feasibility of designing general merchandise and grocery stores that use half the energy of a minimally code-compliant building, as measured on a whole-building basis. We used an optimization algorithm to trace out a minimum cost curve and identify designs that satisfy the 50% energy savings goal. We started from baseline building energy use and progressed to more energy-efficient designs by sequentially adding energy design measures (EDMs). Certain EDMs figured prominently in reaching the 50% energy savings goal for both building types: (1) reduced lighting power density; (2) optimized area fraction and construction of view glass or skylights, or both, as part of a daylighting system tuned to 46.5 fc (500 lux); (3) reduced infiltration with a main entrance vestibule or an envelope air barrier, or both; and (4) energy recovery ventilators, especially in humid and cold climates. In grocery stores, the most effective EDM, which was chosen for all climates, was replacing baseline medium-temperature refrigerated cases with high-efficiency models that have doors.

  16. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  17. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE PAGES

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.; ...

    2017-10-17

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  18. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms, which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms once the diffuser powers were adjusted to achieve equal prostate coverage.
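
    The Cimmino algorithm compared above is a simultaneous-projection method for linear feasibility problems. A minimal sketch for halfspace constraints A x <= b is given below, assuming equal weights and a hypothetical relaxation parameter lam; in the IPDT setting the rows of A would encode per-voxel dose bounds and x the diffuser powers.

        import numpy as np

        def cimmino(A, b, x0, lam=1.0, n_iters=500, tol=1e-9):
            # Simultaneous-projection method for the feasibility problem
            # A x <= b: project the iterate onto every violated halfspace
            # and move toward the weighted average of the projections.
            m = A.shape[0]
            w = np.full(m, 1.0 / m)            # equal weights summing to one
            row_norm2 = np.sum(A * A, axis=1)  # squared row norms ||a_i||^2
            x = np.asarray(x0, dtype=float).copy()
            for _ in range(n_iters):
                residual = A @ x - b           # positive entries = violations
                if np.all(residual <= tol):
                    break                      # feasible within tolerance
                violation = np.maximum(residual, 0.0)
                x += lam * (-(w * violation / row_norm2) @ A)
            return x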

  19. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm to minimize the network power consumption for multicast Cloud-RAN.
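
    As a concrete illustration of the group-sparsity mechanism in the first stage, the sketch below evaluates a weighted mixed l1/l2 norm over per-RRH beamforming sub-vectors; the data layout and names are assumptions, not the paper's notation. Groups whose l2 norm is driven to zero correspond to RRHs that can be switched off.

        import numpy as np

        def weighted_group_norm(v, groups, weights):
            # Weighted mixed l1/l2 norm: sum over RRH groups of the
            # weighted l2 norm of that RRH's beamforming sub-vector.
            # groups is a list of index arrays into the stacked vector v.
            return sum(w * np.linalg.norm(v[g])
                       for g, w in zip(groups, weights))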

  20. Using Ant Colony Optimization for Routing in VLSI Chips

    NASA Astrophysics Data System (ADS)

    Arora, Tamanna; Moses, Melanie

    2009-04-01

    Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high performance and high density VLSI layouts is that of routing wires that connect such large numbers of components. Most wire-routing problems are computationally hard. The quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing total routed wire length, and minimizing total capacitance induced in the chip, both of which serve to minimize power consumed by the chip. Ant Colony Optimization algorithms (ACO) provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decision and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by ants as heuristics which guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on a suite of benchmark chips. On average, the algorithm routed with 5.5% less total wire length than other established routing algorithms.
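
    A sketch of the standard ACO decision rule such a router might use is shown below; tau and eta (pheromone level and heuristic desirability, e.g. the inverse of added wire length or capacitance) are hypothetical lookups, and alpha and beta are the usual ACO exponents.

        import random

        def pick_next(node, candidates, tau, eta, alpha=1.0, beta=2.0):
            # Standard ACO proportional selection: tau holds pheromone
            # levels and eta heuristic desirability for each
            # (node, candidate) move; sample proportionally to the score.
            scores = [(tau[(node, c)] ** alpha) * (eta[(node, c)] ** beta)
                      for c in candidates]
            r = random.uniform(0.0, sum(scores))
            acc = 0.0
            for c, s in zip(candidates, scores):
                acc += s
                if r <= acc:
                    return c
            return candidates[-1]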

  1. Increasingly minimal bias routing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataineh, Abdulla; Court, Thomas; Roweth, Duncan

    2017-02-21

    A system and algorithm configured to generate diversity at the traffic source, so that packets are uniformly distributed over all of the available paths, while increasing the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases so as to prefer non-minimal paths at the injection point, but to increasingly prefer minimal paths as the packet proceeds, referred to herein as Increasing Minimal Bias (IMB).
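
    The record does not give a formula, but the behavior can be sketched as a per-hop port choice whose bias toward minimal ports grows with hop count; the linear schedule below is an assumption for illustration only.

        import random

        def choose_port(minimal_ports, nonminimal_ports, hop, max_hops,
                        bias0=0.3):
            # Preference for minimal ports starts weak at injection
            # (hop 0), spreading traffic over all paths, and grows with
            # hop count so the packet increasingly prefers minimal paths.
            p_minimal = bias0 + (1.0 - bias0) * min(hop / max_hops, 1.0)
            if minimal_ports and (not nonminimal_ports
                                  or random.random() < p_minimal):
                return random.choice(minimal_ports)
            return random.choice(nonminimal_ports)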

  2. Exploiting node mobility for energy optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    El-Moukaddem, Fatme Mohammad

    Wireless Sensor Networks (WSNs) have become increasingly available for data-intensive applications such as micro-climate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit the sheer amount of data generated within an application's lifetime to the base station despite the fact that sensor nodes have limited power supplies such as batteries or small solar panels. The availability of numerous low-cost robotic units (e.g. Robomote and Khepera) has made it possible to construct sensor networks consisting of mobile sensor nodes. It has been shown that the controlled mobility offered by mobile sensors can be exploited to improve the energy efficiency of a network. In this thesis, we propose schemes that use mobile sensor nodes to reduce the energy consumption of data-intensive WSNs. Our approaches differ from previous work in two main aspects. First, our approaches do not require complex motion planning of mobile nodes, and hence can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless communications into a holistic optimization framework. We consider three problems arising from the limited energy in the sensor nodes. In the first problem, the network consists of mostly static nodes and contains only a few mobile nodes. In the second and third problems, we assume that essentially all nodes in the WSN are mobile. We first study a new problem called max-data mobile relay configuration (MMRC) that finds the positions of a set of mobile sensors, referred to as relays, that maximize the total amount of data gathered by the network during its lifetime. We show that the MMRC problem is surprisingly complex even for a trivial network topology due to the joint consideration of the energy consumption of both wireless communication and mechanical locomotion. We present optimal MMRC algorithms and practical distributed implementations for several important network topologies and applications. Second, we consider the problem of minimizing the total energy consumption of a network. We design an iterative algorithm that improves a given configuration by relocating nodes to new positions. We show that this algorithm converges to the optimal configuration for the given transmission routes. Moreover, we propose an efficient distributed implementation that does not require explicit synchronization. Finally, we consider the problem of maximizing the lifetime of the network. We propose an approach that exploits the mobility of the nodes to balance the energy consumption throughout the network. We develop efficient algorithms for single and multiple round approaches. For all three problems, we evaluate the efficiency of our algorithms through simulations. Our simulation results based on realistic energy models obtained from existing mobile and static sensor platforms show that our approaches significantly improve the network's performance and outperform existing approaches.

  3. An Energy-Based Three-Dimensional Segmentation Approach for the Quantitative Interpretation of Electron Tomograms

    PubMed Central

    Bartesaghi, Alberto; Sapiro, Guillermo; Subramaniam, Sriram

    2006-01-01

    Electron tomography allows for the determination of the three-dimensional structures of cells and tissues at resolutions significantly higher than that which is possible with optical microscopy. Electron tomograms contain, in principle, vast amounts of information on the locations and architectures of large numbers of subcellular assemblies and organelles. The development of reliable quantitative approaches for the analysis of features in tomograms is an important problem, and a challenging prospect due to the low signal-to-noise ratios that are inherent to biological electron microscopic images. This is, in part, a consequence of the tremendous complexity of biological specimens. We report on a new method for the automated segmentation of HIV particles and selected cellular compartments in electron tomograms recorded from fixed, plastic-embedded sections derived from HIV-infected human macrophages. Individual features in the tomogram are segmented using a novel robust algorithm that finds their boundaries as global minimal surfaces in a metric space defined by image features. The optimization is carried out in a transformed spherical domain with the center an interior point of the particle of interest, providing a proper setting for the fast and accurate minimization of the segmentation energy. This method provides tools for the semi-automated detection and statistical evaluation of HIV particles at different stages of assembly in the cells and presents opportunities for correlation with biochemical markers of HIV infection. The segmentation algorithm developed here forms the basis of the automated analysis of electron tomograms and will be especially useful given the rapid increases in the rate of data acquisition. It could also enable studies of much larger data sets, such as those which might be obtained from the tomographic analysis of HIV-infected cells from studies of large populations. PMID:16190467

  4. Multiscale geometric modeling of macromolecules I: Cartesian representation

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2014-01-01

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the polarized curvature, for the prediction of protein binding sites.

  5. Peptide docking of HIV-1 p24 with single chain fragment variable (scFv) by CDOCKER algorithm

    NASA Astrophysics Data System (ADS)

    Karim, Hana Atiqah Abdul; Tayapiwatana, Chatchai; Nimmanpipug, Piyarat; Zain, Sharifuddin M.; Rahman, Noorsaadah Abdul; Lee, Vannajan Sanghiran

    2014-10-01

    To identify the residues that may be involved in the binding interaction between the HIV-1 p24 capsid protein fragment (MET68-PRO90) and the single chain fragment variable (scFv) of FAB23.5, a modern computational chemistry approach was applied. The p24 fragment was initially taken from the 1AFV protein molecule, which consists of both light (VL) and heavy (VH) chains of FAB23.5 as well as the HIV-1 capsid protein. From there, the p24 (antigen) fragment was docked back into the protein pocket of the receptor (antibody) using the CDOCKER algorithm for the molecular docking process. CDOCKER scoring gave 15 possible docked poses with various ligand positions, together with their interaction and binding energies. The best docked pose that imitates the original antigen's position was determined and subjected to in situ minimization to obtain the per-residue interaction energies and to observe the hydrogen bond interactions in the protein-peptide complex. Based on the results, the specific residues in the complex that show markedly lower interaction energies within a 5 Å vicinity of the peptide are from the heavy chain (VH: TYR105) and light chain (VL: ASN31, TYR32, and GLU97). These residues play vital roles in the binding mechanism of the antibody-antigen (Ab-Ag) complex of p24 with FAB23.5.

  6. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  7. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  8. Parameter optimization on the convergence surface of path simulations

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, Srinivas Niranj

    Computational treatments of protein conformational changes tend to focus on the trajectories themselves, despite the fact that it is the transition state structures that contain information about the barriers that impose multi-state behavior. PATH is an algorithm that computes a transition pathway between two protein crystal structures, along with the transition state structure, by minimizing the Onsager-Machlup action functional. It is rapid but depends on several unknown input parameters whose range of different values can potentially generate different transition-state structures. Transition-state structures arising from different input parameters cannot be uniquely compared with those generated by other methods. I outline modifications that I have made to the PATH algorithm that estimate these input parameters in a manner that circumvents these difficulties, and describe two complementary tests that validate the transition-state structures found by the PATH algorithm. First, I show that although the PATH algorithm and two other approaches to computing transition pathways produce different low-energy structures connecting the initial and final ground-states with the transition state, all three methods agree closely on the configurations of their transition states. Second, I show that the PATH transition states are close to the saddle points of free-energy surfaces connecting initial and final states generated by replica-exchange Discrete Molecular Dynamics simulations. I show that aromatic side-chain rearrangements create similar potential energy barriers in the transition-state structures identified by PATH for a signaling protein, a contractile protein, and an enzyme. Finally, I observed, but cannot account for, the fact that trajectories obtained for all-atom and Cα-only simulations identify transition state structures in which the Cα atoms are in essentially the same positions. The consistency between transition-state structures derived by different algorithms for unrelated protein systems argues that although functionally important protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. In the end, I outline the strategies that could enhance the efficiency and applicability of PATH.

  9. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views, and therefore the data acquisition, in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps as accurate as conventional two-full-scan DECT.
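
    A minimal sketch of the similarity-based regularizer, assuming a hypothetical bandwidth h and precomputed neighbor lists: weights decay exponentially with value differences in the full-scan (prior) image and are assumed to carry over to the sparse-view scan, where the penalty measures each pixel's deviation from its similarity-weighted average.

        import numpy as np

        def similarity_weights(prior, i, nbrs, h=10.0):
            # Weights between pixel i and its candidate neighbors decay
            # exponentially with the value difference in the full-scan
            # (prior) image; h is a hypothetical bandwidth parameter.
            d = prior[nbrs] - prior[i]
            w = np.exp(-(d * d) / (h * h))
            return w / w.sum()

        def spir_penalty(x, prior, neighbor_lists, h=10.0):
            # L2 penalty between each sparse-view pixel and its
            # similarity-weighted estimate from the other pixels.
            total = 0.0
            for i, nbrs in enumerate(neighbor_lists):
                nbrs = np.asarray(nbrs)
                w = similarity_weights(prior, i, nbrs, h)
                total += (x[i] - w @ x[nbrs]) ** 2
            return total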

  10. The minimally invasive spinal deformity surgery algorithm: a reproducible rational framework for decision making in minimally invasive spinal deformity surgery.

    PubMed

    Mummaneni, Praveen V; Shaffrey, Christopher I; Lenke, Lawrence G; Park, Paul; Wang, Michael Y; La Marca, Frank; Smith, Justin S; Mundis, Gregory M; Okonkwo, David O; Moal, Bertrand; Fessler, Richard G; Anand, Neel; Uribe, Juan S; Kanter, Adam S; Akbarnia, Behrooz; Fu, Kai-Ming G

    2014-05-01

    Minimally invasive surgery (MIS) is an alternative to open deformity surgery for the treatment of patients with adult spinal deformity. However, at this time MIS techniques are not as versatile as open deformity techniques, and MIS techniques have been reported to result in suboptimal sagittal plane correction or pseudarthrosis when used for severe deformities. The minimally invasive spinal deformity surgery (MISDEF) algorithm was created to provide a framework for rational decision making for surgeons who are considering MIS versus open spine surgery. A team of experienced spinal deformity surgeons developed the MISDEF algorithm that incorporates a patient's preoperative radiographic parameters and leads to one of 3 general plans ranging from MIS direct or indirect decompression to open deformity surgery with osteotomies. The authors surveyed fellowship-trained spine surgeons experienced with spinal deformity surgery to validate the algorithm using a set of 20 cases to establish interobserver reliability. They then resurveyed the same surgeons 2 months later with the same cases presented in a different sequence to establish intraobserver reliability. Responses were collected and tabulated. Fleiss' analysis was performed using MATLAB software. Over a 3-month period, 11 surgeons completed the surveys. Responses for MISDEF algorithm case review demonstrated an interobserver kappa of 0.58 for the first round of surveys and an interobserver kappa of 0.69 for the second round of surveys, consistent with substantial agreement. In at least 10 cases there was perfect agreement between the reviewing surgeons. The mean intraobserver kappa for the 2 surveys was 0.86 ± 0.15 (± SD) and ranged from 0.62 to 1. The use of the MISDEF algorithm provides consistent and straightforward guidance for surgeons who are considering either an MIS or an open approach for the treatment of patients with adult spinal deformity. The MISDEF algorithm was found to have substantial inter- and intraobserver agreement. Although further studies are needed, the application of this algorithm could provide a platform for surgeons to achieve the desired goals of surgery.

  11. MULTI-OBJECTIVE OPTIMAL DESIGN OF GROUNDWATER REMEDIATION SYSTEMS: APPLICATION OF THE NICHED PARETO GENETIC ALGORITHM (NPGA). (R826614)

    EPA Science Inventory

    A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...

  12. Adaptive control of turbulence intensity is accelerated by frugal flow sampling.

    PubMed

    Quinn, Daniel B; van Halder, Yous; Lentink, David

    2017-11-01

    The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance.
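
    A minimal sketch of such an online Newton-Raphson loop, with read_intensity and set_control as hypothetical stand-ins for the anemometer readout and the wind tunnel's control input; a finite-difference probe supplies the slope at each step.

        def newton_tune(read_intensity, set_control, u0, target,
                        tol=0.01, max_iters=12, du=0.02):
            # Online Newton-Raphson tuning of a control setting u so the
            # measured turbulence intensity reaches the target; the slope
            # comes from a finite-difference probe of the anemometer.
            u = u0
            for _ in range(max_iters):
                set_control(u)
                f = read_intensity() - target
                if abs(f) < tol:
                    break
                set_control(u + du)           # probe step for the slope
                slope = (read_intensity() - target - f) / du
                if slope == 0.0:
                    break
                u -= f / slope                # Newton-Raphson update
            return u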

  13. A Survey on an Energy-Efficient and Energy-Balanced Routing Protocol for Wireless Sensor Networks.

    PubMed

    Ogundile, Olayinka O; Alfa, Attahiru S

    2017-05-10

    Wireless sensor networks (WSNs) form an important part of industrial applications. There has been growing interest in the potential use of WSNs in applications such as environment monitoring, disaster management, health care monitoring, intelligence surveillance and defence reconnaissance. In these applications, the sensor nodes (SNs) are envisaged to be deployed in sizeable numbers in an outlying area, and it is quite difficult to replace these SNs after complete deployment in many scenarios. Therefore, as SNs are predominantly battery powered devices, the energy consumption of the nodes must be properly managed in order to prolong the network lifetime and functionality to a rational time. Different energy-efficient and energy-balanced routing protocols have been proposed in the literature over the years. The energy-efficient routing protocols strive to increase the network lifetime by minimizing the energy consumption in each SN. On the other hand, the energy-balanced routing protocols protract the network lifetime by uniformly balancing the energy consumption among the nodes in the network. There have been various survey papers put forward by researchers to review the performance and classify the different energy-efficient routing protocols for WSNs. However, there seems to be no clear survey emphasizing the importance, concepts, and principles of load-balanced energy routing protocols for WSNs. In this paper, we provide a clear picture of both the energy-efficient and energy-balanced routing protocols for WSNs. More importantly, this paper presents an extensive survey of the different state-of-the-art energy-efficient and energy-balanced routing protocols. A taxonomy is introduced in this paper to classify the surveyed energy-efficient and energy-balanced routing protocols based on their proposed mode of communication towards the base station (BS). In addition, we classified these routing protocols based on the solution types or algorithms, and the input decision variables defined in the routing algorithm. The strengths and weaknesses of the choice of the decision variables used in the design of these energy-efficient and energy-balanced routing protocols are emphasised. Finally, we suggest possible research directions in order to optimize the energy consumption in sensor networks.

  14. A Survey on an Energy-Efficient and Energy-Balanced Routing Protocol for Wireless Sensor Networks

    PubMed Central

    Ogundile, Olayinka O.; Alfa, Attahiru S.

    2017-01-01

    Wireless sensor networks (WSNs) form an important part of industrial applications. There has been growing interest in the potential use of WSNs in applications such as environment monitoring, disaster management, health care monitoring, intelligence surveillance and defence reconnaissance. In these applications, the sensor nodes (SNs) are envisaged to be deployed in sizeable numbers in an outlying area, and it is quite difficult to replace these SNs after complete deployment in many scenarios. Therefore, as SNs are predominantly battery powered devices, the energy consumption of the nodes must be properly managed in order to prolong the network lifetime and functionality to a rational time. Different energy-efficient and energy-balanced routing protocols have been proposed in the literature over the years. The energy-efficient routing protocols strive to increase the network lifetime by minimizing the energy consumption in each SN. On the other hand, the energy-balanced routing protocols protract the network lifetime by uniformly balancing the energy consumption among the nodes in the network. There have been various survey papers put forward by researchers to review the performance and classify the different energy-efficient routing protocols for WSNs. However, there seems to be no clear survey emphasizing the importance, concepts, and principles of load-balanced energy routing protocols for WSNs. In this paper, we provide a clear picture of both the energy-efficient and energy-balanced routing protocols for WSNs. More importantly, this paper presents an extensive survey of the different state-of-the-art energy-efficient and energy-balanced routing protocols. A taxonomy is introduced in this paper to classify the surveyed energy-efficient and energy-balanced routing protocols based on their proposed mode of communication towards the base station (BS). In addition, we classified these routing protocols based on the solution types or algorithms, and the input decision variables defined in the routing algorithm. The strengths and weaknesses of the choice of the decision variables used in the design of these energy-efficient and energy-balanced routing protocols are emphasised. Finally, we suggest possible research directions in order to optimize the energy consumption in sensor networks. PMID:28489054

  15. Development of a job rotation scheduling algorithm for minimizing accumulated work load per body parts.

    PubMed

    Song, JooBong; Lee, Chaiwoo; Lee, WonJung; Bahn, Sangwoo; Jung, ChanJu; Yun, Myung Hwan

    2015-01-01

    For the successful implementation of job rotation, jobs should be scheduled systematically so that physical workload is evenly distributed with the use of various body parts. However, while the potential benefits are widely recognized by research and industry, there is still a need for a more effective and efficient algorithm that considers multiple work-related factors in job rotation scheduling. This study proposes a job rotation algorithm that aims to minimize musculoskeletal disorders by decreasing the overall workload. Multiple work characteristics are evaluated as inputs to the proposed algorithm. Important factors, such as physical workload on specific body parts, working height, involvement of heavy lifting, and worker characteristics such as physical disorders, are included in the algorithm. For evaluation of the overall workload in a given workplace, an objective function was defined to aggregate the scores from the individual factors. A case study, where the algorithm was applied at a workplace, is presented with an examination of its applicability and effectiveness. With the application of the suggested algorithm in the case study, the value of the final objective function, which is the weighted sum of the workload in various body parts, decreased by 71.7% when compared to a typical sequential assignment and by 84.9% when compared to a single job assignment, which is doing one job all day. An algorithm was developed using the data from the ergonomic evaluation tool used in the plant and from the known factors related to workload. The algorithm was developed so that it can be efficiently applied with a small amount of required inputs, while covering a wide range of work-related factors. A case study showed that the algorithm was beneficial in determining a job rotation schedule aimed at minimizing workload across body parts.
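
    The objective function described above can be sketched as a weighted sum of each worker's accumulated load per body part; the data layout and weights below are illustrative assumptions, not the paper's exact formulation.

        def schedule_cost(schedule, workload, weights):
            # Objective: weighted sum over workers and body parts of the
            # workload accumulated across a worker's rotation slots.
            # schedule maps worker -> list of jobs; workload[job] maps
            # body part -> score; weights gives body-part importance.
            total = 0.0
            for jobs in schedule.values():
                acc = {}
                for job in jobs:
                    for part, load in workload[job].items():
                        acc[part] = acc.get(part, 0.0) + load
                total += sum(weights[part] * load
                             for part, load in acc.items())
            return total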

  16. Intelligent emission-sensitive routing for plugin hybrid electric vehicles.

    PubMed

    Sun, Zhonghao; Zhou, Xingshe

    2016-01-01

    The existing transportation sector creates heavy environmental impacts and is a prime cause of the current climate change. The need to reduce emissions from this sector has stimulated efforts to speed up the adoption of electric vehicles (EVs). A subset of EVs, called plug-in hybrid electric vehicles (PHEVs), back up batteries with a combustion engine, giving PHEVs a driving range comparable to conventional vehicles. However, this hybridization comes at the cost of higher emissions than all-electric vehicles. This paper studies the routing problem for PHEVs to minimize emissions. The existing shortest-path based algorithms cannot be applied to this problem because of several new challenges: (1) an optimal route may contain cycles caused by detours for recharging; (2) emissions of PHEVs depend not only on the driving distance, but also on the terrain and the state of charge (SOC) of the batteries; (3) batteries can harvest energy through regenerative braking, which makes some road segments have negative energy consumption. To address these challenges, this paper proposes a green navigation algorithm (GNA) which finds the optimal strategies: where to go and where to recharge. GNA discretizes the SOC so that the PHEV routing problem satisfies the principle of optimality. Finally, GNA adopts dynamic programming to solve the problem. We evaluate GNA using synthetic maps generated by Delaunay triangulation. The results show that GNA can save more than 10% energy and reduce emissions by 10% when compared to the shortest path algorithm. We also observe that PHEVs with battery capacities of 10-15 kWh detour the most, while those with capacities above 30 kWh almost never detour. This observation offers insights for PHEV development.
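
    A compact sketch of the dynamic-programming idea, with the state of charge folded into the search state so that detours for recharging (and SOC gains from regenerative braking) are handled naturally; the neighbors interface is hypothetical, and emission costs are assumed nonnegative so a Dijkstra-style expansion over the state graph is valid.

        import heapq

        def gna(neighbors, start, goal, soc0):
            # State = (node, discretized state of charge). neighbors(node,
            # soc) yields (next_node, next_soc, emission_cost) triples;
            # terrain, regenerative braking (SOC may rise) and recharge
            # detours are all encoded there.
            best = {(start, soc0): 0.0}
            heap = [(0.0, start, soc0)]
            while heap:
                cost, node, soc = heapq.heappop(heap)
                if node == goal:
                    return cost
                if cost > best.get((node, soc), float("inf")):
                    continue                  # stale heap entry
                for nxt, nsoc, c in neighbors(node, soc):
                    if cost + c < best.get((nxt, nsoc), float("inf")):
                        best[(nxt, nsoc)] = cost + c
                        heapq.heappush(heap, (cost + c, nxt, nsoc))
            return float("inf")               # goal unreachable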

  17. Optimal Decision Model for Sustainable Hospital Building Renovation—A Case Study of a Vacant School Building Converting into a Community Public Hospital

    PubMed Central

    Juan, Yi-Kai; Cheng, Yu-Ching; Perng, Yeng-Horng; Castro-Lacouture, Daniel

    2016-01-01

    Much attention has been paid to hospital environments since modern pandemics have emerged. The building sector is considered to be the largest world energy consumer, so many global organizations are attempting to create a sustainable environment in building construction by reducing energy consumption. Therefore, maintaining high standards of hygiene while reducing energy consumption has become a major task for hospitals. This study develops a decision model based on genetic algorithms and A* graph search algorithms to evaluate existing hospital environmental conditions and to recommend an optimal scheme of sustainable renovation strategies, considering trade-offs among minimal renovation cost, maximum quality improvement, and low environmental impact. Reusing vacant buildings is a global and sustainable trend. In Taiwan, for example, more and more school space will be unoccupied due to a rapidly declining birth rate. Integrating medical care with local community elder-care efforts becomes important because of the aging population. This research introduces a model that converts a simulated vacant school building into a community public hospital renovation project in order to validate the solutions made by hospital managers and suggested by the system. The result reveals that the system performs well and its solutions are more successful than the actions undertaken by decision-makers. This system can improve traditional hospital building condition assessment while making it more effective and efficient. PMID:27347986

  18. Optimal Decision Model for Sustainable Hospital Building Renovation-A Case Study of a Vacant School Building Converting into a Community Public Hospital.

    PubMed

    Juan, Yi-Kai; Cheng, Yu-Ching; Perng, Yeng-Horng; Castro-Lacouture, Daniel

    2016-06-24

    Much attention has been paid to hospital environments since modern pandemics have emerged. The building sector is considered to be the largest world energy consumer, so many global organizations are attempting to create a sustainable environment in building construction by reducing energy consumption. Therefore, maintaining high standards of hygiene while reducing energy consumption has become a major task for hospitals. This study develops a decision model based on genetic algorithms and A* graph search algorithms to evaluate existing hospital environmental conditions and to recommend an optimal scheme of sustainable renovation strategies, considering trade-offs among minimal renovation cost, maximum quality improvement, and low environmental impact. Reusing vacant buildings is a global and sustainable trend. In Taiwan, for example, more and more school space will be unoccupied due to a rapidly declining birth rate. Integrating medical care with local community elder-care efforts becomes important because of the aging population. This research introduces a model that converts a simulated vacant school building into a community public hospital renovation project in order to validate the solutions made by hospital managers and suggested by the system. The result reveals that the system performs well and its solutions are more successful than the actions undertaken by decision-makers. This system can improve traditional hospital building condition assessment while making it more effective and efficient.

  19. Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction

    NASA Astrophysics Data System (ADS)

    Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing

    2018-02-01

    Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
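
    The alternating decomposition can be sketched with the generic proximal updates below: singular-value thresholding for the low-rank background shared across energy bins, and elementwise soft-thresholding for the sparse, bin-specific component. The parameters lam and mu are illustrative; this is the standard low-rank plus sparse scheme, not the exact NLSMD objective.

        import numpy as np

        def lowrank_sparse_split(X, lam=0.1, mu=1.0, n_iters=50):
            # Alternate singular-value thresholding (low-rank background
            # shared across energy bins) with elementwise soft-thresholding
            # (sparse, bin-specific spectral features) on a patch-group
            # matrix X whose columns correspond to energy bins.
            S = np.zeros_like(X)
            for _ in range(n_iters):
                U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
                L = (U * np.maximum(sv - mu, 0.0)) @ Vt
                R = X - L
                S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
            return L, S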

  20. Continued research on selected parameters to minimize community annoyance from airplane noise

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1981-01-01

    Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A review of multiobjective optimization algorithms from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate-size airport and a series of sensitivity analyses for a smaller example airport.

  1. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    NASA Astrophysics Data System (ADS)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of greenhouse gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission in an AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle reduces the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle generates the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.

  2. Targeted Acoustic Data Processing for Ocean Ecological Studies

    NASA Astrophysics Data System (ADS)

    Sidorovskaia, N.; Li, K.; Tiemann, C.; Ackleh, A. S.; Tang, T.; Ioup, G. E.; Ioup, J. W.

    2015-12-01

    The Gulf of Mexico is home to many species of deep diving marine mammals. In recent years several ecological studies have collected large volumes of Passive Acoustic Monitoring (PAM) data to investigate the effects of anthropogenic activities on protected and endangered marine mammal species. To utilize these data to their fullest potential for abundance estimates and habitat preference studies, automated detection and classification algorithms are needed to extract species acoustic encounters from a continuous stream of data. Species that phonate in overlapping frequency bands represent a particular challenge. This paper analyzes the performance of a newly developed automated detector for the classification of beaked whale clicks in the Northern Gulf of Mexico. Currently used beaked whale classification algorithms rely heavily on experienced human operator involvement in manually associating potential events with a particular species of beaked whales. Our detection algorithm is two-stage: the detector is triggered when the species-representative phonation band energy exceeds the baseline detection threshold. Then multiple event attributes (temporal click duration, central frequency, frequency band, frequency sweep rate, Choi-Williams distribution shape indices) are measured. An attribute vector is then used to discriminate among different species of beaked whales present in the Gulf of Mexico and Risso's dolphins, which are known to mask detections of beaked whales when widely used energy-band detectors are employed. The detector is applied to the PAM data collected by the Littoral Acoustic Demonstration Center to estimate abundance trends of beaked whales in the vicinity of the 2010 oil spill before and after the disaster. This algorithm will allow automated processing with minimal operator involvement for new and archival PAM data. [The research is supported by a BP/GOMRI 2015-2017 consortium grant.]

  3. Artificial Neural Network as the FPGA Trigger in the Cyclone V based Front-End for a Detection of Neutrino-Origin Showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szadkowski, Zbigniew; Glas, Dariusz; Pytel, Krzysztof

    Neutrinos play a fundamental role in the understanding of the origin of ultra-high-energy cosmic rays. They interact through charged and neutral currents in the atmosphere, generating extensive air showers. However, the very low rate of events potentially generated by neutrinos poses a significant challenge for any detection technique and requires both sophisticated algorithms and high-resolution hardware. A trigger based on an artificial neural network was implemented in the Cyclone® V E FPGA 5CEFA9F31I7, the heart of the prototype front-end boards developed for tests of new algorithms in the Pierre Auger surface detectors. Showers initiated by muon and tau neutrinos at various altitudes, angles and energies were simulated on the CORSIKA and Offline platforms, giving patterns of ADC traces in the Auger water Cherenkov detectors. The 3-layer 12-8-1 neural network was trained in MATLAB on simulated ADC traces using the Levenberg-Marquardt algorithm. Results show that the probability of ADC trace generation is very low due to the small neutrino cross-section. Nevertheless, ADC traces for 1-10 EeV showers, when they occur, are relatively short and can be analyzed by a 16-point input algorithm. We optimized the coefficients from MATLAB to obtain a maximal range of potentially registered events and adapted them for fixed-point FPGA processing to minimize calculation errors. New sophisticated triggers implemented in Cyclone® V E FPGAs, with their large number of DSP blocks and embedded memory running at 120-160 MHz sampling, may support the discovery of neutrino events at the Pierre Auger Observatory.

  4. A Note on Alternating Minimization Algorithm for the Matrix Completion Problem

    DOE PAGES

    Gamarnik, David; Misra, Sidhant

    2016-06-06

    Here, we consider the problem of reconstructing a low-rank matrix from a subset of its entries and analyze two variants of the so-called alternating minimization algorithm, which has been proposed in the past. We establish that when the underlying matrix has rank one, has positive bounded entries, and the graph underlying the revealed entries has diameter which is logarithmic in the size of the matrix, both algorithms succeed in reconstructing the matrix approximately in polynomial time starting from an arbitrary initialization. We further provide simulation results which suggest that the second variant, which is based on message passing type updates, performs significantly better.
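
    For the rank-one case analyzed in the paper, one variant of alternating minimization has closed-form least-squares updates; the sketch below is a generic implementation of that scheme (the message-passing variant differs) under an assumed observed-mask representation of the revealed entries.

        import numpy as np

        def altmin_rank1(M, mask, n_iters=50):
            # Rank-one matrix completion: alternate closed-form
            # least-squares updates of the row factor u and the column
            # factor v over the revealed entries only.
            W = mask.astype(float)            # 1 where an entry is revealed
            rng = np.random.default_rng(0)
            u = rng.random(M.shape[0]) + 0.5  # positive initialization
            v = rng.random(M.shape[1]) + 0.5
            for _ in range(n_iters):
                # fix v: u[i] minimizes sum_j W[i,j] (M[i,j] - u[i] v[j])^2
                den = W @ (v * v)
                u = np.where(den > 0, (W * M) @ v / np.maximum(den, 1e-12), u)
                # fix u: symmetric update for v
                den = W.T @ (u * u)
                v = np.where(den > 0, (W * M).T @ u / np.maximum(den, 1e-12), v)
            return np.outer(u, v)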

  5. Minimal Increase Network Coding for Dynamic Networks.

    PubMed

    Zhang, Guoyin; Fan, Xu; Wu, Yanxia

    2016-01-01

    Because of the mobility, computing power and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects the blocks to be encoded on the basis of the relationship between the nonzero elements, which controls changes in the degrees of the blocks; the encoding time in a dynamic network is thereby shortened. The results of simulations show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery.

  6. Minimal Increase Network Coding for Dynamic Networks

    PubMed Central

    Wu, Yanxia

    2016-01-01

    Because of the mobility, computing power and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects the blocks to be encoded on the basis of the relationship between the nonzero elements, which controls the changes in the degrees of the blocks; the encoding time in a dynamic network is thereby shortened. Simulation results show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery. PMID:26867211

  7. QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1990-01-01

    The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG-like approach called the quasi-minimal residual (QMR) method is presented, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
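
    SciPy ships an implementation of this method; a small usage sketch on a random non-Hermitian sparse system (the matrix, density and diagonal shift are arbitrary test choices):

      import numpy as np
      from scipy.sparse import eye, random as sprand
      from scipy.sparse.linalg import qmr

      rng = np.random.default_rng(0)
      n = 300
      # A generic non-Hermitian system; the diagonal shift keeps it nonsingular.
      A = (sprand(n, n, density=0.02, random_state=0) + 4.0 * eye(n)).tocsr()
      b = rng.standard_normal(n)

      x, info = qmr(A, b)  # info == 0 signals convergence
      print(info, np.linalg.norm(A @ x - b))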

  8. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
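
    A minimal sketch of the classical SUMT outer loop (quadratic exterior penalty with a BFGS inner solver) that exhibits the growing penalty parameter responsible for the ill-conditioning discussed above; the growth factor and iteration counts are arbitrary, and the singular-perturbation variant is not reproduced:

      import numpy as np
      from scipy.optimize import minimize

      def sumt(f, g, x0, r0=1.0, growth=10.0, outer=8):
          # Minimize f(x) subject to g(x) <= 0 (vector-valued) via a sequence
          # of unconstrained problems with an increasing penalty parameter r;
          # the subproblems grow ill conditioned as r increases.
          x, r = np.asarray(x0, float), r0
          for _ in range(outer):
              penalized = lambda xv, r=r: f(xv) + r * np.sum(np.maximum(g(xv), 0.0) ** 2)
              x = minimize(penalized, x, method="BFGS").x
              r *= growth
          return x

      # Example: minimize (x - 2)^2 subject to x <= 1; the iterates tend to x = 1.
      print(sumt(lambda x: (x[0] - 2) ** 2, lambda x: np.array([x[0] - 1.0]), [0.0]))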

  9. Minimizing inner product data dependencies in conjugate gradient iteration

    NASA Technical Reports Server (NTRS)

    Vanrosendale, J.

    1983-01-01

    The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N) if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start-up, the new algorithm can perform a conjugate gradient iteration in time c log(log(N)).
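
    For reference, a textbook CG iteration in NumPy with the two inner-product reductions marked; these global summations are the synchronization points the restructured algorithm removes (the test system is arbitrary):

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, maxiter=500):
          # Textbook CG; the two scalar reductions marked below are the
          # inner-product synchronization points the restructuring targets.
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r                      # reduction 1
          for _ in range(maxiter):
              Ap = A @ p
              alpha = rs / (p @ Ap)       # reduction 2
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r              # reduction 1 for the next iteration
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      # Small SPD test system.
      rng = np.random.default_rng(0)
      Q = rng.standard_normal((50, 50))
      A = Q.T @ Q + 50 * np.eye(50)
      b = rng.standard_normal(50)
      print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))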

  10. Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    León-Javier, Alejandro; Cruz-Cortés, Nareli; Moreno-Armendáriz, Marco A.; Orantes-Jiménez, Sandra

    Addition chains of minimal length are the basic building block for the optimal computation of finite field exponentiations. They have very important applications in the areas of error-correcting codes and cryptography. However, obtaining the shortest addition chain for a given exponent is an NP-hard problem. In this work we propose the adaptation of a Particle Swarm Optimization algorithm to deal with this problem. Our proposal is tested on several exponents whose addition chains are considered hard to find. We obtained very promising results.
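
    To make the object of the search concrete, the sketch below evaluates an exponentiation with a given addition chain; the helper and the example chain are illustrative, not the paper's PSO search. The chain 1-2-3-6-12-15 for exponent 15 spends 5 multiplications versus 6 for plain square-and-multiply:

      def pow_by_chain(x, chain, mod=None):
          # Evaluate x ** chain[-1] using an addition chain: chain starts at 1
          # and every element is the sum of two earlier (possibly equal)
          # elements, so each step costs exactly one multiplication.
          powers = {1: x % mod if mod else x}
          for k, c in enumerate(chain[1:], start=1):
              i = next(a for a in chain[:k] if c - a in powers)  # split c = i + (c - i)
              y = powers[i] * powers[c - i]
              powers[c] = y % mod if mod else y
          return powers[chain[-1]]

      print(pow_by_chain(7, [1, 2, 3, 6, 12, 15], mod=1009))  # 7^15 mod 1009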

  11. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    NASA Astrophysics Data System (ADS)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.
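
    The core grey wolf update is compact; a minimal sketch follows, with the usual linearly decaying control parameter standing in for the chaotic map of the paper's CGWO (population size, iteration budget and the toy cost are arbitrary):

      import numpy as np

      def gwo(cost, lo, hi, n_wolves=20, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          X = rng.uniform(lo, hi, (n_wolves, lo.size))
          f = np.array([cost(x) for x in X])
          for t in range(iters):
              leaders = X[np.argsort(f)[:3]]   # alpha, beta, delta wolves
              a = 2.0 * (1.0 - t / iters)      # linear decay (CGWO drives this with a chaotic map)
              est = []
              for ldr in leaders:
                  A = a * (2 * rng.random(X.shape) - 1)
                  C = 2 * rng.random(X.shape)
                  est.append(ldr - A * np.abs(C * ldr - X))
              X = np.clip(np.mean(est, axis=0), lo, hi)  # move toward leader consensus
              f = np.array([cost(x) for x in X])
          return X[np.argmin(f)], f.min()

      # Toy stand-in for the loss/voltage-stability objective.
      x_best, f_best = gwo(lambda x: np.sum((x - 1.0) ** 2), [-5, -5], [5, 5])
      print(x_best, f_best)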

  12. Optimization of fuel-cell tram operation based on two-dimension dynamic programming

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbin; Lu, Xuecheng; Zhao, Jingsong; Li, Jianqiu

    2018-02-01

    This paper proposes an optimal control strategy based on the two-dimension dynamic programming (2DDP) algorithm, targeting minimal operation energy consumption for a fuel-cell tram. The energy consumption model with the tram dynamics is first deduced. The optimal control problem is analyzed and the 2DDP strategy is applied to solve it. The optimal tram speed profiles are obtained for each interstation; they consist of three stages: accelerate to the set speed with the maximum traction power, dynamically adjust to maintain a uniform speed, and decelerate to zero speed with the maximum braking power at a suitable timing. The optimal control curves of all the interstations are connected by the parking times to form the optimal control method for the whole line. The optimized speed profiles are also simplified for drivers to follow.

  13. Co-optimization of Energy and Demand-Side Reserves in Day-Ahead Electricity Markets

    NASA Astrophysics Data System (ADS)

    Surender Reddy, S.; Abhyankar, A. R.; Bijwe, P. R.

    2015-04-01

    This paper presents a new multi-objective day-ahead market clearing (DAMC) mechanism with demand-side reserve/demand response (DR) offers, considering realistic voltage-dependent load modeling. The paper proposes objectives such as social welfare maximization (SWM) including demand-side reserves, and load served error (LSE) minimization. In this paper, energy and demand-side reserves are cleared simultaneously through a co-optimization process. The paper clearly brings out the unsuitability of conventional SWM for DAMC in the presence of voltage-dependent loads, due to the reduction of load served (LS). Under such circumstances, a multi-objective DAMC with DR offers is essential. The multi-objective Strength Pareto Evolutionary Algorithm 2+ (SPEA 2+) has been used to solve the optimization problem. The effectiveness of the proposed scheme is confirmed with results obtained from the IEEE 30-bus system.

  14. An iterative three-dimensional electron density imaging algorithm using uncollimated compton scattered x rays from a polyenergetic primary pencil beam.

    PubMed

    Van Uytven, Eric; Pistorius, Stephen; Gordon, Richard

    2007-01-01

    X-ray film-screen mammography is currently the gold standard for detecting breast cancer. However, one disadvantage is that it projects a three-dimensional (3D) object onto a two-dimensional (2D) image, reducing the contrast between small lesions and layers of normal tissue. Another limitation is its reduced sensitivity in women with mammographically dense breasts. Computed tomography (CT) produces a 3D image yet has had a limited role in mammography due to its relatively high dose, low resolution, and low contrast. As a first step towards implementing quantitative 3D mammography, which may improve the ability to detect and characterize breast tumors, we have developed an analytical technique that can use Compton scatter to obtain 3D information about an object from a single projection. Imaging material with a pencil beam of polychromatic x rays produces a characteristic scattered photon spectrum at each point on the detector plane. A comparable distribution may be calculated using a known incident x-ray energy spectrum, beam shape, and an initial estimate of the object's 3D mass attenuation and electron density. Our iterative minimization algorithm changes the initially arbitrary electron density voxel matrix to reduce the differences between the analytically predicted and experimentally measured spectra at each point on the detector plane. The simulated electron density converges to that of the object as the differences are minimized. The reconstruction algorithm has been validated using simulated data produced by the EGSnrc Monte Carlo code system. We applied the imaging algorithm to a cylindrically symmetric breast tissue phantom containing multiple inhomogeneities. A preliminary ROC analysis yields scores greater than 0.96, which indicates that, under the described simplifying conditions, this approach shows promise in identifying and localizing inhomogeneities simulating 0.5 mm calcifications, with an image voxel resolution of 0.25 cm and at a dose comparable to mammography.

  15. Real-time segmentation in 4D ultrasound with continuous max-flow

    NASA Astrophysics Data System (ADS)

    Rajchl, M.; Yuan, J.; Peters, T. M.

    2012-02-01

    We present a novel continuous max-flow based method to segment the inner left ventricular wall from 3D trans-esophageal echocardiography image sequences, which minimizes an energy functional encoding two Fisher-Tippett distributions and a geometrical constraint in the form of a Euclidean distance map in a numerically efficient and accurate way. After initialization, the method is fully automatic and is able to perform at up to 10 Hz, making it suitable for image-guided interventions. Results are shown on 4D TEE data sets from 18 patients with pathological cardiac conditions, and the speed of the algorithm is assessed under a variety of conditions.

  16. Design of a bio-inspired controller for dynamic soaring in a simulated unmanned aerial vehicle.

    PubMed

    Barate, Renaud; Doncieux, Stéphane; Meyer, Jean-Arcady

    2006-09-01

    This paper is inspired by the way birds such as albatrosses are able to exploit wind gradients at the surface of the ocean for staying aloft for very long periods while minimizing their energy expenditure. The corresponding behaviour has been partially reproduced here via a set of Takagi-Sugeno-Kang fuzzy rules controlling a simulated glider. First, the rules were hand-designed. Then, they were optimized with an evolutionary algorithm that improved their efficiency at coping with challenging conditions. Finally, the robustness properties of the controller generated were assessed with a view to its applicability to a real platform.

  17. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.

  18. Fast Transformation of Temporal Plans for Efficient Execution

    NASA Technical Reports Server (NTRS)

    Tsamardinos, Ioannis; Muscettola, Nicola; Morris, Paul

    1998-01-01

    Temporal plans permit significant flexibility in specifying the occurrence time of events. Plan execution can make good use of that flexibility. However, the advantage of execution flexibility is counterbalanced by the cost, during execution, of propagating the time of occurrence of events throughout the flexible plan. To minimize execution latency, this propagation needs to be very efficient. Previous work showed that every temporal plan can be reformulated as a dispatchable plan, i.e., one for which propagation to immediate neighbors is sufficient. A simple algorithm was given that finds a dispatchable plan with a minimum number of edges in cubic time and quadratic space. In this paper, we focus on the efficiency of the reformulation process and improve on that result. A new algorithm is presented that uses linear space and has time complexity equivalent to Johnson's algorithm for all-pairs shortest-path problems. Experimental evidence confirms the practical effectiveness of the new algorithm. For example, on a large commercial application, the performance is improved by at least two orders of magnitude. We further show that the dispatchable plan, already minimal in the total number of edges, can also be made minimal in the maximum number of edges incoming or outgoing at any node.
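
    For the all-pairs shortest-path computation at the core of the reformulation, SciPy exposes Johnson's algorithm directly; a toy sketch on a three-event simple temporal network (the network itself is an invented illustration):

      import numpy as np
      from scipy.sparse.csgraph import johnson

      # A tiny simple temporal network: edge u -> v with weight w encodes the
      # constraint t_v - t_u <= w (negative weights express lower bounds).
      inf = np.inf
      W = np.array([[0.0, 10.0,  inf],
                    [inf,  0.0,  5.0],
                    [-2.0, inf,  0.0]])
      # All-pairs shortest paths give the tightest implied pairwise bounds.
      D = johnson(W, directed=True)
      print(D)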

  19. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
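
    The grow-then-prune idea follows the general greedy template below (an OMP/subspace-pursuit-style sketch, not the authors' exact RMP selection rule; the dimensions and sparsity level are arbitrary):

      import numpy as np

      def greedy_recover(A, y, k, iters=20):
          # Grow the support from the largest correlations, least-squares fit,
          # then prune back to k entries -- the balance RMP tunes.
          n = A.shape[1]
          support = np.array([], dtype=int)
          x = np.zeros(n)
          for _ in range(iters):
              corr = np.abs(A.T @ (y - A @ x))                   # correlation values
              cand = np.union1d(support, np.argsort(corr)[-k:])  # enlarged support
              xs, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
              support = cand[np.argsort(np.abs(xs))[-k:]]        # prune to k entries
              sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              x = np.zeros(n)
              x[support] = sol
          return x

      # Toy recovery test on a random Gaussian sensing matrix.
      rng = np.random.default_rng(0)
      m, n, k = 60, 200, 5
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      print(np.linalg.norm(greedy_recover(A, A @ x_true, k) - x_true))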

  20. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
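
    The two-stage structure reads naturally as code; a minimal sketch with a Kaczmarz/ART sweep for the POCS stage and an anisotropic TV subgradient descent whose step is tied to the size of the POCS update (the step factor and sweep counts are arbitrary stand-ins for the paper's data-driven rules):

      import numpy as np

      def art_sweep(x, A, b, lam=0.5):
          # One ART/Kaczmarz sweep: relaxed projection onto each hyperplane
          # a_i . x = b_i (the data-fidelity half of the POCS stage).
          for a_i, b_i in zip(A, b):
              x = x + lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
          return x

      def tv_subgradient(img, eps=1e-8):
          # Subgradient of the (smoothed) anisotropic TV of a 2D image.
          g = np.zeros_like(img)
          dx = np.diff(img, axis=1)
          sx = dx / np.sqrt(dx * dx + eps)
          g[:, :-1] -= sx
          g[:, 1:] += sx
          dy = np.diff(img, axis=0)
          sy = dy / np.sqrt(dy * dy + eps)
          g[:-1, :] -= sy
          g[1:, :] += sy
          return g

      def reconstruct(A, b, shape, outer=30, tv_steps=5):
          x = np.zeros(A.shape[1])
          for _ in range(outer):
              x_prev = x.copy()
              x = art_sweep(x, A, b)      # POCS: data fidelity
              x = np.maximum(x, 0.0)      # POCS: non-negativity
              # TV steepest descent; tying the step to the POCS update size
              # mirrors the adaptive rule described in the abstract.
              step = 0.2 * np.linalg.norm(x - x_prev)
              for _ in range(tv_steps):
                  g = tv_subgradient(x.reshape(shape)).ravel()
                  norm_g = np.linalg.norm(g)
                  if norm_g > 0.0:
                      x = x - step * g / norm_g
          return x.reshape(shape)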

  1. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length. This length is assumed to be the total length of the minimal spanning tree that connects all turbines and is calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An ideal test case shows that the proposed algorithm largely outperforms a well-known multi-objective genetic algorithm (NSGA-II). In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
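
    The cable-length objective is just the Euclidean minimum spanning tree over the turbine positions; a small Prim's-algorithm sketch (the coordinates, in metres, are chosen arbitrarily):

      import numpy as np

      def cable_length(points):
          # Total length of the minimal spanning tree over turbine positions
          # (Prim's algorithm on the complete Euclidean graph).
          pts = np.asarray(points, float)
          n = len(pts)
          in_tree = np.zeros(n, bool)
          in_tree[0] = True
          dist = np.linalg.norm(pts - pts[0], axis=1)  # distance of each node to the tree
          total = 0.0
          for _ in range(n - 1):
              j = int(np.argmin(np.where(in_tree, np.inf, dist)))
              total += dist[j]
              in_tree[j] = True
              dist = np.minimum(dist, np.linalg.norm(pts - pts[j], axis=1))
          return total

      # Four turbines at the corners of a 400 m x 500 m rectangle -> 1300 m.
      print(cable_length([[0, 0], [0, 500], [400, 0], [400, 500]]))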

  2. Phase retrieval from intensity-only data by relative entropy minimization.

    PubMed

    Deming, Ross W

    2007-11-01

    A recursive algorithm, which appears to be new, is presented for estimating the amplitude and phase of a wave field from intensity-only measurements on two or more scan planes at different axial positions. The problem is framed as a nonlinear optimization, in which the angular spectrum of the complex field model is adjusted in order to minimize the relative entropy, or Kullback-Leibler divergence, between the measured and reconstructed intensities. The most common approach to this so-called phase retrieval problem is a variation of the well-known Gerchberg-Saxton algorithm devised by Misell (J. Phys. D6, L6, 1973), which is efficient and extremely simple to implement. The new algorithm has a computational structure that is very similar to Misell's approach, despite the fundamental difference in the optimization criteria used for each. Based upon results from noisy simulated data, the new algorithm appears to be more robust than Misell's approach and to produce better results from low signal-to-noise ratio data. The convergence of the new algorithm is examined.
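
    For contrast with the new relative-entropy recursion, the classical Misell baseline it is compared against can be sketched in a few lines of NumPy (angular-spectrum propagation between two measurement planes; all sampling parameters below are arbitrary):

      import numpy as np

      def propagate(field, dz, wavelength, dx):
          # Angular-spectrum propagation of a sampled complex field by dz.
          n = field.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          FX, FY = np.meshgrid(fx, fx)
          arg = 1.0 / wavelength**2 - FX**2 - FY**2
          kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent part clamped
          return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

      def misell(I1, I2, dz, wavelength, dx, iters=200):
          # Misell-type two-plane iteration: propagate back and forth, keeping
          # the computed phase and replacing the amplitude with the measured one.
          field = np.sqrt(I1).astype(complex)
          for _ in range(iters):
              f2 = propagate(field, dz, wavelength, dx)
              f2 = np.sqrt(I2) * np.exp(1j * np.angle(f2))
              f1 = propagate(f2, -dz, wavelength, dx)
              field = np.sqrt(I1) * np.exp(1j * np.angle(f1))
          return field

      # Synthetic example: a Gaussian beam with quadratic phase, two planes.
      n, dx, wl, dz = 128, 5e-6, 0.5e-6, 1e-3
      xs = (np.arange(n) - n / 2) * dx
      X, Y = np.meshgrid(xs, xs)
      truth = np.exp(-(X**2 + Y**2) / (2 * (150e-6) ** 2)) * np.exp(1j * 2e7 * (X**2 + Y**2))
      I1 = np.abs(truth) ** 2
      I2 = np.abs(propagate(truth, dz, wl, dx)) ** 2
      est = misell(I1, I2, dz, wl, dx)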

  3. Robust transceiver design for reciprocal M × N interference channel based on statistical linearization approximation

    NASA Astrophysics Data System (ADS)

    Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad

    2017-12-01

    This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC) under imperfect channel state information (CSI). Two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has M and N antennas, respectively, and the IC operates in time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of the signal-to-interference-plus-noise ratio (SINR). The second algorithm, on the other hand, tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. A Taylor expansion is exploited to approximate the effect of CSI imperfection on the mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate the sum rate performance of the proposed algorithms and the advantage of incorporating variance minimization into the transceiver design.

  4. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received considerable attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying the time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of a traditional GA approach. The proposed research treats the PFSP and non-PFSP together, with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.

  5. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed over the past few decades. Recently, image denoising using deep learning has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate its clinical applications by comparison with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms, including the median filter, total variation (TV) minimization, and non-local mean (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed CDAE denoising model, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using the CDAE achieves a superior noise-reduction effect in chest radiograms compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of the CDAE were at least 10% higher than those of the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using the CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.

  6. Characterisation of a MOSFET-based detector for dose measurement under megavoltage electron beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Jong, W. L.; Ung, N. M.; Tiong, A. H. L.; Rosenfeld, A. B.; Wong, J. H. D.

    2018-03-01

    The aim of this study is to investigate the fundamental dosimetric characteristics of the MOSkin detector for megavoltage electron beam dosimetry. The reproducibility, linearity, energy dependence, dose rate dependence, depth dose measurement, output factor measurement, and surface dose measurement under megavoltage electron beams were tested. The MOSkin detector showed excellent reproducibility (>98%) and linearity (R² = 1.00) up to 2000 cGy for 4-20 MeV electron beams. The MOSkin detector also showed minimal dose rate dependence (within ±3%) and energy dependence (within ±2%) over the clinical range of electron beams, except for an energy dependence at the 4 MeV electron beam. An energy dependence correction factor of 1.075 is needed when the MOSkin detector is used for a 4 MeV electron beam. The output factors measured by the MOSkin detector were within ±2% of those measured with the EBT3 film and CC13 chamber. The measured depth doses using the MOSkin detector agreed with those measured using the CC13 chamber, except in the build-up region, due to the dose volume averaging effect of the CC13 chamber. For surface dose measurements, MOSkin measurements were in agreement within ±3% with those measured using EBT3 film. Measurements using the MOSkin detector were also compared to electron dose calculation algorithms, namely the GGPB and eMC algorithms. Both algorithms were in agreement with measurements to within ±2% and ±4% for the output factor (except for the 4 × 4 cm² field size) and surface dose, respectively. With the uncertainties taken into account, the MOSkin detector was found to be a suitable detector for dose measurement under megavoltage electron beams. This has been demonstrated in in vivo skin dose measurements on patients during electron boost to the breast tumour bed.

  7. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.

  8. Implementation of Tree and Butterfly Barriers with Optimistic Time Management Algorithms for Discrete Event Simulation

    NASA Astrophysics Data System (ADS)

    Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia

    The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This recovery mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be handled efficiently by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in the GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly barriers with the optimistic algorithm not only minimizes the GVT latency but also minimizes processor idle time.

  9. Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Luo, Hongbin; Li, Lemin; Yu, Hongfang

    2006-12-01

    Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.

  10. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    PubMed Central

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    Power grids are becoming smarter along with technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all renewable sources, solar power takes the prominent position due to its availability in abundance. The methodology presented in this paper is aimed at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique to integrate DG. In the first step, the best size of DG is determined through the PSO metaheuristic, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In the second step, the optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with varying numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
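
    The first-stage sizing engine is plain PSO; a generic sketch of the velocity/position update follows (the inertia and acceleration coefficients are common textbook values, and the quadratic cost is an invented stand-in for the loss objective):

      import numpy as np

      def pso(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          rng = np.random.default_rng(0)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x = rng.uniform(lo, hi, size=(n_particles, lo.size))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
          g = pbest[np.argmin(pbest_f)]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              # Velocity update: inertia + cognitive + social terms.
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([cost(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[np.argmin(pbest_f)]
          return g, pbest_f.min()

      # Toy stand-in for the DG-sizing loss function.
      g_best, f_best = pso(lambda p: np.sum((p - 3.0) ** 2), [0, 0], [10, 10])
      print(g_best, f_best)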

  11. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    PubMed

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    Power grids are becoming smarter along with technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all renewable sources, solar power takes the prominent position due to its availability in abundance. The methodology presented in this paper is aimed at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique to integrate DG. In the first step, the best size of DG is determined through the PSO metaheuristic, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In the second step, the optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with varying numbers of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  12. [Clinical experience with various techniques integrated treat the wounded with gunshot fractures of limbs].

    PubMed

    Kozlov, V K; Akhmedov, B G; Chililov, A M

    To increase the efficiency of complex treatment of patients with diaphyseal gunshot fractures of long bones by introducing modern minimally invasive surgical techniques of internal osteosynthesis into the clinical practice of civilian health care, and to improve the outcomes in victims. This prospective comparative clinical trial included 104 victims from the Republic of Yemen with gunshot wounds of the limbs of various severity over the period 2009-2011. All had diaphyseal fractures of long bones of the limbs associated with soft tissue injuries. Men were predominant (80.7%). Age ranged from 15 to 80 years (mean 38.5 ± 5.7 years). Various surgical techniques of simultaneous and staged treatment were used for gunshot fractures of long bones of the limbs. Additional immune therapy was prescribed to prevent infectious complications in the most severe cases. Victims were comprehensively treated according to a staged protocol: conventional surgical treatment with external fixation devices or early primary minimally invasive, functionally stable osteosynthesis with LCP/BIOS plates was applied for low-energy fractures; in the case of high-energy fractures, the first stage included deployment of external fixation devices followed by their subsequent replacement during delayed minimally invasive osteosynthesis. The essence of the improvement is the pursuit of simultaneous minimally invasive surgery using current plates for osteosynthesis and preventive immunotherapy of immune dysfunction to eliminate infectious complications. As a result, we obtained a 2-fold decrease in surgical invasiveness (r ≤ 0.01) and hospital stay (r ≤ 0.01). Repeated osteosynthesis was not required. Also, 4-fold and 40-fold reductions in infectious and noninfectious complications, respectively, were observed. This management was accompanied by a reduced rehabilitation time and significantly improved quality of life. An improved technique and algorithm for the complex treatment of diaphyseal gunshot fractures of long bones of the limbs are described. Early minimally invasive, functionally stable osteosynthesis with modern implants and nonspecific immune prevention of infectious complications are more effective and economically justified compared with conventional treatment with external fixation devices without immunoactive therapy.

  13. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing the dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on a combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only a 0.5 dB to 1.5 dB loss in PSNR.
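
    One plausible reading of the MSB-truncated SAD is sketched below, with the high-order bits of each absolute difference clipped (saturated) rather than summed exactly; keep_bits is an assumed design parameter, not a value from the dissertation:

      import numpy as np

      def sad_msb_truncated(block_a, block_b, keep_bits=4):
          # Clip each absolute difference to its keep_bits low-order bits
          # (saturating); large ADs rarely decide the winning block, so the
          # approximation error in motion estimation stays small.
          ad = np.abs(block_a.astype(int) - block_b.astype(int))
          return int(np.minimum(ad, (1 << keep_bits) - 1).sum())

      rng = np.random.default_rng(0)
      a = rng.integers(0, 256, (8, 8))
      b = rng.integers(0, 256, (8, 8))
      print(sad_msb_truncated(a, b), int(np.abs(a - b).sum()))  # truncated vs exact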

  14. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, resulting in a set of partial differential equations and boundary value conditions for solving these equations. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional integral derivative (PID) controller tuning technique. Finally, a brute force look-up table based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
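
    The Ziegler-Nichols step of the heuristic approach maps the measured ultimate gain and oscillation period to PID gains; the classic tuning table is a one-liner (the numeric test values are arbitrary):

      def ziegler_nichols_pid(ku, tu):
          # Classic Ziegler-Nichols rules from the ultimate gain Ku and the
          # oscillation period Tu measured at the stability boundary.
          kp = 0.6 * ku
          ki = 2.0 * kp / tu   # equivalently Ti = Tu / 2
          kd = kp * tu / 8.0   # equivalently Td = Tu / 8
          return kp, ki, kd

      print(ziegler_nichols_pid(ku=4.0, tu=0.5))  # (2.4, 9.6, 0.15)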

  15. An Injury Severity-, Time Sensitivity-, and Predictability-Based Advanced Automatic Crash Notification Algorithm Improves Motor Vehicle Crash Occupant Triage.

    PubMed

    Stitzel, Joel D; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Schoell, Samantha L; Doud, Andrea N; Martin, R Shayn; Meredith, J Wayne

    2016-06-01

    Advanced Automatic Crash Notification algorithms use vehicle telemetry measurements to predict risk of serious motor vehicle crash injury. The objective of the study was to develop an Advanced Automatic Crash Notification algorithm to reduce response time, increase triage efficiency, and improve patient outcomes by minimizing undertriage (<5%) and overtriage (<50%), as recommended by the American College of Surgeons. A list of injuries associated with a patient's need for Level I/II trauma center treatment known as the Target Injury List was determined using an approach based on 3 facets of injury: severity, time sensitivity, and predictability. Multivariable logistic regression was used to predict an occupant's risk of sustaining an injury on the Target Injury List based on crash severity and restraint factors for occupants in the National Automotive Sampling System - Crashworthiness Data System 2000-2011. The Advanced Automatic Crash Notification algorithm was optimized and evaluated to minimize triage rates, per American College of Surgeons recommendations. The following rates were achieved: <50% overtriage and <5% undertriage in side impacts and 6% to 16% undertriage in other crash modes. Nationwide implementation of our algorithm is estimated to improve triage decisions for 44% of undertriaged and 38% of overtriaged occupants. Annually, this translates to more appropriate care for >2,700 seriously injured occupants and reduces unnecessary use of trauma center resources for >162,000 minimally injured occupants. The algorithm could be incorporated into vehicles to inform emergency personnel of recommended motor vehicle crash triage decisions. Lower under- and overtriage was achieved, and nationwide implementation of the algorithm would yield improved triage decision making for an estimated 165,000 occupants annually. Copyright © 2016. Published by Elsevier Inc.
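
    Structurally, the algorithm is a logistic risk model plus a triage threshold tuned to the under/overtriage targets; a hypothetical sketch (the features, weights, intercept and threshold below are invented placeholders, not the published model):

      import numpy as np

      def triage(features, weights, intercept, threshold):
          # Probability that the occupant's injuries fall on the Target
          # Injury List, then a thresholded trauma-center recommendation.
          risk = 1.0 / (1.0 + np.exp(-(intercept + features @ weights)))
          return risk, bool(risk >= threshold)

      # Hypothetical crash-severity/restraint features and coefficients.
      features = np.array([45.0, 1.0, 0.0])   # e.g. delta-v, belt use, rollover
      weights = np.array([0.08, -1.1, 1.9])
      print(triage(features, weights, intercept=-4.0, threshold=0.2))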

  16. An algorithm for improving the quality of structural images of turbid media in endoscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.

    2018-04-01

    An algorithm for reconstructing high-quality OCT structural images in endoscopic optical coherence tomography of biological tissue is described. The key features of the presented algorithm are: (1) raster scanning with averaging of adjacent A-scans and pixels; (2) speckle level minimization. The described algorithm can be used in gastroenterology, urology, gynecology, and otorhinolaryngology for diagnostics of mucous membranes and skin in vivo and in situ.

  17. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured-light 3D reconstruction. Structured-light images contain a pattern of light and shadows projected on the surface of the object, which is captured by the sensor at very high resolution. Our algorithm is concerned with compressing such images to a high degree with minimal loss, without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
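
    A loose sketch of the transform cascade described above (level-1 DWT, block DCT of LL split into a DC-matrix plus AC coefficients, second DWT on the DC-matrix), assuming the PyWavelets and SciPy packages; the paper's Minimize-Matrix-Size coding of the remaining matrices is not reproduced:

      import numpy as np
      import pywt
      from scipy.fft import dctn

      def transform_cascade(image, block=8):
          # Level-1 DWT, then block-wise DCT of the LL band; the DC-matrix
          # collects the (0, 0) coefficient of each block, and a second DWT
          # is applied to it, mirroring the cascade in the abstract.
          LL, details1 = pywt.dwt2(image, "db1")
          h = (LL.shape[0] // block) * block
          w = (LL.shape[1] // block) * block
          LL = LL[:h, :w]
          blocks = LL.reshape(h // block, block, w // block, block).swapaxes(1, 2)
          coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
          dc_matrix = coeffs[..., 0, 0]
          ac_coeffs = coeffs.copy()
          ac_coeffs[..., 0, 0] = 0.0
          LL2, details2 = pywt.dwt2(dc_matrix, "db1")
          return (LL2, details2), ac_coeffs, details1

      rng = np.random.default_rng(0)
      (LL2, det2), ac, det1 = transform_cascade(rng.random((64, 64)))
      print(LL2.shape, ac.shape)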

  18. Protein Frustratometer 2: a tool to localize energetic frustration in protein molecules, now with electrostatics.

    PubMed

    Parra, R Gonzalo; Schafer, Nicholas P; Radusky, Leandro G; Tsai, Min-Yeh; Guzovsky, A Brenda; Wolynes, Peter G; Ferreiro, Diego U

    2016-07-08

    The protein frustratometer is an energy landscape theory-inspired algorithm that aims at localizing and quantifying the energetic frustration present in protein molecules. Frustration is a useful concept for analyzing proteins' biological behavior: it compares the energy distribution of the native state with respect to structural decoys. The network of minimally frustrated interactions encompasses the folding core of the molecule. Sites of high local frustration often correlate with functional regions such as binding sites and regions involved in allosteric transitions. We present here an upgraded version of a webserver that measures local frustration. The new implementation, which allows the inclusion of electrostatic energy terms that are important to interactions with nucleic acids, is significantly faster than the previous version, enabling the analysis of large macromolecular complexes within a user-friendly interface. The webserver is freely available at URL: http://frustratometer.qb.fcen.uba.ar. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Designing train-speed trajectory with energy efficiency and service quality

    NASA Astrophysics Data System (ADS)

    Jia, Jiannan; Yang, Kai; Yang, Lixing; Gao, Yuan; Li, Shukai

    2018-05-01

    With the development of automatic train operations, optimal trajectory design is significant for the performance of train operations in railway transportation systems. Considering energy efficiency and service quality, this article formulates a bi-objective train-speed trajectory optimization model to simultaneously minimize the energy consumption and travel time in an inter-station section. This article is distinct from previous studies in that more sophisticated train driving strategies, characterized by the acceleration/deceleration gear, the cruising speed, and the speed-shift site, are specifically considered. To obtain an optimal train-speed trajectory with an equal satisfaction degree on both objectives, a fuzzy linear programming approach is applied to reformulate the objectives. In addition, a genetic algorithm is developed to solve the proposed train-speed trajectory optimization problem. Finally, a series of numerical experiments based on a real-world instance of the Beijing-Tianjin Intercity Railway are implemented to illustrate the practicability of the proposed model as well as the effectiveness of the solution methodology.

  20. Electrical Heart Defibrillation with Ion Channel Blockers

    NASA Astrophysics Data System (ADS)

    Feeney, Erin; Clark, Courtney; Puwal, Steffan

    Heart disease is the leading cause of mortality in the United States. Rotary electrical waves within heart muscle underlie the electrical disorders of the heart termed fibrillation; their propagation and breakup lead to a complex distribution of electrical activation of the tissue (and of the ensuing mechanical contraction that comes from electrical activation). Successful heart defibrillation has, thus far, been limited to delivering large electrical shocks to activate the entire heart and reset its electrical activity. In theory, defibrillation of a system this nonlinear should be possible with small electrical perturbations (stimulations). A successful algorithm for such a low-energy defibrillator continues to elude researchers. We propose to examine in silico whether low-energy electrical stimulations can be combined with antiarrhythmic, ion channel-blocking drugs to achieve a higher rate of defibrillation, and whether the antiarrhythmic drugs should be delivered before or after electrical stimulation has commenced. Progress toward a more successful, low-energy defibrillator will greatly reduce the adverse effects noted in defibrillation and will assist in the development of pediatric defibrillators.
