Sample records for utility maximization problem

  1. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    PubMed Central

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868

  2. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
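
    A minimal sketch of the comparison described above, with invented parcel utilities and costs: a benefit/cost greedy heuristic versus the exact solution of a budget-constrained utility maximization (brute force here, standing in for the integer program). On this toy instance the optimum beats the greedy selection by about 11%, the same order as the gains the study reports.

    ```python
    from itertools import combinations

    # Hypothetical parcels as (utility, cost) pairs and a toy budget;
    # all numbers are invented for illustration.
    parcels = [(5, 3), (10, 6), (4, 3)]
    budget = 6

    def greedy(parcels, budget):
        """Benefit/cost greedy heuristic."""
        order = sorted(range(len(parcels)),
                       key=lambda i: parcels[i][0] / parcels[i][1], reverse=True)
        spent = total = 0
        for i in order:
            u, c = parcels[i]
            if spent + c <= budget:
                spent, total = spent + c, total + u
        return total

    def optimal(parcels, budget):
        """Brute-force optimum, standing in for the integer program."""
        return max(sum(parcels[i][0] for i in s)
                   for r in range(len(parcels) + 1)
                   for s in combinations(range(len(parcels)), r)
                   if sum(parcels[i][1] for i in s) <= budget)

    g, o = greedy(parcels, budget), optimal(parcels, budget)
    print(f"greedy={g}, optimal={o}, gain={100 * (o - g) / g:.1f}%")  # ~11%
    ```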

  3. Limit order placement as an utility maximization problem and the origin of power law distribution of limit order prices

    NASA Astrophysics Data System (ADS)

    Lillo, F.

    2007-02-01

    I consider the problem of the optimal limit order price of a financial asset in the framework of the maximization of the utility function of the investor. The analytical solution of the problem gives insight on the origin of the recently empirically observed power law distribution of limit order prices. In the framework of the model, the most likely proximate cause of this power law is a power law heterogeneity of traders' investment time horizons.

  4. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it

    2013-02-15

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  5. A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter

    DTIC Science & Technology

    2015-06-27

    function is the information theoretic part of the problem and aims for entropy maximization, while the second one arises from the constraint in the...objective functions in conflict. ...theory. For the sake of completeness and clarity, we also summarize how each concept is utilized later. Entropy: A random variable is statistically...

  6. Influencing Busy People in a Social Network

    PubMed Central

    Sarkar, Kaushik; Sundaram, Hari

    2016-01-01

    We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions and resource constraints. Second, we show that the multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naïve approach. PMID:27711127

  7. Influencing Busy People in a Social Network.

    PubMed

    Sarkar, Kaushik; Sundaram, Hari

    2016-01-01

    We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions and resource constraints. Second, we show that the multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naïve approach.
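
    The tractability argument above rests on submodularity. A minimal sketch with an invented graph and a one-hop coverage objective (a monotone submodular stand-in for the paper's resource-constrained behavior model): greedily add the seed with the largest marginal gain, which guarantees a (1 - 1/e) approximation for such objectives.

    ```python
    # Hypothetical adjacency lists; the real model scores multiple costly
    # behaviors, not simple coverage.
    graph = {
        'a': ['b', 'c', 'd'], 'b': ['a', 'c'], 'c': ['a', 'b', 'e'],
        'd': ['a', 'f'], 'e': ['c', 'f'], 'f': ['d', 'e'],
    }

    def coverage(seeds):
        """Nodes within one hop of the seed set (monotone, submodular)."""
        covered = set(seeds)
        for s in seeds:
            covered.update(graph[s])
        return len(covered)

    def greedy_seeds(k):
        seeds = []
        for _ in range(k):
            best = max((n for n in graph if n not in seeds),
                       key=lambda n: coverage(seeds + [n]))
            seeds.append(best)
        return seeds

    print(greedy_seeds(2))  # ['a', 'e'] covers the whole toy graph
    ```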

  8. Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.

    PubMed

    Lennartsson, Jan; Lindberg, Carl

    2015-01-01

    To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification to this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
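
    For context, the classical Merton problem referenced here has a closed-form solution in a Black-Scholes market: an investor with constant relative risk aversion γ holds a constant fraction of wealth in the risky asset,

    ```latex
    \pi^{*} = \frac{\mu - r}{\gamma\,\sigma^{2}},
    ```

    where μ is the asset drift, r the risk-free rate, and σ the volatility. The paper's benchmark-relative, exponential-utility setting modifies the details, but this constant-proportion structure is the point of comparison.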

  9. Equilibrium in a Production Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiarolla, Maria B., E-mail: maria.chiarolla@uniroma1.it; Haussmann, Ulrich G., E-mail: uhaus@math.ubc.ca

    2011-06-15

    Consider a closed production-consumption economy with multiple agents and multiple resources. The resources are used to produce the consumption good. The agents derive utility from holding resources as well as consuming the good produced. They aim to maximize their utility while the manager of the production facility aims to maximize profits. With the aid of a representative agent (who has a multivariable utility function) it is shown that an Arrow-Debreu equilibrium exists. In so doing we establish technical results that will be used to solve the stochastic dynamic problem (a case with infinite dimensional commodity space so the General Equilibrium Theory does not apply) elsewhere.

  10. On Reverse Stackelberg Game and Optimal Mean Field Control for a Large Population of Thermostatically Controlled Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This paper studies a multi-stage pricing problem for a large population of thermostatically controlled loads. The problem is formulated as a reverse Stackelberg game that involves a mean field game in the hierarchy of decision making. In particular, in the higher level, a coordinator needs to design a pricing function to motivate individual agents to maximize the social welfare. In the lower level, the individual utility maximization problem of each agent forms a mean field game coupled through the pricing function that depends on the average of the population control/state. We derive the solution to the reverse Stackelberg game by connecting it to a team problem and the competitive equilibrium, and we show that this solution corresponds to the optimal mean field control that maximizes the social welfare. Realistic simulations are presented to validate the proposed methods.

  11. Medical Problem-Solving: A Critique of the Literature.

    ERIC Educational Resources Information Center

    McGuire, Christine H.

    1985-01-01

    Prescriptive decision analysis of medical problem-solving has been based on decision theory that involves the calculation and manipulation of complex probability and utility values to arrive at optimal decisions that will maximize patient benefits. The studies offer a methodology for improving clinical judgment. (Author/MLW)

  12. On the Teaching of Portfolio Theory.

    ERIC Educational Resources Information Center

    Biederman, Daniel K.

    1992-01-01

    Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…

  13. Effective Teaching of Economics: A Constrained Optimization Problem?

    ERIC Educational Resources Information Center

    Hultberg, Patrik T.; Calonge, David Santandreu

    2017-01-01

    One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…

  14. A note on the modelling of circular smallholder migration.

    PubMed

    Bigsten, A

    1988-01-01

    "It is argued that circular migration [in Africa] should be seen as an optimization problem, where the household allocates its labour resources across activities, including work which requires migration, so as to maximize the joint family utility function. The migration problem is illustrated in a simple diagram, which makes it possible to analyse economic aspects of migration." excerpt

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, He; Sun, Yannan; Carroll, Thomas E.

    We propose a coordination algorithm for cooperative power allocation among a collection of commercial buildings within a campus. We introduce thermal and power models of a typical commercial building Heating, Ventilation, and Air Conditioning (HVAC) system, and utilize model predictive control to characterize their power flexibility. The power allocation problem is formulated as a cooperative game using the Nash Bargaining Solution (NBS) concept, in which buildings collectively maximize the product of their utilities subject to their local flexibility constraints and a total power limit set by the campus coordinator. To solve the optimal allocation problem, a distributed protocol is designed using dual decomposition of the Nash bargaining problem. Numerical simulations are performed to demonstrate the efficacy of our proposed allocation method.
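
    A rough sketch of the bargaining step under stated assumptions: maximizing the product of utilities equals maximizing the sum of their logarithms, and the coupling total-power constraint can be priced out by dual decomposition, each building solving a local subproblem against the coordinator's price. The utility shape, coefficients, and step size are invented; scipy is assumed available.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Nash bargaining allocation by dual decomposition: maximize
    # sum_i log u_i(p_i) subject to sum_i p_i <= P_total.
    a = np.array([0.8, 0.5, 0.3])   # hypothetical comfort coefficients
    P_total = 10.0

    def local_best(i, lam):
        """Each building maximizes its own log-utility minus the price term."""
        obj = lambda p: -(np.log(1.0 - np.exp(-a[i] * p)) - lam * p)
        return minimize_scalar(obj, bounds=(1e-6, P_total), method='bounded').x

    lam = 0.1
    for _ in range(300):            # coordinator: subgradient update on price
        p = np.array([local_best(i, lam) for i in range(len(a))])
        lam = max(1e-9, lam + 0.05 * (p.sum() - P_total))

    print(np.round(p, 2), round(float(p.sum()), 2))   # allocation sums to ~10
    ```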

  16. Product-line selection and pricing with remanufacturing under availability constraints

    NASA Astrophysics Data System (ADS)

    Aras, Necati; Esenduran, Gökçe; Altinel, I. Kuban

    2004-12-01

    Product line selection and pricing are two crucial decisions for the profitability of a manufacturing firm. Remanufacturing, on the other hand, may be a profitable strategy that captures the remaining value in used products. In this paper we develop a mixed-integer nonlinear programming model from the perspective of an original equipment manufacturer (OEM). The objective of the OEM is to select products to manufacture and remanufacture among a set of given alternatives and simultaneously determine their prices so as to maximize its profit. It is assumed that the probability a customer selects a product is proportional to its utility and inversely proportional to its price. The utility of a product is an increasing function of its perceived quality. In our base model, products are discriminated by their unit production costs and utilities. We also analyze a case where remanufacturing is limited by the available quantity of collected remanufacturable products. We show that the resulting problem is decomposed into the pricing and product line selection subproblems. The pricing problem is solved by a variant of the simplex search procedure which can also handle constraints, while complete enumeration and a genetic algorithm are used for the solution of the product line selection problem. A number of experiments are carried out to identify conditions under which it is economically viable for the firm to sell remanufactured products. We also determine the optimal utility and unit production cost values of a remanufactured product, which maximizes the total profit of the OEM.

  17. Utilizing Partnerships to Maximize Resources in College Counseling Services

    ERIC Educational Resources Information Center

    Stewart, Allison; Moffat, Meridith; Travers, Heather; Cummins, Douglas

    2015-01-01

    Research indicates an increasing number of college students are experiencing severe psychological problems that are impacting their academic performance. However, many colleges and universities operate with constrained budgets that limit their ability to provide adequate counseling services for their student population. Moreover, accessing…

  18. Program Monitoring: Problems and Cases.

    ERIC Educational Resources Information Center

    Lundin, Edward; Welty, Gordon

    Designed as the major component of a comprehensive model of educational management, a behavioral model of decision making is presented that approximates the synoptic model of neoclassical economic theory. The synoptic model defines all possible alternatives and provides a basis for choosing that alternative which maximizes expected utility. The…

  19. A Bayesian Approach to Interactive Retrieval

    ERIC Educational Resources Information Center

    Tague, Jean M.

    1973-01-01

    A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…

  20. Medical benefits from the NASA biomedical applications program

    NASA Technical Reports Server (NTRS)

    Sigmon, J. L.

    1974-01-01

    To achieve its goals the NASA Biomedical Applications Program performs four basic tasks: (1) identification of major medical problems which lend themselves to solution by relevant aerospace technology; (2) identification of relevant aerospace technology which can be applied to those problems; (3) application of that technology to demonstrate its feasibility as a real solution to the identified problems; and (4) motivation of the industrial community to manufacture and market the identified solutions, so as to maximize the utilization of aerospace solutions by the biomedical community.

  1. Distributed Control with Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Wheeler, Kevin R.; Tumer, Kagan

    1998-01-01

    We consider systems of interacting reinforcement learning (RL) algorithms that do not work at cross purposes, in that their collective behavior maximizes a global utility function. We call such systems COllective INtelligences (COINs). We present the theory of designing COINs. Then we present experiments validating that theory in the context of two distributed control problems: We show that COINs perform near-optimally in a difficult variant of Arthur's bar problem [Arthur] (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance in the master-slave problem.

  2. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

    The data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing etc. This computing technique tries to utilize the available resources in order to provide services. Hence, maintaining high resource utilization without wasteful power consumption has become a challenging task for researchers. In this paper we propose the direct guidance ant colony system for effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm has been compared with the existing ant colony approach for solving the virtual machine placement problem, and it proves to provide better results than the existing technique.

  3. Tripartite-to-Bipartite Entanglement Transformation by Stochastic Local Operations and Classical Communication and the Structure of Matrix Spaces

    NASA Astrophysics Data System (ADS)

    Li, Yinan; Qiao, Youming; Wang, Xin; Duan, Runyao

    2018-03-01

    We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings, utilizing powerful results from the structure of matrix spaces. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. Notice that such tripartite states admit strict advantages in tripartite-to-bipartite SLOCC transformation when multiple copies are provided. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate. Computing this rate turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. These formulas turn out to be powerful enough to give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.

  4. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-01-07

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.

  5. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
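
    A toy receding-horizon version of the look-ahead dispatch idea, with invented costs, forecasts, and storage limits: at each step a small LP meets forecast net load (load minus wind) using diesel plus storage at minimum fuel cost, the first decision is applied, and the horizon rolls forward. A structural sketch of MPC dispatch only, not the paper's model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    H, T = 6, 24                                        # horizon, simulated steps
    net_load = 5 + 3 * np.sin(np.arange(T + H) / 3.0)   # invented forecast
    soc, soc_max, p_max = 5.0, 10.0, 4.0                # storage state and limits

    plan = []
    for t in range(T):
        # variables: diesel g_0..g_{H-1} (unit fuel cost), discharge d_0..d_{H-1}
        c = np.concatenate([np.ones(H), np.zeros(H)])
        A_eq = np.hstack([np.eye(H), np.eye(H)])        # g_k + d_k = net load
        b_eq = net_load[t:t + H]
        tril = np.tril(np.ones((H, H)))                 # cumulative discharge
        A_ub = np.vstack([np.hstack([np.zeros((H, H)), tril]),
                          np.hstack([np.zeros((H, H)), -tril])])
        b_ub = np.concatenate([np.full(H, soc), np.full(H, soc_max - soc)])
        bounds = [(0, None)] * H + [(-p_max, p_max)] * H
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds)
        g0, d0 = res.x[0], res.x[H]                     # apply first decision only
        soc = min(soc_max, max(0.0, soc - d0))
        plan.append(g0)
    print(np.round(plan[:6], 2))
    ```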

  6. Active inference and epistemic value.

    PubMed

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
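
    In one standard form of this decomposition (symbols as in the active inference literature: π a policy, s hidden states, o outcomes, Q the approximate posterior, P the prior preferences), the quality, i.e. negative expected free energy, of a policy splits into epistemic and extrinsic terms:

    ```latex
    -G(\pi) =
    \underbrace{\mathbb{E}_{Q(o \mid \pi)}\Big[ D_{\mathrm{KL}}\big[ Q(s \mid o, \pi) \,\|\, Q(s \mid \pi) \big] \Big]}_{\text{epistemic value (information gain)}}
    +
    \underbrace{\mathbb{E}_{Q(o \mid \pi)}\big[ \ln P(o) \big]}_{\text{extrinsic value (expected utility)}}
    ```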

  7. Cournot games with network effects for electric power markets

    NASA Astrophysics Data System (ADS)

    Spezia, Carl John

    The electric utility industry is moving from regulated monopolies with protected service areas to an open market with many wholesale suppliers competing for consumer load. This market is typically modeled by a Cournot game oligopoly where suppliers compete by selecting profit maximizing quantities. The classical Cournot model can produce multiple solutions when the problem includes typical power system constraints. This work presents a mathematical programming formulation of oligopoly that produces unique solutions when constraints limit the supplier outputs. The formulation casts the game as a supply maximization problem with power system physical limits and supplier incremental profit functions as constraints. The formulation gives Cournot solutions identical to other commonly used algorithms when suppliers operate within the constraints. Numerical examples demonstrate the feasibility of the theory. The results show that the maximization formulation will give system operators more transmission capacity when compared to the actions of suppliers in a classical constrained Cournot game. The results also show that the profitability of suppliers in constrained networks depends on their location relative to the consumers' load concentration.

  8. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
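
    A minimal sketch of the multiplicative EM (Richardson-Lucy type) update for Poisson counts, whose iterates stay positive by construction. The response matrix and data are toy stand-ins for RHESSI's modulation response and count profiles, and the stopping test is only a crude surrogate for the paper's C-statistic rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, size=(60, 20))      # response: image -> counts
    counts = rng.poisson(A @ rng.uniform(0.0, 5.0, size=20))

    f = np.ones(20)                               # positive initial image
    norm = A.sum(axis=0)
    for it in range(500):
        pred = np.maximum(A @ f, 1e-12)
        f *= (A.T @ (counts / pred)) / norm       # EM update, keeps f >= 0
        pred = np.maximum(A @ f, 1e-12)
        pos = counts > 0                          # Cash statistic of the fit
        cash = 2.0 * (np.sum(pred - counts)
                      + np.sum(counts[pos] * np.log(counts[pos] / pred[pos])))
        if cash < counts.size:                    # stop once statistically acceptable
            break
    print(it, round(float(cash), 1))
    ```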

  9. Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach

    NASA Technical Reports Server (NTRS)

    Chien, S.; Gratch, J.

    1994-01-01

    One drawback to using constraint-propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
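
    A compact sketch of the relaxation loop on an invented antenna-scheduling toy: constraints carry importance weights, and whenever the active set is unsatisfiable the least important remaining constraint is dropped and the solver retries, returning a satisficing schedule.

    ```python
    # A schedule is (start_hour, duration); all constraints are invented.
    schedules = [(h, d) for h in range(24) for d in (1, 2, 3)]
    constraints = [                          # (importance weight, predicate)
        (10, lambda s: 8 <= s[0] <= 16),     # daylight pass
        (5,  lambda s: s[1] >= 3),           # long contact
        (2,  lambda s: s[0] % 2 == 0),       # even start hour
        (1,  lambda s: s[1] <= 2),           # short contact (conflicts above)
    ]

    def relax_and_solve(schedules, constraints):
        """Drop the cheapest constraint until something satisfies the rest."""
        active = sorted(constraints, key=lambda wc: wc[0], reverse=True)
        while active:
            feasible = [s for s in schedules if all(p(s) for _, p in active)]
            if feasible:
                return feasible[0], [w for w, _ in active]
            active.pop()                     # relax least important constraint
        return None, []

    print(relax_and_solve(schedules, constraints))
    # -> ((8, 3), [10, 5, 2]) after relaxing the weight-1 constraint
    ```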

  10. Network efficient power control for wireless communication systems.

    PubMed

    Campos-Delgado, Daniel U; Luna-Rivera, Jose Martin; Martinez-Sánchez, C J; Gutierrez, Carlos A; Tecpanecatl-Xihuitl, J L

    2014-01-01

    We introduce a two-loop power control that allows an efficient use of the overall power resources for commercial wireless networks based on cross-layer optimization. This approach maximizes the network's utility in the outer loop as a function of the averaged signal to interference-plus-noise ratio (SINR) by adaptively considering the changes in the network characteristics. For this purpose, the concavity property of the utility function was verified with respect to the SINR, and an iterative search was proposed with guaranteed convergence. In addition, the outer loop is in charge of selecting the detector that minimizes the overall power consumption (transmission and detection). Next, the inner loop implements feedback power control in order to achieve the optimal SINR in the transmissions despite channel variations and roundtrip delays. In our proposal, the utility maximization process, detector selection, and feedback power control are decoupled problems, and as a result, these strategies are implemented at two different time scales in the two-loop framework. Simulation results show that substantial utility gains may be achieved by improving the power management in the wireless network.

  11. Network Efficient Power Control for Wireless Communication Systems

    PubMed Central

    Campos-Delgado, Daniel U.; Luna-Rivera, Jose Martin; Martinez-Sánchez, C. J.; Gutierrez, Carlos A.; Tecpanecatl-Xihuitl, J. L.

    2014-01-01

    We introduce a two-loop power control that allows an efficient use of the overall power resources for commercial wireless networks based on cross-layer optimization. This approach maximizes the network's utility in the outer loop as a function of the averaged signal to interference-plus-noise ratio (SINR) by adaptively considering the changes in the network characteristics. For this purpose, the concavity property of the utility function was verified with respect to the SINR, and an iterative search was proposed with guaranteed convergence. In addition, the outer loop is in charge of selecting the detector that minimizes the overall power consumption (transmission and detection). Next, the inner loop implements feedback power control in order to achieve the optimal SINR in the transmissions despite channel variations and roundtrip delays. In our proposal, the utility maximization process, detector selection, and feedback power control are decoupled problems, and as a result, these strategies are implemented at two different time scales in the two-loop framework. Simulation results show that substantial utility gains may be achieved by improving the power management in the wireless network. PMID:24683350

  12. Time-Extended Policies in Multi-Agent Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian K.

    2004-01-01

    Reinforcement learning methods perform well in many domains where a single agent needs to take a sequence of actions to perform a task. These methods use sequences of single-time-step rewards to create a policy that tries to maximize a time-extended utility, which is a (possibly discounted) sum of these rewards. In this paper we build on our previous work showing how these methods can be extended to a multi-agent environment where each agent creates its own policy that works towards maximizing a time-extended global utility over all agents' actions. We show improved methods for creating time-extended utilities for the agents that are both "aligned" with the global utility and "learnable." We then show how to create single-time-step rewards while avoiding the pitfall of having rewards aligned with the global reward leading to utilities not aligned with the global utility. Finally, we apply these reward functions to the multi-agent Gridworld problem. We explicitly quantify a utility's learnability and alignment, and show that reinforcement learning agents using the prescribed reward functions successfully trade off learnability and alignment. As a result they outperform both global (e.g., team games) and local (e.g., "perfectly learnable") reinforcement learning solutions by as much as an order of magnitude.
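
    The "aligned" and "learnable" utilities in this line of work are commonly built as difference utilities; in one common construction (an illustration, since the abstract does not spell out the form), agent i's reward is the global utility minus a counterfactual evaluation in which agent i's action is removed or clamped to a fixed default c_i:

    ```latex
    D_i(z) = G(z) - G\big(z_{-i} + c_i\big)
    ```

    Subtracting the counterfactual term strips out noise from other agents' actions (improving learnability) without changing the sign of agent i's effect on G (preserving alignment).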

  13. Optimal planning for the sustainable utilization of municipal solid waste.

    PubMed

    Santibañez-Aguilar, José Ezequiel; Ponce-Ortega, José María; Betzabe González-Campos, J; Serna-González, Medardo; El-Halwagi, Mahmoud M

    2013-12-01

    The increasing generation of municipal solid waste (MSW) is a major problem, particularly for large urban areas with insufficient landfill capacities and inefficient waste management systems. Several options associated with the supply chain for implementing a MSW management system are available; however, to determine the optimal solution, several technical, economic, environmental and social aspects must be considered. Therefore, this paper proposes a mathematical programming model for the optimal planning of the supply chain associated with the MSW management system to maximize the economic benefit while accounting for technical and environmental issues. The optimization model simultaneously selects the processing technologies and their location, the distribution of wastes from cities as well as the distribution of products to markets. The problem was formulated as a multi-objective mixed-integer linear programming problem to maximize the profit of the supply chain and the amount of recycled wastes, where the results are shown through Pareto curves that trade off economic and environmental aspects. The proposed approach is applied to a case study for the west-central part of Mexico to consider the integration of MSW from several cities to yield useful products. The results show that an integrated utilization of MSW can provide economic, environmental and social benefits.

  14. Learning in engineered multi-agent systems

    NASA Astrophysics Data System (ADS)

    Menon, Anup

    Consider the problem of maximizing the total power produced by a wind farm. Due to aerodynamic interactions between wind turbines, each turbine maximizing its individual power---as is the case in present-day wind farms---does not lead to optimal farm-level power capture. Further, there are no good models to capture the said aerodynamic interactions, rendering model based optimization techniques ineffective. Thus, model-free distributed algorithms are needed that help turbines adapt their power production on-line so as to maximize farm-level power capture. Motivated by such problems, the main focus of this dissertation is a distributed model-free optimization problem in the context of multi-agent systems. The set-up comprises of a fixed number of agents, each of which can pick an action and observe the value of its individual utility function. An individual's utility function may depend on the collective action taken by all agents. The exact functional form (or model) of the agent utility functions, however, are unknown; an agent can only measure the numeric value of its utility. The objective of the multi-agent system is to optimize the welfare function (i.e. sum of the individual utility functions). Such a collaborative task requires communications between agents and we allow for the possibility of such inter-agent communications. We also pay attention to the role played by the pattern of such information exchange on certain aspects of performance. We develop two algorithms to solve this problem. The first one, engineered Interactive Trial and Error Learning (eITEL) algorithm, is based on a line of work in the Learning in Games literature and applies when agent actions are drawn from finite sets. While in a model-free setting, we introduce a novel qualitative graph-theoretic framework to encode known directed interactions of the form "which agents' action affect which others' payoff" (interaction graph). We encode explicit inter-agent communications in a directed graph (communication graph) and, under certain conditions, prove convergence of agent joint action (under eITEL) to the welfare optimizing set. The main condition requires that the union of interaction and communication graphs be strongly connected; thus the algorithm combines an implicit form of communication (via interactions through utility functions) with explicit inter-agent communications to achieve the given collaborative goal. This work has kinship with certain evolutionary computation techniques such as Simulated Annealing; the algorithm steps are carefully designed such that it describes an ergodic Markov chain with a stationary distribution that has support over states where agent joint actions optimize the welfare function. The main analysis tool is perturbed Markov chains and results of broader interest regarding these are derived as well. The other algorithm, Collaborative Extremum Seeking (CES), uses techniques from extremum seeking control to solve the problem when agent actions are drawn from the set of real numbers. In this case, under the assumption of existence of a local minimizer for the welfare function and a connected undirected communication graph between agents, a result regarding convergence of joint action to a small neighborhood of a local optimizer of the welfare function is proved. 
Since extremum seeking control uses a simultaneous gradient estimation-descent scheme, gradient information available in the continuous action space formulation is exploited by the CES algorithm to yield improved convergence speeds. The effectiveness of this algorithm for the wind farm power maximization problem is evaluated via simulations. Lastly, we turn to a different question regarding role of the information exchange pattern on performance of distributed control systems by means of a case study for the vehicle platooning problem. In the vehicle platoon control problem, the objective is to design distributed control laws for individual vehicles in a platoon (or a road-train) that regulate inter-vehicle distances at a specified safe value while the entire platoon follows a leader-vehicle. While most of the literature on the problem deals with some inadequacy in control performance when the information exchange is of the nearest neighbor-type, we consider an arbitrary graph serving as information exchange pattern and derive a relationship between how a certain indicator of control performance is related to the information pattern. Such analysis helps in understanding qualitative features of the `right' information pattern for this problem.
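
    A minimal single-agent extremum seeking loop, illustrating the simultaneous gradient estimation-descent idea that CES builds on: inject a sinusoidal probe, demodulate the measured utility by the same sinusoid to estimate the gradient, and ascend. The utility J, gains, and frequency are invented; the actual CES algorithm adds inter-agent consensus terms.

    ```python
    import numpy as np

    J = lambda u: -(u - 3.0) ** 2      # utility, unknown to the agent
    u_hat, a, w, k, dt = 0.0, 0.2, 5.0, 0.8, 0.01
    for n in range(20000):
        probe = a * np.sin(w * n * dt)
        y = J(u_hat + probe)           # only utility values are measurable
        u_hat += k * y * np.sin(w * n * dt) * dt   # demodulate and ascend
    print(round(u_hat, 2))             # settles near the maximizer u = 3
    ```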

  15. A Comparison of Team-Based Learning Formats: Can We Minimize Stress While Maximizing Results?

    ERIC Educational Resources Information Center

    Miller, Cynthia J.; Falcone, Jeff C.; Metz, Michael J.

    2015-01-01

    Team-Based Learning (TBL) is a collaborative teaching method in which students utilize course content to solve challenging problems. A modified version of TBL is used at the University of Louisville School of Medicine. Students complete questions on the Individual Readiness Assurance Test (iRAT) and then gather in pre-assigned groups to retake the…

  16. Diameter Versus Mass in the Development of the Orion Life Support Umbilical: A Case Study in Systems Engineering

    NASA Technical Reports Server (NTRS)

    Jordan, Nicole; Falconi, Eric; Barido, Richard; Lewis, John

    2009-01-01

    Systems engineering could also be called the art of compromise. At its heart, systems engineering seeks to find that solution which maximizes the utility of the system, usually compromising the performance of each individual subsystem. While seemingly straightforward, systems engineering methodology is complicated when the utility to be maximized is unclear and the costs to each individual subsystem are not - or not easily - quantifiable. In this paper, we explore one such systems engineering problem within the Constellation Program as a case study in applied systems engineering. During suited operations, astronauts within Orion will be connected to an umbilical to receive and return breathing gas. The pressure drop associated with this umbilical must be overcome by the Orion vehicle. A smaller umbilical, which is desirable for crew operations, means a higher pressure drop, resulting in additional mass and power for the vehicle. We outline the technical considerations in the development of this integrated system and discuss the method by which we reached the ultimate solution. This paper, while just one example of the kind of problem solving that happens every day, offers insight into what happens when the theories of systems engineering are put into practice.

  17. An auxiliary graph based dynamic traffic grooming algorithm in spatial division multiplexing enabled elastic optical networks with multi-core fibers

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Tian, Rui; Yu, Xiaosong; Zhang, Jiawei; Zhang, Jie

    2017-03-01

    A proper traffic grooming strategy in dynamic optical networks can improve the utilization of bandwidth resources. An auxiliary graph (AG) is designed to solve the traffic grooming problem under a dynamic traffic scenario in spatial division multiplexing enabled elastic optical networks (SDM-EON) with multi-core fibers. Five traffic grooming policies achieved by adjusting the edge weights of the AG are proposed and evaluated through simulation: maximal electrical grooming (MEG), maximal optical grooming (MOG), maximal SDM grooming (MSG), minimize virtual hops (MVH), and minimize physical hops (MPH). Numerical results show that each traffic grooming policy has its own features: the MPH policy achieves the lowest bandwidth blocking ratio, MEG saves the most transponders, and MSG occupies the fewest cores per request.

  18. On Social Optima of Non-Cooperative Mean Field Games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Zhao, Lin

    This paper studies the social optima in noncooperative mean-field games for a large population of agents with heterogeneous stochastic dynamic systems. Each agent seeks to maximize an individual utility functional, and utility functionals of different agents are coupled through a mean field term that depends on the mean of the population states/controls. The paper has the following contributions. First, we derive a set of control strategies for the agents that possess the ε-Nash equilibrium property and converge to the mean-field Nash equilibrium as the population size goes to infinity. Second, we study the social optimum in the mean field game. We derive the conditions, termed the socially optimal conditions, under which the ε-Nash equilibrium of the mean field game maximizes the social welfare. Third, a primal-dual algorithm is proposed to compute the ε-Nash equilibrium of the mean field game. Since the ε-Nash equilibrium of the mean field game is socially optimal, we can compute the equilibrium by solving the social welfare maximization problem, which can be addressed by a decentralized primal-dual algorithm. Numerical simulations are presented to demonstrate the effectiveness of the proposed approach.

  19. Utilizing an Artificial Outcrop to Scaffold Learning between Laboratory and Field Experiences in a College-Level Introductory Geology Course

    ERIC Educational Resources Information Center

    Wilson, Meredith

    2012-01-01

    Geologic field trips are among the most beneficial learning experiences for students as they engage the topic of geology, but they are also difficult environments to maximize learning. This action research study explored one facet of the problems associated with teaching geology in the field by attempting to improve the transition of undergraduate…

  20. Maximizing algebraic connectivity in air transportation networks

    NASA Astrophysics Data System (ADS)

    Wei, Peng

    In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on a real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is first formulated. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near optimal solution with longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case study on the two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which considers the traveler's tolerance for the total connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small scale networks and the near optimal solution in large scale networks. The third algebraic connectivity maximization problem with operating cost constraint is formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the weight assignment cannot be studied separately for the problem with operating cost constraint. Therefore, a relaxed SDP method with golden section search is developed to solve both at the same time. The cluster decomposition is utilized to solve large scale networks.
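
    The objective here is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value). A brute-force greedy sketch on an invented five-node path network, standing in for the MGP/WTS/SDP machinery: score each candidate route addition by the resulting algebraic connectivity and add the best one.

    ```python
    import numpy as np
    from itertools import combinations

    def lambda2(adj):
        lap = np.diag(adj.sum(axis=1)) - adj
        return np.linalg.eigvalsh(lap)[1]          # eigenvalues are sorted

    def with_edge(adj, e):
        out = adj.copy()
        out[e[0], e[1]] = out[e[1], e[0]] = 1.0
        return out

    adj = np.zeros((5, 5))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:  # a path graph
        adj[i, j] = adj[j, i] = 1.0

    candidates = [e for e in combinations(range(5), 2) if adj[e] == 0]
    best = max(candidates, key=lambda e: lambda2(with_edge(adj, e)))
    print(best, round(lambda2(with_edge(adj, best)), 3))  # (0, 4): close the cycle
    ```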

  1. Entropy-based consensus clustering for patient stratification.

    PubMed

    Liu, Hongfu; Zhao, Rui; Fang, Hongsheng; Cheng, Feixiong; Fu, Yun; Liu, Yang-Yu

    2017-09-01

    Patient stratification or disease subtyping is crucial for precision medicine and personalized treatment of complex diseases. The increasing availability of high-throughput molecular data provides a great opportunity for patient stratification. Many clustering methods have been employed to tackle this problem in a purely data-driven manner. Yet, existing methods leveraging high-throughput molecular data often suffer from various limitations, e.g. noise, data heterogeneity, high dimensionality or poor interpretability. Here we introduce an Entropy-based Consensus Clustering (ECC) method that overcomes those limitations all together. Our ECC method employs an entropy-based utility function to fuse many basic partitions into a consensus one that agrees with the basic ones as much as possible. Maximizing the utility function in ECC has a much more meaningful interpretation than in any other consensus clustering method. Moreover, we exactly map the complex utility maximization problem to the classic K-means clustering problem, which can then be efficiently solved with linear time and space complexity. Our ECC method can also naturally integrate multiple molecular data types measured from the same set of subjects, and easily handle missing values without any imputation. We applied ECC to 110 synthetic and 48 real datasets, including 35 cancer gene expression benchmark datasets and 13 cancer types with four molecular data types from The Cancer Genome Atlas. We found that ECC shows superior performance against existing clustering methods. Our results clearly demonstrate the power of ECC in clinically relevant patient stratification. The Matlab package is available at http://scholar.harvard.edu/yyl/ecc. Contact: yunfu@ece.neu.edu or yyl@channing.harvard.edu. Supplementary data are available at Bioinformatics online.
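
    A rough sketch of the reduction highlighted above, on random toy partitions: stack one-hot encodings of the basic partitions into a binary matrix and run classic K-means on its rows to get the consensus partition. ECC's exact entropy-based utility and weighting details differ; scikit-learn is assumed available.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    n, r, k = 100, 20, 3                      # samples, basic partitions, clusters
    basic = rng.integers(0, k, size=(n, r))   # r random basic partitions

    onehot = np.concatenate(                  # n x (r*k) binary matrix
        [np.eye(k)[basic[:, j]] for j in range(r)], axis=1)
    consensus = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(onehot)
    print(np.bincount(consensus))             # consensus cluster sizes
    ```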

  2. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially when mobility (i.e., locomotion control), routing (i.e., communications) and sensing are unique characteristics of mobile robots for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for mobility. For the second problem, the problem is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel procedure of modified sequential convex approximation that has fast convergence speed. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve the optimal moving trajectories for robotic nodes and optimal network links, which are not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications.
    We investigate the joint design of mobility, data routing, and encoding power to help improve video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.

  3. The utilization of mind map painting on 3D shapes with curved faces

    NASA Astrophysics Data System (ADS)

    Nur Sholikhah, Ayuk; Usodo, Budi; Pramudya, Ikrar

    2017-12-01

    This paper studies the use of mind map painting media for material on 3D shapes with curved faces and its effect on students' interest. Observation and literature studies were applied as the research method, together with a design for utilizing mind map painting. The results indicate that mind map painting media can improve students' ability to solve problems, improve their ability to think, and maximize brain power. Accordingly, mind map painting in learning activities is considered to improve student interest.

  4. Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization

    NASA Astrophysics Data System (ADS)

    Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.

    2016-06-01

    Heterogeneous energy prosumers are aggregated to form a smart-grid-based energy community managed by a central controller that can maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high-quality power usage, and (3) maximizing the economic dispatch of energy storage units to minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance-based pricing decomposition in IBM ILOG CPLEX Studio. Real-world, per-minute data from an energy community of forty prosumers in Amsterdam, Netherlands are used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution, as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities around the world in attaining their energy resource utilization targets.
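
    As a rough illustration of this kind of multi-objective dispatch (the paper itself solves a far richer model in CPLEX), a weighted single-period version can be posed as a small linear program; all numbers, weights, and bounds below are invented.

    ```python
    from scipy.optimize import linprog

    # Toy single-period dispatch: meet demand d from green generation g,
    # storage discharge s, and grid import c, maximizing a weighted utility
    # 3*g + 1*s - 2*c (weights and limits are illustrative assumptions).
    d = 10.0
    # linprog minimizes, so negate the utility coefficients for [g, s, c].
    res = linprog(c=[-3.0, -1.0, 2.0],
                  A_eq=[[1.0, 1.0, 1.0]], b_eq=[d],     # power balance
                  bounds=[(0, 6), (0, 3), (0, None)])   # resource limits
    g, s, c = res.x
    print(f"green={g:.1f}, storage={s:.1f}, grid={c:.1f}")  # 6.0, 3.0, 1.0
    ```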

  5. Maximizing phylogenetic diversity in biodiversity conservation: Greedy solutions to the Noah's Ark problem.

    PubMed

    Hartmann, Klaas; Steel, Mike

    2006-08-01

    The Noah's Ark Problem (NAP) is a comprehensive cost-effectiveness methodology for biodiversity conservation that was introduced by Weitzman (1998) and utilizes the phylogenetic tree containing the taxa of interest to assess biodiversity. Given a set of taxa, each of which has a particular survival probability that can be increased at some cost, the NAP seeks to allocate limited funds to conserving these taxa so that the future expected biodiversity is maximized. Finding optimal solutions in this framework is computationally difficult, and a simple and efficient "greedy" algorithm has been proposed in the literature and applied to conservation problems. We show that, although algorithms of this type cannot produce optimal solutions for the general NAP, there are two restricted scenarios of the NAP for which a greedy algorithm is guaranteed to produce optimal solutions. The first scenario requires the taxa to have equal conservation cost; the second scenario requires an ultrametric tree. The NAP assumes a linear relationship between the funding allocated to conservation of a taxon and the increased survival probability of that taxon. This relationship is briefly investigated, and one variation is suggested that can also be solved using a greedy algorithm.
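
    For intuition, in the restricted settings where greedy is optimal the algorithm reduces to funding taxa in order of marginal expected diversity gain per unit cost. The sketch below further assumes each taxon contributes an independent branch length, which is a strong simplification of the general NAP; all numbers are made up.

    ```python
    # Greedy budget allocation for a simplified Noah's Ark setting: each
    # taxon has an independent branch length b, survival probability p that
    # jumps to q if its full cost is paid (equal-cost style scenario).
    # Expected diversity gain of funding a taxon: b * (q - p).
    taxa = [  # (name, branch_length, p_now, p_funded, cost) -- illustrative
        ("A", 5.0, 0.2, 0.9, 10.0),
        ("B", 3.0, 0.5, 0.95, 10.0),
        ("C", 8.0, 0.1, 0.6, 10.0),
    ]
    budget = 20.0

    # rank by expected gain per unit cost and fund while budget remains
    ranked = sorted(taxa, key=lambda t: t[1] * (t[3] - t[2]) / t[4], reverse=True)
    funded = []
    for name, b, p, q, cost in ranked:
        if cost <= budget:
            budget -= cost
            funded.append(name)
    print("funded taxa:", funded)  # expect ['C', 'A'] with these numbers
    ```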

  6. Wireless Sensor Network-Based Service Provisioning by a Brokering Platform

    PubMed Central

    Guijarro, Luis; Pla, Vicent; Vidal, Jose R.; Naldi, Maurizio; Mahmoodi, Toktam

    2017-01-01

    This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit by posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not exceed a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service, and on the externality that the WSIP has on the user utility. PMID:28498347

  7. Wireless Sensor Network-Based Service Provisioning by a Brokering Platform.

    PubMed

    Guijarro, Luis; Pla, Vicent; Vidal, Jose R; Naldi, Maurizio; Mahmoodi, Toktam

    2017-05-12

    This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit by posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not exceed a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service, and on the externality that the WSIP has on the user utility.

  8. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    NASA Astrophysics Data System (ADS)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic maximizes space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space, layer by layer. An iterative local search with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without tuning any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
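
    The bidirectional, layer-by-layer heuristic itself is involved; as a much simpler relative, a shelf-based greedy sketch conveys how rectangles can be packed into a fixed container while skipping pieces that no longer fit. The shelf policy and the instance are illustrative assumptions, not the authors' algorithm.

    ```python
    # Simplified shelf packing: sort rectangles by height and fill shelves
    # left to right inside a W x H container, skipping pieces that no longer
    # fit (a crude stand-in for knapsack-style selection).
    W, H = 10.0, 10.0
    rects = [(4, 5), (6, 4), (3, 5), (7, 3), (5, 2)]  # (width, height), made up

    placed, shelf_y, shelf_h, x = [], 0.0, 0.0, 0.0
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
        if x + w > W:                      # current shelf is full: start a new one
            shelf_y += shelf_h
            x, shelf_h = 0.0, 0.0
        if shelf_y + h > H or w > W:       # piece cannot fit at all: skip it
            continue
        placed.append((x, shelf_y, w, h))  # (x, y, width, height)
        shelf_h = max(shelf_h, h)
        x += w
    used = sum(p[2] * p[3] for p in placed)
    print(f"placed {len(placed)} rectangles, used area = {used:.0f} of {W * H:.0f}")
    ```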

  9. Dynamic Distributed Cooperative Control of Multiple Heterogeneous Resources

    DTIC Science & Technology

    2012-10-01

    of the UAVs to maximize the total sensor footprint over the region of interest. The algorithm utilized to solve this problem was based on sampling a...and moving obstacles. Obstacle positions were assumed known a priori. Kingston and Beard [22] presented an algorithm to keep moving UAVs equally spaced...Planning Algorithms, Cambridge University Press, 2006. 11. Ma, C. S. and Miller, R. H., "Mixed integer linear programming trajectory generation for

  10. Reservoir Analysis Model for Battlefield Operations

    DTIC Science & Technology

    1989-05-01

    courtesy of the Imperial War Museum; Figure 2 is used courtesy of Frederick A. Praeger, Inc.; Figures 7, 8, and 9 are used courtesy of the Society of...operational and tactical levels of war. Military commanders today are confronted with problems of unprecedented complexity that require the application of...associated with operating reservoir systems in theaters of war. Without these tools the planner stands little chance of maximizing the utilization of his water

  11. Evidence for surprise minimization over value maximization in choice behavior

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-01-01

    Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization than by reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization than by utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686

  12. Continual planning and scheduling for managing patient tests in hospital laboratories.

    PubMed

    Marinagi, C C; Spyropoulos, C D; Papatheodorou, C; Kokkotos, S

    2000-10-01

    Hospital laboratories perform examination tests on patients in order to assist medical diagnosis or to monitor therapy progress. Planning and scheduling patient requests for examination tests is a complicated problem because it concerns both the minimization of patient stay in hospital and the maximization of laboratory resource utilization. In the present paper, we propose an integrated patient-wise planning and scheduling system which supports the dynamic and continual nature of the problem. The proposed combination of multiagent and blackboard architecture allows the dynamic creation of agents that share a set of knowledge sources and a knowledge base to service patient test requests.

  13. Nash Social Welfare in Multiagent Resource Allocation

    NASA Astrophysics Data System (ADS)

    Ramezani, Sara; Endriss, Ulle

    We study different aspects of the multiagent resource allocation problem when the objective is to find an allocation that maximizes Nash social welfare, the product of the utilities of the individual agents. The Nash solution is an important welfare criterion that combines efficiency and fairness considerations. We show that the problem of finding an optimal outcome is NP-hard for a number of different languages for representing agent preferences; we establish new results regarding convergence to Nash-optimal outcomes in a distributed negotiation framework; and we design and test algorithms similar to those applied in combinatorial auctions for computing such an outcome directly.
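
    For a concrete sense of the objective, with additive utilities and a handful of indivisible goods a Nash-optimal allocation can be found by brute force, maximizing the product of the agents' utilities; the utility matrix is invented, and, as the paper shows, realistic instances are NP-hard.

    ```python
    from itertools import product
    from math import prod

    # Brute-force Nash social welfare: assign each of 4 indivisible goods to
    # one of 2 agents; utilities are additive over goods (illustrative data).
    util = [[6, 1, 3, 2],   # agent 0's value for goods 0..3
            [2, 5, 2, 4]]   # agent 1's value for goods 0..3

    best, best_alloc = -1.0, None
    for alloc in product(range(2), repeat=4):      # alloc[g] = receiving agent
        totals = [sum(util[a][g] for g in range(4) if alloc[g] == a)
                  for a in range(2)]
        nsw = prod(totals)                         # product of agent utilities
        if nsw > best:
            best, best_alloc = nsw, alloc
    print("allocation:", best_alloc, "Nash welfare:", best)  # welfare 81 here
    ```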

  14. Optimal Price Decision Problem for Simultaneous Multi-article Auction and Its Optimal Price Searching Method by Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Masuda, Kazuaki; Aiyoshi, Eitaro

    We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines, for every seller, whether or not to sell his/her article and, for every buyer, which article(s) to buy, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
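
    As a generic sketch of the search machinery (not the authors' auction model), the core PSO update can be shown minimizing a simple two-dimensional function standing in for the dual; swarm size and coefficients are standard illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(p):                     # toy objective standing in for the dual
        return np.sum((p - np.array([3.0, 1.0])) ** 2, axis=1)

    n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5       # standard PSO constants
    x = rng.uniform(-10, 10, (n, dim))             # particle positions (prices)
    v = np.zeros((n, dim))                         # particle velocities
    pbest, pbest_val = x.copy(), f(x)              # personal bests
    g = pbest[np.argmin(pbest_val)]                # global best

    for _ in range(200):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = f(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]

    print("best price vector ~", np.round(g, 3))   # expect about [3, 1]
    ```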

  15. Energy Efficient Data Transmission for Sensors with Wireless Charging

    PubMed Central

    Luo, Junzhou; Wu, Weiwei; Gao, Hong

    2018-01-01

    This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodic wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case where wireless charging can proceed for only a limited period of time, and the other is the case where wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that can derive a performance bound of (1 − 1/(2m))(OPT − E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m − 1))·OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms. PMID:29419770

  16. Energy Efficient Data Transmission for Sensors with Wireless Charging.

    PubMed

    Fang, Xiaolin; Luo, Junzhou; Wu, Weiwei; Gao, Hong

    2018-02-08

    This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodic wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case where wireless charging can proceed for only a limited period of time, and the other is the case where wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that can derive a performance bound of (1 − 1/(2m))(OPT − E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m − 1))·OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms.

  17. Electromagnetic interference-aware transmission scheduling and power control for dynamic wireless access in hospital environments.

    PubMed

    Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio

    2011-11-01

    We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption, subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and a minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual-objective optimization problem which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both algorithms.

  18. Spectrum Sharing Based on a Bertrand Game in Cognitive Radio Sensor Networks

    PubMed Central

    Zeng, Biqing; Zhang, Chi; Hu, Pianpian; Wang, Shengyu

    2017-01-01

    In the study of power control and allocation based on pricing, the utility of secondary users is usually studied from the perspective of the signal-to-noise ratio. Studying secondary user utility from the perspective of communication demand can not only help secondary users meet their maximum communication needs, but also maximize the utilization of spectrum resources; however, research in this area is lacking. From the viewpoint of meeting network communication demand, this paper therefore designs a two-stage model to solve the spectrum leasing and allocation problem in cognitive radio sensor networks (CRSNs). In the first stage, the secondary base station collects the secondary network communication requirements and rents spectrum resources from several primary base stations, using the Bertrand game to model the transaction behavior of the primary base stations and the secondary base station. In the second stage, the subcarrier and power allocation problem of the secondary base station is defined as a nonlinear programming problem solved based on Nash bargaining. The simulation results show that the proposed model can satisfy the communication requirements of each user in a fair and efficient way compared to other spectrum sharing schemes. PMID:28067850

  19. Assessing the Problem Formulation in an Integrated Assessment Model: Implications for Climate Policy Decision-Support

    NASA Astrophysics Data System (ADS)

    Garner, G. G.; Reed, P. M.; Keller, K.

    2014-12-01

    Integrated assessment models (IAMs) are often used with the intent to aid in climate change decisionmaking. Numerous studies have analyzed the effects of parametric and/or structural uncertainties in IAMs, but uncertainties regarding the problem formulation are often overlooked. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the problem formulation. The standard DICE model adopts a single objective to maximize a weighted sum of utilities of per-capita consumption. Decisionmakers, however, may be concerned with a broader range of values and preferences that are not captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing both (ii) the costs of abatement and (iii) the damages due to climate change. We derive a set of Pareto-optimal solutions over which decisionmakers can trade off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.

  20. Optimizing the Energy and Throughput of a Water-Quality Monitoring System.

    PubMed

    Olatinwo, Segun O; Joubert, Trudi-H

    2018-04-13

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.

  1. Optimizing the Energy and Throughput of a Water-Quality Monitoring System

    PubMed Central

    Olatinwo, Segun O.

    2018-01-01

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near–far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity. PMID:29652866

  2. Environmental degradation and remediation: is economics part of the problem?

    PubMed

    Dore, Mohammed H I; Burton, Ian

    2003-01-01

    It is argued that standard environmental economics and 'ecological economics' have the same fundamentals of valuation in terms of money, based on a demand curve derived from utility maximization. But this approach leads to three different measures of value. An invariant measure of value exists only if the consumer has 'homothetic preferences'. In order to obtain a numerical estimate of value, specific functional forms are necessary, but typically these estimates do not converge. This is due to the fact that the underlying economic model is not structurally stable. According to neoclassical economics, any environmental remediation can be justified only in terms of increases in consumer satisfaction, balancing marginal gains against marginal costs. It is not surprising that the optimal policy obtained from this approach suggests only small reductions in greenhouse gases. We show that a unidimensional metric of consumer's utility measured in dollar terms can only trivialize the problem of global climate change.

  3. Method for using global optimization to the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.

  4. A general equilibrium model of guest-worker migration: the source-country perspective.

    PubMed

    Djajic, S; Milbourne, R

    1988-11-01

    "This paper examines the problem of guest-worker migration from an economy populated by identical, utility-maximizing individuals with finite working lives. The decision to migrate, the rate of saving while abroad, as well as the length of a migrant's stay in the foreign country, are all viewed as part of a solution to an intertemporal optimization problem. In addition to studying the microeconomic aspects of temporary migration, the paper analyses the determinants of the equilibrium flow of migrants, the corresponding domestic wage, and the level of welfare enjoyed by a typical worker. Effects of an emigration tax are also investigated." excerpt

  5. Defender-Attacker Decision Tree Analysis to Combat Terrorism.

    PubMed

    Garcia, Ryan J B; von Winterfeldt, Detlof

    2016-12-01

    We propose a methodology, called defender-attacker decision tree analysis, to evaluate defensive actions against terrorist attacks in a dynamic and hostile environment. Like most game-theoretic formulations of this problem, we assume that the defenders act rationally by maximizing their expected utility or minimizing their expected costs. However, we do not assume that attackers maximize their expected utilities. Instead, we encode the defender's limited knowledge about the attacker's motivations and capabilities as a conditional probability distribution over the attacker's decisions. We apply this methodology to the problem of defending against possible terrorist attacks on commercial airplanes, using one of three weapons: infrared-guided MANPADS (man-portable air defense systems), laser-guided MANPADS, or visually targeted RPGs (rocket propelled grenades). We also evaluate three countermeasures against these weapons: DIRCMs (directional infrared countermeasures), perimeter control around the airport, and hardening airplanes. The model includes deterrence effects, the effectiveness of the countermeasures, and the substitution of weapons and targets once a specific countermeasure is selected. It also includes a second stage of defensive decisions after an attack occurs. Key findings are: (1) due to the high cost of the countermeasures, not implementing countermeasures is the preferred defensive alternative for a large range of parameters; (2) if the probability of an attack and the associated consequences are large, a combination of DIRCMs and ground perimeter control are preferred over any single countermeasure. © 2016 Society for Risk Analysis.

  6. Cooperation, psychological game theory, and limitations of rationality in social interaction.

    PubMed

    Colman, Andrew M

    2003-04-01

    Rational choice theory enjoys unprecedented popularity and influence in the behavioral and social sciences, but it generates intractable problems when applied to socially interactive decisions. In individual decisions, instrumental rationality is defined in terms of expected utility maximization. This becomes problematic in interactive decisions, when individuals have only partial control over the outcomes, because expected utility maximization is undefined in the absence of assumptions about how the other participants will behave. Game theory therefore incorporates not only rationality but also common knowledge assumptions, enabling players to anticipate their co-players' strategies. Under these assumptions, disparate anomalies emerge. Instrumental rationality, conventionally interpreted, fails to explain intuitively obvious features of human interaction, yields predictions starkly at variance with experimental findings, and breaks down completely in certain cases. In particular, focal point selection in pure coordination games is inexplicable, though it is easily achieved in practice; the intuitively compelling payoff-dominance principle lacks rational justification; rationality in social dilemmas is self-defeating; a key solution concept for cooperative coalition games is frequently inapplicable; and rational choice in certain sequential games generates contradictions. In experiments, human players behave more cooperatively and receive higher payoffs than strict rationality would permit. Orthodox conceptions of rationality are evidently internally deficient and inadequate for explaining human interaction. Psychological game theory, based on nonstandard assumptions, is required to solve these problems, and some suggestions along these lines have already been put forward.

  7. Application of the maximal covering location problem to habitat reserve site selection: a review

    Treesearch

    Stephanie A. Snyder; Robert G. Haight

    2016-01-01

    The Maximal Covering Location Problem (MCLP) is a classic model from the location science literature which has found wide application. One important application is to a fundamental problem in conservation biology, the Maximum Covering Species Problem (MCSP), which identifies land parcels to protect to maximize the number of species represented in the selected sites. We...
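
    The MCSP is a maximum-coverage problem, and the textbook greedy (repeatedly pick the parcel covering the most still-unrepresented species) carries the classic 1 − 1/e guarantee; the parcel-species table below is invented for illustration.

    ```python
    # Greedy maximum coverage for a toy Maximum Covering Species Problem:
    # choose k parcels to maximize the number of species represented.
    parcels = {                       # parcel -> species present (made up)
        "P1": {"a", "b", "c"},
        "P2": {"b", "d"},
        "P3": {"c", "d", "e", "f"},
        "P4": {"a", "f"},
    }
    k = 2

    covered, chosen = set(), []
    for _ in range(k):
        # pick the parcel adding the most uncovered species
        best = max(parcels, key=lambda p: len(parcels[p] - covered))
        chosen.append(best)
        covered |= parcels[best]
    print("chosen:", chosen, "species covered:", sorted(covered))
    ```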

  8. Solving the influence maximization problem reveals regulatory organization of the yeast cell cycle.

    PubMed

    Gibbs, David L; Shmulevich, Ilya

    2017-06-01

    The Influence Maximization Problem (IMP) aims to discover the set of nodes with the greatest influence on network dynamics. The problem has previously been applied in epidemiology and social network analysis. Here, we demonstrate the application to cell cycle regulatory network analysis for Saccharomyces cerevisiae. Fundamentally, gene regulation is linked to the flow of information. Therefore, our implementation of the IMP was framed as an information theoretic problem using network diffusion. Utilizing more than 26,000 regulatory edges from YeastMine, gene expression dynamics were encoded as edge weights using time lagged transfer entropy, a method for quantifying information transfer between variables. By picking a set of source nodes, a diffusion process covers a portion of the network. The size of the network cover relates to the influence of the source nodes. The set of nodes that maximizes influence is the solution to the IMP. By solving the IMP over different numbers of source nodes, an influence ranking on genes was produced. The influence ranking was compared to other metrics of network centrality. Although the top genes from each centrality ranking contained well-known cell cycle regulators, there was little agreement and no clear winner. However, it was found that influential genes tend to directly regulate or sit upstream of genes ranked by other centrality measures. The influential nodes act as critical sources of information flow, potentially having a large impact on the state of the network. Biological events that affect influential nodes and thereby affect information flow could have a strong effect on network dynamics, potentially leading to disease. Code and data can be found at: https://github.com/gibbsdavidl/miergolf.
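
    As a generic illustration of the IMP (the paper's diffusion is built on transfer-entropy edge weights, not the uniform independent-cascade model used here), greedy seed selection with Monte Carlo spread estimates can be sketched as follows; the graph, propagation probability, and trial count are illustrative assumptions.

    ```python
    import random

    random.seed(1)

    # Toy directed graph as adjacency lists with a uniform propagation
    # probability p (independent-cascade model; all values illustrative).
    graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    p = 0.4

    def simulate_spread(seeds, trials=2000):
        """Monte Carlo estimate of the expected number of activated nodes."""
        total = 0
        for _ in range(trials):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                node = frontier.pop()
                for nbr in graph[node]:
                    if nbr not in active and random.random() < p:
                        active.add(nbr)
                        frontier.append(nbr)
            total += len(active)
        return total / trials

    # Greedy IMP: repeatedly add the node with the best marginal spread.
    seeds = []
    for _ in range(2):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(seeds + [n]))
        seeds.append(best)
    print("influential seeds:", seeds)
    ```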

  9. Competitive Facility Location with Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2009-10-01

    This paper proposes a new location problem for competitive facilities, e.g., shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and to find its solution, three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying-level maximizing problem. After showing that their optimal solutions can be found by solving 0-1 programming problems, a solution method is proposed that improves the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of facility location problems.

  10. Team Formation and Communication Restrictions in Collectives

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Turner, Kagan

    2003-01-01

    A collective of agents often needs to maximize a "world utility" function which rates the performance of an entire system, while subject to communication restrictions among the agents. Such communication restrictions make it difficult for agents which try to pursue their own "private" utilities to take actions that also help optimize the world utility. Team formation presents a solution to this problem: by joining other agents, an agent can significantly increase its knowledge about the environment and improve the chances both that it optimizes its own utility and that doing so contributes to the world utility. In this article we show how utilities that have previously been shown to be effective in collectives can be modified to be more effective in domains with moderate communication restrictions, resulting in performance improvements of up to 75%. Additionally we show that even severe communication constraints can be overcome by forming teams where each agent of a team shares the same utility, increasing performance an additional 25%. We show that utilities and team sizes can be manipulated to form the best compromise between how "aligned" an agent's utility is with the world utility and how easily an agent can learn that utility.

  11. Adaptive design optimization: a mutual information-based approach to model discrimination in cognitive science.

    PubMed

    Cavagnaro, Daniel R; Myung, Jay I; Pitt, Mark A; Kujala, Janne V

    2010-04-01

    Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.

  12. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.

    PubMed

    Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo

    2018-05-10

    The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.

  13. Robust Coordination for Large Sets of Simple Rovers

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian

    2006-01-01

    The ability to coordinate sets of rovers in an unknown environment is critical to the long-term success of many of NASA's exploration missions. Such coordination policies must have the ability to adapt in unmodeled or partially modeled domains and must be robust against environmental noise and rover failures. In addition, such coordination policies must accommodate a large number of rovers without excessive and burdensome hand-tuning. In this paper we present a distributed coordination method that addresses these issues in the domain of controlling a set of simple rovers. The application of these methods allows reliable and efficient robotic exploration in dangerous, dynamic, and previously unexplored domains. Most control policies for space missions are directly programmed by engineers or created through the use of planning tools, and are appropriate for single-rover missions or missions requiring the coordination of a small number of rovers. Such methods typically require significant amounts of domain knowledge and are difficult to scale to large numbers of rovers. The method described in this article aims to address cases where a large number of rovers need to coordinate to solve a complex, time-dependent problem in a noisy environment. In this approach, each rover decomposes a global utility, representing the overall goal of the system, into rover-specific utilities that properly assign credit to the rover's actions. Each rover then has the responsibility to create a control policy that maximizes its own rover-specific utility. We show a method of creating rover utilities that are "aligned" with the global utility, such that when the rovers maximize their own utility, they also maximize the global utility. In addition, we show that our method creates rover utilities that allow the rovers to create their control policies quickly and reliably. Our distributed learning method allows large sets of rovers to be used in unmodeled domains, while providing robustness against rover failures and changing environments. In experimental simulations we show that our method scales well with large numbers of rovers, in addition to being robust against noisy sensor inputs and noisy servo control. The results show that our method is able to scale to large numbers of rovers and achieves up to a 400% performance improvement over standard machine learning methods.

  14. A Maximal Element Theorem in FWC-Spaces and Its Applications

    PubMed Central

    Hu, Qingwen; Miao, Yulin

    2014-01-01

    A maximal element theorem is proved in finite weakly convex spaces (FWC-spaces, in short) which have no linear, convex, and topological structure. Using the maximal element theorem, we develop new existence theorems of solutions to variational relation problem, generalized equilibrium problem, equilibrium problem with lower and upper bounds, and minimax problem in FWC-spaces. The results represented in this paper unify and extend some known results in the literature. PMID:24782672

  15. Resource Allocation Algorithms for the Next Generation Cellular Networks

    NASA Astrophysics Data System (ADS)

    Amzallag, David; Raz, Danny

    This chapter describes recent results addressing resource allocation problems in the context of current and future cellular technologies. We present models that capture several fundamental aspects of planning and operating these networks, and develop new approximation algorithms providing provable good solutions for the corresponding optimization problems. We mainly focus on two families of problems: cell planning and cell selection. Cell planning deals with choosing a network of base stations that can provide the required coverage of the service area with respect to the traffic requirements, available capacities, interference, and the desired QoS. Cell selection is the process of determining the cell(s) that provide service to each mobile station. Optimizing these processes is an important step towards maximizing the utilization of current and future cellular networks.

  16. Quantum speedup in solving the maximal-clique problem

    NASA Astrophysics Data System (ADS)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to, respectively, O(√(2^n)) and O(n^2). With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
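
    For contrast with the quadratic quantum speedup, the classical exhaustive baseline is a few lines of code and exponential in n; the example graph is arbitrary.

    ```python
    from itertools import combinations

    def max_clique(n, edges):
        """Exhaustive search, largest subsets first -- the classical
        exponential-time baseline the quantum algorithm is compared to."""
        adj = lambda u, v: (u, v) in edges or (v, u) in edges
        for size in range(n, 0, -1):
            for sub in combinations(range(n), size):
                if all(adj(u, v) for u, v in combinations(sub, 2)):
                    return sub            # first hit is a maximum clique
        return ()

    print(max_clique(4, {(0, 1), (0, 2), (1, 2), (2, 3)}))  # (0, 1, 2)
    ```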

  17. An automated system for reduction of the firm's employees under maximal overall efficiency

    NASA Astrophysics Data System (ADS)

    Yonchev, Yoncho; Nikolov, Simeon; Baeva, Silvia

    2012-11-01

    Achieving maximal overall efficiency is a priority in all companies. This problem is formulated as a knapsack problem and afterwards as a linear assignment problem. An automated system is created for solving this problem.
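
    The knapsack formulation mentioned in the abstract can be illustrated with the textbook dynamic program; the "value", "cost", and capacity numbers below are invented stand-ins for employee efficiency scores and salary costs.

    ```python
    def knapsack(values, costs, capacity):
        """Textbook 0/1 knapsack DP: best[c] = max total value within cost c."""
        best = [0] * (capacity + 1)
        for v, w in zip(values, costs):
            for c in range(capacity, w - 1, -1):   # iterate downward: 0/1 items
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    # e.g. efficiency scores vs. salary costs of employees to retain (made up)
    print(knapsack(values=[10, 40, 30, 50], costs=[5, 4, 6, 3], capacity=10))
    # -> 90 (retain the cost-4 and cost-3 items)
    ```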

  18. The general utilization of scrapped PC board.

    PubMed

    Liu, Robert; Shieh, R S; Yeh, Ruth Y L; Lin, C H

    2009-11-01

    The traditional burning process is used to recover copper from scrapped PC board (printed circuit board), but it causes serious environmental problems. In this research a new process was developed which not only prevents pollution problems, but also maximizes the utility of all the materials on the waste board. First, the scrapped PC board was crushed and ground, then placed in an NH3/NH5CO3 solution with aeration in order to dissolve the copper. After distilling the copper NH3/NH5CO3 solution and then heating the distilled residue of copper carbonate, pure copper oxide was obtained, with a particle size of about 0.2 μm and an elliptical shape. The remaining solid residue after copper removal was then leached with 6N hydrochloric acid to remove tin and lead. The final residue was used as a filler in PVC plastics. The PVC plastic with PC board powder as filling material was found to have the same tensile strength as unfilled plastic, but had a higher elastic modulus, higher abrasion resistance, and lower cost.

  19. Power maximization of variable-speed variable-pitch wind turbines using passive adaptive neural fault tolerant control

    NASA Astrophysics Data System (ADS)

    Habibi, Hamed; Rahimi Nohooji, Hamed; Howard, Ian

    2017-09-01

    Power maximization has always been a practical consideration in wind turbines. The question of how to address optimal power capture, especially when the system dynamics are nonlinear and the actuators are subject to unknown faults, is significant. This paper studies the control methodology for variable-speed variable-pitch wind turbines including the effects of uncertain nonlinear dynamics, system fault uncertainties, and unknown external disturbances. The nonlinear model of the wind turbine is presented, and the problem of maximizing extracted energy is formulated by designing the optimal desired states. With the known system, a model-based nonlinear controller is designed; then, to handle uncertainties, the unknown nonlinearities of the wind turbine are estimated by utilizing radial basis function neural networks. The adaptive neural fault tolerant control is designed passively to be robust on model uncertainties, disturbances including wind speed and model noises, and completely unknown actuator faults including generator torque and pitch actuator torque. The Lyapunov direct method is employed to prove that the closed-loop system is uniformly bounded. Simulation studies are performed to verify the effectiveness of the proposed method.

  20. Accelerating the Mining of Influential Nodes in Complex Networks through Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halappanavar, Mahantesh; Sathanur, Arun V.; Nandi, Apurba

    Computing the set of influential nodes with a given size to ensure maximal spread of influence on a complex network is a challenging problem impacting multiple applications. A rigorous approach to influence maximization involves utilization of optimization routines that come with a high computational cost. In this work, we propose to exploit the existence of communities in complex networks to accelerate the mining of influential seeds. We provide intuitive reasoning to explain why our approach should be able to provide speedups without significantly degrading the extent of the spread of influence when compared to the case of influence maximization without using the community information. Additionally, we have parallelized the complete workflow by leveraging an existing parallel implementation of the Louvain community detection algorithm. We then conduct a series of experiments on a dataset with three representative graphs to first verify our implementation and then demonstrate the speedups. Our method achieves speedups ranging from 3x - 28x for graphs with small numbers of communities while nearly matching or even exceeding the activation performance on the entire graph. Complexity analysis reveals that dramatic speedups are possible for larger graphs that contain a correspondingly larger number of communities. In addition to the speedups obtained from the utilization of the community structure, scalability results show up to 6.3x speedup on 20 cores relative to the baseline run on 2 cores. Finally, current limitations of the approach are outlined along with the planned next steps.

  1. Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.

    PubMed

    Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo

    2017-10-01

    This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses a learning automaton scheme to generate an action probability distribution based on his/her private information for maximizing his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
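
    A minimal sketch of a learning automaton of the kind the abstract refers to, in the classic linear reward-inaction form: probability mass moves toward actions that the environment rewards. The two-action environment and learning rate are illustrative assumptions, not the paper's game.

    ```python
    import random

    random.seed(0)

    # Linear reward-inaction (L_RI) learning automaton with two actions whose
    # (unknown) reward probabilities are 0.8 and 0.3 (illustrative numbers).
    reward_prob = [0.8, 0.3]
    p = [0.5, 0.5]            # action probability distribution
    lam = 0.05                # learning rate

    for _ in range(3000):
        a = 0 if random.random() < p[0] else 1
        if random.random() < reward_prob[a]:   # environment rewards action a
            # move probability mass toward the rewarded action; on penalty
            # the reward-inaction scheme leaves probabilities unchanged
            p[a] += lam * (1.0 - p[a])
            p[1 - a] = 1.0 - p[a]

    print(f"P(action 0) ~ {p[0]:.3f}")  # typically approaches 1 here
    ```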

  2. Maximizing Resource Utilization in Video Streaming Systems

    ERIC Educational Resources Information Center

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  3. Intergenerational redistribution in a small open economy with endogenous fertility.

    PubMed

    Kolmar, M

    1997-08-01

    The literature comparing fully funded (FF) and pay-as-you-go (PAYG) financed public pension systems in small, open economies stresses the importance of the Aaron condition as an empirical measure to decide which system can be expected to lead to a higher long-run welfare. A country with a PAYG system has a higher level of utility than a country with a FF system if the growth rate of total wage income exceeds the interest rate. Endogenizing population growth makes one determinant of the growth rate of wage incomes endogenous. The author demonstrates why the Aaron condition ceases to be a good indicator in this case. For PAYG-financed pension systems, claims can be calculated according to individual contributions or the number of children in a family. Analysis determined that for both structural determinants there is no interior solution of the problem of intergenerational utility maximization. Pure systems are therefore always welfare maximizing. Moreover, children-related pension claims induce a fiscal externality which tends to be positive. The determination of the optimal contribution rate shows that the Aaron condition is generally a misleading indicator for the comparison of FF and PAYG-financed pension systems.

  4. Triplet supertree heuristics for the tree of life

    PubMed Central

    Lin, Harris T; Burleigh, J Gordon; Eulenstein, Oliver

    2009-01-01

    Background There is much interest in developing fast and accurate supertree methods to infer the tree of life. Supertree methods combine smaller input trees with overlapping sets of taxa to make a comprehensive phylogenetic tree that contains all of the taxa in the input trees. The intrinsically hard triplet supertree problem takes a collection of input species trees and seeks a species tree (supertree) that maximizes the number of triplet subtrees that it shares with the input trees. However, the utility of this supertree problem has been limited by a lack of efficient and effective heuristics. Results We introduce fast hill-climbing heuristics for the triplet supertree problem that perform a step-wise search of the tree space, where each step is guided by an exact solution to an instance of a local search problem. To realize time-efficient heuristics we designed the first nontrivial algorithms for two standard search problems, which greatly improve on the time complexity of the best known (naïve) solutions by factors of n and n² (where n is the number of taxa in the supertree). These algorithms enable large-scale supertree analyses based on the triplet supertree problem that were previously not possible. We implemented hill-climbing heuristics that are based on our new algorithms, and in analyses of two published supertree data sets, we demonstrate that our new heuristics outperform other standard supertree methods in maximizing the number of triplets shared with the input trees. Conclusion With our new heuristics, the triplet supertree problem is now computationally more tractable for large-scale supertree analyses, and it provides a potentially more accurate alternative to existing supertree methods. PMID:19208181

  5. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    NASA Astrophysics Data System (ADS)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid limitations provoked by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which allows computing binary variables through the master problem and continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.

  6. Cartesian control of redundant robots

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1989-01-01

    A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four-degree-of-freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
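
    The underdetermined F → T map can be illustrated at the plain linear-algebra level: every torque realizing a given Cartesian input differs by a null-space term, and the pseudoinverse selects the minimum-norm one. The 2×3 matrix below is arbitrary, not a robot model.

    ```python
    import numpy as np

    # Underdetermined linear map F = A @ T with m < n: infinitely many torque
    # vectors T realize the same Cartesian input F. The pseudoinverse gives
    # the minimum-norm solution; (I - A^+ A) projects onto the null space,
    # where redundancy can be spent on secondary objectives.
    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])          # arbitrary 2x3 map (m=2, n=3)
    F = np.array([1.0, 2.0])

    A_pinv = np.linalg.pinv(A)
    T_min = A_pinv @ F                        # minimum-norm torque
    N = np.eye(3) - A_pinv @ A                # null-space projector
    z = np.array([1.0, 1.0, 1.0])             # any secondary-objective direction
    T_alt = T_min + N @ z                     # also satisfies A @ T_alt = F

    print(np.allclose(A @ T_min, F), np.allclose(A @ T_alt, F))  # True True
    ```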

  7. Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel

    PubMed Central

    Sakin, Sayef Azad; Alamri, Atif; Tran, Nguyen H.

    2017-01-01

    Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies. PMID:29215591

  8. Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel.

    PubMed

    Sakin, Sayef Azad; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Alamri, Atif; Tran, Nguyen H; Fortino, Giancarlo

    2017-12-07

    Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is NP-hard. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of the networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the existence and uniqueness of the Nash equilibrium is presented. Simulation studies show performance improvements in terms of data rate, with a degree of fairness, over a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach.
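
    A minimal sketch of the non-cooperative power-control idea (generic best-response dynamics with a log-rate utility and a linear power price, not the paper's exact utility or protocol; the channel gains are hypothetical):

      import numpy as np

      G = np.array([[1.0, 0.1, 0.2],    # hypothetical gains g[i, j]: from
                    [0.2, 1.0, 0.1],    # transmitter j to receiver i
                    [0.1, 0.3, 1.0]])
      noise, price, p_max = 0.1, 0.5, 2.0
      p = np.zeros(3)

      for _ in range(50):               # best-response iteration toward a Nash point
          for i in range(3):
              interference = noise + G[i] @ p - G[i, i] * p[i]
              # argmax of log(1 + p*g/I) - price*p has a closed form:
              best = 1.0 / price - interference / G[i, i]
              p[i] = np.clip(best, 0.0, p_max)
      print("equilibrium powers:", p)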

  9. Maximizing the Range of a Projectile.

    ERIC Educational Resources Information Center

    Brown, Ronald A.

    1992-01-01

    Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
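
    As a quick numerical check of the classic result discussed here (flat ground, no drag; the launch speed is arbitrary): the range R(θ) = v² sin(2θ)/g peaks at θ = 45°.

      import numpy as np

      v, g = 20.0, 9.81
      theta = np.radians(np.linspace(1, 89, 881))
      R = v**2 * np.sin(2 * theta) / g
      print("best angle (deg):", np.degrees(theta[np.argmax(R)]))   # ~45.0
      print("max range (m):", R.max())                              # v^2/g ~ 40.8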

  10. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    ERIC Educational Resources Information Center

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  11. The Application of a Three-Tier Model of Intervention to Parent Training

    PubMed Central

    Phaneuf, Leah; McIntyre, Laura Lee

    2015-01-01

    A three-tier intervention system was designed for use with parents with preschool children with developmental disabilities to modify parent–child interactions. A single-subject changing-conditions design was used to examine the utility of a three-tier intervention system in reducing negative parenting strategies, increasing positive parenting strategies, and reducing child behavior problems in parent–child dyads (n = 8). The three intervention tiers consisted of (a) self-administered reading material, (b) group training, and (c) individualized video feedback sessions. Parental behavior was observed to determine continuation or termination of intervention. Results support the utility of a tiered model of intervention to maximize treatment outcomes and increase efficiency by minimizing the need for more costly time-intensive interventions for participants who may not require them. PMID:26213459

  12. Optimizing area under the ROC curve using semi-supervised learning

    PubMed Central

    Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M.

    2014-01-01

    Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results. PMID:25395692

  13. Optimizing area under the ROC curve using semi-supervised learning.

    PubMed

    Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M

    2015-01-01

    Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results.

  14. The design of optimal electric power demand management contracts

    NASA Astrophysics Data System (ADS)

    Fahrioglu, Murat

    1999-11-01

    Our society derives a quantifiable benefit from electric power. In particular, forced outages or blackouts have enormous consequences on society, one of which is loss of economic surplus. Electric utilities try to provide a reliable supply of electric power to their customers. Maximum customer benefit derives from minimum cost and sufficient supply availability. Customers willing to share in "availability risk" can derive further benefit by participating in controlled outage programs. Specifically, whenever utilities foresee dangerous loading patterns, there is a need for a rapid reduction in demand, either system-wide or at specific locations. The utility needs to get relief in order to solve its problems quickly and efficiently. This relief can come from customers who agree to curtail their loads upon request in exchange for an incentive fee. This thesis shows how utilities can get efficient load relief while maximizing their economic benefit. This work also shows how estimated customer cost functions can be calibrated, using existing utility data, to help in designing efficient demand management contracts. In order to design such contracts, optimal mechanism design is adopted from game theory and applied to the interaction between a utility and its customers. The idea behind mechanism design is to design an incentive structure that encourages customers to sign up for the right contract and reveal their true value of power. If a utility has demand management contracts with customers at critical locations, most operational problems can be solved efficiently. This thesis illustrates how locational attributes of customers, incorporated into demand management contract design, can have a significant impact on solving system problems. Such demand management contracts can also be used by an Independent System Operator (ISO). During times of congestion, a loss of economic surplus occurs. When the market is too slow or cannot help relieve congestion, demand management can help solve the problem. Another tool the ISO requires for security purposes is reserves. Even though demand management contracts may not be a good substitute for spinning reserves, they are adequate to augment or replace supplemental and backup reserves.

  15. Team Formation in Partially Observable Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2004-01-01

    Sets of multi-agent teams often need to maximize a global utility that rates the performance of the entire system, in settings where a team cannot fully observe the agents of other teams. Such limited observability hinders team members, who pursue their team utilities, from taking actions that also help maximize the global utility. In this article, we show how team utilities can be used in partially observable systems. Furthermore, we show how team sizes can be manipulated to provide the best compromise between having easy-to-learn team utilities and having them aligned with the global utility. The results show that optimally sized teams in a partially observable environment outperform a single team in a fully observable environment by up to 30%.

  16. Optimal Resource Allocation for NOMA-TDMA Scheme with α-Fairness in Industrial Internet of Things.

    PubMed

    Sun, Yanjing; Guo, Yiyu; Li, Song; Wu, Dapeng; Wang, Bin

    2018-05-15

    In this paper, a joint non-orthogonal multiple access and time division multiple access (NOMA-TDMA) scheme is proposed for the Industrial Internet of Things (IIoT), which allows multiple sensors to transmit in the same time-frequency resource block using NOMA. The user scheduling, time slot allocation, and power control are jointly optimized in order to maximize the system α-fair utility under transmit power constraints and minimum rate constraints. The optimization problem is nonconvex because of the fractional objective function and the nonconvex constraints. To deal with the original problem, we first convert the objective function into a difference of two convex functions (D.C.) form, and then propose a NOMA-TDMA-DC algorithm to exploit the global optimum. Numerical results show that the NOMA-TDMA scheme significantly outperforms the traditional orthogonal multiple access scheme in terms of both spectral efficiency and user fairness.
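
    The α-fair utility family used in such formulations interpolates between throughput maximization (α = 0), proportional fairness (α → 1) and max-min fairness (α → ∞); a small sketch of the standard definition (the rates are hypothetical):

      import numpy as np

      def alpha_fair_utility(x, alpha):
          """U_alpha(x) = x^(1-alpha)/(1-alpha) for alpha != 1, log(x) for alpha = 1."""
          x = np.asarray(x, dtype=float)
          if np.isclose(alpha, 1.0):
              return np.log(x)
          return x ** (1.0 - alpha) / (1.0 - alpha)

      rates = np.array([1.0, 2.0, 4.0])      # hypothetical user rates (Mbps)
      for a in (0.0, 1.0, 2.0):
          print(a, alpha_fair_utility(rates, a).sum())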

  17. A Coalitional Game for Distributed Inference in Sensor Networks With Dependent Observations

    NASA Astrophysics Data System (ADS)

    He, Hao; Varshney, Pramod K.

    2016-04-01

    We consider the problem of collaborative inference in a sensor network with heterogeneous and statistically dependent sensor observations. Each sensor aims to maximize its inference performance by forming a coalition with other sensors and sharing information within the coalition. It is proved that the inference performance is a nondecreasing function of the coalition size. However, in an energy constrained network, the energy consumption of inter-sensor communication also increases with increasing coalition size, which discourages the formation of the grand coalition (the set of all sensors). In this paper, the formation of non-overlapping coalitions with statistically dependent sensors is investigated under a specific communication constraint. We apply a game theoretical approach to fully explore and utilize the information contained in the spatial dependence among sensors to maximize individual sensor performance. Before formulating the distributed inference problem as a coalition formation game, we first quantify the gain and loss in forming a coalition by introducing the concepts of diversity gain and redundancy loss for both estimation and detection problems. These definitions, enabled by the statistical theory of copulas, allow us to characterize the influence of statistical dependence among sensor observations on inference performance. An iterative algorithm based on merge-and-split operations is proposed for the solution, and the stability of the proposed algorithm is analyzed. Numerical results are provided to demonstrate the superiority of our proposed game theoretical approach.

  18. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (the equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model, because the equal error utility assumption will not apply to all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense, due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  19. General form of a cooperative gradual maximal covering location problem

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh

    2018-07-01

    Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model (CMCLAP) is developed. In addition, both cooperative and gradual covering concepts are applied to maximal covering location simultaneously (CGMCLP). We then develop an integrated form of the cooperative gradual maximal covering location problem, called the general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in a discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model on similar data sets shows that the proposed model can cover more demand and act more efficiently. Sensitivity analyses are performed to show the effect of the related parameters and the model's validity. Simulated annealing (SA) and tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.

  20. A hybrid algorithm optimization approach for machine loading problem in flexible manufacturing system

    NASA Astrophysics Data System (ADS)

    Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.

    2012-05-01

    The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but are also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the simultaneous objectives of minimizing system unbalance and maximizing throughput, while satisfying the system constraints on available machining time and tool slots, using a hybrid meta-heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.

  1. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    PubMed

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected values of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA exhibit a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.

  2. A Utility Maximizing and Privacy Preserving Approach for Protecting Kinship in Genomic Databases.

    PubMed

    Kale, Gulce; Ayday, Erman; Tastan, Oznur

    2017-09-12

    Rapid and low cost sequencing of genomes enabled widespread use of genomic data in research studies and personalized customer applications, where genomic data is shared in public databases. Although the identities of the participants are anonymized in these databases, sensitive information about individuals can still be inferred. One such piece of information is kinship. We identify two routes through which kinship privacy can leak and propose a technique to protect kinship privacy against these risks while maximizing the utility of shared data. The method involves systematic identification of minimal portions of genomic data to mask as new participants are added to the database. Choosing the proper positions to hide is cast as an optimization problem in which the number of positions to mask is minimized subject to privacy constraints that ensure the familial relationships are not revealed. We evaluate the proposed technique on real genomic data. Results indicate that concurrent sharing of data pertaining to a parent and an offspring results in high risks to kinship privacy, whereas sharing data from more distant relatives together is often safer. We also show that the arrival order of family members has a high impact on the level of privacy risk and on the utility of sharing data. Available at: https://github.com/tastanlab/Kinship-Privacy. erman@cs.bilkent.edu.tr or oznur.tastan@cs.bilkent.edu.tr. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  3. Single-shot detection of bacterial endospores via coherent Raman spectroscopy.

    PubMed

    Pestov, Dmitry; Wang, Xi; Ariunbold, Gombojav O; Murawski, Robert K; Sautenkov, Vladimir A; Dogariu, Arthur; Sokolov, Alexei V; Scully, Marlan O

    2008-01-15

    Recent advances in coherent Raman spectroscopy hold exciting promise for many potential applications. For example, a technique that mitigates the nonresonant four-wave-mixing noise while maximizing the Raman-resonant signal has been developed and applied to the problem of real-time detection of bacterial endospores. After a brief review of the technique's essentials, we show how extensions of our earlier experimental work [Pestov D, et al. (2007) Science 316:265-268] yield single-shot identification of a small sample of Bacillus subtilis endospores (approximately 10^4 spores). The results convey the utility of the technique and its potential for "on-the-fly" detection of biohazards, such as Bacillus anthracis. The application of an optimized coherent anti-Stokes Raman scattering scheme to problems requiring chemical specificity and short signal acquisition times is demonstrated.

  4. The effect of chronic orthopedic infection on quality of life.

    PubMed

    Cheatle, M D

    1991-07-01

    The patient with chronic orthopedic infection presents a unique challenge to the orthopedic surgeon. The orthopedic surgeon must not only possess an expertise in constantly evolving diagnostic and treatment techniques but also be able to identify numerous related problems and direct the patient in receiving the most appropriate treatment. This demands a commitment of time by the treating surgeon to the individual patient to properly assess the need for support, the extent of psychologic distress, the intensity of pain, and the requirement for medication management. The effective utilization of a multidisciplinary team of health care providers (e.g., specialists in infectious disease, physical medicine and rehabilitation, psychiatry, nursing, pharmacology) can provide an optimal treatment program for this multifaceted problem and maximize the potential for a favorable outcome.

  5. An entropy maximization problem related to optical communication

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Swanson, L.

    1986-01-01

    In relation to a problem in optical communication, the paper considers the general problem of maximizing the entropy of a stationary random process that is subject to an average transition cost constraint. By using a recent result of Justesen and Hoholdt, an exact solution to the problem is presented, and a class of finite state encoders that give a good approximation to the exact solution is suggested.
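
    As background, the unconstrained version of such problems has a classical answer (Shannon's noiseless-channel result; this sketch is not the paper's cost-constrained solution): the maximum entropy rate of sequences supported on walks of a constraint graph is log2 of the largest eigenvalue of its adjacency matrix. For a (d, k) = (1, 3) run-length-limited constraint:

      import numpy as np

      # States count zeros emitted since the last one; edges encode the
      # (1, 3) run-length-limited constraint graph.
      A = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 0]])
      lam = max(abs(np.linalg.eigvals(A)))          # Perron eigenvalue
      print("max entropy rate:", np.log2(lam), "bits/symbol")   # ~0.55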

  6. Planning Routes Across Economic Terrains: Maximizing Utility, Following Heuristics

    PubMed Central

    Zhang, Hang; Maddula, Soumya V.; Maloney, Laurence T.

    2010-01-01

    We designed an economic task to investigate human planning of routes in landscapes where travel in different kinds of terrain incurs different costs. Participants moved their finger across a touch screen from a starting point to a destination. The screen was divided into distinct kinds of terrain and travel within each kind of terrain imposed a cost proportional to distance traveled. We varied costs and spatial configurations of terrains and participants received fixed bonuses minus the total cost of the routes they chose. We first compared performance to a model maximizing gain. All but one of 12 participants failed to adopt least-cost routes and their failure to do so reduced their winnings by about 30% (median value). We tested in detail whether participants’ choices of routes satisfied three necessary conditions (heuristics) for a route to maximize gain. We report failures of one heuristic for 7 out of 12 participants. Last of all, we modeled human performance with the assumption that participants assign subjective utilities to costs and maximize utility. For 7 out of 12 participants, the fitted utility function was an accelerating power function of actual cost and for the remaining 5, a decelerating power function. We discuss connections between utility aggregation in route planning and decision under risk. Our task could be adapted to investigate human strategy and optimality of route planning in full-scale landscapes. PMID:21833269
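
    The least-cost benchmark such participants were compared against can be computed with Dijkstra's algorithm; a generic sketch (not the authors' code; the grid costs are hypothetical):

      import heapq

      def cheapest_route_cost(grid, start, goal):
          rows, cols = len(grid), len(grid[0])
          dist = {start: 0.0}
          frontier = [(0.0, start)]
          while frontier:
              d, (r, c) = heapq.heappop(frontier)
              if (r, c) == goal:
                  return d
              if d > dist.get((r, c), float("inf")):
                  continue                          # stale queue entry
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + grid[nr][nc]         # cost of entering the cell
                      if nd < dist.get((nr, nc), float("inf")):
                          dist[(nr, nc)] = nd
                          heapq.heappush(frontier, (nd, (nr, nc)))
          return float("inf")

      terrain = [[1, 1, 5],                         # hypothetical costs: 5 = swamp
                 [1, 5, 1],
                 [1, 1, 1]]
      print(cheapest_route_cost(terrain, (0, 0), (2, 2)))   # -> 4.0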

  7. Emergent Aerospace Designs Using Negotiating Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Deshmukh, Abhijit; Middelkoop, Timothy; Krothapalli, Anjaneyulu; Smith, Charles

    2000-01-01

    This paper presents a distributed design methodology where designs emerge as a result of negotiations between the different stakeholders in the process, such as cost, performance, and reliability. The proposed methodology uses autonomous agents to represent design decision makers. Each agent influences specific design parameters in order to maximize its utility. Since the design parameters depend on the aggregate demand of all the agents in the system, design agents need to negotiate with others in the market economy in order to reach an acceptable utility value. This paper addresses several interesting research issues related to distributed design architectures. First, we present a flexible framework which facilitates decomposition of the design problem. Second, we present an overview of a market mechanism for generating acceptable design configurations. Finally, we integrate learning mechanisms into the design process to reduce the computational overhead.

  8. AUC-Maximizing Ensembles through Metalearning.

    PubMed

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, out-perform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
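
    A toy version of the metalearning step (a sketch of the idea only, not the authors' Super Learner implementation; the simulated base learners are hypothetical): choose convex combination weights over base-learner predictions to maximize AUC.

      import numpy as np
      from scipy.optimize import minimize

      def auc(y, s):
          """Rank-based (Mann-Whitney) AUC for labels y and scores s."""
          order = np.argsort(s)
          ranks = np.empty(len(s))
          ranks[order] = np.arange(1, len(s) + 1)
          pos = y == 1
          n_pos, n_neg = pos.sum(), (~pos).sum()
          return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, 200)                        # labels
      Z = np.column_stack([y + rng.normal(0, s, 200)     # three base learners
                           for s in (0.5, 1.0, 2.0)])    # of varying quality

      def neg_auc(w):
          w = np.abs(w) + 1e-12
          return -auc(y, Z @ (w / w.sum()))              # weights on the simplex

      res = minimize(neg_auc, x0=np.ones(3) / 3, method="Nelder-Mead")
      print("ensemble AUC:", -res.fun)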

  9. AUC-Maximizing Ensembles through Metalearning

    PubMed Central

    LeDell, Erin; van der Laan, Mark J.; Petersen, Maya

    2016-01-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, out-perform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721

  10. Using a Pareto-optimal solution set to characterize trade-offs between a broad range of values and preferences in climate risk management

    NASA Astrophysics Data System (ADS)

    Garner, Gregory; Reed, Patrick; Keller, Klaus

    2015-04-01

    Integrated assessment models (IAMs) are often used to inform the design of climate risk management strategies. Previous IAM studies have broken important new ground on analyzing the effects of parametric uncertainties, but they are often silent on the implications of uncertainties regarding the problem formulation. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the definition of the objective(s). The standard DICE model adopts a single objective to maximize a weighted sum of utilities of per-capita consumption. Decision makers, however, are often concerned with a broader range of values and preferences that may be poorly captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing (ii) the costs of abatement and (iii) the climate change damages. We use advanced multi-objective optimization methods to derive a set of Pareto-optimal solutions over which decision makers can trade-off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.
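
    The a posteriori workflow described here starts from a non-dominated solution set; a generic sketch of Pareto filtering (not the study's DICE setup; the candidate values are hypothetical), assuming all objectives are to be maximized:

      import numpy as np

      def pareto_front(points):
          """Return the rows of `points` not dominated by any other row."""
          keep = []
          for i, p in enumerate(points):
              dominated = any(np.all(q >= p) and np.any(q > p)
                              for j, q in enumerate(points) if j != i)
              if not dominated:
                  keep.append(i)
          return points[keep]

      # Hypothetical (utility, reliability) pairs for candidate policies
      candidates = np.array([[1.0, 0.9], [2.0, 0.5], [1.5, 0.7], [0.5, 0.4]])
      print(pareto_front(candidates))       # [0.5, 0.4] is dominated and dropped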

  11. Stochastic user equilibrium model with a tradable credit scheme and application in maximizing network reserve capacity

    NASA Astrophysics Data System (ADS)

    Han, Fei; Cheng, Lin

    2017-04-01

    The tradable credit scheme (TCS) outperforms congestion pricing in terms of social equity and revenue neutrality, while matching its performance on congestion mitigation. This article investigates the effectiveness and efficiency of a TCS for enhancing transportation network capacity in a stochastic user equilibrium (SUE) modelling framework. First, the SUE and credit market equilibrium conditions are presented; then an equivalent general SUE model with TCS is established by virtue of two constructed functions, which can be further simplified under a specific probability distribution. To enhance the network capacity by utilizing the TCS, a bi-level mathematical programming model is established for the optimal TCS design problem, with the upper-level objective maximizing network reserve capacity and the lower level being the proposed SUE model. A heuristic sensitivity-analysis-based algorithm is developed to solve the bi-level model. Three numerical examples illustrate the improvement effect of the TCS on the network in different scenarios.

  12. Maximum Data Collection Rate Routing Protocol Based on Topology Control for Rechargeable Wireless Sensor Networks

    PubMed Central

    Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei

    2016-01-01

    In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state for most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on distinct application scenarios. Since a higher or maximum data collection rate is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generation rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes the maximum data generation rate for a network by formulating the problem as a linear program. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology control scheme is adopted to improve the network’s performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks. PMID:27483282

  13. Maximum Data Collection Rate Routing Protocol Based on Topology Control for Rechargeable Wireless Sensor Networks.

    PubMed

    Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei

    2016-07-30

    In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state for most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on distinct application scenarios. Since a higher or maximum data collection rate is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generation rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes the maximum data generation rate for a network by formulating the problem as a linear program. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology control scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks.
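
    A minimal sketch of the dual decomposition described above (a generic network-utility toy with a log utility, not the paper's formulation): each node solves a local subproblem given a price, and the price follows subgradient ascent.

      import numpy as np

      # maximize sum(log r_i) subject to sum(r_i) <= capacity
      n, capacity, step = 3, 3.0, 0.1
      lam = 0.5                                    # price on the shared capacity
      for _ in range(500):
          r = np.full(n, 1.0 / lam)                # each node: argmax log(r) - lam*r
          lam = max(1e-6, lam + step * (r.sum() - capacity))   # subgradient ascent
      print("rates:", r, "price:", lam)            # converges to r_i = 1, lam = 1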

  14. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    PubMed Central

    Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected values of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA exhibit a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ. PMID:26029092

  15. Demand side management in recycling and electricity retail pricing

    NASA Astrophysics Data System (ADS)

    Kazan, Osman

    This dissertation addresses several problems from the recycling industry and the electricity retail market. The first paper addresses a real-life scheduling problem faced by a national industrial recycling company. Based on the company's practices, a scheduling problem is defined, modeled, and analyzed, and a solution is approximated efficiently. The recommended application is tested on real-life data and randomly generated data, and the scheduling improvements and financial benefits are presented. The second problem is from the electricity retail market. There are well-known hourly patterns in daily electricity usage; these patterns change in shape and magnitude with the seasons and the days of the week. Generation costs are several times higher during the peak hours of the day, yet most consumers purchase electricity at flat rates. This work explores analytic pricing tools to reduce peak-load electricity demand for retailers. For that purpose, a nonlinear model that determines optimal hourly prices is established based on two major components, unit generation costs and consumers' utility, both of which are analyzed and estimated empirically in the third paper. A pricing model is introduced to maximize the electric retailer's profit, and a closed-form expression for the optimal price vector is obtained. Possible scenarios are evaluated for the distribution of consumers' utility; for the general case, we provide a numerical solution methodology to obtain the optimal pricing scheme. The recommended models are tested under various scenarios that consider consumer segmentation and multiple pricing policies, and they reduce the peak load significantly in most cases. Several utility companies offer hourly pricing to their customers, determining prices from historical data of unit electricity cost over time. In this dissertation we develop a nonlinear model that determines optimal hourly prices together with parameter estimation. The last paper includes a regression analysis of the unit generation cost function obtained from Independent Service Operators, and a consumer experiment is established to replicate the peak-load behavior. As a result, consumers' utility function is estimated and optimal retail electricity prices are computed.
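
    As a textbook analogue of the closed-form pricing result (not the dissertation's estimated model; the demand parameters are hypothetical), a linear demand D(p) = a - b*p gives the profit-maximizing price in closed form:

      # profit = (p - c) * (a - b*p); setting the derivative to zero gives
      # p* = (a + b*c) / (2*b)
      a, b, c = 100.0, 2.0, 10.0      # hypothetical demand intercept, slope, unit cost
      p_star = (a + b * c) / (2 * b)
      print("optimal flat price:", p_star)                     # 30.0
      print("peak-hour variant:", (a + b * 3 * c) / (2 * b))   # costlier generation -> 40.0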

  16. Time-Extended Payoffs for Collectives of Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian K.

    2002-01-01

    A collective is a set of self-interested agents, each trying to maximize its own utility, together with a well-defined, time-extended world utility function that rates the performance of the entire system. In this paper, we use the theory of collectives to design time-extended payoff utilities for agents that are both aligned with the world utility and "learnable", i.e., the agents can readily see how their behavior affects their utility. We show that in systems where each agent aims to optimize such payoff functions, coordination arises as a byproduct of the agents selfishly pursuing their own goals. A game theoretic analysis shows that such payoff functions have the net effect of aligning the Nash equilibrium, the Pareto optimal solution and the world utility optimum, thus eliminating undesirable behavior such as agents working at cross-purposes. We then apply collective-based payoff functions to a token-collection gridworld problem where agents need to optimize the aggregate value of tokens collected across an episode of finite duration (i.e., an abstracted version of rovers on Mars collecting scientifically interesting rock samples, subject to power limitations). We show that, regardless of the initial token distribution, reinforcement learning agents using collective-based payoff functions significantly outperform both natural extensions of single-agent algorithms and global reinforcement learning solutions based on "team games".
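
    The flavor of such collective-based payoffs can be shown with a generic difference reward (a sketch of the idea, not the paper's exact utilities): each agent is paid the global utility minus the global utility recomputed with its own action removed.

      def global_utility(actions):
          """Hypothetical world utility: number of distinct tokens collected."""
          return len({a for a in actions if a is not None})

      def difference_reward(actions, i):
          counterfactual = list(actions)
          counterfactual[i] = None          # replace agent i's action with a null op
          return global_utility(actions) - global_utility(counterfactual)

      actions = ["tokenA", "tokenA", "tokenB"]     # two agents grab the same token
      for i in range(3):
          print(i, difference_reward(actions, i))  # duplicate grabs earn 0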

  17. J.-L. Lions' problem concerning maximal regularity of equations governed by non-autonomous forms

    NASA Astrophysics Data System (ADS)

    Fackler, Stephan

    2017-05-01

    An old problem due to J.-L. Lions going back to the 1960s asks whether the abstract Cauchy problem associated to non-autonomous forms has maximal regularity if the time dependence is merely assumed to be continuous or even measurable. We give a negative answer to this question and discuss the minimal regularity needed for positive results.

  18. Motor control is decision-making.

    PubMed

    Wolpert, Daniel M; Landy, Michael S

    2012-12-01

    Motor behavior may be viewed as a problem of maximizing the utility of movement outcome in the face of sensory, motor and task uncertainty. Viewed in this way, and allowing for the availability of prior knowledge in the form of a probability distribution over possible states of the world, the choice of a movement plan and strategy for motor control becomes an application of statistical decision theory. This point of view has proven successful in recent years in accounting for movement under risk, inferring the loss function used in motor tasks, and explaining motor behavior in a wide variety of circumstances. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Metadata and annotations for multi-scale electrophysiological data.

    PubMed

    Bower, Mark R; Stead, Matt; Brinkmann, Benjamin H; Dufendach, Kevin; Worrell, Gregory A

    2009-01-01

    The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliable communication of metadata and annotations between programs, the maintenance of the meanings within that information over long time periods, and the flexibility to re-sort data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for "big data" files is presented.

  20. Application of next-generation sequencing methods for microbial monitoring of anaerobic digestion of lignocellulosic biomass.

    PubMed

    Bozan, Mahir; Akyol, Çağrı; Ince, Orhan; Aydin, Sevcan; Ince, Bahar

    2017-09-01

    The anaerobic digestion of lignocellulosic wastes is considered an efficient method for managing the world's energy shortages and resolving contemporary environmental problems. However, the recalcitrance of lignocellulosic biomass represents a barrier to maximizing biogas production. The purpose of this review is to examine the extent to which sequencing methods can be employed to monitor such biofuel conversion processes. From a microbial perspective, we present a detailed insight into anaerobic digesters that utilize lignocellulosic biomass and discuss some benefits and disadvantages associated with the microbial sequencing techniques that are typically applied. We further evaluate the extent to which a hybrid approach incorporating a variation of existing methods can be utilized to develop a more in-depth understanding of microbial communities. It is hoped that this deeper knowledge will enhance the reliability and extent of research findings with the end objective of improving the stability of anaerobic digesters that manage lignocellulosic biomass.

  1. Stability region maximization by decomposition-aggregation method. [Skylab stability

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Cuk, S. M.

    1974-01-01

    The aim of this work is to improve the estimates of stability regions by formulating and resolving a proper maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.

  2. Polarity related influence maximization in signed social networks.

    PubMed

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

    Influence maximization in social networks has been widely studied, motivated by applications like the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
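
    The greedy routine behind the 1-1/e guarantee is standard for monotone submodular influence functions; below is a generic Monte Carlo sketch under the plain IC model (the polarity-related IC-P model is not reproduced; the graph is hypothetical):

      import random

      def ic_spread(graph, seeds, p=0.1, trials=200):
          """Estimate expected spread under Independent Cascade by simulation."""
          total = 0
          for _ in range(trials):
              active, frontier = set(seeds), list(seeds)
              while frontier:
                  u = frontier.pop()
                  for v in graph.get(u, ()):
                      if v not in active and random.random() < p:
                          active.add(v)
                          frontier.append(v)
              total += len(active)
          return total / trials

      def greedy_seed_set(graph, k):
          seeds = []
          for _ in range(k):                       # add the best marginal node
              best = max((n for n in graph if n not in seeds),
                         key=lambda n: ic_spread(graph, seeds + [n]))
              seeds.append(best)
          return seeds

      toy = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}   # hypothetical digraph
      random.seed(1)
      print(greedy_seed_set(toy, 2))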

  3. Polarity Related Influence Maximization in Signed Social Networks

    PubMed Central

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

    Influence maximization in social networks has been widely studied, motivated by applications like the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986

  4. Changes of glucose utilization by erythrocytes, lactic acid concentration in the serum and blood cells, and haematocrit value during one hour rest after maximal effort in individuals differing in physical efficiency.

    PubMed

    Tomasik, M

    1982-01-01

    Glucose utilization by the erythrocytes, lactic acid concentration in the blood and erythrocytes, and haematocrit value were determined before exercise and during one hour of rest following maximal exercise in 97 individuals of either sex differing in physical efficiency. In these investigations, individuals with strikingly high physical fitness performed maximal work one-third greater than that performed by individuals with medium fitness. The serum concentration of lactic acid remained above the resting value in all individuals even after 60 minutes of rest; in the erythrocytes, however, it returned to the normal level, but only in individuals with strikingly high efficiency. Glucose utilization by the erythrocytes during the restitution period was highest immediately after the exercise in all studied individuals and showed a tendency for a more rapid return to resting values, again in the individuals with the highest efficiency. Repeated investigation of the very efficient individuals demonstrated greater utilization of glucose by the erythrocytes at the time of greater maximal exercise, associated with greater lactic acid concentration in the serum and erythrocytes throughout the whole one-hour rest period. These observations suggest an active participation of erythrocytes in the adaptation of the organism to exercise.

  5. Mass and Volume Optimization of Space Flight Medical Kits

    NASA Technical Reports Server (NTRS)

    Keenan, A. B.; Foy, Millennia Hope; Myers, Jerry

    2014-01-01

    Resource allocation is a critical aspect of space mission planning. All resources, including medical resources, are subject to a number of mission constraints such as maximum mass and volume. However, unlike many resources, there is often limited understanding of how to optimize medical resources for a mission. The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulates outcomes and describes the impact of medical events in terms of lost crew time, medical resource usage, and the potential for medically required evacuation. Previously published work describes an approach that uses the IMM to generate optimized medical kits that maximize benefit to the crew subject to mass and volume constraints. We improve upon the results obtained previously and extend our approach to minimize mass and volume while meeting a specified benefit threshold. METHODS: We frame the medical kit optimization problem as a modified knapsack problem and implement an algorithm utilizing dynamic programming. Using this algorithm, optimized medical kits were generated for 3 mission scenarios with the goal of minimizing the medical kit mass and volume for a specified likelihood of evacuation or Crew Health Index (CHI) threshold. The algorithm was expanded to generate medical kits that maximize likelihood of evacuation or CHI subject to mass and volume constraints. RESULTS AND CONCLUSIONS: In maximizing benefit to crew health subject to certain constraints, our algorithm generates medical kits that more closely resemble the unlimited-resource scenario than previous approaches that leverage medical risk information generated by the IMM. Our work demonstrates that this algorithm provides an efficient and effective means to objectively allocate medical resources for spaceflight missions and to address tradeoffs between medical resource allocation and crew mission success parameters.
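
    The classic 0/1 knapsack dynamic program underlying this kind of kit optimization, as a one-constraint sketch (the IMM work trades off mass and volume jointly; the item values here are hypothetical):

      def knapsack(items, budget):
          """items: list of (benefit, mass) with integer masses; returns max benefit."""
          best = [0.0] * (budget + 1)
          for benefit, mass in items:
              for b in range(budget, mass - 1, -1):   # reverse: each item used once
                  best[b] = max(best[b], best[b - mass] + benefit)
          return best[budget]

      # Hypothetical medical items: (benefit score, mass in 100 g units)
      kit = [(10.0, 3), (7.0, 2), (4.0, 1), (9.0, 4)]
      print(knapsack(kit, 6))      # -> 21.0 (the first three items fit)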

  6. Potential Use of Halophytes to Remediate Saline Soils

    PubMed Central

    Hasanuzzaman, Mirza; Nahar, Kamrun; Alam, Md. Mahabub; Bhowmik, Prasanta C.; Hossain, Md. Amzad; Rahman, Motior M.; Prasad, Majeti Narasimha Vara; Ozturk, Munir; Fujita, Masayuki

    2014-01-01

    Salinity is one of the rising problems causing tremendous yield losses in many regions of the world, especially in arid and semiarid regions. To maximize crop productivity, these areas should be brought into use, either by removing the salinity or by using salt-tolerant crops. Use of salt-tolerant crops does not remove the salt, so halophytes, which can accumulate or exclude salt, can be an effective alternative. Methods for salt removal include agronomic practices or phytoremediation. The first is cost- and labor-intensive and needs some developmental strategies for implementation; by contrast, phytoremediation by halophytes is more suitable as it can be executed very easily without those problems. Several halophyte species, including grasses, shrubs, and trees, can remove the salt from different kinds of salt-affected problematic soils through salt exclusion, excretion, or accumulation, enabled by morphological, anatomical, and physiological adaptations at the organelle and cellular levels. Halophytes exploited to reduce salinity can also serve as resources for meeting the basic needs of people in salt-affected areas. This review focuses on the special adaptive features of halophytic plants under saline conditions and the possible ways to utilize these plants to remediate salinity. PMID:25110683

  7. A Distributed Dynamic Programming-Based Solution for Load Management in Smart Grids

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xu, Yinliang; Li, Sisi; Zhou, MengChu; Liu, Wenxin; Xu, Ying

    2018-03-01

    Load management is being recognized as an important option for active user participation in the energy market. Traditional load management methods usually require a centralized, powerful control center and a two-way communication network between the system operators and energy end-users. The increasing user participation in smart grids may limit the applicability of such methods. In this paper, a distributed solution for load management in emerging smart grids is proposed. The load management problem is formulated as a constrained optimization problem aiming at maximizing the overall utility of users while meeting the load reduction requested by the system operator, and it is solved using a distributed dynamic programming algorithm. The algorithm is implemented via a distributed framework and thus delivers the highly desired distributed solution. It avoids the need for a centralized coordinator or control center and achieves satisfactory outcomes for load management. Simulation results with various test systems demonstrate its effectiveness.

  8. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish whether an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using six algorithms (Borg, MOEA/D, ε-MOEA, ε-NSGA-II, GDE3, and NSGA-II) to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
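
    The abstract does not reproduce the lake dynamics; in the formulation this benchmark adapts (following Carpenter et al.'s lake model), phosphorus evolves under an anthropogenic loading decision, a nonlinear recycling term, and stochastic non-point-source inflows. The sketch below simulates one loading policy; all parameter values are illustrative assumptions.

    ```python
    import random

    # Illustrative "Lake Problem" dynamics:
    #   X[t+1] = X[t] + a + X[t]**q / (1 + X[t]**q) - b*X[t] + eps[t]
    # with b the natural removal rate, q the recycling exponent, and eps[t]
    # lognormal non-point-source loading. All numbers are assumptions.
    b, q, T = 0.42, 2.0, 100
    CRITICAL_P = 0.5                   # assumed eutrophication threshold

    def simulate(a, seed=0):
        rng = random.Random(seed)
        X, exceedances = 0.0, 0
        for _ in range(T):
            eps = rng.lognormvariate(-3.0, 0.5)   # stochastic inflow
            X = X + a + X**q / (1.0 + X**q) - b * X + eps
            exceedances += X > CRITICAL_P
        return X, exceedances

    X_final, n_bad = simulate(a=0.02)
    print(f"final P = {X_final:.3f}, years above threshold = {n_bad}")
    ```

    The reliability objective in the abstract corresponds to the fraction of years a simulated trajectory stays below the threshold, averaged over many stochastic inflow realizations.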

  9. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft that uses onboard computers to calculate a near-optimal trajectory in real time, thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The computational load of the optimal control problem used to maximize the reachable domain is too large for current onboard computers to handle in real time. Thus, the optimal control problem is divided into two problems: a gliding-distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in the horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some features of the optimal solution are derived in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation rather than an iterative one. Finally, the reachable domain obtained with the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  10. Self-testing properties of Gisin's elegant Bell inequality

    NASA Astrophysics Data System (ADS)

    Andersson, Ole; Badziąg, Piotr; Bengtsson, Ingemar; Dumitru, Irina; Cabello, Adán

    2017-09-01

    An experiment in which the Clauser-Horne-Shimony-Holt inequality is maximally violated is self-testing (i.e., it certifies in a device-independent way both the state and the measurements). We prove that an experiment maximally violating Gisin's elegant Bell inequality is not similarly self-testing. The reason can be traced back to the problem of distinguishing an operator from its complex conjugate. We provide a complete and explicit characterization of all scenarios in which the elegant Bell inequality is maximally violated. This enables us to see exactly how the problem plays out.

  11. Scheduling Algorithm for the Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Ichharam, Jaimal; Stubbs, Christopher

    2015-01-01

    The Large Synoptic Survey Telescope (LSST) is a wide-field telescope currently under construction, scheduled to be deployed in Chile by 2022 and to operate for a ten-year survey. As the ground-based telescope with the largest etendue ever constructed, able to take images approximately once every eighteen seconds, the LSST will capture the entirety of the observable sky every few nights in six different bandpasses. With these remarkable features, LSST is primed to provide the scientific community with invaluable data in numerous areas of astronomy, including the observation of near-Earth asteroids, the detection of transient optical events such as supernovae, and the study of dark matter and dark energy through weak gravitational lensing. In order to maximize the utility that LSST will provide toward achieving these scientific objectives, it proves necessary to develop a flexible scheduling algorithm for the telescope which both optimizes its observational efficiency and allows for adjustment based on the evolving needs of the astronomical community. This work defines a merit function that incorporates the urgency of observing a particular field in the sky as a function of time elapsed since it was last observed, dynamic viewing conditions (in particular transparency and sky brightness), and a measure of scientific interest in the field. The problem of maximizing this merit function, summed across the entire observable sky, is then reduced to a classic variant of the dynamic traveling salesman problem. We introduce a new approximation technique that appears particularly well suited to this situation, analyze its effectiveness in resolving the problem, and obtain some promising initial results.
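
    The merit function itself is not specified in the abstract. The sketch below shows one plausible shape under stated assumptions: urgency saturates as time since the last observation approaches a revisit target, and the score is scaled by transparency, penalized by sky brightness, and weighted by per-field scientific interest. Every weight and functional form here is hypothetical.

    ```python
    import math

    def field_merit(hours_since_obs, transparency, sky_brightness_mag,
                    science_weight, revisit_target_h=72.0):
        """Hypothetical merit for one sky field; higher is better."""
        urgency = 1.0 - math.exp(-hours_since_obs / revisit_target_h)
        # darker sky (larger magnitude) helps; the 19-22 mag scale is assumed
        sky_factor = min(max((sky_brightness_mag - 19.0) / 3.0, 0.0), 1.0)
        return science_weight * urgency * transparency * sky_factor

    # Greedy step: observe the field of maximum current merit.
    fields = [
        dict(name="F1", hours_since_obs=60, transparency=0.9,
             sky_brightness_mag=21.0, science_weight=1.0),
        dict(name="F2", hours_since_obs=10, transparency=0.7,
             sky_brightness_mag=20.0, science_weight=1.5),
    ]
    best = max(fields, key=lambda f: field_merit(
        **{k: v for k, v in f.items() if k != "name"}))
    print("observe:", best["name"])
    ```

    A greedy step like this ignores slew costs between fields; accounting for them is what turns summed-merit maximization into the dynamic traveling-salesman-type problem the abstract describes.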

  12. Course Presentation of the Joint-Products Problem with Costs Associated with Dumping

    ERIC Educational Resources Information Center

    Borland, Melvin V.; Howsen, Roy M.

    2009-01-01

    The typical profit-maximization solution for the joint-production problem found in intermediate texts, managerial texts, and other texts concerned with optimal pricing is oversimplified and inconsistent with profit maximization, unless there is either no excess of any of the joint products or no costs associated with dumping. However, it is an…

  13. Determination of optimal self-drive tourism route using the orienteering problem method

    NASA Astrophysics Data System (ADS)

    Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah

    2013-04-01

    This study was conducted to determine optimal travel routes for self-drive tourism based on allocations of time and expense, by maximizing the total attraction score assigned to the cities involved. Self-drive tourism is a type of tourism in which tourists hire or travel in their own vehicle. It involves only tourist destinations that can be linked by a road network. Normally, the traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods are used for minimization objectives, such as determining the shortest travel time or distance. This paper takes the alternative, maximization approach of maximizing attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem is used to determine the optimal travel route. This approach is then extended to the team orienteering problem, and the two methods are compared. Both models were solved using LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
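
    For illustration only (the paper solves the exact models in LINGO), here is a simple greedy heuristic for the orienteering problem: repeatedly visit the feasible city with the best score-per-travel-time ratio until the budget forces a return to the depot. The city coordinates, scores, and budget are made up.

    ```python
    import math

    cities = {          # name: (x, y, attraction score); depot scores 0
        "depot": (0, 0, 0), "A": (2, 1, 8), "B": (5, 4, 15),
        "C": (1, 6, 10), "D": (7, 2, 12),
    }
    BUDGET = 18.0       # total travel-time budget (assumed units)

    def dist(u, v):
        (x1, y1, _), (x2, y2, _) = cities[u], cities[v]
        return math.hypot(x2 - x1, y2 - y1)

    route, pos, spent = ["depot"], "depot", 0.0
    todo = {c for c in cities if c != "depot"}
    while todo:
        # feasible = reachable with enough budget left to return to the depot
        feas = [c for c in todo
                if spent + dist(pos, c) + dist(c, "depot") <= BUDGET]
        if not feas:
            break
        nxt = max(feas, key=lambda c: cities[c][2] / dist(pos, c))
        spent += dist(pos, nxt)
        route.append(nxt)
        todo.remove(nxt)
        pos = nxt
    route.append("depot")
    print(route, "score:", sum(cities[c][2] for c in route))
    ```

    The team orienteering extension assigns several such routes (one per vehicle or day) that jointly maximize the collected score.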

  14. Unifying cost and information in information-theoretic competitive learning.

    PubMed

    Kamimura, Ryotaro

    2005-01-01

    In this paper, we introduce costs into the framework of information maximization and try to maximize the ratio of information to its associated cost. We have shown that competitive learning is realized by maximizing mutual information between input patterns and competitive units. One shortcoming of the method is that maximizing information does not necessarily produce representations faithful to input patterns. Information maximization primarily focuses on those parts of input patterns that are used to distinguish between patterns. Therefore, we introduce a cost that represents the average distance between input patterns and connection weights. By minimizing the cost, the final connection weights reflect input patterns well. We applied the method to a political data analysis, a voting-attitude problem, and the Wisconsin cancer problem. Experimental results confirmed that, when the cost was introduced, representations faithful to input patterns were obtained. In addition, improved generalization performance was obtained within a relatively short learning time.
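
    The paper's exact objective is not given in the abstract; a minimal sketch of the idea, under assumed functional forms, computes soft competitive responses, an empirical mutual-information term between input patterns and units, and a distance cost, then reports their ratio.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))      # 100 input patterns, 4 features
    W = rng.normal(size=(3, 4))        # weights of 3 competitive units
    T = 1.0                            # softmax temperature (assumed)

    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)  # squared distances
    P = np.exp(-d2 / T)
    P /= P.sum(axis=1, keepdims=True)  # p(j|x): soft winner probabilities
    pj = P.mean(axis=0)                # p(j): unit firing rates

    # Mutual information between patterns and units (empirical, in nats),
    # and cost: expected distance between patterns and unit weights.
    info = (P * np.log(P / pj)).sum(axis=1).mean()
    cost = (P * d2).sum(axis=1).mean()
    print(f"information={info:.3f} nats, cost={cost:.3f}, "
          f"ratio={info / cost:.4f}")
    ```

    Training would then adjust W to increase this ratio, so that units both differentiate patterns (high information) and sit close to them (low cost).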

  15. Model of refrigerated display-space allocation for multi agro-perishable products considering markdown policy

    NASA Astrophysics Data System (ADS)

    Satiti, D.; Rusdiansyah, A.

    2018-04-01

    Problems that need more attention in the agri-food supply chain are loss and waste, which are consequences of improper quality control and excessive inventories. Cold storage remains one of the technologies favored by the majority of retailers for controlling product quality. We consider the temperature of cold storage in determining inventory and pricing strategies based on identified product quality. This study aims to minimize agri-food waste, improve the utilization of cold-storage facilities, and maximize the retailer's profit by determining the refrigerated display-space allocation and a markdown policy based on identified food shelf life. The proposed model is evaluated under several different scenarios to find the right strategy.

  16. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mather, Barry

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using the Monte Carlo simulation based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.

  17. Theory of Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2003-01-01

    In this chapter an analysis of the behavior of an arbitrary (perhaps massive) collective of computational processes in terms of an associated "world" utility function is presented. We concentrate on the situation where each process in the collective can be viewed as though it were striving to maximize its own private utility function. For such situations the central design issue is how to initialize/update the collective's structure, and in particular the private utility functions, so as to induce the overall collective to behave in a way that achieves large values of the world utility. Traditional "team game" approaches to this problem simply set each private utility function equal to the world utility function. The "Collective Intelligence" (COIN) framework is a semi-formal set of heuristics that have recently been used to construct private utility functions that in many experiments have resulted in world utility values up to orders of magnitude superior to those ensuing from use of the team game utility. In this paper we introduce a formal mathematics for analyzing and designing collectives. We also use this mathematics to suggest new private utilities that should outperform the COIN heuristics in certain kinds of domains. In accompanying work we use that mathematics to explain previous experimental results concerning the superiority of COIN heuristics. In that accompanying work we also use the mathematics to make numerical predictions, some of which we then test. In this way these two papers establish the study of collectives as a proper science, involving theory, explanation of old experiments, prediction concerning new experiments, and engineering insights.
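
    As a toy illustration of the signal-to-noise idea here: hold one agent's action fixed and resample all the others. A well-designed private utility, such as the Wonderful Life Utility from the COIN literature (the world utility minus the world utility with the agent's action clamped to zero), fluctuates far less with the other agents' actions than the team-game utility does. The quadratic world utility and all numbers below are assumptions.

    ```python
    import random

    N, TRIALS, TARGET = 20, 2000, 10.0
    rng = random.Random(1)

    def G(actions):                    # assumed quadratic world utility
        return -(sum(actions) - TARGET) ** 2

    def wlu(actions, i):               # Wonderful Life Utility for agent i
        clamped = actions[:i] + [0.0] + actions[i + 1:]
        return G(actions) - G(clamped)

    def noise_variance(utility):
        """Variance of agent 0's utility as the other agents' actions vary."""
        vals = []
        for _ in range(TRIALS):
            acts = [0.5] + [rng.uniform(0, 1) for _ in range(N - 1)]
            vals.append(utility(acts, 0))
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    team_game = lambda a, i: G(a)      # everyone shares the world utility
    print("team-game variance:", round(noise_variance(team_game), 2))
    print("WLU variance:      ", round(noise_variance(wlu), 2))
    ```

    The lower variance of the clamped difference utility reflects the low "opacity" the abstract refers to, while both utilities still reward moves that raise the world utility.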

  18. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and also to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
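
    The paper's model is not reproduced in the abstract; as a rough sketch of the approach, the code below discretizes candidate slopes and runs a Viterbi-style dynamic program over per-sample increments, with a Gaussian emission term and a fixed penalty for switching slopes standing in for the HMM transition probabilities. The noise level, penalty, and slope grid are assumptions.

    ```python
    def map_slopes(y, slopes, sigma=0.3, switch_pen=4.0):
        """Viterbi-style decoding of per-step slopes from noisy samples y.

        Emission: Gaussian log-likelihood of increment dy[t] given a slope.
        Transition: 0 to keep the same slope, -switch_pen to change it.
        """
        dy = [b - a for a, b in zip(y, y[1:])]
        K = len(slopes)
        score = [-0.5 * ((dy[0] - s) / sigma) ** 2 for s in slopes]
        back = []
        for d in dy[1:]:
            new, ptr = [], []
            for k in range(K):
                prev = max(range(K), key=lambda j:
                           score[j] - (switch_pen if j != k else 0.0))
                base = score[prev] - (switch_pen if prev != k else 0.0)
                new.append(base - 0.5 * ((d - slopes[k]) / sigma) ** 2)
                ptr.append(prev)
            score, back = new, back + [ptr]
        k = max(range(K), key=lambda j: score[j])
        path = [k]
        for ptr in reversed(back):
            k = ptr[k]
            path.append(k)
        return [slopes[j] for j in reversed(path)]

    # Piecewise linear data: slope +1 then slope -1, with mild noise.
    y = [0, 1.1, 1.9, 3.05, 2.1, 0.95, 0.0]
    print(map_slopes(y, slopes=[-1.0, 0.0, 1.0]))
    ```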

  19. Is there a trade-off between longevity and quality of life in Grossman's pure investment model?

    PubMed

    Eisenring, C

    2000-12-01

    The question is posed whether an individual maximizes lifetime or trades off longevity for quality of life in Grossman's pure investment (PI)-model. It is shown that the answer critically hinges on the assumed production function for healthy time. If the production function for healthy time produces a trade-off between life-span and quality of life, one has to solve a sequence of fixed time problems. The one offering maximal intertemporal utility determines optimal longevity. Comparative static results of optimal longevity for a simplified version of the PI-model are derived. The obtained results predict that higher initial endowments of wealth and health, a rise in the wage rate, or improvements in the technology of producing healthy time, all increase the optimal length of life. On the other hand, optimal longevity is decreasing in the depreciation and interest rate. From a technical point of view, the paper illustrates that a discrete time equivalent to the transversality condition for optimal longevity employed in continuous optimal control models does not exist. Copyright 2000 John Wiley & Sons, Ltd.

  20. Slope Estimation in Noisy Piecewise Linear Functions.

    PubMed

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and also to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.

  1. A distributed algorithm for demand-side management: Selling back to the grid.

    PubMed

    Latifi, Milad; Khalili, Azam; Rastegarnia, Amir; Zandi, Sajad; Bazzi, Wael M

    2017-11-01

    Demand side energy consumption scheduling is a well-known issue in the smart grid research area. However, there is a lack of comprehensive methods to manage the demand side and consumer behavior in order to obtain an optimum solution. Such a method needs to address several aspects, including the scale-free requirement and distributed nature of the problem, consideration of renewable resources, allowing consumers to sell electricity back to the main grid, and adaptivity to local changes in the solution point. In addition, the model should allow compensation to consumers and ensure certain satisfaction levels. To tackle these issues, this paper proposes a novel autonomous demand side management technique which minimizes consumer utility costs and maximizes consumer comfort levels in a fully distributed manner. The technique uses a new logarithmic cost function and allows consumers to sell excess electricity (e.g. from renewable resources) back to the grid in order to reduce their electric utility bill. To develop the proposed scheme, we first formulate the problem as a constrained convex minimization problem. Then, it is converted to an unconstrained version using the segmentation-based penalty method. At each consumer location, we deploy an adaptive diffusion approach to obtain the solution in a distributed fashion. The use of adaptive diffusion makes it possible for consumers to find the optimum energy consumption schedule with a small number of information exchanges. Moreover, the proposed method is able to track drifts resulting from changes in the price parameters and consumer preferences. Simulations and numerical results show that our framework can reduce the total load demand peaks, lower the consumer utility bill, and improve the consumer comfort level.
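
    The paper's diffusion-based scheme is not spelled out in the abstract. As a simplified, centralized-price stand-in for the same economics, the sketch below runs dual ascent on the operator's consumption cap with logarithmic utilities: given a price, each consumer's best response is closed-form, and the price rises until the cap binds. Utility weights, the cap, and the step size are assumptions.

    ```python
    # Dual-ascent sketch: maximize sum_i k_i*log(1 + x_i) subject to
    # sum_i x_i <= C (the post-reduction consumption cap). For price lam,
    # each consumer's best response is x_i = max(0, k_i/lam - 1).
    # All numbers are illustrative assumptions.
    k = [3.0, 1.5, 2.2, 4.1]     # consumer utility weights
    C = 6.0                      # total consumption allowed by the operator
    lam, step = 1.0, 0.05

    for _ in range(500):
        x = [max(0.0, ki / lam - 1.0) for ki in k]   # best responses
        lam = max(1e-6, lam + step * (sum(x) - C))   # raise price if over cap

    print([round(xi, 3) for xi in x], "total =", round(sum(x), 3))
    ```

    In the paper, this kind of price coordination is replaced by adaptive diffusion among neighboring consumers, so no central coordinator is needed.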

  2. Optimum electric utility spot price determinations for small power producing facilities operating under PURPA provisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoudjehbaklou, H.; Puttgen, H.B.

    This paper outlines an optimum spot price determination procedure in the general context of the Public Utility Regulatory Policies Act, PURPA, provisions. PURPA stipulates that local utilities must offer to purchase all available excess electric energy from Qualifying Facilities, QF, at fair market prices. As a direct consequence of these PURPA regulations, a growing number of owners are installing power producing facilities and optimizing their operational schedules to minimize their utility related costs or, in some cases, actually maximize their revenues from energy sales to the local utility. In turn, the utility strives to use spot prices which maximize its revenues from any given Small Power Producing Facility, SPPF, schedule while respecting the general regulatory and contractual framework. The proposed optimum spot price determination procedure fully models the SPPF operation, it enforces the contractual and regulatory restrictions, and it ensures the uniqueness of the optimum SPPF schedule.

  3. Optimum electric utility spot price determinations for small power producing facilities operating under PURPA provisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoudjehbaklou, H.; Puttgen, H.B.

    The present paper outlines an optimum spot price determination procedure in the general context of the Public Utility Regulatory Policies Act, PURPA, provisions. PURPA stipulates that local utilities must offer to purchase all available excess electric energy from Qualifying Facilities, QF, at fair market prices. As a direct consequence of these PURPA regulations, a growing number of owners are installing power producing facilities and optimizing their operational schedules to minimize their utility related costs or, in some cases, actually maximize their revenues from energy sales to the local utility. In turn, the utility will strive to use spot prices which maximize its revenues from any given Small Power Producing Facility, SPPF, schedule while respecting the general regulatory and contractual framework. The proposed optimum spot price determination procedure fully models the SPPF operation, it enforces the contractual and regulatory restrictions, and it ensures the uniqueness of the optimum SPPF schedule.

  4. Designing Agent Collectives For Systems With Markovian Dynamics

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lawson, John W.

    2004-01-01

    The Collective Intelligence (COIN) framework concerns the design of collectives of agents so that, as those agents strive to maximize their individual utility functions, their interaction causes a provided world utility function concerning the entire collective to be maximized as well. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counterfactual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation also has benefits in scenarios not involving Markovian dynamics, provided the dynamics of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low opacity (analogous to having high signal to noise) but are not factored (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor series method for the fully general nonlinear case.

  5. Prospective Analysis of Behavioral Economic Predictors of Stable Moderation Drinking Among Problem Drinkers Attempting Natural Recovery

    PubMed Central

    Tucker, Jalie A.; Cheong, JeeWon; Chandler, Susan D.; Lambert, Brice H.; Pietrzak, Brittney; Kwok, Heather; Davies, Susan L.

    2016-01-01

    Background As interventions have expanded beyond clinical treatment to include brief interventions for persons with less severe alcohol problems, predicting who can achieve stable moderation drinking has gained importance. Recent behavioral economic (BE) research on natural recovery has shown that active problem drinkers who allocate their monetary expenditures on alcohol and saving for the future over longer time horizons tend to have better subsequent recovery outcomes, including maintenance of stable moderation drinking. The present study compared the predictive utility of this money-based “Alcohol-Savings Discretionary Expenditure” (ASDE) index with multiple BE analogue measures of behavioral impulsivity and self-control, which have seldom been investigated together, to predict outcomes of natural recovery attempts. Methods Community-dwelling problem drinkers, enrolled shortly after stopping abusive drinking without treatment, were followed prospectively for up to a year (N = 175 [75.4% male], M age = 50.65 years). They completed baseline assessments of pre-resolution drinking practices and problems; analogue behavioral choice tasks (Delay Discounting, Melioration-Maximization, and Alcohol Purchase Tasks); and a Timeline Followback interview including expenditures on alcohol compared to voluntary savings (ASDE index) during the pre-resolution year. Results Multinomial logistic regression models showed that, among the BE measures, only the ASDE index predicted stable moderation drinking compared to stable abstinence or unstable resolutions involving relapse. As hypothesized, stable moderation was associated with more balanced pre-resolution allocations to drinking and savings (OR = 1.77, 95% CI = 1.02-3.08, p < .05), suggesting that it involves longer-term behavior regulation processes than abstinence. Conclusions The ASDE's unique predictive utility may rest on its comprehensive representation of the contextual elements that support this patterning of behavioral allocation. Stable low-risk drinking, but not abstinence, requires such regulatory processes. PMID:27775161

  6. Asset Management for Water and Wastewater Utilities

    EPA Pesticide Factsheets

    Renewing and replacing the nation's public water infrastructure is an ongoing task. Asset management can help a utility maximize the value of its capital as well as its operations and maintenance dollars.

  7. The Balloon Popping Problem Revisited: Lower and Upper Bounds

    NASA Astrophysics Data System (ADS)

    Jung, Hyunwoo; Chwa, Kyung-Yong

    We consider the balloon popping problem introduced by Immorlica et al. in 2007 [13]. This problem is directly related to the problem of profit maximization in online auctions, where an auctioneer is selling a collection of identical items to anonymous unit-demand bidders. The auctioneer has the full knowledge of bidders’ private valuations for the items and tries to maximize his profit. Compared with the profit of fixed price schemes, the competitive ratio of Immorlica et al.’s algorithm was in the range [1.64, 4.33]. In this paper, we narrow the gap to [1.659, 2].

  8. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
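
    The replica computation is beyond a short sketch, but the primal problem (minimize investment risk subject to budget and expected-return constraints) can be illustrated by solving the equality-constrained KKT system directly. The covariance matrix and return vector below are made up.

    ```python
    import numpy as np

    # Minimum-risk portfolio: minimize (1/2) w'Sw subject to 1'w = 1 and
    # mu'w = r, via the KKT system
    #   [S  A'] [w ]   [0]
    #   [A  0 ] [nu] = [b],   A = [1'; mu'],  b = [1; r].
    S = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])
    mu = np.array([0.05, 0.07, 0.10])
    r = 0.08

    A = np.vstack([np.ones(3), mu])
    KKT = np.block([[S, A.T], [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(3), [1.0, r]])
    w = np.linalg.solve(KKT, rhs)[:3]
    print("weights:", w.round(4), "risk:", round(float(w @ S @ w), 6))
    ```

    The dual problem in the abstract swaps objective and constraint (maximize expected return under budget and risk constraints); the replica analysis characterizes how both solutions behave as the number of assets grows large.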

  9. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  10. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

    This paper is concerned with the problem of allocating a unit-capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by assuming that one can assign money to customers. This assumption is reasonable; a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period while scheduling. Instead, we can assume that the committee assigns a budget to each mission.

  11. Power Converters Maximize Outputs Of Solar Cell Strings

    NASA Technical Reports Server (NTRS)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. The main points of the invention are: a single controller is used to control and optimize any number of "dumb" tracker units and strings independently; power out of the converters is maximized; and the system controller is a microprocessor.

  12. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.

    PubMed

    Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B

    2016-08-01

    We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Self-Learning Intelligent Agents for Dynamic Traffic Routing on Transportation Networks

    NASA Astrophysics Data System (ADS)

    Sadek, Add; Basha, Nagi

    Intelligent Transportation Systems (ITS) are designed to take advantage of recent advances in communications, electronics, and information technology to improve the efficiency and safety of transportation systems. Among the several ITS applications is the notion of Dynamic Traffic Routing (DTR), which involves generating "optimal" routing recommendations to drivers with the aim of maximizing network utilization. In this paper, we demonstrate the feasibility of using a self-learning intelligent agent to solve the DTR problem and achieve traffic user equilibrium in a transportation network. The core idea is to deploy an agent to a simulation model of a highway. The agent then learns by itself through interaction with the simulation model. Once the agent reaches a satisfactory level of performance, it can be deployed to the real world, where it would continue to learn and refine its control policies over time. To test this concept, the Cell Transmission Model (CTM) developed by Carlos Daganzo of the University of California at Berkeley is used to simulate a simple highway with two main alternative routes. With the model developed, a Reinforcement Learning Agent (RLA) is developed to learn how best to dynamically route traffic so as to maximize the utilization of existing capacity. Preliminary results from our experiments are promising. RL, being an adaptive online learning technique, appears to have great potential for controlling stochastic dynamic systems such as transportation systems. Furthermore, the approach is highly scalable and applicable to a variety of networks and roadways.
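
    The abstract does not detail the learning scheme beyond reinforcement learning on the CTM simulator. As a toy stand-in, the sketch below applies epsilon-greedy, bandit-style Q-learning to a two-route network whose travel times grow with the flow assigned to them; the congestion model and hyperparameters are assumptions.

    ```python
    import random

    rng = random.Random(0)

    def travel_time(route, flow):
        """Toy congestion: free-flow time plus a linear flow penalty."""
        free, slope = [(10.0, 0.8), (12.0, 0.4)][route]
        return free + slope * flow

    Q = [0.0, 0.0]             # estimated value of sending traffic via 0 or 1
    alpha, eps = 0.1, 0.1
    flows = [0.0, 0.0]

    for _ in range(5000):
        explore = rng.random() < eps
        a = rng.randrange(2) if explore else (0 if Q[0] >= Q[1] else 1)
        flows[a] = 0.9 * flows[a] + 0.1        # leaky running flow estimate
        flows[1 - a] = 0.9 * flows[1 - a]
        reward = -travel_time(a, flows[a])     # faster route, higher reward
        Q[a] += alpha * (reward - Q[a])        # one-step value update

    print("Q:", [round(q, 2) for q in Q],
          "flow split:", [round(f, 2) for f in flows])
    ```

    At convergence the two Q values roughly equalize, mirroring the user-equilibrium condition that no driver can improve their travel time by switching routes.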

  14. COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  15. Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrari, Giorgio, E-mail: giorgio.ferrari@uni-bielefeld.de; Riedel, Frank, E-mail: frank.riedel@uni-bielefeld.de; Steg, Jan-Henrik, E-mail: jsteg@uni-bielefeld.de

    In this paper we study continuous-time stochastic control problems with both monotone and classical controls, motivated by the so-called public good contribution problem: the problem of n economic agents aiming to maximize their expected utility by allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner’s optimal policy, characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions, and provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.

  16. Maximally multipartite entangled states

    NASA Astrophysics Data System (ADS)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.

  17. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  18. Strategic Placement of Treatments (SPOTS): Maximizing the Effectiveness of Fuel and Vegetation Treatments on Problem Fire Behavior and Effects

    Treesearch

    Diane M. Gercke; Susan A. Stewart

    2006-01-01

    In 2005, eight U.S. Forest Service and Bureau of Land Management interdisciplinary teams participated in a test of strategic placement of treatments (SPOTS) techniques to maximize the effectiveness of fuel treatments in reducing problem fire behavior, adverse fire effects, and suppression costs. This interagency approach to standardizing the assessment of risks and...

  19. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting, and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost, and the maximization of system reliability. The final solution to this multiple objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, yielding a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the impact of the wind-wake effect of the wind turbines upon one another to describe and evaluate the reduction in the system's power production capacity depending on the layout distribution of the wind turbines.

  20. Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.

    ERIC Educational Resources Information Center

    Ravizza, Kenneth; Rotella, Robert

    Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…

  1. Economics of Red Pine Management for Utility Pole Timber

    Treesearch

    Gerald H. Grossman; Karen Potter-Witter

    1991-01-01

    Including utility poles in red pine management regimes leads to distinctly different management recommendations. Where utility pole markets exist, managing for poles will maximize net returns. To do so, plantations should be maintained above 110 ft²/ac, higher than usually recommended. In Michigan's northern lower peninsula, approximately...

  2. An improved game-theoretic approach to uncover overlapping communities

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Liang; Ch'Ng, Eugene; Yong, Xi; Garibaldi, Jonathan M.; See, Simon; Chen, Duan-Bing

    How can we uncover overlapping communities in complex networks to understand their inherent structures and functions? Chen et al. first proposed a community game (Game) to study this problem, in which overlapping communities are discovered once the game converges. It is based on the assumption that each vertex of the underlying network is a rational player seeking to maximize its utility. In this paper, we investigate how similar vertices affect the formation of the community game. The Adamic-Adar index (AA index) is employed to define a new utility function. This novel method has been evaluated on both synthetic and real-world networks. Experiments show a significant improvement in accuracy (from 4.8% to 37.6%) over the Game on 10 real networks. It is more efficient on Facebook networks (FN) and Amazon co-purchasing networks than on other networks. This result suggests that “friend circles of friends” on Facebook are valuable for understanding the overlapping community division.
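
    The Adamic-Adar index used above is standard: for vertices u and v it sums 1/log(deg(z)) over their common neighbors z, so that rare common neighbors count more than highly connected ones. A minimal self-contained sketch:

    ```python
    import math

    # Toy undirected graph as adjacency sets.
    adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3, 5}, 5: {4}}

    def adamic_adar(u, v):
        """AA(u, v) = sum over common neighbors z of 1 / log(deg(z))."""
        return sum(1.0 / math.log(len(adj[z]))
                   for z in adj[u] & adj[v]
                   if len(adj[z]) > 1)      # skip degree-1 nodes (log 1 = 0)

    # Common neighbors of 1 and 3 are {2, 4}: 1/log(2) + 1/log(3) ~ 2.35
    print(round(adamic_adar(1, 3), 3))
    ```

    In the paper, this similarity enters the players' utility function, so that structurally similar vertices tend to settle into the same community.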

  3. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in selecting resources for tasks. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort each task's list of completion times and take the average value. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time for it. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
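
    From this description, one plausible reconstruction of the heuristic is: for each unscheduled task, average its sorted completion times across machines; pick the task with the largest average; and assign it to the machine that finishes it earliest given current loads. The sketch below implements that reading; the ETC (expected time to compute) matrix is illustrative, and any detail beyond the abstract is an assumption.

    ```python
    # Sketch of the Sort-Mid heuristic as described in the abstract.
    etc = {                  # task -> execution time on each machine
        "t1": [14, 16, 9],
        "t2": [13, 19, 18],
        "t3": [11, 13, 19],
        "t4": [13, 8, 17],
    }
    load = [0.0, 0.0, 0.0]   # current ready time of each machine
    schedule = {}

    tasks = set(etc)
    while tasks:
        def avg_completion(t):   # mean completion time over all machines
            times = sorted(load[m] + etc[t][m] for m in range(len(load)))
            return sum(times) / len(times)
        t = max(tasks, key=avg_completion)
        m = min(range(len(load)), key=lambda m: load[m] + etc[t][m])
        load[m] += etc[t][m]
        schedule[t] = m
        tasks.remove(t)

    print(schedule, "makespan:", max(load))
    ```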

  4. Sort-Mid tasks scheduling algorithm in grid computing

    PubMed Central

    Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.

    2014-01-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in selecting resources for tasks. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort each task's list of completion times and take the average value. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time for it. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937

  5. An application of prospect theory to a SHM-based decision problem

    NASA Astrophysics Data System (ADS)

    Bolognani, Denise; Verzobio, Andrea; Tonelli, Daniel; Cappello, Carlo; Glisic, Branko; Zonta, Daniele

    2017-04-01

    Decision making investigates choices that have uncertain consequences and cannot be completely predicted. Rational behavior may be described by the so-called expected utility theory (EUT), which aims to help choose among several options so as to maximize the expected utility of the consequences. However, Kahneman and Tversky developed an alternative model, called prospect theory (PT), showing that the basic axioms of EUT are violated in several instances. In contrast to EUT, PT takes into account irrational behaviors and heuristic biases. It suggests an alternative approach in which probabilities are replaced by decision weights, which are strictly related to the decision maker's preferences and may change across individuals. In particular, people underestimate the utility of uncertain scenarios compared to outcomes obtained with certainty, and show inconsistent preferences when the same choice is presented in different forms. The goal of this paper is to analyze a real case study involving a decision problem regarding the Streicker Bridge, a pedestrian bridge on the Princeton University campus. By modelling the bridge manager first with EUT and then with PT, we verify the differences between the two approaches and investigate how the two models are sensitive to the unpacking of probabilities, which is a common cognitive bias in irrational behavior.
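
    For reference, the standard Tversky-Kahneman parameterization of prospect theory uses the value function v(x) = x^alpha for gains and v(x) = -lambda*(-x)^beta for losses, and the probability weighting function w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma). The sketch below evaluates a simple two-outcome prospect with their commonly cited estimates (alpha = beta = 0.88, lambda = 2.25, gamma = 0.61 for gains); applying this to the bridge case would require the paper's specific payoffs and probabilities.

    ```python
    # Prospect theory sketch using Tversky & Kahneman's (1992) forms.
    ALPHA, BETA, LAM, GAMMA = 0.88, 0.88, 2.25, 0.61

    def value(x):
        """S-shaped value: concave for gains, steeper and convex for losses."""
        return x ** ALPHA if x >= 0 else -LAM * (-x) ** BETA

    def weight(p):
        """Inverse-S weighting: small probabilities are overweighted."""
        return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

    def prospect_value(outcomes):
        """outcomes: list of (payoff, probability) pairs."""
        return sum(weight(p) * value(x) for x, p in outcomes)

    # A 10% chance of 100 vs. a sure 10: the overweighted 10% probability
    # can make the gamble look better than expected utility would suggest.
    print(round(prospect_value([(100, 0.10)]), 2), "vs", round(value(10), 2))
    ```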

  6. Network formation: neighborhood structures, establishment costs, and distributed learning.

    PubMed

    Chasparis, Georgios C; Shamma, Jeff S

    2013-12-01

    We consider the problem of network formation in a distributed fashion. Network formation is modeled as a strategic-form game, where agents represent nodes that form and sever unidirectional links with other nodes and derive utilities from these links. Furthermore, agents can form links only with a limited set of neighbors. Agents trade off the benefit from links, which is determined by a distance-dependent reward function, and the cost of maintaining links. When each agent acts independently, trying to maximize its own utility function, we can characterize “stable” networks through the notion of Nash equilibrium. In fact, the introduced reward and cost functions lead to Nash equilibria (networks), which exhibit several desirable properties such as connectivity, bounded-hop diameter, and efficiency (i.e., minimum number of links). Since Nash networks may not necessarily be efficient, we also explore the possibility of “shaping” the set of Nash networks through the introduction of state-based utility functions. Such utility functions may represent dynamic phenomena such as establishment costs (either positive or negative). Finally, we show how Nash networks can be the outcome of a distributed learning process. In particular, we extend previous learning processes to so-called “state-based” weakly acyclic games, and we show that the proposed network formation games belong to this class of games.

  7. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially reduce the complexity of existing models owing to fewer constraints and variables. Uncertain shipments are also studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as a second objective alongside the traditional maximum-coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e. mutation and crossover. Besides validating the proposed models, computational results confirm the better performance of the modified NSGA-II versus the traditional one.

  8. Partitioning-based mechanisms under personalized differential privacy.

    PubMed

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-05-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t -round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms.
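
    The paper's partitioning mechanisms are specific to it, but the primitive they build on is easy to illustrate: if the users in a partition carry different personal privacy budgets, a conservative choice is to release the partition's count with Laplace noise calibrated to the smallest epsilon in the group (a count query has sensitivity 1). The grouping rule and data below are assumptions, not the paper's algorithms.

    ```python
    import math
    import random

    rng = random.Random(42)

    # Each user contributes one record plus a personal privacy budget.
    users = [("u%d" % i, rng.choice([0.1, 0.5, 1.0])) for i in range(300)]

    # Naive partitioning by personal epsilon; each partition's count is then
    # released with Laplace noise of scale 1 / (min epsilon in partition).
    partitions = {}
    for uid, eps in users:
        partitions.setdefault(eps, []).append(uid)

    def laplace_noise(scale):
        """Sample Laplace(0, scale) by inverse transform."""
        u = rng.random() - 0.5
        sign = 1.0 if u >= 0 else -1.0
        return -scale * sign * math.log(1.0 - 2.0 * abs(u))

    for eps, members in sorted(partitions.items()):
        noisy = len(members) + laplace_noise(1.0 / eps)
        print(f"eps={eps}: true={len(members)}, noisy={noisy:.1f}")
    ```

    The paper's privacy-aware and utility-based partitionings choose the groups more cleverly, trading wasted budget within a partition against the noise added across partitions.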

  9. Partitioning-based mechanisms under personalized differential privacy

    PubMed Central

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-01-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users while providing insufficient protection for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing the utility of the differentially private computation. Privacy-aware partitioning aims to minimize privacy budget waste, while utility-based partitioning aims to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms. PMID:28932827

  10. Strategic Style in Pared-Down Poker

    NASA Astrophysics Data System (ADS)

    Burns, Kevin

    This chapter deals with the manner of making diagnoses and decisions, called strategic style, in a gambling game called Pared-down Poker. The approach treats style as a mental mode in which choices are constrained by expected utilities. The focus is on two classes of utility, i.e., money and effort, and how cognitive styles compare to normative strategies in optimizing these utilities. The insights are applied to real-world concerns like managing the war against terror networks and assessing the risks of system failures. After "Introducing the Interactions" involved in playing poker, the contents are arranged in four sections, as follows. "Underpinnings of Utility" outlines four classes of utility and highlights the differences between them: economic utility (money), ergonomic utility (effort), informatic utility (knowledge), and aesthetic utility (pleasure). "Inference and Investment" dissects the cognitive challenges of playing poker and relates them to real-world situations of business and war, where the key tasks are inference (of cards in poker, or strength in war) and investment (of chips in poker, or force in war) to maximize expected utility. "Strategies and Styles" presents normative (optimal) approaches to inference and investment, and compares them to cognitive heuristics by which people play poker--focusing on Bayesian methods and how they differ from human styles. The normative strategy is then pitted against cognitive styles in head-to-head tournaments, and tournaments are also held between different styles. The results show that style is ergonomically efficient and economically effective, i.e., style is smart. "Applying the Analysis" explores how style spaces, of the sort used to model individual behavior in Pared-down Poker, might also be applied to real-world problems where organizations evolve in terror networks and accidents arise from system failures.

  11. Optimal scheduling of micro grids based on single objective programming

    NASA Astrophysics Data System (ADS)

    Chen, Yue

    2018-04-01

    Faced with the growing demand for electricity and the shortage of fossil fuels, how to optimally schedule the micro-grid has become an important research topic for maximizing its economic, technological, and environmental benefits. This paper considers the role of the battery under the precondition that the power exchanged between the micro-grid and the main grid does not exceed 150 kW. The main objective studied is economic load operation, namely minimizing the electricity cost (including the cost of wind curtailment). An optimization model is established and solved by a genetic algorithm. The optimal scheduling scheme is obtained, and the utilization of renewable energy and the impact of battery participation in regulation are analyzed.
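
    A minimal genetic-algorithm sketch of this kind of scheduling problem is given below; the load, wind, and price profiles, battery limits, and GA settings are all invented for illustration, with only the 150 kW exchange cap kept from the abstract.

```python
# Toy GA for hourly micro-grid scheduling: choose grid exchange (at most
# 150 kW in either direction); the battery absorbs the remaining imbalance.
# Load, wind, prices, and battery limits are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
H = 24
load = 80 + 30 * np.sin(np.linspace(0, 2 * np.pi, H))       # kW, assumed
wind = np.clip(60 + 40 * rng.standard_normal(H), 0, None)   # kW, assumed
price = np.where((np.arange(H) >= 8) & (np.arange(H) <= 20), 0.12, 0.06)  # $/kWh
CAP, SOC_MAX = 150.0, 400.0                                 # exchange cap, battery kWh

def cost(grid):
    """Electricity cost plus penalties for battery limit violations."""
    batt = np.cumsum(wind + grid - load)                    # battery energy trajectory
    penalty = np.sum(np.clip(-batt, 0, None) + np.clip(batt - SOC_MAX, 0, None))
    return np.sum(price * np.clip(grid, 0, None)) + 10.0 * penalty

pop = rng.uniform(-CAP, CAP, size=(60, H))
for gen in range(300):
    fit = np.apply_along_axis(cost, 1, pop)
    elite = pop[np.argsort(fit)[:20]]                       # truncation selection
    parents = elite[rng.integers(0, 20, size=(60, 2))]
    mask = rng.random((60, H)) < 0.5                        # uniform crossover
    pop = np.where(mask, parents[:, 0, :], parents[:, 1, :])
    pop += rng.normal(0, 5, size=pop.shape) * (rng.random(pop.shape) < 0.1)  # mutation
    pop = np.clip(pop, -CAP, CAP)
best = pop[np.argmin(np.apply_along_axis(cost, 1, pop))]
print("best daily cost: $%.2f" % cost(best))
```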

  12. A decision theoretical approach for diffusion promotion

    NASA Astrophysics Data System (ADS)

    Ding, Fei; Liu, Yun

    2009-09-01

    In order to maximize cost efficiency from scarce marketing resources, marketers face the problem of which group of consumers to target for promotions. We propose a decision-theoretical approach to model this strategic situation. In one promotion model that we develop, marketers balance the probabilities of successful persuasion against the expected profits on a diffusion scale before making their decisions. In the other promotion model, the cost of identifying influence information is considered, and marketers are allowed to ignore individual heterogeneity. We apply the proposed approach to two threshold influence models, evaluate the utility of each promotion action, and discuss the best strategy. Our results show that efforts to target influentials or easily influenced people may be redundant under some conditions.

  13. Relay selection in energy harvesting cooperative networks with rateless codes

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei

    2018-04-01

    This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information once the number of collected coded bits marginally surpasses the entropy of the original information. To improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected. The optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that the proposed relay selection scheme outperforms other strategies.
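
    The selection rule can be illustrated with a simple harvest-then-forward model; the time split, channel model, and decode-and-forward bottleneck below are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative relay selection: each relay harvests energy from the source's
# RF signal, then forwards; pick the relay maximizing the end-to-end
# achievable rate. Harvesting model and channel gains are simplified.
import numpy as np

rng = np.random.default_rng(2)
K = 5                                   # candidate relays
P_s, eta, sigma2 = 1.0, 0.7, 1e-3       # source power, harvest efficiency, noise
g_sr = rng.exponential(1.0, K)          # source->relay gains (assumed Rayleigh)
g_rd = rng.exponential(1.0, K)          # relay->destination gains

# Half the block harvests/receives, half forwards (assumed equal time split).
P_r = eta * P_s * g_sr                  # harvested power available at each relay
rate_sr = 0.5 * np.log2(1 + P_s * g_sr / sigma2)
rate_rd = 0.5 * np.log2(1 + P_r * g_rd / sigma2)
rate = np.minimum(rate_sr, rate_rd)     # decode-and-forward bottleneck

best = int(np.argmax(rate))
print(f"selected relay {best}, achievable rate {rate[best]:.2f} bit/s/Hz")
```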

  14. Negative correlation learning for customer churn prediction: a comparison study.

    PubMed

    Rodan, Ali; Fayyoumi, Ayham; Faris, Hossam; Alsakran, Jamal; Al-Kadi, Omar

    2015-01-01

    Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known to service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that are able to predict customer churn can effectively help in customer retention campaigns and in maximizing profit. In this paper we utilize an ensemble of multilayer perceptrons (MLPs) trained with negative correlation learning (NCL) for predicting customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble can achieve better generalization performance (higher churn detection rate) compared with an ensemble of MLPs without NCL (flat ensemble) and other common data mining techniques used for churn analysis.
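
    The NCL penalty itself is compact enough to sketch. The following toy example uses linear "members" purely for illustration; the penalty and gradient follow the commonly used Liu-and-Yao form, which may differ in detail from the ensemble trained in the paper.

```python
# Sketch of the negative correlation learning (NCL) penalty for an ensemble
# of M members. For member i on one example:
#   e_i = (f_i - y)^2 + lam * p_i,  p_i = (f_i - fbar) * sum_{j!=i}(f_j - fbar)
# which equals -(f_i - fbar)^2, pushing members away from the ensemble mean.
# The tiny "networks" here are linear models purely for illustration.
import numpy as np

rng = np.random.default_rng(3)
M, D = 4, 3                                  # ensemble size, input dimension
W = rng.normal(0, 0.1, size=(M, D))          # one weight vector per member
lam, lr = 0.5, 0.05

def ncl_step(x, y):
    global W
    f = W @ x                                # member outputs f_i
    fbar = f.mean()
    # With p_i = -(f_i - fbar)^2 and fbar treated as constant, the commonly
    # used gradient is 2(f_i - y) - 2*lam*(f_i - fbar).
    grad_f = 2 * (f - y) - lam * 2 * (f - fbar)
    W -= lr * np.outer(grad_f, x)            # chain rule through f_i = w_i . x

# Toy usage on a noisy linear target.
for _ in range(200):
    x = rng.normal(size=D)
    y = 1.5 * x[0] - 2.0 * x[1] + 0.1 * rng.normal()
    ncl_step(x, y)
print("mean member weights:", W.mean(axis=0).round(2))
```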

  15. A decentralized mechanism for improving the functional robustness of distribution networks.

    PubMed

    Shi, Benyun; Liu, Jiming

    2012-10-01

    Most real-world distribution systems can be modeled as distribution networks, where a commodity flows from source nodes to sink nodes through junction nodes. One of the fundamental characteristics of distribution networks is functional robustness, which reflects the ability to maintain function in the face of internal or external disruptions. In view of the fact that most distribution networks do not have any centralized control mechanism, we consider the problem of how to improve functional robustness in a decentralized way. To achieve this goal, we study two important problems: 1) how to formally measure functional robustness, and 2) how to improve the functional robustness of a network based on the local interaction of its nodes. First, we derive a utility function in terms of network entropy to characterize the functional robustness of a distribution network. Second, we propose a decentralized network pricing mechanism, where each node needs to communicate only with its distribution neighbors, sending a "price" signal to its upstream neighbors and receiving "price" signals from its downstream neighbors. By doing so, each node can determine its outflows by maximizing its own payoff function. Our mathematical analysis shows that the decentralized pricing mechanism produces results equivalent to those of an ideal centralized maximization with complete information. Finally, to demonstrate the properties of our mechanism, we carry out a case study on the U.S. natural gas distribution network. The results validate the convergence and effectiveness of our mechanism in comparison with an existing algorithm.
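
    A generic price-signal iteration of this flavor can be sketched as follows; the two-junction network, payoff, and update rule are invented for illustration and do not reproduce the paper's entropy-based utility.

```python
# Invented two-junction example of decentralized pricing: a source pushes a
# fixed supply through capacitated junctions A and B; each junction raises
# the "price" it posts upstream when oversubscribed (a dual-ascent /
# tatonnement flavor), using only locally observed inflow.
import numpy as np

supply = 12.0
cap = np.array([6.0, 7.0])        # junction capacities (assumed)
price = np.array([1.0, 1.0])      # prices posted to the upstream source
alpha = 0.05                      # price adjustment step

for _ in range(500):
    # Upstream response: the source splits flow toward cheaper junctions
    # (a smooth softmin split keeps the toy example well behaved).
    w = np.exp(-3.0 * price)
    flow = supply * w / w.sum()
    # Local update: each junction sees only its own inflow and capacity.
    price = np.maximum(0.0, price + alpha * (flow - cap))

print("flows:", flow.round(2), "prices:", price.round(2))
```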

  16. Efficient Agent-Based Cluster Ensembles

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Numerous domains ranging from distributed data acquisition to knowledge reuse need to solve the cluster ensemble problem of combining multiple clusterings into a single unified clustering. Unfortunately, current non-agent-based cluster combining methods do not work in a distributed environment, are not robust to corrupted clusterings, and require centralized access to all original clusterings. Overcoming these issues will allow cluster ensembles to be used in fundamentally distributed and failure-prone domains such as data acquisition from satellite constellations, in addition to domains demanding confidentiality such as combining clusterings of user profiles. This paper proposes an efficient, distributed, agent-based clustering ensemble method that addresses these issues. In this approach each agent is assigned a small subset of the data and votes on which final cluster its data points should belong to. The final clustering is then evaluated by a global utility, computed in a distributed way. This clustering is also evaluated using an agent-specific utility that is shown to be easier for the agents to maximize. Results show that agents using the agent-specific utility can achieve better performance than traditional non-agent-based methods and are effective even when up to 50% of the agents fail.

  17. Optimizing Industrial Consumer Demand Response Through Disaggregation, Hour-Ahead Pricing, and Momentary Autonomous Control

    NASA Astrophysics Data System (ADS)

    Abdulaal, Ahmed

    The work in this study addresses the current limitations of the price-driven demand response (DR) approach. Chief among these are the dependence on consumers to respond in an energy-aware manner, the timeliness of the response, the difficulty of applying DR in a busy industrial environment, and the problem of load synchronization. To conduct a simulation study, a realistic price simulation model and consumers' building load models are created using real data. DR action is optimized using an autonomous control method, which eliminates the dependency on frequent consumer engagement. Since load scheduling and long-term planning approaches are infeasible in the industrial environment, the proposed method utilizes instantaneous DR in response to hour-ahead price signals (RTP-HA). Preliminary simulation results showed savings on the consumer side at the cost of an increased supplier-side burden due to the aggregate effect of universal DR policies. Therefore, a consumer disaggregation strategy is briefly discussed. Finally, a refined discrete-continuous control system is presented, which utilizes multi-objective Pareto optimization, evolutionary programming, utility functions, and bidirectional loads. Demonstrated through a virtual testbed fit with real data, the new system achieves momentary optimized DR in real time while maximizing the consumer's wellbeing.

  18. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually provide only internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized to construct robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand and helping to guide the development of more accurate methods.

  19. Using Classification Trees to Predict Alumni Giving for Higher Education

    ERIC Educational Resources Information Center

    Weerts, David J.; Ronca, Justin M.

    2009-01-01

    As the relative level of public support for higher education declines, colleges and universities aim to maximize alumni-giving to keep their programs competitive. Anchored in a utility maximization framework, this study employs the classification and regression tree methodology to examine characteristics of alumni donors and non-donors at a…

  20. The Self in Decision Making and Decision Implementation.

    ERIC Educational Resources Information Center

    Beach, Lee Roy; Mitchell, Terence R.

    Since the early 1950's the principal prescriptive model in the psychological study of decision making has been maximization of Subjective Expected Utility (SEU). This SEU maximization has come to be regarded as a description of how people go about making decisions. However, while observed decision processes sometimes resemble the SEU model,…

  1. Designing Agent Collectives For Systems With Markovian Dynamics

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lawson, John W.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The "Collective Intelligence" (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided "world" utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation has benefits in scenarios not involving Markovian dynamics, in particular scenarios where not all of the arguments of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low "opacity (analogous to having high signal to noise) but are not "factored" (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.

  2. Source selection problem of competitive power plants under government intervention: a game theory approach

    NASA Astrophysics Data System (ADS)

    Mahmoudi, Reza; Hafezalkotob, Ashkan; Makui, Ahmad

    2014-06-01

    Pollution and environmental protection are extremely significant global problems in the present century. Power plants, as the largest pollution-emitting industry, have been the subject of a great deal of scientific research. The fuel or source type used by power plants to generate electricity plays an important role in the amount of pollution produced. Governments should take visible actions to promote green fuel. These actions are often called governmental financial interventions and include legislation such as green subsidies and taxes. In this paper, by considering the government's role in the competition between two power plants, we propose a game-theoretical model that helps the government determine the optimal taxes and subsidies. The numerical examples demonstrate how the government could intervene in a competitive electricity market to achieve environmental objectives and how power plants maximize their utilities for each energy source. The results also reveal that the government's taxes and subsidies effectively influence the fuel types selected by power plants in the competitive market.

  3. Pricing Resources in LTE Networks through Multiobjective Optimization

    PubMed Central

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889

  4. Pricing resources in LTE networks through multiobjective optimization.

    PubMed

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid "user churn," which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution.

  5. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an expectation-maximization algorithm to reduce the determinant matrix manipulation, thereby reducing the complexity of these stages. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelizing matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in the so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared with traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.
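
    The EM-for-PCA core that avoids forming the covariance matrix and its eigendecomposition can be sketched as follows; this is a serial, Roweis-style sketch, and the parallel staging that gives PEM-PCA its name is omitted.

```python
# Sketch of expectation-maximization for PCA: recover the leading principal
# subspace without forming the full covariance matrix or eigendecomposing it,
# which is the complexity reduction the abstract alludes to.
import numpy as np

def em_pca(Y, k, n_iter=50, seed=0):
    """Y: (d, n) centered data; returns a (d, k) orthonormal subspace basis."""
    d, n = Y.shape
    C = np.random.default_rng(seed).normal(size=(d, k))
    for _ in range(n_iter):
        # E-step: latent coordinates X = (C^T C)^{-1} C^T Y
        X = np.linalg.solve(C.T @ C, C.T @ Y)
        # M-step: C = Y X^T (X X^T)^{-1}
        C = Y @ X.T @ np.linalg.inv(X @ X.T)
    Q, _ = np.linalg.qr(C)          # orthonormalize for comparison
    return Q

# Quick check against eigendecomposition PCA on synthetic data.
rng = np.random.default_rng(1)
Y = rng.normal(size=(20, 500)) * np.linspace(3, 0.1, 20)[:, None]
Y -= Y.mean(axis=1, keepdims=True)
Q = em_pca(Y, k=3)
evecs = np.linalg.eigh(Y @ Y.T)[1][:, -3:]     # top-3 eigenvectors
overlap = np.linalg.norm(Q.T @ evecs)          # ~sqrt(3) if subspaces match
print(f"subspace overlap: {overlap:.3f} (max {np.sqrt(3):.3f})")
```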

  6. Hybrid protection algorithms based on game theory in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming

    2011-12-01

    As network sizes increase, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network may lead to huge data losses, since each wavelength carries a large amount of traffic. Therefore, survivability in multi-domain optical networks is very important. However, existing survivability algorithms achieve only unilateral optimization of the profit of either users or network operators. Consequently, they cannot find the win-win optimal solution that considers economic factors for both users and network operators. Thus, in this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution that maximizes the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the win-win optimal solution is NP-complete, we propose two new hybrid protection algorithms, the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, hybrid protection means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve survivability in intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.

  7. Impacts of Maximizing Tendencies on Experience-Based Decisions.

    PubMed

    Rim, Hye Bin

    2017-06-01

    Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire information from personally repeated experiences or from statistical summary descriptions. This phenomenon, called the description-experience gap, was expected to be moderated by individual differences in maximizing tendencies, a desire to maximize decision outcomes. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information search would lead them to make experience-based decisions as if payoff distributions were given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence levels in both the accuracy of knowledge gained from experience and the likelihood of satisfactory outcomes. Additionally, as expected, maximizers exhibited smaller description-experience gaps than non-maximizers. The implications of the findings and unanswered questions for future research are discussed.

  8. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem, and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using the expected value formulation, the expected residual minimization formulation, and the Monte Carlo technique. The primary contributions of this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders in policy design problems.

  9. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud can be modeled as a Gaussian mixture. The separation of ground points and non-ground points can then be recast as the separation of the components of a Gaussian mixture model. Expectation-maximization (EM) is applied to realize this separation: EM is used to calculate maximum likelihood estimates of the mixture parameters, and with the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to refine the filtering results acquired with the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, the dataset provided by the ISPRS was adopted for the test. The proposed algorithm obtains a 4.48% total error, which is lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
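
    The EM step is easy to illustrate in one dimension; the sketch below fits a two-component Gaussian mixture to synthetic elevations and labels points by posterior, omitting the intensity-based refinement and the richer features real LiDAR filtering would use.

```python
# Minimal sketch of the EM idea behind threshold-free filtering: fit a
# two-component 1-D Gaussian mixture to point elevations and label each point
# with the more likely component (ground = lower-mean component).
import numpy as np

rng = np.random.default_rng(4)
z = np.concatenate([rng.normal(2.0, 0.3, 700),    # synthetic ground elevations
                    rng.normal(8.0, 2.0, 300)])   # synthetic object elevations

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Initialize from elevation quantiles; iterate E and M steps.
pi_, mu, var = np.array([0.5, 0.5]), np.quantile(z, [0.25, 0.75]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    r = pi_[:, None] * gauss(z[None, :], mu[:, None], var[:, None])
    r /= r.sum(axis=0, keepdims=True)
    # M-step: maximum-likelihood update of the mixture parameters.
    Nk = r.sum(axis=1)
    pi_, mu = Nk / len(z), (r @ z) / Nk
    var = (r * (z[None, :] - mu[:, None]) ** 2).sum(axis=1) / Nk

ground = mu.argmin()
labels = r.argmax(axis=0) == ground
print(f"estimated means {mu.round(2)}, ground fraction {labels.mean():.2f}")
```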

  10. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for practical applicability on large-scale real-world systems: 1) small set size, 2) minimal network information required for the construction scheme, 3) fast and easy computational implementation, and 4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
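
    The basic construction is simple enough to sketch; the degree-ordered greedy below is one common variant and may differ from the exact selection scheme studied here.

```python
# Standard greedy construction of a maximal independent set (MIS). Any MIS is
# automatically a dominating set: every non-member has a neighbor in the set,
# otherwise it could have been added. Visiting nodes by decreasing degree is
# one common heuristic for scale-free networks; other orders also yield an MIS.
import networkx as nx

def greedy_mis(G):
    mis, blocked = set(), set()
    for v in sorted(G.nodes, key=G.degree, reverse=True):
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked.update(G.neighbors(v))
    return mis

G = nx.barabasi_albert_graph(1000, 2, seed=0)   # synthetic scale-free network
mis = greedy_mis(G)
assert all(not any(u in mis for u in G.neighbors(v)) for v in mis)        # independent
assert all(v in mis or any(u in mis for u in G.neighbors(v)) for v in G)  # dominating
print(f"MIS size: {len(mis)} of {G.number_of_nodes()} nodes")
```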

  11. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    NASA Technical Reports Server (NTRS)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures, and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the ROP problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  12. Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given, and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical plane, also in the presence of a dynamic pressure limit. In the second problem, singular control appears along arcs with an active dynamic pressure limit, which, in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof of the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector-valued control functions.

  13. Social and Professional Participation of Individuals Who Are Deaf: Utilizing the Psychosocial Potential Maximization Framework

    ERIC Educational Resources Information Center

    Jacobs, Paul G.; Brown, P. Margaret; Paatsch, Louise

    2012-01-01

    This article documents a strength-based understanding of how individuals who are deaf maximize their social and professional potential. This exploratory study was conducted with 49 adult participants who are deaf (n = 30) and who have typical hearing (n = 19) residing in America, Australia, England, and South Africa. The findings support a…

  14. Using Debate to Maximize Learning Potential: A Case Study

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda

    2007-01-01

    Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…

  15. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, the quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Larger values of this function then have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to constrained maxima, with applications to integer programming and the Traveling Salesman Problem (TSP).

  16. Can differences in breast cancer utilities explain disparities in breast cancer care?

    PubMed

    Schleinitz, Mark D; DePalo, Dina; Blume, Jeffrey; Stein, Michael

    2006-12-01

    Black, older, and less affluent women are less likely to receive adjuvant breast cancer therapy than their counterparts. Whereas preference contributes to disparities in other health care scenarios, it is unclear if preference explains differential rates of breast cancer care. To ascertain utilities from women of diverse backgrounds for the different stages of, and treatments for, breast cancer and to determine whether a treatment decision modeled from utilities is associated with socio-demographic characteristics. A stratified sample (by age and race) of 156 English-speaking women over 25 years old not currently undergoing breast cancer treatment. We assessed utilities using standard gamble for 5 breast cancer stages, and time-tradeoff for 3 therapeutic modalities. We incorporated each subject's utilities into a Markov model to determine whether her quality-adjusted life expectancy would be maximized with chemotherapy for a hypothetical, current diagnosis of stage II breast cancer. We used logistic regression to determine whether socio-demographic variables were associated with this optimal strategy. Median utilities for the 8 health states were: stage I disease, 0.91 (interquartile range 0.50 to 1.00); stage II, 0.75 (0.26 to 0.99); stage III, 0.51 (0.25 to 0.94); stage IV (estrogen receptor positive), 0.36 (0 to 0.75); stage IV (estrogen receptor negative), 0.40 (0 to 0.79); chemotherapy 0.50 (0 to 0.92); hormonal therapy 0.58 (0 to 1); and radiation therapy 0.83 (0.10 to 1). Utilities for early stage disease and treatment modalities, but not metastatic disease, varied with socio-demographic characteristics. One hundred and twenty-two of 156 subjects had utilities that maximized quality-adjusted life expectancy given stage II breast cancer with chemotherapy. Age over 50, black race, and low household income were associated with at least 5-fold lower odds of maximizing quality-adjusted life expectancy with chemotherapy, whereas women who were married or had a significant other were 4-fold more likely to maximize quality-adjusted life expectancy with chemotherapy. Differences in utility for breast cancer health states may partially explain the lower rate of adjuvant therapy for black, older, and less affluent women. Further work must clarify whether these differences result from health preference alone or reflect women's perceptions of sources of disparity, such as access to care, poor communication with providers, limitations in health knowledge or in obtaining social and workplace support during therapy.
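
    For reference, the two elicitation methods named above reduce to textbook indifference conditions; this is the standard formulation, not a detail specific to this study.

```latex
% Standard gamble: the respondent is indifferent between living in health
% state h for certain and a gamble giving full health with probability p
% (and death otherwise); the indifference probability is the utility.
% Time trade-off: indifference between t years in state h and x years in
% full health gives the utility as the ratio of the two durations.
\[
  u(h) = p^{*} \quad \text{(standard gamble)},
  \qquad
  u(h) = \frac{x^{*}}{t} \quad \text{(time trade-off)}.
\]
```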

  17. Conjunctive management of multi-reservoir network system and groundwater system

    NASA Astrophysics Data System (ADS)

    Mani, A.; Tsai, F. T. C.

    2015-12-01

    This study develops a successive mixed-integer linear fractional programming (successive MILFP) method to conjunctively manage water resources provided by a multi-reservoir network system and a groundwater system. The conjunctive management objectives are to maximize groundwater withdrawals and reservoir storages while satisfying water demands and raising the groundwater level to a target level. The decision variables in the management problem are reservoir releases and spills, network flows, and groundwater pumping rates. Using the fractional programming approach, the objective function is defined as the ratio of total groundwater withdrawals to total reservoir storage deficits from the maximum storages. Maximizing this ratio tends to maximize groundwater use and minimize surface water use. This study introduces a conditional constraint on groundwater head in order to protect aquifers from overpumping: if the current groundwater level is below a target level, the groundwater head at the next time period has to be raised; otherwise, it is allowed to decrease up to a certain extent. This conditional constraint is formulated as a set of mixed binary nonlinear constraints and results in a mixed-integer nonlinear fractional programming (MINLFP) problem. To solve the MINLFP problem, we first use the response matrix approach to linearize groundwater head with respect to pumping rate and reduce the problem to an MILFP problem. Using the Charnes-Cooper transformation, the MILFP is transformed into an equivalent mixed-integer linear programming (MILP) problem. The solution of the MILP is successively updated by updating the response matrix in every iteration. The study uses IBM CPLEX to solve the MILP problem. The methodology is applied to water resources management in northern Louisiana. This conjunctive management approach aims to recover the declining groundwater level of the stressed Sparta aquifer by using surface water from a network of four reservoirs as an alternative source of supply.
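
    For reference, the Charnes-Cooper step invoked above has the following continuous-variable form; integer variables do not survive the scaling directly, which is one reason a successive scheme is needed.

```latex
% Charnes-Cooper transformation, continuous case. The linear fractional program
%   max (c^T x + alpha) / (d^T x + beta)   s.t.  A x <= b,  d^T x + beta > 0
% becomes, with  t = 1 / (d^T x + beta)  and  y = t x,  the linear program
\[
  \max_{y,\,t}\; c^{\top} y + \alpha t
  \quad \text{s.t.} \quad
  A y \le b\,t, \qquad
  d^{\top} y + \beta t = 1, \qquad
  t \ge 0 .
\]
% Binary variables are not preserved by the scaling y = t x, which is why the
% abstract's successive MILFP scheme re-linearizes (via the response matrix)
% and iterates rather than applying the transformation once.
```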

  18. The Profit-Maximizing Firm: Old Wine in New Bottles.

    ERIC Educational Resources Information Center

    Felder, Joseph

    1990-01-01

    Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)

  19. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain arising from this energy gap. We construct an objective function considering food consumption, eating habits, and survival rate to measure utility. By applying mathematical tools from optimal control methods and the qualitative theory of differential equations, we obtain the following results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories toward the steady-state weight. Depending on the values of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.
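
    A state equation of the kind described typically takes the following form; the symbols here are assumed for illustration and are not taken from the paper.

```latex
% Energy-imbalance state equation (assumed form): weight W(t) changes in
% proportion to the gap between energy intake I(t) and expenditure E(W(t), t),
\[
  \frac{dW}{dt} = \frac{I(t) - E\!\left(W(t), t\right)}{\rho},
\]
% where rho converts an energy surplus into body mass (a commonly cited
% figure is roughly 7700 kcal per kg). Because expenditure grows with W,
% a steady state arises where I = E, and its local character (saddle or
% focus) depends on the model parameters, as the abstract describes.
```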

  20. Budget Allocation in a Competitive Communication Spectrum Economy

    NASA Astrophysics Data System (ADS)

    Lin, Ming-Hua; Tsai, Jung-Fa; Ye, Yinyu

    2009-12-01

    This study discusses how to adjust the "monetary budget" to meet each user's physical power demand, or to balance all individual utilities, in a competitive "spectrum market" of a communication system. In the market, multiple users share a common frequency or tone band, and each uses its budget to purchase its own transmit power spectrum (taking others' as given) to maximize its Shannon utility or pay-off function, which includes the effect of interference. A market equilibrium is a budget allocation, price spectrum, and tone power distribution that independently and simultaneously maximizes each user's utility. The equilibrium conditions of the market are formulated and analyzed, and the existence of an equilibrium is proved. Computational results and comparisons between the competitive equilibrium and Nash equilibrium solutions are also presented, which show that the competitive market equilibrium solution often provides a more efficient power distribution.
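
    The per-user problem in such spectrum markets typically has the following form; the notation is assumed here for illustration.

```latex
% Typical per-user problem (notation assumed): user i buys tone powers p_i^k
% at prices pi^k within budget w_i, maximizing a Shannon utility that treats
% the other users' powers as given interference:
\[
  \max_{p_i \ge 0}\;
  \sum_{k} \log\!\left(1 + \frac{p_i^{k}}{\sigma^{k} + \sum_{j \ne i} p_j^{k}}\right)
  \quad \text{s.t.} \quad
  \sum_{k} \pi^{k} p_i^{k} \le w_i .
\]
% An equilibrium is a budget allocation, price spectrum, and power
% distribution at which every user's problem is solved simultaneously.
```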

  1. Undecidability in macroeconomics

    NASA Technical Reports Server (NTRS)

    Chandra, Siddharth; Chandra, Tushar Deepak

    1993-01-01

    In this paper we study the difficulty of solving problems in economics. For this purpose, we adopt the notion of undecidability from recursion theory. We show that certain problems in economics are undecidable, i.e., cannot be solved by a Turing Machine, a device that is at least as powerful as any computational device that can be constructed. In particular, we prove that even in finite closed economies subject to a variable initial condition, in which a social planner knows the behavior of every agent in the economy, certain important social planning problems are undecidable. Thus, it may be impossible to make effective policy decisions. Philosophically, this result formally brings into question the Rational Expectations Hypothesis which assumes that each agent is able to determine what it should do if it wishes to maximize its utility. We show that even when an optimal rational forecast exists for each agency (based on the information currently available to it), agents may lack the ability to make these forecasts. For example, Lucas describes economic models as 'mechanical, artificial world(s), populated by ... interacting robots'. Since any mechanical robot can be at most as computationally powerful as a Turing Machine, such economies are vulnerable to the phenomenon of undecidability.

  2. Seeing is believing: video classification for computed tomographic colonography using multiple-instance learning.

    PubMed

    Wang, Shijun; McKenna, Matthew T; Nguyen, Tan B; Burns, Joseph E; Petrick, Nicholas; Sahiner, Berkman; Summers, Ronald M

    2012-05-01

    In this paper, we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3-D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing an L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods.

  3. Seeing is Believing: Video Classification for Computed Tomographic Colonography Using Multiple-Instance Learning

    PubMed Central

    Wang, Shijun; McKenna, Matthew T.; Nguyen, Tan B.; Burns, Joseph E.; Petrick, Nicholas; Sahiner, Berkman

    2012-01-01

    In this paper we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing an L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods. PMID:22552333

  4. Solving delay differential equations in S-ADAPT by method of steps.

    PubMed

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities, such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps, which allows one to solve virtually any DDE system by transforming it into an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with solutions obtained by the MATLAB DDE solver dde23. Parameter estimation was tested on MATLAB-simulated population pharmacodynamic data. S-ADAPT-generated solutions for DDE problems agreed with the explicit solutions, as well as with the MATLAB-produced solutions, to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with those used to generate the data.
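
    The method of steps itself is straightforward to sketch outside S-ADAPT; the following example uses SciPy rather than the LSODA interface described in the abstract (an illustrative substitution) and integrates a constant-delay DDE segment by segment.

```python
# Minimal sketch of the method of steps for a single constant delay tau.
# Example DDE: y'(t) = -k * y(t - tau), with y(t) = y0 for t <= 0. On each
# segment [m*tau, (m+1)*tau] the delayed term is a known function (history or
# the previous segment's interpolant), so the DDE reduces to an ODE.
import numpy as np
from scipy.integrate import solve_ivp

def solve_dde_steps(f, history, tau, t_end, y0):
    """Integrate y'(t) = f(t, y(t), y(t - tau)) segment by segment."""
    segments = []                       # dense interpolants of finished segments
    def delayed(t):
        s = t - tau
        if s <= 0:
            return history(s)           # initial history function
        for sol in segments:             # find the segment covering t - tau
            if sol.t[0] <= s <= sol.t[-1]:
                return sol.sol(s)
        raise ValueError("no segment covers t - tau")
    t0, y = 0.0, np.atleast_1d(y0)
    while t0 < t_end:
        t1 = min(t0 + tau, t_end)
        sol = solve_ivp(lambda t, yy: f(t, yy, delayed(t)),
                        (t0, t1), y, dense_output=True, rtol=1e-8, atol=1e-10)
        segments.append(sol)
        t0, y = t1, sol.y[:, -1]
    return segments

# Usage: scalar linear DDE of the kind used for validation in the abstract.
k, tau = 1.0, 1.0
segs = solve_dde_steps(lambda t, y, yd: -k * yd,
                       history=lambda s: np.array([1.0]),
                       tau=tau, t_end=5.0, y0=[1.0])
print("y(5) =", segs[-1].y[:, -1])
```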

  5. Speed over efficiency: locusts select body temperatures that favour growth rate over efficient nutrient utilization

    PubMed Central

    Miller, Gabriel A.; Clissold, Fiona J.; Mayntz, David; Simpson, Stephen J.

    2009-01-01

    Ectotherms have evolved preferences for particular body temperatures, but the nutritional and life-history consequences of such temperature preferences are not well understood. We measured thermal preferences in Locusta migratoria (migratory locusts) and used a multi-factorial experimental design to investigate relationships between growth/development and macronutrient utilization (conversion of ingesta to body mass) as a function of temperature. A range of macronutrient intake values for insects at 26, 32 and 38°C was achieved by offering individuals high-protein diets, high-carbohydrate diets or a choice between both. Locusts placed in a thermal gradient selected temperatures near 38°C, maximizing rates of weight gain; however, this enhanced growth rate came at the cost of poor protein and carbohydrate utilization. Protein and carbohydrate were equally digested across temperature treatments, but once digested both macronutrients were converted to growth most efficiently at the intermediate temperature (32°C). Body temperature preference thus yielded maximal growth rates at the expense of efficient nutrient utilization. PMID:19625322

  6. Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HyeongKae Park; Robert Nourgaliev; Vincent Mousseau

    2008-07-01

    There is increasing interest in developing the next generation of simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty, and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach-number, high-heat-flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for large-scale engineering CFD codes, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, which require very accurate and efficient numerical algorithms. The focus of this work is placed on a fully implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from the weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully implicit solution of the problem.

  7. Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.

    A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease by a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model's utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free phase LNAPL recovery systems and for helping hydrogeologists, engineers, and regulators make cost-effective operation and management decisions.

  8. The Design of Distributed Micro Grid Energy Storage System

    NASA Astrophysics Data System (ADS)

    Liang, Ya-feng; Wang, Yan-ping

    2018-03-01

    Distributed micro-grids run in island mode, and the energy storage system is the core component for maintaining stable micro-grid operation. The existing fixed-connection energy storage structure is difficult to adjust during operation and easily causes micro-grid volatility. To address these problems, an array-type energy storage structure is proposed in this paper, and its structure and working principle are analyzed. Finally, a model of the array-type energy storage structure is established in MATLAB; the simulation results show that the array-type energy storage system has great flexibility, maximizes the utilization of the energy storage system, guarantees the reliable operation of the distributed micro-grid, and achieves peak clipping and valley filling.

  9. Negative Correlation Learning for Customer Churn Prediction: A Comparison Study

    PubMed Central

    Faris, Hossam

    2015-01-01

    Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known to service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, adopting accurate models that are able to predict customer churn can effectively help in customer retention campaigns and in maximizing profit. In this paper we utilize an ensemble of multilayer perceptrons (MLPs) trained with negative correlation learning (NCL) for predicting customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble can achieve better generalization performance (higher churn detection rate) compared with an ensemble of MLPs without NCL (flat ensemble) and other common data mining techniques used for churn analysis. PMID:25879060

  10. Optimal Regulation of Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea

    This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and the utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.

  11. Research on the Intensive Material Management System of Biomass Power Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Ruosi; Hao, Tianyi; Li, Yunxiao; Zhang, Fangqing; Ding, Sheng

    2017-05-01

    In view of the widespread problems of loose material management and a lack of standardization and real-time interaction in biomass power plants, this paper proposes a system based on intensive management methods to control the whole material process of the power plant. By analysing the whole process of power plant material management and applying the Internet of Things, the method simplifies the management process. By maximizing the use of resources and applying data mining, material utilization, circulation rate, and quality control management can be improved. The system has been applied in the Gaotang power plant, where it greatly raised the level of materials management and economic effectiveness. It is significant for the safe, cost-effective, and highly efficient operation of the plant.

  12. Optimal rates for phylogenetic inference and experimental design in the era of genome-scale datasets.

    PubMed

    Dornburg, Alex; Su, Zhuo; Townsend, Jeffrey P

    2018-06-25

    With the rise of genome-scale datasets there has been a call for increased data scrutiny and careful selection of loci appropriate for attempting the resolution of a phylogenetic problem. Such loci are desired to maximize phylogenetic information content while minimizing the risk of homoplasy. Theory posits the existence of characters that evolve under such an optimum rate, and efforts to determine optimal rates of inference have been a cornerstone of phylogenetic experimental design for over two decades. However, both theoretical and empirical investigations of optimal rates have varied dramatically in their conclusions, spanning no relationship to a tight relationship between the rate of change and phylogenetic utility. Here we synthesize these apparently contradictory views, demonstrating both empirical and theoretical conditions under which each is correct. We find that optimal rates of characters, not genes, are generally robust to most experimental design decisions. Moreover, consideration of site rate heterogeneity within a given locus is critical to accurate predictions of utility. Factors such as taxon sampling or the targeted number of characters providing support for a topology are additionally critical to the predictions of phylogenetic utility based on the rate of character change. Further, optimality of rates and predictions of phylogenetic utility are not equivalent, demonstrating the need for further development of a comprehensive theory of phylogenetic experimental design.

  13. A Multi Agent-Based Framework for Simulating Household PHEV Distribution and Electric Distribution Network Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Liu, Cheng; Kim, Hoe Kyoung

    2011-01-01

    The variation of household attributes such as income, travel distance, age, household members, and education across different residential areas may generate different market penetration rates for plug-in hybrid electric vehicles (PHEVs). Residential areas with higher PHEV ownership could increase peak electric demand locally and require utilities to upgrade the electric distribution infrastructure even though the capacity of the regional power grid is under-utilized. Estimating the future PHEV ownership distribution at the residential household level can help us understand the impact of the PHEV fleet on power line congestion, transformer overload, and other unforeseen problems at the local residential distribution network level. It can also help utilities manage the timing of recharging demand to maximize load factors and the utilization of existing distribution resources. This paper presents a multi agent-based simulation framework for 1) modeling the spatial distribution of PHEV ownership at the local residential household level, 2) discovering PHEV hot zones where PHEV ownership may quickly increase in the near future, and 3) estimating the impacts of increasing PHEV ownership on the local electric distribution network with different charging strategies. In this paper, we use Knox County, TN as a case study to show the simulation results of the agent-based model (ABM) framework. However, the framework can be easily applied to other local areas in the US.

  14. Creating an Agent Based Framework to Maximize Information Utility

    DTIC Science & Technology

    2008-03-01

    information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value, high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings

  15. Prospective Analysis of Behavioral Economic Predictors of Stable Moderation Drinking Among Problem Drinkers Attempting Natural Recovery.

    PubMed

    Tucker, Jalie A; Cheong, JeeWon; Chandler, Susan D; Lambert, Brice H; Pietrzak, Brittney; Kwok, Heather; Davies, Susan L

    2016-12-01

    As interventions have expanded beyond clinical treatment to include brief interventions for persons with less severe alcohol problems, predicting who can achieve stable moderation drinking has gained importance. Recent behavioral economic (BE) research on natural recovery has shown that active problem drinkers who allocate their monetary expenditures on alcohol and saving for the future over longer time horizons tend to have better subsequent recovery outcomes, including maintenance of stable moderation drinking. This study compared the predictive utility of this money-based "Alcohol-Savings Discretionary Expenditure" (ASDE) index with multiple BE analogue measures of behavioral impulsivity and self-control, which have seldom been investigated together, to predict outcomes of natural recovery attempts. Community-dwelling problem drinkers, enrolled shortly after stopping abusive drinking without treatment, were followed prospectively for up to a year (N = 175 [75.4% male], M age = 50.65 years). They completed baseline assessments of preresolution drinking practices and problems, analogue behavioral choice tasks (Delay Discounting, Melioration-Maximization, and Alcohol Purchase Tasks), and a Timeline Followback interview including expenditures on alcohol compared to voluntary savings (ASDE index) during the preresolution year. Multinomial logistic regression models showed that, among the BE measures, only the ASDE index predicted stable moderation drinking compared to stable abstinence or unstable resolutions involving relapse. As hypothesized, stable moderation was associated with more balanced preresolution allocations to drinking and savings (odds ratio = 1.77, 95% confidence interval = 1.02 to 3.08, p < 0.05), suggesting it is associated with longer-term behavior regulation processes than abstinence. The ASDE's unique predictive utility may rest on its comprehensive representation of contextual elements to support this patterning of behavioral allocation. Stable low-risk drinking, but not abstinence, requires such regulatory processes. Copyright © 2016 by the Research Society on Alcoholism.

  16. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) must maximize the number of observable proposals and the overall scientific priority while minimizing the overall slew cost generated by telescope shifting, taking into account constraints including the visibility of astronomical objects, user-defined observable times, and the avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can then be found by any MCMF solution algorithm. Then, to minimize the slew cost of the generated schedule, we devise a maximally-matchable-edge detection method to reduce the problem size and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler increases the usage of available time with high scientific priority and significantly reduces the slew cost in a very short time.
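    The first stage of such a scheduler can be sketched with an off-the-shelf min-cost max-flow solver. The toy below assigns proposals to time slots, encoding scientific priority as a negative edge cost so the solver prefers high-priority proposals; the graph layout and numbers are assumptions for illustration and omit FAST-specific details such as slew cost and RFI windows.

```python
import networkx as nx

# Toy proposal-to-time-slot assignment (assumed structure): each proposal
# may be observed in some slots; higher priority = more negative cost.
proposals = {"P1": {"priority": 3, "slots": ["t1", "t2"]},
             "P2": {"priority": 1, "slots": ["t2"]},
             "P3": {"priority": 2, "slots": ["t1", "t3"]}}

G = nx.DiGraph()
for p, info in proposals.items():
    G.add_edge("src", p, capacity=1, weight=0)   # each proposal used once
    for t in info["slots"]:
        G.add_edge(p, t, capacity=1, weight=-info["priority"])
slots = {t for info in proposals.values() for t in info["slots"]}
for t in slots:
    G.add_edge(t, "sink", capacity=1, weight=0)  # each slot holds one proposal

flow = nx.max_flow_min_cost(G, "src", "sink")
for p in proposals:
    chosen = [t for t, f in flow[p].items() if f > 0]
    print(p, "->", chosen)
```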

  17. Multi-objective optimization of a continuous bio-dissimilation process of glycerol to 1,3-propanediol.

    PubMed

    Xu, Gongxian; Liu, Ying; Gao, Qunwang

    2016-02-10

    This paper deals with multi-objective optimization of a continuous bio-dissimilation process of glycerol to 1,3-propanediol. To maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of the by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. These multi-objective optimization problems are then solved using the weighted-sum and normal-boundary intersection methods, respectively. Both the Pareto filter algorithm and removal criteria are used to remove the non-Pareto-optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain approximate Pareto-optimal sets for all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto-optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
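    A minimal illustration of the weighted-sum scalarization the authors compare against: two stand-in objectives (hypothetical, not the paper's kinetic model) are combined with a sweep of weights to trace an approximate Pareto front.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objectives on a shared decision variable x (assumptions):
def f1(x):  # production-rate-like objective (to maximize)
    return x[0] * np.exp(-x[0]) * x[1]

def f2(x):  # by-product-like objective (to minimize)
    return 0.1 * x[0] ** 2 + 0.05 * x[1] ** 2

bounds = [(0.01, 2.0), (0.01, 2.0)]
pareto = []
for w in np.linspace(0.0, 1.0, 21):          # sweep scalarization weights
    obj = lambda x, w=w: -w * f1(x) + (1 - w) * f2(x)
    res = minimize(obj, x0=[0.5, 0.5], bounds=bounds)
    pareto.append((f1(res.x), f2(res.x)))

for point in pareto[::5]:
    print("f1=%.3f  f2=%.3f" % point)
```

    As the abstract reports, this weight sweep can miss Pareto-optimal points on non-convex fronts, which is precisely where the normal-boundary intersection method does better.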

  18. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    NASA Astrophysics Data System (ADS)

    Shaat, Musbah; Bader, Faouzi

    2010-12-01

    Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, so a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in the PU bands as well as the active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm achieves near-optimal performance with low computational complexity and demonstrate the efficiency of using FBMC in the CR context.
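    One way to see the flavor of a low-complexity suboptimal allocator is a greedy marginal-rate heuristic: repeatedly place a small power increment on the subcarrier with the best rate gain that still respects the interference cap. The channel and interference gains below are randomly drawn assumptions; the paper's actual algorithm differs in its details.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                          # subcarriers
h = rng.exponential(1.0, N)     # SU channel gains (assumed)
g = rng.exponential(0.1, N)     # interference gains toward the PU (assumed)
P_total, I_max, dp = 10.0, 0.5, 0.01   # power budget, interference cap, step

p = np.zeros(N)
P_left, I_left = P_total, I_max
while P_left > dp:
    # Marginal rate of adding dp on each subcarrier.
    gain = np.log2(1 + (p + dp) * h) - np.log2(1 + p * h)
    gain[g * dp > I_left] = -np.inf   # would violate the interference cap
    k = int(np.argmax(gain))
    if gain[k] <= 0:                  # nothing feasible/profitable left
        break
    p[k] += dp
    P_left -= dp
    I_left -= g[k] * dp

print("rate =", np.log2(1 + p * h).sum(), " interference =", (p * g).sum())
```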

  19. Computational rationality: linking mechanism and behavior through bounded utility maximization.

    PubMed

    Lewis, Richard L; Howes, Andrew; Singh, Satinder

    2014-04-01

    We propose a framework for including information-processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds, and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing "levels" approaches to explanation, and to other optimality-based modeling approaches. Copyright © 2014 Cognitive Science Society, Inc.

  20. Hybrid cooperative spectrum sharing for cognitive radio networks: A contract-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Songwei; Mu, Xiaomin; Wang, Ning; Zhang, Dalong; Han, Gangtao

    2018-06-01

    In order to improve spectral efficiency, a contract-based hybrid cooperative spectrum sharing approach is proposed in this paper, in which multiple primary users (PUs) and multiple secondary users (SUs) share the primary channels in a hybrid manner. Specifically, the SUs switch their transmission mode between underlay and overlay based on the second-order statistics of the primary links. The average transmission rates of the PUs and SUs are analyzed for the two transmission modes, and an optimization problem is formulated to maximize the utility of the PUs under the constraint that the utility of the SUs is nonnegative; this problem is solved by a contract-based approach in global statistical channel state information (S-CSI) scenarios and local S-CSI scenarios, respectively. Numerical results show that the average transmission rate of the PUs is significantly improved by the proposed method in both scenarios, while the SUs also achieve a good average rate, especially when the number of SUs equals the number of PUs in the local S-CSI scenario.

  1. One-Shot Coherence Dilution.

    PubMed

    Zhao, Qi; Liu, Yunchao; Yuan, Xiao; Chitambar, Eric; Ma, Xiongfeng

    2018-02-16

    Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinite identical copies of states. In the nonasymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This Letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost, the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our result can be applied in the analyses of quantum information processing tasks that exploit coherence as resources, such as quantum key distribution and random number generation.

  2. One-Shot Coherence Dilution

    NASA Astrophysics Data System (ADS)

    Zhao, Qi; Liu, Yunchao; Yuan, Xiao; Chitambar, Eric; Ma, Xiongfeng

    2018-02-01

    Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinite identical copies of states. In the nonasymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This Letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost—the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our result can be applied in the analyses of quantum information processing tasks that exploit coherence as resources, such as quantum key distribution and random number generation.

  3. Basic Economic Principles

    NASA Technical Reports Server (NTRS)

    Tideman, T. N.

    1972-01-01

    An economic approach to designing efficient transportation systems involves maximizing an objective function that reflects both goals and costs. A demand curve can be derived by finding the quantities of a good that solve the maximization problem as the price of that commodity is varied, holding income and the prices of all other goods constant. A supply curve is derived by applying the idea of profit maximization by firms. The production function determines the relationship between input and output.
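    The derivation of a demand curve described here is easy to reproduce numerically: fix income and the other price, solve the constrained utility maximization at several prices, and read off the chosen quantity. A minimal sketch with an assumed Cobb-Douglas utility, which conveniently has a known analytic demand for checking:

```python
import numpy as np
from scipy.optimize import minimize

# Cobb-Douglas utility u(x1, x2) = a*ln(x1) + (1-a)*ln(x2); income m and
# price p2 of the other good held fixed while p1 varies (all assumed).
a, m, p2 = 0.4, 100.0, 1.0

def demand_x1(p1):
    # Maximize utility subject to the budget p1*x1 + p2*x2 <= m.
    neg_u = lambda x: -(a * np.log(x[0]) + (1 - a) * np.log(x[1]))
    budget = {"type": "ineq", "fun": lambda x: m - p1 * x[0] - p2 * x[1]}
    res = minimize(neg_u, x0=[1.0, 1.0], constraints=[budget],
                   bounds=[(1e-6, None), (1e-6, None)])
    return res.x[0]

for p1 in [0.5, 1.0, 2.0, 4.0]:
    # Analytic Cobb-Douglas demand is a*m/p1 -- a quick correctness check.
    print(f"p1={p1:4.1f}  numeric x1={demand_x1(p1):7.2f}  analytic={a*m/p1:7.2f}")
```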

  4. Mathematical problems of quantum teleportation

    NASA Astrophysics Data System (ADS)

    Tanaka, Yoshiharu; Asano, Masanari; Ohya, Masanori

    2011-03-01

    It has been considered that a maximally entangled state is needed for complete quantum teleportation. However, Kossakowski and Ohya proposed a scheme for complete teleportation with a non-maximally entangled state [1]. Based on their scheme, we proposed a teleportation model of a 2-level state with a non-maximally entangled state [2]. In the present study, we construct an expanded model in which Alice can teleport an m-level state even if a non-maximally entangled state is used.

  5. Lobster processing by-products as valuable bioresource of marine functional ingredients, nutraceuticals, and pharmaceuticals.

    PubMed

    Nguyen, Trung T; Barber, Andrew R; Corbin, Kendall; Zhang, Wei

    2017-01-01

    The worldwide annual production of lobster was 165,367 tons valued at over $3.32 billion in 2004, and this figure rose to 304,000 tons in 2012. Over half the volume of worldwide lobster production is processed to meet the rising global demand for diversified lobster products. Lobster processing generates a large amount of by-products (heads, shells, livers, and eggs), which account for 50-70% of the starting material. Continued production of these lobster processing by-products (LPBs) without corresponding process development for efficient utilization has led to disposal issues associated with costs and pollution. This review presents promising opportunities to maximize the utilization of LPBs through the economic recovery of their valuable components to produce high value-added products. More than 50,000 tons of LPBs are generated globally, which costs lobster processing companies upward of about $7.5 million/year for disposal. This not only imposes financial and environmental burdens on the lobster processors but also wastes a valuable bioresource. LPBs are rich in a range of high-value compounds such as proteins, chitin, lipids, minerals, and pigments. Extracts recovered from LPBs have been demonstrated to possess several functionalities and bioactivities that are useful for numerous applications in water treatment, agriculture, food, nutraceutical, and pharmaceutical products, and biomedicine. Although LPBs have been studied for the recovery of valuable components, utilization of these materials for large-scale production is still very limited. Extraction of lobster components using microwave, ultrasonic, and supercritical fluid extraction was found to be promising for large-scale production. LPBs are rich in high-value compounds that are currently underutilized; these compounds can be extracted for use as functional ingredients, nutraceuticals, and pharmaceuticals in a wide range of commercial applications. The efficient utilization of LPBs would not only generate significant economic benefits but also reduce the waste management problems associated with the lobster industry. This comprehensive review highlights the availability of global LPBs, the key components in LPBs and their current applications, the limitations of the extraction techniques used, and emerging techniques that may be promising on an industrial scale for the maximized utilization of LPBs. Graphical abstract: Lobster processing by-products as a bioresource of several functional and bioactive compounds used in various value-added products.

  6. Choice Inconsistencies among the Elderly: Evidence from Plan Choice in the Medicare Part D Program: Comment.

    PubMed

    Ketcham, Jonathan D; Kuminoff, Nicolai V; Powers, Christopher A

    2016-12-01

    Consumers' enrollment decisions in Medicare Part D can be explained by Abaluck and Gruber’s (2011) model of utility maximization with psychological biases or by a neoclassical version of their model that precludes such biases. We evaluate these competing hypotheses by applying nonparametric tests of utility maximization and model validation tests to administrative data. We find that 79 percent of enrollment decisions from 2006 to 2010 satisfied basic axioms of consumer theory under the assumption of full information. The validation tests provide evidence against widespread psychological biases. In particular, we find that precluding psychological biases improves the structural model's out-of-sample predictions for consumer behavior.
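    The nonparametric tests of utility maximization mentioned here are revealed-preference (GARP-style) checks, which can be sketched compactly: build the directly-revealed-preference relation from prices and chosen bundles, take its transitive closure, and look for a cycle containing a strict preference. The two-observation dataset below is a textbook violating pair, not the Part D data.

```python
import numpy as np

def violates_garp(prices, bundles):
    """Check the Generalized Axiom of Revealed Preference on observed
    choices: prices and bundles are (T, n) arrays of T observations."""
    T = len(prices)
    cost_own = np.einsum("ti,ti->t", prices, bundles)   # p_t . x_t
    cost_cross = prices @ bundles.T                     # p_t . x_s
    direct = cost_own[:, None] >= cost_cross            # x_t R0 x_s
    strict = cost_own[:, None] > cost_cross             # x_t P0 x_s (strict)
    reach = direct.copy()
    for k in range(T):                                  # transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    # GARP fails if x_t is (indirectly) revealed preferred to x_s
    # while x_s is strictly directly revealed preferred to x_t.
    return bool(np.any(reach & strict.T))

prices = np.array([[1.0, 1.0], [1.0, 2.0]])
bundles = np.array([[2.0, 0.0], [0.0, 2.0]])   # hypothetical violating choices
print("GARP violated:", violates_garp(prices, bundles))   # -> True
```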

  7. Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammadi, Shahin; Gleich, David F.; Kolda, Tamara G.

    2015-11-01

    Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures, and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function with a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on the alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning the yeast and human interactomes. Our results indicate that TAME outperforms state-of-the-art alignment methods in terms of both the biological and topological quality of the alignments.
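    The tensor eigenvalue step can be illustrated with a shifted higher-order power iteration on a triangle tensor. The sketch below runs on a single toy graph, so it computes a triangle-centrality-like score rather than a cross-network alignment; the real TAME iterates on the triangle tensors of both networks being aligned.

```python
import numpy as np

# Toy triangle-tensor power iteration: x_{k+1} ~ T(x_k, x_k) + alpha*x_k,
# a shifted higher-order power method on an assumed small graph.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
n = 4
triangles = [(0, 1, 2), (1, 2, 3)]   # enumerated from the edge list above

def tensor_apply(x):
    y = np.zeros(n)
    for i, j, k in triangles:        # symmetric tensor: all three rotations
        y[i] += x[j] * x[k]
        y[j] += x[i] * x[k]
        y[k] += x[i] * x[j]
    return y

x = np.full(n, 1.0 / np.sqrt(n))
alpha = 0.5                          # shift that keeps the iteration stable
for _ in range(50):
    x_new = tensor_apply(x) + alpha * x
    x = x_new / np.linalg.norm(x_new)

print("triangle-centrality-like scores:", np.round(x, 3))
```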

  8. Power allocation for target detection in radar networks based on low probability of intercept: A cooperative game theoretical strategy

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang

    2017-08-01

    Distributed radar network systems have been shown to have many unique features. Due to their advantage of signal and spatial diversity, radar networks are attractive for target detection. In practice, the netted radars in radar networks tend to maximize their transmit power to achieve better detection performance, which may be in contradiction with low probability of intercept (LPI). Therefore, this paper investigates the problem of adaptive power allocation for radar networks in a cooperative game-theoretic framework such that the LPI performance can be improved. Taking into consideration both the transmit power constraints and the minimum signal to interference plus noise ratio (SINR) requirement of each radar, a cooperative Nash bargaining power allocation game based on LPI is formulated, whose objective is to minimize the total transmit power by optimizing the power allocation in radar networks. First, a novel SINR-based network utility function is defined and utilized as a metric to evaluate power allocation. Then, with the well-designed network utility function, the existence and uniqueness of the Nash bargaining solution are proved analytically. Finally, an iterative Nash bargaining algorithm is developed that converges quickly to a Pareto optimal equilibrium for the cooperative game. Numerical simulations and theoretical analysis are provided to evaluate the effectiveness of the proposed algorithm.

  9. Donor selection criteria for liver transplantation in Argentina: are current standards too rigorous?

    PubMed

    Dirchwolf, Melisa; Ruf, Andrés E; Biggins, Scott W; Bisigniano, Liliana; Hansen Krogh, Daniela; Villamil, Federico G

    2015-02-01

    Organ shortage is the major limitation to the growth of deceased-donor liver transplantation worldwide. One strategy to ameliorate this problem is to maximize the liver utilization rate. The aim of this study was to assess predictors of liver utilization in Argentina. The national database was used to analyze transplant activity in 2010. Donor, recipient, and transplant variables were evaluated as predictors of graft utilization, of the number of rejected donor offers before grafting, and of the occurrence of primary nonfunction (PNF) or early post-transplant mortality (EM). Of the 582 deceased donors, 293 (50.3%) were recovered for liver transplant. Variables associated with non-recovery of the liver were age ≥46 years, umbilical perimeter ≥92 cm, organ procurement outside Gran Buenos Aires, AST ≥42 U/l, and ALT ≥29 U/l. The median number of rejected offers before grafting was 4, and in 71 patients (25%) it was ≥13. The only independent predictor of the occurrence of PNF (3.4%) or EM (5.2%) was the recipient's emergency status. During 2010 in Argentina, the liver was recovered from only half of donors. The low incidence of PNF and EM and the characteristics of the non-recovered liver donors suggest that organ acceptance criteria could be less rigorous. © 2014 Steunstichting ESOT.

  10. Sequence, assembly and annotation of the maize W22 genome

    USDA-ARS?s Scientific Manuscript database

    Since its adoption by Brink and colleagues in the 1950s and 60s, the maize W22 inbred has been utilized extensively to understand fundamental genetic and epigenetic processes such as recombination, transposition, and paramutation. To maximize the utility of W22 in gene discovery, we have Illumina sequen...

  11. Complete utilization of spent coffee to biodiesel, bio-oil and biochar

    USDA-ARS?s Scientific Manuscript database

    Energy production from renewable or waste biomass/material is a more attractive alternative compared to conventional feedstocks, such as corn and soybean. The objective of this study is to maximize utilization of any waste organic carbon material to produce renewable energy. This study presents tota...

  12. 76 FR 49473 - Petition to Maximize Practical Utility of List 1 Chemicals Screened Through EPA's Endocrine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-10

    ... Utility of List 1 Chemicals Screened Through EPA's Endocrine Disruptor Screening Program; Notice of... to the test orders issued under the Endocrine Disruptor Screening Program. DATES: Comments must be... testing of chemical substances for potential endocrine effects. Potentially affected entities, identified...

  13. A heuristic approach to worst-case carrier-to-interference ratio maximization in satellite system synthesis

    NASA Technical Reports Server (NTRS)

    Reilly, Charles H.; Walton, Eric K.; Mata, Fernando; Mount-Campbell, Clark A.; Olen, Carl A.

    1990-01-01

    Consideration is given to the problem of allotting GEO locations to communication satellites so as to maximize the smallest aggregate carrier-to-interference (C/I) ratio calculated at any test point (assumed earth station). The location allotted to each satellite must be within the satellite's service arc, and angular separation constraints are enforced for each pair of satellites to control single-entry EMI. Solutions to this satellite system synthesis problem (SSSP) are found by embedding two heuristic procedures for the satellite location problem (SLP) in a binary search routine to find an estimate of the largest increment to the angular separation values that permits a feasible solution to SLP and SSSP. Numerical results for a 183-satellite, 208-beam example problem are presented.

  14. Maximization of the Supportable Number of Sensors in QoS-Aware Cluster-Based Underwater Acoustic Sensor Networks

    PubMed Central

    Nguyen, Thi-Tham; Van Le, Duc; Yoon, Seokhoon

    2014-01-01

    This paper proposes a practical low-complexity MAC (medium access control) scheme for quality of service (QoS)-aware and cluster-based underwater acoustic sensor networks (UASN), in which the provision of differentiated QoS is required. In such a network, underwater sensors (U-sensor) in a cluster are divided into several classes, each of which has a different QoS requirement. The major problem considered in this paper is the maximization of the number of nodes that a cluster can accommodate while still providing the required QoS for each class in terms of the PDR (packet delivery ratio). In order to address the problem, we first estimate the packet delivery probability (PDP) and use it to formulate an optimization problem to determine the optimal value of the maximum packet retransmissions for each QoS class. The custom greedy and interior-point algorithms are used to find the optimal solutions, which are verified by extensive simulations. The simulation results show that, by solving the proposed optimization problem, the supportable number of underwater sensor nodes can be maximized while satisfying the QoS requirements for each class. PMID:24608009

  15. Maximization of the supportable number of sensors in QoS-aware cluster-based underwater acoustic sensor networks.

    PubMed

    Nguyen, Thi-Tham; Le, Duc Van; Yoon, Seokhoon

    2014-03-07

    This paper proposes a practical low-complexity MAC (medium access control) scheme for quality of service (QoS)-aware and cluster-based underwater acoustic sensor networks (UASN), in which the provision of differentiated QoS is required. In such a network, underwater sensors (U-sensor) in a cluster are divided into several classes, each of which has a different QoS requirement. The major problem considered in this paper is the maximization of the number of nodes that a cluster can accommodate while still providing the required QoS for each class in terms of the PDR (packet delivery ratio). In order to address the problem, we first estimate the packet delivery probability (PDP) and use it to formulate an optimization problem to determine the optimal value of the maximum packet retransmissions for each QoS class. The custom greedy and interior-point algorithms are used to find the optimal solutions, which are verified by extensive simulations. The simulation results show that, by solving the proposed optimization problem, the supportable number of underwater sensor nodes can be maximized while satisfying the QoS requirements for each class.

  16. Efficient Wideband Spectrum Sensing with Maximal Spectral Efficiency for LEO Mobile Satellite Systems

    PubMed Central

    Li, Feilong; Li, Zhiqiang; Li, Guangxia; Dong, Feihong; Zhang, Wei

    2017-01-01

    The usable satellite spectrum is becoming scarce due to static spectrum allocation policies. Cognitive radio approaches have already demonstrated their potential towards spectral efficiency by providing more spectrum access opportunities to secondary users (SUs) with sufficient protection for licensed primary users (PUs). Hence, recent scientific literature has focused on the tradeoff between spectrum reuse and PU protection within narrowband spectrum sensing (SS) in terrestrial wireless sensing networks. However, the narrowband SS techniques investigated in the context of terrestrial CR may not be applicable for detecting wideband satellite signals. In this paper, we investigate the problem of jointly designing the sensing time and hard fusion scheme to maximize SU spectral efficiency in the scenario of low earth orbit (LEO) mobile satellite services based on wideband spectrum sensing. A compressed detection model is established to prove that there indeed exists one optimal sensing time achieving maximal spectral efficiency. Moreover, we propose a novel wideband cooperative spectrum sensing (CSS) framework in which each SU's reporting duration can be utilized for the sensing of the following SU. The sensing performance benefits from the novel CSS framework because the equivalent sensing time is extended by making full use of the reporting slot. Furthermore, with respect to time-varying channels, spatiotemporal CSS (ST-CSS) is presented to attain space and time diversity gains simultaneously under a hard decision fusion rule. Computer simulations show that the joint optimization of sensing time, hard fusion rule, and scheduling strategy achieves a significant improvement in spectral efficiency. Additionally, the novel ST-CSS scheme achieves much higher spectral efficiency than the general CSS framework. PMID:28117712

  17. Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Pan, Feng

    2007-09-01

    In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and movement patterns of the nodes, as well as the link conditions between neighboring nodes. Then, for a multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we develop three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing of each individual packet. Numerical results demonstrate that, using the route discovered by our distributed algorithm in a changing network environment, multimedia applications can achieve statistically better QoS.

  18. A multi-objective optimization model for hub network design under uncertainty: An inexact rough-interval fuzzy approach

    NASA Astrophysics Data System (ADS)

    Niakan, F.; Vahdani, B.; Mohammadi, M.

    2015-12-01

    This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs (including transportation, fuel consumption, and greenhouse emission costs), and maximizing the minimum service reliability. In the proposed model, it is assumed that two nodes can be connected by several types of arcs, which differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, determining the capacity of the hubs is part of the decision-making procedure, and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming, and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.

  19. The control algorithm of the system ‘frequency converter - asynchronous motor’ of the batcher

    NASA Astrophysics Data System (ADS)

    Lyapushkin, S. V.; Martyushev, N. V.; Shiryaev, S. Y.

    2017-01-01

    The paper is devoted to the problem of optimally batching bulk mixtures with respect to accuracy and the maximum possible throughput. This problem is solved for the practical operation of the system ‘frequency converter - asynchronous motor’ with pulse-width modulation driving a screw batcher in agricultural equipment. The developed control algorithm batches small components of a bulk mixture with the prescribed accuracy by accounting for the weight of the falling column of material still in the air after the screw stops. The paper also shows that, to reduce the influence of the mass of the ‘falling column’ on batching accuracy, the components of a recipe should be batched in order from the largest component to the smallest. To exclude the variable batching error that arises from the mass of the material column falling into the batcher bunker, an algorithm of dynamic task correction is used in the control system.

  20. Efficient 3D multi-region prostate MRI segmentation using dual optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    Efficient and accurate extraction of the prostate, and in particular its clinically meaningful sub-regions, from 3D MR images is of great interest in image-guided prostate interventions and the diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach to simultaneously locate the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes prior knowledge of spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulting challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the convex relaxed optimization problem with the region consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great numerical advantages and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, evaluated for inter- and intra-operator variability, demonstrate the promising performance of the proposed approach.

  1. The Mass Distribution in Disk Galaxies

    NASA Astrophysics Data System (ADS)

    Courteau, Stéphane; Dutton, Aaron A.

    We present the relative fraction of baryons and dark matter at various radii in galaxies. For spiral galaxies, this fraction is typically baryon-dominated (maximal) in a galaxy's inner parts and dark-matter-dominated (sub-maximal) in the outskirts. The transition from maximal to sub-maximal baryons occurs within the inner parts of low-mass disk galaxies (with V_tot <= 200 km/s) and in the outer disk for more massive systems. The mean mass fractions for late- and early-type galaxies vary significantly at the same fiducial radius and circular velocity, suggesting a range of galaxy formation mechanisms. A more detailed discussion, and a resolution of the so-called "maximal disk problem", is presented in Courteau & Dutton, ApJL, 801, 20.

  2. Optimal control, investment and utilization schemes for energy storage under uncertainty

    NASA Astrophysics Data System (ADS)

    Mirhosseini, Niloufar Sadat

    Energy storage has the potential to offer new means for added flexibility on electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources, and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the questions of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies are adopted to develop the aforementioned models. Model predictive control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions that consider multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, to interact optimally with the energy market. Energy storage can contribute value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework that extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment in storage capacity over time to maximize savings during normal and emergency operations; (ii) optimal market strategy of buying and selling over 24-hour periods; (iii) optimal storage charge and discharge in much shorter time intervals.
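    The discretized dynamic programming mentioned here can be sketched on the simplest version of the problem: a small battery arbitraging a known price curve over a finite horizon. All numbers (prices, SOC grid, efficiency) are illustrative assumptions, not the dissertation's data.

```python
import numpy as np

# Minimal backward-induction DP for charge/discharge decisions of a small
# storage unit against a known day-ahead price curve (numbers assumed).
prices = np.array([20, 18, 25, 40, 55, 35, 30, 45.0])  # $/MWh per period
levels = np.arange(0, 5)      # discrete SOC levels, MWh
actions = (-1, 0, 1)          # discharge / idle / charge 1 MWh per period
eta = 0.9                     # efficiency penalty applied on discharge

T = len(prices)
V = np.zeros((T + 1, len(levels)))          # value-to-go table
policy = np.zeros((T, len(levels)), int)
for t in range(T - 1, -1, -1):
    for s in levels:
        best, arg = -np.inf, 0
        for a in actions:
            s2 = s + a
            if s2 < levels[0] or s2 > levels[-1]:
                continue      # infeasible SOC transition
            # Revenue: sell eta*1 MWh when discharging, pay when charging.
            r = -a * prices[t] * (eta if a < 0 else 1.0)
            val = r + V[t + 1, s2]
            if val > best:
                best, arg = val, a
        V[t, s], policy[t, s] = best, arg

print("optimal profit starting empty: %.1f $" % V[0, 0])
```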

  3. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
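    In notation paraphrasing the abstract (the symbols below are assumptions, not the paper's), the distributionally robust problem has the min-max form

```latex
\min_{z \in Z_{\mathrm{ad}}} \; \max_{\mu \in \mathcal{A}}
  \int_{\Xi} J\bigl(u(z,\xi),\, z\bigr) \,\mathrm{d}\mu(\xi),
```

    where z is the control or design, u(z, xi) solves the PDE for the uncertain input xi, J is the cost, and the ambiguity set A collects the probability measures on Xi consistent with the observed data. The paper's contribution is the discretization of the inner maximization over measures.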

  4. Harmony search algorithm: application to the redundancy optimization problem

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Thien-My, Dao

    2010-09-01

    The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm: the first is limited to the binary series-parallel system, where the problem consists of selecting elements and redundancy levels to maximize system reliability under various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions compared to those obtained through other approaches.
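    For readers unfamiliar with HSA, the core improvisation loop is short. The sketch below minimizes a stand-in continuous objective (the paper's reliability problems are combinatorial, so this is only the algorithmic skeleton); the memory size, HMCR, PAR, and bandwidth are typical illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                 # stand-in objective (minimize)
    return float(np.sum(x ** 2))

dim, hms, hmcr, par, bw, iters = 5, 10, 0.9, 0.3, 0.1, 2000
lo, hi = -5.0, 5.0
HM = rng.uniform(lo, hi, (hms, dim))            # harmony memory
fit = np.array([sphere(x) for x in HM])

for _ in range(iters):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:                  # memory consideration
            new[d] = HM[rng.integers(hms), d]
            if rng.random() < par:               # pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                    # random improvisation
            new[d] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    f = sphere(new)
    worst = int(np.argmax(fit))
    if f < fit[worst]:                           # replace the worst harmony
        HM[worst], fit[worst] = new, f

print("best value found:", fit.min())
```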

  5. Maintaining Registered Nurses' Currency in Informatics

    ERIC Educational Resources Information Center

    Strawn, Jennifer Alaine

    2017-01-01

    Technology has changed how registered nurses (RNs) provide care at the bedside. As more information technologies (IT) are utilized to improve the quality and safety of care, maximize efficiencies, and decrease costs of care, one must question how well these technologies are fully integrated and utilized by the front-line bedside nurse in his or her…

  6. Percolation of binary disk systems: Modeling and theory

    DOE PAGES

    Meeks, Kelsey; Tencer, John; Pantoya, Michelle L.

    2017-01-12

    The dispersion and connectivity of particles with a high degree of polydispersity is relevant to problems involving composite material properties and reaction decomposition prediction, and has been the subject of much study in the literature. This paper utilizes Monte Carlo models to predict percolation thresholds for two-dimensional systems containing disks of two different radii. Monte Carlo simulations and spanning probability are used to extend prior models into regions of higher polydispersity than those previously considered. A correlation to predict the percolation threshold for binary disk systems is proposed based on the extended dataset presented in this work and compared to previously published correlations. Finally, a set of boundary conditions necessary for a good fit is presented, and a condition for maximizing the percolation threshold for binary disk systems is suggested.
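    A Monte Carlo spanning-probability estimate of this kind can be written with a union-find over disks plus two virtual edge nodes. The box size, radii, mixing fraction, and disk count below are arbitrary assumptions; a production study would sweep density and average over far more trials.

```python
import numpy as np

def spans(centers, radii, L):
    """True if overlapping disks connect the left and right edges of an
    L x L box (union-find with virtual LEFT/RIGHT nodes)."""
    n = len(centers)
    parent = list(range(n + 2))
    LEFT, RIGHT = n, n + 1
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for i in range(n):
        if centers[i, 0] - radii[i] <= 0: union(i, LEFT)
        if centers[i, 0] + radii[i] >= L: union(i, RIGHT)
        for j in range(i):
            if np.hypot(*(centers[i] - centers[j])) <= radii[i] + radii[j]:
                union(i, j)
    return find(LEFT) == find(RIGHT)

# Binary disks: radii r1, r2 mixed at number fraction phi (illustrative).
rng = np.random.default_rng(3)
L, r1, r2, phi, n, trials = 10.0, 0.5, 0.25, 0.5, 400, 50
hits = 0
for _ in range(trials):
    c = rng.uniform(0, L, (n, 2))
    r = np.where(rng.random(n) < phi, r1, r2)
    hits += spans(c, r, L)
print("estimated spanning probability ~", hits / trials)
```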

  7. Effect of amendments addition on adsorption of landfill leachate

    NASA Astrophysics Data System (ADS)

    Bai, X. J.; Zhang, H. Y.; Wang, G. Q.; Gu, J.; Wang, J. H.; Duan, G. P.

    2018-03-01

    The disposal of leachate has become one of the most pressing problems for landfills. Taking three kinds of amendments (corn straw, mushroom residue, and garden waste) as adsorbent materials, this study evaluates their leachate adsorption performance through indicators such as the saturation adsorption ratio, sulfur-containing odor emission, and heat value. The results showed that all three amendments can effectively adsorb leachate, with saturation adsorption ratios between 1:2 and 1:4. Adding an amendment could significantly reduce the sulfur-containing odor emission of the leachate. Among the three amendments, mushroom residue adsorbed leachate to the greatest degree with a low concentration of sulfur-containing odor emission. Industrial analysis showed that the heat values of the amendments after absorbing leachate are more than 14 MJ/kg, so they can be utilized as a biomass fuel.

  8. China's medical savings accounts: an analysis of the price elasticity of demand for health care.

    PubMed

    Yu, Hao

    2017-07-01

    Although medical savings accounts (MSAs) have drawn intensive attention across the world for their potential in cost control, there is limited evidence of their impact on the demand for health care. This paper is intended to fill that gap. First, we built a dynamic model of a consumer's utility maximization problem in the presence of a nonlinear price schedule embedded in an MSA. Second, the model was implemented using data from a 2-year MSA pilot program in China. The estimated price elasticity under MSAs was between -0.42 and -0.58, i.e., higher in magnitude than that reported in the literature. The relatively high price elasticity suggests that MSAs as an insurance feature may help control costs. However, the long-term effect of MSAs on health costs is subject to further analysis.
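    As a quick reminder of what such an elasticity means, the arc (midpoint) formula applied to two purely hypothetical (price, quantity) observations lands near the upper end of the reported range:

```python
import numpy as np

# Arc (midpoint) elasticity from two (price, quantity) observations;
# the numbers are illustrative, not from the MSA pilot data.
p = np.array([10.0, 12.0])    # out-of-pocket price before/after
q = np.array([100.0, 90.0])   # utilization before/after

dq = (q[1] - q[0]) / q.mean()
dp = (p[1] - p[0]) / p.mean()
print("arc elasticity:", round(dq / dp, 2))   # ~ -0.58
```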

  9. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2017-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369

  10. Optimal experiment design for magnetic resonance fingerprinting.

    PubMed

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
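    The CRB machinery generalizes readily: for a deterministic signal in white Gaussian noise, the Fisher information is J = Sᵀ S / σ², with S the Jacobian of the signal with respect to the parameters, and experiment design amounts to choosing acquisition settings that shrink the inverse of J. The toy two-parameter exponential model below is an assumption standing in for the actual MR fingerprinting sequence model.

```python
import numpy as np

# Generic CRB sketch: Fisher information J = S^T S / sigma^2 for a
# parametric signal s(theta) in white Gaussian noise, with the Jacobian
# S approximated by central finite differences (model/numbers assumed).
def signal(theta, t):
    T1, T2 = theta                      # toy "relaxation" parameters
    return np.exp(-t / T1) - 0.5 * np.exp(-t / T2)

t = np.linspace(0.05, 3.0, 40)          # acquisition times (design choice)
theta0 = np.array([1.0, 0.3])
sigma = 0.02
eps = 1e-5

S = np.empty((len(t), 2))
for k in range(2):
    d = np.zeros(2); d[k] = eps
    S[:, k] = (signal(theta0 + d, t) - signal(theta0 - d, t)) / (2 * eps)

J = S.T @ S / sigma**2                   # Fisher information matrix
crb = np.linalg.inv(J)                   # lower bound on estimator covariance
print("CRB standard deviations:", np.sqrt(np.diag(crb)))
```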

  11. Optimization of Multiple Related Negotiation through Multi-Negotiation Network

    NASA Astrophysics Data System (ADS)

    Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi

    In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility by considering all related negotiations are dynamically calculated based on a MNN. Secondly, by employing a MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, through comparing expected utilities between all possible policies to conduct MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.

  12. Maximally slicing a black hole.

    NASA Technical Reports Server (NTRS)

    Estabrook, F.; Wahlquist, H.; Christensen, S.; Dewitt, B.; Smarr, L.; Tsiang, E.

    1973-01-01

    Analytic and computer-derived solutions are presented of the problem of slicing the Schwarzschild geometry into asymptotically flat, asymptotically static, maximal spacelike hypersurfaces. The sequence of hypersurfaces advances forward in time in both halves (u greater than or equal to 0, u less than or equal to 0) of the Kruskal diagram, tending asymptotically to the hypersurface r = 3/2 M and avoiding the singularity at r = 0. Maximality is therefore a potentially useful condition to impose in obtaining computer solutions of Einstein's equations.

  13. Optimal Battery Utilization Over Lifetime for Parallel Hybrid Electric Vehicle to Maximize Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev

    This paper presents a control strategy to maximize fuel economy of a parallel hybrid electric vehicle over a target life of the battery. Many approaches to maximizing fuel economy of a parallel hybrid electric vehicle do not consider the effect of the control strategy on the life of the battery. This leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus to use the engine and consume fuel. The proposed approach addresses this trade-off by exploiting the differences in the fast dynamics of vehicle power management and slow dynamics of battery aging. The control strategy is separated into two parts, (1) Predictive Battery Management (PBM), and (2) Predictive Power Management (PPM). PBM is the higher level control with slow update rate, e.g. once per month, responsible for generating optimal set points for PPM. The considered set points in this paper are the battery power limits and State Of Charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower level control with high update rate, e.g. a second, responsible for generating the optimal HEV energy management controls and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands to achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that the HEV fuel economy is maximized while meeting a specified target battery life. On the other hand, the optimal unconstrained control strategy achieves marginally higher fuel economy, but fails to meet the target battery life.

  14. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Nikhar; Tom, Nathan M

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  15. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Nikhar; Tom, Nathan

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  16. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
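
    Since the abstract names Dinkelbach's algorithm for extracting the maximum energy efficiency, a hedged sketch of that outer loop may help. The inner parametric solver below is a plain grid search over a one-dimensional toy feasible set; in the paper that role is played by the CCCP step, and the rate and power models here are invented.

```python
# Dinkelbach's algorithm for max f(x)/g(x), with a toy inner solver.
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-8, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set (assumes g > 0)."""
    lam = 0.0  # current estimate of the optimal ratio
    x_star = candidates[0]
    for _ in range(max_iter):
        # Parametric subproblem: maximize f(x) - lam * g(x).
        vals = [f(x) - lam * g(x) for x in candidates]
        x_star = candidates[int(np.argmax(vals))]
        if f(x_star) - lam * g(x_star) < tol:  # F(lam*) = 0 at the optimum
            break
        lam = f(x_star) / g(x_star)            # Dinkelbach update
    return x_star, lam

# Toy example: maximize rate/power over transmit powers in [0.1, 5].
powers = list(np.linspace(0.1, 5.0, 500))
rate = lambda p: np.log2(1.0 + 4.0 * p)  # hypothetical rate model
power = lambda p: p + 0.5                # hypothetical power model
p_opt, ee = dinkelbach(rate, power, powers)
print(f"optimal power {p_opt:.3f}, energy efficiency {ee:.3f}")
```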

  17. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  18. Collective Intelligence. Chapter 17

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2003-01-01

    Many systems of self-interested agents have an associated performance criterion that rates the dynamic behavior of the overall system. This chapter presents an introduction to the science of such systems. Formally, collectives are defined as any system having the following two characteristics: First, the system must contain one or more agents each of which we view as trying to maximize an associated private utility; second, the system must have an associated world utility function that rates the possible behaviors of that overall system. In practice, collectives are often very large, distributed, and support little, if any, centralized communication and control, although those characteristics are not part of their formal definition. A naturally occurring example of a collective is a human economy. One can identify the agents and their private utilities as the human individuals in the economy and the associated personal rewards they are each trying to maximize. One could then identify the world utility as the time average of the gross domestic product. ("World utility" per se is not a construction internal to a human economy, but rather something defined from the outside.) To achieve high world utility, it is necessary to avoid having the agents work at cross-purposes, lest phenomena like liquidity traps or the Tragedy of the Commons (TOC) occur, in which agents' individual pursuit of their private utilities lowers world utility. The obvious way to avoid such phenomena is by modifying the agents' utility functions to be "aligned" with the world utility. This can be done via punitive legislation. A real-world example of an attempt to do this was the creation of antitrust regulations designed to prevent monopolistic practices.

  19. Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System

    NASA Astrophysics Data System (ADS)

    Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju

    2018-03-01

    A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme over the physical layer and data-link layer is proposed for a cooperative relaying system in this paper. Our goal is to find the optimal relay selection and power allocation scheme that maximizes the system achievable rate while satisfying the total transmit power constraint in the physical layer and the statistical delay quality-of-service (QoS) demand in the data-link layer. Using the concept of effective capacity (EC), the goal can be formulated as a joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to a total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach, and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves larger EC than other schemes while satisfying the delay QoS demand. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between source and destination, and the EC becomes smaller as the QoS exponent becomes larger.

  20. Metropolitan natural area protection to maximize public access and species representation

    Treesearch

    Jane A. Ruliffson; Robert G. Haight; Paul H. Gobster; Frances R. Homans

    2003-01-01

    In response to widespread urban development, local governments in metropolitan areas in the United States acquire and protect privately-owned open space. We addressed the planner's problem of allocating a fixed budget for open space protection among eligible natural areas with the twin objectives of maximizing public access and species representation. Both...

  1. Statistical mechanics of influence maximization with thermal noise

    NASA Astrophysics Data System (ADS)

    Lynn, Christopher W.; Lee, Daniel D.

    2017-03-01

    The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
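
    The physical picture translates into a small numerical sketch: compute mean-field Ising magnetizations under an external field and shift a fixed field budget toward nodes with the largest susceptibility, projecting back onto the budget. The damped fixed-point iteration, finite-difference susceptibility estimate, toy star graph, and all step sizes are assumptions for illustration; the paper's exact model and projected gradient ascent algorithm are richer than this.

```python
# Mean-field sketch of budgeted influence as external-field allocation.
import numpy as np

def magnetization(J, h, beta, iters=200):
    m = np.zeros(len(h))
    for _ in range(iters):          # damped mean-field fixed point
        m = 0.5 * m + 0.5 * np.tanh(beta * (J @ m + h))
    return m

def allocate_field(J, budget, beta, steps=50, lr=0.1, eps=1e-4):
    n = J.shape[0]
    h = np.full(n, budget / n)      # start from a uniform allocation
    for _ in range(steps):
        base = magnetization(J, h, beta).sum()
        grad = np.array([           # finite-difference susceptibility
            (magnetization(J, h + eps * np.eye(n)[i], beta).sum() - base) / eps
            for i in range(n)])
        h = np.maximum(0.0, h + lr * grad)
        h *= budget / h.sum()       # project back onto the field budget
    return h

J = np.zeros((5, 5))                # star graph: hub 0 with four leaves
J[0, 1:] = J[1:, 0] = 1.0
print(allocate_field(J, budget=1.0, beta=0.3).round(3))
```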

  2. Energy neutral and low power wireless communications

    NASA Astrophysics Data System (ADS)

    Orhan, Oner

    Wireless sensor nodes are typically designed to have low cost and small size. These design objectives impose restrictions on the capacity and efficiency of the transceiver components and energy storage units that can be used. As a result, energy becomes a bottleneck and continuous operation of the sensor network requires frequent battery replacements, increasing the maintenance cost. Energy harvesting and energy efficient transceiver architectures are able to overcome these challenges by collecting energy from the environment and utilizing the energy in an intelligent manner. However, due to the nature of the ambient energy sources, the amount of useful energy that can be harvested is limited and unreliable. Consequently, optimal management of the harvested energy and design of low power transceivers pose new challenges for wireless network design and operation. The first part of this dissertation is on energy neutral wireless networking, where optimal transmission schemes under different system setups and objectives are investigated. First, throughput maximization for energy harvesting two-hop networks with decode-and-forward half-duplex relays is studied. For a system with two parallel relays, various combinations of the following four transmission modes are considered: Broadcast from the source, multi-access from the relays, and successive relaying phases I and II. Next, the energy cost of the processing circuitry as well as the transmission energy are taken into account for communication over a broadband fading channel powered by an energy harvesting transmitter. Under this setup, throughput maximization, energy maximization, and transmission completion time minimization problems are studied. Finally, source and channel coding for an energy-limited wireless sensor node is investigated under various energy constraints including energy harvesting, processing and sampling costs. For each objective, optimal transmission policies are formulated as the solutions of a convex optimization problem, and the properties of these optimal policies are identified. In the second part of this thesis, low power transceiver design is considered for millimeter wave communication systems. In particular, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance.

  3. Use of the hyperinsulinemic euglycemic clamp to assess insulin sensitivity in guinea pigs: dose response, partitioned glucose metabolism, and species comparisons.

    PubMed

    Horton, Dane M; Saint, David A; Owens, Julie A; Gatford, Kathryn L; Kind, Karen L

    2017-07-01

    The guinea pig is an alternate small animal model for the study of metabolism, including insulin sensitivity. However, only one study to date has reported the use of the hyperinsulinemic euglycemic clamp in anesthetized animals in this species, and the dose response has not been reported. We therefore characterized the dose-response curve for whole body glucose uptake using recombinant human insulin in the adult guinea pig. Interspecies comparisons with published data showed species differences in maximal whole body responses (guinea pig ≈ human < rat < mouse) and the insulin concentrations at which half-maximal insulin responses occurred (guinea pig > human ≈ rat > mouse). In subsequent studies, we used concomitant D-[3-³H]glucose infusion to characterize insulin sensitivities of whole body glucose uptake, utilization, production, storage, and glycolysis in young adult guinea pigs at human insulin doses that produced approximately half-maximal (7.5 mU·min⁻¹·kg⁻¹) and near-maximal whole body responses (30 mU·min⁻¹·kg⁻¹). Although human insulin infusion increased rates of glucose utilization (up to 68%) and storage and, at high concentrations, increased rates of glycolysis in females, glucose production was only partially suppressed (~23%), even at high insulin doses. Fasting glucose, metabolic clearance of insulin, and rates of glucose utilization, storage, and production during insulin stimulation were higher in female than in male guinea pigs (P < 0.05), but insulin sensitivity of these and whole body glucose uptake did not differ between sexes. This study establishes a method for measuring partitioned glucose metabolism in chronically catheterized conscious guinea pigs, allowing studies of regulation of insulin sensitivity in this species. Copyright © 2017 the American Physiological Society.

  4. Deterministic methods for multi-control fuel loading optimization

    NASA Astrophysics Data System (ADS)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  5. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way of promoting sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The accuracy and efficiency of the method on simulated and real data are qualitatively and quantitatively evaluated to validate its efficiency and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
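
    The generalized p-shrinkage mapping mentioned above admits a one-line sketch. The closed form below follows the Chartrand-style p-shrinkage operator, which I am assuming matches the paper's usage; for p = 1 it reduces to ordinary soft thresholding, and smaller p shrinks small entries more aggressively.

```python
# Generalized p-shrinkage (assumed Chartrand-style form; illustrative).
import numpy as np

def p_shrink(x, lam, p):
    mag = np.maximum(0.0, np.abs(x) - lam ** (2.0 - p) * np.abs(x) ** (p - 1.0))
    return np.sign(x) * mag

x = np.array([-2.0, -0.5, 0.1, 0.8, 3.0])
print(p_shrink(x, lam=0.5, p=1.0))  # p = 1: ordinary soft thresholding
print(p_shrink(x, lam=0.5, p=0.5))  # p < 1: small entries are zeroed harder
```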

  6. Learning and exploration in action-perception loops.

    PubMed

    Little, Daniel Y; Sommer, Friedrich T

    2013-01-01

    Discovering the structure underlying observed data is a recurring problem in machine learning with important applications in neuroscience. It is also a primary function of the brain. When data can be actively collected in the context of a closed action-perception loop, behavior becomes a critical determinant of learning efficiency. Psychologists studying exploration and curiosity in humans and animals have long argued that learning itself is a primary motivator of behavior. However, the theoretical basis of learning-driven behavior is not well understood. Previous computational studies of behavior have largely focused on the control problem of maximizing acquisition of rewards and have treated learning the structure of data as a secondary objective. Here, we study exploration in the absence of external reward feedback. Instead, we take the quality of an agent's learned internal model to be the primary objective. In a simple probabilistic framework, we derive a Bayesian estimate for the amount of information about the environment an agent can expect to receive by taking an action, a measure we term the predicted information gain (PIG). We develop exploration strategies that approximately maximize PIG. One strategy based on value-iteration consistently learns faster than previously developed reward-free exploration strategies across a diverse range of environments. Psychologists believe the evolutionary advantage of learning-driven exploration lies in the generalized utility of an accurate internal model. Consistent with this hypothesis, we demonstrate that agents which learn more efficiently during exploration are later better able to accomplish a range of goal-directed tasks. We will conclude by discussing how our work elucidates the explorative behaviors of animals and humans, its relationship to other computational models of behavior, and its potential application to experimental design, such as in closed-loop neurophysiology studies.
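
    The PIG computation has a particularly clean form for a discrete world model with Dirichlet counts: the expected KL divergence between the updated and current outcome estimates, averaged under the agent's current predictive distribution. The sketch below uses that form with toy pseudo-counts; it is a simplified single-action illustration, not the authors' full action-perception-loop implementation.

```python
# Predicted information gain (PIG) for one action, Dirichlet-count model.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def pig(counts):
    """counts[s]: Dirichlet pseudo-counts over outcomes of one action."""
    counts = np.asarray(counts, dtype=float)
    theta = counts / counts.sum()   # current predictive distribution
    gain = 0.0
    for s in range(len(counts)):
        post = counts.copy()
        post[s] += 1.0              # hypothetical observation of outcome s
        gain += theta[s] * kl(post / post.sum(), theta)
    return gain

print(pig([1.0, 1.0, 1.0]))    # poorly known action: larger expected gain
print(pig([50.0, 1.0, 1.0]))   # well-learned action: little left to learn
```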

  7. A Hybrid Memetic Framework for Coverage Optimization in Wireless Sensor Networks.

    PubMed

    Chen, Chia-Pang; Mukhopadhyay, Subhas Chandra; Chuang, Cheng-Long; Lin, Tzu-Shiang; Liao, Min-Sheng; Wang, Yung-Chung; Jiang, Joe-Air

    2015-10-01

    One of the critical concerns in wireless sensor networks (WSNs) is the continuous maintenance of sensing coverage. Many particular applications, such as battlefield intrusion detection and object tracking, require a full-coverage at any time, which is typically resolved by adding redundant sensor nodes. With abundant energy, previous studies suggested that the network lifetime can be maximized while maintaining full coverage through organizing sensor nodes into a maximum number of disjoint sets and alternately turning them on. Since the power of sensor nodes is unevenly consumed over time, and early failure of sensor nodes leads to coverage loss, WSNs require dynamic coverage maintenance. Thus, the task of permanently sustaining full coverage is particularly formulated as a hybrid of disjoint set covers and dynamic-coverage-maintenance problems, and both have been proven to be nondeterministic polynomial-complete. In this paper, a hybrid memetic framework for coverage optimization (Hy-MFCO) is presented to cope with the hybrid problem using two major components: 1) a memetic algorithm (MA)-based scheduling strategy and 2) a heuristic recursive algorithm (HRA). First, the MA-based scheduling strategy adopts a dynamic chromosome structure to create disjoint sets, and then the HRA is utilized to compensate the loss of coverage by awaking some of the hibernated nodes in local regions when a disjoint set fails to maintain full coverage. The results obtained from real-world experiments using a WSN test-bed and computer simulations indicate that the proposed Hy-MFCO is able to maximize sensing coverage while achieving energy efficiency at the same time. Moreover, the results also show that the Hy-MFCO significantly outperforms the existing methods with respect to coverage preservation and energy efficiency.

  8. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
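
    The water-filling structure noted in the abstract is easy to illustrate: each channel receives p_i = max(0, μ − 1/g_i), with the water level μ set so the powers meet the total budget. The bisection search, gain values, and unit-noise channel model below are illustrative assumptions rather than the paper's exact formulation.

```python
# Water-filling power allocation sketch (illustrative gains and budget).
import numpy as np

def water_filling(gains, p_total, iters=60):
    lo, hi = 0.0, p_total + 1.0 / gains.min()   # bracket the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

gains = np.array([2.0, 1.0, 0.25, 0.05])
p = water_filling(gains, p_total=4.0)
print(p.round(3), p.sum())   # stronger channels receive more power
```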

  9. Maximum range of a projectile launched from a height h: a non-calculus treatment

    NASA Astrophysics Data System (ADS)

    Ganci, S.; Lagomarsino, D.

    2014-07-01

    The classical example of problem solving, maximizing the range of a projectile launched from height h with velocity v over the ground level, has received various solutions. In some of these, one can find the maximization of the range R by differentiating R as a function of an independent variable or through implicit differentiation in Cartesian or polar coordinates. In other papers, various elegant non-calculus solutions can be found. In this paper, the problem is revisited on the basis of elementary analytic geometry and trigonometry only.
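
    For reference, the closed form that all of these derivations reach is R_max = (v/g)·sqrt(v² + 2gh), attained at the launch angle with tan θ = v/sqrt(v² + 2gh) (so θ < 45° whenever h > 0). The short numerical check below confirms it against a brute-force scan; the values of v and h are arbitrary.

```python
# Numerical sanity check of the maximum-range closed form.
import numpy as np

g, v, h = 9.81, 20.0, 10.0

def launch_range(theta):
    vx, vy = v * np.cos(theta), v * np.sin(theta)
    t = (vy + np.sqrt(vy**2 + 2.0 * g * h)) / g   # flight time to the ground
    return vx * t

thetas = np.linspace(0.0, np.pi / 2, 100_000)
theta_num = thetas[np.argmax(launch_range(thetas))]
theta_cf = np.arctan(v / np.sqrt(v**2 + 2.0 * g * h))
R_cf = (v / g) * np.sqrt(v**2 + 2.0 * g * h)
print(theta_num, theta_cf)               # agree to grid resolution
print(launch_range(theta_num), R_cf)     # agree to grid resolution
```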

  10. Electromyographic and neuromuscular analysis in patients with post-polio syndrome.

    PubMed

    Corrêa, J C F; Rocco, C Chiusoli de Miranda; de Andrade, D Ventura; Peres, J Augusto; Corrêa, F Ishida

    2008-01-01

    The aim was to perform a comparative analysis of the electromyographic (EMG) activity of the muscles rectus femoris, vastus medialis, and vastus lateralis, and to assess muscle strength and fatigue after maximal isometric contraction during knee extension. Eighteen age- and weight-matched patients with post-polio syndrome participated in this study. The signal acquisition system consisted of three pairs of surface electrodes positioned on the motor point of the analyzed muscles. The results showed decreased endurance on initial muscle contraction and during contraction 15 minutes after the initial maximal voluntary contraction, along with muscle fatigue assessed through linear regression using Pearson's test. There were significant differences in the comparative analysis of EMG activity of the muscles rectus femoris, vastus medialis, and vastus lateralis after maximal isometric contraction during knee extension. Initial muscle contraction and contraction after a 15-minute rest from the initial contraction decreased considerably, indicating decreased endurance of muscle contraction and the presence of lower-limb muscle fatigue in the analyzed PPS patients.

  11. Morphology, mechanical, cross-linking, thermal, and tribological properties of nitrile and hydrogenated nitrile rubber/multi-walled carbon nanotubes composites prepared by melt compounding: The effect of acrylonitrile content and hydrogenation

    NASA Astrophysics Data System (ADS)

    Likozar, Blaž; Major, Zoltan

    2010-11-01

    The purpose of this work was to prepare nanocomposites by mixing multi-walled carbon nanotubes (MWCNT) with nitrile and hydrogenated nitrile elastomers (NBR and HNBR). Utilization of transmission electron microscopy (TEM), scanning electron microscopy (SEM), and small- and wide-angle X-ray scattering techniques (SAXS and WAXS) for advanced morphology observation of conducting filler-reinforced nitrile and hydrogenated nitrile rubber composites is reported. Principal results were increases in hardness (maximally 97 Shore, type A), elastic modulus (maximally 981 MPa), tensile strength (maximally 27.7 MPa), elongation at break (maximally 216%), cross-link density (maximally 7.94 × 10²⁸ m⁻³), density (maximally 1.16 g cm⁻³), and tear strength (11.2 kN m⁻¹), which were clearly visible at particular acrylonitrile contents both for unhydrogenated and hydrogenated polymers due to enhanced distribution of carbon nanotubes (CNT) and their aggregated particles in the applied rubber matrix. The conclusion was that multi-walled carbon nanotubes improved the performance of nitrile and hydrogenated nitrile rubber nanocomposites prepared by melt compounding.

  12. The behavioral economics of consumer brand choice: patterns of reinforcement and utility maximization.

    PubMed

    Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C

    2004-06-30

    Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.

  13. Resource-aware taxon selection for maximizing phylogenetic diversity.

    PubMed

    Pardi, Fabio; Goldman, Nick

    2007-06-01

    Phylogenetic diversity (PD) is a useful metric for selecting taxa in a range of biological applications, for example, bioconservation and genomics, where the selection is usually constrained by the limited availability of resources. We formalize taxon selection as a conceptually simple optimization problem, aiming to maximize PD subject to resource constraints. This allows us to take into account the different amounts of resources required by the different taxa. Although this is a computationally difficult problem, we present a dynamic programming algorithm that solves it in pseudo-polynomial time. Our algorithm can also solve many instances of the Noah's Ark Problem, a more realistic formulation of taxon selection for biodiversity conservation that allows for taxon-specific extinction risks. These instances extend the set of problems for which solutions are available beyond previously known greedy-tractable cases. Finally, we discuss the relevance of our results to real-life scenarios.
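
    The pseudo-polynomial dynamic program has the flavor of a knapsack recursion, shown below in a deliberately simplified form where each taxon contributes an independent diversity value. That independence is a toy assumption: the paper's algorithm works over the phylogenetic tree so that shared branch lengths are counted once, which is precisely what makes the real problem interesting.

```python
# Budgeted taxon selection as a 0/1 knapsack DP (simplified illustration).
def knapsack(values, costs, budget):
    best = [0] * (budget + 1)   # best[c] = max value with total cost <= c
    for v, w in zip(values, costs):
        for c in range(budget, w - 1, -1):   # reverse scan: 0/1 semantics
            best[c] = max(best[c], best[c - w] + v)
    return best[budget]

values = [7, 4, 5, 9, 3]   # hypothetical diversity contributions
costs = [3, 2, 2, 5, 1]    # hypothetical conservation costs
print(knapsack(values, costs, budget=7))   # -> 16 (taxa 0, 1, and 2)
```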

  14. Trading off species protection and timber production in forests managed for multiple objectives

    Treesearch

    Vladimir Marianov; Stephanie Snyder; Charles ReVelle

    2004-01-01

    We address a multiobjective forest-management problem that maximizes harvested timber volume and maximizes the protection of species through the selection of protected habitat reserves. As opposed to reserving parcels of the forest for general habitat purposes, as most published works do, the model we present, and its several variants, concentrate on the preservation...

  15. Tug-Of-War Model for Two-Bandit Problem

    NASA Astrophysics Data System (ADS)

    Kim, Song-Ju; Aono, Masashi; Hara, Masahiko

    The amoeba of the true slime mold Physarum polycephalum shows high computational capabilities. In so-called amoeba-based computing, some computing tasks, including combinatorial optimization, are performed by the amoeba instead of a digital computer. We expect that there must be problems living organisms are good at solving. The “multi-armed bandit problem” would be one such problem. Consider a number of slot machines. Each of the machines has an arm which gives a player a reward with a certain probability when pulled. The problem is to determine the optimal strategy for maximizing the total reward sum after a certain number of trials. To maximize the total reward sum, it is necessary to judge correctly and quickly which machine has the highest reward probability. Therefore, the player should explore many machines to gather knowledge on which machine is the best, but should not fail to exploit the reward from the known best machine. We consider that living organisms follow some efficient method to solve the problem.
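
    As a baseline for the explore/exploit trade-off the passage describes, a standard epsilon-greedy player is sketched below; it is not the amoeba-inspired tug-of-war model itself, and the reward probabilities are arbitrary.

```python
# Epsilon-greedy baseline for a two-armed Bernoulli bandit.
import random

def epsilon_greedy(probs, trials=10_000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(probs)
    means = [0.0] * len(probs)
    total = 0
    for _ in range(trials):
        if rng.random() < eps:      # explore: try a random machine
            arm = rng.randrange(len(probs))
        else:                       # exploit: pull the best-looking machine
            arm = max(range(len(probs)), key=lambda a: means[a])
        reward = 1 if rng.random() < probs[arm] else 0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        total += reward
    return total, counts

total, counts = epsilon_greedy([0.4, 0.6])
print(total, counts)   # most pulls should go to the 0.6 machine
```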

  16. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  17. Optimal joint detection and estimation that maximizes ROC-type curves

    PubMed Central

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.

    2017-01-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544

  18. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  19. Addressing practical challenges in utility optimization of mobile wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Eswaran, Sharanya; Misra, Archan; La Porta, Thomas; Leung, Kin

    2008-04-01

    This paper examines the practical challenges in the application of the distributed network utility maximization (NUM) framework to the problem of resource allocation and sensor device adaptation in a mission-centric wireless sensor network (WSN) environment. By providing rich (multi-modal), real-time information about a variety of (often inaccessible or hostile) operating environments, sensors such as video, acoustic and short-aperture radar enhance the situational awareness of many battlefield missions. Prior work on the applicability of the NUM framework to mission-centric WSNs has focused on tackling the challenges introduced by i) the definition of an individual mission's utility as a collective function of multiple sensor flows and ii) the dissemination of an individual sensor's data via a multicast tree to multiple consuming missions. However, the practical application and performance of this framework is influenced by several parameters internal to the framework and also by implementation-specific decisions. This is made further complex due to mobile nodes. In this paper, we use discrete-event simulations to study the effects of these parameters on the performance of the protocol in terms of speed of convergence, packet loss, and signaling overhead thereby addressing the challenges posed by wireless interference and node mobility in ad-hoc battlefield scenarios. This study provides better understanding of the issues involved in the practical adaptation of the NUM framework. It also helps identify potential avenues of improvement within the framework and protocol.

  20. Benchmarking the D-Wave Two

    NASA Astrophysics Data System (ADS)

    Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel

    2014-03-01

    We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.

  1. Columbus stowage optimization by cast (cargo accommodation support tool)

    NASA Astrophysics Data System (ADS)

    Fasano, G.; Saia, D.; Piras, A.

    2010-08-01

    A challenging issue related to International Space Station utilization concerns on-board stowage, which has a strong impact on habitability, safety, and crew productivity. This holds in particular for the European Columbus laboratory, nowadays also utilized to provide the station with logistic support. The volume exploitation has to be maximized, in compliance with the given accommodation rules. At each upload step, the stowage problem must be solved quickly and efficiently. This leads to the comparison of different scenarios to select the most suitable one. Last-minute upgrades due to possible re-planning may moreover arise, imposing the further requirement of rapidly readapting the current solution to the updated status. In this context, finding satisfactory solutions represents a very demanding job, even for experienced designers. Thales Alenia Space Italia has achieved a remarkable expertise in the field of cargo accommodation and stowage. The company has recently developed CAST, a dedicated in-house software tool, to support the cargo accommodation of the European automated transfer vehicle. An ad hoc version, tailored to the Columbus stowage, has been further implemented and is going to be used from now on. This paper surveys the on-board stowage issue, pointing out the advantages of the proposed approach.

  2. Theory-Driven Hints in the Cheap Necklace Problem: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Chu, Yun; Dewald, Andrew D.; Chronicle, Edward P.

    2007-01-01

    Three experiments investigated the effects of two hints derived from the Criterion for Satisfactory Progress theory (CSP) and Representational Change Theory (RCT) on the cheap necklace problem (insight problem). In Experiment 1, fewer participants given the CSP hint used an incorrect (maximizing) first move than participants given the RCT hint or…

  3. A New Algorithm to Create Balanced Teams Promoting More Diversity

    ERIC Educational Resources Information Center

    Dias, Teresa Galvão; Borges, José

    2017-01-01

    The problem of assigning students to teams can be described as maximising their profiles diversity within teams while minimising the differences among teams. This problem is commonly known as the maximally diverse grouping problem and it is usually formulated as maximising the sum of the pairwise distances among students within teams. We propose…

  4. Optimal Base Station Density of Dense Network: From the Viewpoint of Interference and Load.

    PubMed

    Feng, Jianyuan; Feng, Zhiyong

    2017-09-11

    Network densification is attracting increasing attention recently due to its ability to improve network capacity by spatial reuse and relieve congestion by offloading. However, excessive densification and aggressive offloading can also cause the degradation of network performance due to problems of interference and load. In this paper, with consideration of load issues, we study the optimal base station density that maximizes the throughput of the network. The expected link rate and the utilization ratio of the contention-based channel are derived as the functions of base station density using the Poisson Point Process (PPP) and Markov Chain. They reveal the rules of deployment. Based on these results, we obtain the throughput of the network and indicate the optimal deployment density under different network conditions. Extensive simulations are conducted to validate our analysis and show the substantial performance gain obtained by the proposed deployment scheme. These results can provide guidance for the network densification.

  5. Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers

    PubMed Central

    Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.

    2016-01-01

    The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. Toward dealing with this problem in real time, the ALOPEX stochastic optimization method is used to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results that simulate a known real aquifer case are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its ability to manage freshwater pumping in real aquifer environments. PMID:27689362
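
    For readers unfamiliar with ALOPEX, a toy version of its correlation-based update is sketched below: each variable repeats or reverses its last step with a probability driven by whether that step improved the objective. The update form, temperature, and the quadratic objective (a stand-in for "pumped volume minus intrusion penalty") are assumptions drawn from the general ALOPEX literature, not the paper's configuration.

```python
# ALOPEX-style correlation update on a toy pumping objective (illustrative).
import math
import random

def alopex_maximize(f, x, steps=5000, delta=0.01, temp=1e-4, seed=1):
    rng = random.Random(seed)
    prev_f = f(x)
    dx = [rng.choice((-delta, delta)) for _ in x]
    for _ in range(steps):
        x = [xi + dxi for xi, dxi in zip(x, dx)]
        cur_f = f(x)
        df = cur_f - prev_f
        prev_f = cur_f
        new_dx = []
        for dxi in dx:
            c = dxi * df                              # move/gain correlation
            p_up = 1.0 / (1.0 + math.exp(-c / temp))  # prob of a +delta step
            new_dx.append(delta if rng.random() < p_up else -delta)
        dx = new_dx
    return x, prev_f

# Toy objective: total pumping minus a quadratic "intrusion" penalty;
# its maximizer is near (1.5, 2.5).
f = lambda q: q[0] + q[1] - (q[0] - 1.0) ** 2 - (q[1] - 2.0) ** 2
x_opt, val = alopex_maximize(f, [0.0, 0.0])
print([round(v, 2) for v in x_opt], round(val, 3))
```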

  6. Applying Probabilistic Decision Models to Clinical Trial Design

    PubMed Central

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075

  7. Understanding efficiency limits of dielectric elastomer driver circuitry

    NASA Astrophysics Data System (ADS)

    Lo, Ho Cheong; Calius, Emilio; Anderson, Iain

    2013-04-01

    Dielectric elastomers (DEs) can theoretically operate at efficiencies greater than that of electromagnetics. This is due to their unique mode of operation, which involves charging and discharging a capacitive load at a few kilovolts (typically 1 kV-4 kV). Efficient recovery of the electrical energy stored in the capacitance of the DE is essential to achieving favourable efficiencies as actuators or generators. This is not a trivial problem because the DE acts as a voltage source with a low capacity and a large output resistance. These properties are not ideal for a power source and will reduce the performance of any power conditioning circuit utilizing inductors or transformers. This paper briefly explores how circuit parameters affect the performance of a simple inductor circuit used to transfer energy from a DE to another capacitor. These parameters must be taken into account when designing the driving circuitry to maximize performance.

  8. New approach in the evaluation of a fitness program at a worksite.

    PubMed

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach to it. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
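
    The "neoclassical firm" framing can be illustrated in a few lines: choose the number of classes n to maximize profit = WTP × health output(n) − cost(n) with a cubic production function. The coefficients below are invented for illustration (the paper estimates its own function from program data), so the resulting optimum is not the paper's 23 classes.

```python
# Profit-maximizing class count under a toy cubic health production function.
import numpy as np

wtp = 800.0                        # willingness-to-pay ($) for effectiveness
health = lambda n: 0.09 * n + 0.004 * n**2 - 0.0001 * n**3  # toy cubic output
cost = lambda n: 25.0 * n          # hypothetical cost per class

n = np.arange(1, 60)
profit = wtp * health(n) - cost(n)
print(int(n[np.argmax(profit)]))   # optimal class count for these toy numbers
```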

  9. Headache in the world: public health and research priorities.

    PubMed

    Steiner, Timothy J

    2013-02-01

    Headache disorders are ubiquitous, prevalent and disabling, yet under-recognized, underdiagnosed and undertreated everywhere. A recent WHO survey of headache disorders "illuminates the worldwide neglect of a major public health problem, and reveals the inadequacies of responses to it in countries throughout the world." In this depressing context, the most profitable future for headache research - in the sense of maximizing benefit to people with headache - lies in health services research. This, backed by health economic studies, is likely to show that reallocation of resources towards better healthcare delivery, more effectively using treatments already available, has greater potential to benefit than the search for new drugs. In a world in which the lives of most people with headache are untouched by treatment developments of the last 20 years, there is far greater utility gain from finding ways to reach them than from striving to do a little better in the relatively well-served small minority.

  10. Sparse Bayesian learning for DOA estimation with mutual coupling.

    PubMed

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-10-16

    Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.

  11. Fair Package Assignment

    NASA Astrophysics Data System (ADS)

    Lahaie, Sébastien; Parkes, David C.

    We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.

  12. Base Station Activation and Linear Transceiver Design for Optimal Resource Management in Heterogeneous Networks

    NASA Astrophysics Data System (ADS)

    Liao, Wei-Cheng; Hong, Mingyi; Liu, Ya-Feng; Luo, Zhi-Quan

    2014-08-01

    In a densely deployed heterogeneous network (HetNet), the number of pico/micro base stations (BS) can be comparable with the number of the users. To reduce the operational overhead of the HetNet, proper identification of the set of serving BSs becomes an important design issue. In this work, we show that by jointly optimizing the transceivers and determining the active set of BSs, high system resource utilization can be achieved with only a small number of BSs. In particular, we provide formulations and efficient algorithms for such joint optimization problem, under the following two common design criteria: i) minimization of the total power consumption at the BSs, and ii) maximization of the system spectrum efficiency. In both cases, we introduce a nonsmooth regularizer to facilitate the activation of the most appropriate BSs. We illustrate the efficiency and the efficacy of the proposed algorithms via extensive numerical simulations.

  13. Digital asset management.

    PubMed

    Humphrey, Clinton D; Tollefson, Travis T; Kriet, J David

    2010-05-01

    Facial plastic surgeons are accumulating massive digital image databases with the evolution of photodocumentation and widespread adoption of digital photography. Managing and maximizing the utility of these vast data repositories, or digital asset management (DAM), is a persistent challenge. Developing a DAM workflow that incorporates a file naming algorithm and metadata assignment will increase the utility of a surgeon's digital images. Copyright 2010 Elsevier Inc. All rights reserved.
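
    The file-naming idea is simple to make concrete: encode sortable, searchable metadata directly into each image name. The schema below (patient ID, date, view, sequence number) is an invented example, not the article's prescribed algorithm.

```python
# A toy DAM file-naming scheme embedding metadata in the name (illustrative).
from datetime import date

def image_name(patient_id: str, view: str, taken: date, seq: int) -> str:
    return f"{patient_id}_{taken:%Y%m%d}_{view}_{seq:03d}.jpg"

print(image_name("P0042", "lateral-right", date(2010, 5, 1), 3))
# -> P0042_20100501_lateral-right_003.jpg
```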

  14. CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.

    PubMed

    Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan

    2017-06-24

    The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a lot of work on MSA problems, the existing approaches are either insufficient or contain implicit assumptions that limit the generality of usage. First, the information about users' sequences, including the sizes of datasets and the lengths of sequences, can be of arbitrary values and is generally unknown before submission, which is unfortunately ignored by previous work. Second, the center star strategy is suited for aligning similar sequences. But its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given the heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves an up to 11× speedup and outperforms the state-of-the-art software. CMSA focuses on the multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPUs is a promising approach to accelerate multiple sequence alignment. Besides, adopting the co-run computation model can maximize the entire system utilization significantly. The source code is available at https://github.com/wangvsa/CMSA.
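
    The center-star bottleneck the authors attack is easy to state in baseline form: pick as center the sequence minimizing the sum of pairwise distances to all others, which costs O(m²n²) with plain edit distance. The sketch below shows only that baseline semantics on toy strings; the paper's bitmap-based O(mn) selection and CPU/GPU co-run execution are not reproduced.

```python
# Baseline center-sequence selection for the center star strategy.
def edit_distance(a, b):
    dp = list(range(len(b) + 1))          # DP row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # prev holds the diagonal (i-1, j-1) value before overwriting.
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

seqs = ["GATTACA", "GACTATA", "GATTATA", "CATTACA"]
totals = [sum(edit_distance(s, t) for t in seqs) for s in seqs]
center = seqs[min(range(len(seqs)), key=totals.__getitem__)]
print(center, totals)   # the center minimizes total pairwise distance
```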

  15. Problem Solvers' Conceptions about Osmosis.

    ERIC Educational Resources Information Center

    Zuckerman, June T.

    1994-01-01

    Discusses the scheme and findings of a study designed to identify the conceptual knowledge used by high school students to solve a significant problem related to osmosis. Useful tips are provided to teachers to aid students in developing constructs that maximize understanding. (ZWH)

  16. Networking Micro-Processors for Effective Computer Utilization in Nursing

    PubMed Central

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.

  17. Factors shaping effective utilization of health information technology in urban safety-net clinics.

    PubMed

    George, Sheba; Garth, Belinda; Fish, Allison; Baker, Richard

    2013-09-01

    Urban safety-net clinics are considered prime targets for the adoption of health information technology innovations; however, little is known about their utilization in such safety-net settings. Current scholarship provides limited guidance on the implementation of health information technology into safety-net settings as it typically assumes that adopting institutions have sufficient basic resources. This study addresses this gap by exploring the unique challenges urban resource-poor safety-net clinics must consider when adopting and utilizing health information technology. In-depth interviews (N = 15) were used with key stakeholders (clinic chief executive officers, medical directors, nursing directors, chief financial officers, and information technology directors) from staff at four clinics to explore (a) nonhealth information technology-related clinic needs, (b) how health information technology may provide solutions, and (c) perceptions of and experiences with health information technology. Participants identified several challenges, some of which appear amenable to health information technology solutions. Also identified were requirements for effective utilization of health information technology including physical infrastructural improvements, funding for equipment/training, creation of user groups to share health information technology knowledge/experiences, and specially tailored electronic billing guidelines. We found that despite the potential benefit that can be derived from health information technologies, the unplanned and uninformed introduction of these tools into these settings might actually create more problems than are solved. From these data, we were able to identify a set of factors that should be considered when integrating health information technology into the existing workflows of low-resourced urban safety-net clinics in order to maximize their utilization and enhance the quality of health care in such settings.

  18. The Factory of the Future

    NASA Technical Reports Server (NTRS)

    Byman, J. E.

    1985-01-01

    A brief history of aircraft production techniques is given. A flexible machining cell is then described: a computer-controlled system capable of performing 4-axis machining, part cleaning, dimensional inspection, and materials handling functions in an unmanned environment. The cell was designed to: allow processing of similar and dissimilar parts in random order without disrupting production; allow serial (one-shipset-at-a-time) manufacturing; reduce work-in-process inventory; maximize machine utilization through remote set-up; and maximize throughput while minimizing labor.

  19. A test of ecological optimality for semiarid vegetation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Salvucci, Guido D.; Eagleson, Peter S.; Turner, Edmund K.

    1992-01-01

    Three ecological optimality hypotheses, which have utility in parameter reduction and estimation in a climate-soil-vegetation water balance model, are reviewed and tested. The first hypothesis involves short-term optimization of vegetative canopy density through equilibrium soil moisture maximization. The second involves vegetation type selection, again through soil moisture maximization, and the third involves soil genesis through plant-induced modification of soil hydraulic properties to values which result in a maximum rate of biomass productivity.

  20. Further reduction of minimal first-met bad markings for the computationally efficient synthesis of a maximally permissive controller

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Chao, Daniel Yuh

    2015-08-01

    To date, research on the supervisor design for flexible manufacturing systems focuses on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computation burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs, so as to solve the integer linear programming problems efficiently while maintaining maximal permissiveness via a vector-covering approach. This paper improves on previous work and achieves the simplest structure with the minimal number of monitors.

  1. Maximal clique enumeration with data-parallel primitives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lessley, Brenton; Perciano, Talita; Mathai, Manish

    The enumeration of all maximal cliques in an undirected graph is a fundamental problem arising in several research areas. We consider maximal clique enumeration on shared-memory, multi-core architectures and introduce an approach consisting entirely of data-parallel operations, in an effort to achieve efficient and portable performance across different architectures. We study the performance of the algorithm via experiments varying over benchmark graphs and architectures. Overall, we observe that our algorithm achieves up to a 33× speedup over state-of-the-art distributed algorithms and up to a 9× speedup over serial algorithms, for graphs with higher ratios of maximal cliques to total cliques. Further, we attain additional speedups on a GPU architecture, demonstrating the portable performance of our data-parallel design.
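
    For reference, the problem being parallelized here is classically solved by the serial Bron-Kerbosch recursion; the sketch below (standard textbook material, not the authors' data-parallel formulation) enumerates all maximal cliques of a small graph:

        def maximal_cliques(adj):
            """Enumerate all maximal cliques of an undirected graph.

            adj: dict mapping each vertex to the set of its neighbours.
            Plain Bron-Kerbosch recursion without pivoting.
            """
            cliques = []

            def expand(r, p, x):
                if not p and not x:
                    cliques.append(r)
                    return
                for v in list(p):
                    expand(r | {v}, p & adj[v], x & adj[v])
                    p.remove(v)
                    x.add(v)

            expand(set(), set(adj), set())
            return cliques

        print(maximal_cliques({0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}))
        # -> [{0, 1, 2}, {3}]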

  2. The Secretary Problem from the Applicant's Point of View

    ERIC Educational Resources Information Center

    Glass, Darren

    2012-01-01

    A 1960 "Mathematical Games" column describes the problem, now known as the Secretary Problem, which asks how someone interviewing candidates for a position should maximize the chance of hiring the best applicant. This note looks at how an applicant should respond, if they know the interviewer uses this optimal strategy. We show that all but the…

  3. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
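
    The one-dimensional analogue of the comparison reported above is easy to reproduce: for a smooth expected-utility integrand, Gauss quadrature (to which a sparse grid reduces in one dimension) needs far fewer scenarios than plain Monte Carlo. A toy sketch, with the illustrative utility E[log(1 + e^Z)], Z ~ N(0, 1), standing in for the paper's test problems:

        import numpy as np

        # 9 Gauss-Hermite (probabilists') nodes vs. 9 Monte Carlo scenarios
        nodes, weights = np.polynomial.hermite_e.hermegauss(9)
        u = lambda z: np.log1p(np.exp(z))

        quad = np.sum(weights * u(nodes)) / np.sqrt(2 * np.pi)
        mc = u(np.random.default_rng(0).standard_normal(9)).mean()
        print(quad, mc)   # the quadrature estimate is already accurate at 9 nodes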

  4. Feasibility of Stochastic Voltage/VAr Optimization Considering Renewable Energy Resources for Smart Grid

    NASA Astrophysics Data System (ADS)

    Momoh, James A.; Salkuti, Surender Reddy

    2016-06-01

    This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem under load demand and Renewable Energy Resources (RERs) variation. RERs introduce stochastic behavior into the problem inputs. Voltage/VAr control is a prime means of handling power system complexity and reliability, and hence a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet peak demand and to reduce system losses; voltages beyond their limits may damage costly substation devices as well as equipment at the consumer end. In particular, RERs introduce additional disturbances, and some RERs are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. The Latin Hypercube Sampling (LHS) method is used to cover the full range of variables while maximally preserving the marginal distributions. A backward scenario reduction technique is then used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RERs variation.
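
    Latin Hypercube Sampling itself is standard and easy to demonstrate; a minimal sketch with two uncertain inputs (load demand and renewable output, with hypothetical ranges; the paper's backward scenario reduction step is not reproduced):

        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=0)
        u = sampler.random(n=20)                    # stratified points in [0, 1]^2
        scaled = qmc.scale(u, [80.0, 0.0], [120.0, 50.0])
        demand, wind = scaled[:, 0], scaled[:, 1]   # MW, hypothetical ranges
        print(demand[:5])
        print(wind[:5])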

  5. Modeling road-cycling performance.

    PubMed

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
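
    The abstract does not reproduce the model equations; as background, the core of any such first-principles model is a power balance of the kind sketched below (generic parameter values; the paper's corrections for altitude, humidity, rotational kinetic energy and drafting are omitted):

        def cycling_power(v, m=75.0, cda=0.35, crr=0.004, rho=1.2,
                          grade=0.0, v_wind=0.0):
            """Approximate power (W) needed to ride at ground speed v (m/s):
            aerodynamic drag + rolling resistance + gravity on a grade."""
            g = 9.81
            aero = 0.5 * rho * cda * (v + v_wind) ** 2 * v
            roll = crr * m * g * v
            climb = m * g * grade * v
            return aero + roll + climb

        print(round(cycling_power(10.0), 1))   # ~239 W at 36 km/h on the flat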

  6. 77 FR 25145 - Commerce Spectrum Management Advisory Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-27

    ... innovation as possible, and make wireless services available to all Americans. (See charter, at http://www... federal capabilities and maximizing commercial utilization. NTIA will post a detailed agenda on its Web...

  7. Retrieval of cloud cover parameters from multispectral satellite images

    NASA Technical Reports Server (NTRS)

    Arking, A.; Childs, J. D.

    1985-01-01

    A technique is described for extracting cloud cover parameters from multispectral satellite radiometric measurements. Utilizing three channels from the AVHRR (Advanced Very High Resolution Radiometer) on NOAA polar orbiting satellites, it is shown that one can retrieve four parameters for each pixel: cloud fraction within the FOV, optical thickness, cloud-top temperature and a microphysical model parameter. The last parameter is an index representing the properties of the cloud particle and is determined primarily by the radiance at 3.7 microns. The other three parameters are extracted from the visible and 11 micron infrared radiances, utilizing the information contained in the two-dimensional scatter plot of the measured radiances. The solution is essentially one in which the distributions of optical thickness and cloud-top temperature are maximally clustered for each region, with cloud fraction for each pixel adjusted to achieve maximal clustering.

  8. Optimal population size and endogenous growth.

    PubMed

    Palivos, T; Yip, C K

    1993-01-01

    "Many applications in economics require the selection of an objective function which enables the comparison of allocations involving different population sizes. The two most commonly used criteria are the Benthamite and the Millian welfare functions, also known as classical and average utilitarianism, respectively. The former maximizes total utility of the society and thus represents individuals, while the latter maximizes average utility and so represents generations. Edgeworth (1925) was the first to conjecture, that the Benthamite principle leads to a larger population size and a lower standard of living.... The purpose of this paper is to examine Edgeworth's conjecture in an endogenous growth framework in which there are interactions between output and population growth rates. It is shown that, under conditions that ensure an optimum, the Benthamite criterion leads to smaller population and higher output growth rates than the Millian." excerpt

  9. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.

  10. Maintaining homeostasis by decision-making.

    PubMed

    Korn, Christoph W; Bach, Dominik R

    2015-05-01

    Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that--in both the foraging and the casino frames--participants' choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization.
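
    The endpoint logic described above is easy to reproduce in stylized form: treat energy as a random walk with a deterministic loss and a probabilistic gain, and estimate the starvation probability by simulation (environment parameters below are illustrative, not the task's actual values):

        import random

        def starvation_prob(p_gain, gain=2, loss=1, start=5,
                            steps=50, trials=20_000):
            """P(energy hits 0 before the horizon) in a toy foraging walk."""
            hits = 0
            for _ in range(trials):
                e = start
                for _ in range(steps):
                    e += (gain if random.random() < p_gain else 0) - loss
                    if e <= 0:
                        hits += 1
                        break
            return hits / trials

        # a risky, high-yield environment vs. a safer, low-yield one
        print(starvation_prob(0.45, gain=3), starvation_prob(0.55, gain=2))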

  11. Maintaining Homeostasis by Decision-Making

    PubMed Central

    Korn, Christoph W.; Bach, Dominik R.

    2015-01-01

    Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that—in both the foraging and the casino frames—participants’ choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization. PMID:26024504

  12. Carnot cycle at finite power: attainability of maximal efficiency.

    PubMed

    Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G

    2013-08-02

    We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins makes it possible to construct engines that, at finite power, extract 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at large power.

  13. Experimental entanglement distillation and 'hidden' non-locality.

    PubMed

    Kwiat, P G; Barraza-Lopez, S; Stefanov, A; Gisin, N

    2001-02-22

    Entangled states are central to quantum information processing, including quantum teleportation, efficient quantum computation and quantum cryptography. In general, these applications work best with pure, maximally entangled quantum states. However, owing to dissipation and decoherence, practically available states are likely to be non-maximally entangled, partially mixed (that is, not pure), or both. To counter this problem, various schemes of entanglement distillation, state purification and concentration have been proposed. Here we demonstrate experimentally the distillation of maximally entangled states from non-maximally entangled inputs. Using partial polarizers, we perform a filtering process to maximize the entanglement of pure polarization-entangled photon pairs generated by spontaneous parametric down-conversion. We have also applied our methods to initial states that are partially mixed. After filtering, the distilled states demonstrate certain non-local correlations, as evidenced by their violation of a form of Bell's inequality. Because the initial states do not have this property, they can be said to possess 'hidden' non-locality.

  14. Distributed-Memory Fast Maximal Independent Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing the necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem, all designed for shared-memory machines and analyzed under the PRAM model; they do not have direct, efficient distributed-memory implementations. In this paper, we extend two of Luby's seminal MIS algorithms, "Luby(A)" and "Luby(B)," to distributed-memory execution, and we evaluate their performance. We compare our results with the "Filtered MIS" implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
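
    As background, the random-priority idea behind Luby(A) is compact enough to sketch serially (this illustrates the algorithm's rounds, not the paper's distributed-memory implementation):

        import random

        def luby_mis(adj, seed=0):
            """Maximal independent set via Luby-style random priorities.

            adj: dict mapping each vertex to the set of its neighbours.
            Each round, every live vertex draws a random priority; vertices
            that beat all live neighbours join the MIS, and they and their
            neighbours are removed.
            """
            rng = random.Random(seed)
            live, mis = set(adj), set()
            while live:
                prio = {v: rng.random() for v in live}
                winners = {v for v in live
                           if all(prio[v] < prio[u] for u in adj[v] if u in live)}
                mis |= winners
                live -= winners | {u for v in winners for u in adj[v]}
            return mis

        print(luby_mis({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))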

  15. Utility of coupling nonlinear optimization methods with numerical modeling software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.

    1996-08-05

    Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL & LOCAL. GLO is designed for controlling, and easily coupling to, any scientific software application. GLO runs the optimization module and the scientific application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
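
    The optimize/run/extract loop described above is straightforward to emulate with off-the-shelf tools; a generic sketch (file names, input format, and the simulation command are all hypothetical, and GLO's own modules are not being reproduced):

        import subprocess
        import numpy as np
        from scipy.optimize import minimize

        TARGET = 42.0   # desired experimental result (hypothetical)

        def objective(params):
            # "GLO-PUT" step: write trial parameters into the input file
            with open("sim_input.txt", "w") as f:
                f.write(f"yield_stress {params[0]}\nhardening {params[1]}\n")
            # run the scientific application (hypothetical executable)
            subprocess.run(["./run_simulation"], check=True)
            # "GLO-GET" step: extract the result and score the misfit
            result = float(open("sim_output.txt").read())
            return (result - TARGET) ** 2

        best = minimize(objective, x0=np.array([1.0, 0.1]), method="Nelder-Mead")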

  16. A Sampling-Based Bayesian Approach for Cooperative Multiagent Online Search With Resource Constraints.

    PubMed

    Xiao, Hu; Cui, Rongxin; Xu, Demin

    2018-06-01

    This paper presents a cooperative multiagent search algorithm to solve the problem of searching for a target on a 2-D plane under multiple constraints. A Bayesian framework is used to update the local probability density functions (PDFs) of the target when the agents obtain observation information. To obtain the global PDF used for decision making, a sampling-based logarithmic opinion pool algorithm is proposed to fuse the local PDFs, and a particle sampling approach is used to represent the continuous PDF. Then the Gaussian mixture model (GMM) is applied to reconstitute the global PDF from the particles, and a weighted expectation maximization algorithm is presented to estimate the parameters of the GMM. Furthermore, we propose an optimization objective which aims to guide agents to find the target with less resource consumption while keeping the resource consumption of each agent balanced. To this end, a utility-function-based optimization problem is put forward and solved by a gradient-based approach. Several contrastive simulations demonstrate that, compared with other existing approaches, the proposed one uses fewer overall resources and shows better performance in balancing resource consumption.
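
    The fusion step named above has a compact discrete form: a logarithmic opinion pool combines local PDFs as p(x) proportional to the weighted product of the p_i(x). A minimal sketch over probability vectors (the paper applies this to particle representations; the example numbers are arbitrary):

        import numpy as np

        def log_opinion_pool(local_pdfs, weights):
            """Fuse rows of local_pdfs (one discrete PDF per agent) with the
            given weights; normalization happens in log space for stability."""
            logp = np.sum(weights[:, None] * np.log(local_pdfs + 1e-300), axis=0)
            p = np.exp(logp - logp.max())
            return p / p.sum()

        local = np.array([[0.6, 0.3, 0.1],
                          [0.5, 0.4, 0.1]])
        print(log_opinion_pool(local, np.array([0.5, 0.5])))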

  17. Robust 2DPCA with non-greedy l1 -norm maximization for image analysis.

    PubMed

    Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli

    2015-05-01

    2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied because directly solving the l1-norm maximization problem is difficult; such a strategy, however, is prone to getting stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
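
    For orientation, the greedy building block being improved upon is a simple fixed-point iteration that maximizes the sum of |x_i . w| over a unit vector w (a Kwak-style PCA-L1 step for a single direction; the paper's contribution, optimizing all directions jointly, is not reproduced here):

        import numpy as np

        def l1_pca_direction(X, iters=50, seed=0):
            """One l1-norm-maximizing projection direction for row data X."""
            rng = np.random.default_rng(seed)
            w = rng.standard_normal(X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(iters):
                s = np.sign(X @ w)          # which side each sample falls on
                s[s == 0] = 1.0
                w_new = X.T @ s
                w_new /= np.linalg.norm(w_new)
                if np.allclose(w_new, w):   # converged to a fixed point
                    break
                w = w_new
            return w

        X = np.random.default_rng(1).standard_normal((100, 5))
        print(l1_pca_direction(X))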

  18. Improved Approximation Algorithms for Item Pricing with Bounded Degree and Valuation

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya

    When a store sells items to customers, the store wishes to decide the prices of the items to maximize its profit. If the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. It would be hard for the store to decide the prices of items. Assume that a store has a set V of n items and there is a set C of m customers who wish to buy those items. The goal of the store is to decide the price of each item to maximize its profit. We refer to this maximization problem as an item pricing problem. We classify the item pricing problems according to how many items the store can sell or how the customers valuate the items. If the store can sell every item i with unlimited (resp. limited) amount, we refer to this as unlimited supply (resp. limited supply). We say that the item pricing problem is single-minded if each customer j ∈ C wishes to buy a set ej ⊆ V of items and assigns valuation w(ej) ≥ 0. For the single-minded item pricing problems (in unlimited supply), Balcan and Blum regarded them as weighted k-hypergraphs and gave several approximation algorithms. In this paper, we focus on the (pseudo) degree of k-hypergraphs and the valuation ratio, i.e., the ratio between the smallest and the largest valuations. Then for the single-minded item pricing problems (in unlimited supply), we show improved approximation algorithms (for k-hypergraphs, general graphs, bipartite graphs, etc.) with respect to the maximum (pseudo) degree and the valuation ratio.
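
    A useful baseline for this setting (single-minded customers, unlimited supply) is the classic uniform-price heuristic: charge every item the same per-item price and try only prices of the form valuation/bundle-size. A sketch (this is the standard logarithmic-approximation baseline, not the improved algorithms of the paper):

        def best_uniform_price(customers):
            """customers: list of (bundle_size, valuation) pairs.
            Returns the per-item price maximizing revenue when every item
            carries the same price p and customer j buys iff v_j >= p * |e_j|.
            """
            candidates = sorted({v / s for s, v in customers})

            def revenue(p):
                return sum(p * s for s, v in customers if v >= p * s - 1e-12)

            return max(candidates, key=revenue)

        print(best_uniform_price([(2, 10.0), (1, 3.0), (3, 9.0)]))   # -> 3.0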

  19. Long-Term Counterinsurgency Strategy: Maximizing Special Operations and Airpower

    DTIC Science & Technology

    2010-02-01

    operations forces possess a repertoire of capabilities and attributes which impart them with unique strategic utility. “That utility reposes most...flashlight), LTMs are employed in a similar role to cue aircrews equipped with Night Vision Devices (NVDs). Concurrently, employment of small laptop...Special Operations Forces (PSS-SOF) and Precision Fires Image Generator (PFIG) have brought similar benefit to the employment of GPS/INS targeted weapons

  20. Decision Making Analysis: Critical Factors-Based Methodology

    DTIC Science & Technology

    2010-04-01

    the pitfalls associated with current wargaming methods such as assuming a western view of rational values in decision-making regardless of the cultures...Utilization theory slightly expands the rational decision-making model as it states that “actors try to maximize their expected utility by weighing the...items to categorize the decision-making behavior of political leaders which tend to demonstrate either a rational or cognitive leaning. Leaders

  1. Bioengineering and Coordination of Regulatory Networks and Intracellular Complexes to Maximize Hydrogen Production by Phototrophic Microorganisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabita, F. Robert

    2013-07-30

    In this study, the Principal Investigator, F.R. Tabita has teamed up with J. C. Liao from UCLA. This project's main goal is to manipulate regulatory networks in phototrophic bacteria to affect and maximize the production of large amounts of hydrogen gas under conditions where wild-type organisms are constrained by inherent regulatory mechanisms from allowing this to occur. Unrestrained production of hydrogen has been achieved and this will allow for the potential utilization of waste materials as a feed stock to support hydrogen production. By further understanding the means by which regulatory networks interact, this study will seek to maximize the ability of currently available “unrestrained” organisms to produce hydrogen. The organisms to be utilized in this study, phototrophic microorganisms, in particular nonsulfur purple (NSP) bacteria, catalyze many significant processes including the assimilation of carbon dioxide into organic carbon, nitrogen fixation, sulfur oxidation, aromatic acid degradation, and hydrogen oxidation/evolution. Moreover, due to their great metabolic versatility, such organisms highly regulate these processes in the cell and since virtually all such capabilities are dispensable, excellent experimental systems to study aspects of molecular control and biochemistry/physiology are available.

  2. Research priorities and plans for the International Space Station-results of the 'REMAP' Task Force

    NASA Technical Reports Server (NTRS)

    Kicza, M.; Erickson, K.; Trinh, E.

    2003-01-01

    Recent events in the International Space Station (ISS) Program have resulted in the necessity to re-examine the research priorities and research plans for future years. Due to both technical and fiscal resource constraints expected on the International Space Station, it is imperative that research priorities be carefully reviewed and clearly articulated. In consultation with OSTP and the Office of Management and Budget (OMB), NASA's Office of Biological and Physical Research (OBPR) assembled an ad-hoc external advisory committee, the Biological and Physical Research Maximization and Prioritization (REMAP) Task Force. This paper describes the outcome of the Task Force and how it is being used to define a roadmap for near- and long-term Biological and Physical Research objectives that supports NASA's Vision and Mission. Additionally, the paper discusses further prioritizations that were necessitated by budget and ISS resource constraints in order to maximize utilization of the International Space Station. Finally, a process has been developed to integrate the requirements for this prioritized research with other agency requirements to develop an integrated ISS assembly and utilization plan that maximizes scientific output. © 2003 American Institute of Aeronautics and Astronautics. Published by Elsevier Science Ltd. All rights reserved.

  3. Approximation Algorithms for the Highway Problem under the Coupon Model

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to decide the prices of items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items, and also assume that each item i ∈ V has the production cost di and each customer ej ∈ E has the valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at the price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader,” and showed that the seller can get more total profit in the case that pi < 0 is allowed than in the case that pi < 0 is not allowed. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of the items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of the items), and show approximation algorithms for the line highway problem and the cycle highway problem in which the smallest valuation is s and the largest valuation is l (this is called an [s, l]-valuation setting) or all valuations are identical (this is called a single valuation setting).

  4. Maximized Gust Loads of a Closed-Loop, Nonlinear Aeroelastic System Using Nonlinear Systems Theory

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    The problem of computing the maximized gust load for a nonlinear, closed-loop aeroelastic aircraft is discussed. The Volterra theory of nonlinear systems is applied in order to define a linearized system that provides a bound on the response of the nonlinear system of interest. The method is applied to a simplified model of an Airbus A310.

  5. Modeling forest stand dynamics from optimal balances of carbon and nitrogen

    Treesearch

    Harry T. Valentine; Annikki Makela

    2012-01-01

    We formulate a dynamic evolutionary optimization problem to predict the optimal pattern by which carbon (C) and nitrogen (N) are co-allocated to fine-root, leaf, and wood production, with the objective of maximizing height growth rate, year by year, in an even-aged stand. Height growth is maximized with respect to two adaptive traits, leaf N concentration and the ratio...

  6. An adaptive large neighborhood search procedure applied to the dynamic patient admission scheduling problem.

    PubMed

    Lusby, Richard Martin; Schwierz, Martin; Range, Troels Martin; Larsen, Jesper

    2016-11-01

    The aim of this paper is to provide an improved method for solving the so-called dynamic patient admission scheduling (DPAS) problem. This is a complex scheduling problem that involves assigning a set of patients to hospital beds over a given time horizon in such a way that several quality measures reflecting patient comfort and treatment efficiency are maximized. Consideration must be given to uncertainty in the lengths of stay of patients as well as to the possibility of emergency patients. We develop an adaptive large neighborhood search (ALNS) procedure to solve the problem; the procedure utilizes a simulated annealing framework. We thoroughly test the performance of the proposed ALNS approach on a set of 450 publicly available problem instances. A comparison with the current state-of-the-art indicates that the proposed methodology provides solutions that are of comparable quality for small and medium sized instances (up to 1000 patients); the two approaches provide solutions that differ in quality by approximately 1% on average. The ALNS procedure does, however, provide solutions in a much shorter time frame. On larger instances (between 1000 and 4000 patients) the improvement in solution quality by the ALNS procedure is substantial, approximately 3-14% on average, and as much as 22% on a single instance. The time taken to find such results is, however, in the worst case a factor of 12 longer on average than the time limit granted to the current state-of-the-art. The proposed ALNS procedure is an efficient and flexible method for solving the DPAS problem. Copyright © 2016 Elsevier B.V. All rights reserved.
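
    The overall control flow of such a procedure is compact enough to sketch; the skeleton below shows adaptive operator weights inside a simulated-annealing acceptance loop (all names are generic placeholders, and the paper's patient-scheduling destroy/repair operators are not reproduced):

        import math
        import random

        def alns(initial, destroy_ops, repair_ops, cost,
                 iters=5000, t0=10.0, alpha=0.999):
            """Adaptive large neighborhood search with SA acceptance."""
            weights = {op: 1.0 for op in destroy_ops + repair_ops}
            pick = lambda ops: random.choices(ops, [weights[o] for o in ops])[0]
            best = cur = initial
            t = t0
            for _ in range(iters):
                d, r = pick(destroy_ops), pick(repair_ops)
                cand = r(d(cur))                      # destroy, then repair
                if cost(cand) < cost(cur) or \
                   random.random() < math.exp((cost(cur) - cost(cand)) / t):
                    cur = cand
                    weights[d] += 0.1                 # reward useful operators
                    weights[r] += 0.1
                if cost(cur) < cost(best):
                    best = cur
                t *= alpha                            # cool the temperature
            return best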

  7. On Profit-Maximizing Pricing for the Highway and Tollbooth Problems

    NASA Astrophysics Data System (ADS)

    Elbassioni, Khaled; Raman, Rajiv; Ray, Saurabh; Sitters, René

    In the tollbooth problem on trees, we are given a tree T = (V, E) with n edges, and a set of m customers, each of whom is interested in purchasing a path on the graph. Each customer has a fixed budget, and the objective is to price the edges of T such that the total revenue made by selling the paths to the customers that can afford them is maximized. An important special case of this problem, known as the highway problem, is when T is restricted to be a line. For the tollbooth problem, we present an O(log n)-approximation, improving on the current best O(log m)-approximation. We also study a special case of the tollbooth problem, when all the paths that customers are interested in purchasing go towards a fixed root of T. In this case, we present an algorithm that returns a (1 - ε)-approximation, for any ε > 0, and runs in quasi-polynomial time. On the other hand, we rule out the existence of an FPTAS by showing that even for the line case, the problem is strongly NP-hard. Finally, we show that in the discount model, when we allow some items to be priced below zero to improve the overall profit, the problem becomes even APX-hard.

  8. The futility of utility: how market dynamics marginalize Adam Smith

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2000-10-01

    Economic theorizing is based on the postulated, nonempirical notion of utility. Economists assume that prices, dynamics, and market equilibria can be derived from utility, and that the results represent mathematically the stabilizing action of Adam Smith's invisible hand. For deterministic excess demand dynamics I show the following: a utility function generally does not exist mathematically, owing to nonintegrable dynamics when production/investment are accounted for, resolving Mirowski's thesis; price as a function of demand does not exist mathematically either; and all equilibria are unstable. I then explain how deterministic chaos can be distinguished from random noise at short times. In the generalization to liquid markets and finance theory described by stochastic excess demand dynamics, I also show the following: market price distributions cannot be rescaled to describe price movements as ‘equilibrium’ fluctuations about a systematic drift in price; utility maximization does not describe equilibrium; maximization of the Gibbs entropy of the observed price distribution of an asset would describe equilibrium, if equilibrium could be achieved, but equilibrium does not describe real, liquid markets (stocks, bonds, foreign exchange). There are three inconsistent definitions of equilibrium used in economics and finance, only one of which is correct. Prices in unregulated free markets are unstable against both noise and rising or falling expectations: Adam Smith's stabilizing invisible hand does not exist, either in mathematical models of liquid market data or in real market data.

  9. Approximation Preserving Reductions among Item Pricing Problems

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has the production cost di and each customer ej ∈ E has the valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at the price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader,” and showed that the seller can get more total profit in the case that pi < 0 is allowed than in the case that pi < 0 is not allowed. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.

  10. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and the number of cells per projection grow, indicating that fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.

  11. Low-Complexity User Selection for Rate Maximization in MIMO Broadcast Channels with Downlink Beamforming

    PubMed Central

    Silva, Adão; Gameiro, Atílio

    2014-01-01

    We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem and its main goal is to create a set of quasiorthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed in such a way that a null space projection power can be approximated efficiently. Based on the derived metrics we present a mathematical model that describes the dynamics of the user selection process which renders the user selection problem into an integer linear program. Numerical results show that our approach is highly efficient to form groups of quasiorthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users. PMID:24574928

  12. Insight into the ten-penny problem: guiding search by constraints and maximization.

    PubMed

    Öllinger, Michael; Fedor, Anna; Brodt, Svenja; Szathmáry, Eörs

    2017-09-01

    For a long time, insight problem solving has been either understood as nothing special or as a particular class of problem solving. The first view implicates the necessity to find efficient heuristics that restrict the search space, the second, the necessity to overcome self-imposed constraints. Recently, promising hybrid cognitive models attempt to merge both approaches. In this vein, we were interested in the interplay of constraints and heuristic search, when problem solvers were asked to solve a difficult multi-step problem, the ten-penny problem. In three experimental groups and one control group (N = 4 × 30) we aimed at revealing, what constraints drive problem difficulty in this problem, and how relaxing constraints, and providing an efficient search criterion facilitates the solution. We also investigated how the search behavior of successful problem solvers and non-solvers differ. We found that relaxing constraints was necessary but not sufficient to solve the problem. Without efficient heuristics that facilitate the restriction of the search space, and testing the progress of the problem solving process, the relaxation of constraints was not effective. Relaxing constraints and applying the search criterion are both necessary to effectively increase solution rates. We also found that successful solvers showed promising moves earlier and had a higher maximization and variation rate across solution attempts. We propose that this finding sheds light on how different strategies contribute to solving difficult problems. Finally, we speculate about the implications of our findings for insight problem solving.

  13. Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.

    PubMed

    Merrick, Jason R W; Leclerc, Philip

    2016-04-01

    Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm to seek for the rational decision, but we use prospect theory to solve for the attacker's decision to descriptively model the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision, whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. © 2014 Society for Risk Analysis.
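
    The descriptive model referred to above has standard functional forms. A sketch using the commonly cited Tversky-Kahneman (1992) shapes and parameter estimates (not the article's fitted values; for simplicity the weighting is applied per branch rather than cumulatively):

        def prospect_value(outcomes, probs, alpha=0.88, beta=0.88,
                           lam=2.25, gamma=0.61):
            """Prospect-theory valuation of a simple gamble."""
            def v(x):        # value function: concave gains, loss-averse losses
                return x ** alpha if x >= 0 else -lam * (-x) ** beta

            def w(p):        # inverse-S probability weighting
                return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

            return sum(w(p) * v(x) for x, p in zip(outcomes, probs))

        # an attacker weighing a small chance of a large gain against a likely loss
        print(prospect_value([100.0, -10.0], [0.1, 0.9]))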

  14. Optimal Resource Allocation in Library Systems

    ERIC Educational Resources Information Center

    Rouse, William B.

    1975-01-01

    Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)

  15. DOT report for implementing OMB's information dissemination quality guidelines

    DOT National Transportation Integrated Search

    2002-08-01

    Consistent with the Office of Management and Budget's (OMB) Guidelines (for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies) implementing Section 515 of the Treasury and...

  16. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to impose a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying an alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the method on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410

  17. The integration of behavioral health interventions in children's health care: services, science, and suggestions.

    PubMed

    Kolko, David J; Perrin, Ellen

    2014-01-01

    Because the integration of mental or behavioral health services in pediatric primary care is a national priority, a description and evaluation of the interventions applied in the healthcare setting is warranted. This article examines several intervention research studies based on alternative models for delivering behavioral health care in conjunction with comprehensive pediatric care. This review describes the diverse methods applied to different clinical problems, such as brief mental health skills, clinical guidelines, and evidence-based practices, and the empirical outcomes of this research literature. Next, several key treatment considerations are discussed to maximize the efficiency and effectiveness of these interventions. Some practical suggestions for overcoming key service barriers are provided to enhance the capacity of the practice to deliver behavioral health care. There is moderate empirical support for the feasibility, acceptability, and clinical utility of these interventions for treating internalizing and externalizing behavior problems. Practical strategies to extend this work and address methodological limitations are provided that draw upon recent frameworks designed to simplify the treatment enterprise (e.g., common elements). Pediatric primary care has become an important venue for providing mental health services to children and adolescents due, in part, to its many desirable features (e.g., no stigma, local setting, familiar providers). Further adaptation of existing delivery models may promote the delivery of effective integrated interventions with primary care providers as partners designed to address mental health problems in pediatric healthcare.

  18. A Poisson nonnegative matrix factorization method with parameter subspace clustering constraint for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei

    2017-06-01

    A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) is presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in an imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with the performance of PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.
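
    As background for the Poisson likelihood at the heart of the method: maximizing it is equivalent to minimizing the generalized Kullback-Leibler divergence, for which the classic Lee-Seung multiplicative updates apply. A minimal sketch (plain Poisson NMF only; the paper's PSCC prior, ADMM solver, and FURTHESTSUM initialization are not reproduced):

        import numpy as np

        def poisson_nmf(V, r, iters=200, seed=0):
            """NMF under a Poisson (KL-divergence) likelihood.
            V: nonnegative (bands x pixels) matrix, r: number of endmembers."""
            rng = np.random.default_rng(seed)
            W = rng.random((V.shape[0], r)) + 0.1    # endmember spectra
            H = rng.random((r, V.shape[1])) + 0.1    # abundances
            for _ in range(iters):
                WH = W @ H + 1e-12
                H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + 1e-12)
                WH = W @ H + 1e-12
                W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + 1e-12)
            return W, H

        V = np.random.default_rng(1).poisson(5.0, size=(30, 200)).astype(float)
        W, H = poisson_nmf(V, r=3)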

  19. Optimizing Preseason Training Loads in Australian Football.

    PubMed

    Carey, David L; Crow, Justin; Ong, Kok-Leong; Blanch, Peter; Morris, Meg E; Dascombe, Ben J; Crossley, Kay M

    2018-02-01

    To investigate whether preseason training plans for Australian football can be computer generated using current training-load guidelines to optimize injury-risk reduction and performance improvement. A constrained optimization problem was defined for daily total and sprint distance, using the preseason schedule of an elite Australian football team as a template. Maximizing total training volume and maximizing Banister-model-projected performance were both considered optimization objectives. Cumulative workload and acute:chronic workload-ratio constraints were placed on training programs to reflect current guidelines on relative and absolute training loads for injury-risk reduction. Optimization software was then used to generate preseason training plans. The optimization framework was able to generate training plans that satisfied relative and absolute workload constraints. Increasing the off-season chronic training loads enabled the optimization algorithm to prescribe higher amounts of "safe" training and attain higher projected performance levels. Simulations showed that using a Banister-model objective led to plans that included a taper in training load prior to competition to minimize fatigue and maximize projected performance. In contrast, when the objective was to maximize total training volume, more frequent training was prescribed to accumulate as much load as possible. Feasible training plans that maximize projected performance and satisfy injury-risk constraints can be automatically generated by an optimization problem for Australian football. The optimization methods allow for individualized training-plan design and the ability to adapt to changing training objectives and different training-load metrics.
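
    The Banister impulse-response model mentioned above is simple to state: projected performance is a baseline plus a slowly decaying "fitness" term minus a quickly decaying "fatigue" term, each driven by the daily loads. A sketch with generic placeholder parameters (not the study's fitted values):

        import numpy as np

        def banister_performance(loads, p0=500.0, k1=1.0, k2=2.0,
                                 tau1=42.0, tau2=7.0):
            """p(t) = p0 + k1*sum_{i<t} w_i e^{-(t-i)/tau1}   (fitness)
                         - k2*sum_{i<t} w_i e^{-(t-i)/tau2}   (fatigue)"""
            loads = np.asarray(loads, dtype=float)
            p = np.full(len(loads), p0)
            for t in range(1, len(loads)):
                lag = t - np.arange(t)
                p[t] += k1 * np.sum(loads[:t] * np.exp(-lag / tau1)) \
                      - k2 * np.sum(loads[:t] * np.exp(-lag / tau2))
            return p

        # steady preseason loading followed by a taper before competition
        loads = np.r_[np.full(40, 60.0), np.linspace(60.0, 10.0, 10)]
        print(banister_performance(loads)[-1])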

  20. A simple technique to increase profits in wood products marketing

    Treesearch

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...

  1. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.
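
    For orientation, the problem being solved is the classical variational one (standard background; the paper's tree-sum formula itself is its novel contribution and is not reproduced here):

        \[
        p^{*} = \arg\max_{p} \Big\{ -\!\int p(x)\log p(x)\,dx \;\Big|\;
        \int f_i(x)\,p(x)\,dx = c_i \Big\}
        \quad\Longrightarrow\quad
        p^{*}(x) = \frac{1}{Z(\lambda)}\exp\Big(\sum_i \lambda_i f_i(x)\Big),
        \]

    where the multipliers λ_i are fixed implicitly by the constraints; the paper's result is an explicit perturbative expansion of that implicit solution as a sum over weighted trees.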

  2. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  3. Formulation and demonstration of a robust mean variance optimization approach for concurrent airline network and aircraft design

    NASA Astrophysics Data System (ADS)

    Davendralingam, Navindran

    Conceptual design of aircraft and of the airline network (routes) on which aircraft fly is inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that treats aircraft design, airline network design and passenger demand as a unified framework, providing better-integrated design solutions that maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to weigh the risk of serving future projected demand with a yet-to-be-introduced aircraft against the future profits it could generate. Robust optimization methodologies are incorporated to mitigate model sensitivity and address the estimation risks associated with such optimization techniques. The second approach extends the portfolio approach to include the dynamic effects of an airline's operations. A dynamic programming approach is employed to simulate the reflexive nature of airline supply-demand interactions by modeling the aggregate changes in demand that result from tactical allocations of aircraft to maximize profit. The best yet-to-be-introduced aircraft maximizes profit by minimizing the long-term fleetwide direct operating costs.

  4. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt ( www.divat.fr ). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients: we conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve future transfer of novel prognostic scoring systems or markers in clinical practice.

  5. Quantum teleportation via quantum channels with non-maximal Schmidt rank

    NASA Astrophysics Data System (ADS)

    Solís-Prosser, M. A.; Jiménez, O.; Neves, L.; Delgado, A.

    2013-03-01

    We study the problem of teleporting unknown pure states of a single qudit via a pure quantum channel with non-maximal Schmidt rank. We relate this process to the discrimination of linearly dependent symmetric states with the help of the maximum-confidence discrimination strategy. We show that with a certain probability, it is possible to teleport with a fidelity larger than the fidelity of optimal deterministic teleportation.

  6. Hierarchical trie packet classification algorithm based on expectation-maximization clustering.

    PubMed

    Bi, Xia-An; Zhao, Junxia

    2017-01-01

    With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing algorithms, research on packet classification based on the hierarchical trie has become an important branch because of its wide practical use. Although the hierarchical trie saves substantial storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the experimental results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.

  7. Addressing the Common Pathway Underlying Hypertension and Diabetes in People Who Are Obese by Maximizing Health: The Ultimate Knowledge Translation Gap

    PubMed Central

    Dean, Elizabeth; Lomi, Constantina; Bruno, Selma; Awad, Hamzeh; O'Donoghue, Grainne

    2011-01-01

    In accordance with the WHO definition of health, this article examines the alarming discord between the epidemiology of hypertension, type 2 diabetes mellitus (T2DM), and obesity and the low profile of noninvasive (nondrug) compared with invasive (drug) interventions with respect to their prevention, reversal and management. Herein lies the ultimate knowledge translation gap and challenge in 21st century health care. Although lifestyle modification has long appeared in guidelines for medically managing these conditions, this evidence-based strategy is seldom implemented as rigorously as drug prescription. Biomedicine focuses largely on reducing signs and symptoms; the effects of the problem rather than the problem itself. This article highlights the evidence-based rationale supporting prioritizing the underlying causes and contributing factors for hypertension and T2DM, and, in turn, obesity. We argue that a primary focus on maximizing health could eliminate all three conditions, at best, or, at worst, minimize their severity, complications, and medication needs. To enable such knowledge translation and maximize health outcomes, the health care community needs to practice as an integrated team, and address barriers to effecting maximal health in all patients. Addressing the ultimate knowledge translation gap, by aligning the health care paradigm to 21st century needs, would constitute a major advance. PMID:21423684

  8. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated and solid panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization problems are solved to check the efficiency of the developed method.

  9. Trading strategies for distribution company with stochastic distributed energy resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chunyu; Wang, Qi; Wang, Jianhui

    2016-09-01

    This paper proposes a methodology to address the trading strategies of a proactive distribution company (PDISCO) engaged in the transmission-level (TL) markets. A one-leader multi-follower bilevel model is presented to formulate the gaming framework between the PDISCO and markets. The lower-level (LL) problems include the TL day-ahead market and scenario-based real-time markets, respectively with the objectives of maximizing social welfare and minimizing operation cost. The upper-level (UL) problem is to maximize the PDISCO’s profit across these markets. The PDISCO’s strategic offers/bids interactively influence the outcomes of each market. Since the LL problems are linear and convex, while the UL problem is non-linear and non-convex, an equivalent primal–dual approach is used to reformulate this bilevel model to a solvable mathematical program with equilibrium constraints (MPEC). The effectiveness of the proposed model is verified by case studies.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Thomas W.; Quach, Tu-Thach; Detry, Richard Joseph

    Complex Adaptive Systems of Systems, or CASoS, are vastly complex ecological, sociological, economic and/or technical systems which we must understand to design a secure future for the nation and the world. Perturbations/disruptions in CASoS have the potential for far-reaching effects due to pervasive interdependencies and attendant vulnerabilities to cascades in associated systems. Phoenix was initiated to address this high-impact problem space as engineers. Our overarching goals are maximizing security, maximizing health, and minimizing risk. We design interventions, or problem solutions, that influence CASoS to achieve specific aspirations. Through application to real-world problems, Phoenix is evolving the principles and discipline of CASoS Engineering while growing a community of practice and the CASoS engineers to populate it. Both grounded in reality and working to extend our understanding and control of that reality, Phoenix is at the same time a solution within a CASoS and a CASoS itself.

  11. Intelligent transportation systems : tools to maximize state transportation investments

    DOT National Transportation Integrated Search

    1997-07-28

    This Issue Brief summarizes national ITS goals and state transportation needs. It reviews states' experience with ITS to date and discusses the utility of ITS technologies to improve transportation infrastructure. The Issue Brief also provides cost...

  12. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
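
    As a rough sketch of the crisp quadratic baseline that such fuzzy formulations build on, the Markowitz-style QP can be solved as follows; the return vector, covariance matrix, and trade-off parameter are illustrative assumptions, not data from the paper:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical example data: expected returns and covariance of 3 assets.
    mu = np.array([0.10, 0.07, 0.04])
    Sigma = np.array([[0.09, 0.02, 0.01],
                      [0.02, 0.04, 0.01],
                      [0.01, 0.01, 0.02]])
    tau = 0.5  # assumed risk-tolerance trade-off parameter

    # Markowitz quadratic program: min_w  w'Sigma w - tau * mu'w
    # subject to the weights summing to 1 and no short selling.
    objective = lambda w: w @ Sigma @ w - tau * (mu @ w)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * len(mu)
    w0 = np.full(len(mu), 1.0 / len(mu))

    result = minimize(objective, w0, bounds=bounds, constraints=constraints)
    print("optimal weights:", result.x.round(4))
    ```

    A fuzzy variant would replace mu and Sigma with fuzzy numbers and solve the resulting multi-objective fuzzy linear program instead.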

  13. Compressed Secret Key Agreement: Maximizing Multivariate Mutual Information per Bit

    NASA Astrophysics Data System (ADS)

    Chan, Chung

    2017-10-01

    The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components for generating a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion, by offering a more structured achieving scheme and some simpler conjectures to prove.

  14. Alpha-Fair Resource Allocation under Incomplete Information and Presence of a Jammer

    NASA Astrophysics Data System (ADS)

    Altman, Eitan; Avrachenkov, Konstantin; Garnaev, Andrey

    In the present work we deal with the concept of alpha-fair resource allocation in the situation where the decision maker (in our case, the base station) does not have complete information about the environment. Namely, we develop a concept of α-fairness under uncertainty for allocating power in the presence of a jammer under two types of uncertainty: (a) the decision maker does not have complete knowledge about the parameters of the environment, but knows only their distribution; (b) the jammer can come into the environment with some probability, bringing extra background noise. The goal of the decision maker is to maximize the α-fair utility function with respect to the SNIR (signal to noise-plus-interference ratio). Here we consider the expected α-fairness utility function (short-term fairness) as well as fairness of expectation (long-term fairness). In the scenario with unknown environment parameters, the most adequate approach is a zero-sum game, since it can also be viewed as a minimax problem in which the decision maker plays against nature and has to apply the best allocation under the worst circumstances. In the scenario with uncertainty about the jammer's presence, the Nash equilibrium concept is employed, since the agents have non-zero-sum payoffs: the decision maker would like to maximize either the expected fairness or the fairness of expectation, while the jammer would like to minimize the fairness if he enters the scene. For all scenarios, the equilibrium strategies are found in closed form. We have shown that in each scenario the equilibrium has to be constructed in two steps. In the first step, the equilibrium jamming strategy is constructed from a solution of the corresponding modification of the water-filling equation. In the second step, the decision maker's equilibrium strategy is constructed by equalizing the background noise induced by the jammer.
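
    For reference, the α-fair utility family used here has the standard closed form (a textbook statement):

    ```latex
    U_\alpha(x) =
    \begin{cases}
    \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[4pt]
    \log x, & \alpha = 1,
    \end{cases}
    ```

    with α = 0 recovering throughput maximization, α = 1 proportional fairness, and α → ∞ max-min fairness; here x would be a user's SNIR-based rate.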

  15. An Empirical Study of why our Cognition Toward Environmental Sustainability is Inconsistent with our Behavior: Policy Implications

    NASA Astrophysics Data System (ADS)

    Ho, S. Ping

    2016-04-01

    Raising public awareness of human environmental problems has been considered an effective way to promote public participation in environmental sustainability. At the individual level, such participation mainly includes the willingness to adopt less consumptive lifestyles and to follow the principles of reuse, reduce, and recycle. However, in reality, the development of environmental sustainability falls into the "Enlightenment Fallacy," which asserts that enlightenment does not consequentially translate into meaningful reduction of pollution. We argue that environmental awareness operates mainly at the level of cognition, which is built upon knowledge and facts, whereas behaviors toward sustainable development are largely dominated by economic principles that focus on utility maximization. As such, the Enlightenment Fallacy can be explained by the "Tragedy of the Commons," which occurs in the prevailing capitalism-based economic system. This follows from the assumption in modern economics that human beings are in general self-interested, with unending desires but few moral concerns. Thus, economic individuals, who seek mainly their maximal utility or benefit, will not make significant sacrifices for improving environmental sustainability, which cannot be achieved by only a few individuals. From this perspective, we argue that only those individuals who are less self-interested and have more compassion toward mankind and the earth will actively participate in environmental sustainability. In this study, we examine empirically the Enlightenment Fallacy phenomenon and develop an empirical model to test the following four hypotheses concerning the inconsistency between environmental cognition and actual behaviors. Policy implications for promoting public participation are suggested based on our empirical results. Hypothesis 1: Compassion (for mankind) has a larger positive impact than environmental cognition. Hypothesis 2: Social punishment and encouragement have a larger positive impact than environmental cognition. Hypothesis 3: The higher an individual's need/desire for resource preservation, the lower the individual's participation in environmental sustainability. Hypothesis 4: The higher an individual's compassion, the smaller the impact of the individual's need for resource preservation on environmental participation.

  16. The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators

    NASA Astrophysics Data System (ADS)

    Ahmedov, Anvarjon

    2018-03-01

    In the present research we investigate problems concerning the almost-everywhere convergence of multiple Fourier series summed over elliptic levels in the Liouville classes. Sufficient conditions for almost-everywhere convergence, which is among the most difficult problems in harmonic analysis, are obtained. Methods of approximation by multiple Fourier series summed over elliptic levels are applied to obtain suitable estimates for the maximal operator of the spectral decompositions. Obtaining such estimates involves complicated calculations that depend on the functional structure of the classes of functions. The main idea in proving the almost-everywhere convergence of the eigenfunction expansions in interpolation spaces is to estimate the maximal operator of the partial sums in the boundary classes and to apply the interpolation theorem for families of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost-everywhere convergence of multiple Fourier series by elliptic summation methods is established. Viewing multiple Fourier series as eigenfunction expansions of differential operators helps translate functional properties (for example, smoothness) of the Liouville classes into the Fourier coefficients of the expanded functions. Sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of smoothness and dimension. Such results are highly effective in solving boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. The study of multiple Fourier series by modern methods of harmonic analysis incorporates methods from functional analysis, mathematical physics, modern operator theory and spectral decomposition. A new method for the best approximation of a square-integrable function by multiple Fourier series summed over elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated; the latter is applied to obtain an estimate for the maximal operator in the classes of Liouville.

  17. A proof of the log-concavity conjecture related to the computation of the ergodic capacity of MIMO channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurvitis, Leonid

    2009-01-01

    An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ_1, ..., λ_n) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-stable polynomials.

  18. Statistical mechanics of multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Marzolino, U.; Parisi, G.; Pascazio, S.

    2009-02-01

    We characterize the multipartite entanglement of a system of n qubits in terms of the distribution function of the bipartite purity over all balanced bipartitions. We search for those (maximally multipartite entangled) states whose purity is minimum for all bipartitions and recast this optimization problem into a problem of statistical mechanics.
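
    As a small numerical aside (not from the paper), the bipartite purity that is averaged over balanced bipartitions can be computed for a random pure state of n qubits as follows:

    ```python
    import numpy as np

    def balanced_bipartition_purity(psi: np.ndarray, n_qubits: int) -> float:
        """Purity Tr(rho_A^2) of the reduced state of the first n/2 qubits."""
        half = n_qubits // 2
        # Reshape the state vector into a (2^half, 2^(n-half)) matrix M, so
        # that rho_A = M M^dagger and Tr(rho_A^2) = sum of singular values^4.
        M = psi.reshape(2**half, 2**(n_qubits - half))
        s = np.linalg.svd(M, compute_uv=False)
        return float(np.sum(s**4))

    # Random pure state of 4 qubits (normalized complex Gaussian is Haar-uniform).
    n = 4
    psi = np.random.randn(2**n) + 1j * np.random.randn(2**n)
    psi /= np.linalg.norm(psi)
    print("purity of one balanced cut:", balanced_bipartition_purity(psi, n))
    ```

    A maximally multipartite entangled state would drive this purity toward its minimum (here 1/4) simultaneously for all balanced cuts, which is the optimization the authors recast in statistical-mechanics terms.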

  19. Manpower and the City.

    ERIC Educational Resources Information Center

    Bolino, August C.

    Stressing the problems of American inner cities, this volume reviews major manpower problems in their urban setting, various Federal training and educational approaches to maximizing the use of manpower, and the directions that these programs may take during the 1970s. Chapter 1 reviews the general economic conditions of American cities.…

  20. Efficient Double Auction Mechanisms in the Energy Grid with Connected and Islanded Microgrids

    NASA Astrophysics Data System (ADS)

    Faqiry, Mohammad Nazif

    The future energy grid is expected to operate in a decentralized fashion as a network of autonomous microgrids that are coordinated by a Distribution System Operator (DSO), which should allocate energy to them in an efficient manner. Each microgrid operating in either islanded or grid-connected mode may be considered to manage its own resources. This can take place through auctions with individual units of the microgrid as the agents. This research proposes efficient auction mechanisms for the energy grid, with islanded and connected microgrids. The microgrid level auction is carried out by means of an intermediate agent called an aggregator. The individual consumer and producer units are modeled as selfish agents. With the microgrid in islanded mode, two aggregator-level auction classes are analyzed: (i) price-heterogeneous, and (ii) price-homogeneous. Under the price heterogeneity paradigm, this research extends earlier work on the well-known, single-sided Kelly mechanism to double auctions. As in Kelly auctions, the proposed algorithm implements the bidding without using any agent-level private information (i.e. generation capacity and utility functions). The proposed auction is shown to be an efficient mechanism that maximizes the social welfare, i.e. the sum of the utilities of all the agents. Furthermore, the research considers the situation where a subset of agents act as a coalition to redistribute the allocated energy and price using any other specific fairness criterion. The price homogeneous double auction algorithm proposed in this research addresses the problem of price-anticipation, where each agent tries to influence the equilibrium price of energy by placing strategic bids. As a result of this behavior, the auction's efficiency is lowered. This research proposes a novel approach that is implemented by the aggregator, called virtual bidding, where the efficiency can be asymptotically maximized, even in the presence of price-anticipatory bidders. Next, an auction mechanism for the energy grid with multiple connected microgrids is considered. A globally efficient bi-level auction algorithm is proposed. At the upper level, the algorithm takes into account physical grid constraints in allocating energy to the microgrids. It is implemented by the DSO as a linear objective quadratic constraint problem that allows price heterogeneity across the aggregators. In parallel, each aggregator implements its own lower-level price homogeneous auction with virtual bidding. The research concludes with a preliminary study on extending the DSO-level auction to multi-period day-ahead scheduling. It takes into account storage units and conventional generators that are present in the grid by formulating the auction as a mixed integer linear programming problem.
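
    As background, the single-sided Kelly mechanism that the proposed double auction extends allocates a divisible resource in proportion to monetary bids; a minimal sketch with made-up bids and capacity (not from the thesis):

    ```python
    def kelly_allocation(bids, capacity):
        """Proportional (Kelly) allocation: agent i receives capacity * b_i / sum(b),
        and the market-clearing price per unit is sum(b) / capacity."""
        total = sum(bids)
        price = total / capacity
        allocation = [capacity * b / total for b in bids]
        return allocation, price

    # Hypothetical example: three consumers bidding (in $) for 100 kWh.
    alloc, price = kelly_allocation([30.0, 50.0, 20.0], 100.0)
    print("allocations:", alloc)   # [30.0, 50.0, 20.0] kWh
    print("unit price:", price)    # 1.0 $/kWh
    ```

    Because each agent reveals only a bid, no private utility or capacity information is needed, which is the property the proposed double auction preserves.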

  1. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory

    PubMed Central

    Pastor-Bernier, Alexandre; Plott, Charles R.; Schultz, Wolfram

    2017-01-01

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices “as if” they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals’ choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals’ preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved “as if” they had well-structured preferences and maximized utility. PMID:28202727

  2. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory.

    PubMed

    Pastor-Bernier, Alexandre; Plott, Charles R; Schultz, Wolfram

    2017-03-07

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices "as if" they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals' choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals' preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved "as if" they had well-structured preferences and maximized utility.

  3. An improved Four-Russians method and sparsified Four-Russians algorithm for RNA folding.

    PubMed

    Frid, Yelena; Gusfield, Dan

    2016-01-01

    The basic RNA secondary structure prediction problem, or single sequence folding problem (SSF), was solved 35 years ago by a now well-known [Formula: see text]-time dynamic programming method. Recently three methodologies---Valiant, Four-Russians, and Sparsification---have been applied to speed up RNA secondary structure prediction. The sparsification method exploits two properties of the input: the number of subsequences Z with endpoints belonging to the optimal folding set, and the maximum number of base pairs L. These sparsity properties satisfy [Formula: see text] and [Formula: see text], and the method reduces the algorithmic running time to O(LZ). The Four-Russians method, in contrast, speeds up computation by tabling partial results. In this paper, we explore three different algorithmic speedups. We first expand and reformulate the single sequence folding Four-Russians [Formula: see text]-time algorithm to utilize an on-demand lookup table. Second, we create a framework that combines the fastest Sparsification and the new fastest on-demand Four-Russians methods. This combined method has a worst-case running time of [Formula: see text], where [Formula: see text] and [Formula: see text]. Third, we update the Four-Russians formulation to achieve an on-demand [Formula: see text]-time parallel algorithm. This leads to an asymptotic speedup of [Formula: see text], where [Formula: see text] and [Formula: see text] is the number of subsequences with endpoint j belonging to the optimal folding set. The on-demand formulation not only removes all extraneous computation and allows us to incorporate more realistic scoring schemes, but also lets us take advantage of the sparsity properties. Through asymptotic analysis and empirical testing on the base-pair maximization variant and a more biologically informative scoring scheme, we show that this Sparse Four-Russians framework achieves a speedup on every problem instance that is asymptotically never worse, and empirically better, than the minimum of the two methods alone.
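
    For orientation, the base-pair maximization dynamic program that all three speedups accelerate is the classic Nussinov-style cubic-time recurrence; a minimal sketch of that textbook algorithm (not the paper's Four-Russians or sparsified variant):

    ```python
    def max_base_pairs(seq: str, min_loop: int = 3) -> int:
        """Nussinov-style cubic-time DP: maximum number of complementary base pairs.
        N[i][j] = max( N[i][j-1],                       # j unpaired
                       max over pairable k of
                       N[i][k-1] + 1 + N[k+1][j-1] )    # j pairs with k
        """
        pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                 ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(seq)
        N = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = N[i][j - 1]
                for k in range(i, j - min_loop):  # hairpin loop >= min_loop bases
                    if (seq[k], seq[j]) in pairs:
                        left = N[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + N[k + 1][j - 1])
                N[i][j] = best
        return N[0][n - 1]

    print(max_base_pairs("GGGAAAUCC"))  # 3 pairs for this toy sequence
    ```

    The sparsified variants prune the inner loop over k to the optimal-folding-set subsequences counted by Z, while Four-Russians replaces blocks of this table computation with precomputed (or on-demand) lookups.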

  4. Computational health economics for identification of unprofitable health care enrollees

    PubMed Central

    Rose, Sherri; Bergquist, Savannah L.; Layton, Timothy J.

    2017-01-01

    Health insurers may attempt to design their health plans to attract profitable enrollees while deterring unprofitable ones. Such insurers would not be delivering socially efficient levels of care by providing health plans that maximize societal benefit, but rather intentionally distorting plan benefits to avoid high-cost enrollees, potentially to the detriment of health and efficiency. In this work, we focus on a specific component of health plan design at risk for health insurer distortion in the Health Insurance Marketplaces: the prescription drug formulary. We introduce an ensembled machine learning function to determine whether drug utilization variables are predictive of a new measure of enrollee unprofitability we derive, and thus vulnerable to distortions by insurers. Our implementation also contains a unique application-specific variable selection tool. This study demonstrates that super learning is effective in extracting the relevant signal for this prediction problem, and that a small number of drug variables can be used to identify unprofitable enrollees. The results are both encouraging and concerning. While risk adjustment appears to have been reasonably successful at weakening the relationship between therapeutic-class-specific drug utilization and unprofitability, some classes remain predictive of insurer losses. The vulnerable enrollees whose prescription drug regimens include drugs in these classes may need special protection from regulators in health insurance market design. PMID:28369273

  5. Primary school children's communication experiences with Twitter: a case study from Turkey.

    PubMed

    Gunuc, Selim; Misirli, Ozge; Odabasi, H Ferhan

    2013-06-01

    This case study examines the utilization of Twitter as a communication channel among primary school children. This study tries to answer the following questions: "What are the cases for primary school children's use of Twitter for communication?" and "What are primary school children's experiences of utilizing Twitter for communication?" Participants were 7th grade students (17 female, 34 male; age 13 years) studying in a private primary school in Turkey within the 2011-12 academic year. A questionnaire, semi-structured interview, document analysis, and open ended questions were used as data collection tools. The children were invited and encouraged to use Twitter for communication. Whilst participants had some minor difficulties getting accustomed to Twitter, they managed to use Twitter for communication, a conclusion drawn from the children's responses and tweets within the study. However, the majority of children did not consider Twitter as a communication tool, and were observed to quit using Twitter once the study had ended. They found Twitter unproductive and restrictive for communication. Furthermore, Twitter's low popularity among adolescents was also a problem. This study suggests that social networking tools favored by children should be integrated into educational settings in order to maximize instructional benefits for primary school children and adolescents.

  6. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a certain distribution, and the risk of the portfolio is measured using the Value-at-Risk (VaR). The portfolio optimization is thus based on the Mean-VaR model and is carried out using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the ones vector, the covariance matrix of asset returns, and the risk tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the analysis of the return data of the five stocks, the weight composition vector and the efficient-surface chart of the portfolio are obtained; these can be used as a guide for investors' investment decisions.
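
    A minimal numpy sketch of the kind of closed-form weighting vector described, shown here for the simpler mean-variance core problem max_w tau*mu'w - 0.5*w'Sigma w subject to 1'w = 1 (shorting allowed); the data are illustrative, and the paper's Mean-VaR model adds a VaR-based risk term on top of this:

    ```python
    import numpy as np

    def mean_variance_weights(mu, Sigma, tau):
        """Closed-form solution of max_w tau*mu'w - 0.5*w'Sigma w  s.t. 1'w = 1.
        Lagrange stationarity gives w = Sigma^{-1}(tau*mu - nu*1), with the
        multiplier nu chosen so the budget constraint 1'w = 1 holds."""
        ones = np.ones(len(mu))
        Sinv_mu = np.linalg.solve(Sigma, mu)
        Sinv_1 = np.linalg.solve(Sigma, ones)
        nu = (tau * ones @ Sinv_mu - 1.0) / (ones @ Sinv_1)
        return tau * Sinv_mu - nu * Sinv_1

    # Illustrative data for five assets (not the paper's Indonesian stock data).
    rng = np.random.default_rng(0)
    mu = rng.uniform(0.02, 0.12, size=5)
    A = rng.normal(size=(5, 5))
    Sigma = A @ A.T / 5 + 0.05 * np.eye(5)   # a positive-definite covariance
    w = mean_variance_weights(mu, Sigma, tau=0.3)
    print("weights:", w.round(4), " sum:", round(w.sum(), 6))
    ```

    Sweeping tau over a grid and plotting the resulting (risk, return) pairs traces the efficient surface mentioned in the abstract.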

  7. Application of soil block without burning process and calcium silicate panels as building wall in mountainous area

    NASA Astrophysics Data System (ADS)

    Noerwasito, Vincentius Totok; Nasution, Tanti Satriana Rosary

    2017-11-01

    Utilization of local building materials in a residential location in a mountainous area is very important, since local material is a low-energy building material due to low transport energy. The local building material used in this study is a wall made from soil blocks. The material was made by the surrounding community from compacted soil without a burning process. To maximize the potential of the soil block against the outdoor temperature in the mountains, it is necessary to add a non-local building material as an insulator against the influence of the outside air. The insulator was a calcium silicate panel. The location of the research is Trawas sub-district, Mojokerto regency, which is a mountainous area. The research problem is how to apply a composition of local materials and calcium silicate panels that meets the requirements for a wall building material, and to determine the extent of the wall's impact on indoor temperature. The result of this research was the application of soil block walls insulated with calcium silicate panels in a building model. Owing to these materials, the building exhibits a distinct difference between indoor and outdoor temperature. Thus, this model can be applied in mountainous areas in Indonesia.

  8. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
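
    The auxiliary-function bound at the heart of this framework can be stated compactly: for any continuously differentiable V and any bounded trajectory of dx/dt = F(x), the long-time average of Φ obeys

    ```latex
    \overline{\Phi} \;:=\; \limsup_{T\to\infty}\frac{1}{T}\int_0^{T}\Phi\bigl(x(t)\bigr)\,dt
    \;\le\; \sup_{x}\Bigl[\,\Phi(x) + F(x)\cdot\nabla V(x)\,\Bigr],
    ```

    since the time average of F·∇V = dV/dt vanishes along bounded trajectories. Minimizing the right-hand side over V is the convex problem mentioned, and for polynomial F, Φ, and V it relaxes to the semidefinite programming formulation noted in the abstract.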

  9. Solving a Production Scheduling Problem by Means of Two Biobjective Metaheuristic Procedures

    NASA Astrophysics Data System (ADS)

    Toncovich, Adrián; Oliveros Colay, María José; Moreno, José María; Corral, Jiménez; Corral, Rafael

    2009-11-01

    Production planning and scheduling problems emphasize the need for management tools that help assure proper service levels to customers while keeping production costs at acceptable levels and maximizing the utilization of the production facilities. Here, a production scheduling problem arising in the context of a company dedicated to the manufacturing of furniture for children and teenagers is addressed. Two bicriteria metaheuristic procedures are proposed to solve the sequencing problem on the piece of production equipment that constitutes the bottleneck of the company's production process. The production scheduling problem can be characterized as a general flow shop with sequence-dependent setup times and additional inventory constraints. Two objectives are simultaneously taken into account when evaluating the quality of candidate solutions: the minimization of the completion time of all jobs, or makespan, and the minimization of the total flow time of all jobs. Both procedures are based on a local search strategy following the structure of the simulated annealing metaheuristic. Each metaheuristic generates a set of solutions that approximates the optimal Pareto front. In order to evaluate the performance of the proposed techniques, a series of experiments was conducted. The results show that the solutions provided by both approaches are adequate in terms of quality as well as the computational effort involved in their generation. Nevertheless, further refinement of the proposed procedures should be implemented with the aim of facilitating a quasi-automatic definition of the solution parameters.

  10. Balance in machine architecture: Bandwidth on board and offboard, integer/control speed and flops versus memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischler, M.

    1992-04-01

    The issues to be addressed here are those of "balance" in machine architecture. By this, we mean how much emphasis must be placed on various aspects of the system to maximize its usefulness for physics. There are three components that contribute to the utility of a system: how the machine can be used, how big a problem can be attacked, and what the effective capabilities (power) of the hardware are like. The effective power issue is a matter of evaluating the impact of design decisions trading off architectural features such as memory bandwidth and interprocessor communication capabilities. What is studied is the effect these machine parameters have on how quickly the system can solve desired problems. There is a reasonable method for studying this: one selects a few representative algorithms and computes the impact of changing memory bandwidths, and so forth. The only room for controversy here is in the selection of representative problems. The issue of how big a problem can be attacked boils down to a balance of memory size versus power. Although this is a balance issue, it is very different from the effective power situation, because no firm answer can be given at this time. The power-to-memory ratio is highly problem dependent, and optimizing it requires several pieces of physics input, including: how big a lattice is needed for interesting results; what sort of algorithms are best to use; and how many sweeps are needed to get valid results. We seem to be at the threshold of learning things about these issues, but for now, the memory size issue will necessarily be addressed in terms of best guesses, rules of thumb, and researchers' opinions.

  12. Kolkata Paise Restaurant Problem: An Introduction

    NASA Astrophysics Data System (ADS)

    Ghosh, Asim; Biswas, Soumyajyoti; Chatterjee, Arnab; Chakrabarti, Anindya Sundar; Naskar, Tapan; Mitra, Manipushpak; Chakrabarti, Bikas K.

    We discuss several stochastic optimization strategies in games with many players having a large number of choices (the Kolkata Paise Restaurant Problem) or two choices (the minority game problem). It is seen that a stochastic crowd-avoiding strategy gives very efficient utilization in the KPR problem. A slightly modified strategy in the minority game gives full utilization, but the dynamics stop after reaching full efficiency, making the utilization helpful for only about half of the population (those in the minority). We further discuss ways in which the dynamics may be continued so that the utilization becomes effective for all agents while keeping fluctuations arbitrarily small.
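
    A minimal simulation of one common form of the stochastic crowd-avoiding strategy (an agent at a restaurant with crowd size n_k returns there the next day with probability 1/n_k, else moves; the parameters and the uniform re-draw for movers are illustrative simplifications, not necessarily the authors' exact rule):

    ```python
    import numpy as np

    def kpr_simulation(n_agents=500, days=200, seed=1):
        """Simulate the KPR problem with a stochastic crowd-avoiding strategy.
        Returns the fraction of restaurants serving a customer (utilization)."""
        rng = np.random.default_rng(seed)
        n_rest = n_agents  # as many restaurants as agents
        choice = rng.integers(n_rest, size=n_agents)
        for _ in range(days):
            counts = np.bincount(choice, minlength=n_rest)
            # Stay with probability 1 / (crowd size at current restaurant).
            stay = rng.random(n_agents) < 1.0 / counts[choice]
            # Movers re-draw uniformly over all restaurants (a simplification;
            # excluding the current one changes the result only slightly).
            new = rng.integers(n_rest, size=n_agents)
            choice = np.where(stay, choice, new)
        counts = np.bincount(choice, minlength=n_rest)
        return float(np.mean(counts > 0))

    print("utilization ~", round(kpr_simulation(), 3))  # typically around 0.8
    ```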

  13. Robust Rate Maximization for Heterogeneous Wireless Networks under Channel Uncertainties

    PubMed Central

    Xu, Yongjun; Hu, Yuan; Li, Guoquan

    2018-01-01

    Heterogeneous wireless networks are a promising technology in next-generation wireless communication networks, and have been shown to efficiently reduce the blind areas of mobile communication and improve network coverage compared with traditional wireless communication networks. In this paper, a robust power allocation problem for a two-tier heterogeneous wireless network is formulated based on orthogonal frequency-division multiplexing technology. Under imperfect channel state information (CSI), the robust sum-rate maximization problem is built while avoiding severe cross-tier interference to the macrocell user and maintaining the minimum rate requirement of each femtocell user. To be practical, both the channel estimation errors from the femtocells to the macrocell and the link uncertainties of each femtocell user are simultaneously considered in terms of outage probabilities of users. The optimization problem is analyzed under no CSI feedback, with some cumulative distribution function, and under partial CSI, with a Gaussian distribution of the channel estimation error. The robust optimization problem is converted into a convex optimization problem, which is solved using Lagrange dual theory and a subgradient algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm and show the impact of channel uncertainties on system performance. PMID:29466315

  14. Increased cardiac output elicits higher V̇O2max in response to self-paced exercise.

    PubMed

    Astorino, Todd Anthony; McMillan, David William; Edmunds, Ross Montgomery; Sanchez, Eduardo

    2015-03-01

    Recently, a self-paced protocol demonstrated higher maximal oxygen uptake versus the traditional ramp protocol. The primary aim of the current study was to further explore potential differences in maximal oxygen uptake between the ramp and self-paced protocols using simultaneous measurement of cardiac output. Active men and women of various fitness levels (N = 30, mean age = 26.0 ± 5.0 years) completed 3 graded exercise tests separated by a minimum of 48 h. Participants initially completed progressive ramp exercise to exhaustion to determine maximal oxygen uptake followed by a verification test to confirm maximal oxygen uptake attainment. Over the next 2 sessions, they performed a self-paced and an additional ramp protocol. During exercise, gas exchange data were obtained using indirect calorimetry, and thoracic impedance was utilized to estimate hemodynamic function (stroke volume and cardiac output). One-way ANOVA with repeated measures was used to determine differences in maximal oxygen uptake and cardiac output between ramp and self-paced testing. Results demonstrated lower (p < 0.001) maximal oxygen uptake via the ramp (47.2 ± 10.2 mL·kg(-1)·min(-1)) versus the self-paced (50.2 ± 9.6 mL·kg(-1)·min(-1)) protocol, with no interaction (p = 0.06) seen for fitness level. Maximal heart rate and cardiac output (p = 0.02) were higher in the self-paced protocol versus ramp exercise. In conclusion, data show that the traditional ramp protocol may underestimate maximal oxygen uptake compared with a newly developed self-paced protocol, with a greater cardiac output potentially responsible for this outcome.

  15. Renal Perfusion in Scleroderma Patients Assessed by Microbubble-Based Contrast-Enhanced Ultrasound

    PubMed Central

    Kleinert, Stefan; Roll, Petra; Baumgaertner, Christian; Himsel, Andrea; Mueller, Adelheid; Fleck, Martin; Feuchtenberger, Martin; Jenett, Manfred; Tony, Hans-Peter

    2012-01-01

    Objectives: Renal damage is common in scleroderma. It can occur acutely or chronically. Renal reserve might already be impaired before it can be detected by laboratory findings. Microbubble-based contrast-enhanced ultrasound has been demonstrated to improve blood perfusion imaging in organs. Therefore, we conducted a study to assess renal perfusion in scleroderma patients utilizing this novel technique. Materials and Methodology: Microbubble-based contrast agent was infused and destroyed using a high mechanical index on a Siemens Sequoia (curved array, 4.5 MHz). Replenishment was recorded for 8 seconds. Regions of interest (ROI) were analyzed in renal parenchyma, interlobular artery and renal pyramid with quantitative contrast software (CUSQ 1.4, Siemens Acuson, Mountain View, California). Time to maximal enhancement (TmE), maximal enhancement (mE) and maximal enhancement relative to the maximal enhancement of the interlobular artery (mE%A) were calculated for the different ROIs. Results: There was a linear correlation between the time to maximal enhancement in the parenchyma and the glomerular filtration rate. However, the other parameters did not reveal significant differences between scleroderma patients and healthy controls. Conclusion: Renal perfusion of scleroderma patients, including the glomerular filtration rate, can be assessed using microbubble-based contrast media. PMID:22670165

  16. Model-Based Clustering of Regression Time Series Data via APECM -- An AECM Algorithm Sung to an Even Faster Beat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wei-Chen; Maitra, Ranjan

    2011-01-01

    We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.

  17. On Health Education Becoming a Pedagogy of Global Health.

    ERIC Educational Resources Information Center

    Rittman, Joseph

    1987-01-01

    A review of the status and problems of international health education considers the effects of the economy on health expenditures and problems and the extent of education. Health education can begin to achieve greater bases for global health when it educates maximal health care users of counterproductive expenditures for health in the United…

  18. Over the Horizon: Potential Impact of Emerging Trends in Information and Communication Technology on Disability Policy and Practice

    ERIC Educational Resources Information Center

    Vanderheiden, Gregg

    2006-01-01

    This policy paper explores key trends in information and communication technology, highlights the potential opportunities and problems these trends present for people with disabilities, and suggests some strategies to maximize opportunities and avoid potential problems and barriers. Specifically, this paper discusses technology trends that present…

  19. Results and Implications of a Problem-Solving Treatment Program for Obesity.

    ERIC Educational Resources Information Center

    Mahoney, B. K.; And Others

    Data are from a large scale experimental study which was designed to evaluate a multimethod problem solving approach to obesity. Obese adult volunteers (N=90) were randomly assigned to three groups: maximal treatment, minimal treatment, and no treatment control. In the two treatment groups, subjects were exposed to bibliographic material and…

  20. Lightweight, High Performance, Low Cost Propulsion Systems for Mars Exploration Missions to Maximize Science Payload

    NASA Astrophysics Data System (ADS)

    Trinh, H. P.

    2012-06-01

    Utilization of new cold hypergolic propellants and leveraging of Missile Defense Agency technology for propulsion systems on Mars exploration missions will provide an increase in science payload and have significant payoffs and benefits for NASA missions.

  1. Genetic variation in the USDA Chamaecrista fasciculata collection

    USDA-ARS?s Scientific Manuscript database

    Germplasm collections serve as critical repositories of genetic variation. Characterizing genetic diversity in existing collections is necessary to maximize their utility and to guide future collecting efforts. We have used AFLP markers to characterize genetic variation in the USDA germplasm collect...

  2. An Investigation of Generalized Differential Evolution Metaheuristic for Multiobjective Optimal Crop-Mix Planning Decision

    PubMed Central

    Olugbara, Oludayo

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms—being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369

  3. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    PubMed

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms-being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.

  4. Runway Operations Planning: A Two-Stage Solution Methodology

    NASA Technical Reports Server (NTRS)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, ROP is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans was previously approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, ATC constraints) at the same time. Since, however, at any given point in time there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces ROP as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the ROP problem. Specific focus is given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft by solving an integer program. Preliminary results from the algorithm implementation on real-world traffic data are included.

  5. Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model

    PubMed Central

    Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong

    2014-01-01

    Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. A node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem of providing multiple sensing services while maximizing the network lifetime. We first introduce how to model the data correlation of different services by using a Markov Random Field (MRF) model. Second, we formulate the service-oriented node scheduling issue as three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, which concerns selecting a number of active nodes and determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Third, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005

  6. A bicriteria heuristic for an elective surgery scheduling problem.

    PubMed

    Marques, Inês; Captivo, M Eugénia; Vaz Pato, Margarida

    2015-09-01

    Resource rationalization and reduction of waiting lists for surgery are two main guidelines for hospital units outlined in the Portuguese National Health Plan. This work is dedicated to an elective surgery scheduling problem arising in a Lisbon public hospital. In order to increase the surgical suite's efficiency and to reduce the waiting lists for surgery, two objectives are considered: maximize surgical suite occupation and maximize the number of surgeries scheduled. This elective surgery scheduling problem consists of assigning an intervention date, an operating room and a starting time for elective surgeries selected from the hospital waiting list. Accordingly, a bicriteria surgery scheduling problem arising in the hospital under study is presented. To search for efficient solutions of the bicriteria optimization problem, the minimization of a weighted Chebyshev distance to a reference point is used. A constructive and improvement heuristic procedure specially designed to address the objectives of the problem is developed and results of computational experiments obtained with empirical data from the hospital are presented. This study shows that by using the bicriteria approach presented here it is possible to build surgical plans with very good performance levels. This method can be used within an interactive approach with the decision maker. It can also be easily adapted to other hospitals with similar scheduling conditions.
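
    The weighted Chebyshev scalarization used to search for efficient solutions can be sketched as below; the candidate plans, reference point, and weights are hypothetical, and the actual procedure embeds this distance inside a constructive and improvement heuristic rather than a simple enumeration.

    ```python
    import numpy as np

    def chebyshev_distance(objectives, reference, weights):
        """Weighted Chebyshev (L-infinity) distance to a reference point.
        Both criteria are maximized, so the reference is an ideal
        (upper-bound) point and deviations are reference - value."""
        dev = np.asarray(reference) - np.asarray(objectives)
        return float(np.max(np.asarray(weights) * dev))

    # Hypothetical candidate plans: (occupation %, surgeries scheduled)
    plans = {"A": (92.0, 41), "B": (88.0, 47), "C": (95.0, 38)}
    ideal = (100.0, 50)            # assumed upper bounds on the two criteria
    w = (1.0 / 100.0, 1.0 / 50.0)  # scale the criteria comparably

    best = min(plans, key=lambda k: chebyshev_distance(plans[k], ideal, w))
    print(best)  # plan closest to the ideal point -> "B"
    ```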

  7. Blood detection in wireless capsule endoscopy using expectation maximization clustering

    NASA Astrophysics Data System (ADS)

    Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.

    2006-03-01

    Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies, such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy, can visualize the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. Over an approximately 8-hour course, more than 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination. It has been reported that the sensitivity and specificity of SBI are about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves a sensitivity of 92% and a specificity of 98%.
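
    EM-based pixel clustering of this kind can be sketched with an off-the-shelf Gaussian mixture model; the frame below is synthetic, and the rule for deciding which cluster represents blood is an assumption, not the paper's exact feature set.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for a decoded WCE video frame (H x W x 3, RGB).
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

    pixels = frame.reshape(-1, 3).astype(float)

    # Two-component EM clustering: one component intended for reddish
    # (blood-like) pixels, the other for background mucosa.
    gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gm.fit_predict(pixels)

    # Heuristic (assumed) labeling rule: the component whose mean has the
    # higher red-to-green ratio is called the "blood" cluster.
    ratios = gm.means_[:, 0] / (gm.means_[:, 1] + 1e-9)
    blood_mask = (labels == ratios.argmax()).reshape(64, 64)
    print(blood_mask.mean())  # fraction of pixels flagged
    ```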

  8. An Elegant Sufficiency: Load-Aware Differentiated Scheduling of Data Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kettimuthu, Rajkumar; Vardoyan, Gayane; Agrawal, Gagan

    2015-11-15

    We investigate the file transfer scheduling problem, where transfers among different endpoints must be scheduled to maximize pertinent metrics. We propose two new algorithms that exploit the fact that the aggregate bandwidth obtained over a network or at a storage system tends to increase with the number of concurrent transfers, but only up to a certain limit. The first algorithm, SEAL, uses runtime information and data-driven models to approximate system load and adapt transfer schedules and concurrency so as to maximize performance while avoiding saturation. We implement this algorithm using GridFTP as the transfer protocol and evaluate it using real transfer logs in a production WAN environment. Results show that SEAL can improve average slowdowns and turnaround times by up to 25% and worst-case slowdown and turnaround times by up to 50%, compared with the best-performing baseline scheme. Our second algorithm, STEAL, further leverages user-supplied categorization of transfers as either “interactive” (requiring immediate processing) or “batch” (less time-critical). Results show that STEAL reduces the average slowdown of interactive transfers by 63% compared to the best-performing baseline and by 21% compared to SEAL. For batch transfers, compared to the best-performing baseline, STEAL improves by 18% the utilization of the bandwidth unused by interactive transfers. By elegantly ensuring a sufficient, but not excessive, allocation of concurrency to the right transfers, we significantly improve overall performance despite constraints.
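
    The load-aware core of SEAL, raising concurrency only while the marginal throughput gain justifies it, can be sketched as a simple probe loop; the probe function and threshold are hypothetical, and the real system relies on runtime logs and data-driven models rather than direct probing.

    ```python
    # Minimal sketch: increase transfer concurrency until the aggregate
    # throughput stops improving by at least `min_gain` (saturation).
    def tune_concurrency(measure_throughput, max_conc=32, min_gain=0.05):
        best_conc, best_tput = 1, measure_throughput(1)
        for conc in range(2, max_conc + 1):
            tput = measure_throughput(conc)
            if tput < best_tput * (1.0 + min_gain):
                break  # extra streams no longer pay off
            best_conc, best_tput = conc, tput
        return best_conc

    # Toy saturating throughput model standing in for real measurements.
    print(tune_concurrency(lambda c: 100 * (1 - 0.7 ** c)))
    ```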

  9. A maximally selected test of symmetry about zero.

    PubMed

    Laska, Eugene; Meisner, Morris; Wanderling, Joseph

    2012-11-20

    The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre and post safety measures are obtained, then under the null hypothesis, the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on-road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
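
    A minimal sketch of the statistic, assuming hypothetical pre-post differences; the critical values come from the exact distribution tabulated in the paper, which is not reproduced here.

    ```python
    import numpy as np

    def max_mcnemar(d):
        """Maximally selected McNemar statistic for symmetry about zero.
        At each data-defined threshold t, observations with |d| <= t are
        discarded and McNemar's chi-square statistic is computed from the
        counts of positive and negative remaining differences."""
        d = np.asarray(d, dtype=float)
        stats = []
        for t in np.unique(np.abs(d)):
            pos = np.sum(d > t)
            neg = np.sum(d < -t)
            if pos + neg > 0:
                stats.append((pos - neg) ** 2 / (pos + neg))
        return max(stats)

    rng = np.random.default_rng(1)
    pre_post_diff = rng.normal(0.3, 1.0, size=100)  # hypothetical safety scores
    print(max_mcnemar(pre_post_diff))
    ```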

  10. Melioration as rational choice: sequential decision making in uncertain environments.

    PubMed

    Sims, Chris R; Neth, Hansjörg; Jacobs, Robert A; Gray, Wayne D

    2013-01-01

    Melioration, defined as choosing a lesser, local gain over a greater long-term gain, is a behavioral tendency that people and pigeons share. As such, the empirical occurrence of meliorating behavior has frequently been interpreted as evidence that the mechanisms of human choice violate the norms of economic rationality. In some environments, the relationship between actions and outcomes is known. In this case, the rationality of choice behavior can be evaluated in terms of how successfully it maximizes utility given knowledge of the environmental contingencies. In most complex environments, however, the relationship between actions and future outcomes is uncertain and must be learned from experience. When the difficulty of this learning challenge is taken into account, it is not evident that melioration represents suboptimal choice behavior. In the present article, we examine human performance in a sequential decision-making experiment that is known to induce meliorating behavior. In keeping with previous results using this paradigm, we find that the majority of participants in the experiment fail to adopt the optimal decision strategy and instead demonstrate a significant bias toward melioration. To explore the origins of this behavior, we develop a rational analysis (Anderson, 1990) of the learning problem facing individuals in uncertain decision environments. Our analysis demonstrates that an unbiased learner would adopt melioration as the optimal response strategy for maximizing long-term gain. We suggest that many documented cases of melioration can be reinterpreted not as irrational choice but rather as globally optimal choice under uncertainty.

  11. HPLC studio: a novel software utility to perform HPLC chromatogram comparison for screening purposes.

    PubMed

    García, J B; Tormo, José R

    2003-06-01

    A new tool, HPLC Studio, was developed for the comparison of high-performance liquid chromatography (HPLC) chromatograms from microbial extracts. The new utility makes it possible to create a virtual chromatogram by mixing up to 20 individual chromatograms. The virtual chromatogram is the first step in establishing a ranking of the microbial fermentation conditions based on either the area or diversity of HPLC peaks. The utility was used to maximize the diversity of secondary metabolites tested from a microorganism and therefore increase the chances of finding new lead compounds in a drug discovery program.

  12. Network approach for decision making under risk—How do we choose among probabilistic options with the same expected value?

    PubMed Central

    Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, such theories are ambiguous about how one chooses among options that have the same expected utility. A network approach is proposed that introduces ‘goal’ and ‘time’ factors to reduce this ambiguity, yielding strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation is revealed that explains the apparently irrational behavior of choosing an option with a lower expected utility, which could imply that humans possess rationality in foresight. PMID:29702665

  13. Network approach for decision making under risk-How do we choose among probabilistic options with the same expected value?

    PubMed

    Pan, Wei; Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, such theories are ambiguous about how one chooses among options that have the same expected utility. A network approach is proposed that introduces 'goal' and 'time' factors to reduce this ambiguity, yielding strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation is revealed that explains the apparently irrational behavior of choosing an option with a lower expected utility, which could imply that humans possess rationality in foresight.

  14. Method and system for controlling a gasification or partial oxidation process

    DOEpatents

    Rozelle, Peter L; Der, Victor K

    2015-02-10

    A method and system for controlling a fuel gasification system includes optimizing the conversion of solid fuel components to gaseous fuel components, controlling the flux of solids entrained in the product gas through equipment downstream of the gasifier, and maximizing the overall efficiency of processes utilizing gasification. When utilized together, a combination of models can be integrated with existing plant control systems and operating procedures and employed to develop new control systems and operating procedures. The approach is applicable to gasification systems that utilize either dry feed or slurry feed.

  15. Martian resource locations: Identification and optimization

    NASA Astrophysics Data System (ADS)

    Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam

    2005-04-01

    The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.
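
    At its core, the threshold-and-collocate style of analysis described above reduces to intersecting boolean masks over co-registered raster layers. A minimal sketch with hypothetical layers and cutoffs:

    ```python
    import numpy as np

    # Hypothetical co-registered raster layers on a common grid: a water-ice
    # signal and surface slope (the values below are synthetic).
    rng = np.random.default_rng(2)
    water_ice = rng.random((100, 100))
    slope_deg = rng.random((100, 100)) * 30

    # Threshold each layer, then collocate by intersecting the masks.
    candidate = (water_ice > 0.6) & (slope_deg < 10)
    print(candidate.sum(), "candidate cells")
    ```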

  16. Endogenous patient responses and the consistency principle in cost-effectiveness analysis.

    PubMed

    Liu, Liqun; Rettenmaier, Andrew J; Saving, Thomas R

    2012-01-01

    In addition to incurring direct treatment costs and generating direct health benefits that improve longevity and/or health-related quality of life, medical interventions often have further or "unrelated" financial and health impacts, raising the issue of what costs and effects should be included in calculating the cost-effectiveness ratio of an intervention. The "consistency principle" in medical cost-effectiveness analysis (CEA) requires that one include both the cost and the utility benefit of a change (in medical expenditures, consumption, or leisure) caused by an intervention or neither of them. By distinguishing between exogenous changes directly brought about by an intervention and endogenous patient responses to the exogenous changes, and within a lifetime utility maximization framework, this article addresses 2 questions related to the consistency principle: 1) how to choose among alternative internally consistent exclusion/inclusion rules, and 2) what to do with survival consumption costs and earnings. It finds that, for an endogenous change, excluding or including both the cost and the utility benefit of the change does not alter cost-effectiveness results. Further, in agreement with the consistency principle, welfare maximization implies that consumption costs and earnings during the extended life directly caused by an intervention should be included in CEA.

  17. Achieving Congestion Mitigation Using Distributed Power Control for Spectrum Sensor Nodes in Sensor Network-Aided Cognitive Radio Ad Hoc Networks

    PubMed Central

    Zhuo, Fan; Duan, Hucai

    2017-01-01

    The data sequence of spectrum sensing results injected from dedicated spectrum sensor nodes (SSNs) and the data traffic from upstream secondary users (SUs) lead to unpredictable data loads in a sensor network-aided cognitive radio ad hoc network (SN-CRN). As a result, network congestion may occur at an SU acting as a fusion center when the offered data load exceeds its available capacity, which degrades network performance. In this paper, we present an effective approach to mitigate congestion at bottlenecked SUs via a distributed power control framework for SSNs over a rectangular-grid-based SN-CRN, aiming to balance resource load and avoid excessive congestion. To achieve this goal, a distributed power control framework for SSNs in the interior tier (IT) and middle tier (MT) is proposed to trade off channel capacity against energy consumption. In particular, we first devise two pricing factors by considering the stability of local spectrum sensing and the spectrum sensing quality of SSNs. With the aid of these pricing factors, the utility function of the power control problem is formulated by jointly taking into account the revenue of power reduction and the cost of energy consumption for an IT or MT SSN. Considering the utility maximization objective and a linear differential-equation constraint on energy consumption, we further formulate the power control problem as a differential game model under cooperative and noncooperative scenarios, and rigorously obtain the optimal solutions of this game model by employing dynamic programming. Congestion mitigation for bottlenecked SUs is then achieved by alleviating the load on their internal buffers. Simulation results show the effectiveness of the proposed approach under the rectangular-grid-based SN-CRN scenario. PMID:28914803

  18. Optimal rail container shipment planning problem in multimodal transportation

    NASA Astrophysics Data System (ADS)

    Cao, Chengxuan; Gao, Ziyou; Li, Keping

    2012-09-01

    The optimal rail container shipment planning problem in multimodal transportation is studied in this article. The characteristics of the multi-period planning problem are presented and the problem is formulated as a large-scale 0-1 integer programming model, which maximizes the total profit generated by all freight bookings accepted over a multi-period planning horizon subject to limited capacities. Two heuristic algorithms are proposed to obtain an approximate optimal solution of the problem. Finally, numerical experiments are conducted to demonstrate the proposed formulation and heuristic algorithms.
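
    A minimal sketch of the 0-1 acceptance model using the open-source PuLP library; the bookings, profits, container counts, and per-period capacities are hypothetical, and the paper's full formulation is far larger, hence its heuristic algorithms.

    ```python
    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

    bookings = {  # booking id: (profit, containers, period) -- hypothetical
        "b1": (120, 10, 0), "b2": (90, 8, 0),
        "b3": (150, 12, 1), "b4": (60, 5, 1),
    }
    capacity = {0: 15, 1: 14}  # slot capacity per planning period

    x = {b: LpVariable(f"x_{b}", cat="Binary") for b in bookings}

    model = LpProblem("rail_booking_acceptance", LpMaximize)
    model += lpSum(bookings[b][0] * x[b] for b in bookings)  # total profit
    for t, cap in capacity.items():
        model += lpSum(bookings[b][1] * x[b]
                       for b in bookings if bookings[b][2] == t) <= cap

    model.solve(PULP_CBC_CMD(msg=0))
    print({b: int(x[b].value()) for b in bookings})  # accepted bookings
    ```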

  19. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
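
    The classical deterministic-annealing EM that motivates DQAEM tempers the responsibilities by an inverse temperature and anneals toward the standard EM update. A minimal one-dimensional sketch of that classical analogue (not the quantum algorithm itself), with synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

    mu = np.array([-0.5, 0.5])        # deliberately poor initialization
    sigma, pi = np.ones(2), np.full(2, 0.5)

    for T in [8.0, 4.0, 2.0, 1.0]:    # annealing schedule ending at T = 1
        for _ in range(50):           # EM sweeps at this temperature
            logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma**2)
                    - (x[:, None] - mu) ** 2 / (2 * sigma**2))
            w = np.exp(logp / T)      # tempered component weights
            r = w / w.sum(axis=1, keepdims=True)   # responsibilities
            nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            pi = nk / len(x)

    print(mu)  # should recover means near -2 and 3
    ```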

  20. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer

    PubMed Central

    Yu, Hongyan; Zhang, Yongqiang; Yang, Yuanyuan; Ji, Luyue

    2017-01-01

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to the constraints of minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we formulate corresponding non-convex optimization problems that take into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of these non-convex problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithms achieve optimal solutions within a small number of iterations and exhibit various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively. PMID:28820496
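
    The nonlinear fractional programming step can be illustrated with a Dinkelbach-style iteration on a stylized single-link energy-efficiency problem; the channel gain, circuit power, and power cap below are hypothetical, and the paper's actual problems add QoS, harvested-energy, and SWIPT-specific constraints.

    ```python
    import numpy as np

    # Maximize R(p) / (p + Pc) with R(p) = log2(1 + g*p), 0 <= p <= pmax.
    g, Pc, pmax = 2.0, 0.5, 4.0
    rate = lambda p: np.log2(1 + g * p)

    q = 0.0  # current energy-efficiency guess (bits per joule)
    for _ in range(30):
        # Inner problem max_p R(p) - q*(p + Pc) has a closed-form optimum.
        p = np.clip(1.0 / (q * np.log(2)) - 1.0 / g if q > 0 else pmax,
                    0.0, pmax)
        F = rate(p) - q * (p + Pc)
        q = rate(p) / (p + Pc)
        if abs(F) < 1e-9:  # Dinkelbach stopping rule
            break

    print(p, q)  # optimal transmit power and energy efficiency
    ```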

  1. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer.

    PubMed

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-08-18

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to the constraints of minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we formulate corresponding non-convex optimization problems that take into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of these non-convex problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithms achieve optimal solutions within a small number of iterations and exhibit various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively.

  2. Polynomial algorithms for the Maximal Pairing Problem: efficient phylogenetic targeting on arbitrary trees

    PubMed Central

    2010-01-01

    Background The Maximal Pairing Problem (MPP) is the prototype of a class of combinatorial optimization problems that are of considerable interest in bioinformatics: Given an arbitrary phylogenetic tree T and weights ω_xy for the paths between any two pairs of leaves (x, y), what is the collection of edge-disjoint paths between pairs of leaves that maximizes the total weight? Special cases of the MPP for binary trees and equal weights have been described previously; algorithms to solve the general MPP are still missing, however. Results We describe a relatively simple dynamic programming algorithm for the special case of binary trees. We then show that the general case of multifurcating trees can be treated by interleaving solutions to certain auxiliary Maximum Weighted Matching problems with an extension of this dynamic programming approach, resulting in an overall polynomial-time solution of complexity O(n^4 log n) w.r.t. the number n of leaves. The source code of a C implementation can be obtained under the GNU Public License from http://www.bioinf.uni-leipzig.de/Software/Targeting. For binary trees, we furthermore discuss several constrained variants of the MPP as well as a partition function approach to the probabilistic version of the MPP. Conclusions The algorithms introduced here make it possible to solve the MPP also for large trees with high-degree vertices. This has practical relevance in the field of comparative phylogenetics and, for example, in the context of phylogenetic targeting, i.e., data collection with resource limitations. PMID:20525185
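
    The auxiliary Maximum Weighted Matching step can be delegated to an off-the-shelf solver; the sketch below uses networkx with hypothetical leaf pairs and path weights, and omits the tree dynamic programming that the full algorithm interleaves with it.

    ```python
    import networkx as nx

    # Hypothetical stand-ins for the path weights omega_xy of the text.
    path_weights = {("a", "b"): 5, ("a", "c"): 2, ("b", "d"): 4, ("c", "d"): 6}

    G = nx.Graph()
    for (x, y), w in path_weights.items():
        G.add_edge(x, y, weight=w)

    matching = nx.max_weight_matching(G)  # set of vertex-disjoint pairs
    print(matching, sum(G[u][v]["weight"] for u, v in matching))
    ```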

  3. Optimal throughput for cognitive radio with energy harvesting in fading wireless channel.

    PubMed

    Vu-Van, Hiep; Koo, Insoo

    2014-01-01

    Energy resource management is a crucial problem of a device with a finite capacity battery. In this paper, cognitive radio is considered to be a device with an energy harvester that can harvest energy from a non-RF energy resource while performing other actions of cognitive radio. Harvested energy will be stored in a finite capacity battery. At the start of the time slot of cognitive radio, the radio needs to determine if it should remain silent or carry out spectrum sensing based on the idle probability of the primary user and the remaining energy in order to maximize the throughput of the cognitive radio system. In addition, optimal sensing energy and adaptive transmission power control are also investigated in this paper to effectively utilize the limited energy of cognitive radio. Finding an optimal approach is formulated as a partially observable Markov decision process. The simulation results show that the proposed optimal decision scheme outperforms the myopic scheme in which current throughput is only considered when making a decision.

  4. Entry Debris Field Estimation Methods and Application to Compton Gamma Ray Observatory Disposal

    NASA Technical Reports Server (NTRS)

    Mrozinski, Richard B.

    2001-01-01

    For public safety reasons, the Compton Gamma Ray Observatory (CGRO) was intentionally deorbited on June 4, 2000. This deorbit was NASA's first intentional controlled deorbit of a satellite, and more will come including the eventual deorbit of the International Space Station. To maximize public safety, satellite deorbit planning requires conservative estimates of the debris footprint size and location. These estimates are needed to properly design a deorbit sequence that places the debris footprint over unpopulated areas, including protection for deorbit contingencies. This paper details a method for estimating the length (range), width (crossrange), and location of entry and breakup debris footprints. This method utilizes a three degree-of-freedom Monte Carlo simulation incorporating uncertainties in all aspects of the problem, including vehicle and environment uncertainties. The method incorporates a range of debris characteristics based on historical data in addition to any vehicle-specific debris catalog information. This paper describes the method in detail, and presents results of its application as used in planning the deorbit of the CGRO.

  5. Scene text detection by leveraging multi-channel information and local context

    NASA Astrophysics Data System (ADS)

    Wang, Runmin; Qian, Shengyou; Yang, Jianfeng; Gao, Changxin

    2018-03-01

    As an important information carrier, text plays a significant role in many applications. However, text detection in unconstrained scenes is a challenging problem due to cluttered backgrounds, varied appearance, uneven illumination, etc. In this paper, an approach based on multi-channel information and local context is proposed to detect text in natural scenes. Because character candidate detection plays a vital role in a text detection system, Maximally Stable Extremal Regions (MSERs) and a graph-cut based method are integrated to obtain character candidates by leveraging multi-channel image information. A cascaded false-positive elimination mechanism is constructed from the perspectives of the character and the text line, respectively. Since local context information is very valuable, this information is utilized to retrieve missing characters and boost text detection performance. Experimental results on two benchmark datasets, i.e., the ICDAR 2011 dataset and the ICDAR 2013 dataset, demonstrate that the proposed method achieves state-of-the-art performance.
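
    The MSER component of the character-candidate stage is available off the shelf in OpenCV; a minimal sketch with a hypothetical input image path, stopping before the graph-cut integration and the cascaded false-positive elimination described above.

    ```python
    import cv2

    img = cv2.imread("scene.jpg")  # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)  # character candidates

    # Draw candidate bounding boxes; a real pipeline would next filter
    # false positives at the character and text-line levels.
    for (x, y, w, h) in bboxes:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv2.imwrite("candidates.png", img)
    ```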

  6. Pure Insulin Nanoparticle Agglomerates for Pulmonary Delivery

    PubMed Central

    Bailey, Mark M.; Gorman, Eric M.; Munson, Eric J.; Berkland, Cory J.

    2009-01-01

    Diabetes is a set of diseases characterized by defects in insulin utilization, either through autoimmune destruction of insulin-producing cells (Type I) or insulin resistance (Type II). Treatment options can include regular injections of insulin, which can be painful and inconvenient, often leading to low patient compliance. To overcome this problem, novel formulations of insulin are being investigated, such as inhaled aerosols. Sufficient deposition of powder in the peripheral lung to maximize systemic absorption requires precise control over particle size and density, with particles between 1 and 5 μm in aerodynamic diameter being within the respirable range. Insulin nanoparticles were produced by titrating insulin dissolved at low pH up to the pI of the native protein, and were then further processed into microparticles using solvent displacement. Particle size, crystallinity, dissolution properties, structural stability, and bulk powder density were characterized. We have demonstrated that pure drug insulin microparticles can be produced from nanosuspensions with minimal processing steps without excipients, and with suitable properties for deposition in the peripheral lung. PMID:18959432

  7. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of change in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed of grinding parameters on each value of the objective function.
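
    The lexicographic weighted Tchebycheff scalarization has a standard form in the multiobjective literature, which the model presumably instantiates; here z* is the reference (ideal) point and the w_i are positive criterion weights.

    ```latex
    \operatorname{lex\,min}_{x \in X}\;
      \Bigl( \max_{i}\, w_i \bigl| f_i(x) - z_i^{*} \bigr|,\;
             \sum_{i} \bigl| f_i(x) - z_i^{*} \bigr| \Bigr)
    ```

    The first component pulls solutions toward the reference point; the lexicographic second component breaks ties among weakly efficient solutions so that only Pareto-optimal points are returned.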

  8. Optimization in fractional aircraft ownership

    NASA Astrophysics Data System (ADS)

    Septiani, R. D.; Pasaribu, H. M.; Soewono, E.; Fayalita, R. A.

    2012-05-01

    Fractional Aircraft Ownership is a new concept in flight ownership management system where each individual or corporation may own a fraction of an aircraft. In this system, the owners have privilege to schedule their flight according to their needs. Fractional management companies (FMC) manages all aspects of aircraft operations, including utilization of FMC's aircraft in combination of outsourced aircrafts. This gives the owners the right to enjoy the benefits of private aviations. However, FMC may have complicated business requirements that neither commercial airlines nor charter airlines faces. Here, optimization models are constructed to minimize the number of aircrafts in order to maximize the profit and to minimize the daily operating cost. In this paper, three kinds of demand scenarios are made to represent different flight operations from different types of fractional owners. The problems are formulated as an optimization of profit and a daily operational cost to find the optimum flight assignments satisfying the weekly and daily demand respectively from the owners. Numerical results are obtained by Genetic Algorithm method.

  9. The HST/STIS Next Generation Spectral Library

    NASA Technical Reports Server (NTRS)

    Gregg, M. D.; Silva, D.; Rayner, J.; Worthey, G.; Valdes, F.; Pickles, A.; Rose, J.; Carney, B.; Vacca, W.

    2006-01-01

    During Cycles 10, 12, and 13, we obtained STIS G230LB, G430L, and G750L spectra of 378 bright stars covering a wide range in abundance, effective temperature, and luminosity. This HST/STIS Next Generation Spectral Library was scheduled to reach its goal of 600 targets by the end of Cycle 13 when STIS came to an untimely end. Even at 2/3 complete, the library significantly improves the sampling of stellar atmosphere parameter space compared to most other spectral libraries by including the near-UV and significant numbers of metal poor and super-solar abundance stars. Numerous calibration challenges have been encountered, some expected, some not; these arise from the use of the E1 aperture location, non-standard wavelength calibration, and, most significantly, the serious contamination of the near-UV spectra by red light. Maximizing the utility of the library depends directly on overcoming or at least minimizing these problems, especially correcting the UV spectra.

  10. Employment Trajectories: Exploring Gender Differences and Impacts of Drug Use

    PubMed Central

    Huang, David Y.C.; Evans, Elizabeth; Hara, Motoaki; Weiss, Robert E.; Hser, Yih-Ing

    2010-01-01

    This study investigated the impact of drug use on employment over 20 years among men and women, utilizing data on 7,661 participants in the National Longitudinal Survey of Youth. Growth mixture modeling was applied, and five distinct employment trajectory groups were identified for both men and women. The identified patterns were largely similar for men and women except that a U-shape employment trajectory was uniquely identified for women. Early-initiation drug users, users of “hard” drugs, and frequent drug users were more likely to demonstrate consistently low levels of employment, and the negative relationship between drug use and employment was more apparent among men than women. Also, positive associations between employment and marriage became more salient for men over time, as did negative associations between employment and childrearing among women. Processes are dynamic and complex, suggesting that throughout the life course, protective factors that reduce the risk of employment problems emerge and change, as do critical periods for maximizing the impact of drug prevention and intervention efforts. PMID:21765533

  11. Novel and general approach to linear filter design for contrast-to-noise ratio enhancement of magnetic resonance images with multiple interfering features in the scene

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.

    1992-04-01

    Maximizing the minimum absolute contrast-to-noise ratio (CNR) between a desired feature and multiple interfering processes, by linear combination of the images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions first for the case of two interfering features, then for three interfering features, and finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its application to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement in the smallest absolute CNR is obtained.
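
    The Gram-Schmidt construction at the heart of the method can be sketched directly in numpy; the signature vectors below (one entry per image in the scene sequence) are hypothetical.

    ```python
    import numpy as np

    desired = np.array([1.0, 0.4, 0.8])          # desired feature signature
    interferers = [np.array([0.9, 1.0, 0.2]),    # interfering feature 1
                   np.array([0.1, 0.7, 1.0])]    # interfering feature 2

    # Orthonormalize the interferer signatures, then remove their span from
    # the desired signature; the remainder is the linear filter weight vector.
    basis = []
    for v in interferers:
        for b in basis:
            v = v - (v @ b) * b
        basis.append(v / np.linalg.norm(v))

    w = desired.copy()
    for b in basis:
        w = w - (w @ b) * b

    # The filter passes the desired feature and nulls the interferers.
    print(w @ desired, [float(w @ u) for u in interferers])
    ```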

  12. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.

  13. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    NASA Astrophysics Data System (ADS)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    Active contour models (ACMs) have been among the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of their ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data; that is, the information used in ACM-based segmentation is extracted from a single slice of the MR brain image, which cannot take full advantage of information in adjacent slices and is ill-suited to local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem. It is based on a multi-variate local Gaussian distribution and combines information from adjacent slices of the MR brain image data. Segmentation is finally achieved by maximizing the likelihood estimate. Experiments demonstrate the advantages of the proposed ACM over a single-slice ACM in local segmentation of MR brain image series.

  14. Multi-source recruitment strategies for advancing addiction recovery research beyond treated samples

    PubMed Central

    Subbaraman, Meenakshi Sabina; Laudet, Alexandre B.; Ritter, Lois A.; Stunz, Aina; Kaskutas, Lee Ann

    2014-01-01

    Background The lack of established sampling frames makes reaching individuals in recovery from substance problems difficult. Although general population studies are most generalizable, the low prevalence of individuals in recovery makes this strategy costly and inefficient. Though more efficient, treatment samples are biased. Aims To describe multi-source recruitment for capturing participants from heterogeneous pathways to recovery; assess which sources produced the most respondents within subgroups; and compare treatment and non-treatment samples to address generalizability. Results Family/friends, Craigslist, social media and non-12-step groups produced the most respondents from hard-to-reach groups, such as racial minorities and treatment-naïve individuals. Recovery organizations yielded twice as many African-Americans and more rural dwellers, while social media yielded twice as many young people than other sources. Treatment samples had proportionally fewer females and older individuals compared to non-treated samples. Conclusions Future research on recovery should utilize previously neglected recruiting strategies to maximize the representativeness of samples. PMID:26166909

  15. Hierarchical trie packet classification algorithm based on expectation-maximization clustering

    PubMed Central

    Bi, Xia-an; Zhao, Junxia

    2017-01-01

    With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, research on packet classification algorithms based on the hierarchical trie has become an important branch because of its wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). First, the paper uses a formalization method to treat the packet classification problem by mapping the rules and data packets into a two-dimensional space. Second, it uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Third, it proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, the paper conducts simulation experiments and real-environment experiments to compare the performance of the algorithm with other typical algorithms, and analyzes the results. The hierarchical trie structure in this algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm. PMID:28704476

  16. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate the semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information about semantic correspondence from a set of real correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn a personalized correspondence based on a maximal-likelihood frontal-face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn the correspondence between faces in different poses, even in complex, unconstrained environments such as the Labeled Faces in the Wild database.

  17. Scheduling in the Face of Uncertain Resource Consumption and Utility

    NASA Technical Reports Server (NTRS)

    Koga, Dennis (Technical Monitor); Frank, Jeremy; Dearden, Richard

    2003-01-01

    We discuss the problem of scheduling tasks that consume a resource with known capacity and where the tasks have varying utility. We consider problems in which the resource consumption and utility of each activity is described by probability distributions. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We first show that while some of these problems are NP-complete, others are only NP-Hard. We then describe various heuristic search algorithms to solve these problems and their drawbacks. Finally, we present empirical results that characterize the behavior of these heuristics over a variety of problem classes.
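
    Checking whether a fixed schedule exceeds a lower bound on expected utility is straightforward by Monte Carlo; a minimal sketch with hypothetical task distributions, utilities, and resource capacity (the heuristics above search over schedules, which this sketch does not).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # (mean consumption, std dev, utility) per task -- hypothetical values
    tasks = [(3.0, 1.0, 10.0), (5.0, 2.0, 8.0), (4.0, 1.5, 6.0)]
    capacity = 11.0

    def run_once():
        used, utility = 0.0, 0.0
        for mean, sd, u in tasks:
            used += max(0.0, rng.normal(mean, sd))
            if used > capacity:
                break  # resource exhausted; later tasks yield nothing
            utility += u
        return utility

    samples = np.array([run_once() for _ in range(10000)])
    print(samples.mean(), (samples >= 18.0).mean())  # E[U], P(U >= bound)
    ```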

  18. Improving the Flexibility of Optimization-Based Decision Aiding Frameworks for Integrated Water Resource Management

    NASA Astrophysics Data System (ADS)

    Guillaume, J. H.; Kasprzyk, J. R.

    2013-12-01

    Deep uncertainty refers to situations in which stakeholders cannot agree on the full suite of risks to their system or on their probabilities. Additionally, systems are often managed for multiple, conflicting objectives such as minimizing cost, maximizing environmental quality, and maximizing hydropower revenues. Many-objective analysis (MOA) uses a quantitative model combined with evolutionary optimization to provide a tradeoff set of potential solutions to a planning problem. However, MOA is often performed using a single, fixed problem conceptualization. Focus on the development of a single formulation can introduce an "inertia" into the problem solution, such that issues outside the initial formulation are less likely to ever be addressed. This study uses the Iterative Closed Question Methodology (ICQM) to continuously reframe the optimization problem, providing iterative definition and reflection for stakeholders. By using a series of directed questions to look beyond a problem's existing modeling representation, ICQM seeks to provide a working environment within which it is easy to modify the motivating question, assumptions, and model identification in optimization problems. The new approach helps identify and reduce bottlenecks, introduced by properties of both the simulation model and the optimization approach, that reduce flexibility in the generation and evaluation of alternatives. It can therefore help introduce new perspectives on the resolution of conflicts between objectives. The Lower Rio Grande Valley portfolio planning problem is used as a case study.

  19. Feasibility Study of Coal Gasification/Fuel Cell/Cogeneration Project. Scranton, Pennsylvania Site. Project Description,

    DTIC Science & Technology

    1985-11-01

    The plant will be arranged to maximize thermal output and will meet the criteria of the Public Utilities Regulatory Policies Act (PURPA) for recognition as a "Qualifying Facility" (QF). PURPA is administered by the Federal Energy Regulatory Commission (FERC). GFC emissions and visual effects are also assessed in the project description.

  20. Global Snow from Space: Development of a Satellite-based, Terrestrial Snow Mission Planning Tool

    NASA Astrophysics Data System (ADS)

    Forman, B. A.; Kumar, S.; LeMoigne, J.; Nag, S.

    2017-12-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical permutation. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary - or perhaps contradictory - information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? How might observations be coordinated (in space and time) to maximize this utility? What is the additional utility associated with an additional observation? How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  1. Towards the Development of a Global, Satellite-based, Terrestrial Snow Mission Planning Tool

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Kumar, Sujay; Le Moigne, Jacqueline; Nag, Sreeja

    2017-01-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical orbital configuration. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary, or perhaps contradictory, information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: 1. What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? 2. How might observations be coordinated (in space and time) to maximize utility? 3. What is the additional utility associated with an additional observation? 4. How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  2. Towards the Development of a Global, Satellite-Based, Terrestrial Snow Mission Planning Tool

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Kumar, Sujay; Le Moigne, Jacqueline; Nag, Sreeja

    2017-01-01

    A global, satellite-based, terrestrial snow mission planning tool is proposed to help inform experimental mission design with relevance to snow depth and snow water equivalent (SWE). The idea leverages the capabilities of NASA's Land Information System (LIS) and the Tradespace Analysis Tool for Constellations (TAT-C) to harness the information content of Earth science mission data across a suite of hypothetical sensor designs, orbital configurations, data assimilation algorithms, and optimization and uncertainty techniques, including cost estimates and risk assessments of each hypothetical permutation. One objective of the proposed observing system simulation experiment (OSSE) is to assess the complementary or perhaps contradictory information content derived from the simultaneous collection of passive microwave (radiometer), active microwave (radar), and LIDAR observations from space-based platforms. The integrated system will enable a true end-to-end OSSE that can help quantify the value of observations based on their utility towards both scientific research and applications as well as to better guide future mission design. Science and mission planning questions addressed as part of this concept include: What observational records are needed (in space and time) to maximize terrestrial snow experimental utility? How might observations be coordinated (in space and time) to maximize this utility? What is the additional utility associated with an additional observation? How can future mission costs be minimized while ensuring Science requirements are fulfilled?

  3. Impact of Private Health Insurance on Lengths of Hospitalization and Healthcare Expenditure in India: Evidences from a Quasi-Experiment Study.

    PubMed

    Vellakkal, Sukumar

    2013-01-01

    Health insurers retrospectively administer package rates for various inpatient procedures as a provider payment mechanism for empanelled hospitals in the Indian healthcare market. This study analyzed the impact of private health insurance on healthcare utilization, in terms of both length of hospitalization and per-day hospitalization expenditure, in the Indian healthcare market where package rates are retrospectively defined as the healthcare provider payment mechanism. Claim records of 94443 insured individuals and hospitalisation data of 32665 uninsured individuals were used. By applying stepwise and propensity score matching methods, the sample of uninsured individuals was matched with the insured, and the 'average treatment effect on the treated' (ATT) was estimated. Overall, the utility-maximizing strategies of hospitals, the insured, and insurers competed with each other. However, two aligned cooperative strategies between insurers and hospitals were significant, with hospitals playing the dominant role. Hospitals maximize their utility by providing high-cost healthcare in line with the pre-defined package rates, but align with the interest of insurers by reducing the number (length) of hospitalization days. The empirical results show that private health insurance coverage leads to (i) a reduction in length of hospitalization, and (ii) an increase in per-day hospital (health) expenditure. It is necessary to regulate and develop a competent healthcare market in the country, with a proper monitoring mechanism for healthcare utilization and benchmarks for the pricing and provision of healthcare services.
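
    The propensity-score matching step can be sketched with standard tooling; the covariates, treatment indicator, and outcome below are synthetic stand-ins for the claim and hospitalization records.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(5)
    n = 2000
    X = rng.normal(size=(n, 3))                   # covariates (synthetic)
    treated = (X[:, 0] + rng.normal(size=n)) > 0  # insured indicator
    outcome = X[:, 1] + 2.0 * treated + rng.normal(size=n)

    # Estimate propensity scores, then match each treated unit to its
    # nearest control on the score.
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    controls = np.where(~treated)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = controls[idx.ravel()]

    att = outcome[treated].mean() - outcome[matched].mean()
    print(att)  # average treatment effect on the treated
    ```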

  4. Effective Fund-Raising for Non-profit Camps.

    ERIC Educational Resources Information Center

    Larson, Paula

    1998-01-01

    Identifies and describes strategies for effective fundraising: imagining the possibilities, identifying fund-raising sources, targeting fund-raising efforts, maximizing time by utilizing public relations efforts and involving staff, writing quality proposals and requests, and staying educated on fund-raising topics. Sidebars describe planned…

  5. RFC: EPA's Action Plan for Bisphenol A Pursuant to EPA's Data Quality Guidelines

    EPA Pesticide Factsheets

    The American Chemistry Council (ACC) submits this Request for Correction to the U.S. Environmental Protection Agency under the Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency

  6. Maximizing Educational Opportunity through Community Resources.

    ERIC Educational Resources Information Center

    Maradian, Steve

    In the face of increased demands and diminishing resources, educational administrators at correctional facilities should look beyond institutional resources and utilize the services of area community colleges. The community college has an established track record in correctional education. Besides the nationally recognized correctional programs…

  7. Applying Intermediate Microeconomics to Terrorism

    ERIC Educational Resources Information Center

    Anderton, Charles H.; Carter, John R.

    2006-01-01

    The authors show how microeconomic concepts and principles are applicable to the study of terrorism. The utility maximization model provides insights into both terrorist resource allocation choices and government counterterrorism efforts, and basic game theory helps characterize the strategic interdependencies among terrorists and governments.…

  8. Dishonest Academic Conduct: From the Perspective of the Utility Function.

    PubMed

    Sun, Ying; Tian, Rui

    Dishonest academic conduct has aroused extensive attention in academic circles. To explore how scholars make decisions according to the principle of maximal utility, the authors constructed a general utility function based on expected utility theory. Concrete utility functions were then deduced for three types of scholars: risk-neutral, risk-averse, and risk-preferring. Following this, an assignment method was adopted to analyze and compare the scholars' utilities of academic conduct. It was concluded that changing the values of risk costs, internal condemnation costs, academic benefits, and the subjective estimation of penalties following dishonest academic conduct can lead to changes in the utility of academic dishonesty. The results of the current study suggest that, within scientific research, measures to prevent and govern dishonest academic conduct should be formulated according to the distinct effects of these four variables.
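
    The comparison the abstract describes can be sketched for the risk-neutral case. The functional form and every parameter value below are illustrative assumptions, not the authors' calibrated utility functions.

    ```python
    # Hedged sketch of an expected-utility comparison for a risk-neutral scholar.
    def eu_dishonest(benefit, risk_cost, condemnation_cost, p_detect, penalty):
        """Expected utility of dishonest conduct: benefit minus certain costs
        minus the subjectively weighted penalty if detected."""
        return benefit - risk_cost - condemnation_cost - p_detect * penalty

    def eu_honest(benefit_honest):
        return benefit_honest

    # Raising the subjective detection probability flips the decision.
    for p in (0.05, 0.30, 0.60):
        d = eu_dishonest(benefit=10, risk_cost=1, condemnation_cost=2,
                         p_detect=p, penalty=15)
        print(f"p_detect={p:.2f}: dishonest EU={d:5.2f}, honest EU={eu_honest(6):.2f}")
    ```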

  9. Effects of lung ventilation–perfusion and muscle metabolism–perfusion heterogeneities on maximal O2 transport and utilization

    PubMed Central

    Cano, I; Roca, J; Wagner, P D

    2015-01-01

    Previous models of O2 transport and utilization in health considered diffusive exchange of O2 in lung and muscle, but, reasonably, neglected functional heterogeneities in these tissues. However, in disease, disregarding such heterogeneities would not be justified. Here, pulmonary ventilation–perfusion and skeletal muscle metabolism–perfusion mismatching were added to a prior model of only diffusive exchange. Previously ignored O2 exchange in non-exercising tissues was also included. We simulated maximal exercise in (a) healthy subjects at sea level and altitude, and (b) COPD patients at sea level, to assess the separate and combined effects of pulmonary and peripheral functional heterogeneities on overall muscle O2 uptake (V̇O2max) and on mitochondrial PO2 (PmO2). In healthy subjects at maximal exercise, the combined effects of pulmonary and peripheral heterogeneities reduced arterial PO2 at sea level by 32 mmHg, but muscle V̇O2max by only 122 ml min−1 (–3.5%). At the altitude of Mt Everest, lung and tissue heterogeneity together reduced arterial PO2 by less than 1 mmHg and V̇O2max by 32 ml min−1 (–2.4%). Skeletal muscle heterogeneity led to a wide range of potential PmO2 among muscle regions, a range that becomes narrower as V̇O2 increases; in regions with a low ratio of metabolic capacity to blood flow, PmO2 can exceed that of mixed muscle venous blood. For patients with severe COPD, peak V̇O2 was insensitive to substantial changes in the mitochondrial characteristics for O2 consumption or the extent of muscle heterogeneity. This integrative computational model of O2 transport and utilization offers the potential for estimating profiles of PmO2 both in health and in diseases such as COPD if the extent of both lung ventilation–perfusion and tissue metabolism–perfusion heterogeneity is known. PMID:25640017

  10. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search for the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
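
    A compact sketch of the selection step follows, assuming the sensitivity matrix has already been computed (here it is random; in the paper it would come from the POD-reduced groundwater model). This toy GA uses elitism and mutation only, omitting crossover and design constraints for brevity.

    ```python
    # Toy GA choosing k observation wells maximizing the sum of squared sensitivities.
    import random
    import numpy as np

    random.seed(1)
    rng = np.random.default_rng(1)
    n_sites, n_params, k = 40, 6, 8
    J = rng.normal(size=(n_sites, n_params))  # stand-in head/conductivity sensitivities

    def fitness(design):
        return float((J[sorted(design)] ** 2).sum())  # maximal-information criterion

    def mutate(design):
        out_site = random.choice(sorted(design))
        in_site = random.choice(sorted(set(range(n_sites)) - design))
        return (design - {out_site}) | {in_site}

    pop = [frozenset(random.sample(range(n_sites), k)) for _ in range(30)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
    best = max(pop, key=fitness)
    print(sorted(best), round(fitness(best), 2))
    ```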

  11. Multi-input multioutput orthogonal frequency division multiplexing radar waveform design for improving the detection performance of space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Hongyan

    2017-04-01

    This paper addresses the waveform optimization problem for improving the detection performance of multi-input multi-output (MIMO) orthogonal frequency division multiplexing (OFDM) radar-based space-time adaptive processing (STAP) in a complex environment. By maximizing the output signal-to-interference-and-noise ratio (SINR), the waveform optimization problem for improving the detection performance of STAP, subject to a constant modulus constraint, is derived. To tackle the resultant nonlinear and complicated optimization issue, a diagonal-loading-based method is proposed to reformulate it as a semidefinite programming problem, so that it can be solved very efficiently. The optimized waveform can then be obtained to maximize the output SINR of the MIMO-OFDM system, thereby improving the detection performance of STAP. The simulation results show that the proposed method can improve the output SINR considerably as compared with uncorrelated waveforms and the existing MIMO-based STAP method.
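
    The paper's diagonal-loading SDP is not reproduced here; as a point of reference, the sketch below computes the classical maximum-output-SINR receive filter via a generalized eigendecomposition on synthetic covariances, which is the baseline quantity the waveform design seeks to improve.

    ```python
    # Max-SINR filter w* = argmax (w^H R_s w)/(w^H R_in w), synthetic covariances.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    N = 8
    s = rng.normal(size=N) + 1j * rng.normal(size=N)   # target steering/waveform vector
    R_s = np.outer(s, s.conj())                        # rank-1 signal covariance
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    R_in = A @ A.conj().T + 1e-3 * np.eye(N)           # interference-plus-noise covariance

    # The largest generalized eigenpair of (R_s, R_in) gives the max-SINR filter.
    vals, vecs = eigh(R_s, R_in)
    w = vecs[:, -1]
    sinr = (w.conj() @ R_s @ w).real / (w.conj() @ R_in @ w).real
    print(f"output SINR = {sinr:.2f} (largest generalized eigenvalue {vals[-1]:.2f})")
    ```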

  12. Information Overload and Viral Marketing: Countermeasures and Strategies

    NASA Astrophysics Data System (ADS)

    Cheng, Jiesi; Sun, Aaron; Zeng, Daniel

    Studying information diffusion through social networks has become an active research topic with important implications in viral marketing applications. One of the fundamental algorithmic problems related to viral marketing is the Influence Maximization (IM) problem: given an social network, which set of nodes should be considered by the viral marketer as the initial targets, in order to maximize the influence of the advertising message. In this work, we study the IM problem in an information-overloaded online social network. Information overload occurs when individuals receive more information than they can process, which can cause negative impacts on the overall marketing effectiveness. Many practical countermeasures have been proposed for alleviating the load of information on recipients. However, how these approaches can benefit viral marketers is not well understood. In our work, we have adapted the classic Information Cascade Model to incorporate information overload and study its countermeasures. Our results suggest that effective control of information overload has the potential to improve marketing effectiveness, but the targeting strategy should be re-designed in response to these countermeasures.
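
    The greedy seeding strategy underlying most IM algorithms can be sketched in a few lines. The graph, activation probability, and simulation count below are illustrative; the paper's overload-aware cascade variant is not reproduced.

    ```python
    # Greedy influence maximization with Monte Carlo spread estimation under an
    # independent-cascade-style model; toy graph and parameters.
    import random

    random.seed(0)
    edges = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    P = 0.3      # uniform activation probability per edge
    SIMS = 500   # Monte Carlo cascades per spread estimate

    def spread(seeds):
        total = 0
        for _ in range(SIMS):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                node = frontier.pop()
                for nb in edges[node]:
                    if nb not in active and random.random() < P:
                        active.add(nb)
                        frontier.append(nb)
            total += len(active)
        return total / SIMS

    seeds, k = set(), 2
    for _ in range(k):  # greedily add the node with the largest marginal gain
        best = max((n for n in edges if n not in seeds),
                   key=lambda n: spread(seeds | {n}))
        seeds.add(best)
    print(seeds, spread(seeds))
    ```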

  13. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that a prespecified temperature distribution in the solidifying material is obtained so as to maximize product quality. The optimization uses traditional numerical programming techniques, which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm are presented for the example problem.

  14. Essay Review: College Sports since World War II

    ERIC Educational Resources Information Center

    Thelin, John

    2011-01-01

    Scholarly writing on college sports gets better as the problems of college sports get worse. This does not mean that the authors are causing the problems. Rather, the excesses and ills of college sports, past and present, provide such fertile data that historians of higher education enjoy a perverse embarrassment of research riches. This maxim is…

  15. Minimal Solutions to the Box Problem

    ERIC Educational Resources Information Center

    Chuang, Jer-Chin

    2009-01-01

    The "box problem" from introductory calculus seeks to maximize the volume of a tray formed by folding a strictly rectangular sheet from which identical squares have been cut from each corner. In posing such questions, one would like to choose integral side-lengths for the sheet so that the excised squares have rational or integral side-length.…

  16. Connecting Authentic Activities with Multimedia to Enhance Teaching and Learning, an Exemplar from Scottish History

    ERIC Educational Resources Information Center

    Hillis, Peter

    2010-01-01

    Much of the current focus on maximizing the potential of ICT to enhance teaching and learning is on learning tasks rather than the technology. These learning tasks increasingly employ a constructivist, problem-based methodology especially one based around authentic learning. The problem-based nature of history provides fertile ground for this…

  17. An Identification of Problems Relating to Federal Procurement of Library Materials Prepared for Commission on Government Procurement.

    ERIC Educational Resources Information Center

    Federal Library Committee, Washington, DC.

    The Federal Library Committee through the Task Force on Procurement Procedures in Federal Libraries is examining all problems and is recommending policies, procedures, and practices which will maximize the efficient procurement of library materials. It is suggested that the vehicle for this investigation be the appropriate Commission on Government…

  18. Undergraduate Student Task Group Approach to Complex Problem Solving Employing Computer Programming.

    ERIC Educational Resources Information Center

    Brooks, LeRoy D.

    A project formulated a computer simulation game for use as an instructional device to improve financial decision making. The author constructed a hypothetical firm, specifying its environment, variables, and a maximization problem. Students, assisted by a professor and computer consultants and having access to B5500 and B6700 facilities, held 16…

  19. The Shape of a Sausage: A Challenging Problem in the Calculus of Variations

    ERIC Educational Resources Information Center

    Deakin, Michael A. B.

    2010-01-01

    Many familiar household objects (such as sausages) involve the maximization of a volume under geometric constraints. A flexible but inextensible membrane bounds a volume which is to be filled to capacity. In the case of the sausage, a full analytic solution is here provided. Other related but more difficult problems seem to demand approximate…

  20. Optimum oil production planning using infeasibility driven evolutionary algorithm.

    PubMed

    Singh, Hemant Kumar; Ray, Tapabrata; Sarker, Ruhul

    2013-01-01

    In this paper, we discuss a practical oil production planning optimization problem. For oil wells with insufficient reservoir pressure, gas is usually injected to artificially lift oil, a practice commonly referred to as enhanced oil recovery (EOR). The total gas that can be used for oil extraction is constrained by daily availability limits. The oil extracted from each well is known to be a nonlinear function of the gas injected into the well and varies between wells. The problem is to identify the optimal amount of gas to inject into each well to maximize the amount of oil extracted, subject to the constraint on total daily gas availability. The problem has long been of practical interest to all major oil exploration companies, as it has the potential to deliver large financial benefits. In this paper, an infeasibility driven evolutionary algorithm is used to solve a 56-well reservoir problem, which demonstrates its efficiency in solving constrained optimization problems. Furthermore, a multi-objective formulation of the problem is posed and solved using a number of algorithms, which eliminates the need for solving the (single-objective) problem on a regular basis. Lastly, a modified single-objective formulation of the problem is also proposed, which aims to maximize profit instead of the quantity of oil. It is shown that even with a lesser amount of oil extracted, more economic benefit can be achieved through the modified formulation.
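
    Once well-response curves are assumed, the allocation problem can be sketched as a smooth constrained program. The exponential response curves and all coefficients below are invented stand-ins for the paper's 56-well models, and a standard NLP solver replaces the evolutionary algorithm.

    ```python
    # Gas-lift allocation sketch: maximize total oil subject to a daily gas budget.
    import numpy as np
    from scipy.optimize import minimize

    a = np.array([30.0, 25.0, 40.0])   # per-well plateau oil rates (hypothetical)
    b = np.array([0.8, 0.5, 1.2])      # per-well response rates (hypothetical)
    G = 4.0                            # total daily gas available

    def neg_oil(g):                    # oil_i(g_i) = a_i * (1 - exp(-b_i * g_i))
        return -(a * (1.0 - np.exp(-b * g))).sum()

    res = minimize(neg_oil, x0=np.full(3, G / 3), method="SLSQP",
                   bounds=[(0.0, G)] * 3,
                   constraints=[{"type": "ineq", "fun": lambda g: G - g.sum()}])
    print("gas per well:", res.x.round(3))
    print(f"total oil rate: {-res.fun:.2f}")
    ```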

  1. Optimal Filling of Shapes

    NASA Astrophysics Data System (ADS)

    Phillips, Carolyn L.; Anderson, Joshua A.; Huber, Greg; Glotzer, Sharon C.

    2012-05-01

    We present filling as a type of spatial subdivision problem similar to covering and packing. Filling addresses the optimal placement of overlapping objects lying entirely inside an arbitrary shape so as to cover the most interior volume. In n-dimensional space, if the objects are polydisperse n-balls, we show that solutions correspond to sets of maximal n-balls. For polygons, we provide a heuristic for finding solutions of maximal disks. We consider the properties of ideal distributions of N disks as N→∞. We note an analogy with energy landscapes.

  2. Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method

    USDA-ARS?s Scientific Manuscript database

    Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing their utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...

  3. Biomass for biorefining: Resources, allocation, utilization, and policies

    USDA-ARS?s Scientific Manuscript database

    The importance of biomass in the development of renewable energy, the availability and allocation of biomass, its preparation for use in biorefineries, and the policies affecting biomass are discussed in this chapter. Bioenergy development will depend on maximizing the amount of biomass obtained fro...

  4. The Child and Adolescent Psychiatry Trials Network

    ERIC Educational Resources Information Center

    March, John S.; Silva, Susan G.; Compton, Scott; Anthony, Ginger; DeVeaugh-Geiss, Joseph; Califf, Robert; Krishnan, Ranga

    2004-01-01

    Objective: The current generation of clinical trials in pediatric psychiatry often fails to maximize clinical utility for practicing clinicians, thereby diluting its impact. Method: To attain maximum clinical relevance and acceptability, the Child and Adolescent Psychiatry Trials Network (CAPTN) will transport to pediatric psychiatry the practical…

  5. Why is Improving Water Quality in the Gulf of Mexico so Critical?

    EPA Pesticide Factsheets

    The EPA regional offices and the Gulf of Mexico Program work with Gulf States to continue to maximize the efficiency and utility of water quality monitoring efforts for local managers by coordinating and standardizing state and federal water quality data

  6. Maximizing internal opportunities for healthcare facilities facing a managed-care environment.

    PubMed

    Gillespie, M

    1997-01-01

    The primary theme of this article concerns the pressures on healthcare facilities to become efficient utilizers of their existing resources. This acute need for efficiency has become increasingly apparent as the changing reimbursement patterns of managed care have proliferated across the nation.

  7. Designing advanced biochar products for maximizing greenhouse gas mitigation potential

    USDA-ARS?s Scientific Manuscript database

    Greenhouse gas (GHG) emissions from agricultural operations continue to increase. Carbon enriched char materials like biochar have been described as a mitigation strategy. Utilization of biochar material as a soil amendment has been demonstrated to provide potentially further soil GHG suppression du...

  8. Paracrine communication maximizes cellular response fidelity in wound signaling

    PubMed Central

    Handly, L Naomi; Pilko, Anna; Wollman, Roy

    2015-01-01

    Population averaging due to paracrine communication can arbitrarily reduce cellular response variability. Yet variability is ubiquitously observed, suggesting limits to paracrine averaging. It remains unclear whether and how biological systems may be affected by such limits of paracrine signaling. To address this question, we quantify the signal and noise of Ca2+ and ERK spatial gradients in response to an in vitro wound within a novel microfluidics-based device. We find that while paracrine communication reduces gradient noise, it also reduces the gradient magnitude. Accordingly, we predict the existence of a maximum gradient signal-to-noise ratio. Direct in vitro measurement of paracrine communication verifies these predictions and reveals that cells utilize optimal levels of paracrine signaling to maximize the accuracy of gradient-based positional information. Our results demonstrate the limits of population averaging and show the inherent tradeoff in utilizing paracrine communication to regulate cellular response fidelity. DOI: http://dx.doi.org/10.7554/eLife.09652.001 PMID:26448485

  9. Integrating epidemiology, psychology, and economics to achieve HPV vaccination targets.

    PubMed

    Basu, Sanjay; Chapman, Gretchen B; Galvani, Alison P

    2008-12-02

    Human papillomavirus (HPV) vaccines provide an opportunity to reduce the incidence of cervical cancer. Optimization of cervical cancer prevention programs requires anticipation of the degree to which the public will adhere to vaccination recommendations. To compare vaccination levels driven by public perceptions with levels that are optimal for maximizing the community's overall utility, we develop an epidemiological game-theoretic model of HPV vaccination. The model is parameterized with survey data on actual perceptions regarding cervical cancer, genital warts, and HPV vaccination collected from parents of vaccine-eligible children in the United States. The results suggest that perceptions of survey respondents generate vaccination levels far lower than those that maximize overall health-related utility for the population. Vaccination goals may be achieved by addressing concerns about vaccine risk, particularly those related to sexual activity among adolescent vaccine recipients. In addition, cost subsidizations and shifts in federal coverage plans may compensate for perceived and real costs of HPV vaccination to achieve public health vaccination targets.

  10. Rectal compliance as a routine measurement: extreme volumes have direct clinical impact and normal volumes exclude rectum as a problem.

    PubMed

    Felt-Bersma, R J; Sloots, C E; Poen, A C; Cuesta, M A; Meuwissen, S G

    2000-12-01

    The clinical impact of rectal compliance and sensitivity measurement is not clear. The aim of this study was to measure the rectal compliance in different patient groups compared with controls and to establish the clinical effect of rectal compliance. Anorectal function tests were performed in 974 consecutive patients (284 men). Normal values were obtained from 24 controls. Rectal compliance measurement was performed by filling a latex rectal balloon with water at a rate of 60 ml per minute. Volume and intraballoon pressure were measured. Volume and pressure at three sensitivity thresholds were recorded for analysis: first sensation, urge, and maximal toleration. At maximal toleration, the rectal compliance (volume/pressure) was calculated. Proctoscopy, anal manometry, anal mucosal sensitivity, and anal endosonography were also performed as part of our anorectal function tests. No effect of age or gender was observed in either controls or patients. Patients with fecal incontinence had a higher volume at first sensation and a higher pressure at maximal toleration (P = 0.03), the presence of a sphincter defect or low or normal anal pressures made no difference. Patients with constipation had a larger volume at first sensation and urge (P < 0.0001 and P < 0.01). Patients with a rectocele had a larger volume at first sensation (P = 0.004). Patients with rectal prolapse did not differ from controls; after rectopexy, rectal compliance decreased (P < 0.0003). Patients with inflammatory bowel disease had a lower rectal compliance, most pronounced in active proctitis (P = 0.003). Patients with ileoanal pouches also had a lower compliance (P < 0.0001). In the 17 patients where a maximal toleration volume < 60 ml was found, 11 had complaints of fecal incontinence, and 6 had a stoma. In 31 patients a maximal toleration volume between 60 and 100 ml was found; 12 patients had complaints of fecal incontinence, and 6 had a stoma. Proctitis or pouchitis was the main cause for a small compliance. All 29 patients who had a maximal toleration volume > 500 ml had complaints of constipation. No correlation between rectal and anal mucosal sensitivity was found. Rectal compliance measurement with a latex balloon is easily feasible. In this series of 974 patients, some patient groups showed an abnormal rectal visceral sensitivity and compliance, but there was an overlap with controls. Rectal compliance measurement gave a good clinical impression about the contribution of the rectum to the anorectal problem. Patients with proctitis and pouchitis had the smallest rectal compliance. A maximal toleration volume < 60 ml always led to fecal incontinence, and stomas should be considered for such patients. A maximal toleration volume > 500 ml was only seen in constipated patients, and therapy should be given to prevent further damage to the pelvic floor. Values close to or within the normal range rule out the rectum as an important factor in the anorectal problem of the patient.

  11. Recovery Act: Brea California Combined Cycle Electric Generating Plant Fueled by Waste Landfill Gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galowitz, Stephen

    The primary objective of the Project was to maximize the productive use of the substantial quantities of waste landfill gas generated and collected at the Olinda Landfill near Brea, California. An extensive analysis was conducted, and it was determined that utilization of the waste gas for power generation in a combustion turbine combined cycle facility was the highest and best use. The resulting Project reflected a cost-effective balance of the following specific sub-objectives:
    • Meeting the environmental and regulatory requirements, particularly the compliance obligations imposed on the landfill to collect, process, and destroy landfill gas
    • Utilizing proven and reliable technology and equipment
    • Maximizing electrical efficiency
    • Maximizing electric generating capacity, consistent with the anticipated quantities of landfill gas generated and collected at the Olinda Landfill
    • Maximizing equipment uptime
    • Minimizing water consumption
    • Minimizing post-combustion emissions
    The Project produced and will produce a myriad of beneficial impacts:
    • The Project created 360 FTE construction and manufacturing jobs and 15 FTE permanent jobs associated with the operation and maintenance of the plant and equipment.
    • By combining state-of-the-art gas clean-up systems with post-combustion emissions control systems, the Project established new national standards for best available control technology (BACT).
    • The Project will annually produce 280,320 MWh of clean energy.
    • By destroying the methane in the landfill gas, the Project will generate CO2-equivalent reductions of 164,938 tons annually.
    The completed facility produces 27.4 MW net and operates 24 hours a day, seven days a week.

  12. Co-Flow Hollow Cathode Technology

    NASA Technical Reports Server (NTRS)

    Hofer, Richard R.; Goebel, Dan M.

    2011-01-01

    Hall thrusters utilize the same hollow cathode technology as ion thrusters, yet must operate at much higher mass flow rates in order to couple efficiently to the bulk plasma discharge. Higher flow rates are necessary in order to provide enough neutral collisions to transport electrons across magnetic fields so that they can reach the discharge. This higher flow rate, however, has potential life-limiting implications for the operation of the cathode. A solution to the problem involves splitting the mass flow into the hollow cathode into two streams, the internal and external flows. The internal flow is fixed and set such that the neutral pressure in the cathode allows for high utilization of the emitter surface area. The external flow is variable depending on the flow rate through the anode of the Hall thruster, but also has a minimum in order to suppress high-energy ion generation. In the co-flow hollow cathode, the cathode assembly is mounted on the thruster centerline, inside the inner magnetic core of the thruster. An annular gas plenum is placed at the base of the cathode, and propellant is fed through it to produce an azimuthally symmetric flow of gas that expands evenly around the cathode keeper. This configuration maximizes propellant utilization and is not subject to erosion processes. External gas feeds have been considered in the past for ion thruster applications, but usually in the context of eliminating high-energy ion production. This approach is adapted specifically for the Hall thruster and exploits its geometry to feed and focus the external flow without introducing significant new complexity to the thruster design.

  13. Establishing rational networking using the DL04 quantum secure direct communication protocol

    NASA Astrophysics Data System (ADS)

    Qin, Huawang; Tang, Wallace K. S.; Tso, Raylin

    2018-06-01

    The first rational quantum secure direct communication scheme is proposed, in which we use game theory with incomplete information to model the rational behavior of the participant and give the strategy space and utility function. The rational participant attains maximal utility when performing the protocol faithfully, and thus the Nash equilibrium of the protocol can be achieved. Compared to traditional schemes, our scheme is more practical in the presence of a rational participant.

  14. Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines

    NASA Astrophysics Data System (ADS)

    Govindaraju, Parithi

    A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.

  15. The Examination of the Relation between Teacher Candidates' Problem Solving Appraisal and Utilization of Motivated Strategies for Learning

    ERIC Educational Resources Information Center

    Turgut, Ozden; Ocak, Gurbuz

    2017-01-01

    This study examines the relation between teacher candidates' problem-solving appraisal and their utilization of motivated strategies for learning. The study was carried out with 416 teacher candidates. Correlation analysis was used to examine the relation between problem-solving appraisal and utilization of motivated strategies for learning. In addition, regression analysis has…

  16. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift.

    PubMed

    Fleming, Theresa M; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M; Aschieri, Filippo; Bavin, Lynda M; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials are seldom reported, and where they are, adherence at least generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspectives paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming.

  17. Maximizing the Impact of e-Therapy and Serious Gaming: Time for a Paradigm Shift

    PubMed Central

    Fleming, Theresa M.; de Beurs, Derek; Khazaal, Yasser; Gaggioli, Andrea; Riva, Giuseppe; Botella, Cristina; Baños, Rosa M.; Aschieri, Filippo; Bavin, Lynda M.; Kleiboer, Annet; Merry, Sally; Lau, Ho Ming; Riper, Heleen

    2016-01-01

    Internet interventions for mental health, including serious games, online programs, and apps, hold promise for increasing access to evidence-based treatments and prevention. Many such interventions have been shown to be effective and acceptable in trials; however, uptake and adherence outside of trials are seldom reported, and where they are, adherence at least generally appears to be underwhelming. In response, an international Collaboration On Maximizing the impact of E-Therapy and Serious Gaming (COMETS) was formed. In this perspectives paper, we call for a paradigm shift to increase the impact of internet interventions toward the ultimate goal of improved population mental health. We propose four pillars for change: (1) increased focus on user-centered approaches, including both user-centered design of programs and greater individualization within programs, with the latter perhaps utilizing increased modularization; (2) increased emphasis on engagement utilizing processes such as gaming, gamification, telepresence, and persuasive technology; (3) increased collaboration in program development, testing, and data sharing, across both sectors and regions, in order to achieve higher quality, more sustainable outcomes with greater reach; and (4) rapid testing and implementation, including the measurement of reach, engagement, and effectiveness, and timely implementation. We suggest it is time for researchers, clinicians, developers, and end-users to collaborate on these aspects in order to maximize the impact of e-therapies and serious gaming. PMID:27148094

  18. Identifying Epigenetic Biomarkers using Maximal Relevance and Minimal Redundancy Based Feature Selection for Multi-Omics Data.

    PubMed

    Mallik, Saurav; Bhadra, Tapas; Maulik, Ujjwal

    2017-01-01

    Epigenetic biomarker discovery is an important task in bioinformatics. In this article, we develop a new framework for identifying statistically significant epigenetic biomarkers using maximal-relevance and minimal-redundancy criterion-based feature (gene) selection for multi-omics datasets. Firstly, we determine the genes that have both expression and methylation values and follow a normal distribution. Similarly, we identify the genes which have both expression and methylation values but do not follow a normal distribution. For each case, we utilize a gene-selection method that provides maximally relevant but variable-weighted minimally redundant genes as the top-ranked genes. For statistical validation, we apply a t-test on both the expression and methylation data consisting of only the normally distributed top-ranked genes to determine how many of them are both differentially expressed and methylated. Similarly, we utilize the Limma package to perform a non-parametric empirical Bayes test on both expression and methylation data comprising only the non-normally distributed top-ranked genes to identify how many of them are both differentially expressed and methylated. We finally report the top-ranking significant gene-markers with biological validation. Moreover, our framework improves the positive predictive rate and reduces the false positive rate in marker identification. In addition, we provide a comparative analysis of our gene-selection method as well as other methods, based on classification performances obtained using several well-known classifiers.
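
    A greedy mRMR pass in the spirit of the selection step described above can be sketched with scikit-learn's mutual information estimators; this is not the authors' variable-weighted variant, and the data are synthetic rather than expression/methylation profiles.

    ```python
    # Greedy mRMR (max-relevance, min-redundancy) feature selection sketch.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               random_state=0)
    relevance = mutual_info_classif(X, y, random_state=0)   # MI of each feature with y

    selected = [int(np.argmax(relevance))]
    while len(selected) < 5:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean MI between candidate j and already-selected features.
            redundancy = mutual_info_regression(X[:, selected], X[:, j],
                                                random_state=0).mean()
            score = relevance[j] - redundancy               # mRMR difference criterion
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    print("selected feature indices:", selected)
    ```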

  19. Understanding the factors that effect maximal fat oxidation.

    PubMed

    Purdom, Troy; Kravitz, Len; Dokladny, Karol; Mermier, Christine

    2018-01-01

    Lipids as a fuel source for energy supply during submaximal exercise originate from subcutaneous adipose tissue-derived fatty acids (FA), intramuscular triacylglycerides (IMTG), cholesterol, and dietary fat. These sources of fat contribute to fatty acid oxidation (FAox) in various ways. Maximal regulation and utilization of FAs occurs primarily at exercise intensities between 45 and 65% VO2max; this is known as maximal fat oxidation (MFO) and is measured in g/min. Fatty acid oxidation occurs during submaximal exercise intensities but is also complementary to carbohydrate oxidation (CHOox). Due to limitations in FA transport across the cell and mitochondrial membranes, FAox is limited at higher exercise intensities. The point at which FAox reaches its maximum and begins to decline is referred to as the crossover point. Exercise intensities that exceed the crossover point (~65% VO2max) utilize CHO as the predominant fuel source for energy supply. Training status, exercise intensity, exercise duration, sex differences, and nutrition have all been shown to affect the cellular expression responsible for FAox rate. Each stimulus affects the process of FAox differently, resulting in specific adaptations that influence endurance exercise performance. Endurance training, specifically of long duration (>2 h), facilitates adaptations that alter both the origin of FAs and the FAox rate. Additionally, the influence of sex and nutrition on FAox is discussed. Finally, the role of FAox in the improvement of performance during endurance training is discussed.

  20. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    PubMed Central

    2014-01-01

    Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474

  1. Enumerating all maximal frequent subtrees in collections of phylogenetic trees.

    PubMed

    Deepak, Akshay; Fernández-Baca, David

    2014-01-01

    A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.

  2. Optimal control of a harmonic oscillator: Economic interpretations

    NASA Astrophysics Data System (ADS)

    Janová, Jitka; Hampel, David

    2013-10-01

    Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criterion function and Lagrange multipliers in the profit maximization problem is well known. Using a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for a harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these in other problems.
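
    A generic statement of the problem class discussed above may help fix ideas; the damping, frequency, and quadratic cost below are illustrative, not necessarily the authors' exact Phillips-model specification.

    ```latex
    % Steer a Phillips-type oscillator with control u at minimal quadratic cost.
    \[
    \min_{u(\cdot)} \int_{0}^{T} \bigl( q\,x(t)^{2} + r\,u(t)^{2} \bigr)\,dt
    \quad \text{s.t.} \quad
    \ddot{x}(t) + 2\zeta\,\dot{x}(t) + \omega^{2} x(t) = u(t).
    \]
    % The Pontryagin costate \lambda(t) attached to the dynamics is the dynamic
    % analogue of a Lagrange multiplier: the shadow price of the state, i.e. the
    % marginal effect of perturbing x(t) on the optimal value of the objective.
    ```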

  3. [Calculating the optimum size of a hemodialysis unit based on infrastructure potential].

    PubMed

    Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis

    2010-01-01

    To estimate the optimum size for hemodialysis units to maximize production given capital constraints. A national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, production under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimally sized unit should have 16 dialyzers (15 active and one backup) and a purifier system able to supply all of them. It also requires one nephrologist and five nurses per shift, over four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production due to not fully utilizing equipment and personnel, particularly their water purification capacity, which happens to be the most expensive asset of these units.

  4. A research on service quality decision-making of Chinese communications industry based on quantum game

    NASA Astrophysics Data System (ADS)

    Zhang, Cuihua; Xing, Peng

    2015-08-01

    In recent years, the Chinese service industry has been developing rapidly. Compared with developed countries, service quality remains the bottleneck for the Chinese service industry. Against the background of the three major telecommunications service providers in China, functions of customer perceived utility are established. With the goal of maximizing consumers' perceived utility, the classical Nash equilibrium solution and the quantum equilibrium solution are obtained. A numerical example is then studied, and the changing trend of service quality and customer perceived utility is further analyzed under the influence of the entanglement operator. Finally, it is shown that the quantum game solution is better than the Nash equilibrium solution.

  5. Racial and Ethnic Differences in the Utilization of Prayer and Clergy Counseling by Infertile US Women Desiring Pregnancy.

    PubMed

    Collins, Stephen C; Kim, Soorin; Chan, Esther

    2017-11-29

    Religion can have a significant influence on the experience of infertility. However, it is unclear how many US women turn to religion when facing infertility. Here, we examine the utilization of prayer and clergy counsel among a nationally representative sample of 1062 infertile US women. Prayer was used by 74.8% of the participants, and clergy counsel was the most common formal support system utilized. Both prayer and clergy counsel were significantly more common among black and Hispanic women. Healthcare providers should acknowledge the spiritual needs of their infertile patients and ally with clergy when possible to provide maximally effective care.

  6. Utilizing the School Health Index to Foster University and Community Engagement

    ERIC Educational Resources Information Center

    King, Kristi McClary

    2010-01-01

    A Coordinated School Health Program maximizes a school's positive interaction among health education, physical education, health services, nutrition services, counseling/psychological/social services, health school environment, health promotion for staff, and family and community involvement. The purpose of this semester project is for…

  7. Releases: Is There Still a Place for Their Use by Colleges and Universities?

    ERIC Educational Resources Information Center

    Connell, Mary Ann; Savage, Frederick G.

    2003-01-01

    Analyzes the legal principles, facts, and circumstances that govern decisions of courts regarding the validity of written releases, and provides practical advice to higher education lawyers and administrators as they evaluate the utility of releases and seek to maximize their benefit. (EV)

  8. The future of transportation planning : dynamic travel behavior analyses based on stochastic decision-making styles : final report.

    DOT National Transportation Integrated Search

    2003-08-01

    Over the past half-century, the progress of travel behavior research and travel demand forecasting has been spearheaded and continuously propelled by micro-economic theories, specifically utility maximization. There is no denial that the tra...

  9. Innovative Conference Curriculum: Maximizing Learning and Professionalism

    ERIC Educational Resources Information Center

    Hyland, Nancy; Kranzow, Jeannine

    2012-01-01

    This action research study evaluated the potential of an innovative curriculum to move 73 graduate students toward professional development. The curriculum was grounded in the professional conference and utilized the motivation and expertise of conference presenters. This innovation required students to be more independent, act as a critical…

  10. Method of optimizing performance of Rankine cycle power plants

    DOEpatents

    Pope, William L.; Pines, Howard S.; Doyle, Padraic A.; Silvester, Lenard F.

    1982-01-01

    A method for efficiently operating a Rankine cycle power plant (10) to maximize fuel utilization efficiency or energy conversion efficiency or minimize costs by selecting a turbine (22) fluid inlet state which is substantially in the area adjacent and including the transposed critical temperature line (46).

  11. Making It Stick

    ERIC Educational Resources Information Center

    Ewers, Justin

    2009-01-01

    It seems to happen every day. A meeting is called to outline a new strategy or sales plan. Down go the lights and up goes the PowerPoint. Strange phrases appear--"unlocking shareholder value," "technology-focused innovation," "maximizing utility." Lists of numbers come and go. Bullet point by bullet point, the…

  12. Planning and managing market research: Electric utility market research monograph series: Monograph 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitelaw, R.W.

    1987-01-01

    The market research techniques available now to the electric utility industry have evolved over the last thirty years into a set of sophisticated tools that permit complex behavioral analyses that earlier had been impossible. The marketing questions facing the electric utility industry now are commensurately more complex than ever before. This document was undertaken to present the tools and techniques needed to start or improve the usefulness of market research activities within electric utilities. It describes proven planning and management techniques as well as decision criteria for structuring effective market research functions for each utility's particular needs. The monograph establishes the parameters of sound utility market research given trade-offs between highly centralized or decentralized organizations, research focus, involvement in decision making, and the personnel and management skills necessary to maximize the effectiveness of the structure chosen.

  13. Classification with asymmetric label noise: Consistency and maximal denoising

    DOE PAGES

    Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...

    2016-09-20

    In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.

  14. Classification with asymmetric label noise: Consistency and maximal denoising

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, Gilles; Flaska, Marek; Handy, Gregory

    In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.

  15. Maximizing work integration in job placement of individuals facing mental health problems: Supervisor experiences.

    PubMed

    Skarpaas, Lisebet Skeie; Ramvi, Ellen; Løvereide, Lise; Aas, Randi Wågø

    2015-01-01

    Many people confronting mental health problems are excluded from participation in paid work. Supervisor engagement is essential for successful job placement. To elicit supervisor perspectives on the challenges involved in fostering integration to support individuals with mental health problems (trainees) in their job placement at ordinary companies. An explorative, qualitatively designed study with a phenomenological approach, based on semi-structured interviews with 15 supervisors involved in job placements for a total of 105 trainees (mean 7, min-max 1-30, SD 8). Data were analysed using qualitative content analysis. Supervisors experience two interrelated dilemmas concerning knowledge of the trainee and the degree of preferential treatment. Challenges to obtaining successful integration were motivational: 1) supervisors' previous experience with trainees encourages future engagement, 2) developing a realistic picture of the situation, and 3) disclosure and knowledge of mental health problems; and related to continuity: 4) sustaining trainee cooperation throughout the placement process, 5) building and maintaining a good relationship between supervisor and trainee, and 6) ensuring continuous cooperation with the social security system and other stakeholders. Supervisors experience relational dilemmas regarding pre-judgment, privacy, and equality. Job placement seems to be maximized when the stakeholders are motivated and recognize that cooperation must be a continuous process.

  16. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
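
    The two ingredients TRIKE combines can be sketched directly: the expected-improvement acquisition for a Kriging prediction, and a trust-region radius update driven by the ratio of actual improvement to EI. The thresholds and scaling constants below are illustrative, not the article's tuned values.

    ```python
    # EI acquisition and a ratio-driven trust-region radius update, in sketch form.
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_min):
        """EI of a point with Kriging mean mu and standard deviation sigma
        relative to the best observed value f_min (minimization)."""
        if sigma <= 0.0:
            return 0.0
        z = (f_min - mu) / sigma
        return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def update_radius(radius, actual_improvement, ei, low=0.25, high=0.75):
        """Shrink or expand the trust region based on the improvement ratio."""
        rho = actual_improvement / ei if ei > 0 else 0.0
        if rho < low:
            return 0.5 * radius   # model over-promised: shrink
        if rho > high:
            return 2.0 * radius   # model under-promised: expand
        return radius

    print(expected_improvement(mu=0.2, sigma=0.3, f_min=0.5))
    print(update_radius(1.0, actual_improvement=0.05, ei=0.2))
    ```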

  17. A Multi-Objective Optimization Technique to Model the Pareto Front of Organic Dielectric Polymers

    NASA Astrophysics Data System (ADS)

    Gubernatis, J. E.; Mannodi-Kanakkithodi, A.; Ramprasad, R.; Pilania, G.; Lookman, T.

    Multi-objective optimization is an area of decision making that is concerned with mathematical optimization problems involving more than one objective simultaneously. Here we describe two new Monte Carlo methods for this type of optimization in the context of their application to the problem of designing polymers with more desirable dielectric and optical properties. We present results of applying these Monte Carlo methods to a two-objective problem (maximizing the total static band dielectric constant and energy gap) and a three-objective problem (maximizing the ionic and electronic contributions to the static band dielectric constant and energy gap) of a 6-block organic polymer. Our objective functions were constructed from high throughput DFT calculations of 4-block polymers, following the method of Sharma et al., Nature Communications 5, 4845 (2014) and Mannodi-Kanakkithodi et al., Scientific Reports, submitted. Our high throughput and Monte Carlo methods of analysis extend to general N-block organic polymers. This work was supported in part by the LDRD DR program of the Los Alamos National Laboratory and in part by a Multidisciplinary University Research Initiative (MURI) Grant from the Office of Naval Research.
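
    For readers unfamiliar with the two- and three-objective setting, a minimal Pareto filter makes the notion of the front concrete; the candidate (dielectric constant, band gap) values below are invented for illustration.

```python
# Keep only non-dominated points when all objectives are maximized: a
# point is dropped if some other point is at least as good on every
# objective and strictly better on at least one.
import numpy as np

def pareto_front(scores):
    front = []
    for i, s in enumerate(scores):
        dominates_s = np.all(scores >= s, axis=1) & np.any(scores > s, axis=1)
        if not dominates_s.any():
            front.append(i)
    return scores[front]

# Hypothetical (static dielectric constant, band gap) pairs for polymers.
pts = np.array([[3.2, 1.1], [2.8, 2.0], [3.0, 1.8], [1.5, 0.9]])
print(pareto_front(pts))  # the first three survive; [1.5, 0.9] is dominated
```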

  18. Locating an imaging radar in Canada for identifying spaceborne objects

    NASA Astrophysics Data System (ADS)

    Schick, William G.

    1992-12-01

    This research presents a study of the maximal coverage p-median facility location problem as applied to the location of an imaging radar in Canada for imaging spaceborne objects. The classical mathematical formulation of the maximal coverage p-median problem is converted into network-flow with side constraint formulations that are developed using a scaled-down version of the imaging radar location problem. Two types of network-flow with side constraint formulations are developed: a network using side constraints that simulates the gains in a generalized network, and a network resembling a multi-commodity flow problem that uses side constraints to force flow along identical arcs. These small formulations are expanded to encompass a case study using 12 candidate radar sites and 48 satellites divided into three states. SAS/OR PROC NETFLOW was used to solve the network-flow with side constraint formulations. The case study showed potential for both formulations, although the simulated gains formulation encountered singular matrix computational difficulties as a result of the very organized nature of its side constraint matrix. The multi-commodity flow formulation, when combined with equi-distribution of flow constraints, provided solutions for various values of p, the number of facilities to be selected.
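
    As a rough, editor-added sketch of the underlying p-median objective (the study itself solves network-flow formulations, not this heuristic), one can site p facilities by repeatedly adding the candidate that most reduces total assignment cost:

```python
# Greedy heuristic for the p-median objective: choose p sites so that the
# summed cost of assigning each target to its nearest chosen site is small.
import numpy as np

def greedy_p_median(dist, p):
    """dist[s, t]: cost of serving target t from site s; returns p site indices."""
    chosen, best = [], np.full(dist.shape[1], np.inf)
    for _ in range(p):
        # total cost if site s were added (targets switch to their nearest site)
        totals = [np.minimum(best, dist[s]).sum() if s not in chosen else np.inf
                  for s in range(dist.shape[0])]
        s_star = int(np.argmin(totals))
        chosen.append(s_star)
        best = np.minimum(best, dist[s_star])
    return chosen

rng = np.random.default_rng(1)
D = rng.uniform(0.0, 10.0, size=(12, 48))  # 12 candidate sites, 48 objects
print(greedy_p_median(D, p=3))
```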

  19. Maximizing profitability in a hospital outpatient pharmacy.

    PubMed

    Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A

    1989-07-01

    This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue, including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales, are described. Programs to maximize existing revenue sources, such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic, and increasing HMO prescription volumes, are discussed. A method utilized to reduce drug expenditures is also presented. By minimizing expenses and increasing the revenues for the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.

  20. Optimum Sensors Integration for Multi-Sensor Multi-Target Environment for Ballistic Missile Defense Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Barhen, Jacob; Glover, Charles Wayne

    2012-01-01

    Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.
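
    One of the objective functions named above, maximizing the probability of target detection, reduces in the simplest static case to a linear assignment problem. The detection probabilities below are invented for illustration; real sensor managers face dynamics and richer constraints.

```python
# Assign each sensor to one target to maximize the summed probability of
# detection, solved exactly with the Hungarian algorithm from SciPy.
import numpy as np
from scipy.optimize import linear_sum_assignment

p_detect = np.array([[0.9, 0.4, 0.3],   # rows: sensors, cols: targets
                     [0.5, 0.8, 0.6],
                     [0.2, 0.5, 0.7]])
rows, cols = linear_sum_assignment(p_detect, maximize=True)
print(list(zip(rows, cols)), p_detect[rows, cols].sum())  # total = 2.4
```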

  1. Disaggregating reserve-to-production ratios: An algorithm for United States oil and gas reserve development

    NASA Astrophysics Data System (ADS)

    Williams, Charles William

    Reserve-to-production ratios for oil and gas development are utilized by oil and gas producing states to monitor oil and gas reserve and production dynamics. These ratios are used to determine production levels for the manipulation of oil and gas prices while maintaining adequate reserves for future development. These aggregate reserve-to-production ratios do not provide information concerning development cost and the best time necessary to develop newly discovered reserves. Oil and gas reserves are a semi-finished inventory because development of the reserves must take place in order to implement production. These reserves are considered semi-finished in that they are not counted unless it is economically profitable to produce them. The development of these reserves is encouraged by profit-maximization economic variables, which must consider the legal, political, and geological aspects of a project. This development comprises a myriad of incremental operational decisions, each of which influences profit maximization. The primary purpose of this study was to provide a model for characterizing a single-product multi-period inventory/production optimization problem from an unconstrained quantity of raw material which was produced and stored as inventory reserve. This optimization was determined by evaluating dynamic changes in new additions to reserves and the subsequent depletion of these reserves with the maximization of production. A secondary purpose was to determine an equation for exponential depletion of proved reserves which presented a more comprehensive representation of reserve-to-production ratio values than an inadequate and frequently used aggregate historical method. The final purpose of this study was to determine the most accurate delay time for a proved reserve to achieve maximum production. This calculated time provided a measure of the discounted cost and calculation of net present value for developing new reserves. This study concluded that the theoretical model developed by this research may be used to provide a predictive equation for each major oil and gas state so that a net present value to undiscounted net cash flow ratio might be calculated in order to establish an investment signal for profit maximizers. This equation inferred how production decisions were influenced by exogenous factors, such as price, and how policies performed, leading to recommendations regarding effective policies and prudent planning.
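
    A hedged aside on the exponential-depletion view of reserve-to-production ratios (a textbook identity, not the study's actual model): if production declines as q(t) = q0*exp(-a*t), the reserves recoverable from time t onward are q(t)/a, so the R/P ratio is the constant 1/a.

```python
# Under exponential decline, R(t) = integral of q from t to infinity
# = q(t)/a, so R/P = 1/a regardless of t. The decline rate `a` is assumed.
import numpy as np

a, q0 = 0.08, 1000.0
t = np.arange(30)
q = q0 * np.exp(-a * t)          # annual production path
R = q / a                        # remaining reserves at each year
print(set(np.round(R / q, 6)))   # {12.5}: an R/P ratio of 1/a years
```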

  2. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
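
    A one-dimensional rendering of the idea (an editor's sketch, not the paper's multidimensional network): with y = sigmoid(w*x + b), the induced density estimate is |dy/dx| = |w|*y*(1-y), and maximizing the output entropy amounts to maximizing the sample mean of log|dy/dx|.

```python
# Fit a single sigmoid unit to data by maximizing the output entropy;
# the derivative of the fitted sigmoid then serves as a density estimate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)   # data with unknown density

def objective(params):
    w, b = params
    y = 1.0 / (1.0 + np.exp(-(w * x + b)))      # sigmoid output
    dydx = np.abs(w) * y * (1.0 - y)            # induced density estimate
    return -np.mean(np.log(dydx + 1e-12))       # maximize E[log |dy/dx|]

w, b = minimize(objective, x0=[0.5, 0.0]).x
print(w, b)   # roughly w ~ 0.9, b ~ -0.9: the KL-closest logistic to N(1, 4)
```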

  3. Utility franchises reconsidered

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weidner, B.

    It is easier to obtain a public utility franchise than one for a fast food store because companies like Burger King value the profit share and control available with a franchise arrangement. The investor-owned utilities (IOUs) in Chicago and elsewhere get little financial or regulatory benefit, although they do have an alternative because the franchise can be taken over by the city with a one-year notice. As IOUs evolved, the annual franchise fee has been incorporated into the rate in a move that taxes ratepayers and maximizes profits. Cities that found franchising unsatisfactory are looking for ways to terminate the franchise and finance a takeover, but limited-term and indeterminate franchises may offer a better mechanism when public needs and utility aims diverge. A directory lists franchised utilities by state and comments on their legal status. (DCK)

  4. Student Use of Out-of-Class Study Groups in an Introductory Undergraduate Biology Course

    PubMed Central

    Rybczynski, Stephen M.; Schussler, Elisabeth E.

    2011-01-01

    Self-formed out-of-class study groups may benefit student learning; however, few researchers have quantified the relationship between study group use and achievement or described changes in study group usage patterns over a semester. We related study group use to performance on content exams, explored patterns of study group use, and qualitatively described student perceptions of study groups. A pre- and posttest were used to measure student content knowledge. Internet-based surveys were used to collect quantitative data on exam performance and qualitative data on study group usage trends and student perceptions of study groups. No relationship was found between gains in content knowledge and study group use. Students who participated in study groups did, however, believe they were beneficial. Four patterns of study group use were identified: students either always (14%) or never (55%) used study groups, tried but quit using them (22%), or utilized study groups only late in the semester (9%). Thematic analysis revealed preconceptions and in-class experiences influence student decisions to utilize study groups. We conclude that students require guidance in the successful use of study groups. Instructors can help students maximize study group success by making students aware of potential group composition problems, helping students choose group members who are compatible, and providing students materials on which to focus their study efforts. PMID:21364102

  5. Ecosocial consequences and policy implications of disease management in East African agropastoral systems.

    PubMed

    Gutierrez, Andrew Paul; Gilioli, Gianni; Baumgärtner, Johann

    2009-08-04

    International research and development efforts in Africa have brought ecological and social change, but analyzing the consequences of this change and developing policy to manage it for sustainable development has been difficult. This has been largely due to a lack of conceptual and analytical models to assess the interacting dynamics of the different components of ecosocial systems. Here, we examine the ecological and social changes resulting from an ongoing suppression of trypanosomiasis disease in cattle in an agropastoral community in southwest Ethiopia to illustrate how such problems may be addressed. The analysis combines physiologically based demographic models of pasture, cattle, and pastoralists and a bioeconomic model that includes the demographic models as dynamic constraints in the economic objective function that maximizes the utility of individual consumption under different levels of disease risk in cattle. Field data and model analysis show that suppression of trypanosomiasis leads to increased cattle and human populations and to increased agricultural development. However, in the absence of sound management, these changes will lead to a decline in pasture quality and increase the risk from tick-borne diseases in cattle and malaria in humans that would threaten system sustainability and resilience. The analysis of these conflicting outcomes of trypanosomiasis suppression is used to illustrate the need for and utility of conceptual bioeconomic models to serve as a basis for developing policy for sustainable agropastoral resource management in sub-Saharan Africa.

  6. Opportunities to overcome the current limitations and challenges for efficient microbial production of optically pure lactic acid.

    PubMed

    Abdel-Rahman, Mohamed Ali; Sonomoto, Kenji

    2016-10-20

    There has been growing interest in the microbial production of optically pure lactic acid due to the increased demand for lactic acid-derived environmentally friendly products, for example biodegradable plastic (poly-lactic acid), as an alternative to petroleum-derived materials. To maximize the market uptake of these products, their cost should be competitive and this could be achieved by decreasing the production cost of the raw material, that is, lactic acid. It is of great importance to isolate and develop robust and highly efficient microbial lactic acid producers. Alongside the fermentative substrate and concentration, the yield and productivity of lactic acid are key parameters and major factors in determining the final production cost of lactic acid. In this review, we will discuss the current limitations and challenges for cost-efficient microbial production of optically pure lactic acid. The main obstacles to effective fermentation are the use of food resources, indirect utilization of polymeric sugars, sensitivity to inhibitory compounds released during biomass treatments, substrate inhibition, decreased lactic acid yield and productivity, inefficient utilization of mixed sugars, end product inhibition, increased use of neutralizing agents, contamination problems, and decreased optical purity of lactic acid. Furthermore, opportunities to address and overcome these limitations, either by fermentation technology or metabolic engineering approaches, will be introduced and discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Decision technology.

    PubMed

    Edwards, W; Fasolo, B

    2001-01-01

    This review is about decision technology-the rules and tools that help us make wiser decisions. First, we review the three rules that are at the heart of most traditional decision technology-multi-attribute utility, Bayes' theorem, and subjective expected utility maximization. Since the inception of decision research, these rules have prescribed how we should infer values and probabilities and how we should combine them to make better decisions. We suggest how to make best use of all three rules in a comprehensive 19-step model. The remainder of the review explores recently developed tools of decision technology. It examines the characteristics and problems of decision-facilitating sites on the World Wide Web. Such sites now provide anyone who can use a personal computer with access to very sophisticated decision-aiding tools structured mainly to facilitate consumer decision making. It seems likely that the Web will be the mode by means of which decision tools will be distributed to lay users. But methods for doing such apparently simple things as winnowing 3000 options down to a more reasonable number, like 10, contain traps for unwary decision technologists. The review briefly examines Bayes nets and influence diagrams-judgment and decision-making tools that are available as computer programs. It very briefly summarizes the state of the art of eliciting probabilities from experts. It concludes that decision tools will be as important in the 21st century as spreadsheets were in the 20th.
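
    The three rules can be made concrete in a few lines; the probabilities and utilities below are invented for illustration.

```python
# Bayes' theorem updates the probability of a state after evidence, and
# the decision maker picks the option maximizing subjective expected utility.
prior = {"disease": 0.10, "healthy": 0.90}
likelihood_pos = {"disease": 0.85, "healthy": 0.20}   # P(test+ | state)

# Bayes' theorem after a positive test
evidence = sum(prior[s] * likelihood_pos[s] for s in prior)
posterior = {s: prior[s] * likelihood_pos[s] / evidence for s in prior}

# Subjective expected utility of each action, on an assumed 0-100 scale
utility = {"treat":    {"disease": 90, "healthy": 70},
           "withhold": {"disease": 10, "healthy": 100}}
seu = {a: sum(posterior[s] * u for s, u in us.items())
       for a, us in utility.items()}
print(posterior, max(seu, key=seu.get))  # P(disease) ~ 0.32; 'treat' wins
```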

  8. Computational health economics for identification of unprofitable health care enrollees.

    PubMed

    Rose, Sherri; Bergquist, Savannah L; Layton, Timothy J

    2017-10-01

    Health insurers may attempt to design their health plans to attract profitable enrollees while deterring unprofitable ones. Such insurers would not be delivering socially efficient levels of care by providing health plans that maximize societal benefit, but rather intentionally distorting plan benefits to avoid high-cost enrollees, potentially to the detriment of health and efficiency. In this work, we focus on a specific component of health plan design at risk for health insurer distortion in the Health Insurance Marketplaces: the prescription drug formulary. We introduce an ensembled machine learning function to determine whether drug utilization variables are predictive of a new measure of enrollee unprofitability we derive, and thus vulnerable to distortions by insurers. Our implementation also contains a unique application-specific variable selection tool. This study demonstrates that super learning is effective in extracting the relevant signal for this prediction problem, and that a small number of drug variables can be used to identify unprofitable enrollees. The results are both encouraging and concerning. While risk adjustment appears to have been reasonably successful at weakening the relationship between therapeutic-class-specific drug utilization and unprofitability, some classes remain predictive of insurer losses. The vulnerable enrollees whose prescription drug regimens include drugs in these classes may need special protection from regulators in health insurance market design. © The Author 2017. Published by Oxford University Press.
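
    The ensembled machine learning function referred to above is in the spirit of super learning (stacking). A scikit-learn stand-in might look as follows; the synthetic features and labels are placeholders for drug-utilization variables and the unprofitability indicator, and the choice of base learners is editorial, not the paper's.

```python
# Stacked ensemble: base learners are combined by a meta-learner fit on
# cross-validated predictions, the core idea behind super learning.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    final_estimator=LogisticRegression(max_iter=1000), cv=5)
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te))
```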

  9. Acceptable regret in medical decision making.

    PubMed

    Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M

    1999-09-01

    When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
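
    To make the two thresholds concrete, here is one plausible arrangement of the acceptable-regret logic (the utilities and regret tolerance are assumed, and the formulas are an editorial reconstruction, not quoted from the paper): the expected regret of withholding, p*B, stays acceptable for p below r0/B, while the expected regret of treating, (1-p)*H, stays acceptable for p above 1 - r0/H.

```python
# Three zones: comfortably withhold below p_withhold, comfortably treat
# above p_treat, and deliberate in between. Note p_withhold depends only
# on the net benefit B and p_treat only on the harm H, as the abstract states.
B, H, r0 = 40.0, 10.0, 2.0      # net benefit, harm, acceptable regret (assumed)

p_classic  = H / (H + B)        # classic expected-utility threshold
p_withhold = r0 / B             # expected regret of withholding <= r0
p_treat    = 1.0 - r0 / H       # expected regret of treating <= r0
print(p_classic, p_withhold, p_treat)   # 0.2, 0.05, 0.8
```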

  10. Action Being Character: A Promising Perspective on the Solution Concept of Game Theory

    PubMed Central

    Deng, Kuiying; Chu, Tianguang

    2011-01-01

    The inconsistency of predictions from solution concepts of conventional game theory with experimental observations is an enduring question. These solution concepts are based on the canonical rationality assumption that people are exclusively self-regarding utility maximizers. In this article, we think this assumption is problematic and, instead, assume that rational economic agents act as if they were maximizing their implicit utilities, which turns out to be a natural extension of the canonical rationality assumption. Implicit utility is defined by a player's character to reflect his personal weighting between cooperative, individualistic, and competitive social value orientations. The player who actually faces an implicit game chooses his strategy based on the common belief about the character distribution for a general player and the self-estimation of his own character, and he is not concerned about which strategies other players will choose and will never feel regret about his decision. It is shown by solving five paradigmatic games, the Dictator game, the Ultimatum game, the Prisoner's Dilemma game, the Public Goods game, and the Battle of the Sexes game, that the framework of implicit game and its corresponding solution concept, implicit equilibrium, based on this alternative assumption have potential for better explaining people's actual behaviors in social decision making situations. PMID:21573055

  11. Action being character: a promising perspective on the solution concept of game theory.

    PubMed

    Deng, Kuiying; Chu, Tianguang

    2011-05-09

    The inconsistency of predictions from solution concepts of conventional game theory with experimental observations is an enduring question. These solution concepts are based on the canonical rationality assumption that people are exclusively self-regarding utility maximizers. In this article, we think this assumption is problematic and, instead, assume that rational economic agents act as if they were maximizing their implicit utilities, which turns out to be a natural extension of the canonical rationality assumption. Implicit utility is defined by a player's character to reflect his personal weighting between cooperative, individualistic, and competitive social value orientations. The player who actually faces an implicit game chooses his strategy based on the common belief about the character distribution for a general player and the self-estimation of his own character, and he is not concerned about which strategies other players will choose and will never feel regret about his decision. It is shown by solving five paradigmatic games, the Dictator game, the Ultimatum game, the Prisoner's Dilemma game, the Public Goods game, and the Battle of the Sexes game, that the framework of implicit game and its corresponding solution concept, implicit equilibrium, based on this alternative assumption have potential for better explaining people's actual behaviors in social decision making situations.
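
    A toy rendering of the implicit-utility idea on the Prisoner's Dilemma (the linear own-plus-weighted-other form below is an editorial simplification of the character weighting described above, not the authors' exact definition):

```python
# A player's character weights own and opponent payoffs (cooperative,
# individualistic, or competitive), and the chosen strategy maximizes
# this implicit utility. Payoffs are a standard Prisoner's Dilemma.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def implicit_utility(own, other, theta):
    """theta > 0 cooperative, theta = 0 individualistic, theta < 0 competitive."""
    return own + theta * other

def best_reply(theta, opponent_move):
    return max("CD", key=lambda m: implicit_utility(*PAYOFF[(m, opponent_move)],
                                                    theta))

print(best_reply(theta=0.0, opponent_move="C"))   # 'D': defect if selfish
print(best_reply(theta=0.8, opponent_move="C"))   # 'C': cooperate if prosocial
```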

  12. Myofibrillar and collagen protein synthesis in human skeletal muscle in young men after maximal shortening and lengthening contractions.

    PubMed

    Moore, Daniel R; Phillips, Stuart M; Babraj, John A; Smith, Kenneth; Rennie, Michael J

    2005-06-01

    We aimed to determine whether there were differences in the extent and time course of skeletal muscle myofibrillar protein synthesis (MPS) and muscle collagen protein synthesis (CPS) in human skeletal muscle in an 8.5-h period after bouts of maximal muscle shortening (SC; average peak torque = 225 +/- 7 N.m, means +/- SE) or lengthening contractions (LC; average peak torque = 299 +/- 18 N.m) with equivalent work performed in each mode. Eight healthy young men (21.9 +/- 0.6 yr, body mass index 24.9 +/- 1.3 kg/m2) performed 6 sets of 10 maximal unilateral LC of the knee extensors on an isokinetic dynamometer. With the contralateral leg, they then performed 6 sets of maximal unilateral SC with work matched to the total work performed during LC (10.9 +/- 0.7 vs. 10.9 +/- 0.8 kJ, P = 0.83). After exercise, the participants consumed small intermittent meals to provide 0.1 g.kg(-1).h(-1) of protein and carbohydrate. Prior exercise elevated MPS above rest in both conditions, but there was a more rapid rise after LC (P < 0.01). The increases (P < 0.001) in CPS above rest were identical for both SC and LC and likely represent a remodeling of the myofibrillar basement membrane. Therefore, a more rapid rise in MPS after maximal LC could translate into greater protein accretion and muscle hypertrophy during chronic resistance training utilizing maximal LC.

  13. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGES

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  14. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    PubMed

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
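
    A brute-force sketch of the compute-then-augment strategy (editor-added, using networkx): the paper's contribution is doing this scalably and in parallel, which this naive chordality-test loop does not attempt.

```python
# Start from a spanning chordal subgraph (any spanning forest is chordal,
# having no cycles at all) and repeatedly add edges that keep it chordal.
import networkx as nx

def maximal_chordal_subgraph(G):
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(nx.minimum_spanning_edges(G, data=False))  # chordal seed
    added = True
    while added:          # sweep until no edge of G can be added, which
        added = False     # certifies that H is a *maximal* chordal subgraph
        for u, v in G.edges():
            if not H.has_edge(u, v):
                H.add_edge(u, v)
                if nx.is_chordal(H):
                    added = True
                else:
                    H.remove_edge(u, v)  # edge would create a chordless cycle
    return H

G = nx.erdos_renyi_graph(30, 0.3, seed=0)
H = maximal_chordal_subgraph(G)
print(G.number_of_edges(), H.number_of_edges(), nx.is_chordal(H))
```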

  15. A History of Educational Facilities Laboratories (EFL)

    ERIC Educational Resources Information Center

    Marks, Judy

    2009-01-01

    The Educational Facilities Laboratories (EFL), an independent research organization established by the Ford Foundation, opened its doors in 1958 under the direction of Harold B. Gores, a distinguished educator. Its purpose was to help schools and colleges maximize the quality and utility of their facilities, stimulate research, and disseminate…

  16. Fusing corn nitrogen recommendation tools for an improved canopy reflectance sensor performance

    USDA-ARS?s Scientific Manuscript database

    Nitrogen (N) rate recommendation tools are utilized to help producers maximize corn grain yield production. Many of these tools provide recommendations at field scales but often fail when corn N requirements are variable across the field. Canopy reflectance sensors are capable of capturing within-fi...

  17. "Bundling" in Learning.

    ERIC Educational Resources Information Center

    Spiegel, U.; Templeman, J.

    1996-01-01

    Applies the literature of bundling, tie-in sales, and vertical integration to higher education. Students are often required to purchase a package of courses, some of which are unrelated to their major. This kind of bundling policy can be utilized as a profit-maximizing strategy for universities exercising a degree of monopolistic power. (12…

  18. Method of optimizing performance of Rankine cycle power plants. [US DOE Patent

    DOEpatents

    Pope, W.L.; Pines, H.S.; Doyle, P.A.; Silvester, L.F.

    1980-06-23

    A method is described for efficiently operating a Rankine cycle power plant to maximize fuel utilization efficiency or energy conversion efficiency, or to minimize costs, by selecting a turbine fluid inlet state substantially in the region adjacent to and including the transposed critical temperature line.

  19. Timber and Amenities on Nonindustrial Private Forest Land

    Treesearch

    Subhrendu K. Pattanayak; Karen Lee Abt; Thomas P. Holmes

    2003-01-01

    Economic analyses of the joint production of timber and amenities from nonindustrial private forest lands (NIPF) have been conducted for several decades. Binkley (1981) summarized this strand of research and elegantly articulated a microeconomic household model in which NIPF owners maximize utility by choosing optimal combinations of timber income and amenities. Most...

  20. A Pragmatic Study of Barak Obama's Political Propaganda

    ERIC Educational Resources Information Center

    Al-Ameedi, Riyadh Tariq Kadhim; Khudhier, Zina Abdul Hussein

    2015-01-01

    This study investigates, pragmatically, the language of five electoral political propaganda texts delivered by Barak Obama. It attempts to achieve the following aims: (1) identifying the speech acts used in political propaganda, (2) showing how politicians utilize Grice's maxims and the politeness principle in issuing their propaganda, (3)…
